48540
Signals and Systems
Lecture Notes
2014

[Cover figures: a feedback control system block diagram for a maze rover — reference R(s), error E(s), controller G(s), plant F(s), M(s), V(s), integrator 1/s, output C(s), feedback controller H(s) — and an op-amp active filter circuit with components RF, R0–R5, C1, C5, amplifiers A1–A3, input Vi, intermediate nodes V1, V2, and output Vo]
PMcL

Contents
LECTURE 1A – SIGNALS
OVERVIEW ..........................................................................................................1A.1
SIGNAL OPERATIONS ...........................................................................................1A.2
CONTINUOUS AND DISCRETE SIGNALS ................................................................1A.3
PERIODIC AND APERIODIC SIGNALS ....................................................................1A.4
DETERMINISTIC AND RANDOM SIGNALS .............................................................1A.6
ENERGY AND POWER SIGNALS ............................................................................1A.7
COMMON CONTINUOUS-TIME SIGNALS.............................................................1A.10
THE CONTINUOUS-TIME STEP FUNCTION .....................................................1A.10
THE RECTANGLE FUNCTION .........................................................................1A.13
THE STRAIGHT LINE .....................................................................................1A.15
THE SINC FUNCTION .....................................................................................1A.20
THE IMPULSE FUNCTION ...............................................................................1A.22
SINUSOIDS .........................................................................................................1A.26
WHY SINUSOIDS? ..............................................................................................1A.26
REPRESENTATION OF SINUSOIDS ..................................................................1A.28
RESOLUTION OF AN ARBITRARY SINUSOID INTO ORTHOGONAL FUNCTIONS 1A.30
REPRESENTATION OF A SINUSOID BY A COMPLEX NUMBER .........................1A.31
FORMALISATION OF THE RELATIONSHIP BETWEEN PHASOR AND SINUSOID ..1A.33
EULER’S COMPLEX EXPONENTIAL RELATIONSHIPS FOR COSINE AND SINE ..1A.34
A NEW DEFINITION OF THE PHASOR .............................................................1A.36
GRAPHICAL ILLUSTRATION OF THE RELATIONSHIP BETWEEN THE TWO TYPES OF
PHASOR AND THEIR CORRESPONDING SINUSOID ..........................................1A.37
NEGATIVE FREQUENCY ................................................................................1A.38
COMMON DISCRETE-TIME SIGNALS ..................................................................1A.39
THE DISCRETE-TIME STEP FUNCTION ..........................................................1A.40
THE UNIT-PULSE FUNCTION .........................................................................1A.41
SUMMARY .........................................................................................................1A.42
QUIZ ..................................................................................................................1A.43
EXERCISES ........................................................................................................1A.44
LEONHARD EULER (1707-1783) ........................................................................1A.47

Signals and Systems 2014

LECTURE 1B – SYSTEMS
LINEAR DIFFERENTIAL EQUATIONS WITH CONSTANT COEFFICIENTS ................. 1B.1
INITIAL CONDITIONS ...................................................................................... 1B.1
FIRST-ORDER CASE ....................................................................................... 1B.2
SYSTEM MODELLING .......................................................................................... 1B.3
ELECTRICAL CIRCUITS ................................................................................... 1B.3
MECHANICAL SYSTEMS ................................................................................. 1B.3
DISCRETE-TIME SYSTEMS ................................................................................... 1B.4
LINEAR DIFFERENCE EQUATIONS WITH CONSTANT COEFFICIENTS .................... 1B.5
SOLUTION BY RECURSION .............................................................................. 1B.5
COMPLETE SOLUTION .................................................................................... 1B.5
FIRST-ORDER CASE ....................................................................................... 1B.5
DISCRETE-TIME BLOCK DIAGRAMS.................................................................... 1B.6
DISCRETIZATION IN TIME OF DIFFERENTIAL EQUATIONS ................................... 1B.7
FIRST-ORDER CASE ....................................................................................... 1B.7
SECOND-ORDER CASE .................................................................................... 1B.8
CONVOLUTION IN LINEAR TIME-INVARIANT DISCRETE-TIME SYSTEMS.............. 1B.9
FIRST-ORDER SYSTEM ................................................................................... 1B.9
UNIT-PULSE RESPONSE OF A FIRST-ORDER SYSTEM ................................... 1B.10
GENERAL SYSTEM ....................................................................................... 1B.10
SYSTEM MEMORY ........................................................................................ 1B.14
SYSTEM STABILITY ...................................................................................... 1B.15
CONVOLUTION IN LINEAR TIME-INVARIANT CONTINUOUS-TIME SYSTEMS ...... 1B.15
GRAPHICAL DESCRIPTION OF CONVOLUTION ................................................... 1B.18
PROPERTIES OF CONVOLUTION ......................................................................... 1B.23
NUMERICAL CONVOLUTION ............................................................................. 1B.23
CONVOLUTION WITH AN IMPULSE..................................................................... 1B.26
SUMMARY ........................................................................................................ 1B.28
EXERCISES ........................................................................................................ 1B.29
GUSTAV ROBERT KIRCHHOFF (1824-1887) ...................................................... 1B.37

LECTURE 2A – FOURIER SERIES, SPECTRA
ORTHOGONALITY ................................................................................................2A.1
ORTHOGONALITY IN MATHEMATICS ..............................................................2A.1
THE INNER PRODUCT ......................................................................................2A.4
ORTHOGONALITY IN POWER SIGNALS ............................................................2A.6
ORTHOGONALITY IN ENERGY SIGNALS ..........................................................2A.7
THE TRIGONOMETRIC FOURIER SERIES ...............................................................2A.8
THE COMPACT TRIGONOMETRIC FOURIER SERIES ............................................2A.14
THE SPECTRUM .................................................................................................2A.18
THE COMPLEX EXPONENTIAL FOURIER SERIES .................................................2A.22
HOW TO USE MATLAB® TO CHECK FOURIER SERIES COEFFICIENTS...............2A.29
SYMMETRY IN THE TIME DOMAIN .....................................................................2A.31
EVEN SYMMETRY ..............................................................................................2A.31
ODD SYMMETRY ...............................................................................................2A.35
HALF-WAVE SYMMETRY ..................................................................................2A.39
POWER ..............................................................................................................2A.42
FILTERS .............................................................................................................2A.45
RELATIONSHIPS BETWEEN THE THREE FOURIER SERIES REPRESENTATIONS.....2A.49
SUMMARY .........................................................................................................2A.50
QUIZ ..................................................................................................................2A.51
EXERCISES ........................................................................................................2A.52
JOSEPH FOURIER (1768-1830) ..........................................................................2A.55

LECTURE 2B – THE FOURIER TRANSFORM
THE FOURIER TRANSFORM .................................................................................. 2B.1
CONTINUOUS SPECTRA........................................................................................ 2B.7
EXISTENCE OF THE FOURIER TRANSFORM ........................................................... 2B.9
FINDING FOURIER TRANSFORMS ....................................................................... 2B.10
SYMMETRY BETWEEN THE TIME-DOMAIN AND FREQUENCY-DOMAIN .............. 2B.13
TIME SHIFTING .................................................................................................. 2B.18
FREQUENCY SHIFTING ....................................................................................... 2B.19
THE FOURIER TRANSFORM OF SINUSOIDS ......................................................... 2B.20
RELATIONSHIP BETWEEN THE FOURIER SERIES AND FOURIER TRANSFORM ...... 2B.21
THE FOURIER TRANSFORM OF A UNIFORM TRAIN OF IMPULSES ........................ 2B.23
STANDARD FOURIER TRANSFORMS ................................................................... 2B.24
FOURIER TRANSFORM PROPERTIES ................................................................... 2B.25
SUMMARY ......................................................................................................... 2B.26
EXERCISES ........................................................................................................ 2B.27
WILLIAM THOMSON (LORD KELVIN) (1824-1907)............................................ 2B.32
THE AGE OF THE EARTH ............................................................................... 2B.34
THE TRANSATLANTIC CABLE ....................................................................... 2B.37
OTHER ACHIEVEMENTS ................................................................................ 2B.40

LECTURE 3A – FILTERING AND SAMPLING
INTRODUCTION ................................................................................................... 3A.1
RESPONSE TO A SINUSOIDAL INPUT .................................................................... 3A.1
RESPONSE TO AN ARBITRARY INPUT .................................................................. 3A.7
PERIODIC INPUTS ........................................................................................... 3A.7
APERIODIC INPUTS ......................................................................................... 3A.8
IDEAL FILTERS .................................................................................................. 3A.10
PHASE RESPONSE OF AN IDEAL FILTER ........................................................ 3A.11
WHAT DOES A FILTER DO TO A SIGNAL? ........................................................... 3A.16
SAMPLING......................................................................................................... 3A.19
SAMPLING AND PERIODICITY .......................................................... 3A.22
RECONSTRUCTION ............................................................................................ 3A.23
ALIASING .......................................................................................................... 3A.25
PRACTICAL SAMPLING AND RECONSTRUCTION ................................................ 3A.27
SUMMARY OF THE SAMPLING AND RECONSTRUCTION PROCESS ....................... 3A.28
FINDING THE FOURIER SERIES OF A PERIODIC FUNCTION FROM THE FOURIER
TRANSFORM OF A SINGLE PERIOD................................................................ 3A.32
WINDOWING IN THE TIME DOMAIN .................................................................. 3A.32
PRACTICAL MULTIPLICATION AND CONVOLUTION ........................................... 3A.37
SUMMARY ........................................................................................................ 3A.39
QUIZ ................................................................................................................. 3A.41
EXERCISES ........................................................................................................ 3A.42
LECTURE 3B – AMPLITUDE MODULATION
INTRODUCTION ................................................................................................... 3B.1
THE COMMUNICATION CHANNEL .................................................................. 3B.2
ANALOG AND DIGITAL MESSAGES................................................................. 3B.3
BASEBAND AND CARRIER COMMUNICATION ................................................. 3B.3
MODULATION ..................................................................................................... 3B.3
EASE OF RADIATION ...................................................................................... 3B.5
SIMULTANEOUS TRANSMISSION OF SEVERAL SIGNALS .................................. 3B.5
DOUBLE-SIDEBAND, SUPPRESSED-CARRIER (DSB-SC) MODULATION .............. 3B.6
DEMODULATION ............................................................................................ 3B.9
SUMMARY OF DSB-SC MODULATION AND DEMODULATION ...................... 3B.11
AMPLITUDE MODULATION (AM) ..................................................................... 3B.12
ENVELOPE DETECTION ................................................................................ 3B.13
SINGLE SIDEBAND (SSB) MODULATION ........................................................... 3B.14
QUADRATURE AMPLITUDE MODULATION (QAM) ........................................... 3B.15
COHERENT DEMODULATION ........................................................................ 3B.17
SUMMARY ........................................................................................................ 3B.18
EXERCISES ........................................................................................................ 3B.19

LECTURE 4A – THE LAPLACE TRANSFORM
THE LAPLACE TRANSFORM .................................................................................4A.1
REGION OF CONVERGENCE (ROC) .................................................................4A.4
FINDING A FOURIER TRANSFORM USING A LAPLACE TRANSFORM ......................4A.6
FINDING LAPLACE TRANSFORMS ........................................................................4A.9
DIFFERENTIATION PROPERTY .......................................................................4A.11
STANDARD LAPLACE TRANSFORMS ..................................................................4A.12
LAPLACE TRANSFORM PROPERTIES...................................................................4A.13
EVALUATION OF THE INVERSE LAPLACE TRANSFORM ......................................4A.14
RATIONAL LAPLACE TRANSFORMS ..............................................................4A.14
TRANSFORMS OF DIFFERENTIAL EQUATIONS ....................................................4A.20
THE SYSTEM TRANSFER FUNCTION ...................................................................4A.23
BLOCK DIAGRAMS ............................................................................................4A.24
NOTATION ....................................................................................................4A.25
CASCADING BLOCKS .........................................................................................4A.26
STANDARD FORM OF A FEEDBACK CONTROL SYSTEM ......................................4A.28
BLOCK DIAGRAM TRANSFORMATIONS ..............................................................4A.30
SUMMARY .........................................................................................................4A.34
EXERCISES ........................................................................................................4A.35
PIERRE SIMON DE LAPLACE (1749-1827) ..........................................................4A.40
LECTURE 4B – TRANSFER FUNCTIONS
OVERVIEW ..........................................................................................................4B.1
STABILITY ...........................................................................................................4B.1
UNIT-STEP RESPONSE .........................................................................................4B.2
FIRST-ORDER SYSTEMS ..................................................................................4B.4
SECOND-ORDER SYSTEMS ..............................................................................4B.5
DISTINCT REAL POLES ( ζ > 1 ) – OVERDAMPED .............................................4B.6
REPEATED REAL POLES ( ζ = 1 ) – CRITICALLY DAMPED ................................4B.7
COMPLEX CONJUGATE POLES ( 0 < ζ < 1 ) – UNDERDAMPED ..........................4B.8
SECOND-ORDER POLE LOCATIONS .................................................................4B.9
SINUSOIDAL RESPONSE .....................................................................................4B.10
ARBITRARY RESPONSE ......................................................................................4B.11
SUMMARY .........................................................................................................4B.12
EXERCISES ........................................................................................................4B.13
OLIVER HEAVISIDE (1850-1925).......................................................................4B.15

LECTURE 5A – FREQUENCY RESPONSE
OVERVIEW .......................................................................................................... 5A.1
THE FREQUENCY RESPONSE FUNCTION .............................................................. 5A.1
DETERMINING THE FREQUENCY RESPONSE FROM A TRANSFER FUNCTION ......... 5A.2
MAGNITUDE RESPONSES .................................................................................... 5A.3
PHASE RESPONSES .............................................................................................. 5A.7
FREQUENCY RESPONSE OF A LOWPASS SECOND-ORDER SYSTEM ...................... 5A.9
VISUALIZATION OF THE FREQUENCY RESPONSE FROM A POLE-ZERO PLOT ...... 5A.11
BODE PLOTS ..................................................................................................... 5A.13
APPROXIMATING BODE PLOTS USING TRANSFER FUNCTION FACTORS ............. 5A.14
TRANSFER FUNCTION SYNTHESIS ..................................................................... 5A.15
DIGITAL FILTERS .............................................................................................. 5A.19
SUMMARY ........................................................................................................ 5A.20
EXERCISES ........................................................................................................ 5A.21
LECTURE 5B – TIME-DOMAIN RESPONSE
OVERVIEW .......................................................................................................... 5B.1
STEADY-STATE ERROR ....................................................................................... 5B.1
TRANSIENT RESPONSE ........................................................................................ 5B.3
SECOND-ORDER STEP RESPONSE ........................................................................ 5B.6
SETTLING TIME................................................................................................. 5B.10
PEAK TIME ....................................................................................................... 5B.17
PERCENT OVERSHOOT ...................................................................................... 5B.17
RISE TIME AND DELAY TIME ............................................................................ 5B.17
SUMMARY ........................................................................................................ 5B.18
EXERCISES ........................................................................................................ 5B.19
LECTURE 6A – EFFECTS OF FEEDBACK
OVERVIEW .......................................................................................................... 6A.1
TRANSIENT RESPONSE ........................................................................................ 6A.2
CLOSED-LOOP CONTROL .................................................................................... 6A.4
PROPORTIONAL CONTROL (P CONTROLLER) .................................................. 6A.5
INTEGRAL CONTROL (I CONTROLLER) ........................................................... 6A.6
PROPORTIONAL PLUS INTEGRAL CONTROL (PI CONTROLLER) ...................... 6A.7
PROPORTIONAL, INTEGRAL, DERIVATIVE CONTROL (PID CONTROLLER) ...... 6A.9
DISTURBANCE REJECTION .................................................................................. 6A.9
SENSITIVITY ..................................................................................................... 6A.11
SYSTEM SENSITIVITY ................................................................................... 6A.11
SUMMARY ........................................................................................................ 6A.13
EXERCISES ........................................................................................................ 6A.14
LECTURE 6B – REVISION

LECTURE 7A – THE Z-TRANSFORM
OVERVIEW ..........................................................................................................7A.1
THE Z-TRANSFORM .............................................................................................7A.1
MAPPING BETWEEN S-DOMAIN AND Z-DOMAIN .............................................7A.4
MAPPING THE S-PLANE IMAGINARY AXIS.......................................................7A.5
ALIASING........................................................................................................7A.6
FINDING Z-TRANSFORMS .....................................................................................7A.8
RIGHT SHIFT (DELAY) PROPERTY .................................................................7A.12
STANDARD Z-TRANSFORMS ...............................................................................7A.13
Z-TRANSFORM PROPERTIES ...............................................................................7A.14
EVALUATION OF INVERSE Z-TRANSFORMS ........................................................7A.15
TRANSFORMS OF DIFFERENCE EQUATIONS .......................................................7A.17
THE SYSTEM TRANSFER FUNCTION ...................................................................7A.19
STABILITY ....................................................................................................7A.21
TRANSFER FUNCTION INTERCONNECTIONS ..................................................7A.22
SUMMARY .........................................................................................................7A.24
EXERCISES ........................................................................................................7A.25
LECTURE 7B – DISCRETIZATION
OVERVIEW .......................................................................................................... 7B.1
SIGNAL DISCRETIZATION .................................................................................... 7B.1
SIGNAL RECONSTRUCTION .................................................................................. 7B.2
HOLD OPERATION .......................................................................................... 7B.2
SYSTEM DISCRETIZATION ................................................................................... 7B.4
FREQUENCY RESPONSE ....................................................................................... 7B.7
RESPONSE MATCHING ....................................................................................... 7B.10
SUMMARY ......................................................................................................... 7B.13
EXERCISES ........................................................................................................ 7B.14
LECTURE 8A – SYSTEM DESIGN
OVERVIEW ..........................................................................................................8A.1
DESIGN CRITERIA FOR CONTINUOUS-TIME SYSTEMS ..........................................8A.2
PERCENT OVERSHOOT ....................................................................................8A.2
PEAK TIME .....................................................................................................8A.3
SETTLING TIME...............................................................................................8A.4
COMBINED SPECIFICATIONS ...........................................................................8A.5
DESIGN CRITERIA FOR DISCRETE-TIME SYSTEMS ...............................................8A.9
MAPPING OF A POINT FROM THE S-PLANE TO THE Z-PLANE ...........................8A.10
PERCENT OVERSHOOT ..................................................................................8A.11
PEAK TIME ...................................................................................................8A.12
SETTLING TIME.............................................................................................8A.13
COMBINED SPECIFICATIONS .........................................................................8A.14
SUMMARY .........................................................................................................8A.15

LECTURE 8B – ROOT LOCUS
OVERVIEW .......................................................................................................... 8B.1
ROOT LOCUS ...................................................................................................... 8B.1
ROOT LOCUS RULES ........................................................................................... 8B.5
1. NUMBER OF BRANCHES ............................................................................. 8B.5
2. LOCUS END POINTS .................................................................................... 8B.5
3. REAL AXIS SYMMETRY .............................................................................. 8B.5
4. REAL AXIS SECTIONS ................................................................................. 8B.5
5. ASYMPTOTE ANGLES ................................................................................. 8B.6
6. ASYMPTOTIC INTERCEPT (CENTROID) ........................................................ 8B.6
7. REAL AXIS BREAKAWAY AND BREAK-IN POINTS ...................................... 8B.6
8. IMAGINARY AXIS CROSSING POINTS .......................................................... 8B.9
9. EFFECT OF POLES AND ZEROS .................................................................... 8B.9
10. USE A COMPUTER ..................................................................................... 8B.9
MATLAB®’S RLTOOL .................................................................................... 8B.14
ROOT LOCI OF DISCRETE-TIME SYSTEMS ......................................................... 8B.16
TIME RESPONSE OF DISCRETE-TIME SYSTEMS ................................................. 8B.17
SUMMARY ........................................................................................................ 8B.19
EXERCISES ........................................................................................................ 8B.20
JAMES CLERK MAXWELL (1831-1879)............................................................. 8B.24
LECTURE 9A – STATE-VARIABLES
OVERVIEW .......................................................................................................... 9A.1
STATE REPRESENTATION .................................................................................... 9A.1
STATES .......................................................................................................... 9A.2
OUTPUT ......................................................................................................... 9A.4
MULTIPLE INPUT-MULTIPLE OUTPUT SYSTEMS............................................. 9A.4
SOLUTION OF THE STATE EQUATIONS ................................................................. 9A.5
TRANSITION MATRIX .......................................................................................... 9A.6
TRANSFER FUNCTION ....................................................................................... 9A.10
IMPULSE RESPONSE .......................................................................................... 9A.10
LINEAR STATE-VARIABLE FEEDBACK .............................................................. 9A.12
SUMMARY ........................................................................................................ 9A.18
EXERCISES ........................................................................................................ 9A.19
LECTURE 9B – STATE-VARIABLES 2
OVERVIEW .......................................................................................................... 9B.1
NORMAL FORM ................................................................................................... 9B.1
SIMILARITY TRANSFORM .................................................................................... 9B.4
SOLUTION OF THE STATE EQUATIONS FOR THE ZIR ........................................... 9B.6
POLES AND REPEATED EIGENVALUES............................................................... 9B.11
POLES .......................................................................................................... 9B.11
REPEATED EIGENVALUES ............................................................................ 9B.11
DISCRETE-TIME STATE-VARIABLES.................................................................. 9B.12
DISCRETE-TIME RESPONSE ............................................................................... 9B.16
DISCRETE-TIME TRANSFER FUNCTION .............................................................. 9B.17
SUMMARY ........................................................................................................ 9B.23
EXERCISES ........................................................................................................ 9B.24

APPENDIX A - THE FAST FOURIER TRANSFORM
OVERVIEW ............................................................................................................ F.1
THE DISCRETE-TIME FOURIER TRANSFORM (DTFT) ............................................ F.2
THE DISCRETE FOURIER TRANSFORM (DFT) ........................................................ F.4
THE FAST FOURIER TRANSFORM (FFT) ................................................................ F.8
CREATING FFTS .................................................................................................... F.9
INTERPRETING FFTS ........................................................................................... F.11
CASE 1 – IDEALLY SAMPLED ONE-OFF WAVEFORM ......................................... F.11
CASE 2 – IDEALLY SAMPLED PERIODIC WAVEFORM ........................................ F.12
CASE 3 – CONTINUOUS TIME-LIMITED WAVEFORM ......................................... F.13
CASE 4 – CONTINUOUS PERIODIC WAVEFORM ................................................ F.14
APPENDIX B - THE PHASE-LOCKED LOOP
OVERVIEW ............................................................................................................ P.1
DISTORTION IN SYNCHRONOUS AM DEMODULATION ...................................... P.1
CARRIER REGENERATION ................................................................................. P.2
THE PHASE-LOCKED LOOP (PLL) ......................................................................... P.3
VOLTAGE CONTROLLED OSCILLATOR (VCO) .................................................. P.4
PHASE DETECTOR ............................................................................................. P.7
PLL MODEL .......................................................................................................... P.9
LINEAR PLL MODEL ........................................................................................ P.9
LOOP FILTER .................................................................................................. P.10

FFT - QUICK REFERENCE GUIDE
MATLAB® - QUICK REFERENCE GUIDE
MATRICES - QUICK REFERENCE GUIDE
ANSWERS

Lecture 1A – Signals
Overview. Signal operations. Periodic & aperiodic signals. Deterministic &
random signals. Energy & power signals. Common signals. Sinusoids.

Overview
Electrical engineers should never forget “the big picture”.
Every day we take for granted the power that comes out of the wall, or the light
that instantly illuminates the darkness at the flick of a switch. We take for
granted the fact that electrical machines are at the heart of every manufacturing
industry. There has been no bigger benefit to humankind than the supply of
electricity to residential, commercial and industrial sites. Behind this “magic”
is a large infrastructure of generators, transmission lines, transformers,
protection relays, motors and motor drives.
We also take for granted the automation of once hazardous or laborious tasks,
we take for granted the ability of electronics to control something as
complicated as a jet aircraft, and we seem not to marvel at the idea of your

car’s engine having just the right amount of fuel injected into the cylinder with
just the right amount of air, with a spark igniting the mixture at just the right
time to produce the maximum power and the least amount of noxious gases as
you tell it to accelerate up a hill when the engine is cold!
We forget that we are now living in an age where we can communicate with
anyone (and almost anything), anywhere, at anytime. We have a point-to-point
telephone system, mobile phones, the Internet, radio and TV. We have never
lived in an age so “information rich”.
Electrical engineers are engaged in the business of designing, improving,
extending, maintaining and operating this amazing array of systems.
You are in the business of becoming an engineer.


One thing that engineers need to do well is to break down a seemingly complex system into smaller, easier parts. We therefore need a way of describing these systems mathematically, and a way of describing the inputs and outputs of these systems: signals. We seek a way to analyse, synthesise and process signals that is mathematically rigorous, yet simple to picture. It turns out that Fourier analysis of signals and systems is one suitable method to achieve this goal, and the Laplace transform is even more powerful. But first, let's characterise mathematically and pictorially some of the more common signal types.

Signal Operations

There really aren't many things that you can do to signals: we can amplify, attenuate and filter; we can multiply two signals together; we can add two signals together. We normally treat all the operations as linear, although in practice some nonlinearity always arises. Take a simple FM modulator:

[Figure 1A.1 - An FM stereo modulator: the pre-emphasised L and R channels form L+R and L-R, the difference signal L-R is multiplied by cos(2πfc t), and the sum (L+R) + (L-R)cos(2πfc t) drives the FM modulator and antenna]

Some of the signals in this system come from "natural" sources, such as music or speech; some come from "artificial" sources, such as a sinusoidal oscillator.

Continuous and Discrete Signals

A continuous signal can be broadly defined as a quantity or measurement that varies continuously in relation to another variable, usually time. We say that the signal is a function of the independent variable, and it is usually described mathematically as a function with the argument in parentheses, e.g. g(t). The parentheses indicate that t is a real number. Common examples of continuous-time signals include temperature, voltage, audio output from a speaker, and video signals displayed on a TV. An example of a graph of a continuous-time signal is shown below:

[Figure 1A.2 - A continuous-time signal g(t)]

A discrete signal is one which exists only at specific instants of the independent variable, usually time. A discrete signal is usually described mathematically as a function with the argument in brackets, e.g. g[n]. The brackets indicate that n is an integer. Common examples of discrete-time signals include your bank balance, monthly sales of a corporation, and the data read from a CD. An example of a graph of a discrete-time signal is shown below:

[Figure 1A.3 - A discrete-time signal g[n]]

Periodic and Aperiodic Signals

A periodic signal g(t) is defined by the property:

    g(t) = g(t + T0),  T0 ≠ 0    (1A.1)

The smallest value of T0 that satisfies Eq. (1A.1) is called the period. The fundamental frequency of the signal is defined as:

    f0 = 1 / T0    (1A.2)

A periodic signal remains unchanged by a positive or negative shift of any integral multiple of T0. This means a periodic signal must begin at t = -∞ and go on forever until t = ∞. We can also have periodic discrete-time signals, in which case:

    g[n] = g[n + T0],  T0 ≠ 0    (1A.3)

An aperiodic signal is one for which Eq. (1A.1) or Eq. (1A.3) does not hold. Any finite duration signal is aperiodic. An example of a periodic signal is shown below:

[Figure 1A.4 - A periodic signal g(t) with period T0]

Example

Find the period and fundamental frequency (if they exist) of the following signal:

    g(t) = 3cos(2πt) + 4sin(4πt)    (1A.4)

We know that a sinusoid is a periodic signal, and we shall soon see that any signal composed of a sum of sinusoids with frequencies that are integral multiples of a fundamental frequency is also periodic. We can graph this function to easily determine its period:

[Figure 1A.5 - g(t) graphed against t in seconds, showing T0 = 1 s]

From the graph we find that T0 = 1 s and f0 = 1 Hz.

If we add two periodic signals, then the result may or may not be periodic. The result will only be periodic if an integral number of periods of the first signal coincides with an integral number of periods of the second signal:

    T0 = qT1 = pT2,  where p and q are integers (i.e. T1/T2 = p/q is rational)    (1A.5)

In Eq. (1A.5), the integers p and q must have no common factors. It is difficult to determine the period mathematically (in this case) until we look at the signal's spectrum (Lecture 2A).
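As an aside, the rational-ratio condition in Eq. (1A.5) is easy to check numerically. The short Python sketch below is illustrative only (it is not part of the original notes, and the helper name is ours); periods are passed as exact Fractions so the ratio test is exact:

```python
from fractions import Fraction

def common_period(T1, T2):
    """Smallest T0 = q*T1 = p*T2 with p, q integers, for rational T1/T2."""
    ratio = Fraction(T1) / Fraction(T2)   # p/q in lowest terms
    p, q = ratio.numerator, ratio.denominator
    return q * Fraction(T1)               # equals p * T2

# g(t) = 3cos(2*pi*t) + 4sin(4*pi*t): component periods 1 s and 1/2 s
T0 = common_period(Fraction(1), Fraction(1, 2))
print(T0)  # -> 1, agreeing with the graphical result T0 = 1 s
```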

Deterministic and Random Signals

A deterministic signal is one in which the past, present and future values are completely specified. For example, g(t) = cos(2πt) and g(t) = e^(-t)u(t) are obviously signals specified completely for all time.

Random or stochastic signals cannot be specified at precisely one instant in time. This does not necessarily mean that any particular random signal is unknown; on the contrary, they can be deterministic. For example, consider some outputs of a binary signal generator over 8 bits:

[Figure 1A.6 - Four possible outputs g1(t) to g4(t) of a binary signal generator over 8 bits]

Each of the possible 2^8 = 256 waveforms is deterministic; the randomness of this situation is associated not with the waveform but with the uncertainty as to which waveform will occur. This is completely analogous to the situation of tossing a coin: we know the outcome will be a head or a tail, and the uncertainty is the occurrence of a particular outcome in a given trial.

Random signals are most often the information bearing signals we are used to: voice signals, television signals, digital data (computer files), etc. Electrical "noise" is also a random signal.

Energy and Power Signals

For electrical engineers, the signals we wish to describe will be predominantly voltage or current. Accordingly, the instantaneous power developed by these signals will be either:

    p(t) = v²(t) / R    (1A.6)

or:

    p(t) = R i²(t)    (1A.7)

In signal analysis it is customary to normalise these equations by setting R = 1. With a signal g(t), the formula for instantaneous power in both cases becomes:

    p(t) = g²(t)    (1A.8)

The total energy of the signal is then given by:

    E = ∫ from -∞ to ∞ of g²(t) dt    (1A.9)

A signal is classified as an energy signal if and only if the total energy of the signal is finite:

    0 < E < ∞    (1A.10)

The average power of a signal is correspondingly defined as:

    P = lim as T→∞ of (1/T) ∫ from -T/2 to T/2 of g²(t) dt    (1A.11)

If the signal is periodic with period T0, we then have the special case:

    P = (1/T0) ∫ from -T0/2 to T0/2 of g²(t) dt    (1A.12)

Example

A sinusoid is a periodic signal and therefore has a finite power. If a sinusoid is given by g(t) = A cos(2πt/T0 + φ), then what is the average power? The easiest way to find the average power is by performing the integration in Eq. (1A.12) graphically. For the arbitrary sinusoid given, we can graph the integrand g²(t) = A²cos²(2πt/T0 + φ) as:

[Figure 1A.7 - g²(t), oscillating between 0 and A² with equal areas above and below the level A²/2]

Note that in drawing the graph we don't really need to know the identity cos²θ = (1 + cos 2θ)/2; all we need to know is that if we start off with a

sinusoid uniformly oscillating between A and -A, then after squaring we obtain a sinusoid that oscillates (at twice the frequency) between A² and 0. We can also see that the average value of the resulting waveform is A²/2, because there are equal areas above and below this level. Therefore, if we integrate (i.e. find the area beneath the curve) over an interval spanning T0, we must have an area equal to the average value times the span, i.e. A²T0/2. (This is the Mean Value Theorem for Integrals.) So the average power is this area divided by the period:

    P = A² / 2    (1A.13)

This is a surprising result! The power of any sinusoid, no matter what its frequency or phase, is dependent only on its amplitude. Confirm the above result by performing the integration algebraically.

A signal is classified as a power signal if and only if the average power of the signal is finite and non-zero:

    0 < P < ∞    (1A.14)

We observe that if E, the energy of g(t), is finite, then its power P is zero; and if P is finite and non-zero, then E is infinite. It is obvious that a signal may be classified as one or the other but not both. It is interesting to note that periodic signals and most random signals are power signals, and signals that are both deterministic and aperiodic are energy signals. On the other hand, there are some signals, for example:

    g(t) = e^(-at)    (1A.15)

that cannot be classified as either energy or power signals, because both E and P are infinite.
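We can also confirm P = A²/2 numerically by approximating Eq. (1A.12) with a Riemann sum over one period. A short Python sketch (illustrative only; the helper name is ours):

```python
import math

def average_power(g, T0, n=100000):
    """Approximate P = (1/T0) * integral of g(t)^2 over one period [0, T0]."""
    dt = T0 / n
    return sum(g(k * dt) ** 2 for k in range(n)) * dt / T0

# An arbitrary sinusoid: amplitude 3, period 2, phase 0.7 rad
A, phi, T0 = 3.0, 0.7, 2.0
g = lambda t: A * math.cos(2 * math.pi * t / T0 + phi)
P = average_power(g, T0)
print(P)  # ~4.5 = A**2 / 2, independent of the period and phase chosen
```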

Common Continuous-Time Signals

Earlier we stated that the key to handling complexity is to reduce it to many simple parts. The converse is also true: we can apply superposition to build complexity out of simple parts. It may come as a pleasant surprise that the study of only a few signals will enable us to handle almost any amount of complexity in a deterministic signal.

The Continuous-Time Step Function

We define the continuous-time step function to be:

    u(t) = 0,   t < 0
         = 1/2, t = 0
         = 1,   t > 0    (1A.16)

Graphically:

[Figure 1A.8 - The step function u(t)]

We will now make a very important observation: it is the argument of the function which determines the position of the function along the t-axis. Now consider the delayed step function:

    u(t - t0) = 0,   t < t0
              = 1/2, t = t0
              = 1,   t > t0    (1A.17)

Graphically, we have:

[Figure 1A.9 - The delayed step function u(t - t0)]

We obtain the conditions on the values of the function by the simple substitution t → t - t0 in Eq. (1A.16). We see that the argument (t - t0) simply shifts the origin of the original function to t0. A positive value of t0 shifts the function to the right, corresponding to a delay in time; a negative value shifts the function to the left, an advance in time.

We will now introduce another concept associated with the argument of a function: if we divide it by a real constant, a scaling factor, then we regulate the orientation of the function about the point t0, and usually change the "width" of the function as well. Consider the scaled and shifted step function:

    u((t - t0)/T) = 0,   (t - t0)/T < 0
                  = 1/2, (t - t0)/T = 0
                  = 1,   (t - t0)/T > 0    (1A.18)

In this case it is not meaningful to talk about the width of the step function, and the only purpose of the constant T is to allow the function to be reflected about the line t = t0, as shown below:

[Figure 1A.10 - u((t - t0)/T): for T positive, a step up at t0; for T negative, the step reflected about the line t = t0]

Use Eq. (1A.18) to verify the bottom step function in Figure 1A.10.

The utility of the step function is that it can be used as a "switch" to turn another function on or off at some point. For example, the product given by u(t - 1)cos(2πt) is as shown below:

[Figure 1A.11 - u(t - 1)cos(2πt): the cosine is "switched on" at t = 1]
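The switching behaviour is easy to experiment with numerically. A short Python sketch (illustrative only, not from the original notes), using the half-value convention of Eq. (1A.16) at the jump:

```python
import math

def u(t):
    """Continuous-time step function with u(0) = 1/2."""
    if t < 0:
        return 0.0
    if t == 0:
        return 0.5
    return 1.0

# u(t - 1) acts as a switch that turns cos(2*pi*t) on at t = 1
gated = lambda t: u(t - 1.0) * math.cos(2 * math.pi * t)
print(gated(0.5))   # 0.0: switched off before t = 1
print(gated(2.0))   # ~1.0: the cosine passes through after t = 1
```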

The Rectangle Function

One of the most useful functions in signal analysis is the rectangle function, defined as:

    rect(t) = 0,   |t| > 1/2
            = 1/2, |t| = 1/2
            = 1,   |t| < 1/2    (1A.19)

Graphically, this is a "rectangle" with a height, width and area of one:

[Figure 1A.12 - The rectangle function rect(t)]

If we generalise the argument, as we did for the step function, we get:

    rect((t - t0)/T) = 0,   |(t - t0)/T| > 1/2
                     = 1/2, |(t - t0)/T| = 1/2
                     = 1,   |(t - t0)/T| < 1/2    (1A.20)

Graphically, it is a rectangle with a height of unity, it is centred on t = t0, and both its width and area are equal to |T|:

[Figure 1A.13 - rect((t - t0)/T): a unit-height rectangle of width |T| centred on t0; also rect((t + 1)/±3), with width and area equal to 3]

Notice from the symmetry of the function that the sign of T has no effect on the function's orientation. However, the magnitude of T still acts as a scaling factor. In the time-domain the rectangle function can be used to represent a gating operation in an electrical circuit. Mathematically it provides an easy way to turn a function on and off.

The Straight Line

It is surprising how something so simple can cause a lot of confusion. We start with one of the simplest straight lines possible. Let g(t) = t:

[Figure 1A.14 - g(t) = t, a line of slope 1 through the origin]

Now shift the straight line along the t-axis in the standard fashion, to make g(t) = t - t0:

[Figure 1A.15 - g(t) = t - t0, a line of slope 1 with t-axis intercept t0]

To change the slope of the line, simply apply the usual scaling factor, to make:

    g(t) = (t - t0)/T    (1A.21)

This is the equation of a straight line, with slope 1/T and t-axis intercept t0:

[Figure 1A.16 - g(t) = (t - t0)/T, a line of slope 1/T through t0; also a particular case with slope -1/3 and t-axis intercept 1]

We can now use our knowledge of the straight line and rectangle function to completely specify piece-wise linear signals.

Example

A function generator produces the following sawtooth waveform:

[Figure 1A.17 - A sawtooth waveform of unit height, ramping up over each period of 4 s]

We seek a mathematical description of this waveform. We start by recognising the fact that the waveform is periodic, with period T0 = 4. First we describe only one period (say the one beginning at the origin). We recognise that the ramp part of the sawtooth is a straight line multiplied by a rectangle function:

    g0(t) = (t/4) rect((t - 2)/4)    (1A.22)

Graphically, it is:

[Figure 1A.18 - g0(t), the shifted copy g1(t), and the complete sawtooth g(t)]

The next period is just g0(t) shifted right (delayed in time) by 4. The next period is therefore described by:

    g1(t) = g0(t - 4) = ((t - 4)/4) rect((t - 4 - 2)/4)    (1A.23)

It is now easy to see the pattern. In general we have:

    gn(t) = g0(t - 4n) = ((t - 4n)/4) rect((t - 4n - 2)/4)    (1A.24)

where gn(t) is the nth period and n is any integer. Now all we have to do is add up all the periods to get the complete mathematical expression for the sawtooth:

    g(t) = Σ over n from -∞ to ∞ of ((t - 4n)/4) rect((t - 4n - 2)/4)    (1A.25)
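Eq. (1A.25) can be checked numerically by summing a handful of shifted ramp-times-rectangle periods. A short Python sketch (illustrative only; the helper names are ours), truncating the infinite sum to a finite range of n:

```python
def rect(t):
    """Unit rectangle: 1 for |t| < 1/2, 1/2 on the edges, 0 outside."""
    if abs(t) < 0.5:
        return 1.0
    if abs(t) == 0.5:
        return 0.5
    return 0.0

def sawtooth(t, n_range=range(-10, 11)):
    """g(t) = sum over n of (t - 4n)/4 * rect((t - 4n - 2)/4), truncated."""
    return sum((t - 4 * n) / 4 * rect((t - 4 * n - 2) / 4) for n in n_range)

print(sawtooth(1.0))   # 0.25: a quarter of the way up the ramp
print(sawtooth(9.0))   # 0.25: the waveform is periodic with period 4
```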

Example

Sketch the following waveform:

    g(t) = (t - 1)rect(t - 1.5) + rect(t - 2.5) - (0.5t - 2)rect(0.5t - 2.5)    (1A.26)

We can start by putting arguments into our "standard" form:

    g(t) = (t - 1)rect(t - 1.5) + rect(t - 2.5) + ((t - 4)/(-2))rect((t - 5)/2)    (1A.27)

From this, we can compose the waveform out of the three specified parts:

[Figure 1A.19 - The three parts (t - 1)rect(t - 1.5), rect(t - 2.5) and ((t - 4)/(-2))rect((t - 5)/2), and their sum g(t)]

The Sinc Function

The sinc function will show up quite often in studies of linear systems, particularly in signal spectra, and it is interesting to note that there is a close relationship between this function and the rectangle function. Its definition is:

    sinc(t) = sin(πt) / (πt)    (1A.28)

Graphically, it looks like:

[Figure 1A.20 - sinc(t): a central lobe of height 1 between t = -1 and t = 1, with decaying positive and negative side lobes]

The inclusion of π in the formula for the sinc function gives it certain "nice" properties: the zeros of sinc(t) occur at non-zero integral values, its width between the first two zeros is two, and its area (including positive and negative lobes) is just one.

Notice also that it appears that the sinc function is undefined for t = 0. In one sense it is, but in another sense we can define a function's value at a singularity by approaching it from either side and averaging the limits. This is not unusual; we did it explicitly in the definition of the step function, where there is obviously a singularity at the "step": the limit is 0 approaching from the negative side and 1 approaching from the positive side, and the average of these two is 1/2. We then made explicit the use of this value for a zero argument. For the sinc function we overcome the problem in the same way, by calculating the limits of the function approaching zero from the positive and negative sides.

The limit of the sinc function as t → 0 can be obtained using l'Hôpital's rule:

    lim as t→0 of sin(πt)/(πt) = lim as t→0 of π cos(πt)/π = 1    (1A.29)

Therefore, we say the sinc function has a value of 1 when its argument is zero. With a generalised argument, the sinc function becomes:

    sinc((t - t0)/T) = sin(π(t - t0)/T) / (π(t - t0)/T)    (1A.30)

Its zeros occur at t0 ± nT, its height is 1, its width between the first two zeros is 2|T| and its area is |T|:

[Figure 1A.21 - sinc((t - t0)/T), centred on t0 with zeros at t0 ± T, t0 ± 2T, t0 ± 3T, ...]
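The stated properties, the limiting value 1 at the origin, zeros at non-zero integers, and unit area, can be sanity-checked numerically. A short Python sketch (illustrative only; the implementation is ours):

```python
import math

def sinc(t):
    """sinc(t) = sin(pi*t)/(pi*t), with the limiting value 1 at t = 0."""
    if t == 0:
        return 1.0
    return math.sin(math.pi * t) / (math.pi * t)

print(sinc(0))   # 1.0: the l'Hopital limit
print(sinc(3))   # ~0: zeros at non-zero integers

# Area: a Riemann estimate over a wide window creeps toward 1
# (the tails decay slowly, so convergence is slow)
dt = 0.01
area = sum(sinc(k * dt) for k in range(-50000, 50001)) * dt
print(area)      # close to 1
```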

The Impulse Function

The impulse function, or Dirac delta function, is of great importance in the study of signal theory. It will enable us to represent "densities" at a single point. We shall employ the widely accepted symbol δ(t) to denote the impulse function.

The impulse function is often described as having an infinite height and zero width such that its area is equal to unity, but such a description is not particularly satisfying. A more formal definition is obtained by first forming a sequence of functions, such as (1/T)rect(t/T), and then defining δ(t) to be:

    δ(t) = lim as T→0 of (1/T) rect(t/T)    (1A.31)

As T gets smaller and smaller the members of the sequence become taller and narrower, but their area remains constant as shown below:

[Figure 1A.22 - rect(t), 2rect(2t), ...: unit-area rectangles of decreasing width]

This definition, however, is not very satisfying either.

From a physical point of view, we can consider the delta function to be so narrow that making it any narrower would in no way influence the results in which we are interested. As an example, consider the simple RC circuit shown below, in which a rectangle function is applied as vi(t):

[Figure 1A.23 - A series RC circuit: input vi applied through R, output vo taken across C]

We choose the width of the pulse to be T, and its height to be equal to 1/T, such that its area is 1 as T is varied. The output voltage vo(t) will vary with time, and its exact form will depend on the relative values of T and the product RC. If T is much larger than RC, the capacitor will be almost completely charged to the voltage 1/T before the pulse ends, at which time it will begin to discharge back to zero, as in the top diagram of Figure 1A.24. If we shorten the pulse so that T ≈ RC, the output voltage behaves as in the middle diagram of Figure 1A.24, and it can be seen that there is a considerable difference between this output and the preceding one: the capacitor will not have a chance to become fully charged before the pulse ends. If we now make T still shorter, as in the bottom diagram of Figure 1A.24, the only noticeable change is in the time it takes the output to reach a maximum, and this time is just equal to T. In fact, as we continue to make T shorter and shorter, we note very little change in the shape of the output: the output of the linear system approaches its "impulse response".

[Figure 1A.24 - Graphical derivation of the impulse response of an RC circuit by decreasing a pulse's width while maintaining its area: in each case the unit-area input pulse of width T produces the output vo(t) = (1/T)(1 - e^(-t/RC)) during the pulse; as T decreases from T >> RC to T << RC, the output approaches the impulse response]

If this interval is too short to be resolved by our measuring device, the input is effectively behaving as a delta function, and decreasing its duration further will have no observable effect on the output. Thus, an impulse function in the lab is just a very narrow pulse.
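The limiting behaviour of the RC example can be reproduced with the closed-form output at the end of the pulse, vo(T) = (1/T)(1 - e^(-T/RC)), which is the peak of the response. A short Python sketch (illustrative only; the values and helper name are ours) showing the peak settling toward 1/RC, the value of the impulse response h(t) = (1/RC)e^(-t/RC) at t = 0+:

```python
import math

RC = 1.0  # arbitrary choice of time constant

def peak_output(T):
    """Peak of the RC circuit's response to a unit-area pulse of width T."""
    return (1.0 / T) * (1.0 - math.exp(-T / RC))

for T in [10.0, 1.0, 0.1, 0.01, 0.001]:
    print(T, peak_output(T))
# As T -> 0 the peak approaches 1/RC = 1: shrinking the pulse further
# makes almost no difference, which is the "impulse" behaviour.
```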

With the previous discussion in mind, we shall use the following properties as the definition of the delta function: given the real constant t0 and the arbitrary complex-valued function f(t), which is continuous at the point t = t0,

    δ(t - t0) = 0,  t ≠ t0    (1A.32a)

    ∫ from t1 to t2 of f(t) δ(t - t0) dt = f(t0),  t1 < t0 < t2    (1A.32b)

Note that we do not specify the value of δ(t - t0) at the point t = t0; we do not care about the exact form of the delta function itself, but only about its behaviour under integration.

Eq. (1A.32b) is often called the sifting property of the delta function, because it "sifts" out a single value of the function f(t). If f(t) is discontinuous at the point t = t0, Eq. (1A.32b) may still be used, but the value of f(t0) is taken to be the average of the limiting values as t0 is approached from above and from below.

Graphically we will represent δ(t - t0) as a spike of unit height located at the point t = t0. Such a representation is shown below, but observe that the height of the spike corresponds to the area of the delta function:

[Figure 1A.25 - δ(t - t0) drawn as a unit-height spike at t0; also 2δ(t + 2) with area 2, and (1/2)δ(t - 1) with area 1/2]
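The sifting property can be seen numerically by replacing δ(t - t0) with the narrow pulse (1/T)rect((t - t0)/T) of Eq. (1A.31) and letting T shrink. A short Python sketch (illustrative only; the helper names and the test function f are ours):

```python
import math

def sift(f, t0, T, n=10001):
    """Integral of f(t) * (1/T) * rect((t - t0)/T), by the midpoint rule."""
    dt = T / n
    total = 0.0
    for k in range(n):
        t = t0 - T / 2 + (k + 0.5) * dt
        total += f(t) * (1.0 / T) * dt
    return total

f = lambda t: math.exp(-t) * math.cos(t)
for T in [1.0, 0.1, 0.001]:
    print(T, sift(f, 0.5, T))
# As T -> 0 the integral approaches f(0.5) = exp(-0.5)*cos(0.5):
# the narrowing pulse "sifts" out the single value of f at t0.
```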

Sinusoids

Here we start deviating from the previous descriptions of time-domain signals. All the signals described so far are aperiodic. With periodic signals (such as sinusoids), we can start to introduce the concept of "frequency content": a shift from describing signals in the time-domain to one that describes them in the frequency-domain. This "new" way of describing signals will be fully exploited later when we look at Fourier series and Fourier transforms.

Why Sinusoids?

The sinusoid with which we are so familiar today appeared first as the solution to differential equations that 17th century mathematicians had derived to describe the behaviour of various vibrating systems. They were no doubt surprised that the functions that had been used for centuries in trigonometry appeared in this new context. Mathematicians who made important contributions included Huygens (1629-1695), Taylor (1685-1731), Ricatti (1676-1754), Bernoulli (1700-1782), D'Alembert (1717-1783), Euler (1707-1783) and Lagrange (1736-1813). Perhaps the greatest contribution was that of Euler, who invented the symbols π, e and i = √(-1), which we call j. He first identified the fact that is highly significant for us:

    For systems described by linear differential equations, a sinusoidal input yields a sinusoidal output.    (1A.33)

The output sinusoid has the same frequency as the input; it is, however, altered in amplitude and phase. We are so familiar with this fact that we sometimes overlook its significance. Only sinusoids have this property with respect to linear systems. For example, applying a square wave input does not produce a square wave output.

It was Euler who recognised that an arbitrary sinusoid could be resolved into a pair of complex exponentials:

    A cos(ωt + θ) = X e^(jωt) + X* e^(-jωt)    (1A.34)

where X = (A/2)e^(jθ). Euler found that the input/output form invariance exhibited by the sinusoid could be mathematically expressed most concisely using these exponential components. Thus if (A/2)e^(j(ωt + θ)) was the input, then for a linear system the output would be H · (A/2)e^(j(ωt + θ)), where H is a complex-valued function of ω. We know this already from circuit analysis, and the whole topic of system design concerns the manipulation of H.

The second major reason for the importance of the sinusoid can be attributed to Fourier, who in 1807 recognised that:

    Any periodic function can be represented as the weighted sum of a family of sinusoids.    (1A.35)

The way now lay open for the analysis of the behaviour of a linear system for any periodic input by determining the response to the individual sinusoidal (or exponential) components and adding them up (superposition). This technique is called frequency analysis. Its first application was probably that of Newton, passing white light through a prism and discovering that red light was unchanged after passage through a second prism.
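The "sinusoid in, sinusoid out" behaviour is easiest to see with complex exponentials. A short Python sketch (illustrative only): as a stand-in linear system we use the frequency response H(ω) = 1/(1 + jωRC) of a first-order RC lowpass (our choice, not from this page). The output phasor is simply H times the input phasor; only amplitude and phase change, never the frequency:

```python
import cmath, math

RC = 1.0

def H(w):
    """Frequency response of a first-order RC lowpass (example system)."""
    return 1.0 / (1.0 + 1j * w * RC)

# Input A*cos(wt + theta), expressed as the complex amplitude A*e^{j*theta}
A, theta, w = 2.0, 0.3, 5.0
X_in = A * cmath.exp(1j * theta)
X_out = H(w) * X_in

# Output is A*|H| * cos(wt + theta + angle(H)): same frequency w
print(abs(X_out), cmath.phase(X_out))
```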

Representation of Sinusoids

Sinusoids can be resolved into pairs of complex exponentials, as in Eq. (1A.34). In this section, instead of following the historical, mathematical approach, we relate sinusoids to complex numbers using a utilitarian argument. Consider the sinusoid:

    x(t) = A cos(2πt/T)    (1A.36)

Here A is the amplitude, 2π/T is the angular frequency, and T is the period. Graphically, we have:

[Figure 1A.26 - A cos(2πt/T), a cosine of amplitude A and period T]

Delaying the waveform by t0 results in:

[Figure 1A.27 - The same cosine delayed by t0]

Mathematically, we have:

    x(t) = A cos(2π(t - t0)/T)
         = A cos(2πt/T - 2πt0/T)
         = A cos(2πt/T + φ)    (1A.37)

where φ = -2πt0/T is the phase of the sinusoid. Note the negative sign in the definition of phase, which means a delayed sinusoid has negative phase. We note that when t0 = T/4 we get:

[Figure 1A.28 - The cosine delayed by t0 = T/4]

and:

    x(t) = A cos(2πt/T - π/2) = A sin(2πt/T)    (1A.38)

We can thus represent an arbitrary sinusoid as either:

    x(t) = A sin(ωt + φ)  or  A cos(ωt + φ - π/2)    (1A.39)

Similarly:

    x(t) = A cos(ωt + φ)  or  A sin(ωt + φ + π/2)    (1A.40)

We shall use the cosine form. When the phase of a sinusoid is referred to, it is the phase angle in the cosine form (in these lecture notes). If a sinusoid is expressed in the sine form, then we need to subtract 90° from the phase angle to get the cosine form.

Resolution of an Arbitrary Sinusoid into Orthogonal Functions

An arbitrary sinusoid can be expressed as a weighted sum of a cos(ωt) term and a -sin(ωt) term. Mathematically, we can derive the following:

    A cos(ωt + φ) = A cos(φ) cos(ωt) - A sin(φ) sin(ωt)
                  = A cos(φ) · cos(ωt) + A sin(φ) · (-sin(ωt))    (1A.41)

Example

Let's resolve a simple sinusoid into the above components:

    2 cos(3t - 15°) = 2 cos(-15°) cos(3t) + 2 sin(-15°) · (-sin(3t))
                    = 1.93 cos(3t) - 0.52 · (-sin(3t))    (1A.42)
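The resolution in Eq. (1A.42) can be reproduced directly. A short Python sketch (illustrative only; the helper name is ours) computing the cos(ωt) and -sin(ωt) weights of A cos(ωt + φ):

```python
import math

def orthogonal_components(A, phi_deg):
    """Return (c, s) so that A*cos(wt + phi) = c*cos(wt) + s*(-sin(wt))."""
    phi = math.radians(phi_deg)
    return A * math.cos(phi), A * math.sin(phi)

c, s = orthogonal_components(2.0, -15.0)
print(round(c, 2), round(s, 2))   # 1.93 and -0.52, as in Eq. (1A.42)
```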

Graphically, this is:

[Figure 1A.29 - The 1.93 cos(3t) term and the -0.52 (-sin(3t)) term: two orthogonal sinusoids of the same frequency but different amplitude]

The cos() and -sin() terms are said to be orthogonal. We will look at basis functions and orthogonality more formally later on; for our purposes, orthogonal may be understood as the property that two vectors or functions mutually have when one cannot be expressed as a sum containing the other. Any sinusoid can be decomposed into two orthogonal sinusoids of the same frequency.

Representation of a Sinusoid by a Complex Number

There are three distinct pieces of information about a sinusoid: frequency, amplitude and phase. Provided the frequency is unambiguous, two real numbers are required to describe the sinusoid completely. Why not use a complex number to store these two real numbers? Suppose the convention is adopted that the real part of the complex number is the cos(ωt) amplitude, and the imaginary part is the -sin(ωt) amplitude of the resolution described by Eq. (1A.41). Suppose we call the complex number created in this way the phasor associated with the sinusoid.

Our notation will be:

    x(t) - time-domain expression
    X̄ - phasor associated with x(t)

The reason for the bar over X will become apparent shortly. In general, the correspondence between a sinusoid and its phasor is:

    x(t) = A cos(ωt + φ)  ↔  X̄ = A e^(jφ)    (1A.43)

We can see that the magnitude of the complex number is the amplitude of the sinusoid, and the angle of the complex number is the phase of the sinusoid: a phasor can be "read off" a time-domain expression.

Example

With the previous example, we had x(t) = 2 cos(3t - 15°). Therefore the phasor associated with it, using our convention, is X̄ = 2∠-15° = 1.93 - j0.52.

Example

If x(t) = 3 sin(ωt - 30°) then we have to convert to our cos notation: x(t) = 3 cos(ωt - 120°). Therefore X̄ = 3∠-120°. Note carefully that X̄ ≠ 3 cos(ωt - 120°). All we can say is that x(t) = 3 cos(ωt - 120°) is represented by X̄ = 3∠-120°.

The convenience of complex numbers extends beyond their compact representation of amplitude and phase: the sum of two phasors corresponds to the sinusoid which is the sum of the two component sinusoids represented by the phasors. That is, if x3(t) = x1(t) + x2(t), where x1(t), x2(t) and x3(t) are sinusoids with the same frequency, then X̄3 = X̄1 + X̄2.

Example

If x3(t) = cos(ωt) + 2 sin(ωt) then X̄3 = 1∠0° + 2∠-90° = 1 - j2 = 2.24∠-63.4°, which corresponds to x3(t) = 2.24 cos(ωt - 63.4°).
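Phasor arithmetic maps directly onto Python's complex type. A short sketch (illustrative only; the helper name is ours) repeating the last example, adding cos(ωt) and 2 sin(ωt) by adding their phasors:

```python
import cmath, math

def phasor(A, phase_deg):
    """Phasor A at the given phase (cosine reference), as a complex number."""
    return A * cmath.exp(1j * math.radians(phase_deg))

# cos(wt) -> 1 at 0 degrees;  2 sin(wt) = 2 cos(wt - 90 deg) -> 2 at -90 degrees
X3 = phasor(1, 0) + phasor(2, -90)
print(abs(X3), math.degrees(cmath.phase(X3)))
# ~2.24 and ~-63.4: x3(t) = 2.24 cos(wt - 63.4 deg)
```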

Formalisation of the Relationship between Phasor and Sinusoid

Using Euler's expansion:

e^{jθ} = cos θ + j sin θ    (1A.44)

we have:

A e^{jφ} e^{jωt} = A e^{j(ωt+φ)} = A cos(ωt + φ) + jA sin(ωt + φ)    (1A.45)

We can see that the sinusoid A cos(ωt + φ), represented by the phasor X̄ = A e^{jφ}, is equal to Re{X̄ e^{jωt}}. Therefore:

x(t) = Re{X̄ e^{jωt}}    (1A.46)

This phasor / time-domain relationship can be visualised as:

[Figure 1A.30 – graphical interpretation of the rotating phasor / time-domain relationship: X̄ e^{jωt} = A e^{jφ} e^{jωt} rotates in the complex plane; its projection onto the real axis is x(t)]

Euler's Complex Exponential Relationships for Cosine and Sine

Euler's expansion:

e^{jθ} = cos θ + j sin θ    (1A.47)

can be visualised as:

[Figure 1A.31 – e^{jθ} drawn in the complex plane with real part cos θ and imaginary part j sin θ]

By mirroring the vectors about the real axis, it is obvious that:

e^{−jθ} = cos θ − j sin θ    (1A.48)

as shown in:

[Figure 1A.32 – e^{−jθ} drawn in the complex plane with real part cos θ and imaginary part −j sin θ]

By adding Eqs. (1A.47) and (1A.48), we can write:

cos θ = (e^{jθ} + e^{−jθ}) / 2    (1A.49)

This is cos represented as a sum of complex exponentials:

[Figure 1A.33 – the vectors e^{jθ} and e^{−jθ} adding along the real axis to give 2cos θ]

By subtracting Eq. (1A.48) from Eq. (1A.47), we can write:

sin θ = (e^{jθ} − e^{−jθ}) / 2j    (1A.50)

This is sin represented as a sum of complex exponentials:

[Figure 1A.34 – the vectors e^{jθ} and −e^{−jθ} adding along the imaginary axis to give j2sin θ]
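These two complex-exponential identities are easy to verify numerically. A short Python check (the angle 0.7 rad is arbitrary):

```python
import cmath
import math

theta = 0.7  # any angle, in radians

# cos θ = (e^{jθ} + e^{-jθ}) / 2
lhs_cos = (cmath.exp(1j*theta) + cmath.exp(-1j*theta)) / 2
# sin θ = (e^{jθ} - e^{-jθ}) / 2j
lhs_sin = (cmath.exp(1j*theta) - cmath.exp(-1j*theta)) / (2j)

print(abs(lhs_cos - math.cos(theta)) < 1e-12)  # True
print(abs(lhs_sin - math.sin(theta)) < 1e-12)  # True
```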

A New Definition of the Phasor

To avoid the mathematical "clumsiness" of needing to take the real part, another definition of the phasor is often adopted. In circuit analysis we use the definition of the phasor as given before. In communication engineering we will find it more convenient to use a new definition:

X = (A/2) e^{jφ},  or  X = X̄ / 2    (1A.51)

We realise that this phasor definition, although a unique and sufficient representation for every sinusoid, is just half of the sinusoid's full representation. Using Euler's complex exponential expansion of cos, we get a better phasor / time-domain relationship:

A cos(ωt + φ) = (A/2) e^{jφ} e^{jωt} + (A/2) e^{−jφ} e^{−jωt} = X e^{jωt} + X* e^{−jωt}    (1A.52)

The two terms in the summation represent two counter-rotating phasors with angular velocities ω and −ω in the complex plane, as shown below:

[Figure 1A.35 – graphical interpretation of counter-rotating phasors / time-domain relationship: X e^{jωt} = (A/2) e^{jφ} e^{jωt} and its conjugate X* e^{−jωt} sum to the real quantity x(t)]

Graphical Illustration of the Relationship between the Two Types of Phasor and their Corresponding Sinusoid

Consider the first representation of a sinusoid: x(t) = Re{X̄ e^{jωt}}. Graphically, x(t) can be "generated" by taking the projection of the rotating phasor, formed by multiplying X̄ by e^{jωt}, onto the real axis:

[Figure 1A.36 – complex plane and time-domain views: a sinusoid is generated by taking the real part of a rotating complex number]

Negative Frequency

Now consider the second representation of a sinusoid: x(t) = X e^{jωt} + X* e^{−jωt}. Graphically, x(t) can be "generated" by simply adding the two complex conjugate counter-rotating phasors X e^{jωt} and X* e^{−jωt}. The result will always be a real number:

[Figure 1A.37 – complex plane and time-domain views: a sinusoid is generated by adding a forward-rotating complex number and its backward-rotating complex conjugate]

The phasor X* e^{−jωt} rotates with a speed ω but in the clockwise direction. Therefore, we consider this counter-rotating phasor to have a negative frequency – negative frequency just means the phasor is going clockwise. The concept of negative frequency will be very useful when we start to manipulate signals in the frequency-domain.

You should become very familiar with all of these signal types, and you should feel comfortable representing a sinusoid as a complex exponential. Being able to manipulate signals mathematically, while at the same time imagining what is happening to them graphically, is the key to readily understanding signals and systems.

Common Discrete-Time Signals

A lot of discrete-time signals are obtained by "sampling" continuous-time signals at regular intervals. In these cases, we can simply form a discrete-time signal by substituting t = nTs into the mathematical expression for the continuous-time signal, and then rewriting it using discrete notation.

Example
Consider the sinusoidal function g(t) = cos(2πf₀t). If we "sample" this signal at discrete times, we get:

g(t)|_{t = nTs} = g(nTs) = cos(2πf₀nTs)    (1A.53)

Since this is valid only at times which are multiples of Ts, it is a discrete-time signal and can be written as such:

g[n] = cos(2πf₀nTs) = cos(Ωn)    (1A.54)

where Ω = 2πf₀Ts is obviously a constant. A graph of this discrete-time signal is given below:

[Figure 1A.38 – the sampled sinusoid g[n] plotted against n, taking values between −1 and 1]

This "sampling" process only works when the continuous-time signal is smooth and of finite value. Therefore, the discrete-time versions of the rectangle and impulse are defined from first principles.
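The substitution t = nTs can be sketched in a few lines of Python. The signal frequency f₀ and sampling interval Ts below are assumed values, chosen only for illustration (they give Ω = π/4 rad/sample):

```python
import math

f0 = 100.0            # assumed continuous-time frequency, Hz
Ts = 1 / 800.0        # assumed sampling interval, s
Omega = 2 * math.pi * f0 * Ts   # discrete-time angular frequency, rad/sample

# g[n] = cos(2*pi*f0*n*Ts) = cos(Omega*n): substitute t = n*Ts and use discrete notation
g = [math.cos(Omega * n) for n in range(8)]
```

The first few samples are 1, cos(π/4), 0, −cos(π/4), −1, … – one period of the sampled cosine every 8 samples.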

The Discrete-Time Step Function

We define the discrete-time step function to be:

u[n] = 0, n < 0;  1, n ≥ 0    (1A.55)

Graphically:

[Figure 1A.39 – u[n] is 0 for n ≤ −1 and 1 for n ≥ 0]

Note that this is not a sampled version of the continuous-time step function, which has a discontinuity at t = 0. We define the discrete-time step to have a value of 1 at n = 0 (instead of having a value of 1/2 if it were obtained by sampling the continuous-time signal).

Example
A discrete-time signal has the following graph:

[a signal taking the values 0, 1, 2, 3, 4, 5, … at n = 0, 1, 2, 3, 4, 5, …]

We recognise that the signal is increasing linearly after it "turns on" at n = 0. Therefore, an expression for this signal is g[n] = n u[n].

The Unit-Pulse Function

There is no way to sample an impulse, since its value is undefined. However, we shall see that the discrete-time unit-pulse, or Kronecker delta function, plays the same role in discrete-time systems as the impulse does in continuous-time systems. It is defined as:

δ[n] = 1, n = 0;  0, n ≠ 0    (1A.56)

Graphically:

[Figure 1A.40 – δ[n] is a single sample of height 1 at n = 0]

Example
An arbitrary-looking discrete-time signal has the following graph:

[a signal with nonzero samples at n = −2, −1, 0, 2, 3 and 4]

With no obvious formula, we can express the signal as the sum of a series of delayed and weighted unit-pulses. Superposition is the key to building complexity out of simple parts. Working from left to right, we get:

g[n] = δ[n+2] + 2δ[n+1] + 3δ[n] + 2δ[n−2] + δ[n−3] + 4δ[n−4]
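The "sum of delayed, weighted unit-pulses" decomposition can be sketched directly in Python. The signal values below are hypothetical, chosen only to illustrate the technique (they are not the values of the example above):

```python
def delta(n):
    """Discrete-time unit-pulse (Kronecker delta)."""
    return 1 if n == 0 else 0

# A hypothetical finite-length signal: nonzero values listed at their sample indices
values = {-2: 1, -1: 2, 0: 3, 2: 2, 3: 1, 4: 4}   # assumed, for illustration only

def g(n):
    """g[n] = sum over i of g[i] * delta[n - i] — delayed, weighted unit-pulses."""
    return sum(v * delta(n - i) for i, v in values.items())

# The decomposition reproduces the signal exactly (and is 0 everywhere else)
assert all(g(n) == values.get(n, 0) for n in range(-5, 8))
```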

Summary

- Signals can be characterized with many attributes – continuous or discrete, periodic or aperiodic, energy or power, deterministic or random. Each characterization tells us something useful about the signal.
- Signals written mathematically in a standard form will benefit our analysis in the future. We use arguments of the form (t − t₀)/T.
- Sinusoids are special signals. They are the only real signals that retain their shape when passing through a linear time-invariant system.
- The phasor corresponding to x(t) = A cos(ωt + φ) is X = (A/2) e^{jφ}. Sinusoids are often represented as the sum of two complex conjugate, counter-rotating phasors.
- Most discrete-time signals can be obtained by "sampling" continuous-time signals. The discrete-time step function and unit-pulse are defined from first principles.

References

Kamen, E.W. & Heck, B.S.: Fundamentals of Signals and Systems Using MATLAB®, Prentice-Hall, Inc., 1997.

Quiz

Encircle the correct answer, cross out the wrong answers. [one or none correct]

1. The signal cos(10πt + π/3) + sin(11πt + π/2) has:
(a) period = 1 s   (b) period = 2 s   (c) no period

2. The periodic sawtooth waveform shown (period 4, plotted for −12 ≤ t ≤ 12) can be represented by:
(a) g(t) = Σₙ [(t − 4n)/4] rect((t − 2)/4)
(b) g(t) = Σₙ [(t − 4n)/4] rect((t − 2n)/4)
(c) g(t) = Σₙ [(t − 4n)/4] rect((t − 2 − 4n)/4)

3. The energy of the signal shown (amplitude 4, plotted for −12 ≤ t ≤ 12) is:
(a) 16   (b) 64   (c) 0

4. The sinusoid −5sin(314t − 80°) can be represented by the phasor X̄:
(a) −5∠80°   (b) 5∠100°   (c) 5∠10°

5. Using forward and backward rotating phasor notation, X = 3∠30°. If the angular frequency is ω = 1 rad s⁻¹, x(2) is:
(a) 5.994   (b) 5.088   (c) −4.890

Answers: 1. b  2. x  3. x  4. c  5. c

Exercises

1. For each of the periodic signals shown, determine their periods and average power:

[(a) a waveform of amplitude 1 plotted over −2 ≤ t ≤ 2; (b) a waveform plotted over −2 ≤ t ≤ 3; (c) g(t) = 2e^{−t/10}, one period shown, plotted over −20 ≤ t ≤ 40; (d) g(t) = 2cos(200πt), plotted over −3 ≤ t ≤ 5]

2. For the signals shown, determine their time-domain expressions and calculate their energy:

[(a) a pulse of amplitude 10 (and level −5) on −2 ≤ t ≤ 2; (b) g(t) = cos t for −π/2 ≤ t ≤ π/2; (c) two identical cycles of a waveform of peak 2 over 0 ≤ t ≤ 4; (d) a staircase waveform with levels 3, 2, 1 over 0 ≤ t ≤ 5]

3. Using the sifting property of the impulse function, evaluate the following integrals:

(i) ∫ δ(t + 3) e^{−t} dt
(ii) ∫ δ(t − 2) sin t dt
(iii) ∫ δ(1 − t)(t + t²) dt
(iv) ∫ f(t) δ(t − t₀) dt
(v) ∫ δ(t − π/3) … 4 dt
(vi) ∫ e^{jωt} δ(t − 2) dt

4. Show that:

δ((t − t₀)/T) = T δ(t − t₀)

Hint: show that ∫ f(t) δ((t − t₀)/T) dt = T f(t₀).

At first glance it might appear that δ((t − t₀)/T) = δ(t − t₀), because "you obviously can't scale something whose width is zero to begin with." However, it is the impulse function's behaviour upon integration that is important, and not its shape! This is a case where common sense does not prevail. If a rectangle function is scaled by T, then its area is also T. Also notice how this conforms nicely to our notion of the impulse function being a limit of a narrow rectangle function.

5. Complete the following table:

(a) x(t) = 27cos(100t + 15°)  →  X̄ = ?
(b) x(t) = 5sin(2t + 80°)  →  X̄ = ?
(c) x(t) = 100sin(2t + 45°)  →  phasor diagram is: ?
(d) x(t) = 16cos(314t + 30°)  →  amplitude of cos(314t) term = ?; amplitude of sin(314t) term = ?
(e) X̄ = 27 + j11  →  x(t) = ?
(f) X̄ = 100∠60°  →  x(t) = ?
(g) x(t) = 4cos(2t + 45°) + 2sin(2t)  →  X̄ = ?
(h) x(t) = 2cos(t + 30°)  →  X = ?, X* = ?
(i) X = 3∠30°, ω = 1  →  x(2) = ?
(j) X̄ = 3∠30°  →  X = ?, X* = ?

Leonhard Euler (1707–1783) (Len-ard Oy-ler)

The work of Euler built upon that of Newton and made mathematics the tool of analysis. Astronomy, the geometry of surfaces, optics, electricity and magnetism, artillery and ballistics, and hydrostatics are only some of Euler's fields. He put Newton's laws, calculus, trigonometry, and algebra into a recognizably modern form.

Euler was born in Switzerland, and before he was an adolescent it was recognized that he had a prodigious memory and an obvious mathematical gift. He received both his bachelor's and his master's degrees at the age of 15, and at the age of 23 he was appointed professor of physics, and at age 26 professor of mathematics, at the Academy of Sciences in Russia.

Among the symbols that Euler initiated are the sigma (Σ) for summation (1755), e to represent the constant 2.71828… (1727), i for the imaginary √−1 (1777), and even a, b, and c for the sides of a triangle and A, B, and C for the opposite angles. He transformed the trigonometric ratios into functions and abbreviated them sin, cos and tan, and treated logarithms and exponents as functions instead of merely aids to calculation. He also standardised the use of π for 3.14159…

His 1736 treatise, Mechanica, represented the flourishing state of Newtonian physics under the guidance of mathematical rigor. An introduction to pure mathematics, Introductio in analysin infinitorum, appeared in 1748, which treated algebra, the theory of equations, trigonometry and analytical geometry. In this work Euler gave the formula e^{ix} = cos x + i sin x. It did for calculus what Euclid had done for geometry. Euler also published the first two complete works on calculus: Institutiones calculi differentialis, from 1755, and Institutiones calculi integralis, from 1768.

Euler's work in mathematics is vast. He was the most prolific writer of mathematics of all time. After his death in 1783 the St Petersburg Academy continued to publish Euler's unpublished work for nearly 50 more years!

Some of his phenomenal output includes: books on the calculus of variations, on the calculation of planetary orbits, on artillery and ballistics, on analysis, on shipbuilding and navigation, on the motion of the moon, lectures on the differential calculus, and many others. He integrated Leibniz's differential calculus and Newton's method of fluxions into mathematical analysis. He introduced beta and gamma functions, and integrating factors for differential equations. He laid the foundation of analytical mechanics. He proved many of Fermat's assertions including Fermat's Last Theorem for the case n = 3.

Euler considered ordinary and partial differential equations: linear equations with constant coefficients, second order differential equations with variable coefficients, power series solutions of differential equations, a method of variation of constants, integrating factors, and a method of approximating solutions. The calculus of variations is another area in which Euler made fundamental discoveries.

Analytic functions of a complex variable were investigated by Euler in a number of different contexts, including the study of orthogonal trajectories and cartography. He discovered the Cauchy-Riemann equations used in complex variable theory. He published a full theory of logarithms of complex numbers. Many unpublished results by Euler in this area were rediscovered by Gauss.

Euler made substantial contributions to differential geometry, investigating the theory of surfaces and curvature of surfaces, and made decisive and formative contributions to geometry, calculus and number theory. As well as investigating double integrals, Euler made a thorough investigation of integrals which can be expressed in terms of elementary functions. He also studied beta and gamma functions. When considering vibrating membranes, Euler was led to the Bessel equation, which he solved by introducing Bessel functions.

He studied continuum mechanics, lunar theory, the three body problem, elasticity, acoustics, the wave theory of light, hydraulics, and music.

Euler considered the motion of a point mass both in a vacuum and in a resisting medium. He analysed the motion of a point mass under a central force and also considered the motion of a point mass on a surface. In this latter topic he had to solve various problems of differential geometry and geodesics. He studied rotational problems which were motivated by the problem of the precession of the equinoxes, and he decomposed the motion of a solid into a rectilinear motion and a rotational motion.

He wrote a two volume work on naval science. He set up the main formulas for the topic of fluid mechanics: the continuity equation, the Laplace velocity potential equation, and the Euler equations for the motion of an inviscid incompressible fluid. Euler also published on the theory of music.

He did important work in astronomy, including: the determination of the orbits of comets and planets by a few observations, methods of calculation of the parallax of the sun, the theory of refraction, and consideration of the physical nature of comets.

Euler did not stop working in old age, despite his eyesight failing. He eventually went blind and employed his sons to help him write down long equations which he was able to keep in memory. Euler died of a stroke after a day spent giving a mathematics lesson to one of his grandchildren, doing some calculations on the motion of balloons, and discussing the calculation of the orbit of the planet Uranus, recently discovered by William Herschel. His last words, while playing with one of his grandchildren, were: "I die."


Lecture 1B – Systems

Differential equations. System modelling. Discrete-time signals and systems. Difference equations. Discrete-time block diagrams. Discretization in time of differential equations. Convolution in LTI discrete-time systems. Convolution in LTI continuous-time systems. Graphical description of convolution. Properties of convolution. Numerical convolution.

Linear Differential Equations with Constant Coefficients

Modelling of real systems involves approximating the real system to such a degree that it is tractable to our mathematics. Obviously, the more assumptions we make about a system, the simpler the model, and the more easily solved. The more accurate we make the model, the harder it is to analyse. We need to make a trade-off based on some specification or our previous experience.

A lot of the time our modelling ends up describing a continuous-time system that is linear, time-invariant (LTI) and finite dimensional. In these cases, the system is described by the linear differential equation:

y^(N)(t) + Σ_{i=0}^{N−1} aᵢ y^(i)(t) = Σ_{i=0}^{M} bᵢ x^(i)(t)    (1B.1)

where:

y^(N)(t) = d^N y(t) / dt^N    (1B.2)

Initial Conditions

The above equation needs the N initial conditions:

y(0⁻), y^(1)(0⁻), …, y^(N−1)(0⁻)    (1B.3)

We take 0⁻ as the time for initial conditions to take into account the possibility of an impulse being applied at t = 0, which will change the output instantaneously.

First-Order Case

For the first-order case we can express the solution to Eq. (1B.1) in a useful (and familiar) form. A first-order system is given by the first-order linear differential equation:

dy(t)/dt + a y(t) = b x(t)    (1B.4)

To solve, first multiply both sides by an integrating factor equal to e^{at}. This gives:

e^{at} [dy(t)/dt + a y(t)] = e^{at} b x(t)    (1B.5)

Thus:

d/dt [e^{at} y(t)] = e^{at} b x(t)    (1B.6)

Integrating both sides gives:

e^{at} y(t) = y(0⁻) + ∫₀ᵗ e^{aλ} b x(λ) dλ,  t ≥ 0    (1B.7)

Finally, dividing both sides by the integrating factor gives our first glimpse at a convolution integral – as the solution of a first-order linear differential equation:

y(t) = e^{−at} y(0⁻) + ∫₀ᵗ e^{−a(t−λ)} b x(λ) dλ,  t ≥ 0    (1B.8)

The two parts of the response given in Eq. (1B.8) have the obvious names zero-input response (ZIR) and zero-state response (ZSR). It will be shown later that the ZSR is given by a convolution between the system's impulse response and the input signal. Use this to solve the simple revision problem for the case of the unit step.
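Equation (1B.8) can be checked numerically for a simple case. The sketch below (the coefficients a, b, the initial condition, and the time t are arbitrary assumed values) evaluates the ZIR + ZSR for a unit-step input both in closed form and by approximating the convolution integral with a midpoint Riemann sum:

```python
import math

a, b = 2.0, 3.0    # assumed example coefficients of dy/dt + a*y = b*x
y0 = 1.0           # initial condition y(0-)
t = 1.5            # evaluation time

# Closed form of Eq. (1B.8) for a unit-step input x(t) = u(t):
#   y(t) = e^{-at} y(0-) + (b/a)(1 - e^{-at})
y_closed = math.exp(-a*t)*y0 + (b/a)*(1 - math.exp(-a*t))

# Direct numerical evaluation of the convolution integral in Eq. (1B.8)
N = 100000
dlam = t / N
zsr = sum(math.exp(-a*(t - (k + 0.5)*dlam)) * b * dlam for k in range(N))
y_numeric = math.exp(-a*t)*y0 + zsr

print(abs(y_closed - y_numeric) < 1e-6)  # True
```

The agreement confirms that the ZSR really is the convolution of the input with e^{−at} b.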

System Modelling

In modelling a system, we are nearly always after the input/output relationship. If we're clever, we can break a system down into a connection of simple components, each having a cause/effect relationship.

Electrical Circuits

The three basic linear, time-invariant relationships for the resistor, capacitor and inductor are respectively:

v(t) = R i(t)    (1B.9a)
i(t) = C dv(t)/dt    (1B.9b)
v(t) = L di(t)/dt    (1B.9c)

Mechanical Systems

In linear translational systems, the three basic linear, time-invariant relationships for the inertia force, damping force and spring force are respectively:

F(t) = M d²x(t)/dt²    (1B.10a)
F(t) = k_d dx(t)/dt    (1B.10b)
F(t) = k_s x(t)    (1B.10c)

where x(t) is the position of the object under study.

For rotational motion, the relationships for the inertia torque, damping torque and spring torque are:

T(t) = I d²θ(t)/dt²    (1B.11a)
T(t) = k_d dθ(t)/dt    (1B.11b)
T(t) = k_s θ(t)    (1B.11c)

Finding an input-output relationship for signals in systems is just a matter of applying the above relationships to a conservation law: for electrical circuits it is one of Kirchhoff's laws; in mechanical systems it is D'Alembert's principle.

Discrete-time Systems

A discrete-time signal is one that takes on values only at discrete instants of time. Discrete-time signals arise naturally in studies of economic systems – amortization (paying off a loan), models of the national income (monthly, quarterly or yearly), models of the inventory cycle in a factory, etc. They arise in science, e.g. in studies of population, chemical reactions, and measurements such as the deflection of a weighted beam. They arise all the time in electrical engineering, e.g. digital control, processing of electrocardiograms, radar tracking systems, and digital communication (CD, mobile phone, Internet). Digital signals take on discrete values, and hence they are a special case of discrete-time signals.

Discrete-time systems are important – their importance is probably now reaching that of continuous-time systems in terms of analysis and design – specifically because today signals are processed digitally, and microprocessors play a central role in today's signal processing. It is now cheaper and easier to perform most signal operations inside a microprocessor or microcontroller than it is with an equivalent analog continuous-time system.

But since there is a great depth to the analysis and design techniques of continuous-time systems, and since most physical systems are continuous-time in nature, it is still beneficial to study systems in the continuous-time domain.

12) i 0 Linear time-invariant (LTI) difference equation Solution by Recursion We can solve difference equations by a direct numerical procedure. show that: y0  ay 1  bx0 y1  a 2 y 1  abx0  bx1 y2  a 3 y 1  a 2 bx0  abx1  bx2 Signals and Systems 2014 (1B.13) with initial condition y 1 .1B. First-Order Case Consider the first-order linear difference equation: yn  ayn  1  bxn (1B. (1B. There is a MATLAB® function available for download from the Signals and Systems web site called recur that solves the above equation.5 Linear Difference Equations with Constant Coefficients Linear. time-invariant. By successive substitution.14) First-order linear difference equation . Complete Solution By solving Eq. discrete-time systems can be modelled with the difference equation: N M yn   ai yn  i    bi xn  i  i 1 (1B.12) recursively it is possible to generate an expression for the complete solution yn in terms of the initial conditions and the input xn .

From the pattern, it can be seen that for n ≥ 0:

y[n] = (−a)^{n+1} y[−1] + Σ_{i=0}^{n} (−a)^{n−i} b x[i]    (1B.15)

This solution is the discrete-time counterpart to Eq. (1B.8) – a first look at a convolution summation, as the solution of a first-order linear difference equation.

Discrete-Time Block Diagrams

An LTI discrete-time system can be represented as a block diagram consisting of adders, gains and delays.

The gain element is shown below:

[Figure 1B.1 – a discrete-time gain element: x[n] → A → y[n] = A x[n]]

The unit-delay element is shown below:

[Figure 1B.2 – a discrete-time unit-delay element: x[n] → D → y[n] = x[n−1]]

Such an element is normally implemented by the memory of a computer, or a digital delay line.

Example

Using these two elements and an adder, we can construct a representation of the discrete-time system given by y[n] = −a y[n−1] + b x[n]. The system is shown below:

[block diagram – x[n] passes through a gain b into an adder producing y[n]; y[n] is fed back through a unit delay D and a gain a into the adder]

Discretization in Time of Differential Equations

Often we wish to use a computer for the solution of continuous-time differential equations. We can – if we are careful about interpreting the results.

First-Order Case

Let's see if we can approximate the first-order linear differential equation given by Eq. (1B.4) with a discrete-time equation. We can approximate the continuous-time derivative using Euler's approximation, or forward difference:

dy(t)/dt |_{t=nT} ≈ [y(nT + T) − y(nT)] / T    (1B.16)

If T is suitably small and y(t) is continuous, the approximation will be accurate. Substituting this approximation into Eq. (1B.4) results in a discrete-time approximation given by the first-order difference equation:

y[n] = (1 − aT) y[n−1] + bT x[n−1]    (1B.17)

The discrete values y[n] are approximations to the solution y(nT). Show that y[n] gives approximate values of the solution y(t) at the times t = nT with arbitrary initial condition y[−1] for the special case of zero input.
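The accuracy of the forward-difference approximation is easy to demonstrate. The sketch below iterates Eq. (1B.17) with zero input and compares the result against the exact zero-input solution y(t) = e^{−at} y(0) at t = 1 (the parameter values are arbitrary):

```python
import math

a = 2.0       # assumed coefficient of dy/dt + a*y = 0
T = 0.01      # step size, small enough for good accuracy
y = 1.0       # initial condition y(0)

ys = [y]
for n in range(100):          # iterate up to t = 100*T = 1
    y = (1 - a*T) * y         # Eq. (1B.17) with x = 0
    ys.append(y)

# Exact solution of dy/dt + a*y = 0 is y(t) = e^{-a t} y(0)
print(abs(ys[100] - math.exp(-a * 1.0)) < 0.01)  # True
```

Halving T roughly halves the error, as expected of a first-order (Euler) method.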

Second-Order Case

We can generalize the discretization process to higher-order differential equations: an nth-order differential equation can be approximated with an nth-order difference equation. In the second-order case the following approximation of the second-order derivative can be used:

d²y(t)/dt² |_{t=nT} ≈ [y(nT + 2T) − 2y(nT + T) + y(nT)] / T²    (1B.18)

Now consider the second-order differential equation:

d²y(t)/dt² + a₁ dy(t)/dt + a₀ y(t) = b₁ dx(t)/dt + b₀ x(t)    (1B.19)

Show that the discrete-time approximation to the solution y(t) is given by:

y[n] = (2 − a₁T) y[n−1] − (1 − a₁T + a₀T²) y[n−2] + b₁T x[n−1] + (b₀T² − b₁T) x[n−2]    (1B.20)

Convolution in Linear Time-invariant Discrete-time Systems

Although the linear difference equation is the most basic description of a linear discrete-time system, we can develop an equivalent representation called the convolution representation. This representation will help us to determine important system properties that are not readily apparent from observation of the difference equation.

First-Order System

We have previously considered the linear difference equation:

y[n] = −a y[n−1] + b x[n]    (1B.21)

and showed by successive substitution that the complete response is:

y[n] = (−a)^{n+1} y[−1] + Σ_{i=0}^{n} (−a)^{n−i} b x[i]    (1B.22)

By the definition of the convolution representation, we are after an expression for the output with all initial conditions zero. We then have the zero-state response (ZSR) – a convolution summation:

y[n] = Σ_{i=0}^{n} (−a)^{n−i} b x[i]    (1B.23)

In contrast to Eq. (1B.22), Eq. (1B.23) depends exclusively on present and past values of the input signal, and it is only valid when the system's initial conditions are all zero. One advantage of this representation is that the output is written as a linear combination of past and present input signal elements, so we may directly observe how each past input affects the present output signal. For example, an input x[i] contributes an amount (−a)^{n−i} b x[i] to the totality of the output at the nth period.

24) i 0 which reduces to: yn   a  b n (1B. the response to a delayed unit-pulse  n  i  must be hn  i  .25) The unit-pulse response for this system is therefore given by: A first-order discrete-time system’s unit-pulse response hn   a  bun n (1B.1B. (1B. if we let xn   n .10 Unit-Pulse Response of a First-Order System The output of a system subjected to a unit-pulse response  n is denoted hn A discrete-time system’s unit-pulse response defined and is called the unit-pulse response. or weighting sequence of the discretetime system.26) General System For a general linear time-invariant (LTI) system. It is very important because it completely characterises a system’s behaviour.27) . (1B. then the output of the system to a unit-pulse input can be expressed using Eq.23) as: n yn    a  b i  n i (1B. Since xn can be written as:  xn   xi  n  i  i 0 Signals and Systems 2014 (1B. For the first-order system of Eq. It may also provide an experimental or mathematical means to determine system behaviour.21).

and since the system is LTI, the response yᵢ[n] to x[i] δ[n−i] is given by:

yᵢ[n] = x[i] h[n−i]    (1B.28)

The response to the sum Eq. (1B.27) must be equal to the sum of the yᵢ[n] defined by Eq. (1B.28). Thus the response to x[n] is:

y[n] = Σ_{i=0}^{∞} x[i] h[n−i]    (1B.29)

This is the convolution representation of a discrete-time system, also written as:

y[n] = h[n] ∗ x[n]    (1B.30)

Graphically, we can now represent the system as:

[Figure 1B.3 – graphical notation for a discrete-time system using the unit-pulse response: x[n] → h[n] → y[n]]

It should be pointed out that the convolution representation is not very efficient in terms of a digital implementation of the output of a system (it needs much more memory and calculating time) compared with the difference equation.

Convolution is commutative, which means that it is also true to write:

y[n] = Σ_{i=0}^{∞} h[i] x[n−i]    (1B.31)
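The convolution summation of Eq. (1B.29) translates directly into code. A minimal Python sketch for finite causal sequences (deliberately naive – as the text notes, this representation is not efficient, but it mirrors the mathematics):

```python
def conv(h, x):
    """Discrete convolution y[n] = sum_i h[i] x[n-i] for causal finite sequences."""
    y = [0.0] * (len(h) + len(x) - 1)
    for i, hi in enumerate(h):
        for j, xj in enumerate(x):
            y[i + j] += hi * xj
    return y

# First-order system: h[n] = (-a)^n b u[n], truncated to 20 terms (assumed a, b)
a, b = 0.5, 2.0
h = [(-a)**n * b for n in range(20)]

x = [1.0, 1.0, 1.0]        # a short input sequence
y = conv(h, x)
print(y[:3])               # [2.0, 1.0, 1.5]
```

Each output sample is a weighted combination of present and past inputs, exactly as in Eq. (1B.31).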

Discrete-time convolution can be illustrated as follows. Suppose the unit-pulse response is that of a filter of finite length k. Then the output of such a filter is:

y[n] = h[n] ∗ x[n] = Σ_{i=0}^{k} h[i] x[n−i] = h[0]x[n] + h[1]x[n−1] + … + h[k]x[n−k]    (1B.32)

Graphically, this summation can be viewed as two buffers, or arrays, sliding past one another. The array locations that overlap are multiplied and summed to form the output at that instant:

[Figure 1B.4 – graphical view of the convolution operation in discrete-time: a fixed array h₀, h₁, …, h₅ and a sliding array holding future input values, the present input value xₙ, and past input values; the overlapping locations give yₙ = h₀xₙ + h₁xₙ₋₁ + h₂xₙ₋₂ + h₃xₙ₋₃ + h₄xₙ₋₄ + h₅xₙ₋₅]

In other words, the output at time n is equal to a linear combination of past and present values of the input signal. The system can be considered to have a memory because at any particular time, the output is still responding to an input at a previous time.

Discrete-time convolution can be implemented by a transversal digital filter:

[Figure 1B.5 – a transversal digital filter: x[n] passes through a chain of unit delays D producing x[n−1], x[n−2], …, x[n−k]; the taps are weighted by h[0], h[1], h[2], …, h[k] and summed to form y[n]]

MATLAB® can do convolution for us – use the conv function.
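The transversal-filter structure can be sketched in Python with a delay line. The class name below is invented for illustration; a `deque` plays the role of the chain of delay elements:

```python
from collections import deque

class TransversalFilter:
    """FIR (transversal) filter: y[n] = h[0]x[n] + h[1]x[n-1] + ... + h[k]x[n-k].
    (An illustrative sketch of the structure in the figure, not library code.)"""
    def __init__(self, h):
        self.h = list(h)
        # Delay line holding x[n], x[n-1], ..., x[n-k], initially all zero
        self.taps = deque([0.0] * len(self.h), maxlen=len(self.h))

    def step(self, xn):
        self.taps.appendleft(xn)   # newest sample lands at the h[0] tap
        return sum(hi * xi for hi, xi in zip(self.h, self.taps))

f = TransversalFilter([1.0, 0.5, 0.25])
# Feeding in a unit pulse reads the weights back out, one per step
print([f.step(x) for x in [1, 0, 0, 0]])   # [1.0, 0.5, 0.25, 0.0]
```

Note that the unit-pulse input recovers h[n] itself – the filter's weighting sequence.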

System Memory

A system's memory can be roughly interpreted as a measure of how significant past inputs are on the current output. System memory depends on the unit-pulse response – specifically, on how long it takes to decay to zero. Consider the two unit-pulse responses below:

[Figure 1B.6 – h₁[n] has decayed to zero within five or six samples; h₂[n] is still significant after 18–20 samples]

System 1 depends strongly on inputs applied five or six iterations ago and less so on inputs applied more than six iterations ago. The output of system 2 depends strongly on inputs 20 or more iterations ago. System 1 is said to have a shorter memory than system 2.

It is apparent that a measure of system memory is obtained by noting how quickly the system unit-pulse response decays to zero: the more quickly a system's weighting sequence goes to zero, the shorter the memory.

Some applications require a short memory, where the output is more readily influenced by the most recent behaviour of the input signal. Such systems are fast responding. A system with long memory does not respond as readily to changes in the recent behaviour of the input signal and is said to be sluggish.

System Stability

A system is stable if its output signal remains bounded in response to any bounded signal.    (1B.33)

If a bounded input (BI) produces a bounded output (BO), then the system is termed BIBO stable. This implies that:

lim_{i→∞} h[i] = 0    (1B.34)

This is something not readily apparent from the difference equation. What can you say about the stability of the system described by Eq. (1B.21)? A more thorough treatment of system stability will be given later.

Convolution in Linear Time-invariant Continuous-time Systems

The input / output relationship of a continuous-time system can be specified in terms of a convolution operation between the input and the impulse response of the system. To derive convolution for the continuous-time case, start with a rectangle input. Recall that we can consider the impulse as the limit of a rectangle function:

x_r(t) = (1/T) rect(t/T)    (1B.35)

as T → 0. The system response to this input is:

y(t) = y_r(t)    (1B.36)

As the input approaches an impulse function:

lim_{T→0} x_r(t) = δ(t)    (1B.37)
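The link between stability and the decay of the unit-pulse response can be illustrated with the first-order response h[n] = (−a)^n b: it decays to zero when |a| < 1 and grows without bound when |a| > 1. A small Python check (the parameter values are arbitrary):

```python
def unit_pulse_response(a, b, n_max):
    """First-order unit-pulse response h[n] = (-a)^n b, for n = 0..n_max-1."""
    return [(-a)**n * b for n in range(n_max)]

# |a| < 1: h[n] -> 0, consistent with BIBO stability
h_stable = unit_pulse_response(0.5, 1.0, 50)
# |a| > 1: h[n] grows without bound -> unstable
h_unstable = unit_pulse_response(1.5, 1.0, 50)

print(abs(h_stable[-1]) < 1e-9, abs(h_unstable[-1]) > 1e3)  # True True
```

So for Eq. (1B.21) the system is stable when |a| < 1 – something far easier to see from h[n] than from the difference equation itself.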

then:

    \lim_{T \to 0} y_r(t) = h(t)    (1B.38)

the output approaches the impulse response.

Now express the general input signal as the limit of a staircase approximation \tilde{x}(t,T), as shown below – that is, treat an arbitrary input waveform as a sum of rectangles:

[Figure 1B.7 – an arbitrary waveform x(t) and its staircase approximation \tilde{x}(t,T), with steps of width T at ..., -4T, -2T, 0, 2T, 4T, 6T, ...]

The rectangles get smaller and smaller and eventually approach the original waveform:

    x(t) = \lim_{T \to 0} \tilde{x}(t,T)    (1B.39)

where:

    \tilde{x}(t,T) = \sum_{i=-\infty}^{\infty} x(iT)\, \mathrm{rect}\left(\frac{t - iT}{T}\right)    (1B.40)

The staircase is just a sum of weighted rectangle inputs, so we can rewrite Eq. (1B.40) using Eq. (1B.35) as:

    \tilde{x}(t,T) = \sum_{i=-\infty}^{\infty} x(iT)\, T\, x_r(t - iT)    (1B.41)

Since the system is time-invariant, the response to x_r(t - iT) is y_r(t - iT), and because superposition holds for linear systems, the system response to \tilde{x}(t,T) is:

    \tilde{y}(t,T) = \sum_{i=-\infty}^{\infty} x(iT)\, T\, y_r(t - iT)    (1B.42)

We already know the output to each weighted rectangle, even in the limit as the staircase approximation approaches the original input. The system response to x(t) is just the limit:

    y(t) = \lim_{T \to 0} \tilde{y}(t,T) = \lim_{T \to 0} \sum_{i=-\infty}^{\infty} x(iT)\, y_r(t - iT)\, T    (1B.43)

When we perform the limit, x(iT) \to x(\lambda), y_r(t - iT) \to h(t - \lambda) and T \to d\lambda. Hence the output response can be expressed in the form:

    y(t) = \int_{-\infty}^{\infty} x(\lambda)\, h(t - \lambda)\, d\lambda    (1B.44)

This is the convolution integral for continuous-time systems. If the input x(t) = 0 for all t < 0 (the input starts at time t = 0), then:

    y(t) = \int_{0}^{\infty} x(\lambda)\, h(t - \lambda)\, d\lambda    (1B.45)

If, in addition, the system is causal, then h(t - \lambda) = 0 for negative arguments, i.e. when \lambda > t. The upper limit in the integration can then be changed so that:

    y(t) = \int_{0}^{t} x(\lambda)\, h(t - \lambda)\, d\lambda    (1B.46)

Once again, it can be shown that convolution is commutative, which means that it is also true to write (compare with Eq. (1B.31)):

    y(t) = \int_{0}^{t} h(\lambda)\, x(t - \lambda)\, d\lambda    (1B.47)

This is an alternative way of writing the convolution integral.

the output y t  can be obtained.48) Graphically. ht   0 for t  0 . and let the input signal be the unit-step: h (t ) 1 x (t ) 1 e -t t t Figure 1B. In other words. initial conditions are zero.18 With the convolution operation denoted by an asterisk. we can represent the system as: Graphical notation for a continuoustime system using the impulse response x (t ) y ( t ) = h (t )* x ( t ) h (t ) Figure 1B. ie. the input / output relationship becomes: Convolution notation for a continuoustime system y t   ht * xt  (1B. the input signal is flipped in time about the origin. as the time “parameter” t Signals and Systems 2014 . “*”. Graphical Description of Convolution Consider the following continuous-time example which has a causal impulse response function. The output response using convolution is just the ZSR.9 Using graphical convolution. Then. A causal impulse response implies that there is no response from the system until an impulse is applied at t  0 . First. Let the impulse response of the system be a decaying exponential.8 It should be pointed out. that the convolution relationship is only valid when there is no initial energy stored in the system.1B. once again.

advances, the input signal "slides" past the impulse response – in much the same way as the input values slide past the unit-pulse values for discrete-time convolution. You can think of this graphical technique as the continuous-time version of a digital transversal filter (you might like to think of it as a discrete-time system and input signal, with the time delay between successive values so tiny that the finite summation of Eq. (1B.30) turns into a continuous-time integration).

When t = 0, there is obviously no overlap between the impulse response and the input signal. The output must be zero, since we have assumed the system to be in the zero state (all initial conditions zero). Therefore y(0) = 0. This is illustrated below:

[Figure 1B.10 – "snapshot" at t = 0: h(\lambda), the flipped input x(0 - \lambda), and the product h(\lambda) x(0 - \lambda); the area under the product curve is y(0) = 0]

Letting time "roll on" a bit further, we take a snapshot of the situation when t = 1. This is shown below:

[Figure 1B.11 – "snapshot" at t = 1: h(\lambda), x(1 - \lambda), and the product h(\lambda) x(1 - \lambda) = e^{-\lambda} for 0 \le \lambda \le 1; the area under the product curve is y(1)]

The output value at t = 1 is now given by:

    y(1) = \int_{0}^{1} h(\lambda)\, x(1 - \lambda)\, d\lambda
         = \int_{0}^{1} e^{-\lambda}\, d\lambda
         = \left[ -e^{-\lambda} \right]_{0}^{1}
         = 1 - e^{-1} \approx 0.63    (1B.49)

12 The output value at t  2 is now given by: y 2    h x2   d 2 0 2     e d  e 0   2 0 (1B.50)  1 e 2 Signals and Systems 2014  0.1B.86 .21 Taking a snapshot at t  2 gives: Graphical illustration of continuous-time “snapshot” at t=2 h ( ) 1 e  1 2 1  x (2- ) 1 2  h ( ) x (2- ) 1 e  1 2  Area under the curve = y(2) Figure 1B.

If we keep evaluating the output for various values of t, we can build up a graphical picture of the output for all time:

[Figure 1B.13 – the output y(t) = 1 - e^{-t}, passing through y(0) = 0, y(1) = 0.63 and y(2) = 0.86]

In this simple case, it is easy to verify the graphical solution using Eq. (1B.47). The output value at any time t is given by:

    y(t) = \int_{0}^{t} h(\lambda)\, x(t - \lambda)\, d\lambda
         = \int_{0}^{t} e^{-\lambda}\, d\lambda
         = \left[ -e^{-\lambda} \right]_{0}^{t}
         = 1 - e^{-t}    (1B.51)

In more complicated situations, it is often the graphical approach that provides a quick insight into the form of the output signal, and it can be used to give a rough sketch of the output without too much work.
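The graphical procedure above is also easy to check numerically: approximate the convolution integral of Eq. (1B.47) by a Riemann sum and compare with the closed-form result y(t) = 1 - e^{-t}. A minimal sketch in Python (the notes otherwise use MATLAB®; NumPy is assumed):

```python
import numpy as np

def conv_output(t, dlam=1e-4):
    """Approximate y(t) = integral from 0 to t of h(lam) x(t - lam) dlam
    for h(t) = exp(-t) u(t) and x(t) = u(t), using a Riemann sum."""
    lam = np.arange(0.0, t, dlam)
    h = np.exp(-lam)           # impulse response evaluated at lam
    x = np.ones_like(lam)      # unit-step input: x(t - lam) = 1 for 0 <= lam <= t
    return np.sum(h * x) * dlam

# Compare with the exact solution y(t) = 1 - exp(-t) of Eq. (1B.51)
for t in (1.0, 2.0, 5.0):
    approx = conv_output(t)
    exact = 1.0 - np.exp(-t)
    assert abs(approx - exact) < 1e-3
```

The sum reproduces y(1) ≈ 0.63 and y(2) ≈ 0.86 from the snapshots above; shrinking `dlam` shrinks the approximation error.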

Properties of Convolution

In the following list of continuous-time properties, the notation x(t) \to y(t) should be read as "the input x(t) produces the output y(t)":

    a\, x(t) \to a\, y(t)    (1B.52a)
    x_1(t) + x_2(t) \to y_1(t) + y_2(t)    (1B.52b)
    a_1 x_1(t) + a_2 x_2(t) \to a_1 y_1(t) + a_2 y_2(t)    (1B.52c)
    x(t - t_0) \to y(t - t_0)    (1B.52d)

The first three are all due to the linearity property; the last expresses time-invariance. Convolution is also associative, commutative and distributive with addition. Similar properties hold for discrete-time convolution.

Numerical Convolution

We have already looked at how to discretize a continuous-time system by discretizing the system's input / output differential equation. The following procedure provides another method for discretizing a continuous-time system – computers work with discrete data. It should be noted that the two different methods produce two different discrete-time representations.

We start by thinking about how to simulate a continuous-time convolution with a computer, which operates on discrete-time data. The integral in Eq. (1B.47) can be discretized by setting t = nT:

    y(nT) = \int_{0}^{nT} h(\lambda)\, x(nT - \lambda)\, d\lambda    (1B.53)

By effectively reversing the procedure used in arriving at Eq. (1B.47), we can break this integral into regions of width T:

    y(nT) = \int_{0}^{T} h(\lambda)\, x(nT - \lambda)\, d\lambda + \int_{T}^{2T} h(\lambda)\, x(nT - \lambda)\, d\lambda + \cdots + \int_{iT}^{(i+1)T} h(\lambda)\, x(nT - \lambda)\, d\lambda + \cdots    (1B.54)

which can be rewritten using the summation symbol:

    y(nT) = \sum_{i=0}^{n} \int_{iT}^{iT+T} h(\lambda)\, x(nT - \lambda)\, d\lambda    (1B.55)

If T is small enough, h(\lambda) and x(nT - \lambda) can be taken to be constant over each interval:

[Figure 1B.14 – h(\lambda) approximated by the constant value h(iT) over the interval iT \le \lambda < iT + T]

That is, apply Euler's approximation:

    h(\lambda) \approx h(iT)
    x(nT - \lambda) \approx x(nT - iT)    (1B.56)

so that Eq. (1B.55) becomes:

    y(nT) \approx \sum_{i=0}^{n} \int_{iT}^{iT+T} h(iT)\, x(nT - iT)\, d\lambda    (1B.57)

Since the integrand is constant with respect to \lambda, it can be moved outside the integral, which is then easily evaluated – we approximate the integral with a summation:

    y(nT) \approx \sum_{i=0}^{n} h(iT)\, x(nT - iT)\, T    (1B.58)

Writing in the notation for discrete-time signals, we have the following input / output relationship:

    y[n] = \sum_{i=0}^{n} h[i]\, x[n - i]\, T,    n = 0, 1, 2, \ldots    (1B.59)

where h[n] is the sampled version of the impulse response h(t) of the original continuous-time system. This equation can be viewed as the convolution-summation representation of a linear time-invariant system with unit-pulse response T h[n]. It is the convolution approximation for causal systems with inputs applied at t = 0.
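As a sketch (again in Python rather than the MATLAB® used elsewhere in these notes), Eq. (1B.59) can be implemented directly and checked against the earlier example, where h(t) = e^{-t}, x(t) = u(t) and the exact output is y(t) = 1 - e^{-t}:

```python
import numpy as np

def numerical_convolution(h, x, T):
    """Convolution-summation of Eq. (1B.59): y[n] = T * sum_{i=0}^{n} h[i] x[n-i]."""
    N = len(x)
    y = np.zeros(N)
    for n in range(N):
        for i in range(n + 1):
            y[n] += h[i] * x[n - i]
        y[n] *= T
    return y

T = 0.01                      # sampling period - small enough for Euler's approximation
t = np.arange(0, 5, T)
h = np.exp(-t)                # sampled impulse response h[n] = h(nT)
x = np.ones_like(t)           # sampled unit-step input
y = numerical_convolution(h, x, T)

exact = 1 - np.exp(-t)
assert np.max(np.abs(y - exact)) < 0.02   # error is on the order of T
```

Halving T roughly halves the worst-case error, which illustrates the trade-off the method imposes between accuracy and the amount of computation.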

Convolution with an Impulse

One very important particular case of convolution, which we will use all the time, is that of convolving a function with a delayed impulse. We can tackle the problem three ways: graphically, algebraically, or by using the concept that a system performs convolution. Using this last approach, we can surmise what the solution is by recognising that the convolution of a function h(t) with an impulse is equivalent to applying an impulse to a system that has an impulse response given by h(t). Applying an impulse to a system creates the impulse response:

[Figure 1B.15 – input \delta(t) applied to a system h(t); the output is y(t) = h(t) * \delta(t) = h(t)]

The output, by definition, is the impulse response h(t). We can also arrive at this result algebraically by performing the convolution integral, and noting that it is really a sifting integral:

    \delta(t) * h(t) = \int_{-\infty}^{\infty} \delta(\lambda)\, h(t - \lambda)\, d\lambda = h(t)    (1B.60)

If we now apply a delayed impulse to the system, then since the system is time-invariant, we should get out a delayed impulse response:

[Figure 1B.16 – input \delta(t - t_0) applied to a system h(t); the output is y(t) = h(t) * \delta(t - t_0) = h(t - t_0)]

Again, using the definition of the convolution integral and the sifting property of the impulse, we can arrive at the result algebraically:

    \delta(t - t_0) * h(t) = \int_{-\infty}^{\infty} \delta(\lambda - t_0)\, h(t - \lambda)\, d\lambda = h(t - t_0)    (1B.61)

Therefore, in general, we have:

    f(x) * \delta(x - x_0) = f(x - x_0)    (1B.62)

Convolving a function with an impulse shifts the original function to the impulse's location. This can be represented graphically as:

[Figure 1B.17 – a function f(x) convolved with \delta(x - x_0) gives f(x - x_0), the original function shifted to x_0]
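The discrete-time counterpart of Eq. (1B.62) is easy to demonstrate: convolving a sequence with a delayed unit pulse \delta[n - n_0] simply shifts the sequence. A Python sketch (NumPy assumed; the sequence values are just illustrative):

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25, 0.125])    # some unit-pulse response h[n]

# delayed unit pulse: delta[n - 2]
delta_delayed = np.array([0.0, 0.0, 1.0])

y = np.convolve(h, delta_delayed)

# The result is h[n] shifted to start at n = 2
assert np.allclose(y, [0.0, 0.0, 1.0, 0.5, 0.25, 0.125])
```

The original sequence reappears intact, merely relocated to the impulse's position – exactly the shifting property stated above.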

Summary

- Systems are predominantly described by differential or difference equations – they are the equations of dynamics, and tell us how outputs and various states of the system change with time for a given input. Most systems can be derived from simple cause / effect relationships, together with a few conservation laws.

- Discrete-time signals occur naturally and frequently – they are signals that exist only at discrete points in time. Discrete-time systems are commonly implemented using microprocessors.

- We can approximate continuous-time systems with discrete-time systems by a process known as discretization – we replace differentials with differences.

- A system is BIBO stable if its impulse response decays to zero in the continuous-time case, or if its unit-pulse response decays to zero in the discrete-time case.

- Convolution is another (equivalent) way of representing the input / output relationship of a system, if the initial conditions are zero. It shows us features of the system that were otherwise "hidden" when written in terms of a differential or difference equation.

- Convolution introduces us to the concept of an impulse response for a continuous-time system, and a unit-pulse response for a discrete-time system. Knowing this response, we can determine the output for any input.

- Convolving a function with an impulse shifts the original function to the impulse's location.

References

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB®, Prentice-Hall International, Inc., 1997.

Exercises

1. The following continuous-time functions are to be uniformly sampled. Plot the discrete signals which result if the sampling period T is (i) T = 0.1 s, (ii) T = 0.3 s, (iii) T = 0.5 s, (iv) T = 1 s. How does the sampling time affect the accuracy of the resulting signal?

   (a) x(t) = 1   (b) x(t) = \cos(4t)   (c) x(t) = \cos(10t)

2. Plot the sequences given by:

   (a) y_1[n] = 3\delta[n+1] + \delta[n] - 2\delta[n-1] + \frac{1}{2}\delta[n-2]
   (b) y_2[n] = 4\delta[n] - \delta[n-2] + 3\delta[n-3]

3. Find y[n] = y_1[n] + y_2[n] when:

   y_1[n] = { n/2 for n = 0, 1, 2;  1 for n = -1, -2, -3;  0 otherwise }
   y_2[n] = { n for n = 0, 1, 2;  0 for n = -1, -2, -3;  0 otherwise }

4. From your solution in Question 2, find a[n] = y_1[n] + y_2[n]. Show graphically that the resulting sequence is equivalent to the sum of the following delayed unit-step sequences:

   a[n] = 3u[n+1] - u[n-1] - \frac{1}{2}u[n-2] + \frac{9}{2}u[n-3] - 3u[n-4]

2 and 3 given y 2  2 . Construct block diagrams for the following difference equations: (i) yn  yn  2  xn  xn  1 (ii) yn  2 yn  1  yn  2  3 xn  4 7. 21. 2 … n Signals and Systems 2014 . 2. (b) By evaluating the first few terms show that the following formula also describes the numbers in the Fibonacci sequence: yn      n n 1  0.5  1. 1. 1. y 1  1 . 13. 1. 3. (i) Construct a difference equation from the following block diagram: x [n ] y [n ] 3 D D -2 D D (ii) From your solution calculate yn for n = 0. xn  0 for n  0 and xn   1 for n = 0.30 5.25   5  (c) Using your answer in (a) find y20 and y25 .25  0. Which approach is easier? 6. 5. Check your results using the equation in (b). The following series of numbers is known as the Fibonacci sequence: 0.5  1.1B. 1. when y0  0 and y1  1 . 8. 34… (a) Find a difference equation which describes this number sequence yn for n  2 .

75 xn  1  0. n  1 n  1 . using (i) the basic difference equation. (Note yn  0 for n  2 ).  3.31 8. 1. 2. n  0.1B. For the single input-single output continuous. (a) Find the unit-pulse response of the linear systems given by the following equations: (i) yn  T xn  xn  1  yn  1 2 (ii) yn  xn  0.   0.  xn   1. determine which coefficients must be zero for the systems to be (a) linear (b) time invariant 2 (i)  d3y  d2y dy a1  3   a2 2  a3  a4 y  a5 sin t   a6 y  a7 x dt dt  dt  (ii) a1 y 2 n  3  a2 yn  2  a3  a4 yn  a5 sin n  yn  1  a6 yn  a7 xn Signals and Systems 2014 .and discrete-time systems characterized by the following equations.  4. 9. (ii) graphical convolution and (iii) the convolution summation.5 yn  1 (b) Determine the first five terms of the response of the equation in (ii) to the input: n  2.

10. A system has the unit-pulse response:

        h[n] = 2u[n] - u[n-2] - u[n-4]

    Find the response of this system when the input is the sequence:

        \delta[n] + \delta[n-1] + \delta[n-2] + \delta[n-3]

    using (i) graphical convolution and (ii) convolution summation.

11. To demonstrate that nonlinear systems do not obey the principle of superposition, determine the first five terms of the response of the system:

        y[n] = 2y[n-1] + x^2[n]

    to the input:

        x_1[n] = { n for n = 0, 1, 2;  0 otherwise }

    If y_1[n] denotes this response, show that the response of the system to the input x[n] = 2x_1[n] is not 2y_1[n]. Can convolution methods be applied to nonlinear systems? Why?

For x1 n and x2 n as shown below find (i) x1 n x1 n (ii) x1 n  x2 n (iii) x2 n x2 n using (a) graphical convolution and (b) convolution summation. L  C  1. x 1[n] x 2[n] 2 2 0 1 2 3 n 0 1 2 3 4 5 6 7 8 n 13.33 12.5 3  t e t  cost  . Use MATLAB® to graph the output voltage of the following RLC circuit: R L v i (t ) C v o(t ) when R  2.   Compare with the exact solution: vo t   0.1B. t  0 . 14. How do you decide what value of T to use? Signals and Systems 2014 . vo 0  1. vo 0  1 and vi t   sin t u t  . Use MATLAB® and discretization to produce approximate solutions to the revision problem.

y t  represents the produced temperature change. Find: a) the differential equation relating xt  and y t  . b) the impulse response of the system. Use MATLAB® and the numerical convolution method to solve Q14. and c) the temperature change produced by the system when the gain K is 0. the signal xt  represents the commanded temperature change from the preset value.5 and a step change of 0.34 15. d) Plot the temperature change produced. e) Use MATLAB® and numerical convolution to produce approximate solutions to this problem and compare with the theoretical answer. A simple model for this system is represented by the block diagram shown below: x (t ) K  t  y (t ) In the model. Signals and Systems 2014 . 16. and t is measured in minutes. A feedback control system is used to control a room’s temperature with respect to a preset value.1B.75 is commanded at t  4 min .

17. Quickly changing inputs to an aircraft rudder control are smoothed using a digital processor. That is, the control signal is converted to a discrete-time signal by an A/D converter, the discrete-time signal is smoothed with a discrete-time filter, and the smoothed discrete-time signal is converted to a continuous-time control signal by a D/A converter. The smoothing filter has the unit-pulse response:

        h(nT) = (0.5)^n u(nT),    T = 0.25 s

    Find the zero-state response of the discrete-time filter when the input signal samples are:

        x(nT) = 1 for nT = 0 and 0.25 s, and zero otherwise

    Plot the input, unit-pulse response, and output for 0 \le t \le 1.5 s.

18. Sketch the convolution of the two functions shown below.

    [figure: x(t) is a pulse of amplitude 2 extending from t = -1 to t = 4; y(t) is a pulse of amplitude 1 extending from t = 0 to t = 0.5]

hnT  . and the filter output signal ynT  .1  0.87  sin 0. is processed with a low-pass filter to reduce the noise. the filter input signal.5.41n   0. A wave staff measures ocean wave height in meters as a function of time. snT  .5 cos2 0.2 nT  1. The transmission system corrupts the signal with additive noise given by the MATLAB® function: function n0=drn(n) N=size(n. for 0t  6s. no(i)=0.1820.2). the unitpulse response of the filter.2*no(i-1)+(rand-0. for I=2:N.1440.5). no(1)=rand-0. The height signal is sampled at a rate of 5 samples per second. These samples form the discrete-time signal: snT   cos2 0.36 19. 0). Signals and Systems 2014 . end The received signal plus noise. xnT  .87  cos0. xnT  .76  0.41n  unT  n n n Plot the sampled height signal.1940.3nT  1. The filter unit-pulse response is:   hnT   0.1B.5 The signal is transmitted to a central wave-monitoring station. rand(‘seed’.

Gustav Robert Kirchhoff (1824-1887)

Kirchhoff was born in Königsberg, Prussia (now part of Russia), and showed an early interest in mathematics. He studied at the University of Königsberg, and in 1845, while still a student, he pronounced Kirchhoff's Laws, which allow the calculation of current and voltage for any circuit. They are the Laws electrical engineers apply on a routine basis – they even apply to non-linear circuits such as those containing semiconductors, or distributed parameter circuits such as microwave striplines.

He graduated from university in 1847 and received a scholarship to study in Paris, but the revolutions of 1848 intervened. Instead, he moved to Berlin, where he met and formed a close friendship with Robert Bunsen, the inorganic chemist and physicist who popularized use of the "Bunsen burner".

In 1857 Kirchhoff extended the work done by the German physicist Georg Simon Ohm, by describing charge flow in three dimensions. He also analysed circuits using topology. In further studies, he offered a general theory of how electricity is conducted, basing his calculations on experimental results which determine a constant for the speed of the propagation of electric charge. Kirchhoff noted that this constant is approximately the speed of light – but the greater implications of this fact escaped him. It remained for James Clerk Maxwell to propose that light belongs to the electromagnetic spectrum.

Kirchhoff's most significant work, from 1859 to 1862, involved his close collaboration with Bunsen. Bunsen was in his laboratory, analysing various salts that impart specific colours to a flame when burned, using coloured glasses to view the flame. When Kirchhoff visited the laboratory, he suggested that a better analysis might be achieved by passing the light from the flame through a prism. The value of spectroscopy became immediately clear: each element and compound showed a spectrum as unique as any fingerprint, which could be viewed, measured, recorded and compared.

Spectral analysis, Kirchhoff and Bunsen wrote not long afterward, promises "the chemical exploration of a domain which up till now has been completely closed." They not only analysed the known elements, they discovered new

ones. Analyzing salts from evaporated mineral water, Kirchhoff and Bunsen detected a blue spectral line – it belonged to an element they christened caesium (from the Latin caesius, sky blue). Studying lepidolite (a lithium-based mica) in 1861, Bunsen found an alkali metal he called rubidium (from the Latin rubidius, deepest red). Both of these elements are used today in atomic clocks. Using spectroscopy, ten more new elements were discovered before the end of the century, and the field had expanded enormously – between 1900 and 1912 a "handbook" of spectroscopy was published by Kayser in six volumes comprising five thousand pages!

Kirchhoff's work on spectrum analysis led on to a study of the composition of light from the Sun. He was the first to explain the dark lines (Fraunhofer lines) in the Sun's spectrum as caused by absorption of particular wavelengths as the light passes through a gas. As a consequence of his work with Fraunhofer's lines, Kirchhoff developed a general theory of emission and radiation in terms of thermodynamics. It stated that a substance's capacity to emit light is equivalent to its ability to absorb it at the same temperature. One of the problems that this new theory created was the "blackbody" problem, which was to plague physics for forty years. This fundamental quandary arose because heating a black body – such as a metal bar – causes it to give off heat and light. The spectral radiation, which depends only on the temperature and not on the material, could not be predicted by classical physics. In 1900 Max Planck solved the problem by discovering quanta, which had enormous implications for twentieth-century science.

In 1875 Kirchhoff was appointed to the chair of mathematical physics at Berlin and he ceased his experimental work. An accident-related disability meant he had to spend much of his life on crutches or in a wheelchair. He remained at the University of Berlin until he retired in 1886, shortly before his death in 1887. Kirchhoff wrote "It is plausible that spectroscopy is also applicable to the solar atmosphere and the brighter fixed stars." We can now analyse the collective light of a hundred billion stars in a remote galaxy billions of light-years away – we can tell its composition, its age, and even how fast the galaxy is receding from us – simply by looking at its spectrum!

"[Kirchhoff is] a perfect example of the true German investigator. To search after truth in its purest shape and to give utterance with almost an abstract self-forgetfulness, was the religion and purpose of his life." – Robert von Helmholtz, 1890.


Lecture 2A – Fourier Series and Spectra

Orthogonality. The trigonometric Fourier series. The compact trigonometric Fourier series. The spectrum. The complex exponential Fourier series. How to use MATLAB® to check Fourier series coefficients. Symmetry in the time domain. Power. Filters. Relationships between the three Fourier series representations.

Orthogonality

The idea of breaking a complex phenomenon down into easily comprehended components is quite fundamental to human understanding. Instead of trying to commit the totality of something to memory, we identify characteristics, perhaps associating a scale with each characteristic. Our memory of a person might be confined to the characteristics gender, age, height, weight, hair colour, skin colour, and how they rate on a small number of personality attribute scales such as optimist-pessimist, extrovert-introvert, aggressive-submissive, etc.

In choosing the components, it is most efficient if they are independent, as gender and height are for a person. It would waste memory capacity, for example, to adopt as characteristics both current age and birthdate, or all three of total height and height above and below the waist, as one could be predicted from the other. An artist (or a television tube) needs only three primary colours to make any colour. We only need to know how to travel east and how to travel north, and we can go from any point on earth to any other point.

Orthogonality in Mathematics

Vectors and functions are similarly often best represented, memorised and manipulated in terms of a set of magnitudes of independent components. Recall that any vector A in 3-dimensional space can be expressed in terms of any three vectors a, b and c which do not lie in the same plane as:

    \mathbf{A} = A_1 \mathbf{a} + A_2 \mathbf{b} + A_3 \mathbf{c}    (2A.1)

where A_1, A_2 and A_3 are appropriately chosen constants – this specifies a 3D vector in terms of 3 components.

[Figure 2A.1 – a vector A in 3D space, showing components A_1, A_2, A_3 along basis vectors a, b, c]

The vectors a, b and c are said to be linearly independent, for no one of them can be expressed as a linear combination of the other two. For example, it is impossible to write \mathbf{c} = \alpha \mathbf{a} + \beta \mathbf{b} no matter what choice is made for \alpha and \beta (because a linear combination of a and b must stay in the plane of a and b). Such a set of linearly independent vectors is said to form a basis set for three-dimensional vector space. They span three-dimensional space in the sense that any vector A can be expressed as a linear combination of them.

If a, b and c are mutually perpendicular, they form an orthogonal basis set. In vector analysis, the projection of one vector onto another is given by the dot product. Thus, for a, b and c orthogonal, we have the relations:

    \mathbf{a} \cdot \mathbf{a} = a^2    \mathbf{a} \cdot \mathbf{b} = 0      \mathbf{a} \cdot \mathbf{c} = 0
    \mathbf{b} \cdot \mathbf{a} = 0      \mathbf{b} \cdot \mathbf{b} = b^2    \mathbf{b} \cdot \mathbf{c} = 0
    \mathbf{c} \cdot \mathbf{a} = 0      \mathbf{c} \cdot \mathbf{b} = 0      \mathbf{c} \cdot \mathbf{c} = c^2    (2A.2)

Here a is the length of a, b is the length of b, etc. Orthogonal means that the projection of one component onto another is zero.

Hence, when we project the vector A onto each of the basis vectors, we get:

    \mathbf{A} \cdot \mathbf{a} = A_1 \mathbf{a} \cdot \mathbf{a} + A_2 \mathbf{b} \cdot \mathbf{a} + A_3 \mathbf{c} \cdot \mathbf{a} = A_1 a^2
    \mathbf{A} \cdot \mathbf{b} = A_1 \mathbf{a} \cdot \mathbf{b} + A_2 \mathbf{b} \cdot \mathbf{b} + A_3 \mathbf{c} \cdot \mathbf{b} = A_2 b^2
    \mathbf{A} \cdot \mathbf{c} = A_1 \mathbf{a} \cdot \mathbf{c} + A_2 \mathbf{b} \cdot \mathbf{c} + A_3 \mathbf{c} \cdot \mathbf{c} = A_3 c^2    (2A.3)

We can see that projection of A onto a particular basis vector results in a scalar quantity that is proportional to the "amount of A in that direction". For example, when projecting A onto a, we get the quantity A_1 a^2, where A_1 is the constant that scales a in the original expression \mathbf{A} = A_1 \mathbf{a} + A_2 \mathbf{b} + A_3 \mathbf{c}. Hence, if we have no knowledge of the components of A, we have an easy way of finding them when the basis set is orthogonal – just project the vector onto each of the component vectors in the basis set and "normalise":

    \mathbf{A} = \left( \frac{\mathbf{A} \cdot \mathbf{a}}{a^2} \right) \mathbf{a} + \left( \frac{\mathbf{A} \cdot \mathbf{b}}{b^2} \right) \mathbf{b} + \left( \frac{\mathbf{A} \cdot \mathbf{c}}{c^2} \right) \mathbf{c}    (2A.4)

This describes a vector in terms of orthogonal components. It is straightforward to extend this analysis to an infinite dimensional vector space (although hard to imagine in geometrical terms):

    \mathbf{A} = \left( \frac{\mathbf{A} \cdot \mathbf{a}}{a^2} \right) \mathbf{a} + \left( \frac{\mathbf{A} \cdot \mathbf{b}}{b^2} \right) \mathbf{b} + \cdots + \left( \frac{\mathbf{A} \cdot \mathbf{z}}{z^2} \right) \mathbf{z}    (2A.5)

If a^2, b^2, c^2, etc. are 1, the set of vectors are not just orthogonal, they are orthonormal.

The above description of a vector in three-dimensional space is exactly analogous to resolving a colour into three primary (orthogonal) components. In this case we project light through red, green and blue filters and find the intensity of each of the three components. The original colour can be synthesised once again by red, green and blue lights of appropriate intensity. Orthogonal components is a general concept with wide application.

The Inner Product

The definition of the dot product for vectors in space can be extended to any general vector space. Consider two n-dimensional vectors:

[Figure 2A.2 – two n-dimensional vectors u and v with angle \theta between them]

The inner product can be written mathematically as:

    \langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u}^T \mathbf{v} = \mathbf{v}^T \mathbf{u}    (2A.6)

For example, in 3 dimensions:

    \langle \mathbf{u}, \mathbf{v} \rangle = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = u_1 v_1 + u_2 v_2 + u_3 v_3 = |\mathbf{u}||\mathbf{v}| \cos\theta    (2A.7)

If \theta = 90° the two vectors are orthogonal and \langle \mathbf{u}, \mathbf{v} \rangle = 0. They are linearly independent; that is, one vector cannot be written in a way that contains a component of the other.

Signals and Systems 2014 . For functions. since they obey the same laws of addition and scalar multiplication as spatial vectors. y   12 1 2 sin 2t sin 4t dt  0 Thus. the functions xt  and y t  are orthogonal over the interval  1 2  t  1 2 .2A. we say that “two functions are orthogonal” if their inner product is zero: x.9) Orthogonality for functions defined Example Let xt   sin 2t  and y t   sin 4t  over the interval  1 2  t  1 2 : 1 x( t ) t -1/2 1/2 -1 1 y( t ) t -1/2 1/2 -1 Then the inner product is (due to odd symmetry of the integrand): x.5 Real functions such as xt  and y t  on a given interval a  t  b can be considered a “vector space”. we can define the inner product by the integral: x.8) Inner product for functions defined a This definition ensures that the inner product of functions behaves in exactly the same way as the inner product of vectors. y   xt  y t dt b (2A. y  0 (2A. Just like vectors.

6 Orthogonality in Power Signals Consider a finite power signal.11) This average can also be interpreted as a measure of the correlation between xt  and y t  . is: The power of a signal made up of orthogonal components is the sum of the component signal powers 1 x  y. then over a long enough period.12) This means that the total power in the combined signal can be obtained by adding the power of the individual orthogonal signals. xt  and y t  . y T 0 T (2A. In this case. x  y T 1 T 2   xt   y t  dt T 0 1 T   x 2 t   2 xt  y t   y 2 t  dt T 0 1 2 1  x. The average value of the product of the two signals observed over a particular interval. y T T T  Px  0  Py Px  y    (2A. y  0 . since x. T.2A. T. y  y.10) Now consider two finite power signals. when the signals are added together the total power (i. x  x. the average of the product tends to zero. x T 0 T (2A. the mean square value). The average power of xt  is: 1 T 2 1 Px   x t dt  x.e. If the two signals are orthogonal. Signals and Systems 2014 . is given by the following expression: 1 T 1     x t y t dt  x. xt  .

p 1( t ) 1 0 p 2( t ) 1 T/2 T t 0 Figure 2A.3.7 Orthogonality in Energy Signals Consider two finite energy signals in the form of pulses in a digital system. x  2 x. the total energy of a pulse produced by adding together two orthogonal pulses can be obtained by summing the individual energies of the separate pulses. are orthogonal over a time interval. x  y   xt   y t  dt T 0 2     x 2 t   2 xt  y t   y 2 t  dt T 0  x.13) Similar to the orthogonal finite power signals discussed above. p1 t  and p 2 t  . y (2A.3 Signals and Systems 2014 T/2 T t . y  0 0 (2A. their product is zero over the time period of interest which means that they are orthogonal.2A. The energy of a signal made up of orthogonal components is the sum of the component signal energies Ex  y  x  y. Figure 2A. if:  xt yt dt  T x. Therefore.14)  Ex  0  E y For example. Two pulses. T. illustrates two orthogonal pulses because they occupy two completely separate portions of the time interval 0 to T. y  y .

The Trigonometric Fourier Series

In 1807 Joseph Fourier showed how to represent any periodic function as a weighted sum of a family of harmonically related sinusoids. This discovery turned out to be just a particular case of a more general concept: we can represent any periodic function by a weighted sum of orthogonal functions – Fourier's sinusoids are then just a special class of orthogonal functions. Thus, in general, we can declare a function to be represented by:

$$g(t) = \sum_{n=0}^{\infty} A_n \phi_n(t) \qquad (2A.15)$$

where the functions $\phi_n(t)$ form an "orthogonal basis set", just like the vectors a, b and c in the geometric vector case. The equivalent of the dot product (or the light filter) for obtaining a projection in this case is the inner product given by:

$$\left\langle g(t), \phi_n(t) \right\rangle = \int_{-T_0/2}^{T_0/2} g(t)\phi_n(t)\,dt \qquad (2A.16)$$

This is the "projection" of g(t) onto $\phi_n(t)$, the nth member of the orthogonal basis set. Equivalent relationships hold between orthogonal functions as they do between orthogonal vectors. The "projection" of a function onto itself gives a number:

$$\int_{-T_0/2}^{T_0/2} \phi_n(t)\phi_m(t)\,dt = c_n^2, \quad n = m \qquad \left(\text{cf. } \mathbf{a}\cdot\mathbf{a} = \left|\mathbf{a}\right|^2\right) \qquad (2A.17)$$

and the "projection" of a function onto an orthogonal function gives zero:

$$\int_{-T_0/2}^{T_0/2} \phi_n(t)\phi_m(t)\,dt = 0, \quad n \neq m \qquad \left(\text{cf. } \mathbf{a}\cdot\mathbf{b} = 0\right) \qquad (2A.18)$$

When $c_n^2 = 1$ the basis set of functions $\phi_n(t)$ (all n) are said to be orthonormal.

There are many possible orthogonal basis sets for representing a function over an interval of time T0. For example, the infinite set of Walsh functions shown below can be used as a basis set:

[Figure 2A.4 – an example of an orthogonal basis set: the first eight Walsh functions $\phi_1(t)$ to $\phi_8(t)$, each of amplitude A over the interval 0 to T0]

We can confirm that the Walsh functions are orthogonal with a few simple integrations, best performed graphically. For example, with n = 2 and m = 3:

$$\int_0^{T_0} \phi_2^2(t)\,dt = A^2 T_0 \qquad \text{and} \qquad \int_0^{T_0} \phi_2(t)\phi_3(t)\,dt = 0$$

[Figure 2A.5 – graphical evaluation of $\phi_2^2(t)$ and $\phi_2(t)\phi_3(t)$ over one period]
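The graphical check above can also be sketched numerically. The exact ordering of the Walsh functions varies between texts, so the patterns below ([+ + − −] and [+ − − +] over quarters of the period) are an assumption standing in for the φ2 and φ3 of Figure 2A.4:

```python
# Numerical sketch of the orthogonality check in Figure 2A.5. The exact
# Walsh ordering is assumed: phi2 takes values [+A +A -A -A] and phi3 takes
# [+A -A -A +A] over the four quarters of the period T0.
A, T0 = 1.0, 1.0
N = 4000
dt = T0 / N
ts = [(k + 0.5) * dt for k in range(N)]   # midpoint samples of one period

def phi2(t):
    return A if t < T0 / 2 else -A

def phi3(t):
    return A if (t < T0 / 4 or t >= 3 * T0 / 4) else -A

self_product = sum(phi2(t) ** 2 for t in ts) * dt        # -> A^2 * T0
cross_product = sum(phi2(t) * phi3(t) for t in ts) * dt  # -> 0
```

The self-projection gives the number A²T0 (Eq. 2A.17) while the cross-projection gives zero (Eq. 2A.18).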

The trigonometric Fourier series is a very special way of representing periodic functions. The basis set chosen for the Fourier series is the set of pairs of sines and cosines with frequencies that are integer multiples of $f_0 = 1/T_0$:

$$a_n(t) = \cos(2\pi n f_0 t), \qquad b_n(t) = \sin(2\pi n f_0 t) \qquad (2A.19)$$

They were chosen for the two reasons outlined in Lecture 1A – for linear systems a sinusoidal input yields a sinusoidal output, and they have a compact notation using complex numbers. These functions are orthogonal, as can be seen from the following relations:

$$\int_{-T_0/2}^{T_0/2} \cos(2\pi m f_0 t)\cos(2\pi n f_0 t)\,dt = \begin{cases} T_0 & m = n = 0 \\ T_0/2 & m = n \neq 0 \\ 0 & m \neq n \end{cases} \qquad (2A.20a)$$

$$\int_{-T_0/2}^{T_0/2} \sin(2\pi m f_0 t)\sin(2\pi n f_0 t)\,dt = \begin{cases} 0 & m = n = 0 \\ T_0/2 & m = n \neq 0 \\ 0 & m \neq n \end{cases} \qquad (2A.20b)$$

$$\int_{-T_0/2}^{T_0/2} \cos(2\pi m f_0 t)\sin(2\pi n f_0 t)\,dt = 0 \quad \text{all } m, n \qquad (2A.20c)$$

The constant $c_n^2$ in this case is either $T_0/2$ or $T_0$. If we choose the orthogonal basis set as in Eq. (2A.19), then any periodic function g(t) with period T0 can be expressed as a sum of orthogonal components:

$$g(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(2\pi n f_0 t) + b_n \sin(2\pi n f_0 t) \right] \qquad (2A.21)$$

The frequency $f_0 = 1/T_0$ is the fundamental frequency and the frequency $nf_0$ is the nth harmonic frequency. The right-hand side of Eq. (2A.21) is known as a Fourier series, with $a_n$ and $b_n$ known as Fourier series coefficients.

Now look back at Eqs. (2A.17) and (2A.18). If we want to determine the coefficients in the Fourier series, all we have to do is "project" the function onto each of the components of the basis set and normalise by dividing by $c_n^2$:

$$a_0 = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,dt \qquad (2A.22a)$$

$$a_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\cos(2\pi n f_0 t)\,dt \qquad (2A.22b)$$

$$b_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\sin(2\pi n f_0 t)\,dt \qquad (2A.22c)$$

These equations tell us how to "filter out" one particular component of the Fourier series. Note that frequency 0 is DC, and the coefficient $a_0$ represents the average, or DC part, of the periodic signal g(t).

Example
The following function:

$$g(t) = 4 + 3\cos(4\pi t) + 2\sin(12\pi t)$$

is already written out as a Fourier series. We identify $f_0 = 2\ \mathrm{Hz}$ and, by comparison with:

$$g(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(2\pi n f_0 t) + b_n \sin(2\pi n f_0 t) \right]$$

the Fourier series coefficients are $a_0 = 4$, $a_1 = 3$ and $b_3 = 2$, with all other Fourier series coefficients zero.
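The projection recipe of Eqs. (2A.22a-c) can be checked numerically for this example. A sketch (not from the notes) using midpoint sums over one period:

```python
import math

# Numerical sketch of the "projection" recipe in Eqs. (2A.22a-c), applied to
# the example g(t) = 4 + 3cos(4*pi*t) + 2sin(12*pi*t) with T0 = 0.5 s.
T0 = 0.5
f0 = 1 / T0          # 2 Hz
N = 20000
dt = T0 / N
ts = [-T0 / 2 + (k + 0.5) * dt for k in range(N)]

def g(t):
    return 4 + 3 * math.cos(4 * math.pi * t) + 2 * math.sin(12 * math.pi * t)

a0 = sum(g(t) for t in ts) * dt / T0
a = [2 / T0 * sum(g(t) * math.cos(2 * math.pi * n * f0 * t) for t in ts) * dt
     for n in range(1, 5)]
b = [2 / T0 * sum(g(t) * math.sin(2 * math.pi * n * f0 * t) for t in ts) * dt
     for n in range(1, 5)]
# a0 -> 4, a[0] -> 3 (the n = 1 term), b[2] -> 2 (the n = 3 term), rest -> 0
```

The orthogonality relations (2A.20a-c) are what make each projection pick out exactly one coefficient.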

Example
Find the Fourier series for the rectangular pulse train g(t) shown below:

[Figure 2A.6 – a rectangular pulse train of amplitude A, pulse width $\tau$, period T0, centred on t = 0]

Here the period is T0 and $f_0 = 1/T_0$. Using Eqs. (2A.22a) to (2A.22c), we have for the Fourier series coefficients:

$$a_0 = \frac{1}{T_0}\int_{-\tau/2}^{\tau/2} A\,dt = \frac{A\tau}{T_0} = A\tau f_0 \qquad (2A.23a)$$

$$a_n = \frac{2}{T_0}\int_{-\tau/2}^{\tau/2} A\cos(2\pi n f_0 t)\,dt = \frac{2A}{\pi n}\sin(\pi n f_0 \tau) = 2A\tau f_0\,\mathrm{sinc}(n f_0 \tau) \qquad (2A.23b)$$

$$b_n = \frac{2}{T_0}\int_{-\tau/2}^{\tau/2} A\sin(2\pi n f_0 t)\,dt = 0 \qquad (2A.23c)$$

We can therefore say:

$$A\sum_{n=-\infty}^{\infty} \mathrm{rect}\!\left(\frac{t-nT_0}{\tau}\right) = A\tau f_0 + \sum_{n=1}^{\infty} 2A\tau f_0\,\mathrm{sinc}(n f_0 \tau)\cos(2\pi n f_0 t) \qquad (2A.24)$$

This expression is quite unwieldy. To visualize what the equation means, we usually tabulate or graph the Fourier series coefficients, and think about the individual constituent sinusoids.

For example, consider the case where $\tau = T_0/5$. We can draw up a table of the trigonometric Fourier series coefficients as a function of n:

n    a_n         b_n
0    0.2A        0
1    0.3742A     0
2    0.3027A     0
3    0.2018A     0
4    0.0935A     0
5    0           0
6    -0.0624A    0
etc.

A more useful representation of the Fourier series coefficients is a graph:

[Figure 2A.7 – the coefficients $a_n$ graphed against frequency: discrete values at multiples of f0, following a sinc-shaped envelope with peak 0.4A and zero crossings at multiples of 5f0]

Observe that the graph of the Fourier series coefficients is discrete – the values of $a_n$ (and $b_n$) are associated with frequencies that are only multiples of the fundamental, $nf_0$. This graphical representation of the Fourier series coefficients lets us see at a glance what the "dominant" frequencies are, and how rapidly the amplitudes reduce in magnitude at higher harmonics.
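The table above can be regenerated directly from Eq. (2A.23b). A short sketch (not from the notes):

```python
import math

# Reproduce the table of coefficients for the rectangular pulse train with
# tau = T0/5, using a_n = 2*A*tau*f0*sinc(n*f0*tau) from Eq. (2A.23b).
def sinc(x):
    """sin(pi x)/(pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

A, T0 = 1.0, 1.0
tau = T0 / 5
f0 = 1 / T0

a0 = A * tau * f0                                          # -> 0.2 A
an = [2 * A * tau * f0 * sinc(n * f0 * tau) for n in range(1, 7)]
# an -> [0.3742, 0.3027, 0.2018, 0.0935, 0.0000, -0.0624] (times A)
```

Note the n = 5 entry falls exactly on a zero of the sinc envelope, and the n = 6 entry is negative because the envelope has gone negative.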

The Compact Trigonometric Fourier Series

The trigonometric Fourier series in Eq. (2A.21) can be written in a more compact and meaningful way as follows:

$$g(t) = \sum_{n=0}^{\infty} A_n \cos(2\pi n f_0 t + \phi_n) \qquad (2A.25)$$

By expanding $A_n \cos(2\pi n f_0 t + \phi_n)$ it is easy to show that:

$$a_n = A_n\cos\phi_n, \qquad b_n = -A_n\sin\phi_n \qquad (2A.26)$$

and therefore:

$$A_n = \sqrt{a_n^2 + b_n^2} \qquad (2A.27a)$$

$$\phi_n = \tan^{-1}\!\left(\frac{-b_n}{a_n}\right) \qquad (2A.27b)$$

From the compact Fourier series it follows that g(t) consists of sinusoidal signals of frequencies 0, f0, 2f0, ..., nf0, etc. The nth harmonic, $A_n \cos(2\pi n f_0 t + \phi_n)$, has amplitude $A_n$ and phase $\phi_n$. We can store the amplitude ($A_n$) and phase ($\phi_n$) information at a harmonic frequency ($nf_0$) using our phasor notation of Lecture 1A. We can represent the nth harmonic sinusoid by its corresponding phasor:

$$G_n = A_n e^{j\phi_n} = A_n\cos\phi_n + jA_n\sin\phi_n \qquad (2A.28)$$

$G_n$ can be derived directly from the Fourier series coefficients using Eq. (2A.26):

$$G_n = a_n - jb_n \qquad (2A.29)$$

The negative sign in $G_n = a_n - jb_n$ comes from the fact that in the phasor representation of a sinusoid the real part of $G_n$ is the amplitude of the cos component, and the imaginary part of $G_n$ is the amplitude of the $-\sin$ component. Substituting for $a_n$ and $b_n$ from Eqs. (2A.22a)-(2A.22c) results in:

$$G_0 = a_0 = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,dt \qquad (2A.30)$$

$$G_n = a_n - jb_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\cos(2\pi n f_0 t)\,dt - j\frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\sin(2\pi n f_0 t)\,dt = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\left[\cos(2\pi n f_0 t) - j\sin(2\pi n f_0 t)\right]dt$$

which can be simplified using Euler's identity, $e^{-j\theta} = \cos\theta - j\sin\theta$, to give:

$$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)e^{-j2\pi n f_0 t}\,dt, \quad n = 0 \qquad (2A.31a)$$

$$G_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)e^{-j2\pi n f_0 t}\,dt, \quad n \neq 0 \qquad (2A.31b)$$

The expression for the compact Fourier series, Eq. (2A.25), can now be written as:

$$g(t) = \mathrm{Re}\left\{\sum_{n=0}^{\infty} G_n e^{j2\pi n f_0 t}\right\} \qquad (2A.32)$$

Each term in the sum is a phasor rotating at an integer multiple of the fundamental's angular speed, $n\omega_0 = 2\pi n f_0$. The Fourier series is thus expressed as a sum of harmonic phasors projected onto the real axis: g(t) is the projection of the instantaneous "vector" sum of these phasors onto the real axis.

Example
Find the compact Fourier series coefficients for the rectangular pulse train g(t) shown below:

[Figure – a rectangular pulse train of amplitude A, with each pulse occupying 0 to $\tau$ within the period 0 to T0]

Again, the period is T0 and $f_0 = 1/T_0$. Using Eq. (2A.31a), we have for the average value:

$$G_0 = \frac{1}{T_0}\int_0^{\tau} A\,dt = A\tau f_0 \qquad (2A.33)$$

which can be seen (and checked) from direct inspection of the waveform. The compact Fourier series coefficients (harmonic phasors) are given by Eq. (2A.31b):

$$G_n = \frac{2}{T_0}\int_0^{\tau} A e^{-j2\pi n f_0 t}\,dt = 2Af_0\left[\frac{e^{-j2\pi n f_0 t}}{-j2\pi n f_0}\right]_0^{\tau} = \frac{A}{j\pi n}\left(1 - e^{-j2\pi n f_0 \tau}\right) \qquad (2A.34)$$

The next step is not obvious. We separate out the term $e^{-j\pi n f_0 \tau}$:

$$G_n = \frac{A}{j\pi n}\,e^{-j\pi n f_0 \tau}\left(e^{j\pi n f_0 \tau} - e^{-j\pi n f_0 \tau}\right) \qquad (2A.35)$$

and apply Euler's identity:

$$\sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{j2} \qquad (2A.36)$$

to give:

$$G_n = \frac{2A}{\pi n}\sin(\pi n f_0 \tau)\,e^{-j\pi n f_0 \tau} \qquad (2A.37)$$

Now remembering that:

$$\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x} \qquad (2A.38)$$

we get:

$$G_n = 2A\tau f_0\left[\frac{\sin(\pi n f_0 \tau)}{\pi n f_0 \tau}\right]e^{-j\pi n f_0 \tau} = 2A\tau f_0\,\mathrm{sinc}(n f_0 \tau)\,e^{-j\pi n f_0 \tau} = A_n e^{j\phi_n} \qquad (2A.39)$$

Therefore:

$$A_n = 2A\tau f_0\,\mathrm{sinc}(n f_0 \tau), \qquad \phi_n = -\pi n f_0 \tau \qquad (2A.40)$$

The Spectrum

Using the compact trigonometric Fourier series, we can formulate an easy-to-interpret graphical representation of a periodic waveform's constituent sinusoids. The graph below shows a periodic waveform, made up of the first 6 terms of the expansion of a square wave as a Fourier series, versus time:

[Figure 2A.8 – a partial Fourier series reconstruction of a square wave versus time]

The graph also shows the constituent sinusoids superimposed on the original waveform. Let's now imagine that we can graph each of the constituent sinusoids on its own time-axis, and extend these into the z-direction:

[Figure 2A.9 – the constituent sinusoids, each drawn on its own time-axis, stacked along a third axis]

Each constituent sinusoid exists at a frequency that is harmonically related to the fundamental. Therefore, we could set up a graph showing the amplitude of the constituent sinusoids, with the horizontal scale set up so that each amplitude value is graphed at the corresponding frequency:

[Figure 2A.10 – the amplitudes of the constituent sinusoids graphed at their corresponding frequencies]

We now have a graph that shows the amplitudes of the constituent sinusoids versus frequency:

[Figure 2A.11 – amplitude versus frequency, with components at f0, 3f0, 5f0, 7f0, 9f0 and 11f0]

This conceptual view now needs to be formalised.

In a periodic signal, the constituent sinusoids are given by $A_n \cos(2\pi n f_0 t + \phi_n)$. We also know that each constituent sinusoid is completely characterised by its corresponding phasor, $G_n = A_n e^{j\phi_n}$. We can now think of a spectrum as a graph of the phasor value as a function of frequency. Therefore, if we plot $G_n$ versus frequency, we have what is called a spectrum. However, since $G_n$ is complex, amplitude information is not enough – we need to graph phase as well – so we have to resort to two plots. For a given periodic signal, we can make a plot of the magnitude spectrum ($A_n$ vs. f) and the phase spectrum ($\phi_n$ vs. f). We call this representation a "single-sided" spectrum, since it only involves positive frequencies:

[Figure 2A.12 – a time-domain waveform together with its single-sided magnitude spectrum and phase spectrum]

Example
The compact Fourier series representation for a rectangular pulse train was found in the previous example. Graphs of the magnitude and phase spectra appear below for the case $\tau = T_0/5$:

[Magnitude spectrum – $A_n$ versus frequency: lines at multiples of f0 under a sinc envelope of peak 0.4A]

Note how we can have a negative "magnitude" of a sinusoid as the sinc function envelope goes negative. If we wanted to, we could make $A_n$ positive (so it relates directly to the amplitude of the corresponding sinusoid) by simply adding $\pm 180°$ to $\phi_n$.

[Phase spectrum – $\phi_n = -\pi n f_0 \tau$ versus frequency, decreasing linearly past $-\pi$ towards $-2\pi$]

Note how we can have "phase" even though the amplitude is zero (at 5f0, 10f0, etc). We could also wrap the phase spectrum around to 0 instead of graphing linearly past the $-2\pi$ point – in fact we could choose any range that spans $2\pi$, since phase is periodic with period $2\pi$.

The Complex Exponential Fourier Series

The complex exponential Fourier series is the most mathematically convenient and useful representation of a periodic signal. Recall that Euler's formulas relating the complex exponential to cosines and sines are:

$$e^{j\theta} = \cos\theta + j\sin\theta \qquad (2A.41)$$

$$\cos\theta = \frac{e^{j\theta} + e^{-j\theta}}{2} \qquad (2A.42)$$

$$\sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{j2} \qquad (2A.43)$$

Substitution of Eqs. (2A.42) and (2A.43) into the trigonometric Fourier series, Eq. (2A.21), gives:

$$g(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(2\pi n f_0 t) + b_n\sin(2\pi n f_0 t)\right]$$

$$= a_0 + \sum_{n=1}^{\infty}\left[\frac{a_n}{2}e^{j2\pi n f_0 t} + \frac{a_n}{2}e^{-j2\pi n f_0 t} - j\frac{b_n}{2}e^{j2\pi n f_0 t} + j\frac{b_n}{2}e^{-j2\pi n f_0 t}\right]$$

$$= a_0 + \sum_{n=1}^{\infty}\left[\left(\frac{a_n - jb_n}{2}\right)e^{j2\pi n f_0 t} + \left(\frac{a_n + jb_n}{2}\right)e^{-j2\pi n f_0 t}\right] \qquad (2A.44)$$

This can be rewritten in the form:

$$g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j2\pi n f_0 t} \qquad (2A.45)$$

where:

$$G_0 = a_0 = A_0 \qquad (2A.46a)$$

$$G_n = \frac{a_n - jb_n}{2} = \frac{A_n}{2}e^{j\phi_n}, \quad n \geq 1 \qquad (2A.46b)$$

$$G_{-n} = \frac{a_n + jb_n}{2} = \frac{A_n}{2}e^{-j\phi_n} = G_n^* \qquad (2A.46c)$$

From Eqs. (2A.31a) and (2A.46a), we can see that an alternative way of writing the Fourier series coefficients, in one neat formula instead of three, is:

$$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)e^{-j2\pi n f_0 t}\,dt \qquad (2A.47)$$

Thus, the trigonometric and complex exponential Fourier series are not two different series but represent two different ways of writing the same series. The coefficients of one series can be obtained from those of the other.

The complex exponential Fourier series can also be viewed as being based on the compact Fourier series, but it uses an alternative phasor definition in which harmonic phasors can also have negative frequency. In this case we remember that the new $G_n$ is half the compact-series phasor, and that for every forward rotating phasor $G_n$ there is a corresponding backward rotating phasor $G_{-n} = G_n^*$. Eqs. (2A.32) and (2A.31a) then become the complex exponential Fourier series and its coefficients:

$$g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j2\pi n f_0 t} \qquad (2A.48a)$$

$$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)e^{-j2\pi n f_0 t}\,dt \qquad (2A.48b)$$

For a real g(t):

$$G_{-n} = G_n^* \qquad (2A.49)$$

so that if:

$$G_n = |G_n|e^{j\theta_n} \qquad (2A.50)$$

then:

$$G_{-n} = |G_n|e^{-j\theta_n} \qquad (2A.51)$$

where $|G_n|$ is the magnitude and $\theta_n$ is the phase of $G_n$. Thus $|G_{-n}| = |G_n|$, and the double-sided magnitude spectrum $|G_n|$ vs. f is an even function of f. Similarly, the phase spectrum $\theta_n$ vs. f is an odd function of f, because $\theta_{-n} = -\theta_n$. A double-sided spectrum shows negative frequency phasors.

Example
Find the complex exponential Fourier series for the rectangular pulse train g(t) shown below:

[Figure 2A.13 – a rectangular pulse train of amplitude A, pulse width $\tau$, period T0, centred on t = 0]

The period is T0 and $f_0 = 1/T_0$. Using Eq. (2A.48b), we have for the complex exponential Fourier series coefficients:

$$G_n = \frac{1}{T_0}\int_{-\tau/2}^{\tau/2} A e^{-j2\pi n f_0 t}\,dt = Af_0\left[\frac{e^{-j2\pi n f_0 t}}{-j2\pi n f_0}\right]_{-\tau/2}^{\tau/2} = \frac{A}{\pi n}\cdot\frac{e^{j\pi n f_0 \tau} - e^{-j\pi n f_0 \tau}}{j2} = \frac{A}{\pi n}\sin(\pi n f_0 \tau) = A\tau f_0\,\mathrm{sinc}(n f_0 \tau) \qquad (2A.52)$$

In this case $G_n$ turns out to be a real number (the phase of all the constituent sinusoids is 0° or 180°).
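Eq. (2A.52) can be verified by evaluating the defining integral (2A.48b) numerically. A sketch (not from the notes), using a midpoint sum for the integral:

```python
import math, cmath

# Numerical check of Eq. (2A.52): for a centred rectangular pulse train the
# coefficients Gn are purely real and equal to A*tau*f0*sinc(n*f0*tau).
A, T0 = 1.0, 1.0
tau, f0 = T0 / 5, 1 / T0
N = 20000
dt = T0 / N

def G(n):
    """Gn = (1/T0) * integral of g(t) exp(-j 2 pi n f0 t) over one period."""
    total = 0j
    for k in range(N):
        t = -T0 / 2 + (k + 0.5) * dt
        g = A if abs(t) <= tau / 2 else 0.0
        total += g * cmath.exp(-2j * math.pi * n * f0 * t) * dt
    return total / T0

expected = A * tau * f0 * math.sin(math.pi * f0 * tau) / (math.pi * f0 * tau)
G1 = G(1)   # real part ≈ 0.1871 for tau = T0/5, imaginary part ≈ 0
```

Because this pulse train is even, the imaginary part of every coefficient comes out as zero, in line with the symmetry results later in the section.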

For the case of $\tau = T_0/5$, the double-sided magnitude spectrum is then:

[Figure 2A.14 – the double-sided magnitude spectrum of a rectangular pulse train with $\tau = T_0/5$: lines at all harmonics of f0 under a sinc envelope of peak 0.2A]

For the case of $\tau = T_0/2$, i.e. a 50% duty cycle square wave, the spectrum is:

[Figure 2A.15 – the double-sided spectrum of a 50% duty cycle square wave: a line of A/2 at DC and lines at the odd harmonics ±f0, ±3f0, ±5f0, ...]

Thus, for a rectangular pulse train, the Fourier series can be represented by spectral lines at all harmonics of $f_0 = 1/T_0$, where each line varies according to the complex quantity $G_n$. In particular, $G_n$ follows the envelope of a sinc function, with amplitude $A\tau/T_0$ and zero crossings at integer multiples of $1/\tau$.

Another case of interest, which is fundamental to the analysis of digital systems, is when we allow each pulse in a rectangular pulse train to turn into an impulse, i.e. $\tau \to 0$ and $A \to \infty$ such that $A\tau = 1$. In this case, each pulse in Figure 2A.13 becomes an impulse of unit strength, and g(t) is simply a uniform train of unit impulses:

[Figure 2A.16 – a uniform train of unit impulses spaced T0 apart]

The result of Eq. (2A.52) is still valid if we take the appropriate limit:

$$G_n = \lim_{\tau \to 0} A\tau f_0\,\mathrm{sinc}(n f_0 \tau) = f_0 \qquad (2A.53)$$

With $A\tau = 1$, the amplitude of the sinc function envelope is $A\tau f_0 = f_0$, and when $\tau \to 0$, $\mathrm{sinc}(n f_0 \tau) \to \mathrm{sinc}(0) = 1$. Therefore the uniform train of unit impulses and its Fourier series are:

$$g(t) = \sum_{n=-\infty}^{\infty}\delta(t - nT_0) = \sum_{n=-\infty}^{\infty} f_0\, e^{j2\pi n f_0 t} \qquad (2A.54)$$

The spectrum has components of frequencies $nf_0$, with n varying from $-\infty$ to $\infty$, including 0, all with an equal strength of $f_0$:

[Figure 2A.17 – the spectrum of a uniform impulse train: lines of equal strength f0 at every harmonic nf0]

The complex exponential Fourier series is so convenient we will use it almost exclusively. Therefore, when we refer to the spectrum of a signal, we are referring to its double-sided spectrum. It turns out that the double-sided spectrum is the easiest to use when describing signal operations in systems. It also enables us to calculate the average power of a signal in an easy manner.

G 2 should be zero if the Nyquist criterion is met. G2  N G1 . G=ifft(g)does the inverse Fourier transform. Step 1 Choose sample frequency . A vector can be set up which gives the samples of a signal. choose 4 Hz (minimum is > 2 Hz). and G N  N G1 .29 How to Use MATLAB® to Check Fourier Series Coefficients MATLAB® is a software package that is particularly suited to signal processing. G1  2 4 1 2 . where G1  N G0 is the DC term. Hence G0  0 . G N-1  N G 2 etc. where N is the size of the vector.  g 1 0 1  0 Step 3  Find G fftg  0 2 0  2 . It has instructions that will work on vectors and matrices.2A. Provided the sample spacing meets the Nyquist criterion the instruction G=fft(g) returns a vector containing N times the Fourier series coefficients. Note N  4 . G3  N G2 etc. Step 2 Take samples over one period starting at t  0 .since highest frequency present is 1 Hz. Signals and Systems 2014 . G1  2 4 1 2 . Note period = 1 s. These are in fact the Fourier series coefficients of g t  . Example Suppose we want to find the Fourier series coefficients of g t   cos 2t  .

Example
Find the Fourier series coefficients of a 50% duty cycle square wave.

Step 1 – Choose the sample frequency. In this case the spectrum extends to infinity, so we can never choose fs high enough – there is always some error. Suppose we choose 8 points over one cycle.

Step 2 – Take the samples. Note: if samples occur on transitions, input the half-way point:

g = [1 1 0.5 0 0 0 0.5 1]

Step 3 – Find:

G = fft(g) = [4 2.4142 0 -0.4142 0 -0.4142 0 2.4142]

Therefore G1 = 2.4142/8 = 0.3018. The true value is 1/pi = 0.3183. Using 16 points, G1 = 0.3142. There is always some error.

You should read Appendix A – The Fast Fourier Transform, and look at the example MATLAB® code in the "FFT – Quick Reference Guide" for more complicated and useful examples of setting up and using the FFT.
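The convergence claimed above (0.3018 with 8 points, 0.3142 with 16, true value 0.3183) can be checked with the same plain-Python DFT; the 16-point sample vector is an assumption following the same half-way-point rule:

```python
import cmath

# Sketch of the square-wave example: with transition samples set to the
# half-way value, G1 approaches the true value 1/pi ≈ 0.3183 as the number
# of samples per period grows.
def dft(x):
    M = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * n * k / M) for k in range(M))
            for n in range(M)]

g8 = [1, 1, 0.5, 0, 0, 0, 0.5, 1]
g16 = [1, 1, 1, 1, 0.5, 0, 0, 0, 0, 0, 0, 0, 0.5, 1, 1, 1]

G1_8 = dft(g8)[1].real / 8      # -> about 0.3018
G1_16 = dft(g16)[1].real / 16   # -> about 0.3142
```

Doubling the sample count roughly halves the gap to 1/π, which is the "always some error" behaviour the notes describe.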

Symmetry in the Time Domain

A waveform which is symmetric in the time domain will have a spectrum with certain properties. Identification of time-domain symmetry can lead to conclusions about the spectrum without doing any calculations – a useful skill to possess.

Even Symmetry

An even function is one which possesses symmetry about the g(t) axis. Mathematically, an even function is such that g(t) = g(-t). An even function can be expressed as a sum of cosine waves only (cosines are even functions). To see why, we look at the formula for $b_n$ as given by the trigonometric Fourier series:

$$b_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\sin(2\pi n f_0 t)\,dt \qquad (2A.55)$$

When g(t) has even symmetry, the integrand in the formula for $b_n$ is even × odd = odd. The limits on the integral span equal portions of negative and positive time, and so we get $b_n = 0$ after performing the integration. The integrand in the formula for $a_n$ is even × even = even, which will only be zero if the function g(t) is zero.

If we consider the imaginary component of $G_n = (a_n - jb_n)/2$, which must be zero, then all the $G_n$ are real. Thus we have the property:

$$g(t) \text{ even} \implies G_n \text{ real} \qquad (2A.56)$$

The phase of $G_n$ is therefore 0° or 180°, depending on whether the real number is positive or negative. Even symmetry in the time domain leads to real Fourier series coefficients.

Example
Find the compact Fourier series coefficients for the cos-shaped pulse train g(t) shown below:

[Figure 2A.18 – a pulse train of period 2 s whose pulses are cos-shaped bursts of amplitude 2, repeating at t = ..., -1, 1, 3, 5, ...]

The function is represented by:

$$g(t) = \begin{cases} 0 & -1 \leq t < -1/2 \\ 2\cos(3\pi t) & -1/2 \leq t \leq 1/2 \\ 0 & 1/2 < t \leq 1 \end{cases} \qquad (2A.57)$$

and g(t) = g(t + 2). The period is $T_0 = 2$ and $f_0 = 1/T_0 = 1/2\ \mathrm{Hz}$. Using Eq. (2A.48a), we have for the complex exponential Fourier series coefficients:

$$G_n = \frac{1}{2}\int_{-1}^{1} g(t)e^{-j\pi n t}\,dt = \frac{1}{2}\int_{-0.5}^{0.5} 2\cos(3\pi t)\,e^{-j\pi n t}\,dt = \frac{1}{2}\int_{-0.5}^{0.5}\left(e^{j3\pi t} + e^{-j3\pi t}\right)e^{-j\pi n t}\,dt \qquad (2A.58)$$

Now, combining the exponentials and integrating, we have:

$$G_n = \frac{1}{2}\left[\frac{e^{j(3-n)\pi t}}{j(3-n)\pi} - \frac{e^{-j(3+n)\pi t}}{j(3+n)\pi}\right]_{-0.5}^{0.5} = \frac{1}{2}\cdot\frac{\sin\!\left[(3-n)\pi/2\right]}{(3-n)\pi/2} + \frac{1}{2}\cdot\frac{\sin\!\left[(3+n)\pi/2\right]}{(3+n)\pi/2} \qquad (2A.59)$$

Putting $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$, we obtain:

$$G_n = \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3+n}{2}\right) \qquad (2A.60)$$

Now since $G_0 = a_0$ and $G_n = (a_n - jb_n)/2$ for $n \geq 1$, we can see that the imaginary part is zero (as we expect for an even function) and that:

$$G_0 = a_0, \qquad G_n = \frac{a_n}{2} = \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3+n}{2}\right), \quad n \geq 1 \qquad (2A.61)$$

and therefore:

$$a_0 = \mathrm{sinc}\!\left(\frac{3}{2}\right) = -\frac{2}{3\pi}, \qquad a_n = \mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \mathrm{sinc}\!\left(\frac{3+n}{2}\right), \quad n \geq 1 \qquad (2A.62)$$

Thus, the Fourier series expansion of the wave contains only cosine terms:

$$g(t) = -\frac{2}{3\pi} + \sum_{n=1}^{\infty}\left[\mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \mathrm{sinc}\!\left(\frac{3+n}{2}\right)\right]\cos(n\pi t) \qquad (2A.63)$$

Recall that $\mathrm{sinc}(x) = 0$ for all nonzero integer values of x. So we expect $G_n = 0$ for every odd term, except n = 3, when $a_3 = 1$.
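The closed-form coefficients of Eqs. (2A.61)-(2A.62) are easy to sanity-check in code. A sketch (not from the notes):

```python
import math

# Numerical check of Eqs. (2A.61)-(2A.62) for the even cos-shaped pulse
# train: coefficients are purely real, a3 = 1, and every other odd harmonic
# is zero.
def sinc(x):
    """sin(pi x)/(pi x) with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def a(n):
    Gn = 0.5 * sinc((3 - n) / 2) + 0.5 * sinc((3 + n) / 2)
    return Gn if n == 0 else 2 * Gn    # a0 = G0, an = 2*Gn for n >= 1

a0 = a(0)    # -> -2/(3*pi) ≈ -0.2122
a1 = a(1)    # -> 0  (odd harmonic with n != 3)
a3 = a(3)    # -> 1  (the sinc(0) term "catches" the pulse's own frequency)
```

The n = 3 coefficient is exactly the amplitude-2 cosine burst's contribution: sinc(0) = 1 picks it out, while every other odd n lands both sinc arguments on nonzero integers.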

The Fourier series expansion, for the first 5 terms and 10 terms, is shown below together with the original wave:

[Figure – partial sums of the Fourier series (5 terms and 10 terms) superimposed on the original cos-shaped pulse train]

One interesting point to note is that, for even functions, $G_n = a_n/2$ for $n \geq 1$, and therefore $G_{-n} = G_n^* = a_n/2 = G_n$. That is, for even functions, the complex exponential Fourier series coefficients have even symmetry too! An even symmetric waveform has an even and real spectrum. A graph of the spectrum in this case looks like:

[Figure – the even, real, double-sided spectrum $G_n$, shown from -5f0 to 5f0]

Odd Symmetry

An odd function is one which possesses rotational symmetry about the origin. Mathematically, an odd function is such that g(t) = -g(-t). An odd function can be expressed as a sum of sine waves only (sine waves are odd functions). To see why, we look at the formula for $a_n$ as given by the trigonometric Fourier series:

$$a_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\cos(2\pi n f_0 t)\,dt \qquad (2A.64)$$

When g(t) has odd symmetry, the integrand in the formula for $a_n$ is odd × even = odd. The limits on the integral span equal portions of negative and positive time, and so we get $a_n = 0$ after performing the integration. The integrand in the formula for $b_n$ is odd × odd = even, which will only be zero if the function g(t) is zero.

If we consider the real component of $G_n = (a_n - jb_n)/2$, which must be zero, then all the $G_n$ are imaginary. Thus we have the property:

$$g(t) \text{ odd} \implies G_n \text{ imaginary} \qquad (2A.65)$$

The phase of $G_n$ is therefore 90° or -90°, depending on whether the imaginary number is positive or negative. Odd symmetry in the time domain leads to imaginary Fourier series coefficients.

(2A. apply integration by parts. where: ut dv  e  j 2nf 0 t dt du  dt e  j 2nf 0 t v  j 2nf 0 Signals and Systems 2014 (2A.66) By inspection of the original waveform. Using Eq.68) .19 The period is T0 and f 0 1 T0 .48a). we have for the complex exponential Fourier series coefficients: 1 Gn  T0  2 A  j 2nf 0 t te dt 2 T 0 T0 2 T0 (2A.2A.36 Example Find the complex exponential Fourier series coefficients for the sawtooth waveform g t  shown below: g (t ) A -T0 -T0 2 -A T0 2 T0 t Figure 2A.67) For n  1 . the DC component is zero and so: G0  0 (2A.

We then have:

$$G_n = \frac{2A}{T_0^2}\left\{\left[\frac{t\,e^{-j2\pi n f_0 t}}{-j2\pi n f_0}\right]_{-T_0/2}^{T_0/2} + \frac{1}{j2\pi n f_0}\int_{-T_0/2}^{T_0/2} e^{-j2\pi n f_0 t}\,dt\right\} = \frac{2A}{T_0^2}\left\{-\frac{T_0}{2}\cdot\frac{e^{-j\pi n} + e^{j\pi n}}{j2\pi n f_0} + \frac{1}{j2\pi n f_0}\left[\frac{e^{-j2\pi n f_0 t}}{-j2\pi n f_0}\right]_{-T_0/2}^{T_0/2}\right\} = \frac{jA}{\pi n}\left[\cos(n\pi) - \frac{\sin(n\pi)}{n\pi}\right] \qquad (2A.69)$$

Putting $\sin(n\pi) = 0$ and $\cos(n\pi) = (-1)^n$ for all integer values of n, we obtain:

$$G_n = \frac{jA(-1)^n}{\pi n}, \quad n \geq 1 \qquad (2A.70)$$

Now since $G_0 = a_0$ and $G_n = (a_n - jb_n)/2$ for $n \geq 1$, we can see that the real part is zero (as we expect for an odd function) and that:

$$G_n = -j\frac{b_n}{2} = \frac{jA(-1)^n}{\pi n}, \quad n \geq 1 \qquad (2A.71)$$

and therefore:

$$b_n = -\frac{2A(-1)^n}{\pi n} = \frac{2A(-1)^{n+1}}{\pi n}, \quad n \geq 1 \qquad (2A.72)$$

Thus, the Fourier series expansion of the sawtooth wave contains only sine terms:

$$g(t) = \sum_{n=1}^{\infty}\frac{2A(-1)^{n+1}}{\pi n}\sin(2\pi n f_0 t) \qquad (2A.73)$$
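Eq. (2A.70) can be checked directly against the defining integral (2A.67). A sketch (not from the notes):

```python
import math, cmath

# Numerical check of Eq. (2A.70): for the sawtooth g(t) = (2A/T0) t on
# (-T0/2, T0/2), the coefficients are purely imaginary, Gn = j A (-1)^n / (pi n).
A, T0 = 1.0, 1.0
f0 = 1 / T0
N = 20000
dt = T0 / N

def G(n):
    total = 0j
    for k in range(N):
        t = -T0 / 2 + (k + 0.5) * dt
        total += (2 * A / T0) * t * cmath.exp(-2j * math.pi * n * f0 * t) * dt
    return total / T0

G1 = G(1)   # -> -j A/pi      (n odd: (-1)^1 = -1)
G2 = G(2)   # -> +j A/(2*pi)  (n even)
```

The real parts vanish to numerical precision, confirming the odd-symmetry property (2A.65).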

The Fourier series expansion, for the first 5 terms and 10 terms, is shown below together with the original sawtooth wave:

[Figure – partial sums of the Fourier series (5 terms and 10 terms) superimposed on the original sawtooth wave]

One interesting point to note is that, for odd functions, $G_n = -jb_n/2$, and therefore $G_{-n} = G_n^* = jb_n/2 = -G_n$. That is, for odd functions, the complex exponential Fourier series coefficients have odd symmetry too! An odd symmetric waveform has an odd and imaginary spectrum. A graph of the spectrum in this case looks like (the values are imaginary):

[Figure – the odd, imaginary, double-sided spectrum $G_n$, shown from -4f0 to 4f0]

Half-Wave Symmetry

A half-wave symmetric function is one in which each half-period is the same as the one before it, except it is "upside down". Mathematically, this is expressed as:

$$g(t) = -g\!\left(t - \frac{T_0}{2}\right)$$

Half-wave symmetry is not dependent on our choice of origin. An example of a half-wave symmetric function is:

[Figure 2A.20 – an example of a half-wave symmetric waveform]

Many half-wave symmetric waveforms occur in electrical engineering – examples include the magnetising current of a transformer, the saturated output of an amplifier, and the output of any system whose transfer characteristic possesses odd symmetry. Looking at the formula for the complex exponential Fourier series coefficients:

$$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)e^{-j2\pi n f_0 t}\,dt \qquad (2A.74)$$

we split it up into two parts:

$$G_n = \underbrace{\frac{1}{T_0}\int_{-T_0/2}^{0} g(t)e^{-j2\pi n f_0 t}\,dt}_{I_1} + \underbrace{\frac{1}{T_0}\int_{0}^{T_0/2} g(t)e^{-j2\pi n f_0 t}\,dt}_{I_2} \qquad (2A.75)$$

2A. then: I2  1 T0  1 T0   T0 2 0 T0 2 0 (2A.wave symmetry  only odd harmonics Signals and Systems 2014 (2A.80) .79)  I1   1 I1 n 2 I n odd  1  0 n even Thus.78) Therefore: Gn  I1  I 2 (2A. a half-wave symmetric function will have only odd harmonics – all even harmonics are zero.77) g  e  j 2nf0 d Now since the value of a definite integral is independent of the variable used in the integration. we can see that: n I 2   1 I1 n (2A.40 Now. and noting that e  jn   1 . if g t  is half-wave symmetric. Half-wave symmetry in the time-domain leads to all odd harmonics being zero Thus we have the property: half . we have: I2  1 T0  0 -T0 2 e  jn  T0  g  e  j 2nf0  T0 2 d  0 -T0 2 (2A.76) g t e  j 2nf0t dt  g t  T0 2e  j 2nf0t dt Letting   t  T0 2 .

For this case g t  is: A 50% duty cycle square wave…  g(t ) A 0 t T0 Figure 2A.22 In this case the spacing of the discrete spectral frequencies. The spectrum is: and its spectrum Gn A /2 -11f0 -7f0 -9 f0 -3f0 -5f0 3f0 . we have for the complex exponential Fourier series coefficients: Gn  Af 0sincnf 0   A n sinc  2 2 (2A.81) Recalling that sinc x   0 for all integer values of x . (2A.2A. and substituting  T0 2 . nf 0 . g t  is half-wave symmetric. Signals and Systems 2014 . since apart from a DC component of A 2 .21 The period is T0 and f 0 1 T0 . we can see that Gn will be zero for all even harmonics (except G0 ) . for which  T0 2 .f0 0 f0 7f0 5f0 11f0 9f0 f Figure 2A.41 Example A square pulse train (50% duty cycle square wave) is a special case of the rectangular pulse train.52). We expect this. is such that each even harmonic falls on a zero of the sinc function envelope. Using Eq.

Power

One of the advantages of representing a signal in terms of a set of orthogonal components is that it is very easy to calculate its average power. Because the components are orthogonal, the total average power is just the sum of the average powers of the orthogonal components. For example, if the double-sided spectrum is being used, the total power in the signal g(t) is:

$$P = \sum_{n=-\infty}^{\infty} \left|G_n\right|^2 = \sum_{n=-\infty}^{\infty} G_n G_n^* \qquad (2A.82)$$

Note that the DC component only appears once in the sum (n = 0). Its power contribution is $G_0^2$, which is correct. Each harmonic appears twice (at ±n); since the magnitude of the sinusoid represented by the phasor $G_n$ is $2|G_n|$, and the average power of a sinusoid of amplitude A is $P = A^2/2$, the two terms together contribute $2|G_n|^2$ – which is also correct.

Example
How much of the power of a 50% duty cycle rectangular pulse train is contained in the first three harmonics?

[Figure 2A.23 – a 50% duty cycle square wave of amplitude A and period T0, with the pulse centred on t = 0]

We first find the total power in the time domain:

$$P = \frac{1}{T_0}\int_{-T_0/4}^{T_0/4} A^2\,dt = \frac{A^2}{2} \qquad (2A.83)$$

To find the power in each harmonic, we work in the frequency domain. We note that the Fourier series coefficients are given by Eq. (2A.81):

$$G_n = \frac{A}{2}\,\mathrm{sinc}\!\left(\frac{n}{2}\right) \qquad (2A.84)$$

and draw the double-sided spectrum:

[Figure 2A.24 – the double-sided spectrum of the square wave: A/2 at DC and lines at the odd harmonics out to ±11f0]

We have for the DC power contribution:

$$P_0 = G_0^2 = \frac{A^2}{4} \qquad (2A.85)$$

which is 50% of the total power. The DC plus fundamental power is:

$$P_0 + P_1 = \sum_{n=-1}^{1}\left|G_n\right|^2 = G_0^2 + 2\left|G_1\right|^2 = \frac{A^2}{4} + \frac{2A^2}{\pi^2} = 0.4526\,A^2 \qquad (2A.86)$$

which is 90.5% of the total power.

The power of the components up to and including the 3rd harmonic is:

$$P_0 + P_1 + P_2 + P_3 = \sum_{n=-3}^{3}\left|G_n\right|^2 = \frac{A^2}{4} + \frac{2A^2}{\pi^2} + 0 + \frac{2A^2}{9\pi^2} = 0.4752\,A^2 \qquad (2A.87)$$

which is 95% of the total power. In this case, it was surprising to learn that 95% of the power in a square wave is contained in the frequency components up to the 3rd harmonic. This is important – we may wish to lowpass filter this signal for some reason, but retain most of its power. We are now in a position to give the cutoff frequency of a lowpass filter to retain any amount of power that we desire. Thus, the spectrum makes obvious a characteristic of a periodic signal that is not obvious in the time domain.
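The worked numbers above (50%, 90.5%, 95%) can be reproduced straight from Eqs. (2A.82)-(2A.84). A sketch (not from the notes):

```python
import math

# Check of the worked numbers: fraction of the total power of a 50% duty
# cycle pulse train (amplitude A) contained in the spectrum up to harmonic n.
A = 1.0
P_total = A ** 2 / 2                   # Eq. (2A.83), from the time domain

def G(n):
    """Gn = (A/2) sinc(n/2), Eq. (2A.84)."""
    if n == 0:
        return A / 2
    x = n / 2
    return (A / 2) * math.sin(math.pi * x) / (math.pi * x)

def power_fraction(nmax):
    """Sum |Gn|^2 over -nmax..nmax (Eq. 2A.82) as a fraction of P_total."""
    P = G(0) ** 2 + sum(2 * G(n) ** 2 for n in range(1, nmax + 1))
    return P / P_total

frac_dc = power_fraction(0)    # -> 0.50
frac_1 = power_fraction(1)     # -> about 0.905
frac_3 = power_fraction(3)     # -> about 0.950
```

Note the n = 2 term contributes nothing: the even harmonics of a 50% duty cycle square wave are zero, as established in the half-wave symmetry section.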

Filters

Filters are devices that shape the input signal's spectrum to produce a new output spectrum. They shape the input spectrum by changing the amplitude and phase of each component sinusoid. This frequency-domain view of filters has been with us implicitly – we specify the filter in terms of a transfer function H(s). When evaluated at $s = j\omega$ the transfer function is a complex number. The magnitude of this complex number, $|H(j\omega)|$, multiplies the corresponding magnitude of the component phasor of the input signal. The phase of the complex number, $\angle H(j\omega)$, adds to the phase of the component phasor. A filter thus acts to change each component sinusoid of a periodic function.

Recall from Lecture 1A that it is the sinusoid that possesses the special property with a linear system of "a sinusoid in gives a sinusoid out". We now have a view that "a sum of sinusoids in gives a sum of sinusoids out", or more simply: "a spectrum in gives a spectrum out". The input spectrum is changed by the frequency response of the system to give the output spectrum.

[Figure 2A.25 – visual view of a filter's operation: excitation at DC gives Y0 = H(0)X0; excitation with a sinusoid at the fundamental gives Y1 = H(f0)X1; excitation at the nth harmonic gives Yn = H(nf0)Xn; overall, the input spectrum Xn is shaped by H(f) into the output spectrum Yn = H(f)Xn]

This view of filters operating on individual components of an input signal has been implicit in the characterisation of systems via the frequency response. Experimentally, we determine the frequency response of a system by performing the operations in the top half of Figure 2A.25. That is, we apply different sinusoids (including DC, which can be thought of as a sinusoid of zero frequency) to a system and measure the resulting amplitude change and phase shift of the output sinusoid. We then build up a picture of H(f) by plotting the experimentally derived points on a graph (if log scales are chosen then we have a Bode plot).

After obtaining the frequency response, we should be able to tell what happens when we apply any periodic signal. Since the filter is linear, superposition applies. For each component phasor of the input signal, we multiply by |H(jω)| e^{j∠H(jω)}, as shown in the bottom half of Figure 2A.25. We then reconstruct the signal by adding up the "filtered" component phasors. The next example illustrates this process.

Example

Let's see what happens to a square wave when it is "passed through" a 3rd order Butterworth filter. A filter is defined in terms of its magnitude and phase response. For the filter, we have:

|H(jω)| = 1 / √(1 + (ω/ω₀)⁶)                                  (2A.88a)

∠H(jω) = -tan⁻¹[ (2ω₀²ω - ω³) / (ω₀³ - 2ω₀ω²) ]              (2A.88b)

Here, ω₀ is the "cutoff" frequency of the filter. It does not represent the angular frequency of the fundamental component of the input signal – filters know nothing about the signals to be applied to them.
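Eqs. (2A.88a-b) can be checked against a direct evaluation of the transfer function. A sketch, using the normalised 3rd-order Butterworth denominator (s/ω₀)³ + 2(s/ω₀)² + 2(s/ω₀) + 1 with unity DC gain:

```python
import numpy as np

def H(w, w0=1.0):
    # 3rd-order Butterworth lowpass, unity DC gain, cutoff w0
    s = 1j * w / w0
    return 1.0 / (s**3 + 2 * s**2 + 2 * s + 1)

w0 = 1.0
print(abs(H(w0, w0)))                               # 1/sqrt(2) at the cutoff
w = 0.5
print(abs(H(w, w0)), 1 / np.sqrt(1 + (w / w0)**6))  # both match Eq. (2A.88a)
```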

This is an operation best performed and thought about graphically. For the case of the filter cutoff frequency set at twice the input signal's fundamental frequency, we have for the output magnitude spectrum:

[Figure 2A.26 – Filter output magnitude spectrum obtained graphically using the input signal's magnitude spectrum |G_n| (with a DC line of height A/2 and odd-harmonic lines at f = ±f₀, ±3f₀, …, ±11f₀) and the 3rd order Butterworth filter's magnitude response.]

We could perform a similar operation for the phase spectrum.

If we now take the output spectrum and reconstruct the time-domain waveform it represents, we get:

[Figure 2A.27 – The output signal in the time-domain obtained from the output spectrum.]

This looks like a shifted sinusoid (DC + sine wave) with "a touch of 3rd harmonic distortion". How do we know it is 3rd harmonic distortion? Without the DC component, the waveform exhibits half-wave symmetry, so we know the 2nd harmonic (an even harmonic) is zero. With practice, you are able to recognise the components of waveforms and hence relate them to their magnitude spectrum, as in Figure 2A.26.

If we extend the cutoff frequency to ten times the fundamental frequency of the input square wave, the filter has less effect:

[Figure 2A.28 – The output waveform with the cutoff at ten times the fundamental: sharp transitions in the time-domain are caused by high frequencies.]

From this example it should be apparent that high frequencies are needed to make sharp transitions in the time-domain.
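The whole example can be replayed numerically by multiplying the square wave's line spectrum by the filter's frequency response and summing the phasors. A sketch (assuming a square wave swinging between 0 and A = 1 with fundamental f₀ = 1 Hz, and the unity-gain 3rd-order Butterworth filter from above; the cutoffs 2f₀ and 10f₀ are the two cases discussed):

```python
import numpy as np

def G(n):
    # Exponential FS coefficients of a 0-to-1, 50% duty square wave, f0 = 1 Hz
    if n == 0:
        return 0.5
    return 1 / (1j * np.pi * n) if n % 2 else 0.0

def H(f, fc):
    # 3rd-order Butterworth lowpass, unity DC gain, cutoff fc
    s = 1j * f / fc
    return 1.0 / (s**3 + 2 * s**2 + 2 * s + 1)

t = np.linspace(0.0, 2.0, 1000, endpoint=False)   # two periods
for fc in (2.0, 10.0):
    y = sum(G(n) * H(n, fc) * np.exp(2j * np.pi * n * t)
            for n in range(-25, 26)).real
    print(fc, y.mean())     # DC passes unchanged, so the mean stays at 0.5
```

Plotting y against t for the two cutoffs reproduces Figures 2A.27 and 2A.28.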

Relationships Between the Three Fourier Series Representations

The table below shows the relationships between the three different representations of the Fourier series:

Trigonometric Fourier Series:

g(t) = a₀ + Σ_{n=1}^{∞} [ a_n cos(2πnf₀t) + b_n sin(2πnf₀t) ]

a₀ = (1/T₀) ∫_{-T₀/2}^{T₀/2} g(t) dt
a_n = (2/T₀) ∫_{-T₀/2}^{T₀/2} g(t) cos(2πnf₀t) dt
b_n = (2/T₀) ∫_{-T₀/2}^{T₀/2} g(t) sin(2πnf₀t) dt

Compact Trigonometric Fourier Series:

g(t) = Σ_{n=0}^{∞} A_n cos(2πnf₀t + φ_n) = Re{ Σ_{n=0}^{∞} Ḡ_n e^{j2πnf₀t} }

Ḡ₀ = a₀,   Ḡ_n = (2/T₀) ∫_{-T₀/2}^{T₀/2} g(t) e^{-j2πnf₀t} dt,  n ≠ 0

Complex Exponential Fourier Series:

g(t) = Σ_{n=-∞}^{∞} G_n e^{j2πnf₀t}

G_n = (1/T₀) ∫_{-T₀/2}^{T₀/2} g(t) e^{-j2πnf₀t} dt

Spectrum of a single sinusoid A cos(2πf₀t + θ): the trigonometric coefficients are a₁ = A cos θ and b₁ = -A sin θ; the compact (single-sided) spectrum is one line of height A at f = f₀; the double-sided spectrum |G_n| has two lines of height A/2 at f = ±f₀.
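The conversions between the three coefficient sets can be captured in a few lines. A sketch (for a real signal and n ≥ 1, using the standard relations G_n = (a_n - jb_n)/2, A_n = 2|G_n| and φ_n = ∠G_n):

```python
import numpy as np

def trig_to_exp(an, bn):
    # a_n cos(2*pi*n*f0*t) + b_n sin(2*pi*n*f0*t)  ->  G_n  (n >= 1)
    # (for a real signal, G_{-n} is the complex conjugate of G_n)
    return (an - 1j * bn) / 2

def exp_to_compact(Gn):
    # G_n (n >= 1)  ->  (A_n, phi_n) in A_n cos(2*pi*n*f0*t + phi_n)
    return 2 * abs(Gn), np.angle(Gn)

# e.g. 3cos(x) + 4sin(x) = 5cos(x - 0.9273...)
An, phin = exp_to_compact(trig_to_exp(3.0, 4.0))
print(An, phin)
```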

Summary

- All periodic waveforms are made up of a sum of sinusoids – a Fourier series. There are three equivalent notations for the Fourier series.
- The trigonometric Fourier series expresses a periodic signal as a DC (or average) term and a sum of harmonically related cosinusoids and sinusoids.
- The compact trigonometric Fourier series expresses a periodic signal as a sum of cosinusoids of varying amplitude and phase.
- The complex exponential Fourier series expresses a periodic signal as a sum of counter-rotating harmonic phasors ("sum of harmonic phasors").
- Fourier series coefficients can be found for any periodic waveform by taking "projections" of the periodic waveform onto each of the orthogonal basis functions making up the Fourier series.
- The coefficients of the basis functions in the Fourier series are called "Fourier series coefficients". For the complex exponential Fourier series, they are complex numbers, and are just the phasor representation of the sinusoid at that particular harmonic frequency.
- A spectrum of a periodic waveform is a graph of the amplitudes and phases of the constituent basis functions. We usually have to graph a "magnitude" spectrum and a "phase" spectrum.
- The most convenient spectrum is the double-sided spectrum. It is a graph of the complex exponential Fourier series coefficients.

References

Haykin, S.: Communication Systems, John Wiley & Sons, New York, 1994.

Lathi, B. P.: Modern Digital and Analog Communication Systems, Holt-Saunders, Tokyo, 1983.

Quiz

Encircle the correct answer, cross out the wrong answers. [one or none correct]

1. The Fourier series of the periodic signal g(t) shown will have no:
(a) DC term   (b) odd harmonics   (c) even harmonics

2. The signal 3cos(2πt) + 4sin(4πt) has:
(a) A₃ = 3, φ₃ = 0   (b) G₁ = (3/2)∠(π/2)   (c) a₂ = 0, b₂ = 4

3. The double-sided amplitude spectrum of a real signal always possesses:
(a) even symmetry   (b) odd symmetry   (c) no symmetry

4. The power (across 1 Ω) of the signal whose amplitude spectrum |G_n| (V) is shown for -3 Hz ≤ f ≤ 3 Hz is:
(a) 14.5 W   (b) 10 W   (c) 15.5 W

5. The phase of the 3rd harmonic component of the periodic signal g(t) = Σ_n A rect((t - 1 - 4n)/0.2) is:
(a) 90°   (b) -90°   (c) 0°

Answers: 1. c  2. c  3. a  4. c  5. a

Exercises

1. A signal g(t) = 2cos(100πt) + sin(100πt).

(a) Plot the single-sided magnitude spectrum and phase spectrum.
(b) Plot the double-sided magnitude spectrum and phase spectrum.
(c) Plot the real and imaginary components of the double-sided spectrum.

2. What is g(t) for the signal whose spectrum is sketched below?

[Spectrum: |G| = 2 at ω = ±100 rad s⁻¹; ∠G = ±45° as sketched.]

3. What is G for the sinusoid whose spectrum is shown below?

[Spectrum: |G| = 1 at ω = ±1 rad s⁻¹; ∠G = ±30° as sketched.]

4. Sketch photographs of the counter-rotating phasors associated with the spectrum below at t = 0, t = 1 and t = 2.

[Spectrum: Re{G_n} = 2 at f = ±100 Hz; Im{G_n} as sketched.]

5. For each of the periodic signals shown, find the complex exponential Fourier series coefficients. Plot the magnitude and phase spectra in MATLAB®, for -25 ≤ n ≤ 25.

[The sketched signals are: (a) a full-wave rectified sine wave; (b) a sawtooth-type waveform; (c) a repeating decaying exponential e^{-t/10}; (d) a periodic burst of cos(8πt); (e) a pulse train of unit height and period T₀.]
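As a numerical cross-check for part (a) (sketched here in Python rather than MATLAB; it assumes the rectified sine wave is |sin(πt)| with period T₀ = 1, which is an assumption about the sketched waveform), the coefficients can be approximated by discretising the defining integral:

```python
import numpy as np

def fs_coeff(g, T0, n, N=100000):
    # G_n = (1/T0) * integral over one period of g(t) exp(-j 2 pi n t / T0) dt
    t = np.linspace(0.0, T0, N, endpoint=False)
    return np.mean(g(t) * np.exp(-2j * np.pi * n * t / T0))

g = lambda t: np.abs(np.sin(np.pi * t))   # full-wave rectified sine, T0 = 1

print(fs_coeff(g, 1.0, 0).real)           # close to 2/pi
print(fs_coeff(g, 1.0, 1).real)           # close to -2/(3*pi), from G_n = 2/(pi(1-4n^2))
```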

6. A voltage waveform v_i(t) with a period of 0.08 s is defined by:

v_i = 60 V,  0 ≤ t < 0.01 s
v_i = 0 V,   0.01 s ≤ t < 0.08 s

The voltage v_i is applied as the source in the circuit shown below (a lowpass filter with a 16 mH inductor feeding the load; its magnitude response |V_o/V_i| is sketched for f = 0 to 80 Hz). What average power is delivered to the load?

7. A periodic signal g(t) – a triangular waveform of unit height and period T₀ – is transmitted through a system with transfer function H(ω), an ideal lowpass response of unity gain for -12 ≤ ω ≤ 12 rad s⁻¹. For three different values of T₀ (T₀ = 2π/3, π/3 and π/6), find the double-sided magnitude spectrum of the output signal. Calculate the power of the output signal as a percentage of the power of the input signal g(t).

8. Estimate the bandwidth B of the periodic signal g(t) shown below if the power of all components of g(t) within the band B is to be at least 99.9 percent of the total power of g(t).

[g(t) is a square wave alternating between +A and -A with period T₀.]

Joseph Fourier (1768-1830) (Jo·sef Foor·yay)

Fourier is famous for his study of the flow of heat in metallic plates and rods. The theory that he developed now has applications in industry and in the study of the temperature of the Earth's interior. He is also famous for the discovery that many functions could be expressed as infinite sums of sine and cosine terms, now called a trigonometric series, or Fourier series.

Fourier first showed talent in literature, but by the age of thirteen, mathematics became his real interest. By fourteen, he had completed a study of six volumes of a course on mathematics. Fourier studied for the priesthood but did not end up taking his vows. Instead he became a teacher of mathematics. In 1793 he became involved in politics and joined the local Revolutionary Committee. As he wrote:-

As the natural ideas of equality developed it was possible to conceive the sublime hope of establishing among us a free government exempt from kings and priests, and to free from this double yoke the long-usurped soil of Europe. I readily became enamoured of this cause, in my opinion the greatest and most beautiful which any nation has ever undertaken.

Fourier became entangled in the French Revolution, and in 1794 he was arrested and imprisoned. He feared he would go to the guillotine but political changes allowed him to be freed. In 1795, he attended the Ecole Normale and was taught by, among others, Lagrange and Laplace. He started teaching again, and began further mathematical research. In 1797, after another brief period in prison, he succeeded Lagrange in being appointed to the chair of analysis and mechanics. He was renowned as an outstanding lecturer but did not undertake original research at this time.

In 1798 Fourier joined Napoleon on his invasion of Egypt as scientific adviser. The expedition was a great success (from the French point of view) until August 1798, when Nelson's fleet completely destroyed the French fleet in the Battle of the Nile, so that Napoleon found himself confined to the land he was occupying. Fourier acted as an administrator as French type political

institutions and administrations were set up. In particular he helped establish educational facilities in Egypt and carried out archaeological explorations. While in Cairo, Fourier helped found the Institute d'Égypte and was put in charge of collating the scientific and literary discoveries made during the time in Egypt. The Institute d'Égypte was responsible for the completely serendipitous discovery of the Rosetta Stone in 1799. The three inscriptions on this stone, in two languages and three scripts (hieroglyphic, demotic and Greek), enabled Thomas Young and Jean-François Champollion, a protégé of Fourier, to invent a method of translating hieroglyphic writings of ancient Egypt in 1822.

Napoleon abandoned his army and returned to Paris in 1799, and soon held absolute power in France. Fourier returned to France in 1801 with the remains of the expeditionary force and resumed his post as Professor of Analysis at the Ecole Polytechnique. Napoleon appointed Fourier to be Prefect at Grenoble, where his duties were many and varied – they included draining swamps and building highways. It was during his time in Grenoble that Fourier did his important mathematical work on the theory of heat. His work on the topic began around 1804, and by 1807 he had completed his important memoir On the Propagation of Heat in Solid Bodies. It caused controversy – both Lagrange and Laplace objected to Fourier's expansion of functions as trigonometric series. The following extract is from a letter found among Fourier's papers; it unfortunately lacks the name of the addressee, but was probably intended for Lagrange:

…it was in attempting to verify a third theorem that I employed the procedure which consists of multiplying by cos x dx the two sides of the equation

φ(x) = a₀ + a₁ cos x + a₂ cos 2x + …

and integrating between x = 0 and x = π. I am sorry not to have known the name of the mathematician who first made use of this method because I would have cited him. Regarding the researches of d'Alembert and Euler could one not add that if they knew this expansion they made but a very imperfect use of it. They were both persuaded that an arbitrary…function could never be resolved in a series of this kind, and it does not seem that any one had developed a constant in cosines of multiple arcs [i.e. found a₁, a₂, …, with 1 = a₁ cos x + a₂ cos 2x + …, for -π/2 < x < π/2], the first problem which I had to solve in the theory of heat.

Other people before Fourier had used expansions of the form f(x) ~ Σ_r a_r exp(irx), but Fourier's work extended this idea in two totally new ways. One was the "Fourier integral" (the formula for the Fourier series coefficients) and the other marked the birth of Sturm-Liouville theory (Sturm and Liouville were nineteenth century mathematicians who found solutions to

many classes of partial differential equations arising in physics that were analogous to Fourier series).

Napoleon was defeated in 1815 and Fourier returned to Paris. Fourier was elected to the Académie des Sciences in 1817 and became Secretary in 1822. Shortly after, the Academy published his prize winning essay Théorie analytique de la chaleur (Analytical Theory of Heat). In this he obtains for the first time the equation of heat conduction, which is a partial differential equation in three dimensions.

As an application, he considered the temperature of the ground at a certain depth due to the sun's heating. The solution consists of a yearly component and a daily component. Both effects die off exponentially with depth, but the high frequency daily effect dies off much more rapidly than the low frequency yearly effect. There is also a phase lag for the daily and yearly effects, so that at certain depths the temperature will be completely out of step with the surface temperature. All these predictions are confirmed by measurements, which show that annual variations in temperature are imperceptible at quite small depths (this accounts for the permafrost, i.e. permanently frozen subsoil, at high latitudes) and that daily variations are imperceptible at depths measured in tenths of metres. A reasonable value of soil thermal conductivity leads to a prediction that annual temperature changes will lag by six months at about 2–3 metres depth. Again this is confirmed by observation and, as Fourier remarked, gives a good depth for the construction of cellars.

As Fourier grew older, he developed at least one peculiar notion. Whether influenced by his stay in the heat of Egypt or by his own studies of the flow of heat in metals, he became obsessed with the idea that extreme heat was the natural condition for the human body. He was always bundled in woollen clothing and kept his rooms at high temperatures – "thoroughly cooked". He died in his sixty-third year.

References

Körner, T. W.: Fourier Analysis, Cambridge University Press, 1988.


Lecture 2B – The Fourier Transform

The Fourier transform. Continuous spectra. Existence of the Fourier transform. Finding Fourier transforms. Symmetry between the time-domain and frequency-domain. Time shifting. Frequency shifting. Relationship between the Fourier series and Fourier transform. Fourier transform of sinusoids. Fourier transform of a uniform train of impulses. Standard Fourier transforms. Fourier transform properties.

The Fourier Transform

The Fourier series is used to represent a periodic function as a weighted sum of sinusoidal (or complex exponential) functions. We would like to extend this result to functions that are not periodic. Such an extension is possible by what is known as the Fourier transform representation of a function.

To derive the Fourier transform, we start with an aperiodic signal g(t):

[Figure 2B.1 – an aperiodic signal g(t).]

Now we construct a new periodic signal g_p(t) consisting of the signal g(t) repeating itself every T₀ seconds:

[Figure 2B.2 – an artificial periodic waveform g_p(t) made from the original aperiodic waveform.]

The period T₀ is made long enough so that there is no overlap between the repeating pulses. This new signal g_p(t) is a periodic signal and so it can be represented by an exponential Fourier series. If we let T₀ become infinite, the pulses in the periodic signal repeat after an infinite interval, and:

lim_{T₀→∞} g_p(t) = g(t)                                        (2B.1)

Thus, in the limit T₀ → ∞, the Fourier series representing g_p(t) will also represent g(t). The exponential Fourier series for g_p(t) is:

g_p(t) = Σ_{n=-∞}^{∞} G_n e^{j2πnf₀t}                           (2B.2a)

G_n = (1/T₀) ∫_{-T₀/2}^{T₀/2} g_p(t) e^{-j2πnf₀t} dt            (2B.2b)

In the time-domain, as T₀ → ∞, the pulse train becomes a single non-periodic pulse located at the origin, as in Figure 2B.1. In the frequency-domain, as T₀ becomes larger, f₀ becomes smaller and the spectrum becomes "denser" (the discrete frequencies are closer together). In the limit as T₀ → ∞, the fundamental frequency f₀ → df and the harmonics become infinitesimally close together. The individual harmonics no longer exist as they form a continuous function of frequency. In other words, the spectrum exists for every value of f and is no longer a discrete function of f but a continuous function of f.

To illustrate, consider a rectangular pulse train, for which we know:

G_n = Aτf₀ sinc(nτf₀)                                           (2B.3)

The spectrum for τ = 1/5 s and T₀ = 1 s looks like:

[Figure 2B.3 – the line spectrum G_n of the pulse train for -12 ≤ f ≤ 12, with a sinc-shaped envelope of peak Aτf₀ = 0.2A.]

As seen from Eq. (2B.2b), as T₀ becomes larger, the amplitudes of the individual components become smaller. In the limit as T₀ → ∞, the magnitude of each component becomes infinitesimally small, until they eventually vanish! Clearly, it no longer makes sense to define the concept of a spectrum as the amplitudes and phases of certain harmonic frequencies. Instead, a new concept is introduced called spectral density. The shape of the frequency spectrum, however, is unaltered (all G_n's are scaled by the same amount, T₀).

For T₀ = 5 s we have:

[Figure 2B.4 – as the period increases, the spectral lines get closer and closer, but smaller and smaller; the envelope retains the same shape, now with peak 0.04A.]

The envelope retains the same shape (it was never a function of T₀ anyway). As T₀ → ∞, the spectral lines get closer and closer, but the amplitude gets smaller and smaller with increasing T₀. In the limit, it is impossible to draw the magnitude spectrum as a graph of G_n because the amplitudes of the harmonic phasors have reduced to zero. It is possible, however, to graph a new quantity:

G_n T₀ = ∫_{-T₀/2}^{T₀/2} g_p(t) e^{-j2πnf₀t} dt                (2B.4)

which is just a rearrangement of Eq. (2B.2b). We suspect that the product G_n T₀ will be finite as T₀ → ∞, in the same way that the area remained constant as T → 0 in the family of rect functions we used to "explain" the impulse function. Also, the frequency of any "harmonic" nf₀ must now correspond to the general frequency variable which describes the continuous spectrum.

In order to apply the limiting process to Eq. (2B.2a), we multiply the summation by T₀f₀ = 1:

g_p(t) = Σ_{n=-∞}^{∞} (G_n T₀) e^{j2πnf₀t} f₀                   (2B.5)

With the limiting process, Eq. (2B.4) becomes:

G_n T₀ → ∫_{-∞}^{∞} g(t) e^{-j2πft} dt                          (2B.6)

The right-hand side of this expression is a function of f (and not of t). Here, we now notice that:

f₀ → df,  T₀ → ∞                                                (2B.7)

That is, the fundamental frequency becomes infinitesimally small as the period is made larger and larger. Furthermore, n must tend to infinity as f₀ approaches zero, to give the finite continuous frequency f, such that the product is finite:

nf₀ → f,  T₀ → ∞                                                (2B.8)

This agrees with our reasoning in obtaining Eq. (2B.1), since there we had an infinite number of infinitesimally small discrete frequencies. Therefore, G(f) = lim_{T₀→∞} G_n T₀ has the dimensions of amplitude per unit frequency, and G(f) can be called the spectral density. We represent it by:

G(f) = ∫_{-∞}^{∞} g(t) e^{-j2πft} dt                            (2B.9)

This is the Fourier transform.

To express g(t) in the time-domain using frequency components, we apply the limiting process to Eq. (2B.5) and use Eq. (2B.8) and the new quantity G(f) = G_n T₀, with nf₀ = n df → f and g_p(t) → g(t). In the limit the summation becomes an integral, and:

g(t) = ∫_{-∞}^{∞} G(f) e^{j2πft} df                             (2B.10)

This is the inverse Fourier transform. Eqs. (2B.9) and (2B.10) are collectively known as a Fourier transform pair. The function G(f) is the Fourier transform of g(t), and g(t) is the inverse Fourier transform of G(f). To recapitulate, we have shown that:

g(t) = ∫_{-∞}^{∞} G(f) e^{j2πft} df                             (2B.11a)

G(f) = ∫_{-∞}^{∞} g(t) e^{-j2πft} dt                            (2B.11b)

These relationships can also be expressed symbolically by a Fourier transform pair:

g(t) ⟷ G(f)                                                     (2B.12)

Continuous Spectra

The concept of a continuous spectrum is sometimes bewildering, because we generally picture the spectrum as existing at discrete frequencies and with finite amplitudes. The continuous spectrum concept can be appreciated by considering a simple analogy. Consider a beam loaded with weights of G₁, G₂, G₃, …, G_n kilograms at uniformly spaced points x₁, x₂, x₃, …, x_n, as shown in (a) below:

[Figure 2B.5 – (a) a beam loaded at n discrete points; (b) a continuously loaded beam with loading density G(x).]

The beam is loaded at n discrete points, and the total weight W on the beam is given by:

W = Σ_{i=1}^{n} G_i                                             (2B.13)

Now consider the case of a continuously loaded beam, as shown in Figure 2B.5(b) above. The loading density G(x), in kilograms per metre, is a function of x. The total weight on the beam is now given by a continuous sum of the infinitesimal weights at each point – that is, the integral of G(x) over the entire length:

W = ∫_{x₁}^{x_n} G(x) dx                                        (2B.14)

In the discrete loading case, the weight existed only at discrete points. At other points there was no load. On the other hand, in the continuously distributed case, the loading exists at every point, but at any one point the load is zero. The load in a small distance dx, however, is given by G(x) dx. Therefore G(x) represents the relative loading at a point x.

An exactly analogous situation exists in the case of a signal and its frequency spectrum. A periodic signal can be represented by a sum of discrete exponentials with finite amplitudes (harmonic phasors):

g(t) = Σ_{n=-∞}^{∞} G_n e^{j2πnf₀t}                             (2B.15)

For a nonperiodic signal, the distribution of exponentials becomes continuous; that is, the spectrum exists at every value of f. At any one frequency f, the amplitude of that frequency component is zero. The total contribution in an infinitesimal interval df is given by G(f) e^{j2πft} df, and the function g(t) can be expressed in terms of the continuous sum of such infinitesimal components:

g(t) = ∫_{-∞}^{∞} G(f) e^{j2πft} df                             (2B.16)

An electrical analogy could also be useful: just replace the discrete loaded beam with an array of filamentary conductors, and the continuously loaded beam with a current sheet. The analysis is the same.

Existence of the Fourier Transform

Dirichlet (1805-1859) investigated the sufficient conditions that could be imposed on a function g(t) for its Fourier transform to exist. These so-called Dirichlet conditions are:

1. g(t) is "absolutely integrable", i.e.:

∫_{-∞}^{∞} |g(t)| dt < ∞                                        (2B.17a)

2. On any finite interval:
   a) g(t) is bounded
   b) g(t) has a finite number of maxima and minima
   c) g(t) has a finite number of discontinuities.                (2B.17b)

Note that these are sufficient conditions and not necessary conditions. Use of the Fourier transform for the analysis of many useful signals would be impossible if these were necessary conditions. Any signal with finite energy:

E = ∫_{-∞}^{∞} |g(t)|² dt < ∞                                   (2B.18)

is absolutely integrable, and so all energy signals are Fourier transformable. Many signals of interest to electrical engineers are not energy signals and are therefore not absolutely integrable. It can be shown that signals which have infinite energy, but which contain a finite amount of power and meet Dirichlet condition 2, do have valid Fourier transforms. These include the step function and all periodic functions.

For practical purposes, for the signals or functions we may wish to analyse as engineers, we can use the rule of thumb that if we can draw a picture of the waveform g(t), then it has a Fourier transform G(f).

2B. we have the Fourier transform pair: and its transform t A rect    AT sinc fT  T Signals and Systems 2014 (2B.20) .19) Therefore.10 Finding Fourier Transforms Example Find the spectrum of the following rectangular pulse: A common and useful signal… g(t ) -T /2 A T /2 t Figure 2B. We then have:  t G  f    A rect e  j 2ft dt  T  T 2  A e  j 2ft dt T 2  A e  jfT  e jfT  j 2f A sin fT   f  AT sinc fT    (2B.6 We note that g t   Arect t T  .

2B. (2B. We may now state our observations more formally.22) Linearity is obeyed by transform pairs . We see that the linearity property of the Fourier transform pair can be defined as: agt   aG f  Signals and Systems 2014 (2B.7 There are a few interesting observations we can make in general by Time and frequency characteristics considering this result.20) results in what we shall call a “standard” transform: rect t   sinc f  (2B. Letting A=1 and T=1 in Eq. Another is that compression in time (by relationship making T smaller) will result in an expansion of the frequency spectrum (“wider” sinc function in this case). Another is that the Fourier transform is a linear operation – multiplication by A in the time domain results in the spectrum being multiplied by A.11 We can also state this graphically: g(t ) -T /2 G (f ) A T /2 t -5 T -4 T -3 T -2 T AT -1 T 1 T 2 T 3 T 4 T 5 T f Figure 2B.21) One of the most common transform pairs – commit it to memory! It is standard in the sense that we cannot make any further simplifications. One is that a time limited function has frequency reflect an inverse components approaching infinity. It expresses a fundamental relationship between the rect and sinc functions without any complicating factors.

We can also generalise the time scaling property to:

g(t/T) ⟷ T G(fT)                                                (2B.23)

Thus, expansion in time corresponds to compression in frequency, and vice versa. We now only have to derive enough standard transforms, and discover a few Fourier transform properties, to be able to handle almost any signal and system of interest. We could (and should in future) derive our Fourier transforms by starting with a standard Fourier transform and then applying appropriate Fourier transform pair properties. For example, we could arrive at Eq. (2B.20) without integration by the following:

rect(t) ⟷ sinc(f)                (standard transform)
rect(t/T) ⟷ T sinc(fT)           (scaling property)
A rect(t/T) ⟷ AT sinc(fT)        (linearity)                    (2B.24)

Symmetry between the Time-Domain and Frequency-Domain

From the definition of the inverse Fourier transform:

g(t) = ∫_{-∞}^{∞} G(f) e^{j2πft} df                             (2B.25)

we can make the transformations t → -f and f → x to give:

g(-f) = ∫_{-∞}^{∞} G(x) e^{-j2πxf} dx                           (2B.26)

Now, since the value of a definite integral is independent of the variable of integration, we can make the change of variable x → t to give:

g(-f) = ∫_{-∞}^{∞} G(t) e^{-j2πtf} dt                           (2B.27)

Notice that the right-hand side is precisely the definition of the Fourier transform of the function G(t). Thus, there is an almost symmetrical relationship between the transform and its inverse. This is summed up in the duality property:

G(t) ⟷ g(-f)                                                    (2B.28)

Example

Consider a rectangular pulse in the frequency-domain. This represents the frequency response of an ideal low-pass filter, with a cut-off frequency of B:

[Figure 2B.8 – G(f) = rect(f/2B), a rectangular pulse of unit height for -B < f < B.]

We are interested in finding the inverse Fourier transform of this function. Using the duality property, we know that if rect(t) ⟷ sinc(f), then sinc(t) ⟷ rect(-f). Since the rect function is symmetric, this can also be written as sinc(t) ⟷ rect(f). But our frequency-domain transform is G(f) = rect(f/2B), so we need to apply the time-scale property in reverse to arrive at:

2B sinc(2Bt) ⟷ rect(f/2B)                                       (2B.29)

Graphically:

[Figure 2B.9 – the Fourier transform of a sinc function: the time-domain waveform 2B sinc(2Bt), with zeros at multiples of 1/2B, and its transform rect(f/2B).]
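Eq. (2B.29) can be confirmed by integrating the ideal lowpass response rect(f/2B) directly, i.e. by evaluating the inverse transform at a few instants:

```python
import numpy as np

B = 4.0                         # cutoff frequency of the ideal lowpass filter
N = 100000
f = np.linspace(-B, B, N, endpoint=False)
df = 2 * B / N

for t in (0.0, 0.1, 0.37):
    g = np.sum(np.exp(2j * np.pi * f * t)) * df    # inverse FT of rect(f/2B)
    print(t, g.real, 2 * B * np.sinc(2 * B * t))   # matches 2B sinc(2Bt)
```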

Example

We wish to find the Fourier transform of the impulse function δ(t). We can find it by direct integration (using the sifting property of the impulse function):

G(f) = ∫_{-∞}^{∞} δ(t) e^{-j2πft} dt = 1                        (2B.30)

That is, we have another standard Fourier transform pair:

δ(t) ⟷ 1                                                        (2B.31)

It is interesting to note that we could have obtained the same result using Eq. (2B.20). Let the height of the rectangle function be such that the area AT is always equal to 1; then we have from Eq. (2B.20):

(1/T) rect(t/T) ⟷ sinc(fT)                                      (2B.32)

Now let T → 0, so that the rectangle function turns into the impulse function δ(t). Noting that sinc(0) is unity, we arrive at Eq. (2B.31). Graphically, we have:

[Figure 2B.10 – the impulse function δ(t) and its transform G(f) = 1.]

This result says that an impulse function is composed of all frequencies from DC to infinity, and all contribute equally.
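The limiting argument of Eq. (2B.32) can be seen numerically: as T shrinks, the spectrum sinc(fT) of the unit-area pulse flattens towards 1 at any fixed frequency:

```python
import numpy as np

f = 2.0                         # an arbitrary fixed test frequency
for T in (0.5, 0.05, 0.005):
    # (1/T) rect(t/T) has the transform sinc(fT)
    print(T, np.sinc(f * T))    # -> 1 as T -> 0
```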

This example highlights one very important feature of finding Fourier transforms – there is usually more than one way to find them, and it is up to us (with experience and ability) to find the easiest way. However, once the Fourier transform has been obtained for a function, it is unique – this serves as a check for different methods.

Example

We wish to find the Fourier transform of the constant 1 (if it exists). Choosing the path of direct evaluation of the integral, we get:

G(f) = ∫_{-∞}^{∞} 1 · e^{-j2πft} dt                             (2B.33)

which appears intractable. However, we can approach the problem another way. If we let our rectangular function of Figure 2B.6 take on infinite width, it becomes a DC signal. As the width of the rectangular pulse becomes larger and larger, the sinc function becomes narrower and narrower, and its amplitude increases. Eventually, with infinite pulse width, the sinc function has no width and infinite height – an impulse function. Using Eq. (2B.20) with T → ∞, and noting that the area under the sinc function is 1/T (see Lecture 1A), we therefore have:

1 ⟷ δ(f)                                                        (2B.34)

2B.17

Yet another way to arrive at this result is through recognising that a certain amount of symmetry exists in the equations for the Fourier transform and inverse Fourier transform. Applying the duality property to Eq. (2B.31) gives:

1 ↔ δ(−f)    (2B.35)

Since the impulse function is an even function, we get:

1 ↔ δ(f)    (2B.36)

which is the same as Eq. (2B.34). Again, two different methods converged on the same result – and one method (direct integration) seemed impossible! It is therefore advantageous to become familiar with the properties of the Fourier transform.

2B.18

Time Shifting

A time shift to the right of t0 seconds (i.e. a time delay) can be represented by g(t − t0). A time shift to the left of t0 seconds can be represented by g(t + t0). The Fourier transform of a time-shifted function to the right is:

F{g(t − t0)} = ∫ g(t − t0) e^{−j2πft} dt    (2B.37)

Letting x = t − t0, then dx = dt and t = x + t0, and we have:

F{g(t − t0)} = ∫ g(x) e^{−j2πf(x + t0)} dx

g(t − t0) ↔ G(f) e^{−j2πft0}    (2B.38)

Therefore, a time shift to the right of t0 seconds in the time-domain is equivalent to multiplication by e^{−j2πft0} in the frequency-domain. Thus, a time delay simply causes a linear phase change in the spectrum – the magnitude spectrum is left unaltered: time shifting a function just changes the phase.

For example, the Fourier transform of g(t) = A rect((t − T/2)/T) is G(f) = AT sinc(fT) e^{−jπfT}, and is shown below:

[Figure 2B.11 – the shifted rectangular pulse, its magnitude spectrum AT|sinc(fT)| with nulls at multiples of 1/T, and its linear phase of slope −πT]
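The time-shift property can be verified numerically. This sketch (an addition, not part of the original notes; pulse shape and shift value are illustrative) computes the transform of a pulse and of its delayed copy with a Riemann sum, and checks that the delayed spectrum equals the original multiplied by e^{−j2πft0}.

```python
import cmath

def fourier_transform(g, t, f):
    # Riemann-sum approximation of G(f)
    dt = t[1] - t[0]
    return [sum(gk * cmath.exp(-2j * cmath.pi * fk * tk) for gk, tk in zip(g, t)) * dt
            for fk in f]

N = 4001
t = [-2.0 + 4.0 * k / (N - 1) for k in range(N)]
f = [0.25, 0.5, 1.0]
t0 = 0.3                                                  # delay (shift to the right)

g  = [1.0 if 0 <= tk <= 1 else 0.0 for tk in t]           # rectangular pulse on [0, 1]
gd = [1.0 if 0 <= tk - t0 <= 1 else 0.0 for tk in t]      # the same pulse, delayed: g(t - t0)

G  = fourier_transform(g, t, f)
Gd = fourier_transform(gd, t, f)

for fk, Gk, Gdk in zip(f, G, Gd):
    predicted = Gk * cmath.exp(-2j * cmath.pi * fk * t0)  # G(f) times the phase factor
    print(fk, abs(Gdk - predicted))                       # differences should be tiny
```

Only the phase changes: `abs(Gdk)` matches `abs(Gk)` at every frequency.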

2B.19

Frequency Shifting

Similarly to the last section, a spectrum G(f) can be shifted to the right, G(f − f0), or to the left, G(f + f0). The inverse Fourier transform of a frequency-shifted function to the right is:

F⁻¹{G(f − f0)} = ∫ G(f − f0) e^{j2πft} df    (2B.39)

Letting x = f − f0, then dx = df and f = x + f0, and we have:

∫ G(x) e^{j2π(x + f0)t} dx = g(t) e^{j2πf0t}

g(t) e^{j2πf0t} ↔ G(f − f0)    (2B.40)

Therefore, a frequency shift to the right of f0 Hertz in the frequency-domain is equivalent to multiplication by e^{j2πf0t} in the time-domain. Similarly, a shift to the left by f0 Hertz in the frequency-domain is equivalent to multiplication by e^{−j2πf0t} in the time-domain. The sign of the exponent in the shifting factor is opposite to that with time shifting – this can also be explained in terms of the duality property.

This property will be used to derive several standard transforms, and it is particularly important in communications, where it forms the basis for modulation and demodulation. For example, consider "amplitude modulation", making use of the Euler expansion for a cosine:

g(t) cos(2πfc t) = g(t) (½ e^{j2πfc t} + ½ e^{−j2πfc t}) ↔ ½ G(f − fc) + ½ G(f + fc)    (2B.41)

Multiplication by a sinusoid in the time-domain thus shifts the original spectrum up and down by the carrier frequency.
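The amplitude-modulation result can be seen numerically. The following sketch is not part of the original notes: it uses an illustrative Gaussian baseband pulse and an assumed carrier frequency, and checks that multiplying by cos(2πfc t) moves half of the DC spectral weight up to the carrier.

```python
import cmath, math

def fourier_transform(g, t, f):
    # Riemann-sum approximation of G(f)
    dt = t[1] - t[0]
    return [sum(gk * cmath.exp(-2j * math.pi * fk * tk) for gk, tk in zip(g, t)) * dt
            for fk in f]

N = 8001
t = [-4.0 + 8.0 * k / (N - 1) for k in range(N)]
fc = 5.0                                        # assumed carrier frequency in Hz

g = [math.exp(-tk * tk) for tk in t]            # smooth baseband pulse (Gaussian)
m = [gk * math.cos(2 * math.pi * fc * tk) for gk, tk in zip(g, t)]   # modulated signal

G0 = fourier_transform(g, t, [0.0])[0]          # baseband spectrum at f = 0
M = fourier_transform(m, t, [0.0, fc])          # modulated spectrum at f = 0 and f = fc

print(abs(M[1] - 0.5 * G0))   # M(fc) is close to G(0)/2: the spectrum moved up to fc
print(abs(M[0]))              # almost nothing is left at DC
```

Both printed values are near zero, matching ½G(f − fc) + ½G(f + fc) evaluated at f = fc and f = 0.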

2B.20

The Fourier Transform of Sinusoids

Using Eq. (2B.34) and the frequency shifting property, we know that:

e^{j2πf0t} ↔ δ(f − f0)    (2B.42a)
e^{−j2πf0t} ↔ δ(f + f0)    (2B.42b)

Therefore, by substituting into Euler's relationship for the cosine and sine:

cos(2πf0t) = ½ e^{j2πf0t} + ½ e^{−j2πf0t}    (2B.43a)
sin(2πf0t) = (1/2j) e^{j2πf0t} − (1/2j) e^{−j2πf0t} = −(j/2) e^{j2πf0t} + (j/2) e^{−j2πf0t}    (2B.43b)

we get the following standard transform pairs for cos and sin:

cos(2πf0t) ↔ ½ δ(f − f0) + ½ δ(f + f0)    (2B.44a)
sin(2πf0t) ↔ −(j/2) δ(f − f0) + (j/2) δ(f + f0)    (2B.44b)

Graphically:

[Figure 2B.12 – spectra for cos and sin: the cosine has two real impulses of weight ½ at f = ±f0; the sine has imaginary impulses of weight −j/2 at f = f0 and +j/2 at f = −f0]
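The ½ and ±j/2 impulse weights in Eqs. (2B.44a) and (2B.44b) have an exact discrete counterpart. This sketch (an addition to the notes; record length and cycle count are illustrative) takes a DFT, scaled by 1/N, of a cosine and a sine that each complete a whole number of cycles in the record.

```python
import cmath, math

N = 64
cycles = 8          # the sinusoids complete exactly 8 cycles in the record

x = [math.cos(2 * math.pi * cycles * k / N) for k in range(N)]
y = [math.sin(2 * math.pi * cycles * k / N) for k in range(N)]

def dft_scaled(s):
    # DFT scaled by 1/N, so the bin values play the role of the impulse weights
    return [sum(s[k] * cmath.exp(-2j * math.pi * k * m / N) for k in range(N)) / N
            for m in range(N)]

X, Y = dft_scaled(x), dft_scaled(y)

for m in range(N):
    if abs(X[m]) > 1e-9 or abs(Y[m]) > 1e-9:
        print(m, X[m], Y[m])
# Only bins 8 and 56 (= -8 mod 64) survive:
# cosine gives weight 1/2 in both; sine gives -j/2 and +j/2 respectively.
```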

2B.21

Relationship between the Fourier Series and Fourier Transform

From the definition of the Fourier series, we know we can express any periodic waveform as a sum of harmonic phasors:

g(t) = Σn Gn e^{j2πnf0t}    (2B.45)

This sum of harmonic phasors is just a linear combination of complex exponentials. From Eq. (2B.42a), we already know the transform pair:

e^{j2πnf0t} ↔ δ(f − nf0)    (2B.46)

We therefore expect, since the Fourier transform is a linear operation, that we can easily find the Fourier transform of any periodic signal. Scaling Eq. (2B.46) by a complex constant, Gn, we have:

Gn e^{j2πnf0t} ↔ Gn δ(f − nf0)    (2B.47)

Summing over all harmonically related exponentials, we get:

Σn Gn e^{j2πnf0t} ↔ Σn Gn δ(f − nf0)    (2B.48)

Therefore, in words:

The spectrum of a periodic signal is a weighted train of impulses – each weight is equal to the Fourier series coefficient, Gn, at that frequency.

2B.22

Example

Find the Fourier transform of the rectangular pulse train g(t) shown below:

[Figure 2B.13 – rectangular pulse train of amplitude A, pulse width τ and period T0]

We have already found the Fourier series coefficients for this waveform:

Gn = Aτf0 sinc(nτf0)    (2B.50)

For the case of τ = T0/5, the double-sided magnitude spectrum is a weighted train of impulses, with spacing equal to the fundamental of the waveform, f0:

[Figure 2B.14 – a train of impulses at multiples of f0, with weights Gn following a 0.2A sinc envelope]

This should make intuitive sense – the spectrum is defined as "spectral density", i.e. a graph of infinitesimally small phasors spaced infinitesimally close together. If we have an impulse in the spectrum, then that must mean a finite phasor at a specific frequency, i.e. a sinusoid (we recognise that a single sinusoid has a pair of impulses for its spectrum – see Figure 2B.12). Pairs of impulses in the spectrum therefore correspond to a sinusoid in the time-domain.
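The coefficient formula Gn = Aτf0 sinc(nτf0) can be cross-checked by integrating over one period directly. This sketch is an addition to the notes (midpoint-rule integration with an illustrative grid; `sinc(x) = sin(πx)/(πx)` is assumed to be the notes' convention, since it pairs rect with sinc).

```python
import cmath, math

def sinc(x):
    # sinc(x) = sin(pi x) / (pi x), with sinc(0) = 1 (assumed convention)
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

A, T0 = 1.0, 1.0
tau = T0 / 5                   # pulse width, the tau = T0/5 case shown above
f0 = 1.0 / T0

# Gn = (1/T0) * integral over one period of g(t) exp(-j 2 pi n f0 t) dt
N = 20000
dt = T0 / N
coeffs = []
for n in range(8):
    Gn = sum((A if abs(-T0 / 2 + (k + 0.5) * dt) <= tau / 2 else 0.0)
             * cmath.exp(-2j * math.pi * n * f0 * (-T0 / 2 + (k + 0.5) * dt))
             for k in range(N)) * dt / T0
    coeffs.append(Gn)
    print(n, Gn.real, A * tau * f0 * sinc(n * f0 * tau))   # numeric vs formula
```

The two columns agree, including the zero at n = 5, where nf0τ = 1 and the sinc envelope vanishes.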

2B.23

The Fourier Transform of a Uniform Train of Impulses

We will encounter a uniform train of impulses frequently in our analysis of communication systems:

[Figure 2B.15 – unit impulses at t = kT0 for all integers k]

To find the Fourier transform of this waveform, we simply note that it is periodic, and so Eq. (2B.48) applies. We have already found the Fourier series coefficients of the uniform train of impulses (as the limit of a rectangular pulse train):

Gn = f0    (2B.51)

Using Eq. (2B.48), we therefore have the following Fourier transform pair:

Σk δ(t − kT0) ↔ f0 Σn δ(f − nf0)    (2B.52)

Graphically:

[Figure 2B.16 – the Fourier transform of a uniform train of impulses is a uniform train of impulses: unit impulses spaced T0 apart in time transform to impulses of weight f0 spaced f0 apart in frequency]
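The "impulse train transforms to an impulse train" result has a clean discrete analogue, sketched below (an addition to the notes; the record length and spacing are illustrative). A train of unit samples every P points, DFT'd over N points, is non-zero only every N/P bins, with equal weights.

```python
import cmath

N, P = 32, 8                                          # record length; impulse spacing
x = [1.0 if k % P == 0 else 0.0 for k in range(N)]    # discrete "impulse train"

X = [sum(x[k] * cmath.exp(-2j * cmath.pi * k * m / N) for k in range(N))
     for m in range(N)]

# The non-zero bins occur every N/P = 4 bins, and all carry equal weight N/P = 4 —
# a uniform train in frequency, mirroring the pair for the continuous impulse train.
for m, Xm in enumerate(X):
    if abs(Xm) > 1e-9:
        print(m, abs(Xm))
```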

2B.24
Standard Fourier Transforms

rect(t) ↔ sinc(f)    (F.1)

δ(t) ↔ 1    (F.2)

e^{−t} u(t) ↔ 1/(1 + j2πf)    (F.3)

A cos(2πf0t + θ) ↔ (A/2) e^{jθ} δ(f − f0) + (A/2) e^{−jθ} δ(f + f0)    (F.4)

Σk δ(t − kT0) ↔ f0 Σn δ(f − nf0)    (F.5)

2B.25
Fourier Transform Properties

Assuming g(t) ↔ G(f):

ag(t) ↔ aG(f)    (F.6)    Linearity

g(t/T) ↔ T G(fT)    (F.7)    Scaling

g(t − t0) ↔ G(f) e^{−j2πft0}    (F.8)    Time-shifting

g(t) e^{j2πf0t} ↔ G(f − f0)    (F.9)    Frequency-shifting

G(t) ↔ g(−f)    (F.10)    Duality

(d/dt) g(t) ↔ j2πf G(f)    (F.11)    Time-differentiation

∫ from −∞ to t of g(τ) dτ ↔ G(f)/(j2πf) + (G(0)/2) δ(f)    (F.12)    Time-integration

g1(t) g2(t) ↔ G1(f) * G2(f)    (F.13)    Multiplication

g1(t) * g2(t) ↔ G1(f) G2(f)    (F.14)    Convolution

∫ g(t) dt = G(0)    (F.15)    Area

∫ G(f) df = g(0)    (F.16)    Area
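A quick numerical spot-check of the table is easy to carry out. The sketch below (an addition, not part of the notes; the grid is an illustrative choice) verifies pair (F.3), e^{−t}u(t) ↔ 1/(1 + j2πf), and the area property (F.15), ∫ g(t) dt = G(0).

```python
import cmath, math

def fourier_transform(g, t, f):
    # Riemann-sum approximation of G(f)
    dt = t[1] - t[0]
    return [sum(gk * cmath.exp(-2j * math.pi * fk * tk) for gk, tk in zip(g, t)) * dt
            for fk in f]

N = 40001
t = [20.0 * k / (N - 1) for k in range(N)]       # e^{-t}u(t) is negligible past t = 20
g = [math.exp(-tk) for tk in t]

f = [0.0, 0.5, 1.0, 2.0]
G = fourier_transform(g, t, f)

for fk, Gk in zip(f, G):
    print(fk, Gk, 1 / (1 + 2j * math.pi * fk))   # numeric vs pair (F.3)

# Area property (F.15): the integral of g(t) equals G(0) (= 1 for this signal)
area = sum(g) * (t[1] - t[0])
print(area, abs(G[0] - area))
```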

2B.26
Summary

Aperiodic waveforms do not have a Fourier series – they have a Fourier
transform. Periodic waveforms also have a Fourier transform if we allow
for the existence of impulses in the transform.

A spectrum, or spectral density, of any waveform is a graph of the
amplitude and phase of the Fourier transform of the waveform.

To find the Fourier transform of a signal, we start with a known Fourier
transform pair, and apply any number of Fourier transform pair properties
to arrive at the solution. We can check if necessary by using the definition
of the Fourier transform and its inverse.

The Fourier transform of a periodic signal is a weighted train of impulses –
each impulse occurs at a harmonic of the fundamental and is weighted by
the corresponding Fourier series coefficient at that frequency.

References
Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York,
1994.
Lathi, B. P.: Modern Digital and Analog Communication Systems, Holt-Saunders, Tokyo, 1983.


2B.27
Exercises
1.
Write an expression for the time domain representation of the voltage signal
with double sided spectrum given below:

[Figure: double-sided spectrum X(f) – a component of 2∠−30° at f = −1000 Hz and 2∠30° at f = +1000 Hz]

What is the power of the signal?

2.
Use the integration and time shift rules to express the Fourier transform of the
pulse below as a sum of exponentials.

[Figure: pulse x(t) of amplitude 2, with breakpoints at t = −1, 0, 2 and 4]

2B.28
3.
Find the Fourier transforms of the following functions:

(a) [Figure: piecewise-constant waveform g(t) taking the values 10 and −5 between t = −2 and t = 2]

(b) [Figure: half-cosine pulse g(t) = cos t for −π/2 ≤ t ≤ π/2, amplitude 1, zero elsewhere]

(c) [Figure: two identical cycles of a t² segment over 0 ≤ t ≤ 4]

(d) [Figure: staircase waveform g(t) with levels 1, 2 and 3 over 0 ≤ t ≤ 5]

4.
Show that:

e^{−a|t|} ↔ 2a / (a² + (2πf)²)
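A numerical check of the pair in exercise 4 is sketched below. This is an addition to the notes, and it is a check, not a proof: it evaluates the Fourier integral of e^{−a|t|} by a Riemann sum (the value of a and the grid are illustrative assumptions) and compares against 2a/(a² + (2πf)²).

```python
import cmath, math

def fourier_transform(g, t, f):
    # Riemann-sum approximation of G(f)
    dt = t[1] - t[0]
    return [sum(gk * cmath.exp(-2j * math.pi * fk * tk) for gk, tk in zip(g, t)) * dt
            for fk in f]

a = 2.0                                          # illustrative decay rate
N = 20001
t = [-10.0 + 20.0 * k / (N - 1) for k in range(N)]
g = [math.exp(-a * abs(tk)) for tk in t]         # double-sided exponential

f = [0.0, 0.25, 0.5, 1.0]
G = fourier_transform(g, t, f)

for fk, Gk in zip(f, G):
    exact = 2 * a / (a ** 2 + (2 * math.pi * fk) ** 2)
    print(fk, Gk.real, exact)                    # the two columns should agree
```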

2B.29
5.
The Fourier transform of a pulse p(t) is P(f). Find the Fourier transform of g(t) in terms of P(f).

[Figure: p(t) is a pulse of height 1 on 0 ≤ t ≤ 1; g(t) is built from shifted and scaled copies of p(t), with the values 2 and −1.5 shown]

6.
Show that:
g t  t 0   g t  t 0   2G  f cos2ft 0 

7.
Use G1(f) and G2(f) to evaluate g1(t) * g2(t) for the signals shown below:

[Figure: g1(t) is a rectangular pulse of height 1 and width 2; g2(t) = a e^{−at} u(t)]

2B.30
8.
Find G1(f) * G2(f) for the signals shown below:

[Figure: G1(f) has height A between f = −f0 and f = f0; G2(f) consists of components of weight k at f = ±f0]

9.
Determine the signals g(t) whose Fourier transforms are shown below:

(a) [Figure: magnitude |G(f)| constant at A for |f| ≤ f0, zero elsewhere; phase ∠G(f) = −2πft0, a linear ramp]

(b) [Figure: magnitude |G(f)| constant at A for |f| ≤ f0, zero elsewhere; phase ∠G(f) = +π/2 for f < 0 and −π/2 for f > 0]

2B.31
10.
Using the time-shifting property, determine the Fourier transform of the signal shown below:

[Figure: g(t) = A for 0 < t < T and g(t) = −A for −T < t < 0]

11.
Using the time-differentiation property, determine the Fourier transform of the signal below:

[Figure: odd, piecewise-linear waveform g(t) running between the levels A and −A over −2 ≤ t ≤ 2]

2B.32
William Thomson (Lord Kelvin) (1824-1907)
William Thomson was probably the first true electrical engineer. His
engineering was firmly founded on a solid bedrock of mathematics. He
invented, experimented, advanced the state-of-the-art, was entrepreneurial, was
a businessman, had a multi-disciplinary approach to problems, held office in
the professional body of his day (the Royal Society), published papers, gave
lectures to lay people, strived for an understanding of basic physical principles
and exploited that knowledge for the benefit of mankind.
William Thomson was born in Belfast, Ireland. His father was a professor of
engineering. When Thomson was 8 years old his father was appointed to the
chair of mathematics at the University of Glasgow. By age 10, William
Thomson was attending Glasgow University. He studied astronomy, chemistry
and natural philosophy (physics, heat, electricity and magnetism). Prizes in
Greek, logic (philosophy), mathematics, astronomy and physics marked his
progress. In 1840 he read Fourier’s The Analytical Theory of Heat and wrote:
…I had become filled with the utmost admiration for the splendour and
poetry of Fourier… I took Fourier out of the University Library; and in a
fortnight I had mastered it - gone right through it.
At the time, lecturers at Glasgow University took a strong interest in the
approach of the French mathematicians towards physical science, such as
Lagrange, Laplace, Legendre, Fresnel and Fourier. In 1840 Thomson also read
Laplace’s Mécanique Céleste and visited Paris.
In 1841 Thomson entered Cambridge and in the same year he published a
paper on Fourier's expansions of functions in trigonometrical series. A more
important paper On the uniform motion of heat and its connection with the
mathematical theory of electricity was published in 1842.
The examinations in Cambridge were fiercely competitive exercises in problem
solving against the clock. The best candidates were trained as for an athletics
contest. Thomson (like Maxwell later) came second. A day before he left
Cambridge, his coach gave him two copies of Green’s Essay on the

2B.33
Application of Mathematical Analysis to the Theories of Electricity and
Magnetism.
After graduating, he moved to Paris on the advice of his father and because of
his interest in the French approach to mathematical physics. Thomson began
trying to bring together the ideas of Faraday, Coulomb and Poisson on
electrical theory. He began to try and unify the ideas of “action-at-a-distance”,
the properties of the “ether” and ideas behind an “electrical fluid”. He also
became aware of Carnot’s view of heat.
In 1846, at the age of twenty two, he returned to Glasgow on a wave of
testimonials from, among others, De Morgan, Cayley, Hamilton, Boole,
Sylvester, Stokes and Liouville, to take up the post of professor of natural
philosophy. In 1847-49 he collaborated with Stokes on hydrodynamic studies,
which Thomson applied to electrical and atomic theory. In electricity Thomson
provided the link between Faraday and Maxwell. He was able to mathematise
Faraday’s laws and to show the formal analogy between problems in heat and
electricity. Thus the work of Fourier on heat immediately gave rise to theorems
on electricity and the work of Green on potential theory immediately gave rise
to theorems on heat flow. Similarly, methods used to deal with linear and
rotational displacements in elastic solids could be applied to give results on
electricity and magnetism. The ideas developed by Thomson were later taken
up by Maxwell in his new theory of electromagnetism.
Thomson’s other major contribution to fundamental physics was his
combination of the almost forgotten work of Carnot with the work of Joule on
the conservation of energy to lay the foundations of thermodynamics. The
thermodynamical studies of Thomson led him to propose an absolute
temperature scale in 1848 (The Kelvin absolute temperature scale, as it is now
known, was defined much later after conservation of energy was better
understood).


2B.34
The Age of the Earth
In the first decades of the nineteenth century geological evidence for great
changes in the past began to build up. Large areas of land had once been under
water, mountain ranges had been thrown up from lowlands and the evidence of
fossils showed the past existence of species with no living counterparts. Lyell,
in his Principles of Geology, sought to explain these changes “by causes now
in operation”. According to his theory, processes – such as slow erosion by
wind and water; gradual deposition of sediment by rivers; and the cumulative
effect of earthquakes and volcanic action – combined over very long periods of
time to produce the vast changes recorded in the Earth’s surface. Lyell’s so-called ‘uniformitarian’ theory demanded that the age of the Earth be measured
in terms of hundreds of millions and probably in terms of billions of years.
Lyell was able to account for the disappearance of species in the geological
record but not for the appearance of new species. A solution to this problem
was provided by Charles Darwin (and Wallace) with his theory of evolution by
natural selection. Darwin’s theory also required vast periods of time for
operation. For natural selection to operate, the age of the Earth had to be
measured in many hundreds of millions of years.
Such demands for vast amounts of time run counter to the laws of
thermodynamics. Every day the sun radiates immense amounts of energy. By
the law of conservation of energy there must be some source of this energy.
Thomson, as one of the founders of thermodynamics, was fascinated by this
problem. Chemical processes (such as the burning of coal) are totally
insufficient as a source of energy and Thomson was forced to conclude that
gravitational potential energy was turned into heat as the sun contracted. On
this assumption his calculations showed that the Sun (and therefore the Earth)
was around 100 million years old.


2B.35
However, Thomson’s most compelling argument concerned the Earth rather
than the Sun. It is well known that the temperature of the Earth increases with
depth and
this implies a continual loss of heat from the interior, by conduction
outwards through or into the upper crust. Hence, since the upper crust does
not become hotter from year to year there must be a…loss of heat from the
whole earth. It is possible that no cooling may result from this loss of heat
but only an exhaustion of potential energy which in this case could scarcely
be other than chemical.
Since there is no reasonable mechanism to keep a chemical reaction going at a
steady pace for millions of years, Thomson concluded “…that the earth is
merely a warm chemically inert body cooling”. Thomson was led to believe
that the Earth was a solid body and that it had solidified at a more or less
uniform temperature. Taking the best available measurements of the
conductivity of the Earth and the rate of temperature change near the surface,
he arrived at an estimate of 100 million years as the age of the Earth
(confirming his calculations of the Sun’s age).
The problems posed to Darwin’s theory of evolution became serious as
Thomson’s arguments sank in. In the fifth edition of The Origin of Species,
Darwin attempted to adjust to the new time scale by allowing greater scope for
evolution by processes other than natural selection. Darwin was forced to ask
for a suspension of judgment of his theory and in the final chapter he added
With respect to the lapse of time not having been sufficient since our planet
was consolidated for the assumed amount of organic change, and this
objection, as argued by [Thomson], is probably one of the gravest yet
advanced, I can only say, firstly that we do not know at what rate species
change as measured by years, and secondly, that many philosophers are not
yet willing to admit that we know enough of the constitution of the universe
and of the interior of our globe to speculate with safety on its past duration.
(Darwin, The Origin of Species, Sixth Edition, p.409)


2B.36
The chief weakness of Thomson’s arguments was exposed by Huxley:

…this seems to be one of the many cases in which the admitted accuracy of
mathematical processes is allowed to throw a wholly inadmissible appearance
of authority over the results obtained by them. Mathematics may be compared
to a mill of exquisite workmanship, which grinds you stuff of any degree of
fineness; but, nevertheless, what you get out depends on what you put in; and
as the grandest mill in the world will not extract wheat-flour from peascods,
so pages of formulae will not get a definite result out of loose data.
(Quarterly Journal of the Geological Society of London, Vol. 25, 1869)

A variant of the adage: “Garbage in equals garbage out”.

However, Thomson’s estimates were the best available, and for the next thirty
years geology took its time from physics, and biology took its time from
geology. But Thomson and his followers began to adjust his first estimate down,
until at the end of the nineteenth century the best physical estimates of the age
of the Earth and Sun were about 20 million years, whilst the minimum the
geologists could allow was closer to Thomson’s original 100 million years.

A problem for the geologists was now replaced by a problem for the physicists.
Then in 1904 Rutherford announced that the radioactive decay of radium was
accompanied by the release of immense amounts of energy, and speculated that
this could replace the heat lost from the surface of the Earth.

The discovery of the radioactive elements…thus increases the possible limit
of the duration of life on this planet, and allows the time claimed by the
geologist and biologist for the process of evolution. (Rutherford quoted in
Burchfield, p.164)

The answer was provided by a theory which was just beginning to be gossiped
about. Einstein’s theory of relativity extended the principle of conservation of
energy by taking matter as a form of energy. It is the conversion of matter to
heat which maintains the Earth’s internal temperature and supplies the energy
radiated by the Sun. The ratios of lead isotopes in the Earth compared to
meteorites now lead geologists to give the Earth an age of about 4.55 billion
years.

2B.37
The Transatlantic Cable
The invention of the electric telegraph in the 1830s led to a network of
telegraph wires covering England, western Europe and the more settled parts of
the USA. The railroads, spawned by the dual inventions of steam and steel,
were also beginning to crisscross those same regions. It was vital for the
smooth and safe running of the railroads, as well as the running of empires, to
have speedy communication.

Attempts were made to provide underwater links between the various separate
systems. The first cable between Britain and France was laid in 1850. After 12
hours a trawler accidentally caught and cut the cable. A second, more heavily
armoured cable was laid, and it was a complete success. The short lines worked,
but the operators found that signals could not be transmitted along submarine
cables as fast as along land lines without becoming confused.

In spite of the record of the longer lines, the American Cyrus J. Field proposed
a telegraph line linking Europe and America. Oceanographic surveys showed
that the bottom of the Atlantic was suitable for cable laying. The connection of
existing land telegraph lines had produced a telegraph line of the length of the
proposed cable, through which signals had been passed extremely rapidly. The
British government offered a subsidy, and money was rapidly raised.

Faraday had predicted signal retardation, but he and others like Morse had in
mind a model of a submarine cable as a hosepipe, which took longer to fill with
water (signal) as it got longer. The remedy was thus to use a thin wire (so that
less electricity was needed to charge it) and high voltages to push the signal
through. Faraday’s opinion was shared by the electrical adviser to the project,
Dr Whitehouse (a medical doctor).

Thomson’s researches had given him a clearer mathematical picture of the
problem. The current in a telegraph wire in air is approximately governed by
the wave equation. A pulse on such a wire travels at a well-defined speed with

2B.38
no change of shape or magnitude with time. Signals can be sent as close
together as the transmitter can make them and the receiver distinguish them.

In undersea cables of the type proposed, capacitive effects dominate and the
current is approximately governed by the diffusion (i.e. heat) equation. This
equation predicts that electric pulses will last for a time that is proportional to
the length of the cable squared. In going from submarine cables of 50 km
length to cables of length 2400 km, retardation effects are 2500 times worse. If
two or more signals are transmitted within this time, the signals will be
jumbled at the receiver. Moreover, increasing the voltage makes this jumbling
(called intersymbol interference) worse. Also, the diffusion equation shows
that the wire should have as large a diameter as possible (small resistance).

Even though Thomson was on the board of directors of Field’s company, he
had no authority over the technical advisers. Whitehouse, whose professional
reputation was now involved, denied these conclusions. Moreover, the
production of the cable was already underway on principles contrary to
Thomson’s. Testing the cable, Thomson was astonished to find that some
sections conducted only half as well as others, even though the manufacturers
were supplying copper to the then highest standards of purity.

In a first attempt in 1857 the cable snapped after 540 km had been laid. In
1858 the cable was finally laid. On 16 August it carried a 99-word message of
greeting from Queen Victoria to President Buchanan. But that 99-word
message took 16½ hours to get through.

Realising that the success of the enterprise would depend on a fast, sensitive
detector, Thomson set about to invent one. The problem with an ordinary
galvanometer is the high inertia of the needle. Thomson came up with the
mirror galvanometer, in which the pointer is replaced by a beam of light. Only
Thomson’s galvanometer was sensitive enough to interpret the minute and
blurred messages coming through. In vain, Whitehouse tried to get his receiver
to work. Finally, Whitehouse ordered that a series of huge two-thousand-volt
induction coils be used to try to push the message through faster – after four
weeks of

2B.39
this treatment the insulation finally failed: 2500 tons of cable and £350 000 of
capital lay useless on the ocean floor. In 1859 eighteen thousand kilometres of
undersea cable had been laid in other parts of the world, and only five
thousand kilometres were operating. In 1861 civil war broke out in the United
States.

By 1864 Field had raised enough capital for a second attempt. Once again the
British Government supported the project – the importance of quick
communication in controlling an empire was evident to everybody. The cable
was designed in accordance with Thomson’s theories. Strict quality control
was exercised: the copper was so pure that for the next 50 years ‘telegraphist’s
copper’ was the purest available. The commercial success of the cable would
depend on the rate at which messages could be transmitted. Thomson had
promised the company a rate of 8 or even 12 words a minute, and was
confident that the cable would do all that was required. Half a million pounds
was being staked on the correctness of the solution of a partial differential
equation.

The new cable was mechanically much stronger but also heavier. Only one
ship was large enough to handle it, and that was Brunel’s Great Eastern. She
was five times larger than any other existing ship. This time there was a
competitor: the Western Union Company had decided to build a cable along
the overland route across America, the Bering Straits, Alaska, Siberia and
Russia to reach Europe the long way round.

In 1865 the Great Eastern laid cable for nine days, but after 2000 km the cable
parted. After two weeks of unsuccessfully trying to recover the cable, the
expedition left a buoy to mark the spot and sailed for home. Since
communication had been perfect up until the final break, the company decided
to build and lay a new cable, and then go back and complete the old one.

Cable laying for the third attempt started on 12 July 1866, and the cable was
landed on the morning of the 27th. On the 28th the cable was open for business
and earned £1000. Western Union ordered all work on their project to be
stopped, at a loss of $3 000 000. Europe and America were finally linked by
cable.

2B.40
On 1 September, after three weeks of effort, the old cable was recovered, and
on 8 September a second perfect cable linked America and Europe. A wave of
knighthoods swept over the engineers and directors. For his work on the
transatlantic cable Thomson was created Baron Kelvin of Largs in 1892. The
Kelvin is the river which runs through the grounds of Glasgow University, and
Largs is the town on the Scottish coast where Thomson built his house. The
patents which Thomson held made him a wealthy man.

Other Achievements
Thomson worked on several problems associated with navigation – sounding
machines, lighthouse lights, compasses and the prediction of tides. Tides are
primarily due to the gravitational effects of the Moon, Sun and Earth on the
oceans, but their theoretical investigation, even in the simplest case of a single
ocean covering a rigid Earth to a uniform depth, is very hard. Even today, the
study of only slightly more realistic models is only possible by numerical
computer modelling. There are many other factors influencing local tides –
such as channel width – which produce phenomena akin to resonance in the
tides. One example of this is the narrow Bay of Fundy, between Nova Scotia
and New Brunswick, where the tide can be as high as 21 m. In contrast, the
Mediterranean Sea is almost tideless because it is a broad body of water with a
narrow entrance.

Thomson recognised that the forces affecting the tides change periodically. He
then approximated the height of the tide by a trigonometric polynomial – a
Fourier series with a finite number of terms. Thomson also built a machine,
called the harmonic analyser, to perform the task “which seemed to the
Astronomer Royal so complicated and difficult that no machine could master
it” of computing the Fourier series coefficients from the record of past heights.
This was the first major victory in the struggle “to substitute brass for brain” in
calculation.
The coefficients of the polynomial required calculation of the Fourier series
coefficients by numerical integration – a task that “…required not less than
twenty hours of calculation by skilled arithmeticians.” To reduce this labour
Thomson designed and built a machine which would trace out the predicted
height of the tides for a year in a few minutes, given the Fourier series
coefficients. Michelson (of Michelson-Morley fame) was later to build a better
machine that used up to 80 Fourier series coefficients. The production of
‘blips’ at discontinuities by this machine was explained by Gibbs in two letters
to Nature. These ‘blips’ are now referred to as the “Gibbs phenomenon”.

2B.41
Thomson introduced many teaching innovations to Glasgow University. He
introduced laboratory work into the degree courses, keeping this part of the
work distinct from the mathematical side. He encouraged the best students by
offering prizes. There were also prizes which Thomson gave to the student that
he considered most deserving.

In later life he developed a complete range of measurement instruments for
physics and electricity. He also established standards for all the quantities in
use in physics. In all he published over 300 major technical papers during the
53 years that he held the chair of Natural Philosophy at the University of
Glasgow.

Thomson worked in collaboration with Tait to produce the now famous text
Treatise on Natural Philosophy, which they began working on in the early
1860s. Many volumes were intended, but only two were ever written, which
cover kinematics and dynamics. These became standard texts for many
generations of scientists.

During the first half of Thomson’s career he seemed incapable of being wrong,
while during the second half of his career he seemed incapable of being right.
This seems too extreme a view, but Thomson’s refusal to accept atoms, his
opposition to Darwin’s theories, his incorrect speculations as to the age of the
Earth and the Sun, and his opposition to Rutherford’s ideas of radioactivity
certainly put him on the losing side of many arguments later in his career.

William Thomson, Lord Kelvin, died in 1907 at the age of 83. He was buried
in Westminster Abbey in London, where he lies today, adjacent to Isaac
Newton.

References
Burchfield, J.D.: Lord Kelvin and The Age of the Earth, Macmillan, 1975.
Encyclopedia Britannica: William Thomson, Lord Kelvin, 2004.
Kelvin & Tait: Treatise on Natural Philosophy, Appendix B.
Körner, T.W.: Fourier Analysis, Cambridge University Press, 1988.
Morrison, N.: Introduction to Fourier Analysis, John Wiley & Sons, Inc., 1994.
Thompson, S.P.: The Life of Lord Kelvin, 1976.

3A.1
Lecture 3A – Filtering and Sampling
Response to a sinusoidal input. Response to an arbitrary input. Ideal filters.
What does a filter do to a signal? Sampling. Reconstruction. Aliasing.
Practical sampling and reconstruction. Summary of the sampling and
reconstruction process. Finding the Fourier series of a periodic function from
the Fourier transform of a single period. Windowing in the time domain.
Practical multiplication and convolution.

Introduction
Since we can now represent signals in terms of a Fourier series (for periodic
signals) or a Fourier transform (for aperiodic signals), we seek a way to
describe a system in terms of frequency. That is, we seek a model of a linear,
time-invariant system governed by continuous-time differential equations that
expresses its behaviour with respect to frequency, rather than time: we have a
description of signals in the frequency domain – we need one for systems. The
systems concept of a signal’s spectrum and a system’s frequency response will
be seen to be of fundamental importance in the frequency-domain
characterisation of a system. The power of the frequency-domain approach
will be seen as we are able to determine a system’s output given almost any
input. Fundamental signal operations – such as sampling / reconstruction and
modulation / demodulation – can also be explained easily in the frequency
domain, where they would otherwise appear bewildering in the time domain.

Response to a Sinusoidal Input
Starting with a convolution description of a system, we have already seen that
the output of an LTI system is given by:

y(t) = h(t) * x(t)    (3A.1)

Suppose the input to the system (with zero initial conditions) is the sinusoid:

x(t) = A cos(ω0 t + θ)    (3A.2)

We have already seen that this can be expressed (thanks to Euler) as a sum of two complex conjugate counter-rotating phasors:

x(t) = (A/2)e^(jφ)e^(jω₀t) + (A/2)e^(-jφ)e^(-jω₀t) = Xe^(jω₀t) + X*e^(-jω₀t)    (3A.3)

where X is the phasor representing x(t). Inserting this into Eq. (3A.1) gives:

y(t) = ∫ h(λ)[Xe^(jω₀(t-λ)) + X*e^(-jω₀(t-λ))] dλ
     = [∫ h(λ)e^(-jω₀λ) dλ] Xe^(jω₀t) + [∫ h(λ)e^(jω₀λ) dλ] X*e^(-jω₀t)    (3A.4)

This rather unwieldy expression can be simplified. First of all, if we take the Fourier transform of the impulse response, we get:

H(ω) = ∫_{-∞}^{∞} h(t)e^(-jωt) dt    (3A.5)

where obviously ω = 2πf. The Fourier transform of the impulse response appears in our analysis, and Eq. (3A.4) can now be written as:

y(t) = H(ω₀)Xe^(jω₀t) + H(-ω₀)X*e^(-jω₀t)    (3A.6)

If h(t) is real, then:

H(-ω) = H*(ω)    (3A.7)

which should be obvious by looking at the definition of the Fourier transform. Now let:

Y = H(ω₀)X    (3A.8)

The Fourier transform of the impulse response relates the output phasor to the input phasor!

This equation is of fundamental importance! It says that the output phasor of a system is equal to the input phasor, scaled in magnitude and changed in angle by an amount equal to H(ω₀) (a complex number). Also:

Y* = H*(ω₀)X* = H(-ω₀)X*    (3A.9)

Eq. (3A.6) can now be written as:

y(t) = Ye^(jω₀t) + Y*e^(-jω₀t)    (3A.10)

which is just another way of writing the sinusoid:

y(t) = A|H(ω₀)| cos(ω₀t + φ + ∠H(ω₀))    (3A.11)

The magnitude and phase of the input sinusoid change according to the Fourier transform of the impulse response. Hence the response resulting from the sinusoidal input x(t) = A cos(ω₀t + φ) is also a sinusoid with the same frequency ω₀, but with the amplitude scaled by the factor |H(ω₀)| and the phase shifted by an amount ∠H(ω₀).

The function H(ω) is termed the frequency response; |H(ω)| is called the magnitude response and ∠H(ω) is called the phase response. Note that the system impulse response and the frequency response form a Fourier transform pair:

h(t) ↔ H(f)    (3A.12)

We now have an easy way of analysing systems with sinusoidal inputs: simply determine H(f) and apply Y = H(f₀)X. There are two ways to find the frequency response: we can find the system impulse response h(t) and take the Fourier transform, or we can find it directly from the differential equation describing the system.

Example

For the simple RC circuit below, find the forced response to an arbitrary sinusoid (this is also termed the sinusoidal steady-state response).

[Figure 3A.1 – finding the frequency response of a simple system: an RC circuit with input vi across R in series with C, and output vo taken across C]

The input/output differential equation for the circuit is:

dvo(t)/dt + (1/RC)vo(t) = (1/RC)vi(t)    (3A.13)

which is obtained by KVL. Since the input is a sinusoid, which is really just a sum of conjugate complex exponentials, we know from Eq. (3A.6) that if the input is Vi = Ae^(j(ω₀t+φ)) then the output is Vo = H(ω₀)Ae^(j(ω₀t+φ)). Note that Vi and Vo are complex numbers, and if the factor e^(jω₀t) were suppressed they would be phasors. The differential equation Eq. (3A.13) becomes:

d/dt [H(ω₀)Ae^(j(ω₀t+φ))] + (1/RC)H(ω₀)Ae^(j(ω₀t+φ)) = (1/RC)Ae^(j(ω₀t+φ))    (3A.14)

and thus:

jω₀H(ω₀)Ae^(j(ω₀t+φ)) + (1/RC)H(ω₀)Ae^(j(ω₀t+φ)) = (1/RC)Ae^(j(ω₀t+φ))    (3A.15)

Dividing both sides by Vi = Ae^(j(ω₀t+φ)) gives:

jω₀H(ω₀) + (1/RC)H(ω₀) = 1/RC    (3A.16)

and therefore:

H(ω₀) = (1/RC) / (jω₀ + 1/RC)    (3A.17)

which yields, for an arbitrary frequency, the frequency response of a lowpass RC circuit:

H(ω) = (1/RC) / (jω + 1/RC)    (3A.18)

This is the frequency response for the simple RC circuit. Using your standard transforms, we know that the impulse response is:

h(t) = (1/RC)e^(-t/RC)u(t)    (3A.19)

As a check, show that the frequency response is the Fourier transform of the impulse response. The magnitude function is:

|H(ω)| = (1/RC) / √(ω² + (1/RC)²)    (3A.20)

and the phase function is:

∠H(ω) = -tan⁻¹(ωRC)    (3A.21)
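As a minimal numerical sketch (not part of the notes; it assumes numpy is available), the frequency response of Eq. (3A.18) can be evaluated directly and checked against the magnitude and phase functions of Eqs. (3A.20) and (3A.21) at the cutoff frequency:

```python
import numpy as np

# H(w) = (1/RC) / (jw + 1/RC), the frequency response of the RC lowpass circuit
def rc_freq_response(w, R, C):
    w0 = 1.0 / (R * C)
    return w0 / (1j * w + w0)

R, C = 1e3, 1e-9        # 1 kOhm and 1 nF (the values used in a later example)
w0 = 1.0 / (R * C)      # cutoff frequency: 1e6 rad/s

H = rc_freq_response(w0, R, C)
print(abs(H))                    # 1/sqrt(2): the half-power point, per Eq. (3A.20)
print(np.degrees(np.angle(H)))   # -45 degrees, per Eq. (3A.21)
```

At ω = ω₀ the response is 1/(1 + j), so the magnitude is 1/√2 and the phase is -45°, exactly as the plots that follow show.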

Plots of the magnitude and phase function are shown below:

[Figure 3A.2 – a graph of the frequency response as magnitude and phase: |H(ω)| falls from 1 at DC through 1/√2 at ω₀; ∠H(ω) falls from 0° through -45° at ω₀ towards -90°]

The system's behaviour can now be described in terms of frequency. The behaviour of the RC circuit is summarized by noting that it passes low frequency signals without any significant attenuation and without producing any significant phase shift. As the frequency increases, the attenuation and the phase shift become larger. Finally, as the frequency increases to ∞, the RC circuit completely "blocks" the sinusoidal input. As a result of this behaviour, the circuit is an example of a lowpass filter. The frequency ω₀ = 1/RC is termed the cutoff frequency. The bandwidth of the filter is also equal to ω₀.

Response to an Arbitrary Input

Since we can readily establish the response of a system to a single sinusoid, we should be able to find the response of a system to a sum of sinusoids: if we can do one, we can do an infinite number. That is, "a spectrum in gives a spectrum out". For periodic inputs, the spectrum is effectively given by the Fourier series, and for aperiodic inputs, we use the Fourier transform. The system in both cases is described by its frequency response, with the relationship between the output and input spectra given by the frequency response.

Periodic Inputs

For periodic inputs, we can express the input signal by a complex exponential Fourier series:

x(t) = Σ_{n=-∞}^{∞} Xₙ e^(jnω₀t)    (3A.22)

It follows from the previous section that the output response resulting from the complex exponential input Xₙe^(jnω₀t) is equal to H(nω₀)Xₙe^(jnω₀t). By linearity, the response to the periodic input x(t) is:

y(t) = Σ_{n=-∞}^{∞} H(nω₀)Xₙ e^(jnω₀t)    (3A.23)

Since the right-hand side is a complex exponential Fourier series, the output y(t) must be periodic, with fundamental frequency equal to that of the input, i.e. the output has the same period as the input. It can be seen that the only thing we need to determine is the new set of Fourier series coefficients – the frequency response simply multiplies the input Fourier series coefficients to produce the output Fourier series coefficients:

Yₙ = H(nω₀)Xₙ    (3A.24)

The output magnitude spectrum is just:

|Yₙ| = |H(nω₀)||Xₙ|    (3A.25)

and the output phase spectrum is:

∠Yₙ = ∠H(nω₀) + ∠Xₙ    (3A.26)

Don't forget – the frequency response is just a frequency dependent complex number. These relationships describe how the system "processes" the various complex exponential components comprising the periodic input signal. In particular, Eq. (3A.25) determines whether the system will pass or attenuate a given component of the input, and Eq. (3A.26) determines the phase shift the system will give to a particular component of the input.

Aperiodic Inputs

If we can do finite sinusoids, we can do infinitesimal sinusoids too! Taking the Fourier transform of both sides of the time domain input/output relationship of an LTI system – starting with the convolution integral again:

y(t) = h(t) * x(t)    (3A.27)

and transforming to the frequency domain, we get:

Y(f) = ∫_{-∞}^{∞} [h(t) * x(t)] e^(-jωt) dt    (3A.28)

Substituting the definition of convolution, we get:

Y(f) = ∫_{-∞}^{∞} [∫_{-∞}^{∞} h(λ)x(t-λ) dλ] e^(-jωt) dt    (3A.29)

This can be rewritten in the form:

Y(f) = ∫_{-∞}^{∞} h(λ) [∫_{-∞}^{∞} x(t-λ) e^(-jωt) dt] dλ    (3A.30)

Using the change of variable σ = t - λ in the second integral gives:

Y(f) = ∫_{-∞}^{∞} h(λ) [∫_{-∞}^{∞} x(σ) e^(-jω(σ+λ)) dσ] dλ    (3A.31)

Factoring out e^(-jωλ) from the second integral, we can write:

Y(f) = [∫_{-∞}^{∞} h(λ)e^(-jωλ) dλ] [∫_{-∞}^{∞} x(σ)e^(-jωσ) dσ]    (3A.32)

which is:

Y(f) = H(f)X(f)    (3A.33)

This is also a proof of the "convolution in time property" of Fourier transforms: convolution in the time-domain is multiplication in the frequency-domain. Eq. (3A.33) is the frequency-domain version of the relationship given by Eq. (3A.27). It says that the spectrum of the output signal is equal to the product of the frequency response and the spectrum of the input signal – the output spectrum is obtained by multiplying the input spectrum by the frequency response. The output magnitude spectrum is scaled:

|Y(f)| = |H(f)||X(f)|    (3A.34)

and the output phase spectrum is added to:

∠Y(f) = ∠H(f) + ∠X(f)    (3A.35)

Note that the frequency-domain description applies to all inputs that can be Fourier transformed, including sinusoids if we allow impulses in the spectrum. Periodic inputs are then a special case of Eq. (3A.33). By similar arguments, together with the duality property of the Fourier transform, it can be shown that convolution in the frequency-domain is equivalent to multiplication in the time-domain.
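The "convolution in time" property of Eq. (3A.33) is easy to verify numerically. The sketch below (not from the notes; it assumes numpy) checks the discrete analogue: the DFT of a linear convolution equals the product of the DFTs, provided both sequences are zero-padded to the full output length so that the FFT's circular convolution matches the linear one:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(50)    # arbitrary "input signal" samples
h = rng.standard_normal(20)    # arbitrary "impulse response" samples

y = np.convolve(h, x)          # time-domain: y = h * x (linear convolution)

N = len(x) + len(h) - 1        # common length after zero-padding
Y = np.fft.fft(h, N) * np.fft.fft(x, N)   # frequency-domain: Y = H X

print(np.allclose(y, np.fft.ifft(Y).real))   # the two routes agree
```

The zero-padding step is the discrete counterpart of the infinite limits in Eqs. (3A.31) and (3A.32).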

Ideal Filters

Now that we have a feel for the frequency-domain description and behaviour of a system, we will briefly examine a very important application of electronic circuits – that of frequency selection, or filtering. Here we will examine ideal filters – the topic of real filter design is rather involved.

Ideal filters pass sinusoids within a given frequency range, and reject (completely attenuate) all other sinusoids. Frequencies that are passed are said to be in the passband, while those that are rejected lie in the stopband. The point where passband and stopband meet is called ω₀, the cutoff frequency. The term bandwidth as applied to a filter corresponds to the width of the passband. An example of an ideal lowpass filter is shown below:

[Figure 3A.3 – an ideal lowpass filter: |H| = 1 in the passband below the cutoff ω₀, and 0 in the stopband above it]

Other basic types of filters are highpass, bandpass and bandstop. All have similar definitions as given in Figure 3A.3. An ideal lowpass filter with a bandwidth of B Hz has a magnitude response:

|H(f)| = K rect(f / 2B)    (3A.36)

Phase Response of an Ideal Filter

Most filter specifications deal with the magnitude response. In systems where the filter is designed to pass a particular "wave shape", however, the phase response is extremely important. For example, in a digital system we may be sending 1's and 0's using a specially shaped pulse that has "nice" properties, e.g. its value is zero at the centre of all other pulses. At the receiver it is passed through a lowpass filter to remove high frequency noise. The filter introduces a delay of D seconds, but the output of the filter is as close as possible to the desired pulse shape. This is illustrated below:

[Figure 3A.4 – a filter introducing a delay D but retaining the signal "shape": a pulse on 0 ≤ t ≤ 1 enters the lowpass filter and emerges delayed but otherwise intact]

To the pulse, the filter just looks like a delay. We can see that distortionless transmission through a filter is characterised by a constant delay of the input signal:

vo(t) = K vi(t - D)    (3A.37)

In Eq. (3A.37) we have also included the fact that all frequencies in the passband of the filter can have their amplitudes multiplied by a constant without affecting the wave shape. Also note that Eq. (3A.37) applies only to frequencies in the passband of a filter – we do not care about any distortion in a stopband.

We will now relate these distortionless transmission requirements to the phase response of the filter. From Fourier analysis we know that any periodic signal can be decomposed into an infinite summation of sinusoidal signals. Let one of these be:

vi = A cos(ωt + φ)    (3A.38)

From Eq. (3A.37), the output of the filter will be:

vo = KA cos(ω(t - D) + φ) = KA cos(ωt - ωD + φ)    (3A.39)

The input and output signals differ only by the gain K and the phase angle, which is:

θ = -ωD    (3A.40)

That is, distortionless transmission requires a linear phase: the phase response must be a straight line with negative slope that passes through the origin. In general, the requirement for the phase response in the passband to achieve distortionless transmission through the filter is:

D = -dθ/dω    (3A.41)

The delay D in this case is referred to as the group delay. (This means the group of sinusoids that make up the wave shape have a delay of D.) The ideal lowpass filter can be expressed completely as:

H(f) = K rect(f / 2B) e^(-j2πfD)    (3A.42)
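The group delay definition of Eq. (3A.41) can be checked numerically. The sketch below (not from the notes; it assumes numpy) differentiates the RC circuit's phase response θ(ω) = -tan⁻¹(ωRC) and compares against the closed form D = RC / (1 + (ωRC)²) that follows from Eq. (3A.41):

```python
import numpy as np

R, C = 1e3, 1e-9                  # 1 kOhm, 1 nF, so RC = 1 us
RC = R * C

w = np.linspace(0.0, 2e6, 20001)  # frequencies up to 2/RC rad/s
theta = -np.arctan(w * RC)        # phase response of the RC lowpass circuit

D_numeric = -np.gradient(theta, w)        # numerical D = -d(theta)/dw
D_analytic = RC / (1 + (w * RC) ** 2)     # closed-form derivative

print(np.allclose(D_numeric, D_analytic, rtol=1e-3))  # the two agree
print(D_numeric[0])                       # at DC the group delay is about RC
```

Note the low-frequency group delay is simply RC, which anticipates the "approximately 1 μs" result of the example that follows.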

Example

We would like to determine the maximum frequency for which transmission is practically distortionless in the following simple filter, which models a short piece of co-axial cable or twisted pair:

[Figure 3A.5 – an RC lowpass model of a short piece of co-axial cable or twisted pair: R = 1 kΩ, C = 1 nF, ω₀ = 1/RC]

We would also like to know the group delay caused by this filter. We know the magnitude and phase response already:

|H(ω)| = 1 / √(1 + (ω/ω₀)²)    (3A.43a)

∠H(ω) = -tan⁻¹(ω/ω₀)    (3A.43b)

These responses are shown below:

[Figure 3A.6 – the deviation from constant magnitude and linear phase for a simple first-order filter: |H| falls to 1/√2 at ω₀, and the phase passes through -π/4 at ω₀ on its way to -π/2]

Suppose we can tolerate a deviation in the magnitude response of 1% in the "passband". We then have:

1 / √(1 + (ω/ω₀)²) ≥ 0.99    (3A.44)

which gives ω ≤ 0.1425ω₀. Also, suppose we can tolerate a deviation in the delay of 1% in the "passband". We first find an expression for the delay:

D = -dθ/dω = (1/ω₀) · 1 / (1 + (ω/ω₀)²)    (3A.45)

and then impose the condition that the delay be within 1% of the delay at DC:

(1/ω₀) · 1 / (1 + (ω/ω₀)²) ≥ 0.99/ω₀    (3A.46)

which gives ω ≤ 0.1005ω₀. We can see from Eqs. (3A.44) and (3A.45) that we must have ω ≤ 0.1ω₀ for practically distortionless transmission. Over this range of frequencies, the phase deviates from the ideal linear characteristic by at most |π/4 - 1| ≈ 0.2146 radians (see Figure 3A.6). Frequencies well below ω₀ are transmitted practically without distortion, but frequencies in the vicinity of ω₀ will suffer some distortion. The delay for ω ≤ 0.1ω₀, according to Eq. (3A.45), is approximately given by:

D ≈ 1/ω₀    (3A.47)

For the values shown in Figure 3A.5, the group delay is approximately 1 μs. In practice, variations in the magnitude transfer function up to the half-power frequency are considered tolerable (this is the bandwidth, BW = ω₀, of the filter).

The ideal filter is unrealizable. To show this, take the inverse Fourier transform of the ideal filter's frequency response, Eq. (3A.42):

h(t) = 2BK sinc(2B(t - D))    (3A.48)

It should be clear that the impulse response is not zero for t < 0, and the filter is therefore not causal (how can there be a response to an impulse at t = 0 before it is applied?). An ideal filter is unrealizable because it is non-causal. One way to design a real filter is simply to multiply Eq. (3A.48) by u(t), the unit step.
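The non-causality of Eq. (3A.48) is easy to exhibit numerically. In the sketch below (not from the notes; B, K and D are illustrative values, and numpy's sinc is the normalised sinc(x) = sin(πx)/(πx) used throughout these notes), the impulse response is plainly non-zero before the impulse is applied at t = 0:

```python
import numpy as np

B, K, D = 1.0, 1.0, 2.0   # illustrative bandwidth (Hz), gain and delay (s)

def h(t):
    # Ideal lowpass impulse response, Eq. (3A.48): h(t) = 2BK sinc(2B(t - D))
    return 2 * B * K * np.sinc(2 * B * (t - D))

print(h(-0.25))   # non-zero at a negative time: the filter responds "early"
print(h(D))       # the peak value 2BK = 2 occurs at t = D
```

Because h(t) ≠ 0 for t < 0, no physical circuit can realise this response exactly; real designs only approximate it.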

What does a Filter do to a Signal?

Filtering a periodic signal

Passing a periodic signal through a filter will distort the signal (so-called linear distortion) because the filter will change the relative amplitudes and phases of the sinusoids that make up its Fourier series. Once we can calculate the amplitude change and phase shift that the filter imposes on an arbitrary sinusoid, we are in a position to find out how each sinusoid is affected, and hence synthesise the filtered waveform.

[Figure 3A.7 – a signal x(t) passed through a filter H(f) to give y(t); e.g. for an RC circuit, H(f) = 1/(j2πfRC + 1)]

In general, the output Fourier transform is just the input Fourier transform multiplied by the filter frequency response H(f):

Y(f) = H(f)X(f)    (3A.49)

For a periodic signal:

X(f) = Σ_{n=-∞}^{∞} Xₙ δ(f - nf₀)    (3A.50)

and therefore the spectrum of a filtered periodic signal is:

Y(f) = Σ_{n=-∞}^{∞} H(nf₀)Xₙ δ(f - nf₀)    (3A.51)

Example

A periodic signal has a Fourier series as given in the following table:

Input Fourier series table
  n   Amplitude   Phase
  0       1         –
  1       2       -30°
  2       3       -90°

What is the Fourier series of the output if the signal is passed through a filter with transfer function H(f) = j4πf?

The period is 2 seconds, so there are 3 components in the input signal, with frequencies 0, 0.5 Hz and 1 Hz. The complex gain of the filter at each frequency is:

Filter gain table
  n    H(f)   Gain   Phase shift
  0     0       0        –
  1    j2π     2π       90°
  2    j4π     4π       90°

Hence, the output Fourier series table is:

Output Fourier series table
  n   Amplitude   Phase
  0       0         –
  1      4π        60°
  2     12π         0°
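The table calculation above follows the pattern of Eqs. (3A.25) and (3A.26): scale each amplitude by |H(nf₀)| and add the angle of H(nf₀) to each phase. A short sketch (not from the notes; it assumes numpy) automates it for the filter H(f) = j4πf:

```python
import numpy as np

f0 = 0.5                                              # fundamental: period T0 = 2 s
X = {0: (1.0, 0.0), 1: (2.0, -30.0), 2: (3.0, -90.0)} # input table: n -> (amplitude, phase in deg)

def H(f):
    return 1j * 4 * np.pi * f                         # the filter H(f) = j4*pi*f

Y = {}
for n, (A, ph) in X.items():
    Hn = H(n * f0)
    # amplitude scaled by |H(n f0)|, phase shifted by angle(H(n f0))
    Y[n] = (A * abs(Hn), ph + np.degrees(np.angle(Hn)))

print(Y[1])   # amplitude 4*pi, phase 60 degrees
print(Y[2])   # amplitude 12*pi, phase 0 degrees
```

The printed values reproduce the output Fourier series table: 2 × 2π = 4π at +60°, and 3 × 4π = 12π at 0°.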

Example

Suppose the same signal was sent through a filter with transfer function as sketched:

[Figure 3A.8 – a sketched transfer function: |H(f)| falls linearly from 1 at f = 0 to 0.5 at f = ±1 Hz, and ∠H(f) is linear in f]

The output Fourier series is:

Output Fourier series table
  n   Amplitude   Phase
  0       1         –
  1      1.5     -120°
  2      1.5     -270°

Sampling

Sampling is one of the most important operations we can perform on a signal – once a continuous-time waveform has been sampled, we can process it digitally. Samples can be quantized and then operated upon digitally (digital signal processing). Once processed, the samples are turned back into a continuous-time waveform (e.g. CD, mobile phone!). Here we demonstrate how, if certain parameters are right, a sampled signal can be reconstructed from its samples almost perfectly.

Ideal sampling involves multiplying a waveform by a uniform train of impulses. The result is a weighted train of impulses, where the weights of the impulses are the sample values to be used by a digital signal processor (computer). An ideal sampler is shown below:

[Figure 3A.9 – an ideal sampler: g(t) multiplied by an impulse train p(t), with impulses at 0, ±Ts, ±2Ts, ..., gives gs(t) = g(t)p(t)]

The period Ts of the sampling waveform p(t) is called the sample period or sampling interval. The inverse of the sample period is fs = 1/Ts and is called the sample rate or sampling frequency. It is usually expressed in samples per second (Sa/s), or simply Hz.

Let g(t) be a time domain signal. If we multiply it by p(t) = Σ_k δ(t - kTs), we get an ideally sampled version:

gs(t) = g(t) Σ_{k=-∞}^{∞} δ(t - kTs)    (3A.52)

Graphically, in the time-domain, the original signal forms an envelope for the train of impulses – an ideal sampler produces a train of impulses, each impulse weighted by the original signal, where each weight takes on the signal value at that particular instant of time:

[Figure 3A.10 – g(t), the unit impulse train p(t) with spacing Ts, and the sampled signal gs(t): a train of impulses whose weights follow g(t)]

Note that it is physically impossible to ideally sample a waveform, since we cannot create a real function containing impulses. To practically sample a waveform using analog circuitry, we have to use finite value pulses. It will be shown later that it doesn't matter what pulse shape is used for the sampling waveform – it could be rectangular, triangular, or indeed any shape. In practice, we use an analog-to-digital converter to get actual values of a signal (e.g. a voltage). Then, when we perform digital signal processing (DSP) on the signal, we understand that we should treat the sample value as the weight of an impulse.

Taking the Fourier transform of both sides of Eq. (3A.52):

Gs(f) = G(f) * fs Σ_{n=-∞}^{∞} δ(f - nfs) = fs Σ_{n=-∞}^{∞} G(f - nfs)    (3A.53)

Thus the Fourier transform of the sampled waveform, Gs(f), is a scaled replica of the original, periodically repeated along the frequency axis. The spacing between repeats is equal to the sampling frequency, fs (the inverse of the sampling interval). Graphically, in the frequency-domain:

[Figure 3A.11 – a bandlimited spectrum G(f) of width 2B; the impulse-train spectrum P(f) with impulses spaced fs apart; and Gs(f): scaled replicas of G(f) centred at 0, ±fs, ±2fs, ...]

Sampling and Periodicity

If we ideally sample a continuous Fourier transform G(f) by multiplying it with Σ_n δ(f - nf₀), then we have S(f) = G(f) Σ_{n=-∞}^{∞} δ(f - nf₀) – an ideally sampled spectrum is a train of impulses, each impulse weighted by the original spectrum:

[Figure 3A.12 – a spectrum G(f) sampled in frequency: impulses at multiples of f₀ weighted by G(f)]

Now taking the inverse Fourier transform we get:

F⁻¹{S(f)} = g(t) * T₀ Σ_{k=-∞}^{∞} δ(t - kT₀)    (3A.54)

which is a periodic repeat of a weighted version of the original signal:

[Figure 3A.13 – s(t): periodic repeats of g(t) spaced T₀ apart]

Thus, sampling in the frequency-domain results in periodicity in the time-domain. We already know this! We know a periodic time-domain signal can be synthesised from sinusoids with frequencies nf₀, i.e. it has a transform consisting of impulses at frequencies nf₀. We now see the general pattern: sampling in one domain implies periodicity in the other.

Reconstruction

If a sampled signal gs(t) is applied to an ideal lowpass filter of bandwidth B, the only component of the spectrum Gs(f) that is passed is just the original spectrum G(f). We recover the original spectrum by lowpass filtering:

[Figure 3A.14 – Gs(f) applied to a lowpass filter, whose output spectrum is G(f)]

[Figure 3A.15 – in the frequency-domain: the replicas of G(f) in Gs(f) are removed by a lowpass filter H(f) of gain 1/fs, leaving only G(f)]

Hence the time-domain output of the filter is equal to g(t), which shows that the original signal can be completely and exactly reconstructed from the sampled waveform gs(t). Lowpass filtering a sampled signal's spectrum results in the original signal's spectrum – an operation that is easy to see in the frequency-domain!
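In the time domain, the ideal lowpass filter carries out a sinc interpolation of the samples: g(t) = Σ_k g(kTs) sinc((t - kTs)/Ts). The sketch below (not from the notes; it assumes numpy, and the sum is necessarily truncated to a finite number of samples) reconstructs a bandlimited test signal at a point between sampling instants:

```python
import numpy as np

fs = 100.0                 # sample rate, well above the Nyquist rate 2B
Ts = 1.0 / fs
f0 = 3.0                   # a 3 Hz test sinusoid, so B = 3 Hz

k = np.arange(-500, 501)   # samples spanning t in [-5, 5] s
g_k = np.cos(2 * np.pi * f0 * k * Ts)

def reconstruct(t):
    # Truncated sinc-interpolation sum (exact only with infinitely many samples)
    return np.sum(g_k * np.sinc((t - k * Ts) / Ts))

t = 0.1234                 # a time that is not on the sampling grid
print(reconstruct(t))              # very close to ...
print(np.cos(2 * np.pi * f0 * t))  # ... the true signal value
```

The small residual error comes only from truncating the sum; with all (infinitely many) samples the reconstruction is exact, as the frequency-domain argument above shows.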

Graphically, in the time-domain:

[Figure 3A.16 – the weighted train of impulses gs(t) turns back into the original signal g(t) after lowpass filtering – an operation that is not so easy to see in the time-domain!]

We can't sample and reconstruct perfectly, but we can get close! There are some limitations to perfect reconstruction. One is that time-limited signals are not bandlimited (e.g. a rect time-domain waveform has a spectrum which is a sinc function, which has infinite frequency content). Any time-limited signal therefore cannot be perfectly reconstructed, since there is no sample rate high enough to ensure that repeats of the original spectrum do not overlap. However, many signals are essentially bandlimited, which means spectral components higher than, say B, do not make a significant contribution to either the shape or energy of the signal.

Aliasing

We saw that sampling in one domain implies periodicity in the other. If the function being made periodic has an extent that is smaller than the period, there will be no resulting overlap, and hence it will be possible to recover the continuous (unsampled) function by windowing out just one period from the domain displaying periodicity. We have to ensure there is no spectral overlap when sampling.

To prevent aliasing, we have to sample at a rate fs > 2B. Nyquist's sampling criterion is the formal expression of this fact:

Perfect reconstruction of a sampled signal is possible if the sampling rate is greater than twice the bandwidth of the signal being sampled:

fs > 2B    (3A.55)

The frequency fs/2 is called the spectral fold-over frequency; it is determined only by the selected sample rate, and may be selected independently of the characteristics of the signal being sampled. The frequency B is termed the Nyquist frequency; it is a function only of the signal, and is independent of the selected sampling rate. The Nyquist rate is defined as fN = 2B, and is not to be confused with the similar term Nyquist frequency: the Nyquist rate is 2B, whereas the Nyquist frequency is B. Do not confuse these two independent entities!

The Nyquist frequency is a lower bound for the fold-over frequency, in the sense that failure to select a fold-over frequency at or above the Nyquist frequency will result in spectral aliasing and loss of the capability to reconstruct the continuous-time signal from its samples without error. To avoid aliasing, we need to sample at a rate greater than the Nyquist rate, that is, fs > fN. The Nyquist frequency for a signal which is not bandlimited is infinity; for such a signal there is no finite sample rate that would permit errorless reconstruction from its samples.
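A quick numerical demonstration of what goes wrong when Nyquist's criterion is violated (not from the notes; it assumes numpy): with fs = 10 Sa/s the fold-over frequency is 5 Hz, so a 7 Hz sinusoid folds over to fs - 7 = 3 Hz, and its samples are indistinguishable from those of a genuine 3 Hz sinusoid:

```python
import numpy as np

fs = 10.0                   # sample rate: fold-over frequency is 5 Hz
t = np.arange(100) / fs     # 100 sampling instants

samples_7Hz = np.cos(2 * np.pi * 7 * t)   # violates fs > 2B
samples_3Hz = np.cos(2 * np.pi * 3 * t)   # its low-frequency alias

print(np.allclose(samples_7Hz, samples_3Hz))   # the sample sets coincide
```

Since cos(2π·7k/10) = cos(2πk - 2π·3k/10) = cos(2π·3k/10), no reconstruction scheme can tell the two tones apart from these samples alone.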

To illustrate aliasing, consider the case where we have failed to select the sample rate higher than twice the bandwidth, B, of a lowpass signal. The sampled spectrum is shown below, where the repeats of the original spectrum now overlap:

[Figure 3A.17 – an illustration of aliasing in the frequency-domain: Xs(f) with folded-over high-frequency components about the fold-over frequency fs/2]

If the sampled signal xs(t) is lowpass filtered with cutoff frequency fs/2, the output spectrum of the filter will contain high-frequency components of x(t) folded over to low-frequency components, and we will not be able to perfectly reconstruct the original signal:

[Figure 3A.18 – the "reconstructed" spectrum Xr(f) differs from the original X(f)]

To summarise – we can avoid aliasing by either:

1. selecting a sample rate higher than twice the bandwidth of the signal (equivalent to saying that the fold-over frequency is greater than the bandwidth of the signal), or

2. bandlimiting (using a filter) the signal so that its bandwidth is less than half the sample rate.

Practical Sampling and Reconstruction

To prevent aliasing, practical systems are constructed so that the signal-to-be-sampled is guaranteed to meet Nyquist's sampling criterion. They do this by passing the original signal through a lowpass filter, known as an anti-alias filter (AAF), to ensure that no frequencies are present above the fold-over frequency, fs/2. Sampling then takes place at a rate of fs Sa/s. If the system is designed correctly, then the high frequency components of the original signal that are rejected by the AAF are "not important" in that they carry little energy and/or information (e.g. noise).

[Figure 3A.19 – the anti-alias filter H(f) bandlimits Vi(f) to |f| < fs/2 before sampling, giving Vo(f)]

A practical sampling scheme is shown below:

[Figure 3A.20 – a practical sampling scheme: vi → Anti-alias Filter → ADC → Digital Signal Processor → DAC → Reconstruction Filter → vo]

The reconstruction filter has the same bandwidth as the AAF, but a different passband magnitude, to correct for the sampling process.

Summary of the Sampling and Reconstruction Process

[Figure 3A.21 – the sampling and reconstruction process in both the time-domain and frequency-domain. Sampling: the signal g(t), with spectrum G(f) bandlimited to B, is multiplied by the impulse train s(t), whose spectrum S(f) is a train of impulses spaced fs apart, giving the sampled signal gs(t) with periodic spectrum Gs(f) of height A·fs. Reconstruction: gs(t) is passed through the lowpass filter h(t), whose H(f) has gain 1/fs and cutoff fs/2, giving the reconstructed signal gr(t) with spectrum Gr(f) equal to the original G(f).]

Finding the Fourier Series of a Periodic Function from the Fourier Transform of a Single Period

It is usually easier to find the Fourier transform of a single period than to perform the integration needed to find the Fourier series coefficients (because all the standard Fourier properties can be used). This is the quick way to determine Fourier series coefficients: the method allows them to be determined directly from the Fourier transform, provided the period is known. Don't forget, only periodic functions have a Fourier series representation.

Suppose we draw one period of a periodic waveform:

[Figure 3A.22 – a single period g₁(t): a pulse of height 1 on 0 ≤ t ≤ T₁]

We can create the periodic version by convolving g₁(t) with a train of unit impulse functions with spacing equal to the period, T₀:

[Figure 3A.23 – a uniform impulse train of unit impulses with spacing T₀]

That is, to create the periodic waveform g_p(t), we need to convolve g₁(t) with Σ_k δ(t - kT₀). Thus, the periodic version is:

g_p(t) = g₁(t) * Σ_{k=-∞}^{∞} δ(t - kT₀)    (3A.56)

[Figure 3A.24 – the periodic waveform g_p(t): the pulse repeated with period T₀]

Using the convolution multiplication rule:

F{g_p(t)} = F{g₁(t)} F{Σ_k δ(t - kT₀)} = G₁(f) · f₀ Σ_{n=-∞}^{∞} δ(f - nf₀)    (3A.57)

In words, the Fourier transform of the periodic signal consists of impulses located at harmonics of f₀ = 1/T₀, whose weights are:

Gₙ = f₀ G₁(nf₀)    (3A.58)

These are the Fourier series coefficients. In Figure 3A.24 we have:

Gₙ = f₀T₁ sinc(nf₀T₁) e^(-jπnf₀T₁)    (3A.59)
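Eq. (3A.58) is easy to check numerically for the rectangular pulse train (a sketch, not from the notes; it assumes numpy, and T₀ and T₁ are illustrative values). One route uses the transform of a single period, G₁(f) = T₁ sinc(fT₁)e^(-jπfT₁); the other evaluates the Fourier series integral directly:

```python
import numpy as np

T0, T1 = 2.0, 0.5          # illustrative period and pulse width
f0 = 1.0 / T0

def Gn_from_transform(n):
    # Eq. (3A.58): Gn = f0 * G1(n f0), with G1(f) = T1 sinc(f T1) e^{-j pi f T1}
    f = n * f0
    return f0 * T1 * np.sinc(f * T1) * np.exp(-1j * np.pi * f * T1)

def Gn_by_integration(n, N=100000):
    # Direct Fourier series integral: Gn = (1/T0) * int_0^T1 e^{-j 2 pi n f0 t} dt
    # (g = 1 on [0, T1] and 0 over the rest of the period), via the trapezoid rule.
    t = np.linspace(0.0, T1, N + 1)
    v = np.exp(-2j * np.pi * n * f0 * t)
    dt = T1 / N
    return np.sum((v[:-1] + v[1:]) / 2) * dt / T0

for n in range(4):
    print(Gn_from_transform(n), Gn_by_integration(n))   # the two methods agree
```

No integration was really needed for the first route: the standard rect transform and the time-shift property did all the work, which is exactly the point of this method.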

Graphically, the operation indicated by Eq. (3A.57) takes the original spectrum and multiplies it by a train of impulses – effectively creating a weighted train of impulses:

[Figure 3A.25 – sampling in the frequency domain produces a periodic waveform in the time-domain: the continuous spectrum G₁(f) multiplied by an impulse train gives G_p(f), impulses at multiples of f₀ whose weights follow f₀G₁(f)]

According to Eq. (3A.58), the Fourier series coefficients are just the weights of the impulses in the spectrum of the periodic function. To get the nth Fourier series coefficient, use the weight of the impulse located at nf₀. This is in perfect agreement with the concept of a continuous spectrum: each frequency has an infinitesimal amplitude sinusoid associated with it, but if an impulse exists at a certain frequency, then there is a finite amplitude sinusoid at that frequency. Remember that a pair of impulses in a spectrum represents a sinusoid in the time-domain.

Windowing in the Time Domain

There are some practical effects of looking at signals over a finite time. Often we wish to deal with only a segment of a signal, say from t = 0 to t = T. Sometimes we have no choice – our measuring instrument has restricted us to a "window" of duration T beginning at t = 0, as this is the only part of the signal we have access to. Outside this window the signal is forced to be zero. How is the signal's Fourier transform affected when the signal is viewed through a window?

Windowing in the time domain is equivalent to multiplying the original signal g(t) by a function which is non-zero over the window interval and zero elsewhere. So the Fourier transform of the windowed signal is the original signal's transform convolved with the Fourier transform of the window.

Example

Find the Fourier transform of sin(2πt) when it is viewed through a rectangular window from 0 to 1 second:

[Figure 3A.26 – a rectangular window applied to a sinusoid: one cycle of sin(2πt) on 0 ≤ t ≤ 1]

The viewed signal is:

g_w(t) = sin(2πt) rect(t - 0.5)    (3A.60)

The Fourier transform will be:

F{sin(2πt)} * F{rect(t - 0.5)}
  = [(j/2)δ(f + 1) - (j/2)δ(f - 1)] * sinc(f)e^(-jπf)
  = (j/2)sinc(f + 1)e^(-jπ(f+1)) - (j/2)sinc(f - 1)e^(-jπ(f-1))    (3A.61)

Graphically, the magnitude spectrum of the windowed signal is:

[Figure 3A.27 – the spectrum of a rectangularly windowed sinusoid: the pair of impulses at ±1 Hz (weights 1/2 at ±90°) is smeared into shifted sinc functions]

3A.34

Obviously, the longer the window, the more accurate the spectrum becomes. If the window were changed to 4 seconds, we would then have:

[Figure 3A.28 – The longer we look, the better the spectrum]

62) F g w t   rect f   4 sinc4 f  (3A.35 Example Find the Fourier transform of sinc t  when it is viewed through a window from -2 to 2 seconds: Viewing a sinc function through a rectangular window g (t ) w -2 -1 1 0 1 2 t Figure 3A.29 We have: g w t  sinct rect t 4 (3A.3A.63) and: Signals and Systems 2014 .

3A.36

Graphically:

[Figure 3A.30 – Ripples in the spectrum caused by a rectangular window]

We see that windowing in the time domain by T produces ripples in the frequency domain with an approximate spacing of 2/T between peaks. (In a magnitude spectrum, there would be a peak every 1/T.)

3A.37

Practical Multiplication and Convolution

We have learned about multiplication and convolution as mathematical operations that we can apply in either the time or frequency domain. Remember, however, that the time and frequency domains are just two ways of describing the same signal – a time-varying voltage that we measure across two terminals. What do we need to physically do to our signal to perform an operation equivalent to, say, multiplying its frequency domain by some function? Two physical operations we can do on signals are multiplication (with another signal) and filtering (with a filter with a defined transfer function).

Multiplication in the time-domain is convolution in the frequency-domain:

    Time-domain output is x(t) multiplied with y(t)        (3A.64a)
    Frequency-domain output is X(f) convolved with Y(f)    (3A.64b)

[Figure 3A.31 – A signal multiplier: inputs x(t) and y(t), output x(t)y(t)]

You can buy a "4-quadrant multiplier" as an IC from any electronics supplier. They have a pair of terminals for the two signals to be multiplied together. Depending on the bandwidth of the signal they can handle, the price varies from several dollars to over a hundred dollars. At higher frequencies any nonlinear device can be used for multiplication.

3A.38

Convolution in the time-domain is multiplication in the frequency-domain:

    Time-domain output is x(t) convolved with h(t)         (3A.65a)
    Frequency-domain output is X(f) multiplied with H(f)   (3A.65b)

[Figure 3A.32 – A filter specified by H(f) or h(t): input x(t), output y(t) = h(t) * x(t)]

You can design a network, either passive or active, that performs as a filter. Its characteristics can be specified by its transfer function (a plot of magnitude vs. frequency and a second plot of phase vs. frequency, or a complex expression of frequency) or equivalently by its impulse response, which is the inverse Fourier transform of the transfer function.
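The equivalence of time-domain convolution and frequency-domain multiplication is easy to demonstrate numerically. This sketch is not part of the original notes; it uses Python with NumPy's FFT, and the signal lengths are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # arbitrary input signal
h = rng.standard_normal(16)   # arbitrary filter impulse response

# Time-domain: direct convolution
y_time = np.convolve(x, h)    # full convolution, length 64 + 16 - 1 = 79

# Frequency-domain: multiply the spectra (zero-pad both to the full output length
# so the circular convolution of the FFT equals the linear convolution)
N = len(x) + len(h) - 1
y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

assert np.allclose(y_time, y_freq)
print("convolution in time == multiplication in frequency")
```

The zero-padding to length N is the standard trick for making the FFT's inherently circular convolution agree with the linear one.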

3A.39

Summary

 Sinusoidal inputs to linear time-invariant systems yield sinusoidal outputs. The output sinusoid is related to the input sinusoid by a complex-valued function known as the frequency response, H(f).

 The frequency response of a system is just the Fourier transform of the impulse response of the system. That is, the impulse response and the frequency response form a Fourier transform pair: h(t) ↔ H(f).

 The frequency response of a system can be obtained directly by performing analysis in the frequency domain.

 The output of an LTI system due to any input signal is obtained most easily by considering the spectrum: Y(f) = H(f)X(f). This expresses the important property: convolution in the time-domain is equivalent to multiplication in the frequency-domain.

 Filters are devices best thought about in the frequency-domain. They are frequency-selective devices, changing both the magnitude and phase of frequency components of an input signal to produce an output signal.

 Linear phase is desirable in filters because it produces a constant delay, thereby preserving wave shape.

 Sampling is the process of converting a continuous-time signal into a discrete-time signal. It is achieved, in the ideal case, by multiplying the signal by a train of impulses.

 Aliasing is an effect of sampling where spectral overlap occurs, thus destroying the ability to later reconstruct the signal. It is caused by not meeting the Nyquist criterion: f_s ≥ 2B.

 Reconstruction is the process of converting signal samples back into a continuous-time signal. It is achieved by passing the samples through a lowpass filter.
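The aliasing point in the summary can be made concrete: when the Nyquist criterion is violated, two different sinusoids produce identical samples. A minimal Python/NumPy sketch (not part of the original notes; the frequencies are arbitrary):

```python
import numpy as np

# Two sinusoids whose frequencies differ by exactly fs alias to the same samples.
fs = 1000.0
n = np.arange(32)
t = n / fs

f1 = 200.0
f2 = f1 + fs          # 1200 Hz: fs >= 2B is clearly violated

x1 = np.cos(2 * np.pi * f1 * t)
x2 = np.cos(2 * np.pi * f2 * t)

# cos(2*pi*(f1 + fs)*n/fs) = cos(2*pi*f1*n/fs + 2*pi*n) = cos(2*pi*f1*n/fs)
assert np.allclose(x1, x2)
print("a 1200 Hz tone sampled at 1000 Hz is indistinguishable from 200 Hz")
```

Once the samples coincide, no reconstruction filter can tell the two tones apart, which is exactly why the anti-aliasing filter must act before sampling.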

3A.40

 Fourier series coefficients can be obtained from the Fourier transform of one period of the signal by the formula: Gn = f₀ G₁(nf₀).

 Using finite-duration signals in the time-domain is called windowing. Windowing affects the spectrum of the original signal.

References

Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York, 1994.

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB®, Prentice-Hall, 1997.

Lathi, B. P.: Modern Digital and Analog Communication Systems, Holt-Saunders, Tokyo, 1983.

3A.41

Quiz

Encircle the correct answer, cross out the wrong answers. [one or none correct]

1. The convolution of x(t) and y(t) is given by:
   (a) ∫ x(τ) y(t − τ) dτ    (b) ∫ x(τ) y(τ − t) dτ    (c) ∫ X(λ) Y(f − λ) dλ

2. The Fourier transform of one period of a periodic waveform is G(f). The Fourier series coefficients, Gn, are given by:
   (a) nf₀ G(f₀)    (b) G(nf₀)    (c) f₀ G(nf₀)

3. The Fourier transform of the impulse response of the ideal filter shown resembles:
   [Figure: ideal filter with input vi and output vo, and three candidate sketches (a), (b), (c) of h(t) and |H(f)|]

4. The peak value of the convolution of the waveforms x(t) and y(t) shown is:
   (a) 9    (b) 4.5    (c) 6

5. The scaling property of the Fourier transform is:
   (a) g(at) ↔ G(f/a)    (b) g(at) ↔ (1/a) G(f/a)    (c) a g(t) ↔ a G(f)

Answers: 1. a    2. c    3. c    4. x    5. x

3A.42

Exercises

1. Calculate the magnitude and phase of the 4 kHz component in the spectrum of the periodic pulse train shown below. The pulse repetition rate is 1 kHz.

[Figure: pulse train x(t) switching between 10 and −5, with transitions at 0.1, 0.4, 0.6 and 0.9 ms over a 1 ms period]

2. By relating the triangular pulse shown below to the convolution of a pair of identical rectangular pulses, deduce the Fourier transform of the triangular pulse:

[Figure: triangular pulse x(t) of height A]

3. A signal is to be sampled with an ideal sampler operating at 8000 samples per second. Assuming an ideal lowpass anti-aliasing filter, how can the sampled signal be reconstituted in its original form, and under what conditions?

4. The pulse x(t) = 2B sinc(2Bt) rect(Bt/8) has ripples in the amplitude spectrum. What is the spacing in frequency between positive peaks of the ripples?

3A.43

5. A train of impulses in one domain implies what in the other?

6. The following table gives information about the Fourier series of a periodic waveform, g(t), which has a period of 50 ms.

Table 1

    Harmonic #    Amplitude    Phase (º)
    0             1            –
    1             3            -30
    2             1            -30
    3             1            -60
    4             0.5          -90

(a) Give the frequencies of the fundamental and the 2nd harmonic. What is the signal power, assuming that g(t) is measured across 50 Ω?

(b) Express the third harmonic as a pair of counter-rotating phasors. What is the value of the third harmonic at t = 20 ms?

(c) The periodic waveform is passed through a filter with transfer function H(f) as shown below:

[Figure: |H(f)| falling from 1 to 0.5 between ±50 and ±100 Hz, with the corresponding phase response ∠H(f)]

Draw up a table in the same form as Table 1 of the Fourier series of the output waveform. Is there a DC component in the output of the amplifier?

3A.44

7. A signal, bandlimited to 1 kHz, is sampled by multiplying it with a rectangular pulse train with repetition rate 4 kHz and pulse width 50 μs. Can the original signal be recovered without distortion, and if so, how?

8. A signal is to be analysed to identify the relative amplitudes of components which are known to exist at 9 kHz and 9.25 kHz. To do the analysis a digital storage oscilloscope takes a record of length 2 ms and then computes the Fourier series. The 18th harmonic thus computed can be non-zero even when no 9 kHz component is present in the input signal. Explain.

9. Use MATLAB® to determine the output of a simple RC lowpass filter subjected to a square wave input given by:

    x(t) = Σₙ rect(t − 2n),  n = −∞ … ∞

for the cases 1/RC = 1, 1/RC = 10 and 1/RC = 100. Plot the time domain from t = −3 to 3 and the Fourier series coefficients up to the 50th harmonic for each case.

3B.1

Lecture 3B – Amplitude Modulation

Modulation. Double-sideband, suppressed-carrier (DSB-SC) modulation. Amplitude modulation (AM). Single sideband (SSB) modulation. Quadrature amplitude modulation (QAM).

Introduction

The modern world cannot exist without communication systems. A model of a typical communication system is:

[Figure 3B.1 – A communication system: Source → Input transducer → (message signal) → Transmitter → (transmitted signal) → Channel, where distortion and noise are added → (received signal) → Receiver → (output signal) → Output transducer → Destination]

The source originates the message, such as a human voice, a television picture, or data. If the data is nonelectrical, it must be converted by an input transducer into an electrical waveform referred to as the baseband signal or the message signal. The transmitter modifies the baseband signal for efficient transmission. The channel is a medium – such as a twisted pair, coaxial cable, an optical fibre, a waveguide, or a radio link – through which the transmitter output is sent. The receiver processes the signal from the channel by undoing the signal modifications made at the transmitter and by the channel. The receiver output is fed to the output transducer, which converts the electrical signal to its original form. The destination is the thing to which the message is communicated.

3B.2

The Communication Channel

A channel acts partly as a filter – it attenuates the signal and distorts the waveform. Attenuation of the signal increases with channel length. The waveform is distorted because of different amounts of attenuation and phase shift suffered by different frequency components of the signal. This type of distortion, called linear distortion, can be partly corrected at the receiver by an equalizer with gain and phase characteristics complementary to those of the channel. The channel may also cause nonlinear distortion through attenuation that varies with the signal amplitude. Such distortion can also be partly corrected by a complementary equalizer at the receiver.

The signal is not only distorted by the channel, but it is also contaminated along the path by undesirable signals lumped under the broad term noise, which are random and unpredictable signals from causes external and internal. External noise includes interference from signals transmitted on nearby channels, human-made noise generated by electrical equipment (such as motor drives, fluorescent lighting, combustion engine ignition radiation, etc.) and natural noise (electrical storms, solar and intergalactic radiation). Internal noise results from thermal motion of electrons in conductors, and diffusion or recombination of charged carriers in electronic devices. Noise is one of the basic factors that sets a limit on the rate of communication.

3B.3

Analog and Digital Messages

Messages are analog or digital. In an analog message, the waveform is important and even a slight distortion or interference in the waveform will cause an error in the received signal. On the other hand, digital messages are transmitted by using a finite set of electrical waveforms. Easier message extraction (in the presence of noise), and regeneration of the original digital signal, means that digital communication can transmit messages with a greater accuracy than an analog system in the presence of distortion and noise. This is the reason why digital communication is so prevalent, and analog communication is all but obsolete.

Baseband and Carrier Communication

Some baseband signals produced by various information sources are suitable for direct transmission over a given communication channel – this is called baseband communication. Other baseband signals are not suitable for direct transmission, and must be modified to facilitate transmission. In this mode, the baseband signal is used to modify some parameter of a high-frequency carrier signal. A carrier is a sinusoid of high frequency, and one of its basic parameters – amplitude, frequency, or phase – is varied in proportion to the baseband signal m(t). This conversion process is known as modulation. Communication that uses modulation to shift the frequency spectrum of a signal is known as carrier communication. Accordingly, we have amplitude modulation (AM), frequency modulation (FM), or phase modulation (PM). FM and PM are similar, in essence, and are grouped under the name angle modulation.

3B.4

The figure below shows a baseband signal m(t) and the corresponding AM and FM waveforms:

[Figure 3B.2 – Examples of AM and FM modulated waveforms: the carrier, the modulating (baseband) signal, the amplitude-modulated wave and the frequency-modulated wave]

At the receiver, the modulated signal must pass through a reverse process called demodulation in order to retrieve the baseband signal.

3B.5

Modulation facilitates the transmission of information for the following reasons.

Ease of Radiation

For efficient radiation of electromagnetic energy, the radiating antenna should be of the order of one-tenth of the wavelength of the signal radiated. For many baseband signals, the wavelengths are too large for reasonable antenna dimensions. We modulate a high-frequency carrier, thus translating the signal spectrum to the region of carrier frequencies that corresponds to a much smaller wavelength.

Simultaneous Transmission of Several Signals

We can translate many different baseband signals into separate "channels" by using different carrier frequencies. If the carrier frequencies are chosen sufficiently far apart in frequency, the spectra of the modulated signals will not overlap and thus will not interfere with each other. At the receiver, a tuneable channel bandpass filter is used to select the desired signal. This method of transmitting several signals simultaneously is known as frequency-division multiplexing (FDM). A type of FDM, known as orthogonal frequency-division multiplexing (OFDM), is at the heart of Wi-Fi – and was invented at the CSIRO!

3B.6

Double-Sideband, Suppressed-Carrier (DSB-SC) Modulation

Let x(t) be a signal such as an audio signal that is to be transmitted through a cable or the atmosphere. In amplitude modulation (AM), the signal modifies (or modulates) the amplitude of a carrier sinusoid cos(ωc t). In one form of AM transmission, the signal x(t) and the carrier cos(ωc t) are simply multiplied together. The process is illustrated below:

[Figure 3B.3 – DSB-SC modulation: a signal multiplier forms y(t) = x(t) cos(2πfc t), the modulated signal, from x(t) and the local oscillator output cos(2πfc t)]

The local oscillator in Figure 3B.3 is a device that produces the sinusoidal signal cos(ωc t). The multiplier is implemented with a non-linear device, and is usually an integrated circuit at low frequencies.

By the multiplication property of Fourier transforms, the output spectrum is obtained by convolving the spectrum of x(t) with the spectrum of cos(2πfc t). We now restate a very important property of convolution involving an impulse:

    X(f) * δ(f − f₀) = X(f − f₀)        (3B.1)

3B.7

The output spectrum of the modulator is therefore:

    Y(f) = X(f) * (1/2)[δ(f − fc) + δ(f + fc)]
         = (1/2)[X(f − fc) + X(f + fc)]        (3B.2)

The spectrum of the modulated signal is a replica of the signal spectrum but "shifted up" in frequency. If the signal has a bandwidth equal to B, then the modulated signal spectrum has an upper sideband from fc to fc + B and a lower sideband from fc − B to fc, and the process is therefore called double-sideband transmission, or DSB transmission for short.

An example of modulation is given below in the time-domain:

[Figure 3B.4 – DSB-SC in the time-domain: the signal x(t) multiplies the local oscillator output cos(2πfc t) to give the modulated signal y(t)]

3B.8

And in the frequency domain:

[Figure 3B.5 – DSB-SC in the frequency-domain: X(f) is convolved with (1/2)[δ(f − fc) + δ(f + fc)] to give Y(f) = (1/2)[X(f − fc) + X(f + fc)]]

The higher frequency range of the modulated signal makes it possible to achieve good propagation in transmission through a cable or the atmosphere. It also allows the "spectrum" to be shared by independent users – e.g. TV, radio, mobile phone etc.
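The spectral shift of Eq. (3B.2) is easy to see numerically. The sketch below is not part of the original notes: it uses Python with NumPy, a single-tone message, and arbitrarily chosen sample rate and frequencies; the FFT of the modulated signal shows sidebands at fc ± fm, with nothing left at the message frequency or at the (suppressed) carrier.

```python
import numpy as np

fs = 8000.0              # sample rate (assumed for this sketch)
fc, fm = 1000.0, 100.0   # carrier and message frequencies
t = np.arange(0, 1.0, 1 / fs)

x = np.cos(2 * np.pi * fm * t)        # baseband "message"
y = x * np.cos(2 * np.pi * fc * t)    # DSB-SC modulated signal

# One-sided amplitude spectrum (scaled so a unit cosine reads as 1)
Y = np.abs(np.fft.rfft(y)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amp(f):
    return Y[np.argmin(np.abs(freqs - f))]

# The message line at fm moves to two sidebands at fc +/- fm, each at half amplitude
assert amp(fc - fm) > 0.49 and amp(fc + fm) > 0.49
assert amp(fm) < 0.01 and amp(fc) < 0.01   # no baseband component; carrier suppressed
print("sidebands at fc±fm; baseband and carrier absent")
```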

3B.9
Demodulation
The reconstruction of x(t) from x(t)cos(ωc t) is called demodulation. There are many ways to demodulate a signal; here we will consider one common method called synchronous or coherent demodulation.

[Figure 3B.6 – Coherent demodulation of a DSB-SC signal: x(t)cos(2πfc t) is multiplied by cos(2πfc t) from the local oscillator to give x(t)cos²(2πfc t), which a lowpass filter reduces to x(t)]
The first stage of the demodulation process involves applying the modulated waveform x(t)cos(ωc t) to a multiplier. The other signal applied to the multiplier is a local oscillator which is assumed to be synchronized with the carrier signal cos(ωc t), i.e. there is no phase shift between the carrier and the signal generated by the local oscillator. Coherent demodulation thus requires a carrier at the receiver that is synchronized with the transmitter.
The output of the multiplier is:

    (1/2)[X(f − fc) + X(f + fc)] * (1/2)[δ(f − fc) + δ(f + fc)]
        = (1/2)X(f) + (1/4)[X(f − 2fc) + X(f + 2fc)]        (3B.3)

xt  can be “extracted” from the output of the multiplier by lowpass filtering

with a cutoff frequency of B Hz and a gain of 2.
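This two-step recovery (multiply by a synchronized carrier, then lowpass with a gain of 2) can be sketched numerically. The code below is not part of the original notes: it uses Python with NumPy, a single-tone message, and an ideal brickwall lowpass filter implemented with the FFT; all frequencies are arbitrary choices.

```python
import numpy as np

fs = 8000.0
fc, fm = 1000.0, 100.0
t = np.arange(0, 1.0, 1 / fs)

x = np.cos(2 * np.pi * fm * t)              # message, bandwidth B = fm
y = x * np.cos(2 * np.pi * fc * t)          # received DSB-SC signal

# Coherent demodulation: multiply by a synchronized local oscillator...
v = y * np.cos(2 * np.pi * fc * t)          # = x/2 + (x/2) cos(4*pi*fc*t)

# ...then lowpass filter (ideal brickwall at 200 Hz here) with a gain of 2
V = np.fft.rfft(v)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
V[freqs > 200.0] = 0.0
x_hat = 2 * np.fft.irfft(V, len(t))

assert np.max(np.abs(x_hat - x)) < 1e-9
print("message recovered by coherent demodulation")
```

The brickwall filter is an idealisation for the demonstration; a practical receiver would use a realisable lowpass filter with cutoff above B and below 2fc − B.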


3B.10
Another way to think of this is in the time-domain:

    x(t) cos²(2πfc t) = (1/2) x(t) [1 + cos(4πfc t)]        (3B.4)

Therefore, it is easy to see that lowpass filtering, with a gain of 2, will produce
xt  . An example of demodulation in the time-domain is given below:
[Figure 3B.7 – Demodulation in the time-domain: x(t)cos(2πfc t) is multiplied by the local oscillator output cos(2πfc t) to give x(t)cos²(2πfc t), which the lowpass filter smooths to recover x(t)]
The operation of demodulation is best understood in the frequency domain:
[Figure 3B.8 – Demodulation in the frequency-domain: the spectrum Y(f) = (1/2)[X(f − fc) + X(f + fc)] is convolved with (1/2)[δ(f − fc) + δ(f + fc)], producing (1/2)X(f) plus components at ±2fc; the lowpass filter H(f) with gain 2 selects the baseband term to recover X(f)]

3B.11
Summary of DSB-SC Modulation and Demodulation

[Figure 3B.9 – The DSB-SC modulation and demodulation process in both the time-domain and frequency-domain: the message g(t), with spectrum G(f) bandlimited to B, is multiplied by the carrier l(t) to give gm(t) with spectrum Gm(f) centred on ±fc (sidebands from fc − B to fc + B); demodulation multiplies by the carrier again, producing gi(t) with components at baseband and ±2fc, and the lowpass filter h(t), with gain 2 and cutoff f₀, recovers gd(t) = g(t)]

3B.12
Amplitude Modulation (AM)
Let g(t) be a message signal such as an audio signal that is to be transmitted through a cable or the atmosphere. In amplitude modulation (AM), the message signal modifies (or modulates) the amplitude of a carrier sinusoid cos(2πfc t). In one form of AM transmission, a constant bias A is added to the message signal g(t) prior to multiplication by a carrier cos(2πfc t). The process is illustrated below:
[Figure 3B.10 – AM modulation: an adder forms A + g(t) from the message and the bias A; a multiplier forms the modulated signal gAM(t) from A + g(t) and the local oscillator output cos(2πfc t)]
The local oscillator in Figure 3B.10 is a device that produces the sinusoidal signal cos(2πfc t). The multiplier is implemented with a non-linear device, and is usually an integrated circuit at low frequencies. The adder is a simple op-amp circuit.
The spectrum of the modulated signal should show a replica of the signal spectrum but "shifted up" in frequency. Ideally, there should also be a pair of impulses representing the carrier sinusoid. If G(f), the spectrum of g(t), is bandlimited to B Hz, then the modulated signal spectrum has an upper sideband from fc to fc + B and a lower sideband from fc − B to fc. Since the
appearance of the modulated signal in the time domain is that of a sinusoid
with a time-varying amplitude proportional to the message signal, this
modulation technique is called amplitude modulation, or AM for short.


3B.13
Envelope Detection
There are several ways to demodulate the AM signal. One way is coherent (or synchronous) demodulation. If the magnitude of the signal g(t) never exceeds the bias A, then it is possible to demodulate the AM signal using a very simple and practical technique called envelope detection.
[Figure 3B.11 – An AM envelope detector used for demodulation: gAM(t) is applied to a parallel RC circuit (ω₀ = 1/RC) whose output follows A + g(t)]
As long as the "envelope" of the signal A + g(t) is non-negative, the message g(t) appears to "ride" on top of the half-wave rectified version of gAM(t). The half-wave rectified gAM(t) can be smoothed with a capacitor so that the output closely approximates A + g(t). The time constant of the RC smoothing circuit is not extremely critical, so long as B << f₀ << fc.
The “envelope detector” can be thought of as a rectifier followed by a lowpass
filter:

[Figure 3B.12 – The envelope detector viewed as a precision rectifier followed by a lowpass filter: gAM(t) → rectifier → gr(t) → lowpass filter → gd(t)]
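The rectifier-plus-lowpass view can be sketched numerically. The code below is not part of the original notes: it uses Python with NumPy, a full-wave rectifier (|·|) rather than the half-wave diode circuit, an ideal FFT brickwall lowpass, and a π/2 scale factor because the average of |cos| is 2/π; all frequencies and the bias are arbitrary choices satisfying |g(t)| < A and B << fc.

```python
import numpy as np

fs = 1_000_000.0
fc, fm = 10000.0, 100.0
t = np.arange(0, 0.01, 1 / fs)

A = 2.0
g = np.cos(2 * np.pi * fm * t)           # |g| never exceeds the bias A
g_am = (A + g) * np.cos(2 * np.pi * fc * t)

# Rectifier (full-wave for this sketch)...
r = np.abs(g_am)

# ...followed by a lowpass filter (ideal brickwall at 500 Hz),
# scaled by pi/2 since the average of |cos| is 2/pi
R = np.fft.rfft(r)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
R[freqs > 500.0] = 0.0
envelope = (np.pi / 2) * np.fft.irfft(R, len(t))

# The detected envelope follows A + g(t)
assert np.max(np.abs(envelope - (A + g))) < 0.02
print("envelope closely follows A + g(t)")
```

Because A + g(t) ≥ 0, rectification leaves (A + g(t))·|cos(2πfc t)|, whose baseband content is just a scaled copy of the envelope; everything else sits near multiples of 2fc and is removed by the lowpass filter.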


3B.14
Single Sideband (SSB) Modulation
In so far as the transmission of information is concerned, only one sideband is
necessary, and if the carrier and the other sidebands are suppressed at the
transmitter, no information is lost. When only one sideband is transmitted, the
modulation system is referred to as a single-sideband (SSB) system.
A method for the creation of an SSB signal is illustrated below:
[Figure 3B.13 – SSB modulation: g(t) multiplies cos(2πfc t) in one path; in the other path g(t) passes through a -90° phase-shifter and multiplies sin(2πfc t); an adder combines the two products to give the modulated signal gSSB(t)]
The local oscillators in Figure 3B.13 are devices that produce sinusoidal signals. One oscillator is cos(2πfc t). The other oscillator has a phase which is said to be in quadrature, or a phase of −π/2 with respect to the first oscillator, to give sin(2πfc t). The multiplier is implemented with a non-linear device and the adder is a simple op-amp circuit (for input signals with a bandwidth less than 1 MHz). The "-90° phase-shifter" is a device that shifts the phase of all positive frequencies by -90° and all negative frequencies by +90°. It is more commonly referred to as a Hilbert transformer. Note that the local oscillator sin(2πfc t) can be generated from cos(2πfc t) by passing it through a Hilbert transformer.

3B.15
Quadrature Amplitude Modulation (QAM)
In one form of AM transmission, two messages that occupy the same part of the spectrum can be sent by combining their spectra in quadrature. The first message signal g₁(t) is multiplied by a carrier cos(2πfc t), while the second message signal g₂(t) is multiplied by sin(2πfc t). The process is illustrated below:
[Figure 3B.14 – QAM modulation: g₁(t) multiplies cos(2πfc t), g₂(t) multiplies sin(2πfc t), and an adder combines the two products to give the modulated signal gQAM(t)]
The local oscillators in Figure 3B.14 are devices that produce sinusoidal signals. One oscillator is cos(2πfc t). The other oscillator has a phase which is said to be in quadrature, or a phase of −π/2 with respect to the first oscillator, to give sin(2πfc t). The multiplier is implemented with a non-linear device and the adder is a simple op-amp circuit (for input signals with a bandwidth less than 1 MHz).


3B.16
The spectrum of the modulated signal has two parts in quadrature. Each part is
a replica of the original message spectrum but “shifted up” in frequency. The
parts do not “interfere” since the first message forms the real (or in-phase, I)
part of the modulated spectrum and the second message forms the imaginary
(or quadrature, Q) part. An abstract view of the spectrum showing its real and
imaginary parts is shown below:

[Figure 3B.15 – The original message spectra G₁(f) and G₂(f), and the QAM spectrum GQAM(f): the first message forms the real (I) part and the second the imaginary (Q) part, centred on ±fc]
Normally, we represent a spectrum by its magnitude and phase, and not its real
and imaginary parts, but in this case it is easier to picture the spectrum in
“rectangular coordinates” rather than “polar coordinates”.
If the spectrum of both message signals is bandlimited to B Hz, then the
modulated signal spectrum has an upper sideband from fc to fc + B and a lower sideband from fc − B to fc.
The appearance of the modulated signal in the time domain is that of a sinusoid
with a time-varying amplitude and phase, but since the amplitude of the
quadrature components (cos and sin) of the carrier vary in proportion to the
message signals, this modulation technique is called quadrature amplitude

modulation, or QAM for short.


3B.17
Coherent Demodulation
There are several ways to demodulate the QAM signal - we will consider a
simple analog method called coherent (or synchronous) demodulation.
[Figure 3B.16 – Coherent QAM demodulation: gQAM(t) is multiplied by cos(2πfc t) and sin(2πfc t) in two separate paths; lowpass filtering each product recovers g₁(t) and g₂(t)]
The first stage of the demodulation process involves applying the modulated
waveform g QAM t  to two separate multipliers. The other signals applied to
each multiplier are local oscillators (in quadrature again) which are assumed to
be synchronized with the modulator, i.e. the frequency and phase of the
demodulator’s local oscillators are exactly the same as the frequency and phase
of the modulator's local oscillators. The signals g₁(t) and g₂(t) can be "extracted" from the output of each multiplier by lowpass filtering.
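The quadrature separation can be sketched numerically. The code below is not part of the original notes: it uses Python with NumPy, single-tone messages sharing the same band, and an ideal FFT brickwall lowpass filter; the sample rate, tones and cutoff are arbitrary choices.

```python
import numpy as np

fs = 8000.0
fc = 1000.0
t = np.arange(0, 1.0, 1 / fs)

g1 = np.cos(2 * np.pi * 50 * t)    # first message
g2 = np.sin(2 * np.pi * 80 * t)    # second message, occupying the same band as g1

q = g1 * np.cos(2 * np.pi * fc * t) + g2 * np.sin(2 * np.pi * fc * t)

def lowpass(v, cutoff):
    # ideal brickwall lowpass via the FFT (a sketch, not a practical filter)
    V = np.fft.rfft(v)
    V[np.fft.rfftfreq(len(v), 1 / fs) > cutoff] = 0.0
    return np.fft.irfft(V, len(v))

# Coherent demodulation with synchronized quadrature oscillators, gain 2
g1_hat = 2 * lowpass(q * np.cos(2 * np.pi * fc * t), 200.0)
g2_hat = 2 * lowpass(q * np.sin(2 * np.pi * fc * t), 200.0)

assert np.max(np.abs(g1_hat - g1)) < 1e-9
assert np.max(np.abs(g2_hat - g2)) < 1e-9
print("both messages recovered from the same band")
```

The cross-terms g₂ sin·cos and g₁ sin·cos land entirely at 2fc, which is why each lowpass output contains only its own message.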


3B.18
Summary

Modulation shifts a baseband spectrum to a higher frequency range. It is
achieved in many ways – the simplest being the multiplication of the signal
by a carrier sinusoid.

Demodulation is the process of returning a modulated signal to the
baseband. Modulation and demodulation form the basis of modern
communication systems.

References
Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York,
1994.
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using

MATLAB®, Prentice-Hall, 1997.
Lathi, B. P.: Modern Digital and Analog Communication Systems, Holt-Saunders, Tokyo, 1983.


3B.19
Exercises
1.
Sketch the Fourier transform of the waveform g(t) = [1 + cos(2πt)] sin(20πt).

2.
A scheme used in stereophonic FM broadcasting is shown below:
[Figure: block diagram with inputs L and R, two adders, a multiplier driven by cos(2πfc t), and labelled points A, B and C]

The input to the left channel (L) is a 1 kHz sinusoid, the input to the right
channel (R) is a 2 kHz sinusoid. Draw the spectrum of the signal at points A, B
and C if f c  38 kHz .

3.
Draw a block diagram of a scheme that could be used to recover the left (L)
and right (R) signals of the system shown in Question 2 if it uses the signal at
C as the input.


4A.1

Lecture 4A – The Laplace Transform

The Laplace transform. Finding a Fourier transform using a Laplace transform. Finding Laplace transforms. Standard Laplace transforms. Laplace transform properties. Evaluation of the inverse Laplace transform. Transforms of differential equations. The system transfer function. Block diagrams. Cascading blocks. Standard form of a feedback control system. Block diagram transformations.

The Laplace Transform

The Laplace transform is a generalisation of the Fourier transform. We need it for two reasons:

1. There are some important signals, such as the unbounded ramp function r(t) = t u(t) and the growing exponential h(t) = e^(at) u(t), a > 0, which do not have a Fourier transform. In both cases the time-domain function is such that ∫ |g(t)| dt = ∞.

2. We wish to determine a system's response from the application of an input signal at t = 0, and include any initial conditions in the system's response.

The Laplace transform fulfils both of these objectives.

If we multiply a time-domain function by an exponential convergence factor before taking the Fourier transform, then we guarantee convergence of the Fourier transform for a broader class of functions. For example, if we multiply the ramp r(t) = t u(t) by e^(−σt), then for σ "sufficiently large", the decay of the exponential will be greater than the increase of the ramp as t → ∞:

[Figure 4A.1 – The ramp t, the convergence factor e^(−σt), and their product t e^(−σt)]

4A.2

In other words, the factor e^(−σt) is used to try and force the overall function to zero for large values of t, in an attempt to achieve convergence of the Fourier transform (note that this is not always possible). After multiplying the original function by an exponential convergence factor, the Fourier transform is:

    F{x(t) e^(−σt)} = ∫ x(t) e^(−σt) e^(−jωt) dt = ∫ x(t) e^(−(σ+jω)t) dt        (4A.1)

The resulting integral is a function of σ + jω, and so we can write:

    X(σ + jω) = ∫ x(t) e^(−(σ+jω)t) dt        (4A.2)

Letting s = σ + jω, this becomes:

    X(s) = ∫_{−∞}^{∞} x(t) e^(−st) dt        (4A.3)

which is called the two-sided Laplace transform.

By making the integral only valid for time t ≥ 0, we assume that signals are only applied at t = 0 and impulse responses are causal and start at t = 0. That is, we can incorporate initial conditions into the s-domain description of signals and systems. The lower limit in the Laplace transform can then be replaced with zero. The result is termed the one-sided Laplace transform, which we will refer to simply as "the Laplace transform":

    X(s) = ∫_{0}^{∞} x(t) e^(−st) dt        (4A.4)

4A.3

The Laplace transform variable, s, is termed complex frequency – it is a complex number and can be graphed on a complex plane:

[Figure 4A.2 – The s-plane: s = σ + jω, with σ on the real axis and jω on the imaginary axis]

The inverse Laplace transform can be obtained by considering the inverse Fourier transform of X(σ + jω):

    x(t) e^(−σt) = (1/2π) ∫ X(σ + jω) e^(jωt) dω        (4A.5)

where we have used the fact that dω = 2π df. We then have:

    x(t) = (1/2π) ∫ X(σ + jω) e^((σ+jω)t) dω        (4A.6)

But s = σ + jω (σ fixed) and so ds = j dω. We then get:

    x(t) = (1/2πj) ∫_{c−j∞}^{c+j∞} X(s) e^(st) ds        (4A.7)

It is common practice to use a bidirectional arrow to indicate a Laplace transform pair, as follows:

    x(t) ↔ X(s)        (4A.8)

Region of Convergence (ROC)

The region of convergence (ROC) for the Laplace transform X(s) is the set of values of s (the region in the complex plane) for which the integral in Eq. (4A.4) converges.

Example

To find the Laplace transform of a signal x(t) = e^(−at)u(t) and its ROC, we substitute into the definition of the Laplace transform:

X(s) = ∫_0⁻^∞ e^(−at)u(t)e^(−st) dt   (4A.9)

Because u(t) = 0 for t < 0 and u(t) = 1 for t ≥ 0:

X(s) = ∫_0⁻^∞ e^(−at)e^(−st) dt = ∫_0⁻^∞ e^(−(s+a)t) dt = [−1/(s+a) · e^(−(s+a)t)]_0⁻^∞   (4A.10)

Note that s is complex and, as t → ∞, the term e^(−(s+a)t) does not necessarily vanish. Here we recall that for a complex number z = α + jβ:

e^(−zt) = e^(−(α+jβ)t) = e^(−αt)e^(−jβt)   (4A.11)

Now |e^(−jβt)| = 1 regardless of the value of t. Therefore, as t → ∞, e^(−zt) → 0 only if α > 0, and e^(−zt) → ∞ if α < 0. Thus:

lim_(t→∞) e^(−zt) = 0 for Re z > 0, and ∞ for Re z < 0   (4A.12)

Clearly:

lim_(t→∞) e^(−(s+a)t) = 0 for Re(s+a) > 0, and ∞ for Re(s+a) < 0   (4A.13)

Use of this result in Eq. (4A.10) yields:

X(s) = 1/(s+a),   Re(s+a) > 0   (4A.14)

or:

e^(−at)u(t) ⇔ 1/(s+a),   Re s > −a   (4A.15)

The ROC of X(s) is Re s > −a, as shown in the shaded area below. This fact means that the integral defining X(s) in Eq. (4A.10) exists only for the values of s in the shaded region. For other values of s, the integral does not converge.

[Figure 4A.3 – the signal e^(−at)u(t), and the region of convergence Re s > −a of its Laplace transform in the s-plane, with a possible path of integration along c ± j∞ shown dotted]
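The transform pair just derived can be checked numerically. The following Python sketch (Python is used here purely for illustration; the helper name laplace_numeric and the sample values a = 2 and s = 1, 3, 10 are our choices, not from the notes) approximates the defining integral by a midpoint Riemann sum and compares it with 1/(s+a) at points inside the ROC:

```python
import math

def laplace_numeric(f, s, T=40.0, n=100000):
    """Midpoint-rule approximation of the one-sided Laplace integral
    X(s) = integral from 0 to infinity of f(t) * exp(-s*t) dt,
    truncated at t = T (valid when the integrand has decayed by then)."""
    dt = T / n
    return sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
               for k in range(n)) * dt

a = 2.0
x = lambda t: math.exp(-a * t)   # x(t) = e^{-at} u(t) for t >= 0

for s in (1.0, 3.0, 10.0):       # all inside the ROC: Re s > -a
    print(s, laplace_numeric(x, s), 1.0 / (s + a))
```

Outside the ROC (real s < −a here) the integrand grows without bound, so the truncated sum does not settle to any value — which is the numerical face of non-convergence.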

The ROC is required for evaluating the inverse Laplace transform, as defined by Eq. (4A.7). The operation of finding the inverse transform requires integration in the complex plane. The path of integration is along c + jω, with ω varying from −∞ to ∞, and this path must lie in the ROC for X(s). For the signal e^(−at)u(t), this is possible if c > −a. One possible path of integration is shown (dotted) in Figure 4A.3. We can avoid this integration by compiling a table of Laplace transforms.

If all the signals we deal with are restricted to t ≥ 0, then the inverse transform is unique, and we do not need to specify the ROC.

Finding a Fourier Transform using a Laplace Transform

If the function x(t) is zero for all t < 0, and the Laplace transform exists along the imaginary axis of the s-plane (s = jω), then:

X(ω) = X(s)|_(s=jω)   (4A.16)

That is, the Fourier transform X(ω) is just the Laplace transform X(s) with s = jω.

Example

To find the Fourier transform of a signal x(t) = e^(−3t)u(t), we substitute s = jω into its Laplace transform:

X(ω) = 1/(s+3)|_(s=jω) = 1/(jω + 3) = 1/(3 + j2πf)   (4A.17)

A quick check from our knowledge of the Fourier transform shows this to be correct (because the Laplace transform includes the jω axis in its region of convergence).
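The substitution s = jω in this example can also be sanity-checked numerically: integrate the Fourier integral of e^(−3t)u(t) directly and compare with the Laplace transform evaluated on the jω-axis. This Python sketch is ours (the helper name fourier_numeric and the sample frequencies are illustrative choices):

```python
import cmath

def fourier_numeric(w, T=30.0, n=50000):
    """Midpoint-rule approximation of X(w) = integral_0^inf e^{-3t} e^{-jwt} dt,
    the Fourier transform of x(t) = e^{-3t} u(t)."""
    dt = T / n
    total = 0j
    for k in range(n):
        t = (k + 0.5) * dt
        total += cmath.exp(-(3 + 1j * w) * t) * dt
    return total

for w in (0.0, 1.0, 5.0):
    print(w, fourier_numeric(w), 1 / (3 + 1j * w))   # X(s) = 1/(s+3) at s = jw
```

At ω = 0 both sides give 1/3, matching the peak value of the Fourier magnitude plot discussed below.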

We can also view the Laplace transform geometrically, if we are willing to split the transform into its magnitude and phase (remember X(s) is a complex number). The magnitude of X(s) = 1/(s+3) can be visualised by graphing |X(s)| as the height of a surface spread out over the s-plane:

[Figure 4A.4 – the graph of |X(s)| over the s-plane forms a surface, with σ and jω axes; the surface rises to infinity at s = −3]

There are several things we can notice about the plot above. First, note that the surface has been defined only in the ROC, i.e. for Re s > −3. Secondly, the surface approaches an infinite value at the point s = −3. Such a point is termed a pole, in obvious reference to the surface being analogous to a tent (a zero is a point where the surface has a value of zero).

We can completely specify X(s), apart from a constant gain factor, by drawing a so-called pole-zero plot:

[Figure 4A.5 – a pole-zero plot is a shorthand way of representing a Laplace transform: the s-plane with a single pole marked at s = −3]

A pole-zero plot locates all the critical points in the s-plane that completely specify the function X(s) (to within an arbitrary constant), and it is a useful analytic and design tool.

Thirdly, one cut of the surface has been fortuitously placed along the imaginary axis. If we graph the height of the surface along this cut against ω, we get a picture of the magnitude of the Fourier transform vs. ω:

[Figure 4A.6 – the Fourier transform is obtained from the jω-axis of a plot of the Laplace transform over the entire s-plane: the magnitude of X(ω) = 1/(3 + jω), with peak value 1/3 at ω = 0]

With these ideas in mind, it should be apparent that a function that has a Laplace transform with a ROC in the right-half plane does not have a Fourier transform (because the Laplace transform surface will never intersect the jω-axis).

Finding Laplace Transforms

Like the Fourier transform, it is only necessary from a practical viewpoint to find Laplace transforms for a few standard signals, and then formulate several properties of the Laplace transform. Then, just like the Fourier transform, finding a Laplace transform of a function will consist of starting with a known transform and successively applying known transform properties.

Example

To find the Laplace transform of a signal x(t) = δ(t), we substitute into the Laplace transform definition:

X(s) = ∫_0⁻^∞ δ(t)e^(−st) dt   (4A.18)

Recognising this as a sifting integral, we arrive at a standard transform pair:

δ(t) ⇔ 1   (4A.19)

Thus, the Laplace transform of an impulse is 1.

Example

To find the Laplace transform of the unit-step, just substitute a = 0 into Eq. (4A.15). The result is:

u(t) ⇔ 1/s   (4A.20)

This is a frequently used transform in the study of electrical circuits and control systems.

Example

To find the Laplace transform of cos(ω₀t)u(t), we recognise that:

cos(ω₀t)u(t) = (1/2)(e^(jω₀t) + e^(−jω₀t))u(t)   (4A.21)

From Eq. (4A.15), it follows that:

(1/2)(e^(jω₀t) + e^(−jω₀t))u(t) ⇔ (1/2)[1/(s − jω₀) + 1/(s + jω₀)] = s/(s² + ω₀²)   (4A.22)

and so we have another standard transform:

cos(ω₀t)u(t) ⇔ s/(s² + ω₀²)   (4A.23)

A similar derivation can be used to find the Laplace transform of sin(ω₀t)u(t).

Most of the Laplace transform properties are inherited generalisations of Fourier transform properties. There are a few important exceptions, based on the fact that the Laplace transform is one-sided (from 0 to ∞) whereas the Fourier transform is double-sided (from −∞ to ∞), and the Laplace transform is more general in the sense that it covers the entire s-plane, not just the jω-axis.
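The algebraic step of combining the two complex-conjugate terms into s/(s² + ω₀²) is easy to verify by evaluating both sides at a few complex sample points. A small Python check (the sample values of s and ω₀ are our own illustrative choices):

```python
# Check the identity (1/2)(1/(s - j*w0) + 1/(s + j*w0)) == s/(s^2 + w0^2)
# at a few arbitrary complex sample points.
w0 = 3.0
for s in (1.0 + 0j, 2.0 + 1j, 0.5 - 4j):
    lhs = 0.5 * (1 / (s - 1j * w0) + 1 / (s + 1j * w0))
    rhs = s / (s ** 2 + w0 ** 2)
    print(s, lhs, rhs)
```

Since both sides are rational functions of s that agree everywhere they are defined, agreement at sample points is exactly what we expect.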

Differentiation Property

One of the most important properties of the Laplace transform is the differentiation property. It enables us to directly transform a differential equation into an algebraic equation in the complex variable s. It is much easier to solve algebraic equations than differential equations!

The Laplace transform of the derivative of a function is given by:

L{dx/dt} = ∫_0⁻^∞ (dx/dt)e^(−st) dt   (4A.24)

Integrating by parts, we obtain:

L{dx/dt} = [x(t)e^(−st)]_0⁻^∞ + s ∫_0⁻^∞ x(t)e^(−st) dt   (4A.25)

For the Laplace integral to converge [i.e. for X(s) to exist], it is necessary that x(t)e^(−st) → 0 as t → ∞ for the values of s in the ROC for X(s). Thus:

d x(t)/dt ⇔ sX(s) − x(0⁻)   (4A.26)
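The differentiation property can be spot-checked against the exponential pair derived earlier: for x(t) = e^(−at)u(t) we have dx/dt = −a·e^(−at) for t > 0, whose transform −a/(s+a) should equal sX(s) − x(0⁻). A minimal Python sketch (sample values a = 2, s = 5 are ours):

```python
# Spot-check of L{dx/dt} = sX(s) - x(0-) for x(t) = e^{-at} u(t).
a, s = 2.0, 5.0
X = 1 / (s + a)        # X(s) = 1/(s+a)
lhs = -a / (s + a)     # L{-a e^{-at}} = -a/(s+a), using pair (L.3)
rhs = s * X - 1.0      # sX(s) - x(0), with x(0) = 1
print(lhs, rhs)
```

The two agree because s/(s+a) − 1 = −a/(s+a) identically.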

Standard Laplace Transforms

u(t) ⇔ 1/s   (L.1)

δ(t) ⇔ 1   (L.2)

e^(−αt)u(t) ⇔ 1/(s + α)   (L.3)

cos(ωt)u(t) ⇔ s/(s² + ω²)   (L.4)

sin(ωt)u(t) ⇔ ω/(s² + ω²)   (L.5)

Laplace Transform Properties

Assuming x(t) ⇔ X(s):

a·x(t) ⇔ a·X(s)   (L.6)  Linearity

x(t/T) ⇔ T·X(sT)   (L.7)  Scaling

x(t − c)u(t − c) ⇔ e^(−cs)X(s)   (L.8)  Time shifting

e^(at)x(t) ⇔ X(s − a)   (L.9)  Multiplication by exponential

t^N x(t) ⇔ (−1)^N d^N X(s)/ds^N   (L.10)  Multiplication by t

d x(t)/dt ⇔ sX(s) − x(0⁻)   (L.11)  Differentiation

∫_0⁻^t x(τ) dτ ⇔ X(s)/s   (L.12)  Integration

x₁(t) ∗ x₂(t) ⇔ X₁(s)X₂(s)   (L.13)  Convolution

x(0⁺) = lim_(s→∞) sX(s)   (L.14)  Initial-value theorem

lim_(t→∞) x(t) = lim_(s→0) sX(s)   (L.15)  Final-value theorem
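The initial- and final-value theorems (L.14, L.15) are easy to see in action numerically. As a hypothetical example (not from the notes), take x(t) = (1 − e^(−t))u(t), whose transform is X(s) = 1/(s(s+1)), so sX(s) = 1/(s+1):

```python
def sX(s):
    # sX(s) for X(s) = 1/(s(s+1)), the transform of x(t) = (1 - e^{-t}) u(t)
    return 1 / (s + 1)

print(sX(1e9))    # s -> infinity gives ~0 = x(0+)   (initial-value theorem)
print(sX(1e-9))   # s -> 0 gives ~1 = lim x(t), t -> infinity  (final-value theorem)
```

Both limits match the time-domain signal directly: x(0⁺) = 0 and x(t) → 1. Note the final-value theorem is only valid when sX(s) has no poles in the right half-plane or on the jω-axis.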

 s  p1 s  p 2 s  pN (4A...28) where the pi are called the poles of X s  .28) is: Rational Laplace transforms written in terms of poles cN c c X s   1  2  .. Such functions can be expressed as a sum of simpler functions by using partial fraction expansion. (4A. They are given by: Residues defined ci  s  pi  X s s  pi Signals and Systems 2014 (4A.  a1s  a0 (4A...  b1s  b0 X s   a N s  p1 s  p2 . which is a term derived from complex variable theory.30) .. we can find inverse transforms from a table of Laplace transforms. then the partial fraction expansion of Eq.4A.  b1 s  b0 X s   a N s N  a N 1 s N 1  .27) This can also be written: bM s M  bM 1s M 1  . Most of the transforms of practical interest are rational functions. If the poles are all distinct.29) We call the coefficients ci “residues”. Instead.s  p N  (4A.. that is. ratios of polynomials in s..14 Evaluation of the Inverse Laplace Transform Finding the inverse Laplace transform requires integration in the complex plane. which is normally difficult and time consuming to compute. All we need to do is express X s  as a sum of simpler functions of the forms listed in the table... Rational Laplace Transforms A rational Laplace transform of degree N can be expressed as: bM s M  bM 1 s M 1  .

Taking the inverse Laplace transform of Eq. (4A.29) using standard transform (L.3) and property (L.7) gives us:

x(t) = c₁e^(p₁t) + c₂e^(p₂t) + ... + c_N e^(p_N t),   t ≥ 0   (4A.31)

Note that the form of the time-domain expression is determined only by the poles of X(s)!

Example

Find the inverse Laplace transform of:

Y(s) = (2s + 2) / ((s + 2)(s + 4))   (4A.32)

By expansion into partial fractions we have:

(2s + 2) / ((s + 2)(s + 4)) = c₁/(s + 2) + c₂/(s + 4)   (4A.33)

To find c₁, multiply both sides of Eq. (4A.33) by (s + 2):

(2s + 2)/(s + 4) = c₁ + c₂(s + 2)/(s + 4)   (4A.34)

As this equation must be true for all values of s, set s = −2 to remove c₂:

(2(−2) + 2)/(−2 + 4) = c₁ = −1   (4A.35)

An equivalent way to find c₁, without performing algebraic manipulation by hand, is to mentally cover up the factor (s + 2) on the left-hand side, and then evaluate the left-hand side at the value of s that makes the covered factor zero, i.e. at s = −2. This mental procedure is known as Heaviside's cover-up rule.

Applying Heaviside's cover-up rule for c₂ results in the mental equation:

(2(−4) + 2)/(−4 + 2) = c₂ = 3   (4A.36)

Therefore, the partial fraction expansion is:

Y(s) = −1/(s + 2) + 3/(s + 4)   (4A.37)

The inverse Laplace transform can now be easily evaluated using standard transform (L.3) and property (L.7):

y(t) = −e^(−2t) + 3e^(−4t),   t ≥ 0   (4A.38)

Note that the continual writing of u(t) after each function has been replaced by the more notationally convenient condition of t ≥ 0 on the total solution.

If there is a pole p₁ = α + jβ, then the complex conjugate p₁* = α − jβ is also a pole (that's how we get real coefficients in the polynomial). In this case the residues of the two poles are complex conjugates and:

X(s) = c₁/(s − p₁) + c₁*/(s − p₁*) + c₃/(s − p₃) + ... + c_N/(s − p_N)   (4A.39)

The inverse transform is:

x(t) = c₁e^(p₁t) + c₁*e^(p₁*t) + c₃e^(p₃t) + ... + c_N e^(p_N t),   t ≥ 0   (4A.40)

which can be expressed as:

x(t) = 2|c₁| e^(αt) cos(βt + ∠c₁) + c₃e^(p₃t) + ... + c_N e^(p_N t),   t ≥ 0   (4A.41)

Complex-conjugate poles lead to a sinusoidal response.

Now suppose the pole p₁ of X(s) is repeated r times. Then the partial fraction expansion of Eq. (4A.28) is:

X(s) = c₁/(s − p₁) + c₂/(s − p₁)² + ... + c_r/(s − p₁)^r + c_(r+1)/(s − p_(r+1)) + ... + c_N/(s − p_N)   (4A.42)

The residues are given by Eq. (4A.30) for the distinct poles r+1 ≤ i ≤ N, and by:

c_(r−i) = (1/i!) [d^i/ds^i ((s − p₁)^r X(s))]_(s=p₁),   0 ≤ i ≤ r − 1   (4A.43)

for the repeated poles.

Example

Find the inverse Laplace transform of:

Y(s) = (s + 4) / ((s + 1)(s + 2)²)   (4A.44)

Expanding into partial fractions we have:

(s + 4) / ((s + 1)(s + 2)²) = c₁/(s + 1) + c₂/(s + 2) + c₃/(s + 2)²   (4A.45)

Find c₁ by multiplying both sides by (s + 1) and setting s = −1 (Heaviside's cover-up rule):

(−1 + 4)/(−1 + 2)² = c₁ = 3   (4A.46)

To find c₃, multiply throughout by (s + 2)² and set s = −2:

(−2 + 4)/(−2 + 1) = c₃ = −2   (4A.47)

Note that Heaviside's cover-up rule only applies to the repeated partial fraction with the highest power. To find c₂, we have to use Eq. (4A.43). Multiplying throughout by (s + 2)² gives:

(s + 4)/(s + 1) = c₁(s + 2)²/(s + 1) + c₂(s + 2) + c₃   (4A.48)

Now to get rid of c₃, differentiate with respect to s:

d/ds [(s + 4)/(s + 1)] = d/ds [c₁(s + 2)²/(s + 1)] + d/ds [c₂(s + 2)]   (4A.49)

The differentiation of the quotients can be obtained using:

d/dx (u/v) = (v du/dx − u dv/dx) / v²   (4A.50)

The second term on the RHS becomes:

d/ds [c₂(s + 2)] = c₂   (4A.51)

Differentiation of the c₁ term will result in an (s + 2) multiplier, so this term vanishes if s is set to −2 in the equation after differentiation. The LHS of our problem becomes:

d/ds [(s + 4)/(s + 1)] = [(s + 1) − (s + 4)] / (s + 1)² = −3/(s + 1)²   (4A.52)

Therefore, setting s = −2, we can resolve c₂:

c₂ = −3/(−2 + 1)² = −3   (4A.53)

Therefore, the partial fraction expansion is:

Y(s) = 3/(s + 1) − 3/(s + 2) − 2/(s + 2)²   (4A.54)
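The residue calculations for this repeated-pole example can be mirrored numerically. This Python sketch (an illustration, not part of the notes) applies the cover-up rule for c₁ and c₃, approximates the derivative formula of Eq. (4A.43) for c₂ with a central difference, and then checks the expansion against Y(s) at a sample point:

```python
def Y(s):
    # Y(s) = (s+4) / ((s+1)(s+2)^2) from the worked example
    return (s + 4) / ((s + 1) * (s + 2) ** 2)

# Cover-up rule: evaluate with the relevant factor removed.
c1 = (-1 + 4) / (-1 + 2) ** 2        # cover (s+1), set s = -1
c3 = (-2 + 4) / (-2 + 1)             # cover (s+2)^2, set s = -2

# c2 = d/ds[(s+2)^2 Y(s)] at s = -2, via a central difference
g = lambda s: (s + 4) / (s + 1)      # (s+2)^2 Y(s)
h = 1e-6
c2 = (g(-2 + h) - g(-2 - h)) / (2 * h)

print(c1, c2, c3)

# Verify: Y(s) == c1/(s+1) + c2/(s+2) + c3/(s+2)^2 at a test point
s0 = 1.0
print(Y(s0), c1 / (s0 + 1) + c2 / (s0 + 2) + c3 / (s0 + 2) ** 2)
```

The computed residues reproduce the hand-derived values 3, −3 and −2.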

The inverse Laplace transform can now be easily evaluated using (L.3), (L.7) and (L.10):

y(t) = 3e^(−t) − 3e^(−2t) − 2te^(−2t),   t ≥ 0   (4A.55)

A multiple pole will, in general, produce a coefficient of the exponential term which is a polynomial in t.

In summary, there are three types of response from an LTI system:

 Real poles: the response is a sum of exponentials.
 Complex poles: the response is a sum of exponentially damped sinusoids.
 Multiple poles: the response is a sum of exponentials, but the coefficients of the exponentials are polynomials in t.

Thus, we see that any high order rational function X(s) can be decomposed into a combination of first-order factors. MATLAB® can be used to obtain the poles and residues for a given rational function X(s).

Example

Given:

X(s) = (s² − 2s + 1) / (s³ + 3s² + 4s + 2)

calculate x(t). Using MATLAB® we just do:

num = [1 -2 1];
den = [1 3 4 2];
[r,p] = residue(num,den);

which returns vectors of the residues and poles in r and p respectively.

Transforms of Differential Equations

The time-differentiation property of the Laplace transform sets the stage for solving linear differential equations with constant coefficients. Because d^k y/dt^k ⇔ s^k Y(s), the Laplace transform of a differential equation is an algebraic equation that can be readily solved for Y(s). Next we take the inverse Laplace transform of Y(s) to find the desired solution y(t).

[Figure 4A.7 – solving a differential equation directly in the time-domain is difficult; the Laplace transform (LT) turns it into an easy algebraic equation in the s-domain, and the inverse Laplace transform (ILT) turns the s-domain solution back into the time-domain solution]

Example

Solve the second-order linear differential equation:

(D² + 5D + 6)y(t) = (D + 1)x(t)   (4A.56)

for the initial conditions y(0⁻) = 2 and ẏ(0⁻) = 1 and the input x(t) = e^(−4t)u(t). The equation is:

d²y/dt² + 5 dy/dt + 6y = dx/dt + x   (4A.57)

Let y(t) ⇔ Y(s). Then from property (L.11):

dy/dt ⇔ sY(s) − y(0⁻) = sY(s) − 2

d²y/dt² ⇔ s²Y(s) − sy(0⁻) − ẏ(0⁻) = s²Y(s) − 2s − 1   (4A.58)

Also, for x(t) = e^(−4t)u(t), we have:

X(s) = 1/(s + 4)  and  dx/dt ⇔ sX(s) − x(0⁻) = s/(s + 4) − 0 = s/(s + 4)   (4A.59)

Taking the Laplace transform of Eq. (4A.57), we obtain:

[s²Y(s) − 2s − 1] + 5[sY(s) − 2] + 6Y(s) = s/(s + 4) + 1/(s + 4)   (4A.60)

Collecting all the terms of Y(s) and the remaining terms separately on the left-hand side, we get:

(s² + 5s + 6)Y(s) − (2s + 11) = (s + 1)/(s + 4)   (4A.61)

so that:

(s² + 5s + 6)Y(s) = (2s + 11) + (s + 1)/(s + 4)   (4A.62)
                    [initial condition terms]  [input term]

Therefore:

Y(s) = (2s + 11)/(s² + 5s + 6) + (s + 1)/((s + 4)(s² + 5s + 6))
       [zero-input component]    [zero-state component]

     = [7/(s + 2) − 5/(s + 3)] + [−(1/2)/(s + 2) + 2/(s + 3) − (3/2)/(s + 4)]   (4A.63)

Taking the inverse transform yields:

y(t) = [7e^(−2t) − 5e^(−3t)] + [−(1/2)e^(−2t) + 2e^(−3t) − (3/2)e^(−4t)]
       [zero-input response]    [zero-state response]

     = (13/2)e^(−2t) − 3e^(−3t) − (3/2)e^(−4t),   t ≥ 0   (4A.64)

The initial condition terms give rise to the zero-input response. The zero-state response terms are exclusively due to the input. The Laplace transform method gives the total response, which includes zero-input and zero-state components. This is true for any system. Do the revision problem assuming zero initial conditions.

Consider the Nth order input/output differential equation:

d^N y(t)/dt^N + Σ_(i=0)^(N−1) a_i d^i y(t)/dt^i = Σ_(i=0)^M b_i d^i x(t)/dt^i   (4A.65)

If we take the Laplace transform of this equation using (L.11), assuming zero initial conditions, we get:

Y(s) = [(b_M s^M + b_(M−1) s^(M−1) + ... + b₁s + b₀) / (s^N + a_(N−1) s^(N−1) + ... + a₁s + a₀)] X(s)   (4A.66)

Systems described by differential equations thus transform into rational functions of s. Now define:

Y(s) = H(s)X(s)   (4A.67)

For the case described by Eq. (4A.65), the transfer function is a rational polynomial given by:

H(s) = (b_M s^M + b_(M−1) s^(M−1) + ... + b₁s + b₀) / (s^N + a_(N−1) s^(N−1) + ... + a₁s + a₀)   (4A.68)

The function H(s) is called the transfer function of the system since it specifies the transfer from the input to the output in the s-domain (assuming no initial energy).
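The closed-form total response of the worked example, y(t) = (13/2)e^(−2t) − 3e^(−3t) − (3/2)e^(−4t), can be verified by substituting it back into the differential equation. Since each term is an exponential, the derivatives are exact; this Python sketch (our own check, not from the notes) confirms y'' + 5y' + 6y = x' + x for t > 0 with x = e^(−4t):

```python
import math

# Modes (coefficient, pole) of y(t) = 6.5 e^{-2t} - 3 e^{-3t} - 1.5 e^{-4t}
modes = [(6.5, -2.0), (-3.0, -3.0), (-1.5, -4.0)]

def lhs(t):
    # y'' + 5y' + 6y: each mode c*e^{pt} contributes c*(p^2 + 5p + 6)*e^{pt}
    return sum(c * (p * p + 5 * p + 6) * math.exp(p * t) for c, p in modes)

def rhs(t):
    # x' + x with x(t) = e^{-4t}, valid for t > 0
    return -4 * math.exp(-4 * t) + math.exp(-4 * t)

for t in (0.1, 1.0, 3.0):
    print(t, lhs(t), rhs(t))

print(sum(c for c, p in modes))   # y(0-) = 2, matching the initial condition
```

The e^(−2t) and e^(−3t) modes drop out of the left-hand side because their poles are roots of s² + 5s + 6, leaving exactly −3e^(−4t) on both sides.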

The System Transfer Function

For a linear time-invariant system described by a convolution integral, we can take the Laplace transform and get:

Y(s) = H(s)X(s)   (4A.69)

which shows us that:

h(t) ⇔ H(s)   (4A.70)

The transfer function is the Laplace transform of the impulse response! This is the relationship between the time-domain and frequency-domain descriptions of a system.

Instead of writing H(s) as in Eq. (4A.68), it can be expressed in the factored form:

H(s) = b_M (s − z₁)(s − z₂)...(s − z_M) / ((s − p₁)(s − p₂)...(s − p_N))   (4A.71)

where the z's are the zeros of the system and the p's are the poles of the system. This shows us that, apart from a constant factor b_M, the poles and zeros determine the transfer function completely. They are often displayed on a pole-zero diagram, which is a plot in the s-domain showing the location of all the poles (marked by x) and all the zeros (marked by o). You should be familiar with direct construction of the transfer function for electric circuits from previous subjects.

Block Diagrams

Systems are often represented as interconnections of s-domain "blocks", with each block containing a transfer function.

Example

Given the following electrical system:

[Figure 4A.8 – a simple first-order system: input v_i(t), series resistor R, shunt capacitor C, output v_o(t) taken across the capacitor]

we can perform KVL around the loop to get the differential equation:

v_o(t) + RC dv_o(t)/dt = v_i(t)   (4A.72)

Assuming zero initial conditions, the Laplace transform of this is:

V_o(s) + sRC·V_o(s) = V_i(s)   (4A.73)

and therefore the system transfer function is:

V_o(s)/V_i(s) = 1/(1 + sRC) = 1/(1 + sT)   (4A.74)

where T = RC, the time constant.
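Evaluating this transfer function on the jω-axis gives the circuit's frequency response. A short Python sketch (the time constant T = 1 ms is an assumed value for illustration):

```python
import math

# Frequency response of H(s) = 1/(1 + sT) evaluated at s = jw.
T = 1e-3                                  # assumed time constant RC = 1 ms
H = lambda w: 1 / (1 + 1j * w * T)

wc = 1 / T                                # corner frequency in rad/s
print(abs(H(0)))                          # DC gain = 1
print(abs(H(wc)))                         # 1/sqrt(2) at the corner (the -3 dB point)
print(20 * math.log10(abs(H(10 * wc))))   # roughly -20 dB one decade above
```

This confirms the familiar first-order lowpass behaviour: unity gain at DC, −3 dB at ω = 1/T, and a −20 dB/decade roll-off above it.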

Therefore the block diagram is:

[Figure 4A.9 – block diagram of the simple lowpass RC circuit: V_i(s) → [1/(1 + sT)] → V_o(s)]

Note that there's no hint of what makes up the inside of the block, except for the input and output signals. It could be a simple RC circuit, a complex passive circuit, or even an active circuit (with op-amps). The important thing the block does is hide all this detail.

Notation

We use the following notation, where G(s) is the transfer function:

[Figure 4A.10 – a block represents multiplication by a transfer function: V_i(s) → [G(s)] → V_o(s)]

Signals in block diagrams can not only be transformed by a transfer function, they can also be added and subtracted:

[Figure 4A.11 – addition and subtraction of signals at a summing junction: Z = X + Y, or Z = X − Y when the Y input is marked negative]

Most systems have several blocks interconnected by various forwards and backwards paths.

Cascading Blocks

Blocks can be connected in cascade. Cascading blocks implies multiplying the transfer functions:

[Figure 4A.12 – X → [G₁(s)] → Y = G₁X → [G₂(s)] → Z = G₁G₂X]

Care must be taken when cascading blocks. Consider what happens when we try to create a second-order circuit by cascading two first-order circuits:

[Figure 4A.13 – a circuit which IS NOT the cascade of two first-order circuits: V_i, series R, shunt C, series R, shunt C, V_o]

Show that the transfer function for the above circuit is:

V_o/V_i = (1/RC)² / (s² + (3/RC)s + (1/RC)²)   (4A.75)

Compare with the following circuit:

[Figure 4A.14 – a circuit which IS the cascade of two first-order circuits: the same two RC sections separated by a buffer]

which has the transfer function:

V_o/V_i = [(1/RC)/(s + 1/RC)] · [(1/RC)/(s + 1/RC)] = (1/RC)² / (s² + (2/RC)s + (1/RC)²)   (4A.76)

They are different! In the first case, the second network loads the first (i.e. they interact). We can only cascade circuits if the "outputs" of the circuits present a low impedance to the next stage, so that each successive circuit does not "load" the previous circuit; that is, we can only cascade circuits if they are buffered. Op-amp circuits of both the inverting and non-inverting type are ideal for cascading.
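The loading effect shows up directly in the pole locations. Writing a = 1/RC, the unbuffered cascade has denominator s² + 3as + a², while the buffered cascade gives (s + a)². This Python sketch (a = 1 is an arbitrary normalised value) compares the two:

```python
import math

# Poles of the unbuffered cascade: roots of s^2 + 3as + a^2 (quadratic formula)
a = 1.0
disc = math.sqrt((3 * a) ** 2 - 4 * a * a)
p_loaded = ((-3 * a + disc) / 2, (-3 * a - disc) / 2)

# Poles of the buffered cascade: (s + a)^2, a repeated pole at s = -a
p_buffered = (-a, -a)

print(p_loaded)     # two distinct real poles, split apart by the loading
print(p_buffered)
```

Loading splits the double pole at s = −a into two distinct real poles (at a(−3 ± √5)/2), which is why the two circuits have different frequency responses even though they contain identical components.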

Standard Form of a Feedback Control System

Perhaps the most important block diagram is that of a feedback connection, shown below:

[Figure 4A.15 – standard form for the feedback connection: R(s) enters a summing junction, producing E(s); E(s) drives the forward block G(s) to give the output C(s); C(s) passes through the feedback block H(s) to give B(s), which is subtracted at the summing junction]

We have the following definitions:

G(s) = forward path transfer function
H(s) = feedback path transfer function
R(s) = reference, input, or desired output
C(s) = controlled variable, or output
B(s) = output multiplied by H(s)
E(s) = actuating error signal
R(s) − C(s) = system error
C(s)/R(s) = closed-loop transfer function
G(s)H(s) = loop gain

To find the transfer function, we solve the following two equations which are self-evident from the block diagram:

C(s) = G(s)E(s)
E(s) = R(s) − H(s)C(s)   (4A.77)

Then the output C(s) is given by:

C(s) = G(s)R(s) − G(s)H(s)C(s)
C(s)[1 + G(s)H(s)] = G(s)R(s)   (4A.78)

and therefore:

C(s)/R(s) = G(s) / (1 + G(s)H(s))   (4A.79)

Similarly, we can show that:

E(s)/R(s) = 1 / (1 + G(s)H(s))   (4A.80)

and:

B(s)/R(s) = G(s)H(s) / (1 + G(s)H(s))   (4A.81)

Notice how all the above expressions have the same denominator. Note that for negative feedback we get 1 + G(s)H(s) and for positive feedback we get 1 − G(s)H(s). We define 1 ± G(s)H(s) = 0 as the characteristic equation of the differential equation describing the system.

[Figure 4A.16 – characteristic equations for negative feedback (1 + GH) and positive feedback (1 − GH)]
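The three closed-loop ratios above are linked by the loop equations themselves, which makes for an easy numeric sanity check. In this Python sketch, G(s) = 10/(s(s+2)) and the sample point s = 1 + 2j are hypothetical choices (not from the notes); H = 1 gives unity feedback:

```python
# Numeric check of the standard feedback relationships at one sample point.
s = 1 + 2j
G = 10 / (s * (s + 2))   # assumed example forward path
H = 1                    # unity feedback

C_over_R = G / (1 + G * H)
E_over_R = 1 / (1 + G * H)
B_over_R = G * H / (1 + G * H)

print(E_over_R + B_over_R)   # E = R - B  implies  E/R + B/R = 1
print(C_over_R, G * E_over_R)  # C = G E  implies  C/R = G * (E/R)
```

Because all three ratios share the denominator 1 + GH, the identities E/R + B/R = 1 and C/R = G·E/R hold at every s, mirroring the common characteristic polynomial noted above.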

Block Diagram Transformations

We can manipulate the signals and blocks in a block diagram in order to simplify it. This is termed block-diagram reduction. The overall system transfer function is obtained by combining the blocks in the diagram into just one block. The standard transformations, each pairing an original diagram with an equivalent diagram, are:

 Combining blocks in cascade: X₁ → [G₁] → [G₂] → X₃ becomes X₁ → [G₁G₂] → X₃.
 Moving a summing point behind a block: summing X₁ and X₂ before G becomes passing each of X₁ and X₂ through G and summing afterwards.
 Moving a summing point ahead of a block: summing X₂ into the output of G becomes passing X₂ through 1/G and summing before G.
 Moving a pick-off point ahead of a block: picking off the output X₂ of G becomes picking off the input X₁ and passing it through G.
 Moving a pick-off point behind a block: picking off the input X₁ of G becomes picking off the output X₂ and passing it through 1/G.
 Eliminating a feedback loop: forward block G with feedback block H becomes the single block G/(1 + GH).

Example

Given the standard feedback loop with forward block G and feedback block H, we have:

C = G(R − HC) = GH(R/H − C)

so the same system can alternatively be drawn as the block 1/H in cascade with a unity-feedback loop whose forward block is GH:

R → [1/H] → [unity-feedback loop with forward block GH] → C

Example

Given a system with forward blocks G₁ and G₂ in cascade, followed by the parallel combination of G₃ and G₄, with feedback H₁ around the G₁G₂ pair and feedback H₂ around the outer loop, we first combine G₁ and G₂ in cascade into the single block G₁G₂, and combine the parallel branches into G₃ + G₄. Eliminating the inner loop gives:

G₁G₂ / (1 − G₁G₂H₁)

in cascade with (G₃ + G₄). Eliminating the outer loop then gives:

θ_o/θ_i = G₁G₂(G₃ + G₄) / (1 − G₁G₂H₁ + G₁G₂(G₃ + G₄)H₂)

Example

Consider a system where d is a disturbance input which we want to suppress: the input θ_i passes through G₁, the disturbance d is added, the sum passes through G₂ to produce θ_o, and θ_o is fed back to the input summing junction.

Using superposition, consider θ_i only:

θ_o1/θ_i = G₁G₂ / (1 + G₁G₂)

Considering d only:

θ_o2/d = G₂ / (1 + G₁G₂)

Therefore, the total output is:

θ_o = θ_o1 + θ_o2 = [G₁G₂/(1 + G₁G₂)] θ_i + [G₂/(1 + G₁G₂)] d

Therefore, to suppress the disturbance, use a small G₂ and a large G₁!
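The design rule at the end of this example is easy to see numerically. This Python sketch (static gains are used as a stand-in for the transfer functions, and the values are our own illustrative choices) tabulates the signal gain and disturbance gain as G₁ grows:

```python
# How the disturbance gain G2/(1 + G1*G2) shrinks as the forward gain G1 grows,
# while the signal gain G1*G2/(1 + G1*G2) approaches 1.
G2 = 0.5
for G1 in (1, 10, 100, 1000):
    signal_gain = G1 * G2 / (1 + G1 * G2)
    disturb_gain = G2 / (1 + G1 * G2)
    print(G1, round(signal_gain, 4), round(disturb_gain, 5))
```

With G₁ = 1000 the disturbance is attenuated by roughly a factor of 500 while the reference is passed through almost unchanged — exactly the "small G₂, large G₁" prescription.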

Summary

 The Laplace transform is a generalisation of the Fourier transform.
 To find the Laplace transform of a function, we start from a known Laplace transform pair, and apply any number of Laplace transform properties to arrive at the solution.
 Inverse Laplace transforms are normally obtained by the method of partial fractions using residues.
 Systems described by differential equations have rational Laplace transforms. There is a one-to-one correspondence between the coefficients in the differential equation and the coefficients in the transfer function.
 The transfer function of a system can be obtained by performing analysis in the s-domain. The Laplace transforms of the input signal and output signal are related by the transfer function of the system: Y(s) = H(s)X(s).
 The impulse response and the transfer function form a Laplace transform pair: h(t) ⇔ H(s).
 A block diagram is composed of blocks containing transfer functions and adders. Block diagrams are used to diagrammatically represent systems. All single-input single-output systems can be reduced to one equivalent transfer function.

Exercises

1. Obtain transfer functions for the following networks:

[circuit diagrams a) – d): first- and second-order RC, RL and RLC networks with elements R, R₁, R₂, C, C₁, C₂, L, L₁]

2. Obtain the Laplace transforms for the following integro-differential equations:

a) L di(t)/dt + Ri(t) + (1/C) ∫₀ᵗ i(τ) dτ = e(t)

b) M d²x(t)/dt² + B dx(t)/dt + Kx(t) = 3t

c) J d²θ(t)/dt² + B dθ(t)/dt + Kθ(t) = 10 sin t

3. Find the f(t) which corresponds to each F(s) below:

a) F(s) = (s² + 5) / (s³ + 2s² + 4s)

b) F(s) = s / (s⁴ + 5s² + 4)

c) F(s) = (3s + 1) / ((s + 3)²(s + 2)⁵)

4. Use the final value theorem to determine the final value for each f(t) in 3 a), b) and c) above.

5. Find an expression for the transfer function of the following network (assume the op-amp is ideal):

[circuit diagram: op-amp with R₁ and C₁ at the input and R₂ and C₂ in the feedback path, input V_i, output V_o]

6. In the circuit below, the switch is in a closed position for a long time before t = 0, when it is opened instantaneously. Find the inductor current y(t) for t ≥ 0.

[circuit diagram: 10 V source, 5 Ω resistor, 1 H inductor carrying y(t), 1/5 F capacitor, 2 Ω resistor, switch opening at t = 0]

7. Given:

[standard feedback diagram with signals R(s), E(s), B(s), C(s) and blocks G(s), H(s)]

a) Find expressions for:

(i) C(s)/R(s)   (ii) E(s)/R(s)   (iii) B(s)/R(s)

b) What do you notice about the denominator in each of your solutions?

8. Using block diagram reduction techniques, find the transfer functions of the following systems:

a) [diagram: cascade of G₁(s), G₂(s) and G₃(s) from R to Y, with feedback paths H₂(s) and H₃(s)]

b) [diagram: blocks a, b, c, d interconnecting signals X₁, X₅ and Y]

9. Use block diagram reduction techniques to find the transfer functions of the following systems:

a) [diagram: forward blocks G₁, G₂, G₃, G₄ from R to C, with feedback paths H₁, H₂, H₃]

b) [diagram: blocks A, B, C, D, E, F interconnecting X and Y]

Pierre Simon de Laplace (1749-1827)

The application of mathematics to problems in physics became a primary task in the century after Newton. Foremost among a host of brilliant mathematical thinkers was the Frenchman Laplace. He was a powerful and influential figure, contributing to the areas of celestial mechanics, cosmology and probability.

Laplace was the son of a farmer of moderate means, and while at the local military school, his uncle (a priest) recognised his exceptional mathematical talent. At sixteen, he began to study at the University of Caen. Two years later he travelled to Paris, where he gained the attention of the great mathematician and philosopher Jean Le Rond d'Alembert by sending him a paper on the principles of mechanics. His genius was immediately recognised, and Laplace became a professor of mathematics.

He began producing a steady stream of remarkable mathematical papers. Not only did he make major contributions to difference equations and differential equations, but he examined applications to mathematical astronomy and to the theory of probability, two major topics which he would work on throughout his life. His work on mathematical astronomy before his election to the Académie des Sciences included work on the inclination of planetary orbits and a study of how planets were perturbed by their moons, and in a paper read to the Academy on 27 November 1771 he made a study of the motions of the planets which would be the first step towards his later masterpiece on the stability of the solar system.

In 1773, before the Academy of Sciences, Laplace proposed a model of the solar system which showed how perturbations in a planet's orbit would not change its distance from the sun. For the next decade, Laplace contributed a stream of papers on planetary motion: clearing up discrepancies in the orbits of Jupiter and Saturn, showing how the moon accelerates as a function of the Earth's orbit, introducing a new calculus for discovering the motion of celestial bodies, and even a new means of computing planetary orbits which led to astronomical tables of improved accuracy.

Laplace presented his famous nebular hypothesis in 1796 in Exposition du systeme du monde, which viewed the solar system as originating from the contracting and cooling of a large, flattened, and slowly rotating cloud of incandescent gas. Exposition du systeme du monde was written as a non-mathematical introduction to Laplace's most important work, the Mécanique Céleste. The Exposition consisted of five books: the first was on the apparent motions of the celestial bodies; the second was on the actual motion of the celestial bodies; the third was on force and momentum; the fourth was on the theory of universal gravitation and included an account of the motion of the sea and the shape of the Earth; the final book gave an historical account of astronomy and included his famous nebular hypothesis, which even predicted black holes.

Laplace stated his philosophy of science in the Exposition: "If man were restricted to collecting facts the sciences were only a sterile nomenclature and he would never have known the great laws of nature. It is in comparing the phenomena with each other, in seeking to grasp their relationships, that he is led to discover these laws."

"Your Highness, I have no need of this hypothesis." – Laplace, to Napoleon, on why his works on celestial mechanics make no mention of God.
electricity and magnetism.41 The 1780s were the period in which Laplace produced the depth of results which have made him one of the most important and influential scientists that the world has seen. I have no need of this incandescent gas. In 1786 he had proved that the eccentricities and inclinations of planetary orbits to each other always remain small. the propagation of sound. heat. The effect on his colleagues would have been only mildly eased by the fact that Laplace was right! In 1784 Laplace was appointed as examiner at the Royal Artillery Corps.

The first book studies generating functions and also approximations to various expressions occurring in probability theory. the first on general laws of equilibrium and motion of solids and also fluids. and remarks on mathematical expectation. In 1812 he published the influential study of probability. and elastic fluids (1821). the velocity of sound (1816). Some papers were lost in a fire that destroyed the chateau of a descendant. and inverse probability. After the publication of the fourth volume of the Mécanique Céleste. Bayes's rule (named by Poincaré many years later). Signals and Systems 2014 . Applications to mortality. 1827 at his home outside Paris. In it Laplace included a study of the shape of the Earth which included a discussion of data obtained from several different expeditions. the first two in 1799. The main mathematical approach was the setting up of differential equations and solving them to describe the resulting motions. and gaps in his biography have been filled by myth. in particular the shape and rotation of the cooling Earth (1817-1820). the theory of heat. length of marriages and probability in legal matters are given.4A. Many original documents concerning his life have been lost. life expectancy. The book continues with methods of finding probabilities of compound events when the probabilities of their simple components are known. The work consists of two books. double refraction (1809). Théorie analytique des probabilités. Laplace died on 5 March. The second volume deals with mechanics applied to a study of the planets. while the second book is on the law of universal gravitation and the motions of the centres of gravity of the bodies in the solar system. and Laplace applied his theory of errors to the results. The first volume of the Mécanique Céleste is divided into two books. and others went up in flames when Allied forces bombarded Caen during WWII. 
then a discussion of the method of least squares.42 formed the basis for his great work the Traité du Mécanique Céleste published in 5 volumes. Laplace continued to apply his ideas of physics to other problems such as capillary action (1806-07). The second book contains Laplace's definition of probability.

Lecture 4B – Transfer Functions

Stability. Unit-step response. Sinusoidal response. Arbitrary response.

Overview

Transfer functions are obtained by Laplace transforming a system's input/output differential equation, or by analysing a system directly in the s-domain. The transfer function tells us many things about a system – from it, we can derive many important properties.

Stability

To look at stability, let's examine the rational system transfer function that ordinarily arises from linear differential equations:

  H(s) = B(s)/A(s) = (b_M s^M + b_{M-1} s^{M-1} + ... + b_1 s + b_0) / (s^N + a_{N-1} s^{N-1} + ... + a_1 s + a_0)    (4B.1)

The transfer function H(s) is the Laplace transform of the impulse response h(t). Since it's the poles of a Laplace transform that determine the system's time-domain response, the poles of H(s) determine the form of h(t). In particular, the impulse response is always some sort of exponential. For real and complex non-repeated poles:

  c/(s + p)  <->  c e^(-pt)    (4B.2a)

  [A cos(θ)(s + α) − A sin(θ)ω] / [(s + α)² + ω²]  <->  A e^(-αt) cos(ωt + θ)    (4B.2b)

If H(s) has repeated poles, then h(t) will contain terms of the form c tⁱ e^(-pt) and/or A tⁱ e^(-αt) cos(ωt + θ).

From the time-domain expressions, it follows that h(t) converges to zero as t → ∞ if and only if:

  Re{p_i} < 0    (4B.3a)

where p_i are the poles of H(s). This is equivalent to saying:

  A system is stable if all the poles of the transfer function lie in the open left-half s-plane.    (4B.3b)

Unit-Step Response

Consider a system with rational transfer function H(s) = B(s)/A(s). If an input x(t) is applied for t ≥ 0 with no initial energy in the system, then the transform of the resulting output response is:

  Y(s) = B(s) X(s) / A(s)    (4B.4)

Suppose x(t) is the unit-step function u(t), so that X(s) = 1/s. Then the transform of the step response, for any system, is:

  Y(s) = B(s) / [A(s) s]    (4B.5)

Using partial fractions, this can be written as:

  Y(s) = H(0)/s + E(s)/A(s)    (4B.6)
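The open left-half-plane test of Eq. (4B.3) is easy to check numerically. The notes use MATLAB® for numerical work; as a stand-in, here is a minimal Python sketch (the function names and the second-order examples are illustrative assumptions, not taken from the notes) that finds the poles of a quadratic denominator A(s) = s² + a₁s + a₀ and tests whether every pole has negative real part:

```python
import cmath

def quadratic_poles(a1, a0):
    """Poles of H(s) whose denominator is A(s) = s^2 + a1*s + a0."""
    disc = cmath.sqrt(a1 * a1 - 4 * a0)
    return (-a1 + disc) / 2, (-a1 - disc) / 2

def is_stable(poles):
    """Stable iff all poles lie in the open left-half s-plane (Re < 0)."""
    return all(p.real < 0 for p in poles)

# A(s) = s^2 + 2s + 5 -> poles -1 +/- 2j : stable
print(is_stable(quadratic_poles(2, 5)))   # True
# A(s) = s^2 - 2s + 5 -> poles +1 +/- 2j : unstable
print(is_stable(quadratic_poles(-2, 5)))  # False
```

For higher-order denominators the same test applies to the numerically computed roots of A(s) (e.g. MATLAB's root finder).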

where E(s) is a polynomial in s and the residue of the pole at the origin is given by:

  c = s Y(s)|_{s=0} = H(0)    (4B.7)

Taking the inverse Laplace transform of Y(s), we get the time-domain response to the unit-step function:

  y(t) = H(0) + y_tr(t),  t ≥ 0    (4B.8)

where y_tr(t) is the inverse Laplace transform of E(s)/A(s). The complete response consists of a transient part and a steady-state part. If the system is stable, so that all the roots of A(s) = 0 lie in the open left-half plane, then the term y_tr(t) converges to zero as t → ∞, in which case y_tr(t) is the transient part of the response. So, if the system is stable, the step response contains a transient that decays to zero and a constant with value H(0). The constant H(0) is the steady-state value of the step response.

An analysis of the transient response is very important because we may wish to design a system with certain time-domain behaviour. Transients are important, especially for control system design. For example, we may have a requirement to reach 99% of the steady-state value within a certain time, or we may wish to limit any oscillations about the steady-state value to a certain amplitude. This will be examined for the case of first-order and second-order systems.

First-Order Systems

For the first-order transfer function:

  H(s) = p / (s + p)    (4B.9)

the unit-step response is:

  y(t) = 1 − e^(-pt),  t ≥ 0    (4B.10)

which has been written in the form of Eq. (4B.8). If the system is stable, then the pole s = −p lies in the open left-half plane (that is, p > 0), and the second term decays to zero. Since the total response is equal to the constant "1" plus the transient response, the rate at which the step response converges to the steady-state value is equal to the rate at which the transient decays to zero. The rate at which the transient decays to zero depends on how far over to the left the pole is. This may be an important design consideration.

An important quantity that characterizes the rate of decay of the transient is the time constant T. It is defined as T = 1/p. The smaller the time constant, the faster the rate of decay of the transient. You are probably familiar with the concept of a time constant for electric circuits (e.g. T = RC for a simple RC circuit), but it is a concept applicable to all first-order systems.
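The role of the time constant can be checked numerically. A minimal Python sketch (a stand-in for the MATLAB® used in the exercises; the names are illustrative): after one time constant T = 1/p the step response has risen to 1 − e⁻¹ ≈ 63% of its final value, and after 5T it is within 1% of it.

```python
import math

def first_order_step(p, t):
    """Unit-step response y(t) = 1 - exp(-p*t) of H(s) = p/(s + p), for t >= 0."""
    return 1.0 - math.exp(-p * t)

p = 4.0           # pole at s = -4, so time constant T = 1/p = 0.25 s
T = 1.0 / p
print(round(first_order_step(p, T), 4))      # 0.6321 : about 63% after one T
print(first_order_step(p, 5 * T) > 0.99)     # True : within 1% after 5T
```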

Second-Order Systems

Now consider the second-order system given by the transfer function:

  H(s) = ω_n² / (s² + 2ζω_n s + ω_n²)    (4B.11)

This is the standard form of a second-order lowpass transfer function. The real parameters in the denominator are:

  ζ = damping ratio    (4B.12a)
  ω_n = natural frequency    (4B.12b)

If we write:

  H(s) = ω_n² / [(s + p_1)(s + p_2)]    (4B.13)

then the poles of H(s) are given by the quadratic formula:

  p_1 = ζω_n + ω_n √(ζ² − 1)    (4B.14a)
  p_2 = ζω_n − ω_n √(ζ² − 1)    (4B.14b)

There are three cases to consider.

Distinct Real Poles (ζ > 1) – Overdamped

In this case, the transfer function can be expressed as:

  H(s) = ω_n² / [(s + p_1)(s + p_2)]    (4B.15)

and the transform of the unit-step response is given by:

  Y(s) = ω_n² / [(s + p_1)(s + p_2) s]    (4B.16)

Rewriting as partial fractions and taking the inverse Laplace transform, we get the unit-step response:

  y(t) = 1 + c_1 e^(-p_1 t) + c_2 e^(-p_2 t),  t ≥ 0    (4B.17)

Therefore, the transient part of the response is given by the sum of two exponentials:

  y_tr(t) = c_1 e^(-p_1 t) + c_2 e^(-p_2 t)    (4B.18)

and the steady-state value is:

  y_ss(t) = H(0) = ω_n² / (p_1 p_2) = 1    (4B.19)

It often turns out that the transient response is dominated by one pole (the one closer to the origin – why?), so that the step response looks like that of a first-order system.
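The residues c₁ and c₂ in Eq. (4B.17) follow from the cover-up rule applied to Eq. (4B.16). A Python sketch (an illustrative stand-in for the notes' MATLAB®; the parameter values are assumptions), which also confirms that the response starts at zero and settles at H(0) = 1:

```python
import math

def overdamped_step(zeta, wn, t):
    """Step response of H(s) = wn^2/((s+p1)(s+p2)) for zeta > 1, t >= 0."""
    r = wn * math.sqrt(zeta * zeta - 1.0)
    p1, p2 = zeta * wn + r, zeta * wn - r   # both poles real, at s = -p1, -p2
    c1 = -p2 / (p2 - p1)                    # residue of the pole at s = -p1
    c2 = p1 / (p2 - p1)                     # residue of the pole at s = -p2
    return 1.0 + c1 * math.exp(-p1 * t) + c2 * math.exp(-p2 * t)

print(abs(overdamped_step(2.0, 4.0, 0.0)) < 1e-12)        # True : y(0) = 0
print(abs(overdamped_step(2.0, 4.0, 20.0) - 1.0) < 1e-6)  # True : settles at 1
```

Note that c₁ + c₂ = −1, which is exactly what forces y(0) = 0; the response rises monotonically with no overshoot.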

Repeated Real Poles (ζ = 1) – Critically Damped

In this case, the transfer function has the factored form:

  H(s) = ω_n² / (s + ω_n)²    (4B.20)

Expanding H(s)/s via partial fractions and taking the inverse transform yields the unit-step response:

  y(t) = 1 − (1 + ω_n t) e^(-ω_n t)    (4B.21)

Hence, the transient response is:

  y_tr(t) = −(1 + ω_n t) e^(-ω_n t)    (4B.22)

and the steady-state response is:

  y_ss(t) = H(0) = 1    (4B.23)

as before.
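Eq. (4B.21) can be evaluated directly. A short Python sketch (illustrative, with an assumed ω_n; the notes themselves use MATLAB® for such checks):

```python
import math

def critically_damped_step(wn, t):
    """Step response y(t) = 1 - (1 + wn*t)*exp(-wn*t) of H(s) = wn^2/(s + wn)^2."""
    return 1.0 - (1.0 + wn * t) * math.exp(-wn * t)

wn = 4.0
print(critically_damped_step(wn, 0.0))                      # 0.0 : starts at rest
print(abs(critically_damped_step(wn, 10.0) - 1.0) < 1e-6)   # True : settles at 1
```

Like the overdamped case, the critically damped response rises to its final value without overshoot, but it does so in the least time for a given ω_n.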

Complex Conjugate Poles (0 ≤ ζ < 1) – Underdamped

For this case, we define the damped frequency:

  ω_d = ω_n √(1 − ζ²)    (4B.24)

so that the pole locations are the complex conjugate pair:

  p_{1,2} = −ζω_n ± jω_d    (4B.25)

The transfer function is then:

  H(s) = ω_n² / [(s + ζω_n)² + ω_d²]    (4B.26)

and the transform of the unit-step response Y(s) = H(s)/s can be expanded as:

  Y(s) = 1/s − [(s + ζω_n) + ζω_n] / [(s + ζω_n)² + ω_d²]    (4B.27)

Thus, from the transform pair Eq. (4B.2b), the unit-step response is:

  y(t) = 1 − (ω_n/ω_d) e^(-ζω_n t) sin(ω_d t + cos⁻¹ ζ)    (4B.28)

Verify the above result. Thus second-order systems with complex poles have an oscillatory step response: the transient is an exponentially decaying sinusoid with frequency ω_d rad s⁻¹.
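A numerical check of Eq. (4B.28) (a Python sketch with assumed ζ and ω_n, standing in for the MATLAB® used elsewhere in the notes): the response starts at zero, since sin(cos⁻¹ ζ) = ω_d/ω_n exactly cancels the ω_n/ω_d factor, and it overshoots its final value of 1 – the hallmark of complex poles.

```python
import math

def underdamped_step(zeta, wn, t):
    """Step response y(t) = 1 - (wn/wd) e^(-zeta*wn*t) sin(wd*t + acos(zeta))."""
    wd = wn * math.sqrt(1.0 - zeta * zeta)   # damped frequency
    return 1.0 - (wn / wd) * math.exp(-zeta * wn * t) * math.sin(wd * t + math.acos(zeta))

zeta, wn = 0.2, 4.0
print(abs(underdamped_step(zeta, wn, 0.0)) < 1e-12)   # True : y(0) = 0
# sample the first few seconds and look for overshoot above the final value 1
peak = max(underdamped_step(zeta, wn, 0.001 * k) for k in range(5000))
print(peak > 1.0)                                      # True : oscillatory response
```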

Second-Order Pole Locations

It is convenient to think of the second-order response in terms of the pole-zero plot of the second-order transfer function. Only two parameters determine the pole locations: ζ and ω_n. It is instructive to see how varying either parameter moves the pole locations around the s-plane. A graph of some pole locations for various values of ζ and one value of ω_n is shown below:

[Figure 4B.1 – Second-order pole locations in the s-plane for ζ = 0, 0.707, 1 and 5: poles at ±jω_n on the imaginary axis, at −ζω_n ± jω_d on a circular arc of radius ω_n, meeting at s = −ω_n, then splitting along the real axis]

We can see, for fixed ω_n, that varying ζ from 0 to 1 causes the poles to move from the imaginary axis along a circular arc with radius ω_n until they meet at the point s = −ω_n. If ζ = 0 then the poles lie on the imaginary axis and the transient never dies out – we have a marginally stable system. As ζ is increased, the response becomes less oscillatory and more and more damped, until ζ = 1. Now the poles are real and repeated, and there is no sinusoid in the response. As ζ is increased further, the poles move apart on the real axis, with one moving to the left, and one moving toward the origin. The response becomes more and more damped due to the right-hand pole getting closer and closer to the origin.
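The circular-arc behaviour is easy to confirm: every underdamped pole satisfies |p|² = (ζω_n)² + ω_d² = ω_n². A Python sketch (illustrative values; not from the notes):

```python
import math

def second_order_poles(zeta, wn):
    """Complex-conjugate poles of wn^2/(s^2 + 2*zeta*wn*s + wn^2) for 0 <= zeta < 1."""
    wd = wn * math.sqrt(1.0 - zeta * zeta)
    return complex(-zeta * wn, wd), complex(-zeta * wn, -wd)

wn = 4.0
for zeta in (0.0, 0.5, 0.707, 0.99):
    p, _ = second_order_poles(zeta, wn)
    # every underdamped pole sits on a circle of radius wn about the origin
    print(round(abs(p), 6) == wn)   # True each time
```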

Sinusoidal Response

Again consider a system with rational transfer function H(s) = B(s)/A(s). To determine the system response to the sinusoidal input x(t) = C cos(ω_0 t), we first find the Laplace transform of the input:

  X(s) = C s / (s² + ω_0²) = C s / [(s − jω_0)(s + jω_0)]    (4B.29)

The transform Y(s) of the ZSR is then:

  Y(s) = B(s) C s / [A(s)(s − jω_0)(s + jω_0)]    (4B.30)

The partial fraction expansion of Eq. (4B.30) is:

  Y(s) = (C/2) H(jω_0) / (s − jω_0) + (C/2) H*(jω_0) / (s + jω_0) + E(s)/A(s)    (4B.31)

You should confirm the partial fraction expansion. The inverse Laplace transform of both sides of Eq. (4B.31) yields:

  y(t) = (C/2) [H(jω_0) e^(jω_0 t) + H*(jω_0) e^(-jω_0 t)] + y_tr(t)    (4B.32)

and from Euler's identity, this can be written:

  y(t) = C |H(jω_0)| cos(ω_0 t + ∠H(jω_0)) + y_tr(t)    (4B.33)

That is, the sinusoidal response of a system is sinusoidal (plus a transient).

When the system is stable, the y_tr(t) term decays to zero and we are left with the steady-state sinusoidal response:

  y_ss(t) = C |H(jω_0)| cos(ω_0 t + ∠H(jω_0)),  t ≥ 0    (4B.34)

This result is exactly the same expression as that found when performing Fourier analysis, except there the expression was for all time, and hence there was no transient! This means that the frequency response function H(ω) can be obtained directly from the transfer function:

  H(ω) = H(jω) = H(s)|_{s=jω}    (4B.35)

Arbitrary Response

Suppose we apply an arbitrary input x(t) that has rational Laplace transform X(s) = C(s)/D(s), where the degree of C(s) is less than that of D(s). If this input is applied to a system with transfer function H(s) = B(s)/A(s), the transform of the response is:

  Y(s) = B(s) C(s) / [A(s) D(s)]    (4B.36)

This can be written as:

  Y(s) = F(s)/D(s) + E(s)/A(s)    (4B.37)

Taking the inverse transform of both sides gives:

  y(t) = y_ss(t) + y_tr(t)    (4B.38)

where y_ss(t) is the inverse transform of F(s)/D(s) and y_tr(t) is the inverse transform of E(s)/A(s). The Laplace transform of an output signal always contains a steady-state term and a transient term.
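Substituting s = jω₀ as in Eq. (4B.35) gives the steady-state gain and phase shift directly. A Python sketch (the first-order example H(s) = p/(s + p) with p = 4 is an assumption for illustration, and is a stand-in for the MATLAB® used in the exercises):

```python
import cmath
import math

def H(s, p=4.0):
    """Assumed example transfer function H(s) = p/(s + p)."""
    return p / (s + p)

w0 = 4.0                        # input frequency, rad/s
Hjw = H(complex(0.0, w0))       # H(s) evaluated at s = j*w0
gain, phase = abs(Hjw), cmath.phase(Hjw)

# steady-state response to x(t) = C cos(w0 t) is C*gain*cos(w0 t + phase)
print(round(gain, 4))                   # 0.7071
print(round(math.degrees(phase), 1))    # -45.0
```

At the pole frequency the magnitude is 1/√2 and the phase lag is 45° – exactly the half-power behaviour derived for first-order systems in Lecture 5A.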

The important point to note about this simple analysis is that the form of the transient response is determined by the poles of the system transfer function H(s), regardless of the particular form of the input signal x(t), while the form of the steady-state response depends directly on the poles of the input X(s), regardless of what the system transfer function H(s) is!

Summary

 A system is stable if all the poles of the transfer function lie in the open left-half s-plane.

 The transfer function gives us the ZSR only!

 The step response is an important response because it occurs so frequently in engineering applications – control systems in particular. Second-order systems exhibit different step responses depending on their pole locations – overdamped, critically damped and underdamped.

 The complete response of a system consists of a transient response and a steady-state response. The transient response consists of the ZIR and a part of the ZSR. The steady-state response is part of the ZSR.

 The poles of the system determine the transient response.

 The poles of the signal determine the steady-state response.

 The frequency response of a system can be obtained from the transfer function by setting s = jω.

References

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB®, Prentice-Hall, 1997.

Exercises

1. Determine whether the following circuit is stable for any element values, and for any bounded inputs:

[Circuit: series R and L, with shunt C]

2. Suppose that a system has the following transfer function:

  H(s) = 8 / (s + 4)

a) Compute the system response to the following inputs. Identify the steady-state solution and the transient solution.

  (i) x(t) = u(t)
  (ii) x(t) = t u(t)
  (iii) x(t) = 2 sin(2t) u(t)
  (iv) x(t) = 2 sin(10t) u(t)

b) Use MATLAB® to compute the responses numerically. Plot the responses and compare them to the responses obtained analytically in part a).

3. Consider three systems which have the following transfer functions:

  (i) H(s) = 32 / (s² + 4s + 16)
  (ii) H(s) = 32 / (s² + 8s + 16)
  (iii) H(s) = 32 / (s² + 10s + 16)

For each system:

a) Determine if the system is critically damped, underdamped, or overdamped.

b) Calculate the step response of the system.

c) Use MATLAB® to compute the step response numerically. Plot the response and compare it to the plot of the response obtained analytically in part b).

4. For the circuit shown, compute the steady-state response y_ss(t) resulting from the inputs given below, assuming that there is no initial energy at time t = 0:

[Circuit: 100 kΩ and 200 kΩ resistors with two 10 μF capacitors]

a) x(t) = u(t)

b) x(t) = 10 cos(t) u(t)

c) x(t) = cos(5t + π/6) u(t)

Oliver Heaviside (1850-1925)

The mid-Victorian age was a time when the divide between the rich and the poor was immense (and almost insurmountable), a time of horses clattering along cobblestoned streets, a time of steam engines belching forth a steady rain of coal dust, a time of unimaginable disease and lack of sanitation, a time when social services were the fantasy of utopian dreamers. It was into this smelly, noisy, unhealthy and class-conscious world that Oliver Heaviside was born the son of a poor man on 18 May, 1850.

A lucky marriage made Charles Wheatstone (of Wheatstone Bridge fame) Heaviside's uncle. This enabled Heaviside to be reasonably well educated, and at the age of sixteen he obtained his first (and last) job as a telegraph operator with the Danish-Norwegian-English Telegraph Company. It was during this job that he developed an interest in the physical operation of the telegraph cable. At that time, telegraph cable theory was in a state left by Professor William Thomson (later Lord Kelvin) – a diffusion theory modelling the passage of electricity through a cable with the same mathematics that describes heat flow.

By the early 1870's, Heaviside was contributing technical papers to various publications – he had taught himself calculus, differential equations, solid geometry and partial differential equations. In 1874 Heaviside resigned from his job as telegraph operator and went back to live with his parents. He was to live off his parents, and other relatives, for the rest of his life. He dedicated his life to writing technical papers on telegraphy and electrical theory – much of his work forms the basis of modern circuit theory and field theory.

In 1876 he published a paper entitled On the extra current which made it clear that Heaviside (a 26-year-old unemployed nobody) was a brilliant talent. It showed that inductance was needed to permit finite-velocity wave propagation, and would be the key to solving the problems of long distance telephony. He had extended the mathematical understanding of telegraphy far beyond Thomson's submarine cable theory.

But the greatest impact on Heaviside was Maxwell's treatise on electricity and magnetism – Heaviside was swept up by its power.

  "I remember my first look at the great treatise of Maxwell's… I saw that it was great, greater and greatest, with prodigious possibilities in its power." – Oliver Heaviside

In 1878, he wrote a paper On electromagnets, etc. which introduced the expressions for the AC impedances of resistors, capacitors and inductors. In 1879, his paper On the theory of faults showed that by "faulting" a long telegraph line with an inductance, it would actually improve the signalling rate of the line – thus was born the idea of "inductive loading", which allowed transcontinental telegraphy and long-distance telephony to be developed in the USA.

When Maxwell died in 1879 he left his electromagnetic theory as twenty equations in twenty variables! It was Heaviside (and independently, Hertz) who recast the equations in modern form, using a symmetrical vector calculus notation (also championed by Josiah Willard Gibbs (1839-1903)). From these equations, he was able to solve an enormous amount of problems involving field theory, as well as contributing to the ideas behind field theory, such as energy being carried by fields, and not electric charges.

  "That we have got so far is due in the first place to Maxwell, and next to him to Heaviside and Hertz. Now all has been blended into one theory, the main equations of which can be written on a page of a pocket notebook." – H.A. Lorentz

A major portion of Heaviside's work was devoted to "operational calculus".¹ In 1887 Heaviside introduced the concept of a resistance operator, which in modern terms would be called impedance, and Heaviside introduced the symbol Z for it. He let p be equal to time-differentiation, and thus the resistance operator for an inductor would be written as pL. He would then treat p just like an algebraic quantity, and solve for voltage and current in terms of a power series in p. In other words, Heaviside's operators allowed the reduction of the differential equations of a physical system to equivalent algebraic equations.

¹ The Ukrainian Mikhail Egorovich Vashchenko-Zakharchenko published The Symbolic Calculus and its Application to the Integration of Linear Differential Equations in 1862. Heaviside independently invented (and applied) his own version of the operational calculus.

This caused a controversy with the mathematicians of the day because although it seemed to solve physical problems, its mathematical rigor was not at all clear. His knowledge of the physics of problems guided him correctly in many instances to the development of suitable mathematical processes. Unfortunately, although Heaviside's paper was correct, it was also unreadable by all except a few – this was a trait of Heaviside that would last all his life, and led to his eventual isolation from the "academic world". But the power of Heaviside's methods to solve remarkably complex problems became readily apparent. Throughout his "career", Heaviside released 3 volumes of work entitled Electromagnetic Theory, which was really just a collection of his papers.

  "Rigorous mathematics is narrow, physical mathematics bold and broad." – Oliver Heaviside

Heaviside was fond of using the unit-step as an input to electrical circuits, especially since it was a very practical matter to send such pulses down a telegraph line. The unit-step was even called the Heaviside step, and given the symbol H(t), but Heaviside simply used the notation 1. He was tantalizingly close to discovering the impulse by stating "…p·1 means a function of t which is wholly concentrated at the moment t = 0, of total amount 1. It is an impulsive function, so to speak…[it] involves only ordinary ideas of differentiation and integration pushed to their limit." Paul Dirac, at age 25, derived the modern notion of the impulse when he used it in 1927, in a paper on quantum mechanics. Dirac did his undergraduate work in electrical engineering and was both familiar with all of Heaviside's work and a great admirer of his.

Heaviside also played a role in the debate raging at the end of the 19th century about the age of the Earth. In 1862 Thomson wrote his famous paper On the secular cooling of the Earth, in which he imagined the Earth to be a uniformly heated ball of molten rock, modelled as a semi-infinite mass, and mathematically allowed the globe to cool according to the physical law of thermodynamics embedded in Fourier's famous partial differential equation for heat flow. Based on experimentally derived thermal conductivity of rock, sand and sandstone, the resulting age of the Earth (100 million years) fell short of that needed by Darwin's theory, and also went against geologic and palaeontologic evidence. John Perry (a professor of mechanical engineering) redid Thomson's problem using discontinuous diffusivity, and arrived at approximate results that could (based on the conductivity and specific heat of marble and quartz) put the age of the Earth into the billions of years. But Heaviside, using his operational calculus, was able to solve the diffusion equation for a finite spherical Earth. We now know that such a simple model is based on faulty premises – radioactive decay within the Earth maintains the thermal gradient without a continual cooling of the planet.

  "The practice of eliminating the physics by reducing a problem to a purely mathematical exercise should be avoided as much as possible. The physics should be carried on right through, to give life and reality to the problem, and to obtain the great assistance which the physics gives to the mathematics." – Oliver Heaviside, Collected Works, Vol II, p. 4

In 1902, Heaviside wrote an article for the Encyclopedia Britannica entitled The theory of electric telegraphy. Apart from developing the wave propagation theory of telegraphy, he extended his essay to include "wireless" telegraphy, and explained how the remarkable success of Marconi transmitting from Ireland to Newfoundland might be due to the presence of a permanently conducting upper layer in the atmosphere. This supposed layer was referred to as the "Heaviside layer", which was directly detected by Edward Appleton and M.A.F. Barnett in the mid-1920s. Today we merely call it the "ionosphere".

Heaviside spent much of his life being bitter at those who didn't recognise his genius – he had disdain for those that could not accept his mathematics without formal proof, and he felt betrayed and cheated by the scientific community who often ignored his results or used them later without recognising his prior work. Heaviside shunned all honours, brushing aside his honorary doctorate from the University of Göttingen and even refusing to accept the medal associated with his election as a Fellow of the Royal Society, in 1891.

His operational calculus was made obsolete with the 1937 publication of a book by the German mathematician Gustav Doetsch – it showed how, with the Laplace transform, Heaviside's operators could be replaced with a mathematically rigorous and systematic method.

The last five years of Heaviside's life, with both hearing and sight failing, were years of great privation and mystery. He withdrew from public and private life, and was taunted by "insolently rude imbeciles". Objects were thrown at his windows and doors and numerous practical tricks were played on him. It was with much bitterness that he eventually retired and lived out the rest of his life in Torquay on a government pension. He died on 3rd February, 1925.

Heaviside should be remembered for his vectors, his field theory analyses, his brilliant discovery of the distortionless circuit, his pioneering applied mathematics, and for his wit and humor.

  "Today, the historical obscurity of Heaviside's work is evident in the fact that his vector analysis and vector formulation of Maxwell's theory have become 'basic knowledge'." – P.J. Nahin

References

Nahin, P.J.: Oliver Heaviside: Sage in Solitude, IEEE Press, 1988.

Lecture 5A – Frequency Response

The frequency response function. Determining the frequency response from a transfer function. Magnitude responses. Phase responses. Frequency response of a lowpass second-order system. Visualization of the frequency response from a pole-zero plot. Bode plots. Approximating Bode plots using transfer function factors. Transfer function synthesis. Digital filters.

Overview

An examination of a system's frequency response is useful in several respects. It can help us determine things such as the DC gain and bandwidth, how well a system meets the stability criterion, and whether the system is robust to disturbance inputs. Despite all this, remember that the time- and frequency-domain are inextricably related – we can't alter the characteristics of one without affecting the other.

The Frequency Response Function

Recall that for a LTI system characterized by H(s), and for a sinusoidal input x(t) = A cos(ω_0 t), the steady-state response is:

  y_ss(t) = A |H(ω_0)| cos(ω_0 t + ∠H(ω_0))    (5A.1)

where H(ω) is the frequency response function, obtained by setting s = jω in H(s). Thus, in the steady-state, the system behaviour for sinusoidal inputs is completely specified by the magnitude response |H(ω)| and the phase response ∠H(ω). The definition above is precisely how we determine the frequency response experimentally – we input a sinusoid and, in the steady-state, measure the magnitude and phase change at the output. This will be demonstrated for a second-order system later.

Determining the Frequency Response from a Transfer Function

We can get the frequency response of a system by manipulating its transfer function. Consider a simple first-order transfer function:

  H(s) = K / (s + p)    (5A.2)

The sinusoidal steady state corresponds to s = jω. Therefore, Eq. (5A.2) is, for the sinusoidal steady state:

  H(ω) = K / (jω + p)    (5A.3)

The complex function H(ω) can be written using a complex exponential in terms of magnitude and phase:

  H(ω) = |H(ω)| e^(j∠H(ω))    (5A.4a)

which is normally written in polar coordinates:

  H(ω) = |H(ω)| ∠H(ω)    (5A.4b)

We plot the magnitude and phase of H(ω) as a function of ω or f, using both linear and logarithmic scales. If the logarithm (base 10) of the magnitude is multiplied by 20, then we have the gain of the transfer function in decibels (dB):

  |H(ω)|_dB = 20 log₁₀ |H(ω)|  dB    (5A.5)

A negative gain in decibels is referred to as attenuation. For example, -3 dB gain is the same as 3 dB attenuation.

The phase function is usually plotted in degrees.

To look at these properties, let K = p = ω_0 in Eq. (5A.2), so that:

  H(ω) = 1 / (1 + jω/ω_0)    (5A.6)

The magnitude function is found directly as:

  |H(ω)| = 1 / √(1 + (ω/ω_0)²)    (5A.7)

and the phase is:

  ∠H(ω) = −tan⁻¹(ω/ω_0)    (5A.8)

Magnitude Responses

A magnitude response is the magnitude of the transfer function for a sinusoidal steady-state input, plotted against the frequency of the input. Magnitude responses can be classified according to their particular properties. For the simple first-order RC circuit that you are so familiar with, the magnitude function given by Eq. (5A.7) has three frequencies of special interest corresponding to these values of |H(ω)|:

  |H(0)| = 1
  |H(ω_0)| = 1/√2 ≈ 0.707
  |H(∞)| = 0    (5A.9)
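The three special values of Eq. (5A.9), together with the dB conversion of Eq. (5A.5), can be checked with a few lines of code. A Python sketch (an illustrative stand-in for MATLAB®, with an assumed ω_0):

```python
import math

def lowpass_mag(w, w0):
    """|H(w)| = 1/sqrt(1 + (w/w0)^2) for the first-order lowpass."""
    return 1.0 / math.sqrt(1.0 + (w / w0) ** 2)

def lowpass_phase_deg(w, w0):
    """Phase = -atan(w/w0), in degrees."""
    return -math.degrees(math.atan(w / w0))

def to_dB(mag):
    """Gain in decibels: 20*log10|H|."""
    return 20.0 * math.log10(mag)

w0 = 1000.0
print(round(lowpass_mag(w0, w0), 4))         # 0.7071
print(round(to_dB(lowpass_mag(w0, w0)), 2))  # -3.01 : the "-3 dB" frequency
print(round(lowpass_phase_deg(w0, w0), 1))   # -45.0
```

The half-power frequency is therefore also commonly called the "-3 dB frequency".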

Signals and Systems 2014 . The plot below shows the complete magnitude response of H   as a function of  .1.4 The frequency  0 is known as the half-power frequency. and the circuit that produces it: A simple lowpass filter |H| 1 R 1 2 Vi 0 0 0 C Vo 2 0  Figure 5A. and the circuit that produces it are shown below: An ideal lowpass filter |T| 1 Cutoff ideal Vi Pass 0 0 Stop 0 filter Vo  Figure 5A. called the cutoff frequency. the output voltage remains fixed in amplitude until a critical frequency is reached. the range with no output is called the stopband. The obvious classification of the filter is a lowpass filter.1 An idealisation of the response in Figure 5A. the output is zero.5A. At that frequency.  0 . known as a brick wall. and for all higher frequencies.2 For the ideal filter. The range of frequencies with Pass and stop bands defined output is called the passband.

Even though the response shown in the plot of Figure 5A.1 differs from the ideal, it is still known as a lowpass filter, and, by convention, the half-power frequency is taken as the cutoff frequency. If the positions of the resistor and capacitor in the circuit of Figure 5A.1 are interchanged, then the resulting circuit is:

[Figure 5A.3 – The CR circuit: series C, shunt R]

Show that the transfer function is:

  H(s) = s / (s + 1/RC)    (5A.10)

Letting 1/RC = ω_0 again, and with s = jω, we obtain:

  H(ω) = (jω/ω_0) / (1 + jω/ω_0)    (5A.11)

The magnitude function of this equation, at the three frequencies given in Eq. (5A.9), is:

  |H(0)| = 0
  |H(ω_0)| = 1/√2 ≈ 0.707
  |H(∞)| = 1    (5A.12)
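The three values in Eq. (5A.12) can be confirmed numerically. A Python sketch (illustrative, with an assumed ω_0; the notes use MATLAB® for such checks):

```python
import math

def highpass_mag(w, w0):
    """|H(w)| for H(w) = (jw/w0)/(1 + jw/w0): equals (w/w0)/sqrt(1 + (w/w0)^2)."""
    x = w / w0
    return x / math.sqrt(1.0 + x * x)

w0 = 1000.0
print(highpass_mag(0.0, w0))                 # 0.0 : DC is blocked
print(round(highpass_mag(w0, w0), 4))        # 0.7071 : half-power at w0
print(round(highpass_mag(1e9 * w0, w0), 4))  # 1.0 : high frequencies pass
```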

The plot below shows the complete magnitude response of H(ω) as a function of ω, and the circuit that produces it:

Figure 5A.4: A simple highpass filter. |H| rises from 0 through 1/√2 at ω0 and approaches 1 (circuit: Vi, series C, shunt R, output Vo).

This filter is classified as a highpass filter. The cutoff frequency is ω0, as it was for the lowpass filter. The ideal brick wall highpass filter is shown below:

Figure 5A.5: An ideal highpass filter. |T| = 0 in the stopband and 1 in the passband, with the cutoff at ω0.

Phase Responses

Like magnitude responses, phase responses are only meaningful when we look at sinusoidal steady-state signals. A transfer function for a sinusoidal input is:

    H(ω) = (Y∠θ) / (X∠0°) = |H|∠θ    (5A.13)

where the input is taken as the phase reference (zero phase). For the bilinear transfer function:

    H(ω) = K (jω + z) / (jω + p)    (5A.14)

the phase is:

    θ = ∠K + tan⁻¹(ω/z) − tan⁻¹(ω/p)    (5A.15)

We use the sign of this phase angle to classify systems. Those giving positive θ are known as lead systems; those giving negative θ as lag systems. For the simple RC circuit of Figure 5A.1, for which H(ω) is given by Eq. (5A.6), we have:

    θ = −tan⁻¹(ω/ω0)    (5A.16)

Since θ is negative for all ω, the circuit is a lag circuit. When ω = ω0, θ = −tan⁻¹(1) = −45°.
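The lead/lag classification can be checked numerically. Below is a small Python sketch (standard library only; the example values z = 2, p = 10 and ω = 5 rad/s are assumptions, not from the notes) that evaluates the phase of Eq. (5A.15) and compares it with the phase computed directly from the complex transfer function:

```python
import cmath
import math

def bilinear_phase(w, z, p):
    """Phase of H = K(jw+z)/(jw+p) by Eq. (5A.15), assuming K > 0 so angle(K) = 0."""
    return math.atan2(w, z) - math.atan2(w, p)

# hypothetical example values: zero at 2 rad/s, pole at 10 rad/s
z, p, w = 2.0, 10.0, 5.0
theta = bilinear_phase(w, z, p)
direct = cmath.phase((1j * w + z) / (1j * w + p))
print(theta, direct)   # the two agree; theta > 0, so this is a lead network
```

With the zero below the pole the phase is positive at all frequencies, which is the lead case described above.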

A complete plot of the phase response is shown below:

Figure 5A.6: Lagging phase response for a simple lowpass filter. θ falls from 0° through −45° at ω0 towards −90°.

For the circuit in Figure 5A.3, show that the phase is given by:

    θ = 90° − tan⁻¹(ω/ω0)    (5A.17)

The phase response has the same shape as Figure 5A.6 but is shifted upward by 90°:

Figure 5A.7: Leading phase response for a simple highpass filter. θ falls from 90° through 45° at ω0 towards 0°.

The angle θ is positive for all ω, and so the circuit is a lead circuit.

Frequency Response of a Lowpass Second-Order System

Starting from the usual definition of a lowpass second-order transfer function:

    H(s) = ωn² / (s² + 2ζωn s + ωn²)    (5A.18)

we get the following frequency response function:

    H(ω) = ωn² / (ωn² − ω² + j2ζωn ω)    (5A.19)

The magnitude is:

    |H(ω)| = ωn² / √((ωn² − ω²)² + (2ζωn ω)²)    (5A.20)

and the phase is:

    θ = −tan⁻¹( 2ζωn ω / (ωn² − ω²) )    (5A.21)

The magnitude and phase functions are plotted below for ζ = 0.4:

Figure 5A.8: Typical magnitude and phase responses of a lowpass second-order transfer function. The magnitude peaks at ωr = ωn√(1 − 2ζ²) and then rolls off at −40 dB per decade; the phase has a −180° asymptote for all ζ.

For the magnitude function, from Eq. (5A.20) we see that:

    |H(0)| = 1,  |H(ωn)| = 1/(2ζ),  |H(∞)| = 0    (5A.22)

and for large ω, the magnitude decreases at a rate of −40 dB per decade, which is sometimes described as two-pole rolloff.

For the phase function, we see that:

    θ(0) = 0,  θ(ωn) = −90°,  θ(∞) = −180°    (5A.23)

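A quick numerical check of Eq. (5A.20), as a sketch in plain Python (standard library only; ζ = 0.4 is the example value used above, and ωn = 10 rad/s is an assumed value):

```python
import math

def mag2(w, wn, zeta):
    """|H(w)| of the lowpass second-order response, Eq. (5A.20)."""
    return wn**2 / math.sqrt((wn**2 - w**2)**2 + (2 * zeta * wn * w)**2)

wn, zeta = 10.0, 0.4
print(mag2(0.001, wn, zeta))            # about 1 at DC
print(mag2(wn, wn, zeta))               # exactly 1/(2*zeta) = 1.25 at w = wn
wr = wn * math.sqrt(1 - 2 * zeta**2)    # frequency of the resonant peak
print(wr, mag2(wr, wn, zeta))           # the peak exceeds the value at wn
```

The check confirms Eq. (5A.22): unity gain at DC, 1/(2ζ) at ωn, and a peak slightly below ωn when ζ < 1/√2.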
Visualization of the Frequency Response from a Pole-Zero Plot

The frequency response can be visualised in terms of the pole locations of the transfer function. For example, for a second-order lowpass system:

    H(s) = ωn² / (s² + 2ζωn s + ωn²)    (5A.24)

the poles are located on a circle of radius ωn, at an angle with respect to the negative real axis of θ = cos⁻¹ζ. These complex conjugate pole locations are shown below:

Figure 5A.9: Pole locations for a lowpass second-order transfer function. The poles p and p* lie at radius ωn from the origin, at angle θ = cos⁻¹ζ.

In terms of the poles shown in Figure 5A.9, the transfer function is:

    H(s) = ωn² / ((s − p)(s − p*))    (5A.25)

With s = jω, the two pole factors in this equation become:

    jω − p = m1∠φ1  and  jω − p* = m2∠φ2    (5A.26)

In terms of these quantities, the magnitude and phase are:

    |H(ω)| = ωn² / (m1 m2)    (5A.27)

and:

    θ = −(φ1 + φ2)    (5A.28)

Vectors representing Eq. (5A.26) are shown below:

Figure 5A.10: Determining the magnitude and phase response from the s-plane. Vectors m1 and m2 are drawn from the poles p and p* to three points on the jω axis, with the resulting |H| plot and phase plot (0° to −180°) alongside.
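The pole-factor construction of Eqs. (5A.26) to (5A.28) can be verified numerically. A Python sketch (standard library only; ωn = 10 rad/s and ζ = 0.4 are assumed example values):

```python
import cmath
import math

wn, zeta = 10.0, 0.4
wd = wn * math.sqrt(1 - zeta**2)
p = complex(-zeta * wn, wd)          # upper pole; the other pole is p*

def H_from_poles(w):
    """|H| and phase from the pole factors m1, m2 of Eqs. (5A.26)-(5A.28)."""
    f1 = 1j * w - p
    f2 = 1j * w - p.conjugate()
    m1, phi1 = abs(f1), cmath.phase(f1)
    m2, phi2 = abs(f2), cmath.phase(f2)
    return wn**2 / (m1 * m2), -(phi1 + phi2)

mag, ph = H_from_poles(wn)
print(mag)                  # 1/(2*zeta) = 1.25: m1 is short near w = wn
print(math.degrees(ph))     # -90 degrees at w = wn
```

At ω = ωn the product m1·m2 equals 2ζωn², reproducing the peak value 1/(2ζ) found from Eq. (5A.22).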

Figure 5A.10 shows three different frequencies: one below ωn, one at ωn, and one above ωn. From this construction we can see that the short length of m1 near the frequency ωn is the reason why the magnitude function reaches a peak near ωn. These plots are useful in visualising the frequency response of the circuit.

Bode Plots

Bode* plots are plots of the magnitude function |H(ω)| dB = 20 log|H(ω)| and the phase function ∠H(ω), where the scale of the frequency variable (usually ω) is logarithmic. The use of logarithmic scales has several desirable properties:

- we can approximate a frequency response with straight lines; this is called an approximate Bode plot.
- the slope of all asymptotic lines in a magnitude plot is ±20n dB/decade, and ±n45°/decade for phase plots, where n is any integer.
- we add and subtract individual factors in a transfer function, rather than multiplying and dividing.
- the shape of a Bode plot is preserved if we decide to scale the frequency; this makes design easy.
- by examining a few features of a Bode plot, we can readily determine the transfer function (for simple systems).

We normally don't deal with equations when drawing Bode plots; we rely on our knowledge of the asymptotic approximations for the handful of factors that go to make up a transfer function.

* Dr. Hendrik Bode grew up in Urbana, Illinois, USA, where his name is pronounced boh-dee. Purists insist on the original Dutch boh-dah. No one uses bohd.

Approximating Bode Plots using Transfer Function Factors

The table below gives transfer function factors and their corresponding magnitude asymptotic plots and phase linear approximations (each plotted from 0.01ωn to 100ωn, magnitude in dB, phase from 0 to −180°):

- Constant K: magnitude flat at 20 log|K| dB; phase flat at 0°.
- 1/(s/ωn): magnitude a single −20 dB/decade line passing through 0 dB at ωn; phase flat at −90°.
- 1/(s/ωn + 1): magnitude 0 dB below ωn, then −20 dB/decade; phase 0° below 0.1ωn, −90° above 10ωn, approximated by a −45°/decade line in between.
- 1/((s/ωn)² + 2ζ(s/ωn) + 1): magnitude 0 dB below ωn, then −40 dB/decade; phase 0° below 0.1ωn, −180° above 10ωn, approximated by a −90°/decade line in between.

The corresponding numerator factors are obtained by "mirroring" the above plots about the 0 dB line and 0° line.
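As a check on the corner-frequency behaviour of the first-order factor, the sketch below (plain Python, standard library only; the factor 1/(s/ωn + 1) with an assumed ωn = 100 rad/s) compares the exact magnitude with its asymptote. The worst-case error is 3 dB, at the corner ω = ωn:

```python
import math

def exact_dB(w, wn):
    """Exact magnitude in dB of the factor 1/(s/wn + 1) at s = jw."""
    return 20 * math.log10(1 / math.sqrt(1 + (w / wn)**2))

def asymptote_dB(w, wn):
    """Bode magnitude asymptote: 0 dB below the corner, -20 dB/decade above."""
    return 0.0 if w <= wn else -20 * math.log10(w / wn)

wn = 100.0
for w in (wn / 100, wn, 100 * wn):
    print(w, exact_dB(w, wn), asymptote_dB(w, wn))
```

A decade or more away from the corner the asymptote is essentially exact, which is why the straight-line approximation is so useful.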

Transfer Function Synthesis

One of the main reasons for using Bode plots is that we can synthesise a desired frequency response by placing poles and zeros appropriately. This is easy to do asymptotically, and the results can be checked using MATLAB®.

Example

The asymptotic Bode plot shown below is for a band-enhancement filter:

Figure 5A.11: A band-enhancement filter. |H| is 0 dB below ω = 10² rad/s and above ω = 10⁵ rad/s, and 20 dB between 10³ and 10⁴ rad/s (log frequency scale).

We wish to provide additional gain over a narrow band of frequencies, leaving the gain at higher and lower frequencies unchanged. We wish to design a filter to these specifications and the additional requirement that all capacitors have the value C = 10 nF.

The composite plot may be decomposed into four first-order factors as shown below:

Figure 5A.12: Decomposing a Bode plot into first-order factors. The break frequencies at 10², 10³, 10⁴ and 10⁵ rad/s are marked 1 to 4.

Those marked 1 and 4 represent zero factors, while those marked 2 and 3 are pole factors. The pole-zero plot corresponding to these factors is shown below:

Figure 5A.13: The pole-zero plot corresponding to the Bode plot. Zeros at −10² and −10⁵ and poles at −10³ and −10⁴, all on the negative real axis.

From the break frequencies given, we have:

    H(ω) = (1 + jω/10²)(1 + jω/10⁵) / ((1 + jω/10³)(1 + jω/10⁴))    (5A.29)

Substituting s for jω gives the transfer function:

    H(s) = (s + 10²)(s + 10⁵) / ((s + 10³)(s + 10⁴))    (5A.30)

We next write H(s) as a product of bilinear functions. The choice is arbitrary, but one possibility is:

    H(s) = H1(s) H2(s) = [(s + 10²)/(s + 10³)] × [(s + 10⁵)/(s + 10⁴)]    (5A.31)

For a circuit realisation of H1 and H2 we decide to use an inverting op-amp circuit that implements a bilinear transfer function:

Figure 5A.14: A realisation of the specifications. Two cascaded inverting bilinear op-amp stages with normalised element values: resistances of 10⁻², 10⁻³, 10⁻⁵ and 10⁻⁴, and capacitances of 1.
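The transfer function of Eq. (5A.30) can be checked against the asymptotic plot numerically. A Python sketch (standard library only; the probe frequencies are assumed test points):

```python
import math

def H(s):
    """Band-enhancement transfer function, Eq. (5A.30)."""
    return (s + 1e2) * (s + 1e5) / ((s + 1e3) * (s + 1e4))

def dB(w):
    """Magnitude of H(jw) in dB."""
    return 20 * math.log10(abs(H(1j * w)))

print(dB(1.0))       # about 0 dB well below the enhanced band
print(dB(3162.0))    # about 19 dB mid-band (the asymptotic plot shows 20 dB)
print(dB(1e7))       # about 0 dB well above the band
```

The mid-band value falls about 1 dB short of the asymptotic 20 dB, which is the expected corner-rounding of the exact response.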

To obtain realistic element values, we need to scale the components so that the transfer function remains unaltered. This is accomplished with the equations:

    Cnew = Cold / km   and   Rnew = km Rold    (5A.32)

Since the capacitors are to have the value 10 nF, this means km = 10⁸. The element values that result are shown below and the design is complete:

Figure 5A.15: A realistic implementation of the specifications. Resistors of 1 MΩ, 100 kΩ, 1 kΩ and 10 kΩ, with all four capacitors 10 nF.

In this simple example, the response only required placement of the poles and zeros on the real axis. However, complex pole-pair placement is not unusual in design problems.
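Eq. (5A.32) applied to this design, as a Python sketch (standard library only):

```python
# Magnitude (impedance) scaling, Eq. (5A.32): the transfer function is
# unchanged while element values become practical.
km = 1e8   # chosen so the 1 F normalised capacitors become 10 nF

def scale_R(R_old):
    return km * R_old

def scale_C(C_old):
    return C_old / km

print(scale_C(1.0))     # 1e-08 F, i.e. 10 nF
print(scale_R(1e-2))    # 1e6 ohm, i.e. 1 Mohm
print(scale_R(1e-5))    # 1e3 ohm, i.e. 1 kohm
```

Scaling every impedance by the same km leaves every ratio of impedances, and hence H(s), unchanged.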

Digital Filters

Digital filtering involves sampling, quantising and coding of the input analog signal (using an analog to digital converter, or ADC for short). Once we have converted voltages to mere numbers, we are free to do any processing on them that we desire. The processed numbers are then converted back into analog form (using a digital to analog converter, or DAC). The components of a digital filter are shown below:

Figure 5A.16: The components of a digital filter: vi → Anti-alias Filter → ADC → Digital Signal Processor → DAC → Reconstruction Filter → vo.

The digital signal processor can be custom built digital circuitry, or it can be a general purpose computer. Usually, the signal is convolved in the time-domain in real-time. In modern digital signal processors, an operation corresponding to a fast convolution is also sometimes employed: the signal's spectrum is found using a fast Fourier transform, or FFT; the spectrum can then be modified by scaling the amplitudes and adjusting the phase of each sinusoid, and an inverse FFT can then be performed.

There are many advantages of digitally processing analog signals:

1. A digital filter may be just a small part of a larger system, so it makes sense to implement it in software rather than hardware.
2. The cost of digital implementation is often considerably lower than that of its analog counterpart (and it is falling all the time).
3. The accuracy of a digital filter is dependent only on the processor word length, the quantising error in the ADC and the sampling rate.
4. Digital filters are generally unaffected by such factors as component accuracy, temperature stability, long-term drift, etc., that affect analog filter circuits.
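The "convolve in the time domain" operation performed by the DSP block can be sketched as a direct-form FIR filter. This is illustrative only (plain Python, standard library only; the 5-tap moving-average coefficients are an assumed example, not a filter from the notes):

```python
def fir_filter(x, h):
    """Direct-form FIR convolution: y[n] = sum_k h[k] * x[n-k]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:
                acc += hk * x[n - k]   # multiply-accumulate, as a DSP would
        y.append(acc)
    return y

h = [0.2] * 5          # 5-tap moving average: a simple lowpass
x = [1.0] * 10         # unit-step input samples
y = fir_filter(x, h)
print(y)               # ramps up over 5 samples, then settles near 1.0
```

A real DSP performs the same multiply-accumulate loop, typically in fixed point and once per incoming sample.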

5. Digital filters can be modified easily by changing the algorithm of the computer.
6. Digital filters can be designed that are always stable.
7. Filter responses can be made which always have linear phase (constant delay), regardless of the magnitude response.
8. Filters of high order can be realised directly and easily.
9. Many circuit restrictions imposed by physical limitations of analog devices can be circumvented in a digital processor.

Some disadvantages are:

1. Processor speed limits the frequency range over which digital filters can be used (although this limit is continuously being pushed back with ever faster processors).
2. Analog filters (and signal conditioning) are still necessary to convert the analog signal to digital form and back again.

Summary

- A frequency response consists of two parts: a magnitude response and a phase response. It tells us the change in the magnitude and phase of a sinusoid at any frequency, in the steady-state.
- Bode plots are magnitude (dB) and phase responses drawn on a semi-log scale, enabling the easy analysis or design of high-order systems.

References

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB®, Prentice-Hall, 1997.

Exercises

1. Express the following magnitude ratios in dB: (a) 1, (b) 40, (c) 0.5.

2. With respect to a reference frequency f0 = 20 Hz, find the frequency which is (a) 2 decades above f0 and (b) 3 octaves below f0.

3. Draw the approximate Bode plots (both magnitude and phase) for the transfer functions shown. Use MATLAB® to draw the exact Bode plots and compare.

   (a) G(s) = 10            (d) G(s) = 4/s
   (b) G(s) = 1/(10s + 1)   (e) G(s) = 10s + 1
   (c) G(s) = 1/(5s + 1)    (f) G(s) = 5s + 1

   Note that the magnitude plots for the transfer functions (c) and (d), (e) and (f) are the same. Why?

4. Prove that if G(s) has a single pole at s = −1/α, the asymptotes of the log magnitude response versus log frequency intersect at ω = 1/α. Prove this not only analytically but also graphically using MATLAB®.

5. Make use of the property that the logarithm converts multiplication and division into addition and subtraction, respectively, to draw the Bode plot for:

   G(s) = 100(s + 1) / (s(0.01s + 1))

   Use asymptotes for the magnitude response and a linear approximation for the phase response.

6. Draw the exact Bode plot using MATLAB® (magnitude and phase) for:

   G(s) = 100s² / (s² + 2ζωn s + ωn²),  when ωn = 10 rad/s and ζ = 0.3

   Compare this plot with the approximate Bode plot (which ignores the value of ζ).

7. Given:

   (a) G(s) = 20(6.1s + 1) / (94.6s + 1)
   (b) G(s) = 4(s + 1) / (s(0.66s + 1)²)

   Draw the approximate Bode plots and from these graphs find |G| and ∠G at (i) ω = 0.1 rad/s, (ii) ω = 1 rad/s, (iii) ω = 10 rad/s, (iv) ω = 100 rad/s.

8. The experimental responses of two systems are given below. Plot the Bode diagrams and identify the transfer functions.

   (a)
   ω (rad/s):  0.1   0.2   0.5    1     2     3     5    10    20    30    40    50   100
   G1 (dB):     40    34    25   20    14    10     2    -9   -23   -32   -40   -46   -64
   G1 (°):     -92   -95  -100 -108  -126  -138  -160  -190  -220  -235  -243  -248  -258

   (b)
   ω (rad/s):  0.01  0.02  0.04  0.07   0.1   0.2   0.4   0.7    1    2    4    7   10   20   40  100  500  1000
   G2 (dB):     -26   -20   -14   -10    -7    -3    -1  -0.3    0    0    0   -2   -3   -7  -12  -20  -34   -40
   G2 (°):       87    84    79    70    61    46    29    20   17   17   25   36   46   64   76   84   89    89

9. Given G(s) = K / (s(s + 5)):

   (a) Plot the closed-loop frequency response of this system using unity feedback when K = 1. What is the –3 dB bandwidth of the system?
   (b) Plot the closed-loop frequency response when K is increased to K = 100. What is the effect on the frequency response?

10. The following measurements were taken for an open-loop system:

   (i) ω = ω1: |G| = 6 dB, ∠G = 25°
   (ii) ω = ω2: |G| = 18 dB, ∠G = 127°

   Find |G| and ∠G at ω1 and ω2 when the system is connected in a unity-feedback arrangement.

11. An amplifier has the following frequency response. Find the transfer function.

   [Figure: Bode magnitude plot, |H(ω)| in dB from −40 to +40 dB, and phase plot, arg H(ω) in degrees from −100° to +100°, both versus ω from 10⁰ to 10⁵ rad/s]

Lecture 5B – Time-Domain Response

Steady-state error. Transient response. Second-order step response. Settling time. Peak time. Percent overshoot. Rise time and delay time.

Overview

Control systems employing feedback usually operate to bring the output of the system being controlled "in line" with the reference input. For example, a maze rover may receive a command to go forward 4 units – how does it respond? Can we control the dynamic behaviour of the rover, and if we can, what are the limits of the control? Obviously we cannot get a rover to move infinitely fast, so it will never follow a step input exactly. It must undergo a transient – just like an electric circuit with storage elements. However, with feedback, we may be able to change the transient response to suit particular requirements, like time taken to get to a certain position within a small tolerance, not "overshooting" the mark and hitting walls, etc.

Steady-State Error

One of the main objectives of control is for the system output to "follow" the system input. The difference between the input and output, in the steady-state, is termed the steady-state error:

    e(t) = r(t) − c(t)
    ess = lim t→∞ e(t)    (5B.1)

then Rs   R s and the steady-state error is: ess  lim et   lim sE s  t  s0 sRs  R  lim  lim s  0 1  Gs  s  0 1  Gs   R 1  lim Gs  s0 Signals and Systems 2014 (5B.1 System type defined The type of the control system.2) When the input to the control system in Figure 5B. or simply system type.3) . For example: 101  3s  s s 2  2s  2 4 Gs   3 s s  2  G s     type1 type 3 (5B.1 is a step function with magnitude R.5B.2 Consider the unity-feedback system: R (s ) E (s ) G (s) C (s ) Figure 5B. is the number of poles that G s  has at s  0 .

For convenience, we define the step-error constant, KP, as:

    KP = lim s→0 G(s)    (5B.4)

so that Eq. (5B.3) becomes:

    ess = R / (1 + KP)    (5B.5)

Note that the step-error constant is only defined for a step-input. We see that for the steady-state error to be zero, we require KP = ∞. This will only occur if there is at least one pole of G(s) at the origin. We can summarise the errors of a unity-feedback system to a step-input as:

    type 0 system:            ess = R / (1 + KP) = constant
    type 1 or higher system:  ess = 0    (5B.6)

Transient Response

Consider a maze rover (MR) described by the differential equation:

    (1/M) x(t) = dv(t)/dt + (kf/M) v(t)    (5B.7)

where v(t) is the velocity, x(t) is the driving force, M is the mass and kf represents frictional losses. We may represent the MR by the following block diagram:

Figure 5B.2: MR transfer function for velocity output. X(s) → [ (1/M) / (s + kf/M) ] → V(s).
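Eqs. (5B.4) and (5B.5) as a numerical sketch (plain Python, standard library only; the type-0 plant G(s) = 4/((s + 1)(s + 2)) is an assumed example, not from the notes):

```python
# Step-error constant K_P = lim_{s->0} G(s), Eq. (5B.4), and the resulting
# steady-state error e_ss = R/(1 + K_P), Eq. (5B.5), for a unity-feedback
# system driven by a step of magnitude R.

def G(s):
    """Hypothetical type-0 plant: no poles at the origin."""
    return 4.0 / ((s + 1.0) * (s + 2.0))

Kp = G(0.0)          # the limit s -> 0 can be evaluated directly here
R = 1.0              # unit-step magnitude
ess = R / (1 + Kp)
print(Kp, ess)       # K_P = 2, so e_ss = 1/3: a type-0 system has constant error
```

A type-1 plant (one pole at the origin) would give KP = ∞ and hence zero steady-state error, as stated above.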

From the diagram above, it appears that our input to the rover affects the velocity in some way. But we need to control the output position, not the output velocity. We're therefore actually interested in the following model of the MR:

Figure 5B.3: MR transfer function for position output. X(s) → [ (1/M) / (s + kf/M) ] → V(s) → [ 1/s ] → C(s).

This should be obvious, since position c(t) is given by:

    c(t) = ∫0 to t v(λ) dλ    (5B.8)

Using block diagram reduction, our position model of the MR is:

Figure 5B.4: X(s) → [ (1/M) / (s(s + kf/M)) ] → C(s).

The whole point of modelling the rover is so that we can control it. Suppose we wish to build a maze rover position control system. We will choose for simplicity a unity-feedback system, and place a "controller" in the feedforward path in front of the MR's input. Such a control strategy is termed series compensation.

Signals and Systems 2014

A block diagram of the proposed feedback system, with its unity feedback and series compensation controller, is:

Figure 5B.5: Simple MR position control scheme. R(s) → E(s) → Gc(s) (controller) → X(s) → [ (1/M) / (s(s + kf/M)) ] → C(s), with unity feedback.

Let the controller be the simplest type possible: a proportional controller, which is just a gain KP. Then the closed-loop transfer function is given by:

    C(s)/R(s) = (KP/M) / (s² + (kf/M)s + KP/M)    (5B.9)

which can be manipulated into the standard form for a second-order transfer function:

    C(s)/R(s) = ωn² / (s² + 2ζωn s + ωn²)    (5B.10)

Note: For a more complicated controller, we will not in general obtain a second-order transfer function. The reason we examine second-order systems is because they are amenable to analytical techniques; the concepts remain the same for higher-order systems.

Controllers change the transfer function, and therefore the time-domain response. Although this simple controller can only vary ωn², with ζωn fixed (why?), we can still see what sort of performance this controller has in the time-domain as KP is varied. The goal of the controller design is to choose a suitable value of KP to achieve certain criteria.

Second-order Step Response

For a unit-step function input, R(s) = 1/s, and the output response of the system is obtained by taking the inverse Laplace transform of the output transform:

    C(s) = ωn² / (s(s² + 2ζωn s + ωn²))    (5B.11)

The time-domain unit-step response of the system described by Eq. (5B.10) has been solved previously. We found that there were three distinct solutions that depended upon the pole locations. We termed the responses overdamped (two real poles), critically damped (repeated real poles) and underdamped (complex conjugate poles). We have seen previously that the result for an underdamped system is:

    c(t) = 1 − (e^(−ζωn t) / √(1 − ζ²)) sin(ωn√(1 − ζ²) t + cos⁻¹ζ)    (5B.12)

We normally desire the output to be slightly underdamped due to its "fast" characteristics: it rises to near the steady-state output much quicker than an overdamped or critically damped system. Although second-order control systems are rare in practice, their analysis generally helps to form a basis for the understanding of analysis and design of higher-order systems, especially ones that can be approximated by second-order systems. Also, time-domain specifications of systems can be directly related to an underdamped second-order response via simple formulas. Some time-domain criteria only apply to certain types of response; for example, percent overshoot only exists for the underdamped case.

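Eq. (5B.12) can be evaluated directly. A Python sketch (standard library only; ωn = 1 rad/s and ζ = 0.4 are assumed example values):

```python
import math

def step_response(t, wn, zeta):
    """Underdamped unit-step response, Eq. (5B.12), valid for 0 < zeta < 1."""
    wd = wn * math.sqrt(1 - zeta**2)
    theta = math.acos(zeta)
    return 1 - math.exp(-zeta * wn * t) / math.sqrt(1 - zeta**2) * math.sin(wd * t + theta)

wn, zeta = 1.0, 0.4
print(step_response(0.0, wn, zeta))   # 0: the response starts from rest
# crude numerical search for the first peak over 0 <= t < 20 s
peak = max(step_response(0.001 * k, wn, zeta) for k in range(20000))
print(peak)                           # about 1.254, i.e. roughly 25% overshoot
```

The numerically found peak matches the analytic overshoot formula derived later in this lecture.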
We will now examine what sort of criteria we usually specify, with respect to the following diagram:

Figure 5B.6: Step response definitions. c(t) showing P.O. (percent overshoot), td, tr, tp and ts, with the 0.05, 0.10, 0.50, 0.90, 0.95, 1.00, 1.05 levels marked about the steady-state value.

The following definitions are made:

    percent overshoot = (cmax − css)/css × 100%    (5B.13a)

    delay time td:  c(td) = 50% of css    (5B.13b)

    rise time tr = t90% − t10%,  where c(t90%) = 90% of css and c(t10%) = 10% of css    (5B.13c)

    settling time ts:  |c(t) − css| ≤ 5% of css for all t ≥ ts    (5B.13d)

The poles of Eq. (5B.10) are given by:

    p1, p2 = −ζωn ± jωn√(1 − ζ²) = −α ± jωd    (5B.14)

The undamped natural frequency is defined as:

    ωn = radial distance of the poles from the origin    (5B.15)

The damping factor is defined as:

    α = magnitude of the real part of the poles = ζωn    (5B.16)

The damped natural frequency is defined as:

    ωd = imaginary part of the poles = ωn√(1 − ζ²)    (5B.17)

The damping ratio is defined as:

    ζ = α/ωn    (5B.18)

All of the above are illustrated in the following diagram:

Figure 5B.7: Second-order complex pole locations. Poles at −α ± jωd = −ζωn ± jωn√(1 − ζ²), at radius ωn from the origin.

The effect of ζ is readily apparent from the following graph of the step-input response:

Figure 5B.8: Second-order step response for varying damping ratio. c(t) plotted against ωn t from 0 to 13, for ζ = 0.1, 0.5, 1.0, 1.5 and 2.0; the lightly damped responses overshoot (up to about 1.6 for ζ = 0.1), while responses with ζ ≥ 1 approach the steady-state value monotonically.

Settling Time

The settling time, ts, is the time required for the output to come within, and stay within, a given band about the actual steady-state value. This band is usually expressed as a percentage p of the steady-state value. The standard second-order lowpass transfer function of Eq. (5B.10) has the s-plane plot shown below if 0 < ζ < 1:

Figure 5B.9: Pole locations showing the definition of the angle θ. Poles at −ζωn ± jωn√(1 − ζ²), at radius ωn from the origin.

It can be seen that the angle θ is given by:

    cos θ = ζωn/ωn = ζ    (5B.19)

The step-response, as given by Eq. (5B.12), is then:

    c(t) = 1 − (e^(−ζωn t)/√(1 − ζ²)) sin(ωn√(1 − ζ²) t + θ)    (5B.20)

To derive an estimate for the settling time, we need to examine the step-response more closely.

To determine the settling time, we need to find the time it takes for the response to fall within, and stay within, a certain band about the steady-state value. This time depends on ζ and ωn in a non-linear fashion. It can be obtained numerically from the responses shown in Figure 5B.8. The curves 1 ± e^(−ζωn t)/√(1 − ζ²) are the envelope curves of the transient response for a unit-step input, as shown below:

Figure 5B.10: Pair of envelope curves for the unit-step response of a lowpass second-order underdamped system. c(t) lies between 1 − e^(−ζωn t)/√(1 − ζ²) and 1 + e^(−ζωn t)/√(1 − ζ²), with time constant T = 1/(ζωn).

The response curve c(t) always remains within the pair of envelope curves.

The dashed curves in Figure 5B.10 represent the loci of the maxima and minima of the step-response. One way of analytically estimating the settling time with a simple equation is to consider only the minima and maxima of the step-response. For 0 < ζ < 1, the step-response is the damped sinusoid shown below:

Figure 5B.11: Exponential curves intersecting the maxima and minima of the step-response. The curves 1 ± e^(−ζωn t) pass through the peaks and troughs of c(t) about the steady-state value of 1.

The maxima and minima are found by differentiating the time response, Eq. (5B.20), and equating to zero:

    dc(t)/dt = (ζωn e^(−ζωn t)/√(1 − ζ²)) sin(ωn√(1 − ζ²) t + θ)
             − (ωn e^(−ζωn t)/√(1 − ζ²)) √(1 − ζ²) cos(ωn√(1 − ζ²) t + θ) = 0    (5B.21)

Dividing through by the common term and rearranging, we get:

    √(1 − ζ²) cos(ωn√(1 − ζ²) t + θ) = ζ sin(ωn√(1 − ζ²) t + θ)    (5B.22)

    tan(ωn√(1 − ζ²) t + θ) = √(1 − ζ²)/ζ    (5B.23)

Then, taking the arctangent of both sides of Eq. (5B.23), and using the fact that tan φ = tan(φ + nπ), we have:

    ωn√(1 − ζ²) t + θ = tan⁻¹(√(1 − ζ²)/ζ) + nπ

From the s-plane plot of Figure 5B.9, we have:

    tan θ = √(1 − ζ²)/ζ    (5B.24)

Substituting Eq. (5B.24) and solving for t, we obtain:

    t = nπ / (ωn√(1 − ζ²)),  n = 0, 1, 2, …    (5B.25)

Eq. (5B.25) gives the times at which the maxima and minima of the step-response occur. Since c(t) in Eq. (5B.20) is only defined for t ≥ 0, Eq. (5B.25) only gives valid results for t ≥ 0.

Substituting Eq. (5B.25) into Eq. (5B.20), we have:

    c(n) = 1 − (e^(−ζnπ/√(1 − ζ²))/√(1 − ζ²)) sin(nπ + θ)    (5B.26)

Cancelling common terms, this is Eq. (5B.27). Since:

    sin(nπ + θ) = sin θ for n even,  sin(nπ + θ) = −sin θ for n odd    (5B.28)

Eq. (5B.27) defines two curves:

    c1(n) = 1 + (e^(−ζnπ/√(1 − ζ²))/√(1 − ζ²)) sin θ,  n odd
    c2(n) = 1 − (e^(−ζnπ/√(1 − ζ²))/√(1 − ζ²)) sin θ,  n even    (5B.29a,b)

From Figure 5B.9, we have:

    sin θ = √(1 − ζ²)    (5B.30)

Substituting this equation for sin θ into Eqs. (5B.29), we get:

    c1(n) = 1 + e^(−ζnπ/√(1 − ζ²)),  n odd
    c2(n) = 1 − e^(−ζnπ/√(1 − ζ²)),  n even    (5B.31a,b)

Eqs. (5B.31a) and (5B.31b) are, respectively, the relative maximum and minimum values of the step-response, with the times for the maxima and minima given by Eq. (5B.25). But these values will be exactly the same as those given by the following exponential curves:

    c1(t) = 1 + e^(−ζωn t)    (5B.32a)
    c2(t) = 1 − e^(−ζωn t)    (5B.32b)

evaluated at the times for the maxima and minima:

    t = nπ / (ωn√(1 − ζ²)),  n = 0, 1, 2, …    (5B.33)

Since the exponential curves, Eqs. (5B.32), pass through the maxima and minima of the step-response, they can be used to approximate the extreme bounds of the step-response (note that the response actually goes slightly outside the exponential curves, especially after the first peak; the exponential curves are only an estimate of the bounds).

We can make an estimate of the settling time by simply determining the time at which c1(t) [or c2(t)] enters the band 1 − δ ≤ c(t) ≤ 1 + δ about the steady-state value, as indicated graphically below:

Figure 5B.12: Graph of an underdamped step-response showing the exponential curves 1 ± e^(−ζωn t) bounding the maxima and minima, and the settling time ts where the upper curve enters the 1 ± δ band.

The exponential terms in Eqs. (5B.32) represent the deviation from the steady-state value. Since the exponential response is monotonic, it is sufficient to calculate the time when the magnitude of the exponential is equal to the required error δ. This time is the settling time, ts:

    δ = e^(−ζωn ts)    (5B.34)

Taking the natural logarithm of both sides and solving for ts gives the "p-percent" settling time for a step-input:

    ts = −ln δ / (ζωn),  where δ = p/100    (5B.35)

Note that this formula is an approximation to the real settling time.

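Eq. (5B.35) as a Python sketch (standard library only; ζ = 0.5 and ωn = 2 rad/s are assumed example values):

```python
import math

def settling_time(zeta, wn, p=5.0):
    """Approximate p-percent settling time, Eq. (5B.35)."""
    delta = p / 100.0
    return -math.log(delta) / (zeta * wn)

zeta, wn = 0.5, 2.0
ts = settling_time(zeta, wn)           # 5% band
print(ts)                              # about 3.0 s
# consistency check with Eq. (5B.34): the envelope deviation at ts is the band
print(math.exp(-zeta * wn * ts))       # 0.05
```

For the common 5% and 2% bands, −ln δ is about 3.0 and 3.9 respectively, giving the familiar rules of thumb ts ≈ 3/(ζωn) and ts ≈ 4/(ζωn).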
Peak Time

The peak time, tp, at which the response has a maximum overshoot, is given by Eq. (5B.25) with n = 1 (the first local maximum):

    tp = π / (ωn√(1 − ζ²)) = π/ωd    (5B.36)

This formula is only applicable if 0 < ζ < 1; otherwise the peak time is tp = ∞.

Percent Overshoot

The magnitudes of the overshoots can be determined using Eq. (5B.31a). The maximum value is obtained by letting n = 1. Therefore, the maximum overshoot is:

    maximum overshoot = e^(−ζπ/√(1 − ζ²))    (5B.37)

Hence, the maximum value is:

    c(t)max = 1 + e^(−ζπ/√(1 − ζ²))    (5B.38)

and the maximum percent overshoot is:

    P.O. = e^(−ζπ/√(1 − ζ²)) × 100%    (5B.39)

Rise Time and Delay Time

To determine rise times and delay times, we usually don't resort to solving the non-linear equation that results from substituting the 10%, 50% and 90% values of the steady-state response and solving for t. We use a normalised delay time graph, or solve the resulting equations numerically using MATLAB®, or measure from a graph of the response (or on the DSO).
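Eqs. (5B.36) and (5B.39) as a Python sketch (standard library only; ζ = 0.4 and ωn = 1 rad/s are assumed example values):

```python
import math

def peak_time(zeta, wn):
    """Eq. (5B.36): time of the first overshoot, valid for 0 < zeta < 1."""
    return math.pi / (wn * math.sqrt(1 - zeta**2))

def percent_overshoot(zeta):
    """Eq. (5B.39): maximum percent overshoot, valid for 0 < zeta < 1."""
    return 100.0 * math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))

zeta, wn = 0.4, 1.0
tp = peak_time(zeta, wn)
po = percent_overshoot(zeta)
print(tp)    # about 3.43 s
print(po)    # about 25.4 %
```

Note that the percent overshoot depends only on ζ, which is why overshoot specifications translate directly into a minimum damping ratio.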

Summary

- The time-domain response of a system is important in control systems. In "set point" control systems, the time-domain response is the step response of the system.
- We usually employ feedback in a system so that the output tracks the input with some acceptable steady-state error.
- The transient part of a time-domain response is important. Control systems usually specify acceptable system behaviour with regard to percent overshoot, settling time, rise time, etc.
- For second-order all-pole systems, we can directly relate pole locations (ζ and ωn) to transient behaviour. For second-order systems, most of these quantities can be obtained from simple formulas; in general they cannot.
- Three important parameters of a second-order underdamped response are used to specify a transient response. They are:

    Settling time:      ts = −ln δ / (ζωn)
    Peak time:          tp = π / (ωn√(1 − ζ²)) = π/ωd
    Percent overshoot:  P.O. = e^(−ζπ/√(1 − ζ²)) × 100%

References

Kuo, B.: Automatic Control Systems, 7th ed., Prentice-Hall, 1995.

Exercises

1. A second-order all-pole system has roots at −2 ± j3 rad/s. If the input to the system is a step of 10 units, determine:

   (a) the P.O. of the output
   (b) the peak time of the output
   (c) the damping ratio
   (d) the natural frequency of the system
   (e) the actual frequency of oscillation of the output
   (f) the 0-100% rise time
   (g) the 5% settling time
   (h) the 2% settling time

2. Determine a second-order, all-pole transfer function which will meet the following specifications for a step input:

   (a) 10%-90% rise time ≤ 150 ms
   (b) ≤ 5% overshoot
   (c) 1% settling time ≤ 1 s

3. Given:

C(s) = ω_n² / [s(s² + 2ζω_n s + ω_n²)]

find c(t) using the residue approach. [Hint: Expand the denominator into the form s(s + ζω_n − jω_d)(s + ζω_n + jω_d)]

4. The experimental zero-state response to a unit-step input of a second-order all-pole system is shown below:

[Figure: measured step response c(t) settling to 1.0, with overshoot magnitudes a and b marked above the final value; the first peak occurs at t = 0.4 s, and the time axis runs from 0 to 8 s]

(a) Derive an expression (in terms of a and b) for the damping ratio.

(b) Determine values for the natural frequency and damping ratio, given a = 0.4 and b = 0.08.

5. Given:

(i) G₁(s) = 10/(s + 10)
(ii) G₂(s) = 1/(s + 1)
(iii) G₃(s) = 10/[(s + 1)(s + 10)]

(a) Sketch the poles of each system on the s-plane.

(b) Sketch the time responses of each system to a unit-step input.

(c) Which pole dominates the time response of G₃(s)?

6. Find an approximate first-order model of the transfer function:

G(s) = 4/(s² + 3s + 2)

by “fitting” a single-time-constant step response curve to its step response. Sketch and compare the two step responses.

Hint: A suitable criterion to use that is common in control theory is the integral of the square of the error, ISE, which is defined as:

I = ∫₀ᵀ e²(t) dt

The upper limit T is a finite time chosen somewhat arbitrarily so that the integral approaches a steady-state value.

7. Automatically controlled machine-tools form an important aspect of control system application, eliminating the tedium of repetitive operations required of human operators, and the possibility of human error. The major trend has been towards the use of automatic numerically controlled machine tools using direct digital inputs, using a computer to supply the reference signal. Many CAD/CAM tools produce numeric output for the direct control of these tools. The figure below illustrates the block diagram of an automatic numerically controlled machine-tool position control system.

[Block diagram: R(s) → servo amplifier Ka = 9 → servo motor 1/(s(s + 1)) → C(s), with position feedback]

(a) What is the undamped natural frequency ω_n and damping factor ζ?

(b) What is the percent overshoot and time to peak resulting from the application of a unit-step input?

(c) What is the steady-state error resulting from the application of a unit-step input?

(d) What is the steady-state error resulting from the application of a unit-ramp r(t) = t·u(t) input?

8. Find the steady-state errors to (a) a unit-step, and (b) a unit-ramp input, for the following feedback system:

[Block diagram: R(s) → Σ → 1/(2s + 1) → C(s), with 3s in the feedback path]

Note that the error is defined as the difference between the actual input r(t) and the actual output c(t).

9. Given G(s) = 0.1/(s + 0.1):

a) Find an expression for the step response of this system. Sketch this response. What is the system time constant?

b) A unity feedback loop is to be connected around the system as shown:

[Block diagram: R(s) → Σ → K → G(s) → C(s), unity feedback]

Sketch the time responses of the closed-loop system and find the system time constants when (i) K = 0.1, (ii) K = 1 and (iii) K = 10. What effect does feedback have on the time response of a first-order system?

Lecture 6A – Effects of Feedback

Transient response. Closed-loop control. Disturbance rejection. Sensitivity.

Overview

We apply feedback in control systems for a variety of reasons. The primary purpose of feedback is to more accurately control the output: we wish to reduce the difference between a reference input and the actual output. When the input signal is a step, this is called set-point control. Reduction of the system error is only one advantage of feedback. Feedback also affects the transient response, stability, bandwidth, disturbance rejection and sensitivity to system parameters.

Recall that the basic feedback system was described by the block diagram:

[Figure 6A.1: General feedback control system, with signals R(s), E(s), B(s), C(s) and blocks G(s), H(s)]

The system is described by the following transfer function:

C(s)/R(s) = G(s)/(1 + G(s)H(s))    (6A.1)

The only way we can improve “system performance” – whatever that may be – is by choosing a suitable G(s) or H(s). Some of the criteria for choosing G(s), with H(s) = 1, will be given in the following sections.

Transient Response

One of the most important characteristics of control systems is their transient response. We can modify the response of a system by cascading the system with a transfer function which has been designed so that the overall transfer function achieves some design objective. We might desire a “speedy” response, or a response without an “overshoot” which may be physically impossible or cause damage to the system being controlled (e.g. Maze rover hitting a wall!). This is termed open-loop control.

[Figure 6A.2: Open-loop control system: R(s) → Gc(s) (controller) → Gp(s) (plant) → C(s)]

A better way of modifying the response of a system is to apply feedback. This is termed closed-loop control. By adjusting the loop feedback parameters, we can control the transient response (within limits). A typical control system for set-point control simply derives the error signal by comparing the output directly with the input. Such a system is called a unity-feedback system.

[Figure 6A.3: Unity-feedback closed-loop control system: R(s) → Σ → E(s) → Gc(s) (controller) → Gp(s) (plant) → C(s)]

t0 Signals and Systems 2014 (6A. then the only way it can appear in the expression for V s  is if X s   K s . Assuming the input is a step function. Since the MR’s transfer function does not contain a K s term. As we have seen.3 Example We have already seen that the MR can be described by the following block diagram (remember it’s just a differential equation!): MR transfer function for velocity output X (s ) V (s ) 1M skf M Figure 6A. then we have: V s   G s  X s   1M K s  kf M s  K kf s  K kf s  kf M (6A.6A. the objective is to force vt  to be equal to a desired constant speed.4 In this example.2) Inverse transforming yields: vt    K  k 1 e f kf  M t . this implies a partial fraction term v0 s in the expression for V s  . v0 .3) .

If K is set to v₀kf then:

v(t) = v₀(1 − e^(−(kf/M)t)),  t ≥ 0    (6A.4)

and since kf/M > 0, then v(t) → v₀ as t → ∞. If our reference signal were r(t) = v₀u(t) then the system we have described is:

[Figure 6A.5: Simple open-loop MR controller: R(s) → kf (controller) → (1/M)/(s + kf/M) (plant) → V(s)]

This is referred to as open-loop control, since the control depends only on the reference signal and not on the output. This type of control is deficient in several aspects. Note that the reference input has to be converted to the MR input through a gain stage equal to kf, which must be known. Also, by examining Eq. (6A.4), we can see that we have no control over how fast the velocity converges to v₀.

Closed-Loop Control

To better control the output, we’ll implement a closed-loop control system. For simplicity, we’ll use a unity-feedback system, with our controller placed in the feed-forward path:

[Figure 6A.6: Closed-loop MR controller: R(s) → Σ → E(s) → Gc(s) (controller) → X(s) → (1/M)/(s + kf/M) (plant) → V(s)]

Proportional Control (P Controller)

The simplest type of controller has transfer function Gc(s) = KP. This is called proportional control since the control signal x(t) is directly proportional to the error signal e(t).

[Figure 6A.7: Proportional controller (P controller): R(s) → Σ → E(s) → KP → X(s) → (1/M)/(s + kf/M) (plant) → V(s)]

With this type of control, the transform of the output is:

V(s) = [Gc(s)G(s)/(1 + Gc(s)G(s))]·R(s)
     = [(KP/M)/(s + kf/M + KP/M)]·(v₀/s)
     = [KP v₀/(kf + KP)]/s − [KP v₀/(kf + KP)]/(s + kf/M + KP/M)    (6A.5)

Inverse transforming yields the response:

v(t) = [KP v₀/(kf + KP)]·(1 − e^(−((kf + KP)/M)t)),  t ≥ 0    (6A.6)

Now the velocity converges to the value KP v₀/(kf + KP). Since it is now impossible for v(t) = v₀, the proportional controller will always result in a steady-state tracking error equal to kf v₀/(kf + KP). However, we are free to make this error as small as desired by choosing a suitably large value for KP. Also, from Eq. (6A.6), we can see that the rate at which v(t) converges to the steady-state value can be made as fast as desired by again taking KP to be suitably large.
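The trade-offs of proportional control are easy to tabulate. A sketch using Eq. (6A.6); the particular values of kf, M and v₀ are assumed for illustration only:

```python
kf, M, v0 = 2.0, 1.0, 1.0   # assumed friction, mass and set-point values

for KP in (1.0, 10.0, 100.0):
    v_ss = KP * v0 / (kf + KP)   # steady-state velocity, from Eq. (6A.6)
    err = kf * v0 / (kf + KP)    # steady-state tracking error
    tau = M / (kf + KP)          # time constant: the response speeds up with KP
    print(KP, round(v_ss, 4), round(err, 4), round(tau, 4))
```

Raising KP shrinks the error and the time constant together, but the error never reaches zero.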

Integral Control (I Controller)

One of the deficiencies of the simple P controller in controlling the maze rover was that it did not have a zero steady-state error. This was due to the fact that the overall feedforward system was Type 0, instead of Type 1. We can easily make the overall feedforward system Type 1 by changing the controller so that it has a pole at the origin:

[Figure 6A.8: Integral controller (I controller): R(s) → Σ → E(s) → KI/s (controller) → X(s) → (1/M)/(s + kf/M) (plant) → V(s)]

We recognise that the controller transfer function is just an integrator, so this form of control is called integral control. With this type of control, the overall transfer function of the closed-loop system is:

T(s) = [(KI/s)(1/M)/(s + kf/M)] / [1 + (KI/s)(1/M)/(s + kf/M)]
     = (KI/M) / (s² + (kf/M)s + KI/M)    (6A.7)

We can see straight away that the transfer function is 1 at DC (set s = 0 in the transfer function). This means that the output will follow the input in the steady-state (zero steady-state error). By comparing this second-order transfer function with the standard form:

T(s) = ω_n² / (s² + 2ζω_n s + ω_n²)    (6A.8)

we can see that the controller is only able to adjust the natural frequency, ω_n, i.e. the distance of the poles from the origin. This may be good enough, but we would prefer to be able to control the damping ratio ζ as well.
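The coupling between ω_n and ζ under pure integral control can be seen by matching Eq. (6A.7) against the standard form (6A.8). A sketch with assumed plant values:

```python
import math

kf, M = 2.0, 1.0   # assumed plant parameters

# closed-loop denominator: s^2 + (kf/M) s + KI/M  vs  s^2 + 2*zeta*wn*s + wn^2
for KI in (1.0, 4.0, 16.0):
    wn = math.sqrt(KI / M)        # wn^2 = KI/M
    zeta = (kf / M) / (2 * wn)    # 2*zeta*wn = kf/M
    print(KI, wn, zeta)           # raising KI raises wn but lowers zeta
```

The product ζω_n is pinned at kf/(2M) by the plant, which is exactly why KI alone cannot set the damping ratio independently.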

Proportional Plus Integral Control (PI Controller)

If we combine the two previous controllers we have what is known as a PI controller.

[Figure 6A.9: Proportional plus integral controller (PI controller): R(s) → Σ → E(s) → KP + KI/s (controller) → X(s) → (1/M)/(s + kf/M) (plant) → V(s)]

With this type of control, the overall transfer function of the closed-loop system is:

T(s) = [(KP + KI s⁻¹)(1/M)/(s + kf/M)] / [1 + (KP + KI s⁻¹)(1/M)/(s + kf/M)]
     = [(KP/M)s + KI/M] / [s² + ((kf + KP)/M)s + KI/M]    (6A.9)

Again, we can see straight away that the transfer function is 1 at DC (set s = 0 in the transfer function), and again the output will follow the input in the steady-state (zero steady-state error). The controller in this case causes the plant to respond to both the error and the integral of the error. We can now control the damping ratio ζ, as well as the natural frequency ω_n, independently of each other. But we also have a zero in the numerator. Intuitively we can conclude that the response will be similar to the response with integral control, but will also contain a term which is the derivative of this response (we see multiplication by s – the zero – as a derivative). We can analyse the response by rewriting Eq. (6A.9) as:

T(s) = ω_n²/(s² + 2ζω_n s + ω_n²) + (KP/KI)·[ω_n² s/(s² + 2ζω_n s + ω_n²)]    (6A.10)

For a unit-step input, let the output response that is due to the first term on the right-hand side of Eq. (6A.10) be c_I(t). Then the total unit-step response is:

c(t) = c_I(t) + (KP/KI)·dc_I(t)/dt    (6A.11)

The figure below shows that the addition of the zero at s = −KI/KP reduces the rise time and increases the maximum overshoot, compared to the I controller step response:

[Figure 6A.10: PI controller response to a unit-step input, showing c_I(t), the scaled derivative (KP/KI)·dc_I(t)/dt, and the total response c(t)]

The response is easy to sketch by drawing the derivative of c_I(t) and adding a scaled version of this to c_I(t). The derivative is sketched by noting that the derivative of c_I(t) is zero when the tangent to c_I(t) is horizontal, and the slope of c_I(t) oscillates between positive and negative values between these points.
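Eq. (6A.11) can be checked numerically with the standard underdamped step response for c_I(t). The values of ζ, ω_n and the ratio KP/KI below are assumed for illustration:

```python
import math

zeta, wn = 0.4, 2.0                 # assumed second-order parameters
wd = wn * math.sqrt(1 - zeta**2)    # damped natural frequency
KP_over_KI = 0.5                    # assumed ratio; sets the zero at s = -KI/KP

def cI(t):
    # standard all-pole underdamped unit-step response
    e = math.exp(-zeta * wn * t)
    return 1 - e * (math.cos(wd * t) + zeta * wn / wd * math.sin(wd * t))

def dcI(t):
    # derivative of cI: (wn^2/wd) e^{-zeta wn t} sin(wd t)
    return wn**2 / wd * math.exp(-zeta * wn * t) * math.sin(wd * t)

ts = [i * 0.01 for i in range(1000)]                       # 0 to 10 s grid
peak_I = max(cI(t) for t in ts)                            # I-controller peak
peak_PI = max(cI(t) + KP_over_KI * dcI(t) for t in ts)     # PI peak, Eq. (6A.11)
print(peak_I, peak_PI)   # the added zero increases the maximum overshoot
```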

The total response can then be sketched in, ensuring that the total response goes through the points where the c_I(t) slope is zero.

Proportional, Integral, Derivative Control (PID Controller)

One of the best known controllers used in practice is the PID controller, where the letters stand for proportional, integral, and derivative. The PID controller transfer function is:

Gc(s) = KP + KD s + KI/s    (6A.12)

The addition of a derivative to the PI controller means that PID control contains anticipatory control: by knowing the slope of the error, the controller can anticipate the direction of the error and use it to better control the process. There are established procedures for designing control systems with PID controllers, in both the time and frequency-domains.

Disturbance Rejection

A major problem with open-loop control is that the output c(t) of the plant will be “perturbed” by a disturbance input d(t). Since the control signal r(t) does not depend on the plant output c(t) in open-loop control, the control signal cannot compensate for the disturbance d(t). Closed-loop control can compensate to some degree for disturbance inputs.

Example

Consider the MR in open-loop:

[Figure 6A.11: Open-loop MR system with disturbance inputs D1(s), D2(s) and D3(s) entering at the controller input, the plant input and the plant output]

The disturbance inputs could be modelling electronic noise in amplifiers, a sudden increase in velocity due to an incline, or any other unwanted signal. In open-loop control, a disturbance d2(t) has just as much “control” over the output as our controller!

Closed-loop feedback reduces the response of the system to disturbance inputs:

[Figure 6A.12: Closed-loop MR system with disturbance inputs D1(s), D2(s) and D3(s)]

Using superposition, the output is given by:

V(s) = [Gc(s)G(s)/(1 + Gc(s)G(s))]·[R(s) + D1(s)] + [G(s)/(1 + Gc(s)G(s))]·D2(s) + [1/(1 + Gc(s)G(s))]·D3(s)    (6A.13)

Thus, we cannot eliminate noise at the input, d1(t): the system cannot discriminate between d1(t) and r(t). To minimise the other disturbances on the output, we need to make the loop-gain 1 + Gc(s)G(s) large.
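For the MR with a P controller, the DC effect of an output disturbance d3(t) is scaled by 1/(1 + Gc(0)G(0)), per Eq. (6A.13). A sketch with assumed plant values:

```python
kf, M = 2.0, 1.0   # assumed plant parameters

for KP in (1.0, 10.0, 100.0):
    loop_dc = KP * (1 / M) / (kf / M)   # Gc(0)G(0) = KP/kf for this plant
    atten = 1 / (1 + loop_dc)           # fraction of a DC d3 reaching the output
    print(KP, round(atten, 4))          # shrinks as the loop gain rises
```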

open. If ST is small.or closed-loop.15) . It represents the fractional change in the system transfer function due to a fractional change in some parameter.6A. For small changes in : ST  T T0   0 where T0 and  0 are the nominal or design values.14) is called the system sensitivity (with respect to  ). where  is some parameter in the transfer function. Let the system transfer function be expressed as T s. the effect on T of changes in  is small. Then: ST  T T  T    T  System sensitivity defined (6A. Signals and Systems 2014 (6A.11 Sensitivity Sensitivity is a measure of how the characteristics of a system depend on the Sensitivity defines how one element of variations of some component (or parameter) of the system. System Sensitivity This is a general formulation which applies to any type of system.  . The effect of the a system affects a characteristic of the parameter change can be expressed quantitatively in terms of a sensitivity system function.

Example

How does T(s) depend on changes in G(s) for:

[Figure 6A.13: (a) open loop: R(s) → G(s) → C(s); (b) closed loop: R(s) → Σ → E(s) → G(s) → C(s), with B(s) fed back through H(s)]

Intuitively for open-loop, the output must depend directly on G(s), so S_G^T = 1. Intuitively for closed-loop:

T(s) = G(s)/(1 + G(s)H(s)) ≈ 1/H(s)    (6A.16)

if |G(s)H(s)| >> 1. Therefore changes in G(s) “don’t matter”, or the system is not sensitive to changes in G(s).

Analytically, using system sensitivity for the open-loop case:

T(s) = G(s)    (6A.17)

(here G corresponds to the parameter α). Therefore:

S_G^T = (G/T)·(∂T/∂G) = 1    (6A.18)

The transfer function is therefore directly sensitive to changes in G, as we thought.

Analytically, for the closed-loop case:

T(s) = G(s)/(1 + G(s)H(s))
∂T/∂G = [(1 + GH) − GH]/(1 + GH)² = 1/(1 + GH)²    (6A.19)

Then:

S_G^T = (G/T)·(∂T/∂G) = [G(1 + GH)/G]·[1/(1 + GH)²] = 1/(1 + GH)    (6A.20)

Therefore with feedback, the effect of a percentage change in G is reduced by the factor 1/(1 + GH). If the input R is held constant, the effect on the output C of a change in G is 1/(1 + GH) less than it would have been without feedback. This result means that stable feedback components must be used in order to receive the full benefits of feedback. Feedback can reduce the sensitivity of forward-path elements, but input and feedback elements must be highly stable because they have a much greater effect on the output. (Show that S_H^T ≈ −1 for this example.)

Summary

 Various types of controllers, such as P, PI, PID, are used to compensate the transfer function of the plant in unity-feedback systems.

 Feedback systems can minimise the effect of disturbance inputs or system parameter variations.

 Sensitivity is a measure of the dependence of a system’s characteristics with respect to variations of a particular element (or parameter).

References

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB®, Prentice-Hall, 1997.
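The closed-loop result S_G^T = 1/(1 + GH) can be cross-checked against the defining ratio (6A.15) by a finite-difference estimate. The operating-point gains below are assumed for illustration:

```python
G, H = 50.0, 0.2   # assumed nominal forward and feedback gains

def T(g):
    # closed-loop transfer function evaluated at a fixed frequency
    return g / (1 + g * H)

dG = 1e-6 * G                                       # small fractional perturbation
S_numeric = (T(G + dG) - T(G)) / dG * (G / T(G))    # (G/T) * dT/dG, estimated
S_formula = 1 / (1 + G * H)                         # Eq. (6A.20)
print(S_numeric, S_formula)                         # both ~ 1/11
```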

Exercises

1. Show that the following block diagrams are equivalent:

[Block diagrams: a feedback loop with forward block G(s) and feedback block H(s); and a unity-feedback loop with forward block G′(s)]

where G′(s) = G(s)/[1 + G(s)(H(s) − 1)]

2. Assume that an operational amplifier has an infinite input impedance, zero output impedance and a very large gain K. Show for the feedback configuration shown that Vo/Vi = K/(K + 1) ≈ 1 if K is large.

[Circuit: op-amp with gain K connected as a voltage follower, input V1, output V2]

3. For the system shown:

[Block diagram: R(s) → K1 = 10 → Σ → G = 100/(s(s + 1)) → C(s), with feedback K2 = 10]

a) Determine the sensitivity of the system’s transfer function T with respect to the input transducer, K1.

b) Determine the sensitivity of the system’s transfer function T with respect to the output transducer, K2.

c) Determine the sensitivity of the system’s transfer function T with respect to the plant, G.

d) Indicate qualitatively the frequency dependency of S_G^T.

4. It is important to ensure passenger comfort on ships by stabilizing the ship’s oscillations due to waves. Most ship stabilization systems use fins or hydrofoils projecting into the water in order to generate a stabilization torque on the ship. A simple diagram of a ship stabilization system is shown below:

[Block diagram: R(s) → Σ → Ka (fin actuator) → Tf(s) → Σ with disturbance Td(s) → G(s) (ship) → θ(s), with K1 (roll sensor) in the feedback path]

The rolling motion of a ship can be regarded as an oscillating pendulum with a deviation from the vertical of θ degrees and a typical period of 3 seconds. The transfer function of a typical ship is:

G(s) = ω_n²/(s² + 2ζω_n s + ω_n²)

where ω_n = 2π/T ≈ 2 rad/s, T = 3.14 s, and ζ = 0.1. With this low damping factor ζ, the oscillations continue for several cycles and the rolling amplitude can reach 18° for the expected amplitude of waves in a normal sea. The desired roll angle θ(t) is zero degrees.

Determine and compare the open-loop and closed-loop system for:

a) sensitivity to changes in the actuator constant Ka and the roll sensor K1

b) the ability to reduce the effects of the disturbance of the waves.

Note:

1. This regulating system is only effective for disturbances (waves) with frequencies ω_d << ω_n, the natural frequency of the ship. Can you show this?

2. Comment on these results with respect to the effect of feedback on the sensitivity of the system to parameter variations.

5. The system shown uses a unity feedback loop and a PI compensator to control the plant.

[Block diagram: R(s) → Σ → E(s) → KP + KI/s (PI compensator) → Σ with disturbance D(s) → G(s) (plant) → C(s), unity feedback]

Find the steady-state error, e(∞), for the following conditions [note that the error is always the difference between the reference input r(t) and the plant output c(t)]:

(a) G(s) = K1/(1 + sT1), KI = 0
(b) G(s) = K1/(1 + sT1), KI ≠ 0
(c) G(s) = K1/[s(1 + sT1)], KI = 0
(d) G(s) = K1/[s(1 + sT1)], KI ≠ 0

when:

(i) d(t) = 0, r(t) = unit-step
(ii) d(t) = 0, r(t) = unit-ramp
(iii) d(t) = unit-step, r(t) = 0
(iv) d(t) = unit-step, r(t) = unit-step

How does the addition of the integral term in the compensator affect the steady-state errors of the controlled system?

Lecture 7A – The z-Transform

The z-transform. Mapping between s-domain and z-domain. Finding z-transforms. Standard z-transforms. z-transform properties. Evaluation of inverse z-transforms. Transforms of difference equations. The system transfer function.

Overview

Digital control of continuous-time systems has become common thanks to the ever-increasing performance/price ratio of digital signal processors (microcontrollers, DSPs, gate arrays etc). Once we convert an analog signal to a sequence of numbers, we are free to do anything we like to them. Complicated control structures can be implemented easily in a computer, and even things which are impossible using analog components (such as median filtering). We can even create systems which “learn” or adapt to changing systems, something rather difficult to do with analog circuitry.

The side-effect of all these wonderful benefits is the fact that we have to learn a new (yet analogous) body of theory to handle what are essentially discrete-time systems. It will be seen that many of the techniques we use in the continuous-time domain can be applied to the discrete-time case, so in some cases we will design a system using continuous-time techniques and then simply discretize it. To handle signals in the discrete-time domain, we’ll need something akin to the Laplace transform in the continuous-time domain. That thing is the z-transform.

The z-Transform

We’ll start with a discrete-time signal which is obtained by ideally and uniformly sampling a continuous-time signal:

x_s(t) = Σ_{n=−∞}^{∞} x(t)δ(t − nTs)    (7A.1)

Now since the function x_s(t) is zero everywhere except at the sampling instants, we replace the x(t) with the value at the sample instant:

x_s(t) = Σ_{n=−∞}^{∞} x(nTs)δ(t − nTs)    (7A.2)

Also, if we establish the time reference so that x(t) = 0, t < 0, then we can say:

x_s(t) = Σ_{n=0}^{∞} x(nTs)δ(t − nTs)    (7A.3)

This is the discrete-time signal in the time-domain. The value of each sample is represented by the area of an impulse, as shown below:

[Figure 7A.1: A discrete-time signal made by ideally sampling a continuous-time signal: x(t), the unit impulse train with spacing Ts, and the sampled signal x_s(t)]

What if we were to analyse this signal in the frequency domain? Taking the Laplace transform yields:

X_s(s) = ∫₀⁻^∞ [Σ_{n=0}^{∞} x(nTs)δ(t − nTs)] e^(−st) dt    (7A.4)

Since summation and integration are linear, we’ll change the order to give:

X_s(s) = Σ_{n=0}^{∞} x(nTs) ∫₀⁻^∞ δ(t − nTs) e^(−st) dt    (7A.5)

Using the sifting property of the delta function, this integrates to:

X_s(s) = Σ_{n=0}^{∞} x(nTs) e^(−snTs)    (7A.6)

Therefore, we can see that the Laplace transform of a sampled signal, x_s(t), involves a summation of a series of functions, e^(−snTs). This is a problem, because one of the advantages of transforming to the s-domain was to be able to work with algebraic polynomial equations. Fortunately, a very simple idea transforms the Laplace transform of a sampled signal into one which is an algebraic polynomial equation. The idea is to define a complex variable z as:

z = e^(sTs)    (7A.7)

Notice that this is a non-linear transformation. This definition gives us a new transform, called the z-transform, with independent variable z:

X(z) = Σ_{n=0}^{∞} x(nTs) z^(−n)    (7A.8)

Since x(nTs) is just a sequence of sample values, x[n], we normally write the z-transform as:

X(z) = Σ_{n=0}^{∞} x[n] z^(−n)    (7A.9)

Note that Eq. (7A.9) is the one-sided z-transform. We use this for the same reasons that we used the one-sided Laplace transform. Thus, if we know the sample values, x[n], of a signal, it is a relatively easy step to write down the z-transform of the sampled signal:

X(z) = x[0] + x[1]z⁻¹ + x[2]z⁻² + …    (7A.10)

Mapping Between s-Domain and z-Domain

The mapping from the continuous-time s-plane into the discrete-time z-plane defined by z = e^(sTs) leads to some interesting observations. Since s = σ + jω, we have:

z = e^(σTs) e^(jωTs)    (7A.11)

so that the magnitude and phase of z are, respectively:

|z| = e^(σTs)    (7A.12a)
∠z = ωTs    (7A.12b)
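The magnitude and phase relations (7A.12a,b) can be verified directly; the sample period Ts below is an assumed value:

```python
import cmath
import math

Ts = 0.001   # assumed 1 kHz sampling

def s_to_z(s):
    # z = e^{s*Ts}
    return cmath.exp(s * Ts)

# a point on the j*omega axis lands on the unit circle, at angle omega*Ts
z = s_to_z(1j * 2 * math.pi * 100)      # omega for a 100 Hz component
print(abs(z), cmath.phase(z))           # ~1.0, ~0.628 rad (= 0.2*pi)

# a left-half-plane point lands strictly inside the unit circle
print(abs(s_to_z(-500 + 1000j)))        # ~e^{-0.5} = 0.6065...
```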

Therefore, we can draw the mapping between the s-domain and the z-domain as follows:

[Figure 7A.2: The mapping z = e^(sTs) between the s-plane and the z-plane; the jω-axis maps to the unit circle]

The mapping of the s-domain to the z-domain depends on the sample interval, Ts. Therefore, the choice of sampling interval is crucial when designing a digital system.

Mapping the s-Plane Imaginary Axis

The jω-axis in the s-plane is where σ = 0. In the z-domain, this corresponds to a magnitude of |z| = e^(σTs) = e⁰ = 1. Therefore, the frequency ω in the s-domain maps linearly onto the unit-circle in the z-domain, with a phase angle ∠z = ωTs. In other words, distance along the jω-axis in the s-domain maps linearly onto angular displacement around the unit-circle in the z-domain.

[Figure 7A.3: Uniform linear spacing along the jω-axis in the s-plane maps to uniform angular spacing around the unit-circle in the z-plane]

Aliasing

With a sample period Ts, the angular sample rate is given by:

ω_s = 2πf_s = 2π/Ts    (7A.13)

When the frequency is equal to the foldover frequency (half the sample rate), s = jω_s/2 and:

z = e^(sTs) = e^(j(ω_s/2)Ts) = e^(jπ) = 1∠180°    (7A.14)

Thus, as we increase the frequency from 0 to half the sampling frequency along the jω-axis in the s-plane, there is a mapping to the z-domain in an anticlockwise direction around the unit-circle from z = 1∠0° to z = 1∠180°. In a similar manner, if we decrease the frequency along the jω-axis from 0 to s = −jω_s/2, the mapping to the z-plane is in a clockwise direction from z = 1∠0° to z = 1∠−180°. Thus, between the foldover frequencies s = ±jω_s/2, there is a unique one-to-one mapping from the s-plane to the z-plane.

[Figure 7A.4: The mapping of the jω-axis in the s-plane, up to the foldover frequency, onto the unit-circle in the z-plane is unique]

However, if the frequency is increased beyond the foldover frequency, the mapping just continues to go around the unit-circle. That is, absolute frequencies greater than the foldover frequency in the s-plane are mapped onto the same point as frequencies less than the foldover frequency in the z-plane. This means that aliasing occurs – higher frequencies in the s-domain are mapped to lower frequencies in the z-domain; they assume the alias of a lower frequency. The energies of frequencies higher than the foldover frequency add to the energy of frequencies less than the foldover frequency, and this is referred to as frequency folding.

[Figure 7A.5: The mapping of the jω-axis higher than the foldover frequency in the s-plane onto the unit-circle in the z-plane causes aliasing]
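Aliasing follows directly from the periodicity of e^(jωTs): ω and ω + ω_s map to the same point z. A quick check (the sample period is an assumed value):

```python
import cmath
import math

Ts = 0.01                      # assumed sample period
ws = 2 * math.pi / Ts          # angular sample rate

w = 2 * math.pi * 30           # a 30 Hz component
z_low = cmath.exp(1j * w * Ts)
z_high = cmath.exp(1j * (w + ws) * Ts)   # 30 Hz plus a full sample rate

print(abs(z_low - z_high))     # ~0: both frequencies land on the same z (aliasing)
```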

Finding z-Transforms

We will use the same strategy for finding z-transforms of a signal as we did for the other transforms – start with a known standard transform and successively apply transform properties. We first need a few standard transforms.

Example

To find the z-transform of a signal x[n] = δ[n], we substitute into the z-transform definition:

X(z) = Σ_{n=0}^{∞} δ[n] z^(−n) = δ[0] + δ[1]z⁻¹ + δ[2]z⁻² + …    (7A.15)

Since δ[0] = 1, and δ[n] = 0 for n ≠ 0, we get a standard transform pair:

δ[n] ↔ 1    (7A.16)

Thus, the z-transform of a unit-pulse is 1. Note that there is no such thing as an impulse for discrete-time systems – this transform pair is therefore similar to, but not identical to, the transform of an impulse for the Fourier and Laplace transforms.

Example

To find the z-transform of a signal x[n] = aⁿu[n], we substitute into the definition of the z-transform:

X(z) = Σ_{n=0}^{∞} aⁿu[n] z^(−n)    (7A.17)

Since u[n] = 1 for all n ≥ 0:

X(z) = Σ_{n=0}^{∞} (a/z)ⁿ = 1 + (a/z) + (a/z)² + (a/z)³ + …    (7A.18)

To convert this geometric progression into closed-form, let the sum of the first k terms of a general geometric progression be written as S_k:

S_k = Σ_{n=0}^{k−1} xⁿ = 1 + x + x² + … + xⁿ + … + x^(k−1)    (7A.19)

Then, multiplying both sides by x gives:

xS_k = x + x² + x³ + … + x^k    (7A.20)

Subtracting Eq. (7A.20) from Eq. (7A.19) gives:

S_k − xS_k = (1 + x + x² + … + x^(k−1)) − (x + x² + … + x^k) = 1 − x^k    (7A.21)

This is a telescoping sum, where we can see that on the right-hand side only the first and last terms remain after performing the subtraction.

Dividing both sides by (1 − x) then results in:

S_k = Σ_{n=0}^{k−1} xⁿ = (1 − x^k)/(1 − x)    (7A.22)

If k → ∞ then the series will only converge if |x| < 1, because then x^k → 0. For this special case, we have:

Σ_{n=0}^{∞} xⁿ = 1 + x + x² + … = 1/(1 − x),  |x| < 1    (7A.23)

Using this result to express Eq. (7A.18) in closed form results in:

X(z) = 1/(1 − a/z),  |a/z| < 1    (7A.24)

This can be rewritten as:

X(z) = z/(z − a),  |z| > |a|    (7A.25)
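Eq. (7A.25) is easy to check by truncating the defining sum at a point z inside the region of convergence (the values of a and z are arbitrary illustrative choices):

```python
a = 0.8
z = 1.5 + 0.5j                  # |z| > |a|, so the series converges

# truncated defining sum: sum of a^n z^-n, n = 0..499
partial = sum(a**n * z**(-n) for n in range(500))
closed = z / (z - a)            # closed form z/(z - a), Eq. (7A.25)
print(abs(partial - closed))    # ~0
```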

The ROC of X(z) is |z| > |a|, as shown in the shaded area below:

[Figure 7A.6: The signal aⁿu[n] and the region of convergence, |z| > |a|, of its z-transform in the z-plane]

As was the case with the Laplace transform, if we restrict the z-transform to causal signals, then we do not need to worry about the ROC.

Example

To find the z-transform of the unit-step, just substitute a = 1 into Eq. (7A.25). The result is:

u[n] ↔ z/(z − 1)    (7A.26)

This is a frequently used transform in the study of control systems.

Example

To find the z-transform of cos(Ωn)u[n], we recognise that:

cos(Ωn) = (e^(jΩn) + e^(−jΩn))/2    (7A.27)

According to Eq. (7A.25), it follows that:

e^(±jΩn)u[n] ↔ z/(z − e^(±jΩ))    (7A.28)

Therefore:

cos(Ωn)u[n] ↔ (1/2)[z/(z − e^(jΩ)) + z/(z − e^(−jΩ))]    (7A.29)

and so we have another standard transform:

cos(Ωn)u[n] ↔ z(z − cos Ω)/(z² − 2z cos Ω + 1)    (7A.30)

A similar derivation can be used to find the z-transform of sin(Ωn)u[n].

Right Shift (Delay) Property

One of the most important properties of the z-transform is the right shift property. It enables us to directly transform a difference equation into an algebraic equation in the complex variable z. Most of the z-transform properties are inherited Laplace transform properties. The z-transform of a function shifted to the right by one unit is given by:

Z{x[n − 1]} = Σ_{n=0}^{∞} x[n − 1] z^(−n)    (7A.31)

Letting r = n − 1 yields:

Z{x[n − 1]} = Σ_{r=−1}^{∞} x[r] z^(−(r+1)) = x[−1] + z⁻¹ Σ_{r=0}^{∞} x[r] z^(−r) = z⁻¹X(z) + x[−1]    (7A.32)

Thus:

x[n − 1] ↔ z⁻¹X(z) + x[−1]    (7A.33)

Standard z-Transforms

u[n] ↔ z/(z − 1)    (Z.1)

δ[n] ↔ 1    (Z.2)

aⁿu[n] ↔ z/(z − a)    (Z.3)

aⁿcos(Ωn)u[n] ↔ z(z − a cos Ω)/(z² − 2a cos Ω z + a²)    (Z.4)

aⁿsin(Ωn)u[n] ↔ a sin Ω z/(z² − 2a cos Ω z + a²)    (Z.5)
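The right shift property (7A.33) can be checked numerically with a causal signal (so that x[−1] = 0) by truncating the defining sums:

```python
def ztrans(x, z, terms=200):
    """Truncated one-sided z-transform: sum of x(n) * z^-n."""
    return sum(x(n) * z**(-n) for n in range(terms))

x = lambda n: 0.5**n                                  # causal: x[-1] = 0
x_shifted = lambda n: 0.0 if n == 0 else x(n - 1)     # the delayed signal x[n-1]

z = 2.0
X = ztrans(x, z)                  # ~ z/(z - 0.5) = 4/3 at z = 2
X_shifted = ztrans(x_shifted, z)
print(abs(X_shifted - X / z))     # ~0, i.e. x[n-1] <-> z^-1 X(z) + x[-1]
```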

11) k 0 n (Z.7A.15) Initial-value theorem Final-value theorem z  n  z 1 Signals and Systems 2014 .12) Summation z x  k  X z    z 1 k 0 Convolution x1 n x2 n  X 1  z X 2  z  (Z.10) q 1 xn  q   z X  z    xk z q q k (Z. axn  aX  z  Linearity Multiplication by z a xn  X   a (Z.8) k 0 Multiplication by n Left shifting no corresponding transform (Z.13) x0  lim X  z  (Z.14) lim xn  lim z  1X  z  (Z.9) d nxn   z X z  dz (Z.14 z-Transform Properties Assuming xn  X  z  .7) n an Right shifting (Z.6) q 1 xn  q   z X  z    xk  q z q k (Z.

Evaluation of Inverse z-Transforms

From complex variable theory, the definition of the inverse z-transform is:

$$x[n] = \frac{1}{2\pi j} \oint_C X(z)\, z^{n-1}\, dz \tag{7A.34}$$

You won't need to evaluate this integral to determine the inverse z-transform – it is too hard to apply, just like we hardly ever use the definition of the inverse Laplace transform. Instead, we manipulate $X(z)$ into a form where we can simply identify sums of standard transforms that may have had a few properties applied to them.

Given:

$$F(z) = \frac{b_0 z^n + b_1 z^{n-1} + \cdots + b_n}{(z - p_1)(z - p_2)\cdots(z - p_n)} \tag{7A.35}$$

we want to put $F(z)$ in the form:

$$F(z) = k_0 + \frac{k_1 z}{z - p_1} + \frac{k_2 z}{z - p_2} + \cdots + \frac{k_n z}{z - p_n} \tag{7A.36}$$

The approach we will use is to firstly expand $F(z)/z$ into partial fractions, then multiply through by z – then find the inverse z-transform.

Example

$$F(z) = \frac{z^3 + 2z^2 + z + 1}{(z - 2)^2 (z + 3)}$$

Therefore:

$$\frac{F(z)}{z} = \frac{z^3 + 2z^2 + z + 1}{z(z - 2)^2(z + 3)} = \frac{k_0}{z} + \frac{k_1}{z - 2} + \frac{k_2}{(z - 2)^2} + \frac{k_3}{z + 3}$$

n  0 12 100 10  2  75 z za use na n  (Standard transform Z.3 and 2  z  p2  z  a 2 property Z.6) we have: Signals and Systems 2014 n n z .10).16 Now we evaluate the residues: k0  z 1 F z  z 3  2z 2  z 1   2 z z 0  z  2  z  3 z 0 12 d d  z 3  2 z 2  z  1 2 F z  k1  z  2     dz z z 2 dz  z  z  3  z 2      z  z  3 3 z 2  4 z  1  z 3  2 z 2  z  1 2 z  3 77  2 2 100 z  z  3 z 2 k 2  z  2 2 k 3   z  3 F z  z 3  2z 2  z 1 19   z z 2 z  z  3 10 z 2 11 F z  z 3  2z 2  z 1   2 75 z z  3 z z  2 z  3 Therefore: F z   1 77 z 19 z 11 z    2 12 100 z  2 10 z  2 75 z  3 From our standard transforms: f n  Note: For k 2 1 77 n 19  n  n 11 n  n  2   2   3 . a  a z  a 2 .7A. Then with linearity (property Z.

Transforms of Difference Equations

The right shift property of the z-transform sets the stage for solving linear difference equations with constant coefficients. Because $y[n-k] \Leftrightarrow z^{-k} Y(z)$, the z-transform of a difference equation is an algebraic equation that can be readily solved for $Y(z)$. Next we take the inverse z-transform of $Y(z)$ to find the desired solution $y[n]$.

[Figure 7A.7 – difference equation (time-domain, difficult?) → ZT → algebraic equation (z-domain, easy!) → z-domain solution → IZT → time-domain solution]

Example

Solve the second-order linear difference equation:

$$y[n] - 5y[n-1] + 6y[n-2] = 3x[n-1] + 5x[n-2] \tag{7A.37}$$

if the initial conditions are $y[-1] = 11/6$, $y[-2] = 37/36$, and the input is $x[n] = (2)^{-n} u[n]$.

Now:

$$y[n]\,u[n] \Leftrightarrow Y(z)$$

$$y[n-1]\,u[n] \Leftrightarrow z^{-1} Y(z) + y[-1] = z^{-1} Y(z) + \frac{11}{6}$$

$$y[n-2]\,u[n] \Leftrightarrow z^{-2} Y(z) + z^{-1} y[-1] + y[-2] = z^{-2} Y(z) + \frac{11}{6} z^{-1} + \frac{37}{36} \tag{7A.38}$$

For the input, $x[-1] = x[-2] = 0$. Then $x[n] = (2)^{-n} u[n] = (0.5)^n u[n]$ and:

$$x[n]\,u[n] \Leftrightarrow X(z) = \frac{z}{z - 0.5}$$

$$x[n-1]\,u[n] \Leftrightarrow z^{-1} X(z) + x[-1] = \frac{1}{z - 0.5}$$

$$x[n-2]\,u[n] \Leftrightarrow z^{-2} X(z) + z^{-1} x[-1] + x[-2] = \frac{z^{-1}}{z - 0.5} \tag{7A.39}$$

Taking the z-transform of Eq. (7A.37) and substituting the foregoing results, we obtain:

$$Y(z) - 5\left[z^{-1} Y(z) + \frac{11}{6}\right] + 6\left[z^{-2} Y(z) + \frac{11}{6} z^{-1} + \frac{37}{36}\right] = \frac{3}{z - 0.5} + \frac{5 z^{-1}}{z - 0.5} \tag{7A.40}$$

or:

$$\left(1 - 5z^{-1} + 6z^{-2}\right) Y(z) = 3 - 11 z^{-1} + \frac{3z + 5}{z(z - 0.5)} \tag{7A.41}$$

from which we obtain:

$$Y(z) = \underbrace{\frac{z(3z - 11)}{z^2 - 5z + 6}}_{\text{zero-input component}} + \underbrace{\frac{z(3z + 5)}{(z - 0.5)(z^2 - 5z + 6)}}_{\text{zero-state component}} \tag{7A.42}$$

and:

$$Y(z) = \underbrace{\frac{z(3z - 11)}{(z-2)(z-3)}}_{\text{zero-input component}} + \underbrace{\frac{z(3z + 5)}{(z - 0.5)(z-2)(z-3)}}_{\text{zero-state component}} \tag{7A.43}$$

Therefore, expanding into partial fractions:

$$Y(z) = \underbrace{\frac{5z}{z-2} - \frac{2z}{z-3}}_{\text{zero-input component}} + \underbrace{\frac{26}{15}\,\frac{z}{z-0.5} - \frac{22}{3}\,\frac{z}{z-2} + \frac{28}{5}\,\frac{z}{z-3}}_{\text{zero-state component}} \tag{7A.44}$$

and:

$$y[n] = \underbrace{\left[5(2)^n - 2(3)^n\right]}_{\text{zero-input response}} + \underbrace{\left[\frac{26}{15}(0.5)^n - \frac{22}{3}(2)^n + \frac{28}{5}(3)^n\right]}_{\text{zero-state response}} = \frac{26}{15}(0.5)^n - \frac{7}{3}(2)^n + \frac{18}{5}(3)^n, \quad n \geq 0 \tag{7A.45}$$

As can be seen, the z-transform method gives the total response, which includes zero-input and zero-state components. The initial condition terms give rise to the zero-input response. The zero-state response terms are exclusively due to the input.

The System Transfer Function

Consider the simple first-order discrete-time system described by the difference equation:

$$y[n] + a y[n-1] = b x[n] \tag{7A.46}$$

Taking the z-transform of both sides and using the right-shift property gives:

$$Y(z) + a\left[z^{-1} Y(z) + y[-1]\right] = b X(z) \tag{7A.47}$$

Solving for $Y(z)$ gives:

$$Y(z) = \frac{-a y[-1]}{1 + a z^{-1}} + \frac{b}{1 + a z^{-1}}\, X(z) \tag{7A.48}$$

which can be written, using the corresponding z-transform:

$$Y(z) = \frac{-a y[-1]\, z}{z + a} + \frac{b z}{z + a}\, X(z) \tag{7A.49}$$

The first part of the response results from the initial conditions; the second part results from the input.
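The worked example can be verified by brute force: iterate the difference equation directly from the initial conditions and compare with the closed-form solution. A sketch (not part of the notes):

```python
# Iterate y[n] = 5*y[n-1] - 6*y[n-2] + 3*x[n-1] + 5*x[n-2]
# with y[-1] = 11/6, y[-2] = 37/36 and x[n] = (0.5)^n u[n],
# then compare with the closed-form z-transform solution.
def x(n):
    return 0.5**n if n >= 0 else 0.0    # causal input

y = {-2: 37/36, -1: 11/6}
for n in range(10):
    y[n] = 5*y[n-1] - 6*y[n-2] + 3*x(n-1) + 5*x(n-2)

def y_closed(n):
    return (26/15) * 0.5**n - (7/3) * 2**n + (18/5) * 3**n

max_err = max(abs(y[n] - y_closed(n)) for n in range(10))
```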

If the system has no initial energy (zero initial conditions) then:

$$Y(z) = \frac{bz}{z + a}\, X(z) \tag{7A.50}$$

We now define the transfer function for this system as:

$$H(z) = \frac{bz}{z + a} \tag{7A.51}$$

so that the discrete-time transfer function is defined by:

$$Y(z) = H(z)\, X(z) \tag{7A.52}$$

This is the transfer function representation of the system. To determine the output $y[n]$ we simply evaluate Eq. (7A.52) and take the inverse z-transform.

For a general Nth-order system described by the difference equation:

$$y[n] + \sum_{i=1}^{N} a_i\, y[n-i] = \sum_{i=0}^{M} b_i\, x[n-i] \tag{7A.53}$$

if the system has zero initial conditions, then taking the z-transform of both sides results in:

$$Y(z) = \frac{b_0 z^N + b_1 z^{N-1} + \cdots + b_M z^{N-M}}{z^N + a_1 z^{N-1} + \cdots + a_{N-1} z + a_N}\, X(z) \tag{7A.54}$$

so that the transfer function, derived directly from the difference equation, is:

$$H(z) = \frac{b_0 z^N + b_1 z^{N-1} + \cdots + b_M z^{N-M}}{z^N + a_1 z^{N-1} + \cdots + a_{N-1} z + a_N} \tag{7A.55}$$

We can show that the convolution relationship of a linear discrete-time system:

$$y[n] = h[n] * x[n] = \sum_{i=0}^{\infty} h[i]\, x[n-i], \quad n \geq 0 \tag{7A.56}$$

when transformed gives Eq. (7A.52). We therefore have:

$$h[n] \Leftrightarrow H(z) \tag{7A.57}$$

That is, the unit-pulse response and the transfer function form a z-transform pair.

Stability

Recall that functions of the Laplace variable s having poles with negative real parts decay to zero as $t \to \infty$. The left-half s-plane, where $\sigma < 0$, corresponds to $|z| = \left|e^{\sigma T_s}\right| < 1$, which is inside the unit circle. The right-half s-plane maps outside the unit circle. In a similar manner, transfer functions of z having poles with magnitudes less than one decay to zero in the time-domain as $n \to \infty$. Therefore, for a stable system, we must have:

$$|p_i| < 1 \tag{7A.58a}$$

where $p_i$ are the poles of $H(z)$. This is equivalent to saying:

A system is stable if all the poles of the transfer function lie inside the unit circle. (7A.58b)
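The stability condition is trivial to mechanise once the poles are known. A sketch (not from the notes; the pole values are illustrative only):

```python
# A discrete-time LTI system is stable when every pole of H(z)
# lies strictly inside the unit circle.
def is_stable(poles):
    return all(abs(p) < 1 for p in poles)

stable_example = is_stable([0.5, -0.9, 0.3 + 0.4j])   # all magnitudes < 1
unstable_example = is_stable([0.5, 1.2])              # one pole outside
```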

Transfer Function Interconnections

The transfer function of an LTI discrete-time system can be computed from a block diagram of the system, just like for continuous-time systems. Recall that an LTI discrete-time system is composed of elements such as adders, gains (which multiply the input by a constant), and the unit-delay element, which is shown below:

[Figure 7A.8 – A discrete-time unit-delay element: x[n] → D → y[n] = x[n−1]]

By taking the z-transform of the input and output, we can see that we should represent a delay in the z-domain by:

[Figure 7A.9 – Delay element in block diagram form in the z-domain: X(z) → z⁻¹ → Y(z) = z⁻¹X(z)]

Example

The discrete-time approximation to continuous-time integration is given by the difference equation:

$$y[n] = y[n-1] + T x[n-1] \tag{7A.59}$$
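As a quick numerical sanity check (a sketch, not part of the notes), the difference equation above behaves like a running integral; here it integrates $x(t) = t$ over $[0, 1]$, whose exact value is 0.5:

```python
# y[n] = y[n-1] + T*x[n-1] as a numeric integrator of x(t) = t.
T = 0.001           # step size (assumed sample period)
N = 1000            # number of steps covering [0, 1]

y = 0.0
for n in range(1, N + 1):
    y = y + T * ((n - 1) * T)   # x[n-1] = (n-1)*T, samples of x(t) = t

exact = 0.5
error = abs(y - exact)          # O(T) error, as expected of a rectangular rule
```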

The block diagram of the system is:

[Figure 7A.10 – Time-domain block diagram of a numeric integrator: x[n] → gain T → delay D → adder; the output y[n] is fed back through a delay D to give y[n−1].]

The system in the z-domain is:

[Figure 7A.11 – z-domain block diagram of a numeric integrator: X(z) → T → z⁻¹ → adder with z⁻¹Y(z) feedback → Y(z).]

Through the standard block diagram reduction techniques, we get the transfer function of a numeric integrator:

[Figure 7A.12 – X(z) → Tz⁻¹/(1 − z⁻¹) → Y(z)]

You should confirm that this transfer function, obtained using block diagram reduction methods, is the same as that found by taking the z-transform of Eq. (7A.59).

Summary

 The z-transform is the discrete-time counterpart of the Laplace transform. There is a mapping from the s-domain to the z-domain given by $z = e^{sT_s}$. The mapping is not unique, and for frequencies above the foldover frequency, aliasing occurs.

 Systems described by difference equations have rational z-transforms. There is a one-to-one correspondence between the coefficients in the difference equation and the coefficients in the transfer function.

 We evaluate inverse z-transforms using partial fractions, standard transforms and the z-transform properties.

 The transfer function of a system can be obtained by performing analysis in the z-domain. The z-transforms of the input signal and output signal are related by the transfer function of the system: $Y(z) = H(z)X(z)$.

 The unit-pulse response and the transfer function form a z-transform pair: $h[n] \Leftrightarrow H(z)$.

 We can use the z-transform to express discrete-time systems in transfer function (block diagram) form.

References

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB®, Prentice-Hall, 1997.

Signals and Systems 2014 . Hint: xn can be written as the sum of a unit step and two unit-pulses. 4. 3.  3. y 1  1 . Using z-transforms: (a) Find the unit-pulse response for the system given by: 1 yn  xn  yn  1 3 (b) Find the response of this system to the input 0. 1. 1. 2 … n 3. n  0. (i) Construct a difference equation from the following block diagram: X ( z) Y ( z) 3 z -1 z -1 -2 z -2 (ii) From your solution calculate yn for n = 0. n  2. 2 and 3 given y 2  2 .7A. 1 1.25 Exercises 1. Construct z-domain block diagrams for the following difference equations: (i) yn  yn  2  xn  xn  1 (ii) yn  2 yn  1  yn  2  3 xn  4 2. or as the subtraction of two unit-steps. xn  0 for n  0 and xn   1 for n = 0.  2. xn  2. n  1.

1. yn  1 2 for n  0. determine the first three terms of the weighting sequence of the overall system by applying a unit-pulse input and calculating the resultant response. yn  a n1 for n  1. 3. yn  n 2 a n1 for n  1. For the feedback configuration shown below. x [n ] h1[n] y [n ] h2[n] 5. 3.26 4. x [n ] h1[n] y [n ] h2[n] 6. (iv) yn  0 for n  0 . yn  na n1 for n  1. n Signals and Systems 2014 . 2. (ii) yn  0 for n  0 . 2.7A. Express your results as a function of the weighting sequence elements h1 n and h2 n . Determine the weighting sequence for the system shown below in terms of the individual weighting sequences h1 n and h2 n . Using the definition of the z-transform (a) Find Y  z  when: (i) yn  0 for n  0 . 2. 3. (iii) yn  0 for n  0 . 2.

(c) Use the linearity property of the z-transform and the z-transform of e anT (from tables) to find Z cosnT  . Poles and zeros are defined for the z-transform in exactly the same manner as for the Laplace transform. For each of the z-transforms given below. 4. Check the result using a table of z-transforms. xn   0. 2. all other n by noting that xn  x1 n  x2 n where x1 n is the unit-step sequence and x2 n is the unit-alternating sequence. Signals and Systems 2014 .27 (b) Determine the z-transform of the sequence 2. find the poles and zeros and plot the locations in the z-plane. Which of these systems are stable and unstable? 1  2 z 1 (a) H  z   3  4 z 1  z 2 (b) H  z   (c) H  z   1 2 1  3 4 z  1 8 z 4 5  2 z 2 1  6 z 1  3z  2 Note: for a discrete-time system to be stable all the poles must lie inside the unit-circle. 7.7A. n  0. Verify your result by directly determining the z-transform of xn .

28 8. Determine the inverse z-transforms of: z2 (a) X z   z  1z  a  (b) X  z   3  2 z 1  6 z 4 (c) X  z   1  e z z  1z  e  (d) X  z   z  z  1 z  1z 2  z  1 4 (e) X  z   4 z 2 z  1 (f) X z   z z  z 1  aT  aT 3 2 Signals and Systems 2014 . Use the partial fraction expansion method to find a general expression for xn in Question 9. Given: (a) yn  3 xn  xn  1  2 xn  4  yn  1  yn  2 (b) yn  4  yn  3  yn  2  3 xn  4  xn  3  2 xn Find the transfer function of the systems (i) by first finding the unit-pulse response (ii) by directly taking the z-transform of the difference equation 9. 11.7A. Confirm that the first four terms are the same as those obtained by direct division. Use the direct division method to find the first four terms of the data sequence xn . given: X z   14 z 2  14 z  3 z  1 4z  1 2z  1 10.

Use z-transform techniques to find a closed-form expression for the Fibonacci sequence: 0.29 12. 8. (b) Find H  z  (i) by directly taking the z-transform of the difference equation (ii) from your answer in (a) (c) From your solution in (b) find the unit-step response of the system. (d) Find the zero-state response if xn  n .7A. 2. 21. the initial conditions must be zero. Given yn  xn  yn  1  yn  2 . 13. 1. find the unit-step response of this system using the transfer function method. 3. 1. 5. 34… Hint: To use transfer function techniques. 13. 14. Construct an input so that the ZSR of the system described by the difference equation gives the above response. Signals and Systems 2014 . Check your solution using a convolution method on the original difference equation. Given: 8 yn  6 yn  1  yn  2  xn (a) Find the unit-pulse response hn using time-domain methods. assuming zero initial conditions.

find F  z  . 3. Are they familiar values? 15.7A. Given F s   1 . Signals and Systems 2014 . Take the ztransform of f nT  . First find f t  . Then put t  nT to obtain f nT  . s  a 2 Hint: 1. 2.30 The ancient Greeks considered a rectangle to be perfectly proportioned (saying that the lengths of its sides were in a golden ratio to each other) if the ratio of the length to the width of the outer rectangle equalled the ratio of the length to the width of the inner rectangle: 1  -1 1  That is:  1  1  1 Find the two values of  that satisfy the golden ratio.

find f 0  and f   of the following functions: (a) F  z   1 z  0. Given: X 2 z   X 1 z  z  6  X 3 z   D z  z 1 z 1 X 3  z   X 1  z   kX 2 z  X 4 z   10 3 X 2 z   X 3 z  z2 z A (a) Draw a block diagram. Are there any differences between the operations which apply to discrete-time and continuous-time transfer functions? 17.3 z2 (c) F  z   z  14 z  a  (b) F z   1  5 z 3  2 z 2 14 z 2  14 z  3 (d) F  z   z  1 4z  1 2z  1 Signals and Systems 2014 . Use block diagram reduction to find X 3  z  . Using the initial and final value theorems.7A.31 16.

Determine the inverse z-transform of: F z   z  z  1 z  1 z 2  z  1 4   HINT: This has a repeated root. Use techniques analogous to those for the Laplace transform when multiple roots are present. and (b) using any other convolution technique.32 18. Compare answers. 19. Perform the convolution yn yn when (i) yn  un (ii) yn   n   n  1   n  2 using (a) the property that convolution in the time-domain is equivalent to multiplication in the frequency domain. Signals and Systems 2014 .7A.

Lecture 7B – Discretization

Signal discretization. Signal reconstruction. System discretization. Frequency response. Response matching.

Overview

Digital signal processing is now the preferred method of signal processing, and this will no doubt continue into the future. Communication schemes and controllers implemented digitally have inherent advantages over their analog counterparts: reduced cost, repeatability, stability with aging, flexibility (one H/W design, different S/W), in-system programming, adaptability (ability to track changes in the environment). However, much of how we think, analyse and design systems still uses analog concepts, and ultimately most embedded systems eventually interface to a continuous-time world. It's therefore important that we now take all that we know about continuous-time systems and "transfer" or "map" it into the discrete-time domain.

Signal Discretization

We have already seen how to discretize a signal. An ideal sampler produces a weighted train of impulses:

[Figure 7B.1 – Sampling a continuous-time signal produces a discrete-time signal (a train of impulses): g(t) multiplied by the impulse train p(t), with impulses at t = …, −2Tₛ, −Tₛ, 0, Tₛ, 2Tₛ, …, gives gₛ(t) = g(t)p(t).]

Of course an ideal sampler does not exist, but a real system can come close to the ideal. This was how we approached the topic of z-transforms.

We saw in the lab that it didn't matter if we used a rectangular pulse train instead of an impulse train – the only effect was that repeats of the spectrum were weighted by a sinc function. This didn't matter since reconstruction of the original continuous-time signal was accomplished by lowpass filtering the baseband spectrum, which was not affected by the sinc function. To ensure the Nyquist criterion is met, we normally place an anti-alias filter before the sampler.

In a computer, values can only be stored as discrete values, not only at discrete times. Thus, in a digital system, the output of a sampler is quantized so that we have a digital representation of the signal. The effects of quantization will be ignored for now – be aware that they exist, and are a source of errors for digital signal processors.

Signal Reconstruction

The reconstruction of a signal from ideal samples was accomplished by an ideal filter:

[Figure 7B.2 – Lowpass filtering a discrete-time signal (train of impulses) produces a continuous-time signal: gₛ(t) → lowpass filter → g(t).]

We then showed that so long as the Nyquist criterion was met, we could reconstruct the signal perfectly from its samples (if the lowpass filter is ideal).

Hold Operation

The output from a digital signal processor is obviously digital – we need a way to convert a discrete-time signal back into a continuous-time signal. One way is by using a lowpass filter on the sampled signal, as above. But digital systems have to first convert their digital data into analog data, and this is accomplished with a DAC.

The mathematical model we use for the output of a DAC is a zero-order hold device:

[Figure 7B.3 – A DAC is modelled as a zero-order hold device: gₛ(t) → zero-order hold → g̃(t).]

The output of the zero-order hold device is:

$$\tilde{g}(t) = g(nT_s), \quad nT_s \leq t < nT_s + T_s \tag{7B.1}$$

where $T_s$ is the sample period. To model a DAC, we note that there is always some output (it never turns off), and that the values it produces are quantized:

[Figure 7B.4 – The output from a DAC looks like a train of impulses convolved with a rectangle: a staircase waveform g̃(t) holding each sample value for one period.]

The operation of the zero-order hold device in terms of frequency response shows us that it acts like a lowpass filter (but not an ideal one):

$$\tilde{G}(f) = T_s\, \mathrm{sinc}(fT_s)\, e^{-j\pi f T_s}\, G_s(f) \tag{7B.2}$$

Show that the above is true by taking the Fourier transform of $\tilde{g}(t)$.

System Discretization

Suppose we wish to discretize a continuous-time LTI system. We would expect the input/output values of the discrete-time system to be:

$$x[n] = x(nT_s), \quad y[n] = y(nT_s) \tag{7B.3}$$

[Figure 7B.5 – Discretizing a system should produce the same (sampled) signals: x(t) → h(t) → y(t) and x[n] = x(nTₛ) → h[n] → y[n] = y(nTₛ); via the Laplace transform, X(s) → H(s) → Y(s); via the z-transform, X(z) → H_d(z) → Y(z).]

We now have to determine the discrete-time transfer function $H_d(z)$ so that the relationship Eq. (7B.3) holds true. One way is to match the inputs and outputs in the frequency-domain: we want an equivalent relationship by taking the Laplace transform of the continuous-time system and the z-transform of the discrete-time system. You would expect that since $z = e^{sT_s}$, then we can simply form an exact match of the two systems:

$$H_d(z) = H(s)\big|_{s = \frac{1}{T_s}\ln z} \tag{7B.4}$$

Unfortunately, this leads to a z-domain expression which is not a finite rational polynomial in z, so we can't implement the exact match. We want polynomials of z because the discrete-time system will be implemented as a difference equation in a computer. Recall that each $z^{-1}$ in a transfer function represents a shift to the right in the signal – a task easily handled by indexing into an array of stored values (there are special

signal processing techniques to implement fractional delays, but for now we will stay with the easier concept of integer delays). Therefore, we seek a rational polynomial approximation to the ln z function.

We start with Taylor's theorem:

$$f(x) = f(a) + (x-a)f'(a) + \frac{(x-a)^2}{2!}f''(a) + \cdots \tag{7B.5}$$

Now set $f(x) = \ln x$, $a = 1$ and let $x = 1 + y$:

$$\ln(1+y) = y - \frac{y^2}{2} + \frac{y^3}{3} - \frac{y^4}{4} + \cdots \tag{7B.6}$$

From inspection, we can also have:

$$\ln(1-y) = -y - \frac{y^2}{2} - \frac{y^3}{3} - \frac{y^4}{4} - \cdots \tag{7B.7}$$

Subtracting Eq. (7B.7) from Eq. (7B.6), and dividing by 2, gives us:

$$\frac{1}{2}\ln\left(\frac{1+y}{1-y}\right) = y + \frac{y^3}{3} + \frac{y^5}{5} + \frac{y^7}{7} + \cdots \tag{7B.8}$$

Now let:

$$\frac{1+y}{1-y} = z \tag{7B.9}$$

or, rearranging:

$$y = \frac{z-1}{z+1} \tag{7B.10}$$

Substituting Eqs. (7B.9) and (7B.10) into Eq. (7B.8) yields:

$$\ln z = 2\left[\left(\frac{z-1}{z+1}\right) + \frac{1}{3}\left(\frac{z-1}{z+1}\right)^3 + \frac{1}{5}\left(\frac{z-1}{z+1}\right)^5 + \cdots\right] \tag{7B.11}$$

Now if $z \approx 1$, then we can truncate higher than first-order terms to get the approximation:

$$\ln z \approx 2\left(\frac{z-1}{z+1}\right) \tag{7B.12}$$

We can now use this as an approximate value for s – an approximate mapping between s and z:

$$s = \frac{1}{T_s}\ln z \approx \frac{2}{T_s}\left(\frac{z-1}{z+1}\right) \tag{7B.13}$$

This is called the bilinear transformation (since it has 2 (bi) linear terms). So, from Eq. (7B.4), discretizing a system using the bilinear transformation gives:

$$H_d(z) = H\!\left(\frac{2}{T_s}\,\frac{z-1}{z+1}\right) \tag{7B.14}$$

This transformation has several desirable properties:

 The open LHP in the s-domain maps onto the open unit disk in the z-domain (thus the bilinear transformation preserves the stability condition).

 The $j\omega$-axis maps onto the unit circle in the z-domain. This will be used later when we look at frequency response of discrete-time systems.
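The bilinear transformation is easy to apply by hand to a first-order system. The sketch below (not from the notes; the plant $H(s) = 1/(s+1)$ and the sample period are arbitrary choices) applies $s \to (2/T_s)(z-1)/(z+1)$, which yields the difference equation $y[n] = \big[(2-T_s)y[n-1] + T_s(x[n] + x[n-1])\big]/(2+T_s)$, and compares its step response with the analog response $1 - e^{-t}$:

```python
# Bilinear discretization of H(s) = 1/(s+1), then run its step response
# and compare against the continuous-time step response 1 - exp(-t).
import math

T = 0.01            # assumed sample period
y_prev = 0.0
x_prev = 0.0        # input is zero before n = 0
worst = 0.0
for n in range(500):
    x = 1.0                                         # unit step input
    y = ((2 - T) * y_prev + T * (x + x_prev)) / (2 + T)
    worst = max(worst, abs(y - (1 - math.exp(-n * T))))
    y_prev, x_prev = y, x
```

The discrepancy stays on the order of $T_s/2$, consistent with the bilinear transform being only an approximation of $s = (1/T_s)\ln z$.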

Frequency Response

Since $z = e^{sT_s}$, then letting $s = j\omega$ gives $z = e^{j\omega T_s}$. If we define the relationship between discrete-time frequency and continuous-time frequency:

$$\Omega = \omega T_s \tag{7B.15}$$

then the frequency response in the z-domain is found by setting:

$$z = e^{j\Omega} \tag{7B.16}$$

This value of z is found on the unit circle and has an angle of $\Omega = \omega T_s$:

[Figure 7B.6 – Frequency response mapping: the point $z = e^{j\Omega}$ on the unit circle at angle $\Omega$.]

But this point is also given by the angle $\Omega + 2\pi n$, so the mapping from the z-plane to the s-domain is not unique. In fact any point on the unit circle maps to the s-domain frequency:

$$\omega = \frac{\Omega}{T_s} + \frac{2\pi}{T_s} n = \frac{\Omega}{T_s} + n\omega_s \tag{7B.17}$$

where $\omega_s$ is the sampling frequency in radians. The frequency response mapping is not unique – aliasing.

This shows us two things:

 The mapping from the z-domain to the s-domain is not unique. Conversely, the mapping from the s-domain to the z-domain is not unique. This means that frequencies spaced at $\omega_s$ map to the same frequency in the z-domain. We already know this! It's called aliasing.

 The frequency response of a discrete-time system is periodic with period $2\pi$, which means it can be completely characterised by restricting $\Omega$ so that $-\pi < \Omega \leq \pi$.

Just like the continuous-time case, where we set $s = j\omega$ in $H(s)$ to give the frequency response $H(\omega)$, we can set $z = e^{j\Omega}$ in $H_d(z)$ to give the frequency response $H_d(\Omega)$. Doing this with Eq. (7B.14) yields the approximation:

$$H_d(\Omega) = H\!\left(\frac{2}{T_s}\,\frac{e^{j\Omega} - 1}{e^{j\Omega} + 1}\right) \tag{7B.18}$$

The frequency that corresponds to $\Omega$ in the s-domain is approximately given by Eq. (7B.13):

$$j\omega \approx \frac{2}{T_s}\,\frac{e^{j\Omega} - 1}{e^{j\Omega} + 1} \tag{7B.19}$$

Using Euler's identity, show that this can be manipulated into the form (mapping continuous-time frequency to discrete-time frequency using the bilinear transformation):

$$\omega = \frac{2}{T_s}\tan\frac{\Omega}{2} \tag{7B.20}$$

The inverse relationship is:

$$\Omega = 2\tan^{-1}\frac{\omega T_s}{2} \tag{7B.21}$$

So now Eq. (7B.18), the approximate frequency response of an equivalent discrete-time system, is given by:

$$H_d(\Omega) = H\!\left(\frac{2}{T_s}\tan\frac{\Omega}{2}\right) \tag{7B.22}$$

The distortion caused by the approximation Eq. (7B.21) is called frequency warping, since the relationship is non-linear – it is a small deviation from the real mapping $\Omega = \omega T_s$ given by Eq. (7B.15). If there is some critical frequency (like the cutoff frequency of a filter) that must be preserved in the transformation, then we can pre-warp the continuous-time frequency response before applying Eq. (7B.21). Note that we may be able to select the sample period $T_s$ so that all our "critical" frequencies will be mapped by:

$$\Omega_c = 2\tan^{-1}\frac{\omega_c T_s}{2} \approx 2\,\frac{\omega_c T_s}{2} = \omega_c T_s \tag{7B.23}$$
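Frequency warping and pre-warping are easy to demonstrate numerically. A sketch (not from the notes; the sample period and the 100 Hz "critical" frequency are illustrative assumptions):

```python
# Bilinear-transform frequency warping: a digital frequency W (rad/sample)
# corresponds to the analog frequency w = (2/Ts)*tan(W/2), which is close
# to the ideal mapping W/Ts only when W is small.
import math

Ts = 0.001
W_small = 0.01                         # a small digital frequency
w_small = (2 / Ts) * math.tan(W_small / 2)

# Pre-warping: a critical analog frequency wc lands at the digital
# frequency W_c = 2*atan(wc*Ts/2); mapping W_c back through the warping
# relation recovers wc exactly, which is the point of pre-warping.
wc = 2 * math.pi * 100                 # e.g. a 100 Hz cutoff (illustrative)
W_c = 2 * math.atan(wc * Ts / 2)
w_back = (2 / Ts) * math.tan(W_c / 2)
```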

Response Matching

In control systems, it is usual to consider a mapping from continuous-time to discrete-time in terms of the time-domain response instead of the frequency response. For set-point control, this mapping is best performed as step-invariance synthesis, although other mappings can be made (like impulse invariance). We saw before that we want:

$$x[n] = x(nT_s), \quad y[n] = y(nT_s) \tag{7B.24}$$

where the output of the discrete-time system is obtained by sampling the step response of the continuous-time system. Since for a unit-step input we have:

$$X(z) = \frac{z}{z-1} \tag{7B.25}$$

then, for step-response matching, we want the discrete-time transfer function:

$$H_d(z) = \frac{Y(z)(z-1)}{z} \tag{7B.26}$$

where $Y(z)$ is the z-transform of the ideally sampled step response of the continuous-time system.

Example

Suppose we design a maze rover velocity controller in the continuous-time domain and we are now considering its implementation in the rover's microcontroller. We might come up with a continuous-time controller transfer function such as:

$$G_c(s) = 500 + \frac{5}{s} = \frac{500s + 5}{s} \tag{7B.27}$$

Our closed-loop system is therefore:

[Figure 7B.7 – Closed loop: R(s) → Σ → E(s) → controller (500s + 5)/s → X(s) → maze rover 0.001/(s + 0.01) → V(s), with unity feedback.]

Show that the block diagram can be reduced to the transfer function:

[Figure 7B.8 – R(s) → 0.5/(s + 0.5) → V(s)]

The transform of the step response is then:

$$Y(s) = \frac{0.5}{s(s + 0.5)} = \frac{1}{s} - \frac{1}{s + 0.5} \tag{7B.28}$$

and taking the inverse Laplace transform gives the continuous-time step response:

$$y(t) = \left(1 - e^{-0.5t}\right) u(t) \tag{7B.29}$$

The discretized version of the step response is the desired discrete-time step response:

$$y[n] = \left(1 - e^{-0.5 n T_s}\right) u[n] \tag{7B.30}$$

and taking the z-transform gives:

$$Y(z) = \frac{z}{z-1} - \frac{z}{z - e^{-0.5T_s}} \tag{7B.31}$$

Hence, using Eq. (7B.26) yields the equivalent discrete-time transfer function using step response matching:

$$H_d(z) = \frac{z-1}{z}\left[\frac{z}{z-1} - \frac{z}{z - e^{-0.5T_s}}\right] = \frac{\left(1 - e^{-0.5T_s}\right) z^{-1}}{1 - e^{-0.5T_s} z^{-1}} = \frac{b z^{-1}}{1 - a z^{-1}} \tag{7B.32}$$

We would therefore implement the equivalent discrete-time difference equation:

$$y[n] = a\, y[n-1] + b\, x[n-1] \tag{7B.33}$$

where $a = e^{-0.5T_s}$ and $b = 1 - e^{-0.5T_s}$. You should confirm that this difference equation gives an equivalent step response at the sample instants using MATLAB® for various values of $T_s$.
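The same confirmation can be sketched in a few lines of Python (a check of the idea, not the MATLAB® exercise itself): with $a = e^{-0.5T_s}$ and $b = 1 - a$, the recursion reproduces the sampled step response exactly, not just approximately.

```python
# Step-invariance check: y[n] = a*y[n-1] + b*x[n-1] with a unit-step input
# should equal 1 - exp(-0.5*n*Ts) at every sample instant.
import math

Ts = 0.1
a = math.exp(-0.5 * Ts)
b = 1 - a

y = 0.0
worst = 0.0
for n in range(1, 100):
    y = a * y + b * 1.0                         # x[n-1] = 1 for n >= 1
    worst = max(worst, abs(y - (1 - math.exp(-0.5 * n * Ts))))
```

The error is at floating-point level for every n, which is exactly the property that distinguishes response matching from the bilinear transform (an approximation).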

Summary

 We can discretize a continuous-time signal by sampling. If we meet the Nyquist criterion for sampling, then all the information will be contained in the resulting discrete-time signal. We can then process the signal digitally.

 We can reconstruct a continuous-time signal from mere numbers by using a DAC. We model this as passing our discrete-time signal (a weighted impulse train) through a lowpass filter with a rectangular impulse response.

 A system (or signal) may be discretized using the bilinear transform. This maps the LHP in the s-domain into the open unit disk in the z-domain, and introduces warping when examining the frequency response. It is an approximation only.

 Response matching derives an equivalent discrete-time transfer function so that the signals in the discrete-time system exactly match samples of the continuous-time system's input and output signals.

References

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB®, Prentice-Hall, 1997.

Exercises

1. A continuous-time system has a transfer function

$$G(s) = \frac{s + 2}{s^2 + 3s + 2}$$

and it is required to find the transfer function of an equivalent discrete-time system $H(z)$ whose unit-step response consists of samples of the continuous-time system's unit-step response. Find $H(z)$ assuming a sample time of 1 s.

2. Repeat Exercise 1, but this time use the bilinear transform.

3. Compare the step responses of each answer in Exercises 1 and 2 using MATLAB®.

4. A Maze Rover phase-lead compensator has the transfer function:

$$H(s) = \frac{20(s + 1)}{s + 4}$$

Determine a difference equation that approximates this continuous-time system using the method of step-response matching. The sample period is 64 ms. Compare the time solutions at t = 0, 10, 100 s to verify your answer.

Lecture 8A – System Design

Design criteria for continuous-time systems. Design criteria for discrete-time systems.

Overview

The design of a control system centres around two important specifications – the steady-state performance and the transient performance. Given a reference input, we want the output to be exactly, or nearly, equal to the input. The steady-state performance is relatively simple: the way to achieve the desired steady-state error, for any type of input, is by employing a closed-loop configuration that gives the desired system type.

The more complex specification of transient behaviour is tackled by an examination of the system's poles. The transient specification for a control system will usually consist of times taken to reach certain values, and allowable deviations from the final steady-state value. For example, we might specify percent overshoot, peak time, and 5% settling time for a control system. We will examine transient performance as specified for an all-pole second-order system (the transient performance criteria still exist for higher-order systems, but the formulae shown here only apply to all-pole second-order systems). Our task is to find suitable pole locations for the system.

Design Criteria for Continuous-Time Systems

Percent Overshoot

The percent overshoot for a second-order all-pole step response is given by:

P.O. = e^(-πζ/√(1-ζ²))    (8A.1)

Therefore, for a given maximum overshoot specification, we can evaluate the minimum allowable ζ. But what specific ζ do we choose? We don't know until we take into consideration other specifications! What we do is simply define a region in the s-plane where we can meet this specification. Since θ = cos⁻¹ ζ, where θ is the angle with respect to the negative real axis, we can shade the region of the s-plane where the P.O. specification is satisfied:

[Figure 8A.1 – PO specification region in the s-plane: the shaded wedge at angle θ = cos⁻¹ ζ about the negative real axis is where the PO spec is satisfied]
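The relationship can be used directly: given a maximum P.O., the minimum allowable ζ follows by solving (8A.1) for ζ. The notes use MATLAB®; the following Python sketch (function name is illustrative) performs the same calculation:

```python
import math

def min_zeta_for_po(po):
    # Invert P.O. = exp(-pi*zeta / sqrt(1 - zeta^2)) for zeta.
    # With k = -ln(P.O.), we get zeta = k / sqrt(pi^2 + k^2).
    k = -math.log(po)
    return k / math.sqrt(math.pi**2 + k**2)

zeta = min_zeta_for_po(0.10)            # 10% overshoot -> zeta of about 0.591
theta = math.degrees(math.acos(zeta))   # wedge half-angle of about 54 degrees
```

Any pole pair lying inside the wedge of half-angle θ about the negative real axis then meets the overshoot spec.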

Peak Time

The peak time for a second-order all-pole step response is given by:

t_p = π/ω_d    (8A.2)

Since a specification will specify a maximum peak time, we can find the minimum ω_d = π/t_p required to meet the specification. Again, we define the region in the s-plane where this specification is satisfied:

[Figure 8A.2 – Peak time specification region in the s-plane: the shaded region beyond jω_d is where the t_p spec is satisfied]

Settling Time

The settling time for a second-order all-pole step response is given by:

t_s = -ln δ / σ    (8A.3)

Since a specification will specify a maximum settling time, we can find the minimum σ = -ln δ / t_s required to meet the specification. We define the region in the s-plane where this specification is satisfied:

[Figure 8A.3 – Settling time specification region in the s-plane: the shaded region to the left of -σ is where the t_s spec is satisfied]

Combined Specifications

Since we have now specified simple regions in the s-plane that satisfy each specification, all we have to do is combine all the regions to meet every specification. Sometimes a specification is automatically met by meeting the other specifications – this is clear once the regions are drawn on the s-plane:

[Figure 8A.4 – Combined specifications region in the s-plane: the shaded region is where all specs are satisfied]

It is now up to us to choose the desired closed-loop pole locations. There is nothing theoretically wrong with choosing closed-loop pole locations out near infinity – we'd achieve a very nice, sharp, almost step-like response! In practice, we can't do this, because the inputs to our system would exceed the allowable linear range. We choose s-plane pole locations close to the origin so as not to exceed the linear range of the system. The region we have drawn on the s-plane is an output specification – we must also give consideration to the inputs of the system!

Our analysis has considered the system to be linear – we know it is not in practice! Op amps saturate, motors cannot have megavolts applied to their windings without breaking down, and we can't put our foot on the accelerator past the floor of the car (no matter how much we try)!

Can we apply these specifications to real (higher-order) systems? Yes – if it's a dominant second-order all-pole system:

[Figure 8A.5 – Dominant second-order all-pole system: the dominant pole pair lies much closer to the origin than the remaining poles]

Practical considerations therefore mean we should choose pole locations that are close to the origin – this will mean we meet the specifications, and hopefully we won't be exceeding the linear bounds of our system. We normally indicate the desired pole locations by placing a square around them:

[Figure 8A.6 – Desired closed-loop pole locations are represented with a square around them: a chosen pole location that meets the specs and keeps the system linear]

If a system is not dominantly second-order, we can sometimes make it so. Either we introduce a minor-loop to move a dominant first-order pole away:

[Figure 8A.7 – Dominant second-order all-pole system made by moving a pole using a minor-feedback loop]

or we achieve a pole-zero cancellation by placing a zero very close to the dominant pole:

[Figure 8A.8 – Dominant second-order all-pole system made by pole-zero cancellation]

We should be careful when implementing pole-zero cancellation. Normally we do not know exactly the pole locations of a system, so the "cancellation" is only approximate. In some cases this inexact cancellation can cause the exact opposite of the desired effect – the system will have a dominant first-order pole and the response will be "sluggish".

we have: d   1.591 2 so that:   cos 1   53 For the peak time spec.8A.43 nepers 9 9 Signals and Systems 2014 .1  e 1   0.57  2 rads-1 For the settling time spec.57 s (c) 2% settling time t s  9 s We evaluate the regions in the order they are given. we get:   ln 0.8 Example We need to design a maze rover position controller to achieve the following specifications: (a) PO < 10% (b) t p  1.02 39   0. we have:  0. For the PO spec.

The regions can now be drawn:

[Figure: combined specification regions in the s-plane, bounded by the θ = 53° rays, the ±j2 lines, and the σ = 0.43 vertical line]

We choose the desired closed-loop poles to be at s = -1.6 ± j2.1 to meet all our specifications.

Design Criteria for Discrete-Time Systems

A second-order continuous-time system can be "translated" into a second-order discrete-time system using either the bilinear (Tustin) transform, zero-order holds, or a variety of other methods. Discretizing a continuous-time system inevitably involves a sample time – this sample time affects the pole locations of the discrete-time system. We would therefore like to see where our poles should be to satisfy the initial design specifications, i.e. the specification regions in the z-plane. In transforming the s-plane performance specifications to the z-plane, it is first convenient to see how a single point of the s-plane maps into the z-plane.
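As a numerical check on the example, we can confirm that the chosen pole at s = -1.6 + j2.1 lies inside all three regions (ζ ≥ 0.591, ω_d ≥ 2, σ ≥ 0.43). A small Python sketch of that check:

```python
import math

sigma, wd = 1.6, 2.1                  # chosen pole: s = -1.6 + j2.1
zeta = sigma / math.hypot(sigma, wd)  # damping ratio of this pole pair

po_ok = zeta >= 0.591    # inside the percent-overshoot wedge
tp_ok = wd >= 2.0        # above the peak-time line
ts_ok = sigma >= 0.43    # left of the settling-time line
```

All three flags come out true, so the pole location satisfies every specification with a little margin (ζ ≈ 0.61 against the required 0.591).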

Mapping of a Point from the s-plane to the z-plane

By definition, z = e^(sTs), and s = σ + jω. Therefore:

z = e^(σTs) e^(jωTs) = e^(σTs) ∠ ωTs    (8A.4)

[Figure 8A.9 – Mapping a point from the s-plane to the z-plane]

We can see that if:

σ < 0  →  |z| < 1
σ = 0  →  |z| = 1
σ > 0  →  |z| > 1    (8A.5)

That is, the left half of the s-plane maps to the inside of the unit circle. We will now translate the performance specification criteria areas from the s-plane to the z-plane.
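The mapping z = e^(sTs) is easy to experiment with numerically. A Python sketch confirming (8A.5) (the sample period value is arbitrary, for illustration only):

```python
import cmath

Ts = 0.1  # arbitrary sample period for illustration

def s_to_z(s):
    """Map an s-plane point to the z-plane: z = exp(s*Ts)."""
    return cmath.exp(s * Ts)

z_lhp = s_to_z(complex(-2, 5))   # sigma < 0: lands inside the unit circle
z_jw  = s_to_z(complex( 0, 5))   # sigma = 0: lands on the unit circle
z_rhp = s_to_z(complex( 2, 5))   # sigma > 0: lands outside the unit circle
```

The angle of each mapped point is ωTs = 0.5 rad, as (8A.4) states, while only the magnitude depends on σ.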

Percent Overshoot

In the s-plane, the PO specification is given by the constant-ζ line s = σ + jσ tan(cos⁻¹ ζ). This line in the z-plane must be:

z = e^(sTs) = e^(σTs) e^(jσTs tan(cos⁻¹ ζ)) = e^(σTs) ∠ σTs tan(cos⁻¹ ζ)    (8A.6)

For a given ζ, this locus is a logarithmic spiral:

[Figure 8A.10 – Constant-ζ lines in the s-plane map to logarithmic spirals in the z-plane (shown for ζ = 0.5; ζ = 0 gives the unit circle, ζ = 1 the positive real axis)]

The region in the z-plane that corresponds to the region in the s-plane where the PO specification is satisfied is shown below:

[Figure 8A.11 – PO specification region in the z-plane]

Peak Time

In the s-plane, the peak time specification is given by s = σ + jω_d. A line of constant frequency in the z-plane must be:

z = e^(sTs) = e^(σTs) e^(jω_d Ts) = e^(σTs) ∠ ω_d Ts    (8A.7)

For a given ω_d, this locus is a straight line between the origin and the unit circle, at an angle θ = ω_d Ts (remember we only consider the LHP of the s-plane, so that -∞ < σ ≤ 0):

[Figure 8A.12 – Lines of constant frequency in the s-plane map to radial lines in the z-plane]

The region in the z-plane that corresponds to the region in the s-plane where the peak time specification is satisfied is shown below:

[Figure 8A.13 – Peak time specification region in the z-plane]

Settling Time

In the s-plane, the settling time specification is given by s = -σ + jω, with σ fixed. The corresponding locus in the z-plane must be:

z = e^(sTs) = e^(-σTs) e^(jωTs) = e^(-σTs) ∠ ωTs    (8A.8)

For a given σ, this locus is a circle centred at the origin with a radius e^(-σTs):

[Figure 8A.14 – Lines of constant σ in the s-plane map to circles in the z-plane]

The region in the z-plane that corresponds to the region in the s-plane where the settling time specification is satisfied is shown below:

[Figure 8A.15 – Settling time specification region in the z-plane: inside the circle of radius e^(-σTs)]

Combined Specifications

A typical specification will mean we have to combine the PO, peak time and settling time regions in the z-plane. For the s-plane we chose pole locations close to the origin, where σ is a small negative number. Since |z| = e^(σTs), keeping σ close to zero means we need to maximize |z|. We therefore choose pole locations in the z-plane which satisfy all the criteria, and we choose them as far away from the origin as possible – close to the unit circle, so as not to exceed the linear range of the system:

[Figure 8A.16 – Combined specifications region in the z-plane, with the desired pole locations marked]

Sometimes we perform a design in the s-plane, then discretize it using a method such as the bilinear transform or step-response matching. We should always check whether the resulting discrete-time system will meet the specifications – it may not, due to the imperfections of the discretization!

Summary

 The percent overshoot, peak time and settling time specifications for an all-pole second-order system can easily be found in the s-plane. Desired pole locations are usually chosen close to the origin so that the system remains in its linear region.

 The percent overshoot, peak time and settling time specifications for an all-pole second-order system can be found in the z-plane. The desired pole locations are chosen as close to the unit circle as possible so the system stays within its linear bounds.

 We can apply the all-pole second-order specification regions to systems that are dominant second-order systems.

 Desired pole locations can then be chosen that satisfy all the specifications.

References

Kuo, B.: Automatic Control Systems, Prentice-Hall, 1986.

Nicol, J.: Circuits and Systems 2 Notes, NSWIT, 1995.


Lecture 8B – Root Locus

Root locus. Root locus rules. MATLAB®'s RLTool. Root loci of discrete-time systems. Time-response of discrete-time systems.

Overview

The roots of a system's characteristic equation (the poles) determine the mathematical form of the system's transient response to any input. They are very important for system designers in two respects. If we can control the pole locations then we can position them to meet time-domain specifications such as P.O., settling time, etc. The pole locations also determine whether the system is stable – we must have all poles in the open LHP for stability.

A "root locus" is a graphical way of showing how the roots of a characteristic equation in the complex (s or z) plane vary as some parameter is varied. It is an extremely valuable aid in the analysis and design of control systems, and was developed by W. R. Evans in his 1948 paper "Graphical Analysis of Control Systems," Trans. AIEE, vol. 67, pt. II, pp. 547-551, 1948.

Root Locus

As an example of the root locus technique, we will consider a simple unity-feedback control system:

[Figure 8B.1 – a unity-feedback system: controller Gc(s) in cascade with plant Gp(s)]

We know that the closed-loop transfer function is just:

C(s)/R(s) = Gc(s)Gp(s) / (1 + Gc(s)Gp(s))    (8B.1)

This transfer function has the characteristic equation:

1 + Gc(s)Gp(s) = 0    (8B.2)

Now suppose that we can "separate out" the parameter of interest, K, in the characteristic equation – it may be the gain of an amplifier, or part of the controller, or part of the plant, or a sampling rate, or some other parameter that we have control over. Then the characteristic equation can be written:

1 + KP(s) = 0    (8B.3)

where P(s) does not depend on K. The graph of the roots of this equation, as the parameter K is varied, gives the root locus. In general:

KP(s) = K (s - z₁)(s - z₂)…(s - z_m) / [(s - p₁)(s - p₂)…(s - p_n)]    (8B.4)

where the z_i are the m open-loop zeros and the p_i are the n open-loop poles of the system. Rearranging Eq. (8B.3) gives us:

KP(s) = -1    (8B.5)

Taking magnitudes of both sides leads to the magnitude criterion for a root locus:

|P(s)| = 1/|K|    (8B.6a)

Similarly, taking angles of both sides of Eq. (8B.5) gives the angle criterion for a root locus:

∠K + ∠P(s) = 180° ± r·360°,  r = 0, 1, 2, …    (8B.6b)

To construct a root locus we can just apply the angle criterion. Also, to find a particular point on the root locus, we need to know the magnitude of K.
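Both criteria are easy to verify numerically at a known closed-loop pole. As an illustrative sketch (the plant P(s) = 1/(s(s+1)) here is hypothetical, not one of the systems in these notes): for K = 1.25 the characteristic equation is s² + s + 1.25 = 0, with a root at s = -0.5 + j1, so P evaluated there should have angle ±180° and magnitude 1/K:

```python
import cmath, math

def P(s):
    # Hypothetical open-loop transfer function 1/(s(s+1))
    return 1.0 / (s * (s + 1))

s0 = complex(-0.5, 1.0)                        # root of s^2 + s + 1.25 = 0
angle_deg = math.degrees(cmath.phase(P(s0)))   # angle criterion: +/-180 deg
K = 1.0 / abs(P(s0))                           # magnitude criterion: K = 1.25
```

The angle criterion tells us the point is on the locus; the magnitude criterion then recovers the gain that places a closed-loop pole there.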

Example

A simple maze rover positioning scheme is shown below:

[Figure – unity-feedback loop: controller Kp, maze rover plant 0.5/(s(s+2))]

We are trying to see the effect that the controller parameter Kp has on the closed-loop system. First of all, we can make the following assignments:

Gp(s) = 0.5/(s(s+2))  and  Gc(s) = Kp

Putting into the form of Eq. (8B.3), we then have:

K = Kp  and  P(s) = 0.5/(s(s+2))

The characteristic equation of the system is:

1 + KP(s) = 1 + Kp · 0.5/(s(s+2)) = 0

or just:

s² + 2s + 0.5Kp = 0

For such a simple system, it is easier to derive the root locus algebraically rather than use the angle criterion. The roots are given by:

s₁,₂ = -1 ± √(1 - 0.5Kp)

We will now evaluate the roots for various values of the parameter Kp:

Kp = 0:  s₁,₂ = 0, -2
Kp = 1:  s₁,₂ = -1 ± 1/√2
Kp = 2:  s₁,₂ = -1, -1
Kp = 3:  s₁,₂ = -1 ± j/√2
Kp = 4:  s₁,₂ = -1 ± j

The root locus is thus:

[Figure – root locus for a simple two-pole system: branches start at 0 and -2 (Kp = 0), meet at -1 (Kp = 2), then depart vertically, reaching -1 ± j at Kp = 4]

What may not have been obvious before is now readily revealed: the system is unconditionally stable (for positive Kp) since the poles always lie in the LHP, and the Kp parameter can be used to position the poles for an overdamped, critically damped, or underdamped response. Also note that we can't arbitrarily position the poles anywhere on the s-plane – we are restricted to the root locus. This means, for example, that we cannot increase the damping of our underdamped response – the decay envelope will always be e^(-t).
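The table of roots can be generated directly from the quadratic formula. A Python sketch reproducing the values above:

```python
import cmath

def closed_loop_poles(Kp):
    """Roots of s^2 + 2s + 0.5*Kp = 0 for the maze rover example."""
    d = cmath.sqrt(1 - 0.5 * Kp)
    return -1 + d, -1 - d

p0 = closed_loop_poles(0)   # the open-loop poles 0 and -2
p2 = closed_loop_poles(2)   # critically damped: double pole at -1
p4 = closed_loop_poles(4)   # underdamped: -1 +/- j
```

For Kp > 2 the discriminant goes negative and the poles move vertically off the real axis, with real part fixed at -1 – exactly the restriction noted above.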

Root Locus Rules

We will now examine a few important "rules" about a root locus construction that can give us insight into how a system behaves as the parameter K is varied.

1. Number of Branches

If P(s) has n poles and m zeros then there will be n branches.

2. Locus End Points

The root locus starts (i.e. K = 0) at the poles of P(s), which are also the poles of Gc(s)Gp(s). This can be seen by substituting Eq. (8B.4) into Eq. (8B.3) and rearranging:

(s - p₁)(s - p₂)…(s - p_n) + K(s - z₁)(s - z₂)…(s - z_m) = 0    (8B.7)

This shows us that the roots of Eq. (8B.3), when K = 0, are just the open-loop poles of P(s). As K → ∞, the root locus branches terminate at the zeros of P(s). For n > m, the remaining n - m branches go to infinity.

3. Real Axis Symmetry

The root locus is symmetrical with respect to the real axis.

4. Real Axis Sections

Any portion of the real axis forms part of the root locus for K > 0 if the total number of real poles and zeros to the right of an exploratory point along the real axis is an odd integer. For K < 0, the number is zero or even.

5. Asymptote Angles

The (n - m) branches of the root locus going to infinity have asymptotes given by:

θ_A = r · 180°/(n - m)    (8B.8)

For K > 0, r is odd (1, 3, 5, …); for K < 0, r is even (0, 2, 4, …). This can be shown by considering the root locus as it is mapped far away from the group of open-loop poles and zeros. In this area, all the poles and zeros contribute about the same angular component. Since the total angular component must add up to 180° or some odd multiple, Eq. (8B.8) follows.

6. Asymptotic Intercept (Centroid)

The asymptotes all intercept at one point on the real axis given by:

σ_A = [Σ poles of P(s) - Σ zeros of P(s)] / (n - m)    (8B.9)

The value of σ_A is the centroid of the open-loop pole and zero configuration.

7. Real Axis Breakaway and Break-In Points

A breakaway point is where a section of the root locus branches from the real axis and enters the complex region of the s-plane in order to approach zeros which are finite or are located at infinity. Similarly, there are branches of the root locus which must break in onto the real axis in order to terminate on zeros.

Examples of breakaway and break-in points are shown below:

[Figure 8B.2 – Breakaway and break-in points of a root locus]

The breakaway (and break-in) points correspond to points in the s-plane where multiple real roots of the characteristic equation occur. A simple method for finding the breakaway points is available. Taking a lead from Eq. (8B.3), we can write the characteristic equation 1 + KP(s) = 0 as:

f(s) = B(s) + KA(s) = 0    (8B.10)

where A(s) and B(s) do not contain K. Suppose that f(s) has multiple roots of order r. Then f(s) may be written as:

f(s) = (s - p₁)^r (s - p₂)…(s - p_k)    (8B.11)

If we differentiate this equation with respect to s and set s = p₁, then we get:

df(s)/ds |_(s=p₁) = 0    (8B.12)

This means that multiple roots will satisfy Eq. (8B.12). From Eq. (8B.10) we obtain:

df(s)/ds = B′(s) + KA′(s) = 0    (8B.13)

The particular value of K that will yield multiple roots of the characteristic equation is obtained from Eq. (8B.13) as:

K = -B′(s)/A′(s)    (8B.14)

If we substitute this value of K into Eq. (8B.10), we get:

f(s) = B(s) - [B′(s)/A′(s)] A(s) = 0    (8B.15)

or:

B(s)A′(s) - B′(s)A(s) = 0    (8B.16)

On the other hand, from Eq. (8B.10) we obtain:

K = -B(s)/A(s)    (8B.17)

and:

dK/ds = -[B′(s)A(s) - B(s)A′(s)] / A²(s)    (8B.18)

If dK/ds is set equal to zero, we get the same equation as Eq. (8B.16). Therefore, the breakaway points can be determined from the roots of:

dK/ds = 0    (8B.19)

It should be noted that not all points that satisfy Eq. (8B.19) correspond to actual breakaway and break-in points: valid solutions must lie on the real axis, and must satisfy K > 0 (those that do not, satisfy K < 0 instead).

8. Imaginary Axis Crossing Points

The intersection of the root locus and the imaginary axis can be found by solving the characteristic equation whilst restricting solution points to s = jω. This is useful to analytically determine the value of K that causes the system to become unstable (or stable if the roots are entering from the right-half plane).

9. Effect of Poles and Zeros

Zeros tend to "attract" the locus, while poles tend to "repel" it.

10. Use a Computer

Use a computer to plot the root locus! The other rules provide intuition in shaping the root locus, and are also used to derive analytical quantities for the gain K, such as accurately evaluating stability.
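Applying Eq. (8B.19) to the earlier maze rover example: from 1 + Kp·0.5/(s(s+2)) = 0 we get K(s) = -2s(s+2) = -2s² - 4s, and the root of dK/ds = 0 is the breakaway point. A Python sketch (the helper names are illustrative):

```python
def polyder(c):
    """Derivative of a polynomial with coefficients c (highest power first)."""
    n = len(c) - 1
    return [c[i] * (n - i) for i in range(n)]

def polyval(c, s):
    """Evaluate a polynomial at s by Horner's method."""
    v = 0.0
    for a in c:
        v = v * s + a
    return v

K = [-2.0, -4.0, 0.0]          # K(s) = -2s^2 - 4s for the maze rover example
dK = polyder(K)                # dK/ds = -4s - 4
s_break = -dK[1] / dK[0]       # root of dK/ds = 0: s = -1
K_break = polyval(K, s_break)  # gain at the breakaway point: Kp = 2
```

The result s = -1 at Kp = 2 matches the double pole found algebraically in the earlier example, and it lies on the real-axis section of the locus with K > 0, so it is a valid breakaway point.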

Example

Consider a unity negative-feedback system with open-loop transfer function:

G(s) = K(s + 6) / [s(s + 1)(s + 4)²]

This system has four poles (two being a double pole) and one zero, all on the negative real axis. In addition, it has three zeros at infinity. The root locus of this system, illustrated below, can be drawn on the basis of the rules presented:

[Figure – root locus: breakaway at -0.4525, break-in at -7.034, departure from the double pole at -4, asymptotes at ±60°, and imaginary-axis crossing at K = 10.2483, ω ≈ 1.7]

Rule 1: There are four separate loci since the characteristic equation, 1 + G(s) = 0, is a fourth-order equation.

Rule 2: The root locus starts (K = 0) from the poles located at 0, -1, and a double pole located at -4. One branch terminates (K = ∞) at the zero located at -6 and three branches terminate at zeros which are located at infinity.

Rule 3: Complex portions of the root locus occur in complex-conjugate pairs.

4 1 2  3 180  180 .8B. Rule 5 The loci approach infinity as K becomes large at angles given by: 1  1 180  60 .11 Rule 4 The portions of the real axis between the origin and -1. 4 1 3  5 180  300 4 1 Rule 6 The intersection of the asymptotic lines and the real axis occurs at: A   9   6  1 4 1 Rule 7 The point of breakaway (or break-in) from the real axis is determined as follows. and between -6 and   are part of the root locus. From the relation: 1  G s H s   1  K s  6 0 2 ss  1s  4 we have: ss  1s  4 s  6  2 K  Taking the derivative we get:   s  6 2ss  1s  4  2s  1s  4  ss  1s  4  0 dK  ds s  62 2 2 Therefore: s  6s  42ss  1  2s  1s  4  ss  1s  42  0 s  4s  64s 2  11s  4  ss  1s  4  0 s  43s 3  30s 2  66s  24  0 s  4s 3  10s 2  22s  8  0 Signals and Systems 2014 . the double poles at -4.

One of the roots of this equation is obviously -4 (since it is written in factored form), which corresponds to the open-loop double pole. For the cubic equation s³ + 10s² + 22s + 8 = 0, we resort to finding the roots using MATLAB® with the command roots([1 10 22 8]). The roots are -7.034, -2.5135 and -0.4525. Thus, the breakaway and break-in points are -4, -7.034 and -0.4525. It is impossible for the root at -2.5135 to be a breakaway or break-in point for the negative feedback case, since the root locus doesn't lie there (the root at -2.5135 is the breakaway point for a positive feedback system, i.e. when K < 0).

Rule 8: The intersection of the root locus and the imaginary axis can be determined by solving the characteristic equation when s = jω. The characteristic equation becomes:

s(s + 1)(s + 4)² + K(s + 6) = 0
s⁴ + 9s³ + 24s² + (16 + K)s + 6K = 0
ω⁴ - j9ω³ - 24ω² + j(16 + K)ω + 6K = 0

Letting the real and imaginary parts go to zero, we have:

ω⁴ - 24ω² + 6K = 0  and  -9ω³ + (16 + K)ω = 0

Solving the second equation gives:

ω² = (16 + K)/9

Substituting this value into the first equation, we obtain:

[(16 + K)/9]² - 24(16 + K)/9 + 6K = 0

Solving this quadratic, we finally get K = 10.2483. Substituting this into the preceding equation, we obtain ω ≈ 1.7 rad s⁻¹ as the frequency of crossover.
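Multiplying the quadratic in K through by 81 gives K² + 302K - 3200 = 0, whose positive root is the crossover gain. A Python check of the Rule 8 numbers:

```python
import math

# Real part: w^4 - 24w^2 + 6K = 0; imaginary part: w^2 = (16 + K)/9.
# Eliminating w gives K^2 + 302K - 3200 = 0.
K = (-302 + math.sqrt(302**2 + 4 * 3200)) / 2   # positive root, about 10.248
w = math.sqrt((16 + K) / 9)                     # crossover, about 1.71 rad/s

residual = w**4 - 24 * w**2 + 6 * K             # should be essentially zero
```

The residual of the real-part equation vanishes, confirming that both equations are satisfied simultaneously at this gain and frequency.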

Rule 9 follows by observation of the resulting sketch. Rule 10 is shown below:

[Figure – MATLAB® Root Locus Editor plot of the same system]

Thus, the computer solution matches the analytical solution, although the analytical solution provides more accuracy for points such as the crossover point and breakaway / break-in points, and it also provides insight into the system's behaviour.

MATLAB®'s RLTool

We can use MATLAB® to graph our root loci.

Example

The previous maze rover positioning scheme has a zero introduced in the controller:

[Figure – unity-feedback loop: controller Kp(s + 3), maze rover plant 0.5/(s(s + 2))]

Using MATLAB®'s rltool, we would enter the following:

» Gp=tf(0.5, [1 2 0]);
» Gc=tf([1 3], 1);
» rltool

From the command window, just type "rltool" to start the root locus user interface. We choose "File|Import…" from the main menu. We place our transfer function Gp into the "G" position of MATLAB®'s control system model. We then place our Gc transfer function into the "C" position and press OK. MATLAB® will draw the root locus and choose appropriate graphing quantities. The closed-loop poles are shown by red squares, and can be dragged by the mouse. The status line gives the closed-loop pole locations, and the gain Kp that was needed to put them there can be observed at the top of the user interface.

The locus for this case looks like:

[Figure – root locus for the compensated system, with closed-loop pole positions marked at Kp = 1 and Kp = 15]

We can see how the rules help us to "bend and shape" the root locus for our purposes. For example, we may want to increase the damping (move the poles to the left), which we have now done using a zero on the real axis. We couldn't do this for the case of a straight gain in the previous example:

[Figure – the "old" locus never passes through the desired pole region; with the added zero, the root locus is "bent and shaped" to pass through the desired pole region]

This system is no longer an "all-pole system", so any parameters such as rise time etc. that we were aiming for in the design must be checked by observing the step response. This is done in MATLAB® by choosing "Analysis|Response to Step Command" from the main menu.

Example

We will now see what happens if we place a pole in the controller instead of a zero:

[Figure – unity-feedback loop: controller Kp/(s + 3), maze rover plant 0.5/(s(s + 2))]

MATLAB® gives us the following root locus:

[Figure – root locus with branches crossing into the RHP; a root locus can clearly show the limits of gain for a stable system – here the crossing occurs at Kp = 60]

We see that the pole "repels" the root locus and, unfortunately in this case, the root locus heads off into the RHP. If the parameter Kp is increased to over 60, then our system will be unstable!

Root Loci of Discrete-Time Systems

We can perform the same analysis for discrete-time systems, but of course the interpretation of the pole locations in the z-plane is different to the s-plane. The characteristic equation in this case is just:

1 + Gc(z)Gp(z) = 1 + KP(z) = 0    (8B.20)

Time Response of Discrete-Time Systems

We have already seen how to discretize a continuous-time system – there are methods such as the bilinear transform and step-response matching. One parameter of extreme importance in discretization is the sample period, Ts. We need to choose it to be "sufficiently small" so that the discrete-time system approximates the continuous-time system closely. If we don't, the pole and zero locations in the z-plane may lie out of the specification area, and in extreme cases can even lead to instability!

Example

We decide to simulate a maze rover by replacing its continuous-time model with a discrete-time model found using the bilinear transform:

[Figure – unity-feedback discrete-time loop: controller Kp, maze rover model G(z)]

If the original maze rover continuous-time transfer function was:

G(s) = 1/(s(s + 1))

then application of the bilinear transform gives:

G(z) = Ts²(z + 1)² / [(4 + 2Ts)z² - 8z + (4 - 2Ts)]

The closed-loop transfer function is now found to be:

T(z) = KpTs²(z + 1)² / [(4 + 2Ts + KpTs²)z² + (2KpTs² - 8)z + (4 - 2Ts + KpTs²)]

Clearly, the pole locations depend upon the sample period Ts.
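The bilinear substitution s = 2(z - 1)/(Ts(z + 1)) into G(s) = 1/(s(s + 1)) can be checked numerically: the denominator of G(z) should have roots at z = 1 (the image of the pole at s = 0) and z = (2 - Ts)/(2 + Ts) (the image of the pole at s = -1). A Python sketch:

```python
import math

def maze_rover_bilinear(Ts):
    """Coefficient lists (highest power of z first) for
    G(z) = Ts^2 (z+1)^2 / ((4+2Ts) z^2 - 8 z + (4-2Ts))."""
    num = [Ts**2, 2 * Ts**2, Ts**2]
    den = [4 + 2 * Ts, -8.0, 4 - 2 * Ts]
    return num, den

Ts = 0.1
num, den = maze_rover_bilinear(Ts)
a, b, c = den
disc = math.sqrt(b * b - 4 * a * c)
z1 = (-b + disc) / (2 * a)   # image of s = 0:  z = 1
z2 = (-b - disc) / (2 * a)   # image of s = -1: z = (2-Ts)/(2+Ts)
```

Both discrete-time poles agree with the general bilinear pole map z = (2 + sTs)/(2 - sTs), which is a quick sanity check on the transformed transfer function.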

A locus of the closed-loop poles as Ts varies can be constructed. The figure below shows the root locus as Ts is varied from 0.1 s to 1 s:

[Figure – root locus for a discrete-time system as the sample period is varied from Ts = 0.1 to Ts = 1, drawn relative to the unit circle in the z-plane]

The corresponding step responses for the two extreme cases are shown below:

[Figure – step responses due to the two different sample periods, Ts = 1 and Ts = 0.1]

We can see that the sample time affects the control system's ability to respond to changes in the output. If the sample period is small relative to the time constants in the system, then the output will be a good approximation to the continuous-time case. If the sample period is much larger, then we inhibit the ability of the feedback to correct for errors at the output – causing oscillations, increased overshoot, and sometimes even instability.
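The effect of Ts can be reproduced by simulating T(z) as a difference equation. The sketch below (Python; Kp = 1 is an assumption, since the notes do not state the gain used for the figures) steps the closed-loop system with a unit step; because T(z) has unity DC gain, the output should settle to 1:

```python
def step_response(Kp, Ts, n_steps):
    """Simulate T(z) = Kp*Ts^2 (z+1)^2 / ((4+2Ts+Kp*Ts^2) z^2
    + (2*Kp*Ts^2 - 8) z + (4-2Ts+Kp*Ts^2)) via its difference equation."""
    b = [Kp * Ts**2, 2 * Kp * Ts**2, Kp * Ts**2]
    a = [4 + 2 * Ts + Kp * Ts**2, 2 * Kp * Ts**2 - 8, 4 - 2 * Ts + Kp * Ts**2]
    x = [1.0] * n_steps                 # unit-step input
    y = [0.0] * n_steps
    for n in range(n_steps):
        acc = b[0] * x[n]
        if n >= 1:
            acc += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            acc += b[2] * x[n - 2] - a[2] * y[n - 2]
        y[n] = acc / a[0]
    return y

y = step_response(1.0, 0.1, 800)   # small Ts: settles to the input value
```

Re-running with Ts = 1 lets the two extremes in the figure above be compared directly.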

Summary

 The root locus for a unity-feedback system is a graph of the closed-loop pole locations as a system parameter, such as controller gain, is varied from 0 to infinity.

 There are various rules for drawing the root locus that help us to analytically derive various values, such as gains that cause instability. A computer is normally used to graph the root locus, but understanding the root locus rules provides insight into the design of compensators in feedback systems.

 We can bend and shape a root locus by the addition of poles and zeros so that it passes through a desired location of the complex plane.

 The root locus can tell us why and when a system becomes unstable.

 Root locus techniques can be applied to discrete-time systems.

 The root locus of a discrete-time system as the sample period is varied gives us insight into how close an approximation we have of a continuous-time system, and whether the chosen sample period can meet the specifications.

References

Kuo, B.: Automatic Control Systems, Prentice-Hall, 1986.

Nicol, J.: Circuits and Systems 2 Notes, NSWIT, 1995.

Exercises

1. For the system shown:

[Figure – unity-feedback system with open-loop transfer function K(s + 4)/(s(s + 1)(s + 2))]

Use RLTool in MATLAB® for the following:

(a) Plot the root locus as K is varied from 0 to ∞.

(b) Find the range of K for stability and the frequency of oscillation if unstable.

(c) Find the value of K for which the closed-loop system will have a 5% overshoot for a step input.

(d) Estimate the 1% settling time for the closed-loop pole locations found in part (c).

2. The plant shown is open-loop unstable due to the right-half plane pole:

[Figure – plant 1/((s + 3)(s - 8)) with input U(s) and output C(s)]

(a) Show by a plot of the root locus that the plant cannot be stabilized for any K, -∞ < K < ∞, if unity feedback is placed around it as shown:

[Figure – unity-feedback loop with forward path K/((s + 3)(s - 8))]

(Explain, with the aid of a sketch, why K < 0 is not worth pursuing.)

(b) An attempt is made to stabilize the plant using the feedback compensator shown:

[Figure – loop with forward path K/((s + 3)(s - 8)) and feedback compensator (s + 1)/(s + 2)]

Determine whether this design is successful by performing a root locus analysis for 0 ≤ K < ∞.

R s  (b) From the root-locus plot. find the value of a which gives both minimum overshoot and settling time when r t  is a step function. Signals and Systems 2014 .8B.22 3. For the system shown the required value of a is to be determined using the root-locus technique. R ( s) (a) Sketch the root-locus of (s + a ) (s + 3) 1 s ( s +1) C( s) C s  as a varies from   to  . (c) Find the maximum value of a which just gives instability and determine the frequency of oscillation for this value of a.

4. The block diagram of a DC motor position control system is shown below. The performance is adjusted by varying the tachometer gain K:

[Figure – loop with amplifier and motor 10/(s(s + 4)), gear train 0.3, and tachometer feedback Ks; R(s) is the desired position, C(s) the actual position]

K can vary from -100 to +100: 0 to +100 for the negative feedback configuration shown, and 0 to -100 if the electrical output connections from the tachometer are reversed (giving positive feedback).

(a) Sketch the root locus of C(s)/R(s) as K varies from -∞ to ∞. Use two plots: one for negative feedback and one for positive feedback. Find all important geometrical properties of the locus.

(b) Find the largest magnitude of K which just gives instability, and determine the frequency of oscillation of the system for this value of K.

(c) Find the steady-state error (as a function of K) when r(t) is a step function.

(d) From the root locus plots, find the value of K which will give 10% overshoot when r(t) is a step function, and determine the 10-90% rise time for this value of K.

Note: The closed-loop system has two poles (as found from the root locus) and no zeros. Verify this yourself using block diagram reduction.

but his childhood was something of a model for a future scientist. and had a fascination with mechanical toys which he retained all his life. who in 1884-1885." as Heaviside called it) that the theory eventually passed into general circulation in the 1890s. His mother died when he was 8. He was endowed with an exceptional memory. cast the long list of equations that Maxwell had given into the compact and symmetrical set of four vector equations shown here and now universally known as “Maxwell's equations”. Signals and Systems 2014 . Maxwell was able to summarize all observed phenomena of electrodynamics in a handful of partial differential equations known as Maxwell’s equations1: E    E   B t B  0   B  J   E t From these he was able to predict that there should exist electromagnetic waves which could be transmitted through free space at the speed of light. to the moon. At 16 he attended the University of Edinburgh where the library still holds records of the books he borrowed while still an undergraduate – they include works by Cauchy on differential 1 It was Oliver Heaviside. The revolution in human affairs wrought by these equations and their experimental verification by Heinrich Hertz in 1888 is well known: wireless communication. It was in this new form ("Maxwell redressed.8B. and even to the edge of the solar system! James Maxwell was born in Edinburgh. Scotland.so spectacularly demonstrated by television and radio transmissions across the globe. At 14 he presented a paper to the Royal Society of Edinburgh on ovals.24 James Clerk Maxwell (1831-1879) Maxwell produced a most spectacular work of individual genius – he unified electricity and magnetism. control and measurement .

8B.25

equations, Newton on optics, Fourier on the theory of heat, and Poisson on mechanics, as well as Taylor's scientific memoirs.

In 1850 he moved to Trinity College, Cambridge, where he graduated with a degree in mathematics in 1854. Maxwell was edged out of first place in their final examinations by his classmate Edward Routh, who was also an excellent mathematician. Maxwell stayed at Trinity where, in 1855, he formulated a "theory of three primary colour-perceptions" for the human perception of colour. In 1855 and 1856 he read papers to the Cambridge Philosophical Society "On Faraday's Lines of Force" in which he showed how a few relatively simple mathematical equations could express the behaviour of electric and magnetic fields.

In 1856 he became Professor of Natural Philosophy at Aberdeen, Scotland, and started to study the rings of Saturn. In 1857 he showed that stability could be achieved only if the rings consisted of numerous small solid particles, an explanation now confirmed by the Voyager spacecraft.

In 1860 Maxwell moved to King's College in London. In 1861 he created the first colour photograph – of a Scottish tartan ribbon – and was elected to the Royal Society. In 1862 he calculated that the speed of propagation of an electromagnetic wave is approximately that of the speed of light:

We can scarcely avoid the conclusion that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena.

Maxwell also continued work he had begun at Aberdeen, on the kinetic theory of gases (he had first considered the problem while studying the rings of Saturn). In 1866 he formulated, independently of Ludwig Boltzmann, the kinetic theory of gases, which showed that temperature and heat involved only molecular motion.

Maxwell's famous account, "A Dynamical Theory of the Electromagnetic Field", was read before a largely perplexed Royal Society in 1864. Here he brought forth, for the first time, the equations which comprise the basic laws of electromagnetism.

All the mathematical sciences are founded on relations between physical laws and laws of numbers, so that the aim of exact science is to reduce the problems of nature to the determination of quantities by operations with numbers. – James Clerk Maxwell

Signals and Systems 2014

8B.26

Maxwell was the first to publish an analysis of the effect of a capacitor in a circuit containing inductance, resistance and a sinusoidal voltage source, and to show the conditions for resonance. The way in which he came to solve this problem makes an interesting story:

Maxwell was spending an evening with Sir William Grove who was then engaged in experiments on vacuum tube discharges. He used an induction coil for this purpose, and found that if he put a capacitor in series with the primary coil he could get much larger sparks. He could not see why. Grove knew that Maxwell was a splendid mathematician, and that he also had mastered the science of electricity, especially the theoretical art of it, and so he thought he would ask this young man [Maxwell was 37] for an explanation. Maxwell, who had not had very much experience in experimental electricity at that time, was at a loss. But he spent that night in working over his problem, and the next morning he wrote a letter to Sir William Grove explaining the whole theory of the capacitor in series connection with a coil. It is wonderful what a genius can do in one night!

Maxwell's letter, which began with the sentence, "Since our conversation yesterday on your experiment on magneto-electric induction, I have considered it mathematically, and now send you the result," was dated March 27, 1868. Preliminary to the mathematical treatment, Maxwell gave in this letter an unusually clear exposition of the analogy existing between certain electrical and mechanical effects. In the postscript, or appendix, he gave the mathematical theory of the experiment. Using different, but equivalent, symbols, he derived and solved the now familiar expression for the current i in such a circuit:

L di/dt + Ri + (1/C)∫i dt = V sin(ωt)

The solution for the current amplitude of the resulting sinusoid, in the steady-state, is:

I = V / sqrt( R² + (ωL - 1/(ωC))² )

from which Maxwell pointed out that the current would be a maximum when:

ωL = 1/(ωC)

Signals and Systems 2014

8B.27

In 1868, Maxwell published his paper "On Governors" in which he derived the equations of motion of engines fitted with governors of various types, damped in several ways, and explained in mathematical terms the source of the oscillation. It was known for some time that the governor was essentially a centrifugal pendulum, which sometimes exhibited "hunting" about a set point – that is, the governor would oscillate about an equilibrium position until limited in amplitude by the throttle valve or the travel allowed to the bobs. This problem was solved by Airy in 1840 by fitting a damping disc to the governor. It was then possible to minimize speed fluctuations by adjusting the "controller gain". But as the gain was increased, the governors would burst into oscillation again.

When creating his standard for electrical resistance, Maxwell wanted to design a governor to keep a coil spinning at a constant rate. He made the system stable by using the idea of negative feedback. He was also able to set bounds on the parameters of the system that would ensure stable operation. He posed the problem for more complicated control systems, but thought that a general solution was insoluble. It was left to Routh some years later to solve the general problem of linear system stability: "It has recently come to my attention that my good friend James Clerk Maxwell has had difficulty with a rather trivial problem…".

Following Maxwell, Heinrich Hertz later showed a thorough acquaintance with electrical resonance and made good use of it in his experimental apparatus that proved the existence of electromagnetic waves, as predicted by Maxwell's equations. In the first of his series of papers describing his experiment, "On Very Rapid Electric Oscillations", published in 1887, he devotes one section to a discussion of "Resonance Phenomena" and published the first electrical resonance curve:

[Figure: The first electrical resonance curve published, by Hertz, 1887]

Signals and Systems 2014

8B.28

In 1870 Maxwell published his textbook Theory of Heat. The following year he returned to Cambridge to be the first Cavendish Professor of Physics – he designed the Cavendish laboratory and helped set it up. The Cavendish laboratory was opened in 1874, and Maxwell spent the next 5 years editing Henry Cavendish's papers.

The four partial differential equations describing electromagnetism, now known as Maxwell's equations, first appeared in fully developed form in his Treatise on Electricity and Magnetism in 1873. The significance of the work was not immediately grasped, mainly because an understanding of the atomic nature of electromagnetism was not yet at hand.

Maxwell died of abdominal cancer, in 1879, at the age of forty-eight. At his death, Maxwell's reputation was uncertain. He was recognised to have been an exceptional scientist, but his theory of electromagnetism remained to be convincingly demonstrated. About 1880 Hermann von Helmholtz, an admirer of Maxwell, discussed the possibility of confirming his equations with a student, Heinrich Hertz. In 1888 Hertz performed a series of experiments which produced and measured electromagnetic waves and showed how they behaved like light. Thereafter, Maxwell's reputation continued to grow, and he may be said to have prepared the way for twentieth-century physics.

References

Blanchard, J.: The History of Electrical Resonance, Bell System Technical Journal, Vol. 20 (4), p. 415, 1941.

Signals and Systems 2014

9A.1

Lecture 9A – State-Variables

State representation. Solution of the state equations. Transition matrix. Transfer function. Impulse response. Linear state-variable feedback.

Overview

The frequency-domain has dominated our analysis and design of signals and systems – up until now. Frequency-domain techniques are powerful tools, but they do have limitations. High-order systems are hard to analyse and design. Initial conditions are hard to incorporate into the analysis process (remember – the transfer function only gives the zero-state response). A time-domain approach, called the state-space approach, overcomes these deficiencies and also offers additional features of analysis and design that we have not yet considered.

In the analysis of a system via the state-space approach, the system is characterized by a set of first-order differential equations that describe its "state" variables. State variables are usually denoted by q1, q2, q3, …, qn. They characterize the future behaviour of a system once the inputs to a system are specified, together with a knowledge of the initial states.

State Representation

Consider the following simple electrical system:

[Figure 9A.1 – a series RLC circuit: source vs in series with resistor R, inductor L (current i) and capacitor C (voltage vC)]

Signals and Systems 2014

9A.2

States

For the system in Figure 9A.1, we can choose i and vC as the state variables. Therefore, let:

q1 = i    (9A.1a)
q2 = vC   (9A.1b)

From KVL, we get:

vs = Ri + L di/dt + vC   (9A.2)

Rearranging to get the derivative on the left-hand side, we get:

di/dt = -(R/L)i - (1/L)vC + (1/L)vs   (9A.3)

In terms of our state variables, given in Eqs. (9A.1), we can rewrite this as:

dq1/dt = -(R/L)q1 - (1/L)q2 + (1/L)vs   (9A.4)

Finally, we write Eq. (9A.4) in the standard nomenclature for state-variable analysis – we use q̇ = dq/dt and also let the input, vs, be represented by the symbol x:

q̇1 = -(R/L)q1 - (1/L)q2 + (1/L)x   (9A.5)

Signals and Systems 2014

9A.3

Returning to the analysis of the circuit in Figure 9A.1, we have for the current through the capacitor:

i = C dvC/dt   (9A.6)

Substituting our state variables, we have:

q1 = C q̇2   (9A.7)

rearranging to get the derivative on the left-hand side:

q̇2 = (1/C)q1   (9A.8)

Notice how, for a second-order system, we need to find two first-order differential equations to describe the system. The two equations can be written in matrix form:

[q̇1]   [-R/L  -1/L][q1]   [1/L]
[q̇2] = [ 1/C    0 ][q2] + [ 0 ] x    (9A.9)

Using matrix symbols, this set of equations can be compactly written as the state equation:

q̇ = Aq + bx   (9A.10)

We will reserve small boldface letters for column vectors, such as q and b, and capital boldface letters for matrices, such as A. Scalar variables such as x are written in italics, as usual.

Eq. (9A.10) is very important – it tells us how the states of the system q change in time due to the input x.

Signals and Systems 2014
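As a numerical sketch of Eq. (9A.10), the following Python fragment builds A and b for the series RLC circuit and steps the state equation forward with the simple forward-Euler rule. The component values (R = 2 Ω, L = 0.5 H, C = 0.25 F) and the step input are arbitrary illustrative choices, not values from the notes:

```python
# State equation q' = A q + b x from Eqs. (9A.9)/(9A.10):
# states q1 = i (inductor current), q2 = vC (capacitor voltage).
R, L, C = 2.0, 0.5, 0.25   # hypothetical component values

A = [[-R / L, -1.0 / L],
     [1.0 / C, 0.0]]
b = [1.0 / L, 0.0]

def derivative(q, x):
    """Evaluate q' = A q + b x for this second-order system."""
    return [A[0][0] * q[0] + A[0][1] * q[1] + b[0] * x,
            A[1][0] * q[0] + A[1][1] * q[1] + b[1] * x]

# Forward-Euler simulation of the step response (x = 1 for t >= 0),
# starting from the zero initial state.
q = [0.0, 0.0]
dt = 1e-4
for _ in range(int(5.0 / dt)):
    dq = derivative(q, 1.0)
    q = [q[0] + dt * dq[0], q[1] + dt * dq[1]]

# At DC steady state the capacitor blocks current, so i -> 0 and vC -> 1.
```

At t = 5 s the transient has decayed, so the simulated state should sit near the steady-state value q = [0, 1] predicted by setting q̇ = 0.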

9A.4

Output

The system output can usually be expressed as a linear combination of all the state variables. For example, if for the RLC circuit of Figure 9A.1 the output y is vC then:

y = vC = q2   (9A.11)

Therefore, in matrix notation, we write:

y = [0 1][q1; q2]   (9A.12)

which is usually expressed as the output equation:

y = cᵀq   (9A.13)

Sometimes we also have:

y = cᵀq + dx   (9A.14)

Multiple Input-Multiple Output Systems

State-variable representation is good for multiple-input multiple-output (MIMO) systems. All we have to do is generalise our input and output above to vector inputs and outputs:

q̇ = Aq + Bx
y = Cᵀq + Dx   (9A.15)

Signals and Systems 2014

9A.5

Solution of the State Equations

Once the state equations for a system have been obtained, it is usually necessary to find the output of a system for a given input (However, some parameters of the system can be directly determined by examining the A matrix, in which case we may not need to solve the state equations). We can solve the state equations in the s-domain. The solution will be the complete response, not just the ZSR.

Taking the Laplace transform of Eq. (9A.10) gives:

sQ(s) - q(0) = AQ(s) + bX(s)   (9A.16)

Notice how the initial conditions are automatically included by the Laplace transform of the derivative. Making Q(s) the subject, we get:

Q(s) = (sI - A)⁻¹ q(0) + (sI - A)⁻¹ bX(s)   (9A.17)

Because of its importance in state-variable analysis, we define the following matrix, called the resolvent matrix:

Φ(s) = (sI - A)⁻¹   (9A.18)

This simplifies Eq. (9A.17) to:

Q(s) = Φ(s)q(0) + Φ(s)bX(s)   (9A.19)

Using Eq. (9A.13), the LT of the output (for d = 0) is then:

Y(s) = cᵀΦ(s)q(0) + cᵀΦ(s)bX(s)   (9A.20)

Signals and Systems 2014

9A.6

All we have to do is take the inverse Laplace transform (ILT) to get the solution in the time-domain. Before we do that, we define the ILT of the resolvent matrix, called the transition matrix. The transition matrix and resolvent matrix form a Laplace transform pair:

φ(t) ⟷ Φ(s)   (9A.21)

The ILT of Eq. (9A.19) is just the complete solution of the state equation:

q(t) = φ(t)q(0) + ∫₀ᵗ φ(t - λ) b x(λ) dλ    (9A.22)
         (ZIR)            (ZSR)

Notice how multiplication in the s-domain turned into convolution in the time-domain. The transition matrix is a generalisation of impulse response, but it applies to the states – not the output! We can get the output response of the system after solving for the states by direct substitution into Eq. (9A.14).

Transition Matrix

The transition matrix possesses two interesting properties that help it to be calculated by a digital computer:

φ(0) = I        (9A.23a)
φ(t) = e^{At}   (9A.23b)

The first property is obvious by substituting t = 0 into Eq. (9A.22). The second relationship arises by observing that the solution to the state equation for the case of zero input, q̇ = Aq, is q = q(0)e^{At}. For zero input, Eq. (9A.22) gives

Signals and Systems 2014

9A.7

q(t) = φ(t)q(0), so that we must have φ(t) = e^{At}.

The matrix e^{At} is defined by:

e^{At} = I + At/1! + (At)²/2! + (At)³/3! + …   (9A.24)

This is easy to calculate on a digital computer, because it consists of matrix multiplication and addition. The series is truncated when the desired accuracy is reached.

Example

Suppose a system is described by the following differential equation:

d²y/dt² + 2 dy/dt + y = dr/dt + r   (9A.25)

where the input and initial conditions are:

r = sin t,  y(0) = 1,  ẏ(0) = 0   (9A.26)

Let:

q1 = y,  q2 = ẏ,  x = ṙ + r   (9A.27)

then:

q̇1 = q2
q̇2 = -q1 - 2q2 + x   (9A.28)

or just:

q̇ = Aq + bx   (9A.29)

Signals and Systems 2014
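The truncated series of Eq. (9A.24) is easy to check numerically. A minimal Python sketch, using this example's A matrix, sums the series and compares it with the closed-form transition matrix worked out in the next section (for this A, φ(t) = e^{-t}[[1+t, t], [-t, 1-t]]):

```python
import math

# The example's A matrix (from q1' = q2, q2' = -q1 - 2 q2 + x).
A = [[0.0, 1.0], [-1.0, -2.0]]

def mat_mul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, t, terms=30):
    """Truncated series e^{At} = I + At/1! + (At)^2/2! + ...  (Eq. 9A.24)."""
    At = [[A[i][j] * t for j in range(2)] for i in range(2)]
    result = [[1.0, 0.0], [0.0, 1.0]]   # I
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = mat_mul(power, At)
        fact *= n
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

t = 0.5
phi = expm_series(A, t)

# Closed-form transition matrix for this repeated-eigenvalue system.
exact = [[math.exp(-t) * (t + 1), t * math.exp(-t)],
         [-t * math.exp(-t), math.exp(-t) * (1 - t)]]
```

Thirty terms are far more than enough at t = 0.5; the truncated series and the closed form agree to machine precision.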

9A.8

with:

A = [ 0   1 ],  b = [0],  q = [q1],  q̇ = [q̇1]    (9A.30)
    [-1  -2 ]       [1]       [q2]       [q̇2]

We form the resolvent matrix by firstly finding sI - A:

sI - A = [s 0] - [ 0   1 ] = [s   -1 ]    (9A.31)
         [0 s]   [-1  -2 ]   [1  s+2 ]

Then remembering that the inverse of a matrix B is given by:

B⁻¹ = adj B / |B|   (9A.32)

we get the resolvent matrix:

Φ(s) = (sI - A)⁻¹ = 1/(s+1)² [s+2   1] = [ (s+2)/(s+1)²   1/(s+1)² ]    (9A.33)
                             [-1    s]   [ -1/(s+1)²      s/(s+1)² ]

The transition matrix is the inverse Laplace transform of the resolvent matrix:

φ(t) = L⁻¹{Φ(s)} = [ e^{-t}(t+1)   te^{-t}     ]    (9A.34)
                   [ -te^{-t}      e^{-t}(1-t) ]

So, from Eq. (9A.22), the ZIR is given by:

q_ZIR(t) = φ(t)q(0) = [ e^{-t}(t+1)   te^{-t}     ][1] = [ e^{-t}(t+1) ]    (9A.35)
                      [ -te^{-t}      e^{-t}(1-t) ][0]   [ -te^{-t}    ]

Since we don't like performing convolution in the time-domain, we use Eq. (9A.19) to find the ZSR:

q_ZSR(t) = L⁻¹{Φ(s)bX(s)}   (9A.36)

Signals and Systems 2014

9A.9

The Laplace transform of the input is:

X(s) = (s+1)/(s²+1)   (9A.37)

so the ZSR is:

q_ZSR(t) = L⁻¹{ [ (s+2)/(s+1)²   1/(s+1)² ][0] (s+1)/(s²+1) } = L⁻¹{ [ 1/((s+1)(s²+1)) ] }    (9A.38)
                [ -1/(s+1)²      s/(s+1)² ][1]                       [ s/((s+1)(s²+1)) ]

Use partial fractions to get:

[q1]       = [ (1/2)(e^{-t} - cos t + sin t)  ]    (9A.39)
[q2]_ZSR     [ (1/2)(-e^{-t} + cos t + sin t) ]

The total response is the sum of the ZIR and the ZSR:

[q1] = [ (3/2)e^{-t} + te^{-t} - (1/2)cos t + (1/2)sin t  ]    (9A.40)
[q2]   [ -(1/2)e^{-t} - te^{-t} + (1/2)cos t + (1/2)sin t ]

This is just the solution for the states. To get the output, we use:

y = cᵀq + dx,  cᵀ = [1 0],  d = 0   (9A.41)

Therefore, the output is:

y = (3/2)e^{-t} + te^{-t} - (1/2)cos t + (1/2)sin t   (9A.42)

You should confirm this solution by solving the differential equation directly using your previous mathematical knowledge, e.g. the method of undetermined coefficients.

Signals and Systems 2014
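The closed-form total response of Eq. (9A.40) can be cross-checked by integrating the state equations numerically. A small sketch using the classical fourth-order Runge-Kutta method (the step size and the sample time t = 1 are illustrative choices):

```python
import math

# System from Eq. (9A.28): q1' = q2, q2' = -q1 - 2 q2 + x, with
# x(t) = r'(t) + r(t) = cos(t) + sin(t) for r = sin(t), and q(0) = [1, 0].
def f(t, q):
    x = math.cos(t) + math.sin(t)
    return [q[1], -q[0] - 2.0 * q[1] + x]

# Classical RK4 integration from t = 0 to t = 1.
q = [1.0, 0.0]
t, dt = 0.0, 1e-3
for _ in range(1000):
    k1 = f(t, q)
    k2 = f(t + dt / 2, [q[i] + dt / 2 * k1[i] for i in range(2)])
    k3 = f(t + dt / 2, [q[i] + dt / 2 * k2[i] for i in range(2)])
    k4 = f(t + dt, [q[i] + dt * k3[i] for i in range(2)])
    q = [q[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
         for i in range(2)]
    t += dt

# Closed-form states at t = 1 from Eq. (9A.40).
q1_exact = 2.5 * math.exp(-1.0) - 0.5 * math.cos(1.0) + 0.5 * math.sin(1.0)
q2_exact = -1.5 * math.exp(-1.0) + 0.5 * math.cos(1.0) + 0.5 * math.sin(1.0)
```

The integrated states agree with the analytical expressions to well within the integrator's truncation error.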

9A.10

Transfer Function

The transfer function of a single input-single output (SISO) system can be obtained easily from the state-variable equations. Since a transfer function only gives the ZSR (all initial conditions are zero), Eq. (9A.19) becomes:

Q(s) = Φ(s)bX(s)   (9A.43)

The output in the s-domain, using the Laplace transform of Eq. (9A.13), is just:

Y(s) = cᵀQ(s) = cᵀΦ(s)bX(s)   (9A.44)

Therefore, the transfer function (for d = 0) is given by:

H(s) = cᵀΦ(s)b   (9A.45)

Impulse Response

The impulse response is just the inverse Laplace transform of the transfer function:

h(t) = cᵀφ(t)b = cᵀe^{At}b   (9A.46)

It is possible to compute the impulse response directly from the coefficient matrices of the state model of the system.

Signals and Systems 2014

9A.11

Example

Continuing the analysis of the system used in the previous example, we can find the transfer function using Eq. (9A.45):

H(s) = [1 0] [ (s+2)/(s+1)²   1/(s+1)² ][0]
             [ -1/(s+1)²      s/(s+1)² ][1]

     = [1 0] [ 1/(s+1)² ]
             [ s/(s+1)² ]

     = 1/(s+1)²   (9A.47)

Compare with the Laplace transform of the original differential equation:

d²y/dt² + 2 dy/dt + y = x
(s² + 2s + 1)Y(s) = X(s)   (9A.48)

from which the transfer function is:

H(s) = Y(s)/X(s) = 1/(s² + 2s + 1) = 1/(s+1)²   (9A.49)

Why would we use the state-variable approach to obtain the transfer function? For a simple system, we probably wouldn't, but for multiple-input multiple-output systems, it is much easier using the state-variable approach.

Signals and Systems 2014
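H(s) = cᵀ(sI - A)⁻¹b can also be checked numerically at any sample point. A minimal sketch, inverting the 2×2 resolvent with the adjugate formula at the arbitrary test value s = 2, where 1/(s+1)² = 1/9:

```python
# Example system: A = [0 1; -1 -2], b = [0; 1], c^T = [1 0].
A = [[0.0, 1.0], [-1.0, -2.0]]
b = [0.0, 1.0]
c = [1.0, 0.0]

s = 2.0   # arbitrary test point

# sI - A for the 2x2 case, inverted with the adjugate formula.
m11, m12 = s - A[0][0], -A[0][1]
m21, m22 = -A[1][0], s - A[1][1]
det = m11 * m22 - m12 * m21
Phi = [[m22 / det, -m12 / det], [-m21 / det, m11 / det]]   # resolvent at s

# H(s) = c^T Phi(s) b
Phi_b = [Phi[0][0] * b[0] + Phi[0][1] * b[1],
         Phi[1][0] * b[0] + Phi[1][1] * b[1]]
H = c[0] * Phi_b[0] + c[1] * Phi_b[1]

H_expected = 1.0 / (s + 1.0) ** 2   # = 1/9 from Eq. (9A.49)
```

Evaluating the transfer function this way at a handful of points is a quick sanity check on a hand-derived resolvent matrix.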

9A.12

Linear State-Variable Feedback

Consider the following system drawn using a state-variable approach:

[Figure 9A.2 – block diagram of linear state-variable feedback: r(t) → gain k0 → x(t) → b → integrator (q̇ to q), with A feeding q back to the integrator input, and cᵀ producing y(t); the states q are also fed back through kᵀ and subtracted from r. The controller is {k0, kᵀ}; the rest is the system to be controlled.]

The system has been characterised in terms of states – you should confirm that the above diagram of the "system to be controlled" is equivalent to the matrix formulation of Eqs. (9A.10) and (9A.13).

We have placed a controller in front of the system. The controller just multiplies each of the states qi by a gain ki, subtracts the sum of these from the input r (r is the set point, or reference input, and we desire the output y to follow it), and multiplies the result by a gain k0. The design of the controller involves the determination of the controller variables k0 and k to achieve a desired response from the system (The desired response could be a time-domain specification, such as rise time, or a frequency specification, such as bandwidth).

Now, the input x to the controlled system is:

x = k0(r - kᵀq)   (9A.50)

Therefore, the state equations are:

q̇ = A_k q + bk0 r   (9A.51)

Signals and Systems 2014

9A.13

where:

A_k = A - k0 b kᵀ   (9A.52)

State-variable feedback modifies the A matrix. Eq. (9A.51) is the state-variable representation of the overall system (controller plus system to be controlled). The state equations Eq. (9A.51) still have the same form as Eq. (9A.10), but A changes to A_k and the input changes from x to k0 r.

Analysis of the overall system can now proceed as follows. For the ZSR, the transfer function is given by Eq. (9A.45) with the above substitutions:

H(s) = k0 cᵀ Φ_k(s) b   (9A.53)

where Φ_k(s) is the modified resolvent matrix when linear state-variable feedback is applied:

Φ_k(s) = (sI - A_k)⁻¹   (9A.54)

We choose the controller variables k0 and kᵀ = [k1 k2 … kn] to create the transfer function obtained from the design criteria (easy for an n = 2 second-order system).

Signals and Systems 2014

9A.14

Example

Suppose that it is desired to control an open-loop process using state-variable control techniques. The open-loop system is shown below:

X(s) → [1/(s+70)] → Q1(s) → [1/s] → Q2(s) = Y(s)

Suppose it is desired that the closed-loop second-order characteristics of the feedback control system have the following parameters:

ωn = 50 rad/s,  ζ = 0.7071   (9A.55)

Then the desired transfer function is:

C(s)/R(s) = ωn² / (s² + 2ζωn s + ωn²) = 2500 / (s² + 70.71s + 2500)   (9A.56)

The first step is to formulate a state-variable representation of the system. The relationships for each state-variable and the output variable in the s-domain are obtained directly from the block diagram:

Q1 = X(s)/(s + 70)
Q2 = Q1/s
Y = Q2   (9A.57)

Rearranging, we get:

sQ1 = -70Q1 + X
sQ2 = Q1
Y = Q2   (9A.58)

Signals and Systems 2014

9A.15

The corresponding state-variable representation is readily found to be:

q̇ = [-70  0] q + [1] x
    [ 1   0]     [0]

y = [0 1]q   (9A.59)

or just:

q̇ = Aq + bx
y = cᵀq   (9A.60)

All linear, time-invariant systems have this state-variable representation.

To implement state-variable feedback, we form the following system:

[Block diagram: controller gain k0 driving the process 1/(s+70) → Q1(s) → 1/s → Q2(s) = C(s); state feedback gains k1 (from Q1) and k2 (from Q2) are summed and subtracted from R(s)]

We see that the controller accepts a linear combination of the states, and compares this with the reference input. It then provides gain and applies the resulting signal as the "control effort", X(s), to the process. The input signal to the process is therefore:

x = k0(r - (k1q1 + k2q2))   (9A.61)

or in matrix notation:

x = k0(r - kᵀq)   (9A.62)

Signals and Systems 2014

9A.16

Applying this as the input to the system changes the describing state equation of Eq. (9A.60) to:

q̇ = Aq + b k0(r - kᵀq)
q̇ = (A - k0 b kᵀ)q + b k0 r
q̇ = A_k q + b k0 r   (9A.63)

where:

A_k = A - k0 b kᵀ   (9A.64)

If we let:

Φ_k(s) = (sI - A_k)⁻¹   (9A.65)

then the transfer function of the closed-loop system is given by:

H(s) = k0 cᵀ Φ_k(s) b   (9A.66)

We now need to evaluate a few matrices. First:

A_k = A - k0 b kᵀ
    = [-70  0] - k0 [1][k1 k2]
      [ 1   0]      [0]

    = [-70  0] - [k0k1  k0k2]
      [ 1   0]   [ 0     0  ]

    = [-70 - k0k1   -k0k2]    (9A.67)
      [ 1            0   ]

Then:

Φ_k(s) = (sI - A_k)⁻¹

       = [s + 70 + k0k1   k0k2]⁻¹
         [-1              s   ]

       = 1/(s² + (70 + k0k1)s + k0k2) [s   -k0k2        ]    (9A.68)
                                      [1   s + 70 + k0k1]

Signals and Systems 2014

9A.17

The closed-loop transfer function is then found as:

H(s) = k0 cᵀ Φ_k(s) b

     = k0 [0 1] · 1/(s² + (70 + k0k1)s + k0k2) [s   -k0k2        ][1]
                                               [1   s + 70 + k0k1][0]

     = k0 [0 1] [s] / (s² + (70 + k0k1)s + k0k2)
                [1]

     = k0 / (s² + (70 + k0k1)s + k0k2)   (9A.69)

The values of k0, k1 and k2 can be found from Eqs. (9A.56) and (9A.69). The following set of simultaneous equations result:

k0 = 2500
k0 k2 = 2500
70 + k0 k1 = 70.71   (9A.70)

We have three equations and three unknowns. Solving, we find that:

k0 = 2500
k1 = 2.843 × 10⁻⁴
k2 = 1   (9A.71)

This completes the controller design. For this simple system, the final step is not necessary. The final step would be to draw the root locus and examine the relative stability, and the sensitivity of slight gain variations.

Signals and Systems 2014
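The three design equations of Eq. (9A.70) are simple enough to solve in a few lines. A sketch that recovers the gains from the specification of Eq. (9A.55) (using ζ = 1/√2, of which 0.7071 is the rounded value):

```python
import math

# Desired closed-loop characteristics from Eq. (9A.55).
wn = 50.0                 # natural frequency, rad/s
zeta = 1 / math.sqrt(2)   # damping ratio (approximately 0.7071)

# Matching k0 / (s^2 + (70 + k0*k1)s + k0*k2) of Eq. (9A.69) with
# wn^2 / (s^2 + 2*zeta*wn*s + wn^2) gives the equations of Eq. (9A.70).
k0 = wn ** 2                         # k0 = 2500
k2 = wn ** 2 / k0                    # k0*k2 = 2500  ->  k2 = 1
k1 = (2 * zeta * wn - 70.0) / k0     # 70 + k0*k1 = 70.71  ->  k1 ~= 2.843e-4
```

Matching denominator coefficients term by term like this generalises directly to any second-order pole-placement design.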

9A.18

Summary

• State-variables describe "internal states" of a system rather than just the input-output relationship.

• The state-variable approach is a time-domain approach. We can include initial conditions in the analysis of a system to obtain the complete response.

• The state-variable equations can be solved using Laplace transform techniques.

• We can derive the transfer function of a SISO system using state-variables.

• Linear state-variable feedback involves the design of gains for each of the states, plus the input.

References

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB®, Prentice-Hall, 1997.

Shinners, S.: Modern Control System Theory and Application, Addison-Wesley, 1978.

Signals and Systems 2014

9A.19

Exercises

1.
Using capacitor voltages and inductor currents, write a state-variable representation for the following circuit:

[Circuit: source vs, resistors R1, R2, R3, inductor L1 and capacitor C1]

2.
Consider a linear system with input u and output y. Three experiments are performed on this system using the inputs x1(t), x2(t) and x3(t) for t ≥ 0. In each case, the initial state at t = 0, x(0), is the same. The corresponding observed outputs are y1(t), y2(t) and y3(t). Which of the following three predictions are true if x(0) ≠ 0?

(a) If x3 = x1 + x2, then y3 = y1 + y2.

(b) If x3 = (1/2)(x1 + x2), then y3 = (1/2)(y1 + y2).

(c) If x3 = x1 - x2, then y3 = y1 - y2.

Which are true if x(0) = 0?

Signals and Systems 2014

9A.20

3.
Write dynamical equation descriptions for the block diagrams shown below with the chosen state variables.

(a) [Block diagram: X → 1/(s+2) → Q1, with gains 3 and 2, 1/s → Q2, 1/(s+2) → Q3 → Y]

(b) [Block diagram: X → 1/(s+2) → Q1 → 1/(s+1) → Q2 → 1/(s+2) → Q3 → Y]

4.
Find the transfer function of the systems in Q3:

(i) by block diagram reduction

(ii) directly from your answers in Q3 (use the resolvent matrix etc.)

5.
Given:

q̇ = [-1  1] q + [1] x
    [-4  1]     [2]

and:

q(0) = [3],   x(t) = 0 for t < 0,  1 for t ≥ 0
       [0]

Find q(t).

Signals and Systems 2014

9A.21

6.
Write a simple MATLAB® script to evaluate the time response of the system described in state equation form in Q5 using the approximate relationship:

q̇ ≈ [q(t + T) - q(t)] / T

Use this script to plot q(t) using:

(a) T = 0.1 s

(b) T = 0.01 s

(c) T = 1 s

Compare these plots to the values given by the exact solution to Q5 (obtained by finding the inverse Laplace transforms of the answer given). Comment on the effect of varying the time increment T.

Signals and Systems 2014


9B.1

Lecture 9B – State-Variables 2

Normal form. Similarity transform. Solution of the state equations for the ZIR. Poles and repeated eigenvalues. Discrete-time state-variables. Discrete-time response. Discrete-time transfer function.

Overview

State-variable analysis is useful for high-order systems and multiple-input multiple-output systems. It can also be used to find transfer functions between any output and any input of a system. It also gives us the complete response. The only drawback to all this analytical power is that solving the state-variable equations for high-order systems is difficult to do symbolically. Any computer solution also has to be thought about in terms of processing time and memory storage requirements. The use of eigenvectors and eigenvalues solves this problem. State-variable analysis can also be extended to discrete-time systems, producing exactly analogous equations as for continuous-time systems.

Normal Form

Solving matrix equations is hard... unless we have a "trivial" system:

[ż1]   [λ1  0  …  0 ][z1]
[ż2] = [0   λ2 …  0 ][z2]    (9B.1)
[⋮ ]   [⋮       ⋱  ⋮][⋮ ]
[żn]   [0   0  …  λn][zn]

which we would just write as:

ż = Λz   (9B.2)

Signals and Systems 2014

9B.2

This is useful for the ZIR of a state-space representation where:

q̇ = Aq   (9B.3)

We therefore seek a "similarity transform" to turn q̇ = Aq into ż = Λz.

When solving differential equations, we know we should get solutions containing exponentials. Let us therefore try one possible "exponential trial solution" in Eq. (9B.3) by letting:

q = q(0)e^{λt}   (9B.4)

where q(0) is a constant vector determined by the initial conditions of the states. Then q̇ = λq and, substituting into Eq. (9B.3), we get:

Aq = λq   (9B.5)

Therefore:

(A - λI)q = 0   (9B.6)

For a non-trivial solution (one where q ≠ 0), we need to have:

|A - λI| = 0   (9B.7)

This is called the characteristic equation of the system. The eigenvalues are the values of λ which satisfy |A - λI| = 0. Each column vector q⁽ⁱ⁾ which satisfies the original equation Eq. (9B.5) is called a column eigenvector. Once we have all the λ's:

|A - λI| = 0,      λi = eigenvalue
Aq⁽ⁱ⁾ = λi q⁽ⁱ⁾,   q⁽ⁱ⁾ = eigenvector    (9B.8)

Signals and Systems 2014
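The characteristic-equation condition is easy to test numerically. A small sketch, using the 3×3 matrix of the worked example that follows, checks that each claimed eigenvalue makes |A - λI| vanish:

```python
# The 3x3 example matrix from the worked example, with eigenvalues -2, -3, 11.
A = [[2, 3, 2], [10, 3, 4], [3, 6, 1]]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def char_poly(lam):
    """Evaluate |A - lambda*I|."""
    M = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    return det3(M)

# Each eigenvalue must be a root of the characteristic equation.
values = [char_poly(lam) for lam in (-2, -3, 11)]
```

Because the arithmetic is all integer, the determinant evaluates to exactly zero at each eigenvalue.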

9B.3

An eigenvector corresponding to an eigenvalue is not unique – an eigenvector can be multiplied by any non-zero arbitrary constant. We therefore tend to choose the simplest eigenvectors to make the mathematics easy.

Example

Given a system's A matrix, we want to find the eigenvalues and eigenvectors.

A = [ 2  3  2]    (9B.9)
    [10  3  4]
    [ 3  6  1]

We find the eigenvalues first by solving:

|A - λI| = | 2-λ   3    2  | = 0    (9B.10)
           | 10   3-λ   4  |
           | 3     6   1-λ |

Evaluating the determinant, we get the characteristic equation:

λ³ - 6λ² - 49λ - 66 = 0   (9B.11)

Factorising the characteristic equation, we get:

(λ + 2)(λ + 3)(λ - 11) = 0   (9B.12)

The solutions to the characteristic equation are the eigenvalues:

λ1 = -2,  λ2 = -3,  λ3 = 11   (9B.13)

Signals and Systems 2014

9B.4

To find the eigenvectors, substitute each λ into (A - λI)q = 0 and solve for q. Take λ1 = -2:

( [ 2  3  2]      [1 0 0] )        [4   3  2][q1]   [0]
( [10  3  4] + 2  [0 1 0] ) q = 0, [10  5  4][q2] = [0]    (9B.14)
( [ 3  6  1]      [0 0 1] )        [3   6  3][q3]   [0]

Solve to get:

q⁽¹⁾ = [ 1]  for λ1 = -2    (9B.15)
       [ 2]
       [-5]

The other eigenvectors are:

q⁽²⁾ = [ 0]  for λ2 = -3   and   q⁽³⁾ = [2]  for λ3 = 11    (9B.16)
       [ 2]                             [4]
       [-3]                             [3]

Similarity Transform

The eigenvalues and eigenvectors that arise from Eq. (9B.5) are put to good use by transforming q̇ = Aq into ż = Λz. First, form the square n × n matrix:

U = [q⁽¹⁾ q⁽²⁾ … q⁽ⁿ⁾] = [ q⁽¹⁾₁  q⁽²⁾₁  …  q⁽ⁿ⁾₁ ]    (9B.17)
                        [ q⁽¹⁾₂  q⁽²⁾₂  …  q⁽ⁿ⁾₂ ]
                        [  ⋮      ⋮          ⋮   ]
                        [ q⁽¹⁾ₙ  q⁽²⁾ₙ  …  q⁽ⁿ⁾ₙ ]

The columns of the U matrix are the column eigenvectors corresponding to each of the n eigenvalues. Then, since the columns of U are solutions to:

Aq⁽ⁱ⁾ = λi q⁽ⁱ⁾   (9B.18)

Signals and Systems 2014
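Each eigenpair above can be verified directly from the defining relation Aq⁽ⁱ⁾ = λi q⁽ⁱ⁾. A minimal check:

```python
# The example's A matrix and its eigenpairs (lambda, eigenvector).
A = [[2, 3, 2], [10, 3, 4], [3, 6, 1]]
pairs = [(-2, [1, 2, -5]), (-3, [0, 2, -3]), (11, [2, 4, 3])]

def mat_vec(M, v):
    """3x3 matrix times 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# A q should equal lambda * q for every eigenpair.
checks = [mat_vec(A, q) == [lam * qi for qi in q] for lam, q in pairs]
```

Scaling any eigenvector by a non-zero constant leaves the check true, which is exactly the non-uniqueness noted in the text.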

9B.5

then by some simple matrix manipulation, we get:

AU = UΛ   (9B.19)

where Λ is the diagonal matrix of eigenvalues:

Λ = [λ1  0   0  …  0 ]    (9B.20)
    [0   λ2  0  …  0 ]
    [⋮           ⋱  ⋮ ]
    [0   0   0  …  λn]

Example

From the previous example, we can confirm the following relationship:

AU = UΛ

[ 2  3  2][ 1   0  2]   [ 1   0  2][-2   0   0]
[10  3  4][ 2   2  4] = [ 2   2  4][ 0  -3   0]    (9B.21)
[ 3  6  1][-5  -3  3]   [-5  -3  3][ 0   0  11]

Perform the multiplication shown to verify the result.

The matrix U is called the right-hand eigenvector matrix. If, instead of Eq. (9B.5), we were to solve the equation qᵀA = λqᵀ we get the same eigenvalues but different eigenvectors. Also, we can form an n × n matrix V so that:

VA = ΛV   (9B.22)

where V is made up of rows of left-hand eigenvectors:

V = [qᵀ⁽¹⁾] = [ q⁽¹⁾₁  q⁽¹⁾₂  …  q⁽¹⁾ₙ ]    (9B.23)
    [qᵀ⁽²⁾]   [ q⁽²⁾₁  q⁽²⁾₂  …  q⁽²⁾ₙ ]
    [  ⋮  ]   [  ⋮      ⋮          ⋮   ]
    [qᵀ⁽ⁿ⁾]   [ q⁽ⁿ⁾₁  q⁽ⁿ⁾₂  …  q⁽ⁿ⁾ₙ ]

Signals and Systems 2014
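Rather than multiplying Eq. (9B.21) out by hand, the relationship AU = UΛ can be confirmed in a few lines:

```python
# Matrices from the worked example.
A = [[2, 3, 2], [10, 3, 4], [3, 6, 1]]
U = [[1, 0, 2], [2, 2, 4], [-5, -3, 3]]
Lam = [[-2, 0, 0], [0, -3, 0], [0, 0, 11]]

def mat_mul3(X, Y):
    """3x3 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

AU = mat_mul3(A, U)
UL = mat_mul3(U, Lam)
# AU and UL should be identical: each column of U is an eigenvector,
# so A scales it by the corresponding diagonal entry of Lambda.
```

Since all entries are integers, the two products match exactly.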

9B.6

Since eigenvectors can be arbitrarily scaled by any non-zero constant, it can be shown that we can choose V such that:

VU = I   (9B.24)

which implies:

V = U⁻¹   (9B.25)

This is the relationship between the two similarity transforms. Now, starting from Eq. (9B.19), pre-multiply by V:

AU = UΛ
VAU = VUΛ   (9B.26)

but using Eq. (9B.25), this turns into:

Λ = U⁻¹AU   (9B.27)

That is, the similarity transform diagonalizes the A matrix.

Solution of the State Equations for the ZIR

Given:

q̇ = Aq   (9B.28)

and q(0), we would like to determine the ZIR q_ZIR(t).

First, put q = Uz and substitute into Eq. (9B.28) to give:

Uż = AUz   (9B.29)

Now pre-multiply by U⁻¹:

ż = U⁻¹AUz   (9B.30)

Signals and Systems 2014

9B.7

and using Eq. (9B.27), the end result of the change of variable is the diagonal form of the state equations:

ż = Λz   (9B.31)

Written explicitly, this is:

[ż1]   [λ1  0   0  0 ][z1]
[ż2] = [0   λ2  0  0 ][z2]    (9B.32)
[⋮ ]   [0   0   ⋱  0 ][⋮ ]
[żn]   [0   0   0  λn][zn]

This is just a set of n independent first-order differential equations:

ż1 = λ1 z1
ż2 = λ2 z2
⋮
żn = λn zn   (9B.33)

Consider the first equation:

dz1/dt = λ1 z1   (9B.34)

The solution is easily seen to be:

z1 = z1(0) e^{λ1 t}   (9B.35)

Signals and Systems 2014

The solution to Eq. (9B.33) is therefore just:

    z1 = z1(0) e^(λ1 t)
    z2 = z2(0) e^(λ2 t)
      ⋮
    zn = zn(0) e^(λn t)      (9B.36)

or, written using matrix notation:

(Solution to the diagonal form of state equations)

    z = e^(Λt) z(0)      (9B.37)

The matrix e^(Λt)

(The matrix e^(Λt) defined)

The matrix e^(Λt) is defined by:

    e^(Λt) = I + Λt/1! + (Λt)²/2! + (Λt)³/3! + …

           = [e^(λ1 t)     0      …      0   ]
             [   0      e^(λ2 t)  …      0   ]      (9B.38)
             [   ⋮                       ⋮   ]
             [   0         0      …  e^(λn t)]

We now revert back to our original variable:

    q = Uz
      = U e^(Λt) z(0)
      = U e^(Λt) U⁻¹ q(0)      (9B.39)

and since we know the ZIR is q_ZIR(t) = φ(t) q(0), the transition matrix is:

(Transition matrix written in terms of eigenvalues and eigenvectors)

    φ(t) = U e^(Λt) U⁻¹      (9B.40)

This is a quick way to find the transition matrix φ(t) for high-order systems. The ZIR of the states is then just:

    q(t) = φ(t) q(0)
         = U e^(Λt) U⁻¹ q(0)      (9B.41)

Example

Given:

    q̇(t) = [ 0    1] q(t) + [0] x(t)
            [−2   −3]        [2]

    y(t) = [3  1] q(t)

    q(0) = [2]      (9B.42)
           [3]

find the ZIR.

We start by determining the eigenvalues:

    |A − λI| = 0

    |−λ      1  |
    |−2   −3 − λ| = λ² + 3λ + 2 = (λ + 1)(λ + 2) = 0

    λ1 = −1,   λ2 = −2      (9B.43)

Next we find the right-hand eigenvectors:

    u1 = [ 1],  u2 = [ 1],   ∴ U = [ 1    1]      (9B.44)
         [−1]        [−2]          [−1   −2]

and the left-hand eigenvectors:

    v1 = [2  1],  v2 = [−1  −1],   ∴ V = [ 2    1]      (9B.45)
                                         [−1   −1]

(The ZIR written in terms of eigenvalues and eigenvectors, and initial conditions)

As a check, we can see if UV = I. Forming Λ is easy:

    Λ = [−1    0]      (9B.46)
        [ 0   −2]

The transition matrix is also easy:

    φ(t) = U e^(Λt) U⁻¹

         = [ 1    1] [e^(−t)     0   ] [ 2    1]
           [−1   −2] [  0     e^(−2t)] [−1   −1]

         = [ 2e^(−t) − e^(−2t)      e^(−t) − e^(−2t) ]      (9B.47)
           [−2e^(−t) + 2e^(−2t)   −e^(−t) + 2e^(−2t) ]

The ZIR of the states is now just:

    q_ZIR(t) = φ(t) q(0)

             = [ 2e^(−t) − e^(−2t)      e^(−t) − e^(−2t) ] [2]
               [−2e^(−t) + 2e^(−2t)   −e^(−t) + 2e^(−2t) ] [3]

             = [ 7e^(−t) − 5e^(−2t) ]      (9B.48)
               [−7e^(−t) + 10e^(−2t)]

The ZIR of the output is then:

    y_ZIR(t) = [3  1] q_ZIR(t)
             = 14e^(−t) − 5e^(−2t)      (9B.49)
For higher-order systems and computer analysis, this method results in
considerable time and computational savings.
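The eigenvector route is easy to check numerically. This pure-Python sketch (an illustrative addition, not part of the notes) computes q(t) = U e^(Λt) U⁻¹ q(0) for the example system and compares the output against the closed-form ZIR y(t) = 14e^(−t) − 5e^(−2t):

```python
import math

# Example system: eigen-decomposition data from the worked example.
U = [[1, 1], [-1, -2]]
Uinv = [[2, 1], [-1, -1]]    # = V, since VU = I
lams = [-1.0, -2.0]          # eigenvalues of A
q0 = [2.0, 3.0]              # initial state
c = [3.0, 1.0]               # output row vector

def y_zir(t):
    # z(0) = U^-1 q(0); z_i(t) = z_i(0) e^(lam_i t); q(t) = U z(t)
    z0 = [Uinv[i][0] * q0[0] + Uinv[i][1] * q0[1] for i in range(2)]
    z = [z0[i] * math.exp(lams[i] * t) for i in range(2)]
    q = [U[i][0] * z[0] + U[i][1] * z[1] for i in range(2)]
    return c[0] * q[0] + c[1] * q[1]

for t in (0.0, 0.5, 1.0, 2.0):
    closed_form = 14 * math.exp(-t) - 5 * math.exp(-2 * t)
    assert abs(y_zir(t) - closed_form) < 1e-12
print("eigenvector ZIR matches closed form")
```

The intermediate vector z(0) = U⁻¹q(0) = [7, −5] is exactly the pair of coefficients that appears in the closed-form answer.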

Poles and Repeated Eigenvalues
Poles
We have seen before that the transfer function of a system using state-variables
is given by:

H s   cT Φs b
 cT sI  A  b
1

cT adjsI  A b

sI  A

(9B.50)

where we have used the formula for the inverse B 1  adj B B . We can see
that the poles of the system are formed directly by the characteristic equation
sI  A  0 . Thus, the poles of a system are given by the eigenvalues of the

matrix A .

poles = eigenvalues of A

(9B.51)

There is one qualifier to this statement: there could be a pole-zero cancellation,
in which case the corresponding pole will “disappear”. These are special cases,
and are termed uncontrollable and/or unobservable, depending on the state
assignments.
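For a second-order system the correspondence is easy to verify by machine. A small Python sketch (illustrative, not from the notes) finds the roots of |sI − A| = s² − tr(A)s + det(A) for the example matrix A = [[0, 1], [−2, −3]] used earlier; they are exactly its eigenvalues, −1 and −2:

```python
# For a 2x2 matrix, det(sI - A) = s^2 - trace(A)*s + det(A).
# Its roots (the poles of the system) are the eigenvalues of A.
A = [[0, 1], [-2, -3]]

trace = A[0][0] + A[1][1]                     # -3
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # 2

# characteristic equation: s^2 + 3s + 2 = 0, solved by the quadratic formula
disc = trace * trace - 4 * det
poles = sorted([(trace - disc ** 0.5) / 2, (trace + disc ** 0.5) / 2])
print(poles)  # [-2.0, -1.0] -- the eigenvalues found in the earlier example
```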
Repeated Eigenvalues
When eigenvalues are repeated, we get repeated eigenvectors if we try to solve Aq = λq. In these cases, it is not possible to diagonalize the original matrix (because the similarity matrix will have two or more repeated columns, and will be singular – hence no inverse).
In cases of repeated eigenvalues, the closest we can get to a diagonal form is a
Jordan canonical form. Handling repeated eigenvalues and examining the
Jordan form are topics for more advanced subjects in control theory.
Discrete-time State-Variables
The concepts of state, state vectors and state-variables can be extended to
discrete-time systems.
A discrete-time SISO system is described by the following equations:

(State and output equations for a discrete-time system)

    q[n + 1] = A q[n] + b x[n]
    y[n] = cᵀ q[n] + d x[n]      (9B.52)

Example
Given the following second-order linear difference equation:

    y[n] = y[n − 1] + y[n − 2] + x[n]      (9B.53)

we select:

    q1[n] = y[n − 1]
    q2[n] = y[n − 2]      (9B.54)

We now want to write the given difference equation as a set of equations in state-variable form. Now:

    q1[n] = y[n − 1]      (9B.55)

Therefore:

    q1[n + 1] = y[n]
              = y[n − 1] + y[n − 2] + x[n]      (9B.56)

so that:

    q1[n + 1] = q1[n] + q2[n] + x[n]      (9B.57)

Also:

    q2[n] = y[n − 2]      (9B.58)

Therefore:

    q2[n + 1] = y[n − 1]      (9B.59)

so that, using Eq. (9B.55):

    q2[n + 1] = q1[n]      (9B.60)

From Eq. (9B.53), we also have:

    y[n] = q1[n] + q2[n] + x[n]      (9B.61)

The equations are now in state-variable form, and we can write:

    q[n + 1] = [1  1] q[n] + [1] x[n]
               [1  0]        [0]

    y[n] = [1  1] q[n] + [1] x[n]      (9B.62)
Thus, we can proceed as above to convert any given difference equation to
state-variable form.
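The equivalence of the two descriptions can be checked by simulation. The following Python sketch (an illustration, not from the notes) drives both the direct recursion and the state-variable form of the example with a unit pulse and compares the outputs (which, for this particular system, are Fibonacci numbers):

```python
# Direct recursion y[n] = y[n-1] + y[n-2] + x[n] versus the
# state-variable form of Eq. (9B.62), for a unit-pulse input.
N = 10
x = [1] + [0] * (N - 1)

# direct recursion (with y[-1] = y[-2] = 0)
y_direct = []
for n in range(N):
    ym1 = y_direct[n - 1] if n >= 1 else 0
    ym2 = y_direct[n - 2] if n >= 2 else 0
    y_direct.append(ym1 + ym2 + x[n])

# state-variable simulation: q[n+1] = A q[n] + b x[n], y = c q + d x
q = [0, 0]
y_state = []
for n in range(N):
    y_state.append(q[0] + q[1] + x[n])      # y = [1 1]q + [1]x
    q = [q[0] + q[1] + x[n], q[0]]          # A = [[1,1],[1,0]], b = [1,0]

print(y_direct == y_state)  # True -- Fibonacci numbers either way
```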
Example

If we are given the transfer function:

    H(z) = (b0 + b1 z⁻¹ + … + bN z⁻ᴺ) / (1 + a1 z⁻¹ + … + aN z⁻ᴺ)      (9B.63)

then:

    Y(z)/X(z) = [(b0 + b1 z⁻¹ + … + bN z⁻ᴺ) P(z)] / [(1 + a1 z⁻¹ + … + aN z⁻ᴺ) P(z)]      (9B.64)

From the right-hand side, we have:

    P(z) = −a1 z⁻¹ P(z) − a2 z⁻² P(z) − … − aN z⁻ᴺ P(z) + X(z)      (9B.65)
Now select the state variables as:

    Q1(z) = z⁻¹ P(z)
    Q2(z) = z⁻² P(z)
      ⋮
    QN(z) = z⁻ᴺ P(z)      (9B.66)

The state equations are then built up as follows. From the first equation in Eq. (9B.66):

    Q1(z) = z⁻¹ P(z)

    z Q1(z) = P(z)
            = −a1 z⁻¹ P(z) − a2 z⁻² P(z) − … − aN z⁻ᴺ P(z) + X(z)
            = −a1 Q1(z) − a2 Q2(z) − … − aN QN(z) + X(z)      (9B.67)

Taking the inverse z-transform, we get:

    q1[n + 1] = −a1 q1[n] − a2 q2[n] − … − aN qN[n] + x[n]      (9B.68)

From the second equation in Eq. (9B.66), we have:

    Q2(z) = z⁻² P(z)
          = z⁻¹ (z⁻¹ P(z))
          = z⁻¹ Q1(z)

    z Q2(z) = Q1(z)      (9B.69)

Taking the inverse z-transform gives us:

    q2[n + 1] = q1[n]      (9B.70)

Similarly:

    qN[n + 1] = qN−1[n]      (9B.71)

We now have all the state equations. Returning to Eq. (9B.64), we have:

    Y(z) = b0 P(z) + b1 z⁻¹ P(z) + … + bN z⁻ᴺ P(z)      (9B.72)

Taking the inverse z-transform gives:

    y[n] = b0 q1[n + 1] + b1 q1[n] + b2 q2[n] + … + bN qN[n]      (9B.73)

Eliminating the q1[n + 1] term using Eq. (9B.68) and grouping like terms gives:

    y[n] = (b1 − a1 b0) q1[n] + (b2 − a2 b0) q2[n] + …
           + (bN − aN b0) qN[n] + b0 x[n]      (9B.74)

which is in the required form.

Using matrix notation we therefore have:

    q[n + 1] = [−a1  −a2  …  −aN−1  −aN] q[n] + [1] x[n]
               [ 1    0   …    0     0 ]        [0]
               [ 0    1   …    0     0 ]        [0]      (9B.75)
               [ ⋮               ⋮      ]        [⋮]
               [ 0    0   …    1     0 ]        [0]

    y[n] = [c1  c2  …  cN] q[n] + b0 x[n]

where ci = bi − ai b0, i = 1, 2, …, N.

Thus we can convert H(z) to state-variable form.
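The construction above is mechanical, so it can be captured in a short routine. This Python sketch (illustrative; the coefficient values are hypothetical, chosen only for the test) builds the matrices of Eq. (9B.75) for N = 2 and checks the resulting state-space model against the difference equation it represents:

```python
# Build the companion-form (A, b, c, d) of Eq. (9B.75) from the
# coefficients of H(z), then verify by simulating a unit pulse.
def companion_form(bs, as_):
    """bs = [b0, ..., bN], as_ = [a1, ..., aN]; returns (A, bvec, c, d)."""
    N = len(as_)
    A = [[-a for a in as_]] + \
        [[1 if j == i else 0 for j in range(N)] for i in range(N - 1)]
    bvec = [1] + [0] * (N - 1)
    c = [bs[i + 1] - as_[i] * bs[0] for i in range(N)]
    d = bs[0]
    return A, bvec, c, d

bs, as_ = [1.0, 0.5, 0.25], [-0.5, 0.125]   # hypothetical coefficients
A, bvec, c, d = companion_form(bs, as_)

x = [1.0] + [0.0] * 9                        # unit pulse
# simulate the state-space model
q = [0.0, 0.0]
y_ss = []
for xn in x:
    y_ss.append(c[0] * q[0] + c[1] * q[1] + d * xn)
    q = [A[0][0] * q[0] + A[0][1] * q[1] + bvec[0] * xn,
         A[1][0] * q[0] + A[1][1] * q[1]]

# simulate the difference equation
# y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] - a1 y[n-1] - a2 y[n-2]
y_de = []
for n in range(10):
    y = bs[0] * x[n]
    y += bs[1] * (x[n - 1] if n >= 1 else 0.0)
    y += bs[2] * (x[n - 2] if n >= 2 else 0.0)
    y -= as_[0] * (y_de[n - 1] if n >= 1 else 0.0)
    y -= as_[1] * (y_de[n - 2] if n >= 2 else 0.0)
    y_de.append(y)

print(all(abs(u - v) < 1e-12 for u, v in zip(y_ss, y_de)))  # True
```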

Discrete-time Response
Once we have the equations in state-variable form we can then obtain the
discrete-time response.
For a SISO system we have:

    q[n + 1] = A q[n] + b x[n]
    y[n] = cᵀ q[n] + d x[n]      (9B.76)

First establish the states for the first few values of n:

    q[1] = A q[0] + b x[0]

    q[2] = A q[1] + b x[1]
         = A(A q[0] + b x[0]) + b x[1]
         = A² q[0] + A b x[0] + b x[1]

    q[3] = A q[2] + b x[2]
         = A³ q[0] + A² b x[0] + A b x[1] + b x[2]      (9B.77)

The general formula can then be seen to be:

    q[n] = Aⁿ q[0] + Aⁿ⁻¹ b x[0] + …
           + A b x[n − 2] + b x[n − 1],   n = 1, 2, …      (9B.78)

We now define:

(Fundamental matrix defined)

    φ[n] = Aⁿ = fundamental matrix      (9B.79)

Using Eq. (9B.78) and the above definition, the response of the discrete-time system to any input is given by:

(Solution to the discrete-time state equations in terms of a convolution summation)

    q[n] = φ[n] q[0] + Σ_{i=0}^{n−1} φ[n − i − 1] b x[i]

    y[n] = cᵀ q[n] + d x[n]      (9B.80)

This is the expected form of the output response. For the states, it can be seen that:

(The ZIR and ZSR for a discrete-time state-variable system)

    q_ZIR[n] = φ[n] q[0]

    q_ZSR[n] = Σ_{i=0}^{n−1} φ[n − i − 1] b x[i]      (9B.81)

Discrete-time Transfer Function

We can determine the transfer function from the state-variable representation in the same manner as we did for continuous-time systems. Take the z-transform of Eq. (9B.76) to get:

    z Q(z) − z q(0) = A Q(z) + b X(z)

    (zI − A) Q(z) = z q(0) + b X(z)      (9B.82)

Therefore:

    Q(z) = (zI − A)⁻¹ z q(0) + (zI − A)⁻¹ b X(z)      (9B.83)

Similarly:

    Y(z) = z cᵀ (zI − A)⁻¹ q(0) + [cᵀ (zI − A)⁻¹ b + d] X(z)      (9B.84)

For the transfer function, we put all initial conditions to zero, q(0) = 0. Therefore, the transfer function is:

(The discrete-time transfer function in terms of state-variable quantities)

    H(z) = cᵀ (zI − A)⁻¹ b + d      (9B.85)

To get the unit-pulse response, set the initial conditions to zero and apply a unit pulse:

    h[0] = d
    h[n] = cᵀ φ[n − 1] b,   n = 1, 2, …      (9B.86)

Using Eq. (9B.79), we get:

(The unit-pulse response in terms of state-variable quantities)

    h[0] = d
    h[n] = cᵀ Aⁿ⁻¹ b,   n = 1, 2, …      (9B.87)
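The unit-pulse formula can be verified by simulation. The following Python sketch (an illustrative addition) computes h[n] = cᵀAⁿ⁻¹b directly, and also simulates the state equations with a unit-pulse input and zero initial state, using the A, b and c of the worked example that follows:

```python
# Check h[0] = d, h[n] = c^T A^(n-1) b against a direct unit-pulse simulation.
A = [[0.0, 1.0], [-1.0 / 6.0, 5.0 / 6.0]]
b = [0.0, 1.0]
c = [-1.0, 5.0]
d = 0.0

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# formula: h[n] = c^T A^(n-1) b, with h[0] = d
h_formula = [d]
v = b[:]                     # holds A^(n-1) b
for n in range(1, 8):
    h_formula.append(c[0] * v[0] + c[1] * v[1])
    v = matvec(A, v)

# simulation: unit pulse in, zero initial state
q = [0.0, 0.0]
h_sim = []
for n in range(8):
    xn = 1.0 if n == 0 else 0.0
    h_sim.append(c[0] * q[0] + c[1] * q[1] + d * xn)
    q = [A[0][0] * q[0] + A[0][1] * q[1] + b[0] * xn,
         A[1][0] * q[0] + A[1][1] * q[1] + b[1] * xn]

print(all(abs(u - v) < 1e-12 for u, v in zip(h_formula, h_sim)))  # True
```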

Example

A linear time-invariant discrete-time system is given by the figure shown:

    [Figure: x[n] feeds a cascade of two unit delays D with states q2[n] and q1[n], feedback gains 5/6 and 1/6, and output taps 5 and 1 forming y[n]]

We would like to find the output y[n] if the input is x[n] = u[n] and the initial conditions are q1(0) = 2 and q2(0) = 3.

Recognizing that q2[n] = q1[n + 1], the state equations are:

    [q1[n + 1]] = [  0      1 ] [q1[n]] + [0] x[n]      (9B.88)
    [q2[n + 1]]   [−1/6   5/6 ] [q2[n]]   [1]

and:

    y[n] = [−1  5] [q1[n]]      (9B.89)
                   [q2[n]]

To find the solution using time-domain techniques [Eq. (9B.81)], we must determine φ[n] = Aⁿ. One way of finding this is to first determine a similarity transform to diagonalize A.

If such a transform can be found, then Λ = U⁻¹AU. Rearranging, we then have:

    A = U Λ U⁻¹
    A² = A·A = U Λ U⁻¹ U Λ U⁻¹ = U Λ² U⁻¹
      ⋮
    Aⁿ = U Λⁿ U⁻¹      (9B.90)

The characteristic equation of A is:

    |λI − A| = λ² − (5/6)λ + 1/6 = (λ − 1/3)(λ − 1/2) = 0      (9B.91)

Hence, λ1 = 1/3 and λ2 = 1/2 are the eigenvalues of A. The associated eigenvectors are, for λ1 = 1/3:

    [1/3    −1 ] [x1] = [0]   so choose  u1 = [3]      (9B.92)
    [1/6   −1/2] [x2]   [0]                   [1]

and:

    Λ = [1/3    0 ]      (9B.93)
        [ 0    1/2]

For λ2 = 1/2:

    [1/2    −1 ] [x1] = [0]   so choose  u2 = [2]      (9B.94)
    [1/6   −1/3] [x2]   [0]                   [1]

Therefore:

    U = [3  2]      (9B.95)
        [1  1]

The inverse of U is readily found to be:

    U⁻¹ = adj(U)/|U| = [ 1  −2]      (9B.96)
                       [−1   3]

21 Therefore φn  A n  UΛ n U 1 .99) y ZSR n   cT φn  i  1bxi  i 0 We have: cT φn  i  1b 31 3ni 1  21 2ni 1   1 5 n i 1 n i 1  1 2  1 3  61 3  61 2  0  n i 1 n i 1     21 3  31 2  1 n i 1 n i 1  61 3ni 1  61 2ni 1    1 5 n i 1 n i 1   31 2   21 3   41 3 n i 1  91 2 n i 1 (9B. using Eq.81): n 1 (9B.97) To solve for the ZIR.81): y ZIR n  cT φnq0 31 3n  21 2n  61 3n  61 2n  2   1 5 n n n n    21 3  31 2  3  1 3  1 2  121 3n  141 2n    1 5 n n    41 3  71 2   81 3  211 2 n (9B. we have. (9B.9B. and: 3 2 1 3n 0   1  2 φn      1 2n   1 3  1 1  0 3 2  1 3n  21 3n    n n  31 2  1 1  1 2 31 3n  21 2 n  61 3n  61 2n   n n n n  21 3  31 2   1 3  1 2  (9B.98) n To solve for the ZSR. using Eq. we have.100) Signals and Systems 2014 . (9B.

so that:

    y_ZSR[n] = Σ_{i=0}^{n−1} [−4(1/3)^(n−i−1) + 9(1/2)^(n−i−1)] u[i]

             = −4(1/3)^(n−1) Σ_{i=0}^{n−1} 3^i + 9(1/2)^(n−1) Σ_{i=0}^{n−1} 2^i

             = −6[1 − (1/3)ⁿ] + 18[1 − (1/2)ⁿ]

             = 12 + 6(1/3)ⁿ − 18(1/2)ⁿ      (9B.101)

for n ≥ 0. Therefore the total response is:

    y[n] = −8(1/3)ⁿ + 21(1/2)ⁿ + 12 + 6(1/3)ⁿ − 18(1/2)ⁿ
         = 12 − 2(1/3)ⁿ + 3(1/2)ⁿ,   n ≥ 0      (9B.102)

The solution using z-transforms follows directly from Eq. (9B.84):

    Y(z) = z cᵀ (zI − A)⁻¹ q(0) + [cᵀ (zI − A)⁻¹ b + d] X(z)

Carrying out the matrix inverse, with X(z) = z/(z − 1), and expanding in partial fractions gives:

    Y(z) = −8z/(z − 1/3) + 21z/(z − 1/2)
           + 12z/(z − 1) + 6z/(z − 1/3) − 18z/(z − 1/2)      (9B.103)

Therefore:

    y[n] = [−8(1/3)ⁿ + 21(1/2)ⁿ] + [12 + 6(1/3)ⁿ − 18(1/2)ⁿ],   n ≥ 0      (9B.104)
             zero-input response       zero-state response

Obviously, the two solutions obtained using different techniques are in perfect agreement.
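Because the example's matrix entries are rational, the whole calculation can be checked exactly. This Python sketch (an illustrative addition) uses the standard-library fractions.Fraction to simulate the state equations with q(0) = [2; 3] and a unit-step input, and verifies the closed-form total response term by term:

```python
from fractions import Fraction as F

# Exact simulation of the worked example: q[n+1] = A q[n] + b u[n],
# y[n] = c^T q[n], checking y[n] = 12 - 2(1/3)^n + 3(1/2)^n.
A = [[F(0), F(1)], [F(-1, 6), F(5, 6)]]
b = [F(0), F(1)]
c = [F(-1), F(5)]
q = [F(2), F(3)]          # initial state q(0)

for n in range(12):
    y = c[0] * q[0] + c[1] * q[1]
    closed = 12 - 2 * F(1, 3) ** n + 3 * F(1, 2) ** n
    assert y == closed    # exact rational arithmetic, no rounding
    x = F(1)              # unit-step input
    q = [A[0][0] * q[0] + A[0][1] * q[1] + b[0] * x,
         A[1][0] * q[0] + A[1][1] * q[1] + b[1] * x]

print("y[n] = 12 - 2(1/3)^n + 3(1/2)^n verified for n = 0..11")
```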

9B.23
Summary

The similarity transform uses knowledge of a system's eigenvalues and eigenvectors to reduce a high-order coupled system to a simple diagonal form.

A diagonal form exists only when there are no repeated eigenvalues.

A system’s poles and eigenvalues are equal.

Discrete-time state-variable representation can be used to derive the
complete response of a system, as well as the transfer function and unit-pulse response.

References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB®, Prentice-Hall, 1997.
Shinners, S.: Modern Control System Theory and Application, Addison-Wesley, 1978.

Exercises
1.
Find Λ , U and V for the system described by:
0
q   0
 6
y  0 0

0
1
 0


0
1 q   2
7  x
 20  5
 11  6
1

1q

Note: U and V should be found directly (i.e. do not find V by taking the inverse of U). You can then verify your solution by:
(i)

checking that VU  I

(ii)

checking that U  Λ  V  A

2.
Consider the system:
q1  q1  x
q 2  q1  2q2  x
with:
0
q0   
 1
(a) Find the system eigenvalues and find the zero input response of the system.
(b) If the system is given a unit-step input with the same initial conditions, find q(t). (Use the resolvent matrix to obtain the time solution.) What do you notice about the output?

3.
A system employing state-feedback is described by the following equations:

    [q̇1]   [ 0    1    0] [q1]   [0]
    [q̇2] = [ 0    0    1] [q2] + [0] x(t)
    [q̇3]   [−5   −7   −3] [q3]   [1]

    y = [−2  4  3] [q1]
                   [q2]
                   [q3]

    x(t) = −[k1  k2  k3] [q1] + r(t)
                         [q2]
                         [q3]

(i)

Draw a block diagram of the state-feedback system.

(ii)

Find the poles of the system without state feedback.

(iii)

Obtain values for k1 , k 2 and k 3 to place the closed-loop system poles
at -4, -4 and -5.

(iv)

Find the steady-state value of the output due to a unit-step input.

(v)

Comment upon the possible uses of this technique.

4.
For the difference equation:

    y[n] = 3x[n] + (5/4)x[n − 1] + (5/8)x[n − 2]
           − (1/4)y[n − 1] − (1/4)y[n − 2] − (1/16)y[n − 3]

(a) Show that:

    H(z) = [3z³ + (5/4)z² + (5/8)z] / [z³ + (1/4)z² + (1/4)z + 1/16]

(b) Form the state-variable description of the system.

(c) Find the transfer function H(z) from your answer in (b).

(d) Draw a block diagram of the state-variable description found in (b), and use block-diagram reduction to find H(z).

5.
Find y5 and y10 for the answer to Q4b given:
1
q0   1 and
 0 

 0
xn  
n
 1

n0
n0

(a) by calculating qn and yn for n  0, 1, , 10 directly from the state

equations.
(b) by using the fundamental matrix and finding y5 and y10 directly.

6.
A discrete-time system with:
H z  

k z  b 
z  z  a  z  c 

is to be controlled using unity feedback. Find a state-space representation of
the resulting closed-loop system.

Appendix A - The Fast Fourier Transform
The discrete-time Fourier transform (DTFT). The discrete Fourier transform (DFT). The fast Fourier transform (FFT).

Overview
(The motivation behind developing the FFT)

Digital signal processing is becoming prevalent throughout engineering. We have digital audio equipment (CDs, MP3s), digital video (MPEG2 and MPEG4, DVD, digital TV), and digital phones (fixed and mobile). An increasing number (billions!) of embedded systems exist that rely on a digital computer (normally a microcontroller). They take input signals from the real analog world, convert them to digital signals, process them digitally, and produce outputs that are again suitable for the real analog world. (Think of the computers controlling any modern form of transport – car, plane, boat – or those that control nearly all industrial processes.) Our motivation is to extend our existing frequency-domain analytical techniques to the "digital world".
(Samples contain all the information of a signal)

Our reason for hope that this can be accomplished is the fact that a signal's samples can convey the complete information about a signal (if Nyquist's criterion is met). We should be able to turn those troublesome continuous-time integrals into simple summations – a task easily carried out by a digital computer.

The Discrete-Time Fourier Transform (DTFT)
To illustrate the derivation of the discrete-time Fourier transform, we will consider the signal and its Fourier transform below:

(A strictly time-limited signal has an infinite bandwidth)

    [Figure: a time-limited pulse x(t) of height A, and its spectrum X(f) of height C, which extends beyond ±2B]

Figure F.1
Since the signal is strictly time-limited (it only exists for a finite amount of time), its spectrum must be infinite in extent. We therefore cannot choose a sample rate high enough to satisfy Nyquist's criterion (and therefore prevent aliasing). However, in practice we normally find that the spectral content of signals drops off at high frequencies, so that the signal is essentially bandlimited to B:

(A time-limited signal which is also essentially bandlimited)

    [Figure: the pulse x(t) and its spectrum X(f), essentially bandlimited to ±B]

Figure F.2
We will now assume that Nyquist's criterion is met if we sample the time-domain signal at a sample rate fs = 2B.
If we ideally sample using a uniform train of impulses (with spacing Ts = 1/fs), the mathematical expression for the sampled time-domain waveform is:

    xs(t) = x(t) Σ_{k=−∞}^{∞} δ(t − kTs)
          = Σ_{k=−∞}^{∞} x[k] δ(t − kTs)      (F.1)

The corresponding operation in the frequency-domain gives:

    Xs(f) = X(f) * fs Σ_{n=−∞}^{∞} δ(f − nfs)
          = fs Σ_{n=−∞}^{∞} X(f − nfs)      (F.2)

The sampled waveform and its spectrum are shown graphically below:

(A sampled signal and its spectrum)

    [Figure: the sampled signal xs(t) with impulse weights x(kTs) = x[k] (M samples with spacing Ts), and its periodic spectrum Xs(f) of height fsC, with B = fs/2]

Figure F.3
We are free to take as many samples as we like, so long as the window MTs spans the whole duration of the signal. That is, we need to ensure that our samples will encompass the whole signal in the time-domain. For a one-off waveform, we can also sample past the extent of the signal – a process known as zero padding.

Substituting xs(t) into the definition of the Fourier transform, we get:

    Xs(f) = ∫_{−∞}^{∞} xs(t) e^(−j2πft) dt
          = ∫_{−∞}^{∞} Σ_{k=−∞}^{∞} x[k] δ(t − kTs) e^(−j2πft) dt      (F.3)

Using the sifting property of the impulse function, this simplifies into the definition of the discrete-time Fourier transform (DTFT):

(The discrete-time Fourier transform defined)

    Xs(f) = Σ_{k=−∞}^{∞} x[k] e^(−j2πfkTs)      (F.4)

The DTFT is a continuous function of f. It is discrete in the sense that it operates on a discrete-time signal (in this case, the discrete-time signal corresponds to the weights of the impulses of the sampled signal).

As shown in Figure F.3, the DTFT is periodic with period fs = 2B, i.e. the range of frequencies −fs/2 ≤ f ≤ fs/2 uniquely specifies it.

The Discrete Fourier Transform (DFT)
The DTFT is a continuous function of f, but we need a discrete function of f to
be able to store it in a computer. Our reasoning now is that since samples of a
time-domain waveform (taken at the right rate) uniquely determine the original
waveform, then samples of a spectrum (taken at the right rate) uniquely
determine the original spectrum.

(The discrete Fourier transform is just samples of the DTFT)

One way to discretize the DTFT is to ideally sample it in the frequency-domain. Since the DTFT is periodic with period fs, we choose N samples per period, where N is an integer. This yields periodic spectrum samples, and we only need to compute N of them (the rest will be the same)!

The spacing between samples is then:

(The frequency spacing of a DFT)

    f0 = fs / N      (F.5)

which gives a time-domain relationship:

(The time-domain sample spacing of a DFT)

    T0 = N Ts      (F.6)

The ideally sampled spectrum Xss(f) is then:

    Xss(f) = Xs(f) Σ_{n=−∞}^{∞} δ(f − nf0)
           = Σ_{n=−∞}^{∞} Xs(nf0) δ(f − nf0)      (F.7)

The ideally sampled spectrum is shown below:

(Ideal samples of a DTFT spectrum)

    [Figure: the sampled spectrum Xss(f) of height fsC over −fs/2 ≤ f ≤ fs/2, with sample spacing f0 and N samples over one period]

Figure F.4

What signal does this spectrum correspond to in the time-domain? The corresponding operation of Eq. (F.7) is shown below in the time-domain:

    xss(t) = xs(t) * (1/f0) Σ_{k=−∞}^{∞} δ(t − kT0)
           = T0 Σ_{k=−∞}^{∞} xs(t − kT0)      (F.8)

This time-domain signal is shown below:

(Periodic extension of the time-domain signal due to ideally sampling the DTFT spectrum)

    [Figure: the periodic signal xss(t) of height AT0, with repeats at intervals T0 and M samples (spacing Ts) in each repeat]

Figure F.5
We see that sampling in the frequency-domain causes periodicity in the time-domain. What we have created is a periodic extension of the original sampled signal, but scaled in amplitude. From Figure F.5, we can see that no time-domain aliasing occurs (no overlap of repeats of the original sampled signal) if:

    T0 ≥ MTs      (F.9)

Using Eq. (F.6), this means:

    NTs ≥ MTs   or   N ≥ M      (F.10)
If the original time-domain waveform is periodic, and our samples represent one period in the time-domain, then we must choose the frequency sample spacing to be f0 = 1/T0, where T0 is the period of the original waveform. The process of ideally sampling the spectrum at this spacing "creates" the original periodic waveform back in the time-domain. In this case, we must have T0 = MTs, and therefore NTs = MTs so that N = M.

We choose the number of samples N in the frequency-domain so that Eq. (F.10) is satisfied, but we also choose N to handle the special case above. In any case, setting N = M minimises the DFT calculations, and we therefore choose:

    N = M      (F.11)

    number of frequency-domain samples = number of time-domain samples

If we do this, then the reconstructed sampled waveform has its repeats next to each other:

    [Figure: the periodic signal xss(t) with adjacent repeats, M samples in each repeat with spacing Ts]

Figure F.6

(The FFT produces the same number of samples in the frequency-domain as there are in the time-domain)

Returning now to the spectrum in Figure F.4, we need to find the sampled spectrum impulse weights covering just one period, say from 0 ≤ f < fs. These are given by the weights of the impulses in Eq. (F.7) where 0 ≤ n ≤ N − 1.

The DTFT evaluated at those frequencies is then obtained by substituting f = nf0 into Eq. (F.4):

    Xs(nf0) = Σ_{k=0}^{N−1} x[k] e^(−j2πnf0kTs)      (F.12)

Notice how the infinite summation has turned into a finite summation over M = N samples of the waveform, since we know the waveform is zero for all the other values of k.

Now using Eq. (F.5), we have f0Ts = 1/N. We then have the definition of the discrete Fourier transform (DFT):

(The discrete Fourier transform defined)

    X[n] = Σ_{k=0}^{N−1} x[k] e^(−j2πnk/N),   0 ≤ n ≤ N − 1      (F.13)

The DFT takes an input vector x[k] (time-spacing unknown – just a function of k) and produces an output vector X[n] (frequency-spacing unknown – just a function of n). It is up to us to interpret the input and output of the DFT.
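Eq. (F.13) translates directly into code. The following pure-Python sketch (illustrative; real work would use an FFT routine) implements the O(N²) DFT sum and applies it to one cycle of a sampled cosine, whose energy lands entirely in bins n = 1 and n = N − 1 (the "negative frequency" bin):

```python
import cmath
import math

# A direct implementation of Eq. (F.13): X[n] = sum_k x[k] e^(-j 2 pi n k / N).
# This is O(N^2); the FFT computes the same result in O(N log N).
def dft(x):
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * n * k / N)
                for k in range(N))
            for n in range(N)]

# One full cycle of a cosine across the window of N samples.
N = 8
x = [math.cos(2 * math.pi * k / N) for k in range(N)]
X = dft(x)
magnitudes = [round(abs(v), 6) for v in X]
print(magnitudes)  # [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]
```

Note the value N/2 = 4 in the two occupied bins: the DFT of a unit-amplitude cosine splits its weight between the positive- and negative-frequency bins.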

The Fast Fourier Transform (FFT)
(The FFT is a computationally fast version of the DFT)

The Fast Fourier Transform is really a family of algorithms that are used to evaluate the DFT. They are optimised to take advantage of the periodicity inherent in the exponential term in the DFT definition. The roots of the FFT algorithm go back to the great German mathematician Gauss in the early 1800s, but it was formally introduced by Cooley and Tukey in their paper "An Algorithm for the Machine Calculation of Complex Fourier Series," Math. Comput. 19, no. 2, April 1965: 297-301. Most FFTs are designed so that the number of sample points, N, is a power of 2.
All we need to consider here is the creation and interpretation of FFT results,
and not the algorithms behind them.

Creating FFTs

Knowing the background behind the DFT, we can now choose various parameters in the creation of the FFT to suit our purposes. For example, there may be a requirement for the FFT results to have a certain frequency resolution, or we may be restricted to a certain number of samples in the time-domain and we wish to know the frequency range and spacing of the FFT output. The important relationships that combine all these parameters are:

(The relationships between FFT sample parameters)

    T0 = NTs   or   fs = Nf0   or   N = T0 fs      (F.14)

Example

An analog signal with a known bandwidth of 2000 Hz is to be sampled at the minimum possible frequency, and the frequency resolution is to be 5 Hz. We need to find the sample rate, the time-domain window size, and the number of samples.

The minimum sampling frequency we can choose is the Nyquist rate of fs = 2B = 2 × 2000 = 4000 Sa/s. To achieve a frequency resolution of 5 Hz requires a window size of T0 = 1/f0 = 1/5 = 0.2 s. The resulting number of samples is then N = T0 fs = 0.2 × 4000 = 800 samples.
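The arithmetic of this example follows directly from Eq. (F.14). A minimal Python sketch (illustrative) of the same calculation:

```python
# Eq. (F.14) applied to the example: bandwidth B = 2000 Hz,
# required frequency resolution f0 = 5 Hz.
B = 2000.0
f0 = 5.0

fs = 2 * B          # minimum (Nyquist) sample rate, in Sa/s
T0 = 1 / f0         # time-domain window size, in s
N = T0 * fs         # number of samples

print(fs, T0, N)    # 4000.0 0.2 800.0
```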

Example

An analog signal is viewed on a DSO with a window size of 1 ms. The DSO takes 1024 samples. What is the frequency resolution of the spectrum, and what is the folding frequency (half the sample rate)?

The frequency resolution is f0 = 1/T0 = 1/0.001 = 1 kHz. The sample rate is fs = Nf0 = 1024 × 1000 = 1.024 MHz. The folding frequency is therefore fs/2 = 512 kHz, and this is the maximum frequency displayed on the DSO spectrum.

Example

A simulation of a system and its signals is being performed using MATLAB®. The following code shows how to set up the appropriate time and frequency vectors if the sample rate and number of samples are specified:

% Sample rate
fs=1e6; Ts=1/fs;
% Number of samples
N=1000;
% Time window
T0=N*Ts; f0=1/T0;
% Time vector of N time samples spaced Ts apart
t=0:Ts:T0-Ts;
% Frequency vector of N frequencies spaced f0 apart
f=-fs/2:f0:fs/2-f0;

The frequency resolution of the FFT in this case will be f0 = 1 kHz and the output will range from –500 kHz to 499 kHz. The frequency vector is used to graph the shifted (double-sided) spectrum produced by the code below:

G=fftshift(fft(g));

Note carefully how the time and frequency vectors were specified so that the last value does not coincide with the first value of the second periodic extension or spectrum repeat.

Interpreting FFTs

The output of the FFT can be interpreted in four ways, depending on how we interpret the time-domain values that we feed into it.

Case 1 – Ideally sampled one-off waveform

If the FFT input, x[n], is the weights of the impulses of an ideally sampled time-limited "one-off" waveform, then we know the FT is a periodic repeat of the original unsampled waveform. The DFT gives the value of the FT at frequencies nf0 for the first spectral repeat. This interpretation comes directly from Eq. (F.12). With our example waveform, we would have:

(Interpretation of the FFT of an ideally sampled one-off waveform)

    [Figure: the real one-off sampled signal xs(t) (impulse weights x(nTs) = x[n]) becomes the discrete signal x[n] input to the computer; its FT Xs(f) (real spectrum, height fsC) corresponds to the discrete spectrum of FFT samples X[n]]

Figure F.7

Case 2 – Ideally sampled periodic waveform

Consider the case where the FFT input, x[n], is the weights of the impulses of an ideally sampled periodic waveform over one period, with period T0. According to Eq. (F.8), the DFT gives the FT impulse weights for the first spectral repeat, if the time-domain waveform were scaled by T0. To get the FT, we therefore have to scale the DFT by f0 and recognise that the spectrum is periodic. With our example waveform, we would have:

(Interpretation of the FFT of an ideally sampled periodic waveform)

    [Figure: the real periodic sampled signal xs(t) (impulse weights x(nTs) = x[n]) becomes the discrete signal input to the computer; its FT Xs(f) (impulse weights f0X[n]) corresponds to the discrete spectrum of FFT samples X[n]]

Figure F.8

Case 3 – Continuous time-limited waveform

If we ideally sample the "one-off" waveform at intervals Ts, we get a signal corresponding to Case 1. The sampling process creates periodic repeats of the original spectrum, scaled by fs. The DFT gives the value of the FT of the sampled waveform at frequencies nf0 for the first spectral repeat. This interpretation comes directly from Eq. (F.12). To undo the scaling and spectral repeats caused by sampling, we should multiply the Case 1 spectrum by Ts and filter out all periodic repeats except for the first. All we have to do is scale the DFT output by Ts to obtain the true FT at frequencies nf0.

(Interpretation of the FFT of a one-off waveform)

    [Figure: the real one-off waveform x(t) is ideally sampled (weights x(nTs) = x[n]) and input to the computer; the FFT samples X[n], scaled by Ts and with repeats removed by ideal reconstruction, recover the real spectrum X(f)]

Figure F.9

10 Signals and Systems 2014 human X [n ] fs C f interpretation . We are now essentially equivalent to Case 2. All we have to do is scale the DFT output by f 0Ts  1 N to obtain the true FT impulse weights. we must ideally sample the continuous signal at intervals Ts to give xn .8).f s /2 0 f s /2 real spectrum Figure F. To undo the scaling and spectral repeats caused by sampling.f s /2 0 f s /2 discrete spectrum . scaled by f s . the DFT gives the FT impulse weights for the first spectral repeat.F. Interpretation of FFT of periodic waveform x (t ) A ideal x s (t ) A weights are x ( nTs) = x [ n ] 0 T0 t 0 T0 t sampling real periodic signal f0 C 0 computer N discrete signal FT weights are X [n ] Xn = N f . (F.f s /2 n 0 real periodic sampled signal FT X ( f) input to x [ n] A f s /2 real spectrum ideal Xs ( f ) FFT f0 fs C weights are f0 X [n ] reconstruction f . The sampling process creates periodic repeats of the original spectrum. According to Eq.14 Case 4 – Continuous periodic waveform To create the FFT input for this case. we should multiply the Case 2 spectrum by Ts and filter out all periodic repeats except for the first.

Appendix B - The Phase-Locked Loop

Phase-locked loop. Phase detector. Loop filter. Voltage controlled oscillator.

Overview

The phase-locked loop (PLL) is an important building block for many electronic systems. PLLs are used in frequency synthesisers, demodulators, clock multipliers and many other communications and electronic applications.

Distortion in Synchronous AM Demodulation

In suppressed carrier amplitude modulation schemes, the receiver requires a local carrier for synchronous demodulation. Ideally, the local carrier must be in frequency and phase synchronism with the incoming carrier. Any discrepancy in the frequency or phase of the local carrier gives rise to distortion in the demodulator output.

(Frequency and phase errors cause distorted demodulator output)

For DSB-SC modulation, a constant phase error will cause attenuation of the output signal, and a frequency error will cause a "beating" effect (the output of the demodulator is the original message multiplied by a low-frequency sinusoid). This is a serious type of distortion. For SSB-SC modulation, a phase error in the local carrier gives rise to phase distortion in the demodulator output, and a frequency error causes the output to have a component which is slightly shifted in frequency. For voice signals, a frequency shift of 20 Hz is tolerable. Unfortunately, the phase error may vary randomly with time.

Phase distortion is generally not a problem with voice signals because the human ear is somewhat insensitive to phase distortion – it changes the quality of the speech but it still remains intelligible. In video signals and data transmission, phase distortion is usually intolerable.

Carrier Regeneration

The human ear can tolerate a drift between the carriers of up to about 30 Hz. Quartz crystals can be cut for the same frequency at the transmitter and receiver, and are very stable. However, at high frequencies (> 1 MHz), even quartz-crystal performance may not be adequate. In such a case, a carrier, or pilot, is transmitted at a reduced level (usually about -20 dB) along with the sidebands. A pilot is transmitted to enable the receiver to generate a local oscillator in frequency and phase synchronism with the transmitter.

[Figure P.1: baseband speech spectrum (-B to B), and the spectrum of SSB with a pilot tone at fc, sidebands extending to fc + B.]

One conventional technique to generate the receiver's local carrier is to separate the pilot at the receiver by a very narrowband filter tuned to the pilot frequency. The pilot is then amplified and used to synchronize the local oscillator.

In demodulation applications, the PLL is used to regenerate a local carrier: it is primarily used in tracking the phase and frequency of the carrier of an incoming signal. It is therefore a useful device for synchronous demodulation of AM signals with a suppressed carrier or with a little carrier (the pilot). In the presence of strong noise, the PLL is more effective than conventional techniques. For this reason, the PLL is used in such applications as space-vehicle-to-earth data links, where there is a premium on transmitter weight, or where the loss along the transmission path is very large; in commercial FM receivers; and, since the introduction of CMOS circuits that have entire PLLs built-in, in many other electronic applications.

The Phase-Locked Loop (PLL)

A block diagram of a PLL is shown below:

[Figure P.2: a phase-locked loop. The input sinusoid vi(t) (e.g. a pilot tone) and the local carrier vo(t) from the VCO enter a phase detector; the phase difference passes through a loop filter to give vin(t), which drives the VCO.]

It can be seen that the PLL is a feedback system. In a typical feedback system, the signal fed back tends to follow the input signal. If the signal fed back is not equal to the input signal, the difference (the error) will change the signal fed back until it is close to the input signal. A PLL operates on a similar principle, except that the quantity fed back and compared is not the amplitude, but the phase of a sinusoid. The voltage controlled oscillator (VCO) adjusts its frequency until its phase angle comes close to the angle of the incoming signal. At this point, the frequency and phase of the two signals are in synchronism (except for a difference of 90°, as will be seen later).

The three components of the PLL will now be examined in detail.

Voltage Controlled Oscillator (VCO)

The voltage controlled oscillator (VCO) is a device that produces a constant-amplitude sinusoid at a frequency determined by its input voltage:

[Figure P.3: VCO block, with input voltage vin(t) and output sinusoid vo(t).]

For a fixed DC input voltage, the VCO will produce a sinusoid of a fixed frequency. The amplitude of the output sinusoid is also fixed. The characteristic of a VCO is shown below:

[Figure P.4: VCO characteristic, output frequency fi versus input voltage vin: a straight line of slope kv passing through fo at vin = 0.]

The horizontal axis is the applied input voltage, and the vertical axis is the frequency of the output sinusoid. The frequency fo is the nominal frequency of the VCO (the frequency of the output for no applied input). The purpose of the control system built around the VCO is to change its input so that its output tracks the incoming signal's frequency and phase. We now seek a model of the VCO that treats the output as the phase of the sinusoid rather than the sinusoid itself.

The instantaneous VCO frequency is given by:

    fi(t) = fo + kv·vin(t)    (P.1)

To relate this to the phase of the sinusoid, we need to generalise our definition of phase. In general, the frequency of a sinusoid cos θ = cos(2π·f1·t) is proportional to the rate of change of the phase angle:

    dθ/dt = 2π·f1,  so that  f1 = (1/2π)·(dθ/dt)    (P.2)

We are used to having a constant f1 for a sinusoid's frequency. The instantaneous phase of a sinusoid is given by:

    θ = ∫ 2π·f1 dλ    (P.3)

which, for f1 a constant, reduces to:

    θ = 2π·f1·t    (P.4)

For fi(t) = fo + kv·vin(t), the phase is:

    θ(t) = ∫ from -∞ to t of [2π·fo + 2π·kv·vin(λ)] dλ = 2π·fo·t + 2π·kv·∫ from -∞ to t of vin(λ) dλ    (P.5)

Therefore, when used in a PLL, the VCO output is:

    vo = Ao·cos(ωo·t + θo(t))    (P.6)

where:

    θo(t) = 2π·kv·∫ from -∞ to t of vin(λ) dλ    (P.7)

This equation expresses the relationship between the input to the VCO and the phase of the resulting output sinusoid.

Example

Suppose a DC voltage of VDC volts is applied to the VCO input. Then the output phase of the VCO is given by:

    θo(t) = 2π·kv·∫ from -∞ to t of VDC dλ = 2π·kv·VDC·t

The resulting sinusoidal output of the VCO can then be written as:

    vo = Ao·cos(ωo·t + 2π·kv·VDC·t) = Ao·cos((ωo + 2π·kv·VDC)·t) = Ao·cos(ω1·t)

In other words, a constant DC voltage applied to the input of the VCO will produce a sinusoid of fixed frequency, f1 = fo + kv·VDC. When the PLL has locked onto the phase of the incoming signal, the VCO input should eventually be a constant voltage, since the VCO needs a constant input voltage to output the tracked frequency.
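As a numerical illustration of Eq. (P.7), the sketch below integrates a DC input to form the VCO phase and confirms the output frequency shift of kv·VDC. The notes use MATLAB; this is a Python/NumPy sketch, and all parameter values (fs, fo, kv, VDC) are assumed for illustration only:

```python
import numpy as np

# VCO model sketch: the output phase is 2*pi*kv times the integral of the
# input voltage (Eq. P.7). A DC input VDC shifts the frequency by kv*VDC.
fs = 100_000                   # simulation rate (Hz), assumed
t = np.arange(0, 0.01, 1/fs)
fo = 10_000                    # nominal VCO frequency (Hz), assumed
kv = 500                       # VCO gain (Hz per volt), assumed
VDC = 2.0                      # constant input voltage, assumed

vin = np.full_like(t, VDC)
theta_o = 2*np.pi*kv*np.cumsum(vin)/fs   # numerical integral of vin
vo = np.cos(2*np.pi*fo*t + theta_o)

# The output should be a fixed-frequency sinusoid at f1 = fo + kv*VDC.
F = np.fft.rfft(vo)
f_peak = np.argmax(abs(F)) * fs / len(t)   # dominant frequency from FFT peak
print(f_peak)                              # close to fo + kv*VDC = 11000 Hz
```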

Phase Detector

The phase detector is a device that produces the phase difference between two input sinusoids:

[Figure P.5: phase detector block, with inputs vi(t) (sinusoid 1) and vo(t) (sinusoid 2), and the phase difference as output.]

A practical implementation of a phase detector is a four-quadrant multiplier:

[Figure P.6: four-quadrant multiplier, with output x(t) = vi(t)·vo(t).]

To see why a multiplier can be used, let the incoming signal of the PLL, with constant frequency and phase, be:

    vi = Ai·sin(ωi·t + φi)    (P.8)

If we let:

    θi(t) = (ωi - ωo)·t + φi    (P.9)

then we can write the input in terms of the nominal frequency of the VCO as:

    vi = Ai·sin(ωo·t + θi(t))    (P.10)

Note that the incoming signal is in phase quadrature with the VCO output (i.e. one is a sine, the other a cosine). The output of the multiplier is:

    x = vi·vo = Ai·sin(ωo·t + θi)·Ao·cos(ωo·t + θo)
      = (Ai·Ao/2)·[sin(θi - θo) + sin(2ωo·t + θi + θo)]    (P.11)

If we now look forward in the PLL block diagram, we can see that this signal passes through the loop filter. If we assume that the loop filter is a lowpass filter that adequately suppresses the high-frequency term of the above equation (not necessarily true in all cases!), then the phase detector output can be written as:

    x = (Ai·Ao/2)·sin(θi - θo) = K0·sin(θi - θo)    (P.12)

where K0 = Ai·Ao/2. Thus, the multiplier used as a phase detector produces an output voltage that is proportional to the sine of the phase difference between the two input sinusoids. As will be shown shortly, the PLL will lock onto the incoming signal, but with a 90° phase difference. This comes about due to the way the multiplier works as a phase comparator.
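The behaviour of Eq. (P.12) can be demonstrated numerically: multiplying the two quadrature sinusoids and averaging away the double-frequency term leaves K0·sin(θi - θo). This Python/NumPy sketch uses assumed amplitudes, phases and carrier frequency (the notes' own code examples are in MATLAB):

```python
import numpy as np

# Phase-detector sketch: multiplying Ai*sin(wo*t + thi) by Ao*cos(wo*t + tho)
# and lowpass filtering (here: averaging over many whole carrier cycles)
# leaves K0*sin(thi - tho), with K0 = Ai*Ao/2 (Eq. P.12).
fs = 1_000_000                   # sample rate (Hz), assumed
t = np.arange(0, 0.01, 1/fs)     # exactly 100 carrier cycles
fo = 10_000                      # carrier frequency (Hz), assumed
Ai, Ao = 2.0, 1.5                # assumed amplitudes
thi, tho = 0.7, 0.2              # assumed constant phases (rad)

x = Ai*np.sin(2*np.pi*fo*t + thi) * Ao*np.cos(2*np.pi*fo*t + tho)
dc = x.mean()                    # crude lowpass: removes the 2*fo term

K0 = Ai*Ao/2
print(dc, K0*np.sin(thi - tho))  # the two values agree closely
```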

PLL Model

A model of the PLL, in terms of phase rather than voltages, is shown below:

[Figure P.7: PLL model. θi(t) and θo(t) are compared to give θe(t); the phase detector applies K0·sin(·) to give x(t); the loop filter h(t) gives vin(t); the VCO forms θo(t) = 2π·kv·∫ vin dλ.]

Note that the integrator in the VCO in the time-domain becomes 1/s in the block diagram, thanks to the integration property of the Laplace transform.

Linear PLL Model

If the PLL is close to "lock", i.e. its frequency and phase are close to that of the incoming signal, then we can linearise the model above by making the approximation sin θe ≈ θe. With a linear model, we can convert to the s-domain and do our analysis with familiar block diagrams:

[Figure P.8: linear PLL model in the s-domain. Θi(s) minus Θo(s) gives Θe(s); this passes through the gain K0, the loop filter H(s), and the VCO 2π·kv/s, to give Θo(s).]

Reducing the block diagram gives:

[Figure P.9: the PLL transfer function for phase signals, T(s) = Θo(s)/Θi(s) = 2π·kv·K0·H(s) / (s + 2π·kv·K0·H(s)).]

This is the closed-loop transfer function relating the VCO output phase and the incoming signal's phase.

Loop Filter

The loop filter is designed to meet certain control system performance requirements. One of those requirements is for the PLL to track input sinusoids with constant frequency and phase errors. That is, if the input phase is given by:

    θi(t) = φi + (ωi - ωo)·t    (P.13)

then we want:

    lim (t → ∞) θe(t) = 0    (P.14)

The analysis is best performed in the s-domain. The Laplace transform of the input signal is:

    Θi(s) = φi/s + (ωi - ωo)/s²    (P.15)

The Laplace transform of the error signal is:

    Θe(s) = Θi(s) - Θo(s) = [1 - T(s)]·Θi(s)
          = s/(s + 2π·kv·K0·H(s)) · Θi(s)
          = s/(s + 2π·kv·K0·H(s)) · [φi/s + (ωi - ωo)/s²]    (P.16)

The final value theorem then gives:

    lim (t → ∞) θe(t) = lim (s → 0) s·Θe(s)
                      = lim (s → 0) s²/(s + 2π·kv·K0·H(s)) · [φi/s + (ωi - ωo)/s²]
                      = lim (s → 0) s·φi/(s + 2π·kv·K0·H(s)) + lim (s → 0) (ωi - ωo)/(s + 2π·kv·K0·H(s))    (P.17)

To satisfy Eq. (P.14) we must have:

    lim (s → 0) H(s) = ∞    (P.18)

We can therefore choose a simple PI loop filter to meet the steady-state constraints:

    H(s) = a + b/s    (P.19)

Obviously, the constants a and b must be chosen to meet other system requirements, such as overshoot and bandwidth.
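To see the PI loop filter doing its job, the linearised loop can be simulated directly in the time domain. This is a Python sketch by Euler integration, not part of the notes; the gains K0 and kv and the filter constants a and b are assumed values, chosen only to give a stable, well-damped loop:

```python
import numpy as np

# Linearised PLL with the PI loop filter H(s) = a + b/s, simulated by
# Euler integration. With a constant frequency offset at the input, the
# phase error theta_e should decay to zero (Eqs. P.14-P.19).
dt = 1e-5
t = np.arange(0, 0.2, dt)
K0, kv = 1.0, 10.0       # detector gain, VCO gain (Hz/V), assumed
a, b = 2.0, 50.0         # PI filter constants, assumed
dw = 2*np.pi*5.0         # 5 Hz frequency offset, assumed

theta_i = dw * t         # input phase ramp (constant frequency error)
theta_o = 0.0
integ = 0.0              # integrator state of the PI filter
err = np.empty_like(t)
for n, th_i in enumerate(theta_i):
    e = th_i - theta_o
    err[n] = e
    x = K0 * e                     # linearised detector: sin(e) ~ e
    integ += b * x * dt            # b/s term
    vin = a * x + integ            # PI filter output
    theta_o += 2*np.pi*kv * vin * dt   # VCO: 2*pi*kv/s

print(abs(err[-1]))      # final phase error is near zero
```

Without the b/s term (b = 0), the same simulation settles to a constant nonzero phase error, which is exactly what Eq. (P.18) predicts.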


FFT - Quick Reference Guide

Definitions

T0          time-window
f0 = 1/T0   discrete frequency spacing
fs          sample rate
Ts = 1/fs   sample period
N           number of samples

Creating FFTs

The quantities above are linked by T0 = N·Ts and fs = N·f0. Given the time-window T0, choose the number of samples N; then fs = N/T0. Given the sample rate fs, choose N; then T0 = N/fs.

MATLAB® Code

    % Sample rate
    fs=1e6;
    Ts=1/fs;
    % Number of samples
    N=1024;
    % Time window and fundamental
    T0=N*Ts;
    f0=1/T0;
    % Time vector for specified DSO parameters
    t=0:Ts:T0-Ts;
    % Frequency vector for specified DSO parameters
    f=-fs/2:f0:fs/2-f0;

Description

This code starts off with a given sample rate, and chooses to restrict the number of samples to 1024 for computational speed. The time vector takes samples up to but not including the point at t = T0 (since this sample would be the same as the one at t = 0). The corresponding frequency vector for the chosen sample rate and sample number is then created. Note that because of the periodicity of the DFT, it does not produce a spectrum sample at f = fs/2, since this corresponds to the sample at f = -fs/2.

Interpreting FFTs

Case 1 - Periodic ideally sampled waveform (period T0). The FFT input is the values of the waveform over one period, at intervals Ts (weights x(nTs) = x[n]). Multiply X[n] by 1/N to give the weights of the impulses at frequencies n·f0 in the true FT.

Case 2 - "One-off" ideally sampled waveform. The FFT input is the sample weights over one period. X[n] gives the values of one period of the true continuous FT at frequencies n·f0:

    Xs(f) = sinc(f/f0) ∗ Σn X[n]·δ(f - n·f0)

Case 3 - "One-off" continuous waveform. The FFT input is the values of the waveform at intervals Ts. Multiply X[n] by Ts to give the values of the true continuous FT at n·f0:

    X(f) = sinc(f/f0) ∗ Σ (n = 0 to N-1) Ts·X[n]·δ(f - n·f0)

Case 4 - Periodic continuous waveform (period T0). The FFT input is the values of the waveform over one period, at intervals Ts. Multiply X[n] by f0·Ts = 1/N to give the weights of the impulses at frequencies n·f0 in one period of the true FT.

Note: Periodic waveforms should preferably be sampled so that an integral number of samples extend over an integral number of periods. If this condition is not met, then the periodic extension of the signal assumed by the DFT will have discontinuities and produce "spectral leakage". In this case, the waveform is normally windowed to ensure it goes to zero at the ends of the period. This will create a smooth periodic extension, but give rise to windowing artefacts in the spectrum.
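The "spectral leakage" warning above can be reproduced with a short numerical experiment (Python/NumPy here, though the notes use MATLAB): a sinusoid with an integral number of cycles in the window lands in a single DFT bin, while a non-integral number smears energy across neighbouring bins.

```python
import numpy as np

# Leakage sketch: exactly 10 cycles in the window gives one clean DFT bin;
# 10.5 cycles gives a discontinuous periodic extension and spectral leakage.
N = 1024
n = np.arange(N)

clean = np.cos(2*np.pi*10*n/N)        # integral number of cycles
leaky = np.cos(2*np.pi*10.5*n/N)      # non-integral number of cycles

C = abs(np.fft.fft(clean))/N
L = abs(np.fft.fft(leaky))/N

print(C[10], C[11])   # ~0.5 and ~0: all the energy sits in bin 10
print(L[10], L[11])   # comparable energy smeared into neighbouring bins
```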

MATLAB® - Quick Reference Guide

General

Ts=0.256;              Assigns the 1 x 1 matrix Ts with the value 0.256. The semicolon prevents the matrix being displayed after it is assigned.
t=0:Ts:T;              Assigns the vector t with values starting from 0, incrementing by Ts, and stopping when T is reached or exceeded.
N=length(t);           N will equal the number of elements in the vector t.
r=ones(1,N);           r will be a 1 x N matrix filled with the value 1. Useful to make a step.
wd=sqrt(1-zeta^2)*wn;  Typical formula, showing the use of sqrt, taking the square (^2), and multiplication with a scalar (*).
n=45000*[1 18 900];    Creates a vector ([]) with elements 1, 18 and 900, then scales all elements by 45000. Useful for creating vectors of transfer function coefficients.
gs=p.*g;               Performs a vector multiplication (.*) on an element-by-element basis. Note that p*g will be undefined.

Graphing

figure(1);             Creates a new graph, titled "Figure 1", and makes it the top (current) figure.
plot(t,y);             Graphs y vs. t on the current figure.
plot(200,-2.38,'kx');  Plots the point (200, -2.38) on the current figure, using a black cross ('kx').
semilogx(w,H,'k:');    Makes a graph with a logarithmic x-axis, using a black dotted line ('k:').
hold on;               Next time you plot, it will appear on the current graph instead of a new graph.
grid on;               Displays a grid on the current graph.
title('Complex poles');  Puts the title 'Complex poles' on the current figure.
xlabel('Time (s)');    Puts the label 'Time (s)' on the x-axis.
ylabel('y(t)');        Puts the label 'y(t)' on the y-axis.
axis([1 1e5 -40 40]);  Sets the range of the x-axis from 1 to 1e5, and the range of the y-axis from -40 to 40. Note all this information is stored in the vector [1 1e5 -40 40].
subplot(211);          Creates a 2 x 1 matrix of graphs in the current figure, and selects the first one.

Frequency-domain

f=logspace(f1,f2,100);  Creates a logarithmically spaced vector of 100 elements from 10^f1 to 10^f2.
H=freqs(n,d,w);         H contains the frequency response of the transfer function defined by numerator vector n and denominator vector d, evaluated at the frequency points w.
Hmag=abs(H);            Takes the magnitude of a complex number.
Hang=angle(H);          Takes the angle of a complex number.
Y=fft(y);               Performs a fast Fourier transform (FFT) on y, and stores the result in Y.
X=fftshift(X);          Swaps halves of the vector X: useful for displaying the spectrum with a negative frequency component.

Time-domain

square(2*pi*fc*t,10);   Creates a square wave from -1 to +1, of frequency fc, with a 10 percent duty cycle. Useful for generating a real sampling waveform.
y=conv(m2,Ts*h);        Performs a convolution of m2 and Ts*h, with the result stored in y.
y=y(1:length(t));       Reassigns the vector y by taking elements from position 1 to position length(t). Normally used after a convolution, since convolution produces end effects which we usually wish to ignore for steady-state analysis.
m=csvread('test.csv');  Reads from a CSV file into a matrix.

Control

Gmrv=tf(Kr,[Tr 1]);     Creates the transfer function Gmrv = Kr/(1 + s·Tr).
I=tf(1,[1 0]);          Creates the transfer function I = 1/s, an integrator.
Gcl=feedback(Gol,H);    Creates the closed-loop transfer function Gcl = Gol/(1 + Gol·H).
Gd=c2d(G,Ts,'tustin');  Creates a discrete-time equivalent Gd of the continuous-time system G, using sample time Ts and the bilinear ('tustin') method of discretization.
rlocus(Gol);            Draws a root locus for the system Gol.
K=rlocfind(Gol);        An interactive command that allows you to position the closed-loop poles on the root locus. It returns the value of K that puts the roots at the chosen location.
S=stepinfo(Gcl);        Computes the step-response characteristics of the system transfer function Gcl.
step(Gcl);              Calculates the step response of the system transfer function Gcl and plots it on the screen.
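For readers without the Control System Toolbox, the effect of feedback() and step() can be sketched from first principles. The plant Gol(s) = K/(s(s+a)) and the values of K and a below are assumptions chosen for illustration; the unity-feedback closed loop is stepped by Euler integration (Python, since no toolbox is involved):

```python
import numpy as np

# Sketch of what feedback() and step() compute: unity-feedback closed loop
# of Gol(s) = K/(s*(s+a)), i.e. Gcl satisfies y'' + a*y' + K*y = K*u,
# stepped with a unit input u = 1 via Euler integration.
K, a = 4.0, 2.0          # assumed plant values (wn = 2 rad/s, zeta = 0.5)
dt, T = 1e-4, 10.0
t = np.arange(0, T, dt)

y, yd = 0.0, 0.0
out = np.empty_like(t)
for i in range(len(t)):
    out[i] = y
    ydd = K*1.0 - a*yd - K*y     # unit step input u = 1
    yd += ydd*dt
    y += yd*dt

print(out[-1])   # settles at the closed-loop DC gain of 1
```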

Multiplication Multiplication Z  kY Description Multiplication by a scalar: zij  kyij z  Ax Multiplication by a vector: zi   aik xk n k 1 Z  AB n Matrix multiplication: zij   aik bkj . Identity matrix.B. x is a column vector with elements xi . i is the row. Signals and Systems 2014 . every element is zero. a12 a 22 a32 a13  a 23   aij a33  0 0 0 1 0 0 I  0 1 0 0 0 1  0 0  I   0  0   0 0   1 0 0  Λ   0 2 0   0 0 3    A is the representation of the matrix with elements aij . diagonal elements are one. aij  0 i  j  . Null matrix. j is the column. Diagonal matrix. Scalar matrix. matrix multiplication is not commutative.1 Matrices .Quick Reference Guide Definitions Symbol aij  a11 A  a21  a31  x1  x   x2   x3  0 0 0  0 0 0 0 Description Element of a matrix. k 1 AB  BA In general.

Minor of aij :  delete the row and column containing the element aij and obtain a new determinant.
Cofactor of aij :  Aij = (-1)^(i+j) × (minor of aij).
adj A :  adjoint matrix of A; replace every element aij by its cofactor in |A|, and then transpose the resulting matrix.
A⁻¹ = adj A / |A| :  reciprocal of A, A⁻¹A = AA⁻¹ = I. Only exists if A is square and non-singular. The formula is only used for 3 x 3 matrices or smaller.

Linear Equations

a11·x1 + a12·x2 + a13·x3 = b1
a21·x1 + a22·x2 + a23·x3 = b2
a31·x1 + a32·x2 + a33·x3 = b3 :  a set of linear equations written explicitly.
Ax = b :  the set of linear equations written using matrix notation.
x = A⁻¹b :  solution to the set of linear equations.

Eigenvalues

Ax = λx :  λ are the eigenvalues, x are the column eigenvectors.
|A - λI| = 0 :  characteristic equation for finding the eigenvalues.
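The operations in this quick reference map directly onto NumPy, which can serve as a check when MATLAB is not at hand. The matrix A and vector b below are assumed example values:

```python
import numpy as np

# Sketch of the quick-reference matrix operations using NumPy.
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([2.0, 1.0, 4.0])

At = A.T                      # transpose
detA = np.linalg.det(A)       # determinant (nonzero, so A is non-singular)
Ainv = np.linalg.inv(A)       # inverse, so that A @ Ainv = I
x = np.linalg.solve(A, b)     # solves Ax = b (preferred over Ainv @ b)
lam, V = np.linalg.eig(A)     # eigenvalues lam, column eigenvectors in V

print(np.allclose(A @ x, b))                        # True
print(np.allclose(A @ V[:, 0], lam[0] * V[:, 0]))   # True: Av = lambda*v
```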

  10 P 1 2 1  e  2   t  4k  (d) g t   2 cos200t   rect  . E   t  25   t  25  5   rect     3   rect t  2  5    (d) g t   rect   .2  t  2  t  74   t  1  t  43   t  1  t  43   t  2   t  74   rect     rect     rect     rect    1   1   3   1   3   1   1  1    10   2   10   2   10   2   10   2  (a) g t    E  83 13 t (b) g t   cos t  rect   . (c) g t     k     e   t 10 k  10  1k rect t  5  10k  .      2 rect t 3 k rect rect        1  1 1 1 k    4 2 4 2          P  169 T0  3 . .1 Answers 1A. T0  20 . T0  4 . T0  2 .A. 2 5 E  19 1A.1   t  2  2k  (a) g t   sin t   rect  .3 (i) 0 (ii) e 3 (iii) 5 (iv) f  t 1  t 2  (v) 1 (vi) f t  t 0  Signals and Systems 2014 . P   k     (b) g t   1 4   t  1  3k   t  34  3k   t  1  3k   t  34  3k          . P  1  2  k   1A. E  2  (c) g t   t 2 rectt  21    t  2 2 rectt  25  .

2 1A.A. 59.64 (j) X  15 .5 45° (a) X  2715 (b) X  510 (e) x t   29.17 (f) (h) X  130 . 8 .445 (c) X x t   100 cost  60 100 (g) (d) X  5596 . 1A.   30 Signals and Systems 2014 8 3.15 cost  22. X *  1  30 (i) -2. 30 .4 Let t  T . Then:     t  t0  f  t    dt   T      or         if T  0   if T  0   all T f T    t 0 T Td f T    t 0 T Td f T    t 0 T T d    T f t0 (using the sifting property) which is exactly the same result as if we started with T   t  t 0  . X *  15 .

5 1 1.5 1 1 x(t) x(t) 2 0.5 -1 3 0 0.5 1 1.5 Time (sec) 3 0.5 -1 -1 -1.5 0 -0.5 0 0.5 Time (sec) 1.5 -0.5 0.5 3 -1.3 1B.5 0.5 -0.5 Time (sec) 2 2.5 Time (sec) 2 2.5 -1 -1 -1.5 -1 3 0 0.1 a) (ii) 2 1.5 Time (sec) 2 2.5 1 (iv) 1.5 0 0 -0.5 x(t) x(t) (iii) 1.5 -0.5 3 2 2.5 1 1.5 Time (sec) 2 2.5 1 1 x(t) x(t) (i) 2 0.5 Time (sec) 1.5 -0.5 1 1 0.5 1 1.5 3 2 2.5 1.5 1.5 0 0 2.5 1 1.A.5 -1.5 1 1 0.5 -1 0 0.5 x(t) x(t) (i) 1.5 (iv) (iii) 2 -1 2 2 2.5 0.5 3 b) (ii) 1.5 1 1.5 0.5 Time (sec) Signals and Systems 2014 .5 0 0.5 1 1.5 0 0 -0.5 3 0 0.5 0 0 -0.5 0 0.

5 1 3 2 2.5 1 1.5 (iv) 1.2 a) b) y1[n ] y2[n ] 3 3 2 2 1 1 -2 -1 0 1 2 3 4 5 n -1 -2 -1 0 -1 -2 -3 -4 1B.5 0 0 -0.5 -1.5 Time (sec) Sampling becomes more accurate as the sample time becomes smaller.5 -1 -1 -1.5 3 0 0.5 x(t) x(t) (i) 1.5 0 0.5 1 1 0.5 -1 -1 0 0.5 2.5 Time (sec) 2 2.5 1 1.5 0.5 -1.5 0 -0.5 1 1 0.5 3 0 -0. 1B.3 a [n] 3 2 1 -2 -1 0 1 2 3 4 5 n -1 -2 -3 Signals and Systems 2014 1 2 3 4 5 n .5 -0.5 Time (sec) 2 2.5 3 0 0.5 x(t) x(t) (iii) 2 1.5 0.5 1 1.5 Time (sec) 1.A.5 -1.4 c) (ii) 1.

A. 5. y1  8 . yn   0.6 (i) D D x [n ] y [n ] D (ii) x [n ] y [n ] 3 D D D D 2 D 1B.4 1.5 1B.5 a) yn   yn  1  yn  2 1B. y3  14 Signals and Systems 2014 D . n  1. y2  1 . 3. all other n 1B.7 (i) yn  yn  1  2 yn  2  3 xn  1 (ii) y0  3 .

y1 2  7 .. y1 1  3 .9 (i) (a) a1  a 4  0 (b) a5  0 (ii) (a) a1  a 4  0 (b) a5  0 1B. 1B. (b) y 1  1 .6 1B. hn  T for n  1.75 .11 yn  2 n   n  2  2 n  4   n  6 1B. y2  1. 3.8 (a) (i) h0  T .21875 1B.250. y3  56 .. 2. y1  12 . hn  0.0625 . since they rely on superposition.. y1 4  31 . y1  1.12 (i) 12 8 4 -1 0 1 2 3 4 5 n Signals and Systems 2014 .. y5  236 No.5 n 1 for n  1. 3. 2 (ii) h0  1 .375 .. y3  1. y2  26 . y1 3  15 .10 y1 0  1 . y4  116 .. y0  0. y1 5  63 y0  4 .A. 2.

2 dt L dt LC LC The resulting discrete-time approximation is: yn  2  RT L  yn  1  1  RT L  T 2 LC yn  2  T 2 LC xn  2 Signals and Systems 2014 .14 The describing differential equation is: d 2 vo R dvo 1 1   vo  vi .7 (ii) 12 8 4 -1 0 1 2 3 4 5 6 7 8 9 10 11 n (iii) 24 20 16 12 8 4 -1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 n 1B.A.

2 0 -0.15 (a) dy  Ky  Kx dt  (b) ht   Ke  Kt u t   (c) y t   0.5t 4  u t  4 (e) (d) y( t ) 1-e -0.6 y(t) 0.3 0.7 0.75 1  e 0.5 0 4 t (min) 0.1 so that the waveform appears smooth (any value smaller has a minimal effect in changing the solution).4 0.1 0 Signals and Systems 2014 0 2 4 6 8 10 Time (sec) 12 14 16 18 20 .8 1 0.5( t -4) 0.4 -0.2 -0.75 0.6 y(t) 0.8 0.2 0.A. 1B.6 0 2 4 6 8 10 Time (sec) 12 14 16 18 20 Use T  0.4 0.8 0.

since it is not changing as rapidly as the input control signal.1 0 -1 -0.5 5 t 1B. The code snippet below shows relevant MATLAB® code: t=0:Ts:To-Ts. x(t) 1 0.1 0.5 1 1.05 0 -1 -0.5 0.*exp(-t).5 Time (sec) 0.5 0.5 0 0.4 h(t) 0. yc=conv(x. Using numerical convolution will only give the ZSR. The ZIR needs to be obtained using other methods and is e t u t  . x=sin(t).5 0 -1 -0.3 0.5 0 Time (sec) 0. yc=yc(1:N)+exp(-t).9 1B.5 1 1.5 0 Time (sec) Signals and Systems 2014 . % ZSR % total solution = ZSR + ZIR 1B.17 1 -1-0.50 1 2 3 4 4.18 The output control signal is smoothed.A.2 y(t) 0.h*Ts).5 1 1. It is also delayed in time.2 0. h=t.15 0.16 The impulse response is ht   te t u t  .

however. it also introduces a time delay of between three and four samples. Signals and Systems 2014 .10 2 2 1 1 x[nT] si[nT] 1B.19 0 -1 -2 0 -1 0 2 4 -2 6 0 2 t (s) 4 6 4 6 t (s) 0.A.5 2 y[nT] h[nT] 1 0 0 -1 -0.5 0 2 4 6 -2 0 t (s) 2 t (s) Note that the filter effectively removes the added noise.

5 G -0.57° -100 -50 -50 0 -100 50 100 0 f 50 -26.236 2 90° 1 45° 26.118 90° 1.A.3° 0 30° 1 0.1 |G| G 2.2 G  445 2A.5 G 1 84.59° 0 1 t=1 G * -0.11 2A.5 0 -0.57° 100 f -45° (b) Re{G} 1 Im{G } 1 1 1 1/2 -50 -50 0 -100 50 100 f 0 -100 50 -1/2 (c) 2A.57° (a) 0 50 100 0 f 50 f 100 |G| G 2 1.5 G* 1 t=0 1 0.5 1 -1 Signals and Systems 2014 G* t=2 100 f .118 1 45° 26.5 G 27.4 1 0.3 g t   4 cos 200t  2A.

3 0.2 0.A.4 0.5 (a) Gn  1  j n sinc n  1   sinc n  1  4  2   2   0.1 0 -25 -20 -15 -10 -5 0 5 10 15 20 25 -20 -15 -10 -5 0 5 10 15 20 25 4 3 2 1 0 -25 Signals and Systems 2014 .4 0.3 0.2 0.5 0.1 0 -25 -20 -15 -10 -5 0 5 10 15 20 25 -20 -15 -10 -5 0 5 10 15 20 25 4 3 2 1 0 -1 -2 -25 1 (b) Gn  sincn 2sincn 6 2 0.12 2A.

1 0.15 0.25 0.A.05 0 -25 -20 -15 -10 -5 0 5 10 15 20 25 -20 -15 -10 -5 0 5 10 15 20 25 4 3 2 1 0 -25 Signals and Systems 2014 .8 0.1  jn  10 n  0.2 0 -25 -20 -15 -10 -5 0 5 10 15 20 25 -20 -15 -10 -5 0 5 10 15 20 25 2 1 0 -1 -2 -25 (d) Gn  1 n   n  sinc  8   sinc  8   4 2   2  0.13 (c) Gn e    1 1   1 2 0.2 0.6 0.4 0.

2 0. 2 .1 0 -25 -20 -15 -10 -5 0 5 10 15 20 25 -20 -15 -10 -5 0 5 10 15 20 25 2 1 0 -1 -2 -25 2A.3 0.294 W Signals and Systems 2014 1 .6 P  4.5 0.14 sincn  1  1 (e) Gn  and the special case of n  0 can be evaluated by j 2n n applying l’Hospital’s Rule or from first principles: G0  0.A.4 0.

5 0.15 2A.5 0. P  73% |G| 0.8 5 T0 n Note: Gn  Asinc 2    Asincn  2 Signals and Systems 2014 .1592 (b) T0   3 -12 -6 0 6 12  6 12  .7 |G| 0.2387 (a) 2 T0  3 -12 6 12  -0. P  50% 2A.5 (c) T0   6 -12 -6 0 : .02653 0 -6 : . P  60% : |G| 0.A.

16 2B. g1 t   g 2 t   1  e  at u t  .6 This follows directly from the time shift property.4 Hint: e a t  e at u  t   e  at u t  2B.5 f  2B.2 Xf  sinc f e jf  sinc2 f e  j 6f jf 2B. 2B.7 G1  f     a 1 1    f  .3 5  10 cos3f   5 cos4f  2f 2 (a)  (b) sincf  12   2  2 sincf  12  (c) 1  jf e (d) 5sinc5 f   3sinc3 f   sinc f e  j 5f  jf   sinc f  cos2f e  j 3f 2f 2 2B. P  8 2B.1 x t   4 cos 2000t  30 . G2  f   j 2f 2 a  j 2f Signals and Systems 2014 .5 3P 1.A.

17 2B.A.8 G1( f ) * G2( f ) 2 Ak Ak -2 f 0 2 f0 2B.11 Asinc4f   cos4f  jf Signals and Systems 2014 f .9 (a) 2 Af 0 sinc2 f 0 t  t 0  (b) 2 Af 0 sinc f 0 t sin f 0 t  2B.10 j 2 A sin 2 fT  f 2B.

25 Phase (°) -66 -102 -168 -234 Yes.5e 3 e  j120t . Note that after sampling the pulses have tops which follow the original waveform.6 (a) 20 Hz. 3A.18 3A. with impulses.7 Yes.provided the original signal contained no spectral components above 4 kHz.4 By passing the signal through a lowpass filter with 4 kHz cutoff . Signals and Systems 2014 . 40 Hz.2 X  f   A sinc 2  f  3A. The flat topped sampling pulses would simply reduce the amplitudes of the “repeats” of the baseband spectrum as one moved along the frequency axis.2339 0 3A. P  01325 . 3A.9781 (c) Harmonic # 0 1 2 3 4 Amplitude 1 3 1 0. 3A.A.1 0. 0. For ideal sampling.5 Periodicy.5 0.5e  j 3 j e j120t  0. W (b) G3  0. all “repeats” have the same amplitude.3 B4 3A.

Signals and Systems 2014 .8 Truncating the 9.25 kHz sinusoid has the effect of convolving the impulse in the original transform with the transform of the window (a sinc function for a rectangular window).A.19 3A. This introduces “leakage” which will give a spurious 9 kHz component.

5 -40 -36 -38 -39 -37 36 -2 -1 0 1 2 40 38 37 39 f (kHz) C: 3B.1 G( f ) j/2 j/4 .A.j/2 j/4 .5 0.5 -2 -1 0 1 2 f (kHz) A: G( f ) 0.2 G( f ) 0.25 -40 -36 36 0 -38 -39 -37 40 f (kHz) 38 37 39 B: G( f ) 0.j/4 0 -10 -11 -9 .20 3B.j/4 10 f 9 11 3B.3 lowpass filter  L  R C lowpass filter cos(2 f c t ) Signals and Systems 2014 .

1 s  1 R1C1 s  1 R2C2   1 R1C1  1 R2 C 2  1 R2 C1 s  1 R1 R2 C1C 2 a) 1 RC s  1 RC c) 1 R2 C1 s  R1 L1  1 2 s  R L  d) 2 sR L s  1 R2 C1  R1 L1 s  R1  R2  R2 L1C1  b) s 2 4A.A. Why? 4A.4 a) f    5 4 b).21 4A. c) The final value theorem does not apply.3 a) f t    5 7 t  9  e cos 3t  tan 1     4 2  3   b) f t   1 cost   cos2t  3 c) f t   1 2 15 15   t  8t   e 2t  7t    40  2 2   4A.5 T s    1 R1C 2 s s  1 R1C1 s  1 R2C2  Signals and Systems 2014 .2   1   a) I s  sL  R   Li 0   E s   sC      dx 0   3  b) X s  Ms  Bs  K  M  sx 0   Bx 0   2  dt  s    2      10 d 0    c) s  Js 2  Bs  K  J  s 0      B 0   2 dt  s  2    4A.

6u t  4A.9 a) G1G2 G3G4 C  R 1  G3G4 H 1  G2 G3 H 2  G1G2 G3G4 H 3 b) Y AE  CE  CD  AD  X 1  AB  EF  ABEF Signals and Systems 2014 .22 4A.A. 4A.8 a) Y G1  G1G3 H 3  G1G2 H 2  G1G2 G3 H 2 H 3  G1G2  G1G2 G3 H 3  R 1  G2 H 2  G1G3  G1G2 G3 H 2  G1G2 G3  G3 H 3  G2 G3 H 2 H 3 ab    ac  1  b) X 5    X1   Y  1  bd  ac   1  bd  ac  4A.6 y t   5e t cos2t  26.7 a) (i) G 1  GH (ii) 1 1  GH (iii) GH 1  GH b) All denominators are the same.

2 a) 0 dB b) 32 dB c) –6 dB Signals and Systems 2014  .5 Hz 5A. by examining the pole locations for R.2   (i) y t   2 1  e 4t u t  (ii) y t   2t  12  12 e 4t u t  (iii)  8  8 cos 2t  tan 1 2    e  4t u t  y t    5  5  (iv)  8   40  4t  2 y t    e u t  cos10t  tan 1       5   29  29      4B. ht   32te 4t (iii) overdamped. y t   2  21  4t e 4t u t  . y t   2  e sin 2 3t  cos 1 0.5 u t  .23 4B.4 (a) y ss t   u t  (b) y ss t    10 170  1  cos t  tan 1   u t  17  13    5A. 3   ht    16 2t e sin 2 3t 3    (ii) critically damped.1 Yes. y t   2  83 e 2t  23 e 8t u t  . C > 0.A.1 a) 2 kHz b) 2. ht   163 e 2t  e 8t    4B. L. 4B.3 (i)   4  2t   underdamped.

24 5A.3 a) b) Magnitude response Magnitude response 40 40 20 |H(w)| dB |H(w)| dB 30 20 0 -20 10 -40 0 1 10 2 10 3 4 10 10 5 10 -60 -1 10 6 10 0 10 1 2 10 10 w (rad/s) w (rad/s) Phase response Phase response 3 10 4 10 0 arg(H(w)) deg arg(H(w)) deg 50 0 -50 -50 -100 -150 1 10 2 10 3 4 10 10 5 10 6 -1 10 10 0 10 1 2 10 w (rad/s) 10 3 10 4 10 w (rad/s) d) c) Magnitude response 20 0 0 |H(w)| dB |H(w)| dB Magnitude response 20 -20 -20 -40 -40 -60 -3 10 -2 10 -1 0 10 10 1 10 -60 -3 10 2 10 -2 10 -1 0 10 w (rad/s) 10 1 10 2 10 w (rad/s) Phase response Phase response -80 0 -100 arg(H(w)) deg arg(H(w)) deg -20 -40 -60 -120 -140 -160 -80 -180 -100 -3 10 -2 10 -1 0 10 10 1 10 2 -3 10 10 -2 10 -1 0 10 w (rad/s) 10 1 10 2 10 w (rad/s) f) e) Magnitude response 80 60 60 |H(w)| dB |H(w)| dB Magnitude response 80 40 20 40 20 0 -3 10 -2 10 -1 0 10 10 1 10 0 -3 10 2 10 -2 10 -1 0 10 w (rad/s) 10 1 10 2 10 w (rad/s) Phase response Phase response 100 180 arg(H(w)) deg arg(H(w)) deg 80 60 40 20 160 140 120 100 0 -3 10 -2 10 -1 0 10 10 1 10 2 10 80 -3 10 -2 10 w (rad/s) -1 0 10 10 w (rad/s) Signals and Systems 2014 1 10 2 10 .A.

A.6 Magnitude response 50 |H(w)| dB 40 30 20 10 0 0 10 1 10 w (rad/s) 2 10 Phase response arg(H(w)) deg 150 100 50 0 0 10 1 10 w (rad/s) Signals and Systems 2014 2 10 .25 5A.5 Magnitude response 80 |H(w)| dB 60 40 20 0 -2 10 -1 10 0 10 1 10 w (rad/s) 2 10 3 10 4 10 Phase response arg(H(w)) deg 0 -50 -100 -150 -2 10 -1 10 0 10 1 10 w (rad/s) 2 10 3 10 4 10 5A.

A. +100° (iii) –8 dB. +90° Signals and Systems 2014 . -135° (ii) +5 dB. -105° (iii) –15 dB. +55° (iv) –28 dB. +180° (ii) –5 dB.7 a) Magnitude response 100 |H(w)| dB 50 0 -50 -100 -2 10 -1 0 10 1 10 2 10 10 3 10 w (rad/s) Phase response arg(H(w)) deg -100 -150 -200 -250 -2 10 -1 0 10 1 10 2 10 10 3 10 w (rad/s) (i) +28 dB. -180° (iv) – 55 dB.26 5A. -270° b) Magnitude response 40 |H(w)| dB 20 0 -20 -40 -60 -1 10 0 10 1 10 w (rad/s) 2 10 3 10 Phase response 200 arg(H(w)) deg 150 100 50 0 -1 10 0 10 1 10 w (rad/s) 2 10 3 10 (i) +35 dB.

27 5A.2s  10 Magnitude response 20 |H(w)| dB 0 -20 -40 -60 -2 10 -1 10 0 1 10 10 2 10 3 10 w (rad/s) Phase response 100 arg(H(w)) deg 80 60 40 20 0 -2 10 -1 10 0 1 10 10 w (rad/s) Signals and Systems 2014 2 10 3 10 .8 a) G1 s   450 ss  25s  20 Magnitude response 50 |H(w)| dB 0 -50 -100 0 10 1 2 10 3 10 10 w (rad/s) Phase response -50 arg(H(w)) deg -100 -150 -200 -250 -300 0 10 1 2 10 3 10 10 w (rad/s) b) G2 s    10s s  0.A.

5A.9
a) [Frequency response: magnitude and phase]
Bandwidth is 0 to 0.2 rads⁻¹.
b) [Frequency response: magnitude and phase]
Bandwidth is increased to 15 rads⁻¹ but the response is now peakier.
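The −3 dB bandwidth quoted in 5A.9 can be located numerically by bisection on the magnitude response. A minimal sketch, assuming a first-order lowpass with corner at 0.2 rads⁻¹ (mirroring part (a)'s quoted bandwidth; the actual system is in the problem, not the answers):

```python
import math

wc = 0.2  # assumed corner frequency, rad/s
mag = lambda w: 1.0 / math.sqrt(1.0 + (w / wc) ** 2)  # |H(jw)| for 1/(1 + s/wc)

# Bisection for the frequency where |H| drops to 1/sqrt(2) (-3 dB).
lo, hi = 1e-6, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mag(mid) > 1.0 / math.sqrt(2.0):
        lo = mid
    else:
        hi = mid
w3db = 0.5 * (lo + hi)
```

For a first-order section the −3 dB point coincides with the corner frequency, which is what the search returns.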

5A.10
(i) −3.3 dB, 8° (ii) −17.4 dB, 121°

5A.11
[Bode plot: magnitude and phase response]
H(s) = 45000(s² + 18s + 900) / ((s + 90)²(s + 1000))

5B.1
a) 12.3%
b) tp = 1.05 s
c) 0.72 s
d) ωn = 13 rads⁻¹
e) ωd = 3 rads⁻¹
f) tr = 0.2 s
g) ts = 1.555 s
h) ts = 1.95 s

5B.2
G(s) = 196 / (s² + 19.6s + 196)

5B.3
c(t) = 1 − (e^(−ζωn·t) / √(1 − ζ²)) sin(ωn√(1 − ζ²) t + cos⁻¹ζ)

5B.4
a) ζ = ln(a/b) / √(4π² + ln²(a/b))
b) ζ = 0.247, ωn = 16.21 rads⁻¹

5B.5
(a) [s-plane pole plots: (i) pole at −10; (ii) pole at −1; (iii) poles at −1 and −10]
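The standard second-order formulas behind the 5B answers are easy to check numerically. The sketch below uses the ζ = 0.555 that appears in 5B.1 (the pairing of values to parts is an assumption) together with the log-decrement formula of 5B.4(a):

```python
import math

def percent_overshoot(zeta):
    # peak overshoot of an underdamped second-order step response
    return 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta ** 2))

def zeta_from_log_decrement(a_over_b):
    # 5B.4(a): zeta = ln(a/b) / sqrt(4*pi^2 + ln^2(a/b))
    L = math.log(a_over_b)
    return L / math.sqrt(4.0 * math.pi ** 2 + L ** 2)

po = percent_overshoot(0.555)   # should land near the quoted 12.3%

# Round-trip check of the log-decrement formula: generate the successive-peak
# ratio a/b from a known zeta, then recover zeta from it.
zeta0 = 0.3
ratio = math.exp(2.0 * math.pi * zeta0 / math.sqrt(1.0 - zeta0 ** 2))
zeta_rt = zeta_from_log_decrement(ratio)
```

The round trip is exact algebraically, which is a good sanity check on the formula as reconstructed.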

(b) [Step-response sketches for (i), (ii) and (iii), each rising to the 100% steady-state value, with the 63.2% point marking the time constant]
(c) The response is dominated by the pole at −1.

5B.6
G1(s) = 0.6495 / (s² + 1.299s + 0.6495)
[Step responses of the first-order and second-order systems]

5B.7
a) ωn = 3 rads⁻¹, ζ = 1/6
b) 58.8%
c) 0
d) 1/9

5B.8
a) 0
b) 3

5B.9
a) T = 10 sec
b) (i) T = 9.09 sec (ii) T = 5 sec (iii) T = 0.91 sec
Feedback improves the time response.
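The pattern in 5B.9 is the classic one: unity feedback with gain K around a first-order plant divides the open-loop time constant by (1 + K). A minimal check, assuming the three quoted cases correspond to gains K = 0.1, 1 and 10 (the gains are inferred, not stated in the answers):

```python
# Closed-loop time constant of a first-order plant with time constant T_open
# under unity feedback with gain K: T_closed = T_open / (1 + K).
T_open = 10.0
closed = [round(T_open / (1.0 + K), 2) for K in (0.1, 1.0, 10.0)]
```

The three values reproduce 9.09 s, 5 s and 0.91 s, matching the quoted answers.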

6A.3
a) S^T_K1 = 1
b) S^T_K2 = (s² + s) / (s² + s + 1000)
c) S^T_G = 1000 / (s² + s + 1000)

6A.4
a) open-loop: S^T_Ka = 1, S^T_K1 = 0
closed-loop: S^T_Ka = 1 / (1 + K1·Ka·G(s)) ≈ 1 / (1 + K1·Ka) for ω << ωn
closed-loop: S^T_K1 = K1·Ka / (1 + K1·Ka) ≈ 1 for K1·Ka >> 1
b) open-loop: Ω(s)/Td(s) = G(s) ≈ 1 for ω << ωn
closed-loop: Ω(s)/Td(s) = G(s) / (1 + K1·Ka·G(s)) ≈ 1 / (1 + K1·Ka) for ω << ωn

6A.5
a) (i) 1/(1 + KP·K1) (ii) ∞ (iii) ∞ (iv) (1 + K1)/(1 + KP·K1)
b) (i) 0 (ii) 1/(KI·K1) (iii) 0 (iv) 0
c) (i) 0 (ii) 1/(KP·K1) (iii) 1/KP (iv) 1/KP
d) (i) 0 (ii) 0 (iii) 0 (iv) 0
The integral term in the compensator reduces the "order" of the error, i.e. infinite values turn into finite values, and finite values become zero.
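The 6A.3 sensitivities come in a complementary pair: S = 1/(1 + L) and T = L/(1 + L) always satisfy S + T = 1. A quick numeric illustration, assuming a loop gain L(s) = 1000/(s(s + 1)) chosen to reproduce the s² + s + 1000 denominators quoted above (an assumption, since the loop itself is in the problem):

```python
# Sensitivity / complementary-sensitivity identity for an assumed loop gain.
def L(s):
    return 1000.0 / (s * (s + 1.0))

s = 0.1j                       # a low-frequency point on the jw axis
S = 1.0 / (1.0 + L(s))         # sensitivity
T = L(s) / (1.0 + L(s))        # complementary sensitivity
identity_err = abs(S + T - 1.0)
```

At low frequency the loop gain is large, so |S| is tiny — the usual benefit of feedback that section 6A quantifies.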

7A.1
(i) [Block diagram: x[n] through a z⁻² delay summing to y[n]]
(ii) [Block diagram: x[n] through gains 3 and 2 with z⁻¹, z⁻¹ and z⁻⁴ delays to y[n]]

7A.2
(iii) y[n] = −y[n−1] + 2y[n−2] + 3x[n−1]
(iv) y[0] = 3, y[1] = 8, y[2] = 1, y[3] = 14

7A.3
(a) h[n] = (1/3)ⁿ u[n]
(b) y[n] = 3δ[n] + (3/2)u[n] − (7/2)(1/3)ⁿ u[n]
or y[n] = (3/2)u[n] − (1/2)(1/3)ⁿ u[n] + (1/3)^(n−1) u[n−1]
or y[n] = (1/2)(3 − (1/3)ⁿ)u[n] − (1/2)(3 − (1/3)^(n−2))u[n−2]
(the responses above are equivalent – to see this, graph them or rearrange terms)
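Recursions like the one in 7A.2 are checked most easily by iterating them directly. A minimal sketch, assuming the recursion y[n] = −y[n−1] + 2y[n−2] + 3x[n−1] with initial values y[0] = 3, y[1] = 8 and a unit-step input (the driving input is an assumption — it is not shown in the answers):

```python
# Iterate a second-order difference equation forward from given initial values.
def step_recursion(n_max):
    y = [3.0, 8.0]                        # given initial output values
    x = lambda n: 1.0 if n >= 0 else 0.0  # assumed unit-step input
    for n in range(2, n_max + 1):
        y.append(-y[n - 1] + 2.0 * y[n - 2] + 3.0 * x(n - 1))
    return y

y = step_recursion(3)
```

Under these assumptions the recursion gives y[2] = −8 + 6 + 3 = 1, consistent with the value read from the answers.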

..5 h0  h1 0 h1  h1 1  h2 1h12 0 h2  h1 2  h22 1h13 0  2h1 0h1 1h2 1  h12 0h2 2 7A.6 (a) (i) F  z   z z 1 2 (iii) F  z   (b) X  z   (ii) F  z   1 za z z  a  z (iv) F  z   2 z  a 3 z  a  2z 2 z 2 1 7A. poles at z  3  6 . stable (b) fourth-order zero at z  0 .9 xn  0.35 7A. ..  j 1 2 . poles at z   j 1 2 . 10.125. 9. . unstable 7A. poles at z  1 3.4 hn  h1 n  h2 n 7A.1 .7 (a) zeros at z  0.8 H z   3  z 1  2 z 4 1  z 1  z 2 7A. 14. Signals and Systems 2014 .5.A.2 . stable (c) zeros at z   j 2 5 .

7A.10
x[n] = 8(1/4)ⁿu[n] − 8(1/2)ⁿu[n] + 8u[n] − 2(4)ⁿu[n]

7A.11
(a) x[n] = (1 − a^(n+1)) / (1 − a) for n = 0, 1, 2, …
(b) x[n] = 3δ[n] + 2δ[n−1] + 6δ[n−4] for n = 0, 1, 2, …
(c) x[n] = 1 − e^(−anT) for n = 0, 1, 2, …
x[n] = √3 sin(2n/3) for n = 0, 1, 2, …
(d) x[n] = 8 − 8(1/2)ⁿ − 6n(1/2)ⁿ for n = 0, 1, 2, …
(e) x[n] = 2(1/2)^(n−4)
(f) x[n] = 2ⁿ for n = 0, 1, 2, …

7A.12
(a) h[n] = (1/4)(1/2)ⁿu[n] − (1/8)(1/4)ⁿu[n] = 2^(−(n+2)) − 2^(−(2n+3)), n ≥ 0
(b) H(z) = (1/8)z² / ((z − 1/2)(z − 1/4))
(c) y[n] = 1/3 + (1/24)(1/4)ⁿ − (1/4)(1/2)ⁿ for n = 0, 1, 2, …
(d) y[n] = (1/2)(1/2)ⁿ − (1/18)(1/4)ⁿ + (4/9)(1/3)ⁿ for n = 0, 1, 2, …

7A.13
y[n] = 1 + 0.25(−0.5)ⁿ − 1.25(0.5)ⁿ for n = 0, 1, 2, …

7A.14
y[n] = [1 + 0.25(−0.5)^(n−3) − 1.25(0.5)^(n−3)] u[n−3]
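The partial-fraction result in 7A.12(c) can be verified by running the filter directly: H(z) = (1/8)z²/((z − 1/2)(z − 1/4)) corresponds to the recursion y[n] = 0.75y[n−1] − 0.125y[n−2] + 0.125x[n], and for a unit-step input its output should match the closed form term for term.

```python
# Step response of H(z) = 0.125 z^2 / (z^2 - 0.75 z + 0.125) by direct recursion,
# compared against the closed form 1/3 + (1/24)(1/4)^n - (1/4)(1/2)^n.
N = 20
y = []
for n in range(N):
    y1 = y[n - 1] if n >= 1 else 0.0
    y2 = y[n - 2] if n >= 2 else 0.0
    y.append(0.75 * y1 - 0.125 * y2 + 0.125)   # x[n] = 1 for n >= 0

closed = [1.0 / 3.0 + (1.0 / 24.0) * 0.25 ** n - 0.25 * 0.5 ** n for n in range(N)]
err = max(abs(a - b) for a, b in zip(y, closed))
```

The two agree to machine precision, which confirms the residues 1/3, 1/24 and −1/4.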

1 yn  0. f    0 (c) f 0   1 4 .16     z 1 k k  z  1 (a) X 3  z    X 1 z     Dz    k  1z  6k  1  k  1z  6k  1 7A.15 F z   zTe T z  e  T 2 7A. n  0 n n 7B.17 (a) f 0   0 .3795 z  0.7730 yn  1  17. f    1 1 a (d) f 0   0 .A.7741 yn  1  20 xn  18.73 xn  1 7B.37 7A.2 yn  0. f    0 (b) f 0   1 .1670 z  0. f    8 7A.3 H d z   0.3679z  0.19 f n  8  6n1 2  81 2 .87 xn  1 7B.73 xn  17.1353 Signals and Systems 2014 .18 (i) n  1un (ii)  n  2 n  1  3 n  2  2 n  3   n  4 7A.

8B.1
(a) [Root locus plot]
(b) 0 < K < 6, ω = 2.82 rads⁻¹
(c) K = 0.206
(d) ts = 10.7 s

8B.2
(a) One pole is always in the right-half plane:
[Root locus plots for K > 0 and K < 0]

(b) [Root locus plots for K > 0 and K < 0]
The system is always unstable because the pole pair moves into the right-half plane (s = ±j3.5) at a lower gain (K = 37.5) than that for which the right-half plane pole enters the left-half plane (K = 48). A different choice of pole-zero locations for the feedback compensator is required in order to produce a stable feedback system.
For K < 0, the root in the right-half plane stays on the real axis and moves to the right. Thus, negative values of K are not worth pursuing.
The principle is sound, however. For the given open-loop pole-zero pattern, there are two different feasible locus paths, one of which includes a nearly circular segment from the left-half s-plane to the right-half s-plane. The relative magnitude of the poles and zeros determines which of the two paths occurs.
[Further root locus plots]

8B.3
(a) [Root locus plot]
(b) a = 32/27
(c) 0 < a < 16 for stability. For a = 16, the frequency of oscillation is 2 rads⁻¹.
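A Routh-style check produces bounds like those in 8B.3(c). For a cubic s³ + a2·s² + a1·s + a0, stability requires all coefficients positive and a2·a1 > a0; at the boundary a2·a1 = a0 the closed-loop poles sit at s = ±j√a1. The values a2 = 4, a1 = 4 below are assumptions chosen so the bounds match the quoted answer — the actual characteristic polynomial is in the problem, not reproduced here.

```python
import math

# Routh criterion for s^3 + a2 s^2 + a1 s + a = 0 (assumed a2, a1).
a2, a1 = 4.0, 4.0

a_max = a2 * a1           # stability boundary: a2*a1 = a  ->  0 < a < a_max
w_osc = math.sqrt(a1)     # oscillation frequency at the boundary, rad/s
```

With these assumed coefficients the criterion gives 0 < a < 16 and a 2 rads⁻¹ oscillation at the boundary, matching the answer's pattern.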

8B.4
(a) [Root locus sketches in the s-plane for K > 0 and K < 0, with branch points at s = ±j√3]
(b) K = −2/5, ωn = √3 rads⁻¹
(c) Any value of K > −2/5 gives zero steady-state error. [The system is type 1.] Therefore any value of K will give zero steady-state error provided the system is stable.
(d) K = 0.1952, tr = 1.06 s
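The type-1 property cited in 8B.4(c) — an integrator in the loop forces zero steady-state error to a step — can be seen in a minimal simulation. The plant 1/(s(s + 1)) and gain K = 0.5 below are assumptions for illustration only:

```python
# Forward-Euler simulation of unity feedback around an assumed type-1 plant.
K, dt, T = 0.5, 0.001, 30.0
x1, x2 = 0.0, 0.0          # x1 = plant output, x2 = its derivative
r = 1.0                    # unit-step reference
for _ in range(int(T / dt)):
    u = K * (r - x1)       # proportional control on the error
    x1 += dt * x2          # plant 1/(s(s+1)): x1' = x2, x2' = -x2 + u
    x2 += dt * (-x2 + u)
err = r - x1               # steady-state tracking error
```

At equilibrium the integrator state can only be stationary when the error is exactly zero, which is the type-1 argument in miniature.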

1 1   C R  R3  vC  q   1. b   1    1  R2 R3 L    R1   1 L1  R2  R3  9A.A. A   1 2   R2  iL1   L R  R  3  1 2 R2  0 C1 R2  R3   .3 (a)  2 q   0  2 y  2 2 0  3  0 0 q  1 x 0 2  2 (b)  2 q   1  0 y  0 0 (b) H s   0 1q  0x  1  1 1   1 0 q  0 x 0 1  2 1q  0x 9A.2 (a) false (b) false (c) false [all true for x0   0 ] 9A.4 (a) H s   8 s  12 s  3 2 s s  2  1 s  5s  9 s  7 3 2 9A.5 s 3  3s  1  2 2    q t  1  s s 3 -1 s  3 q t   L  12 2s  6   2   2 2  s  3 s s  3         2      1  4 cos 3t  3 sin 3t   u t    2  2 cos 3t  14 sin 3t   3       Signals and Systems 2014 .42 9A.

 1  j 2 (iii) k1  75 .2  1  e t  (a) 1. 2  2 . k 2  49 .A. 3  3 9B.3 (ii) –1. 2 (b) qt    t   e  9B.6 1 0  0   a qn  1    k 0 k qn   k  xn b  c   b  c  0  c  yn  1 0 0qn Signals and Systems 2014 .4 0 0 1 16  3 16   qn  1  1 0 1 4 qn   1 8  xn 0 1  1 4  1 2  yn  0 0 1qn  3 xn 9B. 9B.1 1  1 .43 9B. k 3  10 (iv) y ss t    1 40 (v) State feedback can place the poles of any system arbitrarily (if the system is controllable).