
Collection Editor: Robert Nowak

**Intro to Digital Signal Processing**

Authors: Behnaam Aazhang, Swaroop Appadwedula, Richard Baraniuk, Matthew Berry, Hyeokho Choi, Benjamin Fite, Roy Ha, Michael Haag, Mark Haun, Don Johnson, Douglas L. Jones, Nick Kingsbury, Dima Moussa, Robert Nowak, Justin Romberg, Daniel Sachs, Phil Schniter, Ivan Selesnick, Melissa Selik, Bill Wilson

Online: <http://cnx.org/content/col10203/1.4/>

CONNEXIONS

Rice University, Houston, Texas

This selection and arrangement of content as a collection is copyrighted by Robert Nowak. It is licensed under the Creative Commons Attribution 1.0 license (http://creativecommons.org/licenses/by/1.0). Collection structure revised: January 22, 2004. PDF generated: October 30, 2009. For copyright and attribution information for the modules contained in this collection, see p. 179.

Table of Contents

1 DSP Systems I
  1.1 Image Restoration Basics
  1.2 Digital Image Processing Basics
  1.3 2D DFT
  1.4 Images: 2D signals
  1.5 DFT as a Matrix Operation
  1.6 Fast Convolution Using the FFT
  1.7 The FFT Algorithm
  1.8 The DFT: Frequency Domain with a Computer Analysis
  1.9 Discrete-Time Processing of CT Signals
  1.10 Sampling CT Signals: A Frequency Domain Perspective
  1.11 Filtering with the DFT
  1.12 Ideal Reconstruction of Sampled Signals
  1.13 Amplitude Quantization
  1.14 Classic Fourier Series
  Solutions
2 Random Signals
  2.1 Introduction to Random Signals and Processes
  2.2 Introduction to Stochastic Processes
  2.3 Random Signals
  2.4 Stationary and Nonstationary Random Processes
  2.5 Random Processes: Mean and Variance
  2.6 Correlation and Covariance of a Random Signal
  2.7 Autocorrelation of Random Processes
  2.8 Crosscorrelation of Random Processes
  Solutions
3 Filter Design I (Z-Transform)
  3.1 Difference Equation
  3.2 The Z Transform: Definition
  3.3 Table of Common z-Transforms
  3.4 Poles and Zeros
  3.5 Rational Functions
  3.6 The Complex Plane
  3.7 Region of Convergence for the Z-transform
  3.8 Understanding Pole/Zero Plots on the Z-Plane
  3.9 Filter Design using the Pole/Zero Plot of a Z-Transform
  Solutions
4 Filter Design II
  4.1 Bilinear Transform
  Solutions
5 Filter Design III
  5.1 Linear-Phase FIR Filters
  5.2 Four Types of Linear-Phase FIR Filters
  5.3 Design of Linear-Phase FIR Filters by DFT-Based Interpolation
  5.4 Design of Linear-Phase FIR Filters by General Interpolation
  5.5 Linear-Phase FIR Filters: Amplitude Formulas
  5.6 Zero Locations of Linear-Phase FIR Filters
  5.7 FIR Filter Design using MATLAB
  5.8 MATLAB FIR Filter Design Exercise
  Parks-McClellan Optimal FIR Filter Design
  Solutions
6 Wiener Filter Design
7 Adaptive Filtering
  7.1 Adaptive Filtering: LMS Algorithm
  Solutions
8 Wavelets and Filterbanks
  8.1 Haar Wavelet Basis
  8.2 Orthonormal Wavelet Basis
  8.3 Continuous Wavelet Transform
  8.4 Discrete Wavelet Transform: Main Concepts
  8.5 The Haar System as an Example of DWT
  Solutions
Glossary
Bibliography
Index
Attributions

Chapter 1 DSP Systems I

1.1 Image Restoration Basics

1.1.1 Image Restoration

In many applications (e.g., satellite imaging, medical imaging, astronomical imaging, poor-quality family portraits) the imaging system introduces a slight distortion. Often images are slightly blurred, and image restoration aims at deblurring the image.

The blurring can usually be modeled as an LSI system with a given PSF h[m, n].

Figure 1.1: Fourier Transform (FT) relationship between the two functions.

The observed image is

g[m, n] = h[m, n] * f[m, n]   (1.1)

In the frequency domain,

G(u, v) = H(u, v) F(u, v)   (1.2)

so the original image can be recovered as

F(u, v) = G(u, v) / H(u, v)   (1.3)

Example 1.1: Image Blurring

Above we showed the equations representing the common model for blurring an image. In Figure 1.2 we have an original image and a PSF function that we wish to apply to the image in order to model a basic blurred image.

This content is available online at <http://cnx.org/content/m10972/2.2/>.

Figure 1.2: (a) (b)

Once we apply the PSF to the original image, we receive our blurred image, which is shown in Figure 1.3.

Figure 1.3

1.1.2 Frequency Domain Analysis

Example 1.1 (Image Blurring) looks at the original images in their typical form; however, it is often useful to look at our images and PSF in the frequency domain. In Figure 1.4 we take another look at the image blurring example above and see how the images and results would appear in the frequency domain if we applied the Fourier transform.

Figure 1.4

1.2 Digital Image Processing Basics

1.2.1 Digital Image Processing

A sampled image gives us our usual 2D array of pixels f[m, n] (Figure 1.5):

This content is available online at <http://cnx.org/content/m10973/2.2/>.

Figure 1.5: We illustrate a "pixelized" smiley face.

We can filter f[m, n] by applying a 2D discrete-space convolution as shown below (where h[m, n] is our PSF):

g[m, n] = h[m, n] * f[m, n] = Σ_{k=-∞}^{∞} Σ_{l=-∞}^{∞} h[m-k, n-l] f[k, l]   (1.4)

Example 1.2: Sampled Image

"Discrete-Time Convolution" <http://cnx.org/content/m10087/latest/>
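The 2D discrete-space convolution in the equation above can be sketched directly in code. This is a minimal illustration of our own (the function and variable names are ours, not the module's), using plain Python lists of rows for the image f and PSF h:

```python
def conv2d(f, h):
    """Full 2D convolution g[m,n] = sum_k sum_l h[m-k, n-l] * f[k, l]."""
    M, N = len(f), len(f[0])        # image size
    P, Q = len(h), len(h[0])        # PSF size
    # The full convolution output is (M + P - 1) x (N + Q - 1).
    g = [[0.0] * (N + Q - 1) for _ in range(M + P - 1)]
    for m in range(M + P - 1):
        for n in range(N + Q - 1):
            acc = 0.0
            for k in range(M):
                for l in range(N):
                    # h[m-k, n-l] is zero outside its support
                    if 0 <= m - k < P and 0 <= n - l < Q:
                        acc += h[m - k][n - l] * f[k][l]
            g[m][n] = acc
    return g

# Blurring a one-pixel impulse with an averaging PSF just reproduces the PSF.
impulse = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
psf = [[0.25, 0.25], [0.25, 0.25]]
blurred = conv2d(impulse, psf)
```

Convolving an impulse with the PSF copies the PSF into the output, which is a quick sanity check on the index arithmetic.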

Figure 1.6: Illustrate the "pixelized" nature of all digital images.

We also have a discrete-space FT:

F[u, v] = Σ_{m=-∞}^{∞} Σ_{n=-∞}^{∞} f[m, n] e^{-jum} e^{-jvn}   (1.5)

where F[u, v] is analogous to the DTFT in 1D.

note: "Convolution in Time" is the same as "Multiplication in Frequency", which, as stated above, means:

g[m, n] = h[m, n] * f[m, n]   (1.6)

is the same as

G[u, v] = H[u, v] F[u, v]   (1.7)

Example 1.3: Magnitude of FT of Cameraman Image

"Discrete-Time Fourier Transform (DTFT)" <http://cnx.org/content/m10247/latest/>

Figure 1.7

To get a better image, we can use the fftshift command in MATLAB to center the Fourier transform. The resulting image is shown in Figure 1.8.

Figure 1.8

1.3 2D DFT

1.3.1 2D DFT

To perform image restoration (and many other useful image processing algorithms) on a computer, we need a Fourier transform (FT) that is discrete and two-dimensional. Starting from the discrete-space FT

F(u, v) = Σ_m Σ_n f[m, n] e^{-jum} e^{-jvn}   (1.8)

we sample it on an N×N grid:

F[k, l] = F(u, v) |_{u = 2πk/N, v = 2πl/N}   (1.9)

for k = {0, ..., N-1} and l = {0, ..., N-1}, giving

F[k, l] = Σ_{m=0}^{N-1} Σ_{n=0}^{N-1} f[m, n] e^{-j2πkm/N} e^{-j2πln/N}   (1.10)

where the above equation ((1.10)) has finite support for an N×N image.

1.3.1.1 Inverse 2D DFT

As with our regular Fourier transforms, the 2D DFT also has an inverse transform that allows us to reconstruct an image as a weighted combination of complex sinusoidal basis functions:

f[m, n] = (1/N²) Σ_{k=0}^{N-1} Σ_{l=0}^{N-1} F[k, l] e^{j2πkm/N} e^{j2πln/N}   (1.11)

Example 1.4: Periodic Extensions

This content is available online at <http://cnx.org/content/m10987/2.4/>.
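The 2D DFT pair just described can be sketched directly, assuming a small N×N image stored as a list of rows. `dft2` and `idft2` are our own illustrative names; a real implementation would use an FFT library instead of this O(N⁴) double sum:

```python
import cmath

def dft2(f):
    """2D DFT: F[k,l] = sum_m sum_n f[m,n] e^{-j2pi(km + ln)/N} for an NxN image."""
    N = len(f)
    W = cmath.exp(-2j * cmath.pi / N)
    return [[sum(f[m][n] * W ** (k * m + l * n)
                 for m in range(N) for n in range(N))
             for l in range(N)] for k in range(N)]

def idft2(F):
    """Inverse 2D DFT with the 1/N^2 normalization used in the text."""
    N = len(F)
    W = cmath.exp(2j * cmath.pi / N)
    return [[sum(F[k][l] * W ** (k * m + l * n)
                 for k in range(N) for l in range(N)) / N ** 2
             for n in range(N)] for m in range(N)]

img = [[1, 2], [3, 4]]
back = idft2(dft2(img))   # should reconstruct img (up to rounding)
```

The DC coefficient F[0, 0] is simply the sum of all pixels, and running the forward then inverse transform reproduces the image.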

Figure 1.9: Illustrate the periodic extension of images.

1.3.2 2D DFT and Convolution

The regular 2D convolution equation is

g[m, n] = Σ_{k=0}^{N-1} Σ_{l=0}^{N-1} f[k, l] h[m-k, n-l]   (1.12)

Example 1.5

Below we go through the steps of convolving two two-dimensional arrays. You can think of f as representing an image and h as representing a PSF, where h[m, n] = 0 for m, n > 1 and m, n < 0:

h = [ h[0,0]  h[0,1]
      h[1,0]  h[1,1] ]

f = [ f[0,0]    ...  f[0,N-1]
      ...       ...  ...
      f[N-1,0]  ...  f[N-1,N-1] ]

Step 1 (Flip h):

h[-m,-n] = [ h[1,1]  h[1,0]  0
             h[0,1]  h[0,0]  0
             0       0       0 ]   (1.13)

Step 2 (Convolve):

g[0, 0] = h[0, 0] f[0, 0]   (1.14)

We use the standard 2D convolution equation ((1.12)) to find the first element of our convolved image. In order to better understand what is happening, we can think of this visually. The basic idea is to take h[-m,-n] and place it "on top" of f[k, l], so that just the bottom-right element, h[0, 0], of h[-m,-n] overlaps with the top-left element, f[0, 0], of f[k, l]. Then, to get the next element of our convolved image, we slide the flipped matrix, h[-m,-n], over one element to the right and get the following result:

g[0, 1] = h[0, 0] f[0, 1] + h[0, 1] f[0, 0]   (1.15)

We continue in this fashion to find all of the elements of our convolved image. Using the above method we define the general formula to find a particular element of g[m, n] as:

g[m, n] = h[0, 0] f[m, n] + h[0, 1] f[m, n-1] + h[1, 0] f[m-1, n] + h[1, 1] f[m-1, n-1]   (1.16)

1.3.2.1 Circular Convolution

Exercise 1.1   (Solution on p. 59.)

What does H[k, l] F[k, l] produce? Due to periodic extension by the DFT (Figure 1.10):

Figure 1.10

Based on the above solution, we will let

g[m, n] = IDFT(H[k, l] F[k, l])

Using this equation, we can calculate the value for each position on our final image. Due to the periodic extension of the images, when circular convolution is applied we will observe a wrap-around effect. For example,

g[0, 0] = h[0, 0] f[0, 0] + h[1, 0] f[N-1, 0] + h[0, 1] f[0, N-1] + h[1, 1] f[N-1, N-1]   (1.17)

where the last three terms in (1.17) are a result of the wrap-around effect caused by the presence of the image's copies located all around it.

1.3.2.2 Zero Padding

If the support of h is M×M and that of f is N×N, then we zero pad f and h to (M+N-1)×(M+N-1) (see Figure 1.11).

Figure 1.11

note: With zero padding, Circular Convolution = Regular Convolution.

1.3.3 Computing the 2D DFT

F[k, l] = Σ_{m=0}^{N-1} Σ_{n=0}^{N-1} f[m, n] e^{-j2πkm/N} e^{-j2πln/N}   (1.18)
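The wrap-around effect described above can be seen numerically. In this sketch (names are ours), circular convolution indexes the image modulo N, exactly as if it were periodically extended; a pixel in the far corner then leaks into g[0, 0]:

```python
def circ_conv2d(f, h):
    """NxN circular convolution:
    g[m,n] = sum_{k,l} h[k,l] * f[(m-k) mod N, (n-l) mod N]."""
    N = len(f)
    H = len(h)
    g = [[0.0] * N for _ in range(N)]
    for m in range(N):
        for n in range(N):
            for k in range(H):
                for l in range(H):
                    # modular indexing = periodic extension of the image
                    g[m][n] += h[k][l] * f[(m - k) % N][(n - l) % N]
    return g

f = [[1, 0, 0], [0, 0, 0], [0, 0, 2]]   # pixels in opposite corners
h = [[1, 1], [1, 1]]                    # 2x2 PSF
g = circ_conv2d(f, h)
```

Here g[0][0] = 1·f[0][0] + 1·f[2][2] = 3: the corner pixel f[2][2] wraps around, which is precisely why one zero pads before using DFT products to compute a regular convolution.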

where, in the above equation, Σ_{n=0}^{N-1} f[m, n] e^{-j2πln/N} is simply a 1D DFT over n. This means that we will take the 1D FFT of each row; if we have N rows, then it will require N logN operations per row. We can rewrite this as

F[k, l] = Σ_{m=0}^{N-1} ( Σ_{n=0}^{N-1} f[m, n] e^{-j2πln/N} ) e^{-j2πkm/N}   (1.19)

where now we take the 1D FFT of each column, which means that if we have N columns, then it requires N logN operations per column. Therefore the overall complexity of a 2D FFT is

O(N² logN)

note: where N² equals the number of pixels in the image.

1.4 Images: 2D signals

1.4.1 Image Processing

Figure 1.12: Images are 2D functions f(x, y).

1.4.2 Linear Shift Invariant Systems

Figure 1.13

H is LSI if:

1. H(f(x - x₀, y - y₀)) = g(x - x₀, y - y₀)
2. H(α₁ f₁(x, y) + α₂ f₂(x, y)) = α₁ H(f₁(x, y)) + α₂ H(f₂(x, y))

for all images f₁, f₂ and scalars α₁, α₂.

LSI systems are expressed mathematically as 2D convolutions:

g(x, y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} h(x-α, y-β) f(α, β) dα dβ

where h(x, y) is the 2D impulse response (also called the point spread function).

This content is available online at <http://cnx.org/content/m10961/2.7/>.
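The row-column decomposition of the 2D DFT described above can be sketched as follows. This is our own illustration: it uses a direct 1D DFT in place of a true FFT (the factorization into row transforms followed by column transforms is the point), and checks it against the double-sum definition:

```python
import cmath

def dft1(x):
    """Direct 1D DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def dft2_rowcol(f):
    """2D DFT by the row-column method: 1D transform of every row, then every column."""
    rows = [dft1(row) for row in f]              # transform over n (rows)
    cols = list(zip(*rows))                      # transpose
    out = [dft1(list(c)) for c in cols]          # transform over m (columns)
    return [list(r) for r in zip(*out)]          # transpose back -> F[k][l]

def dft2_direct(f):
    """Double-sum definition, for comparison."""
    N = len(f)
    return [[sum(f[m][n] * cmath.exp(-2j * cmath.pi * (k * m + l * n) / N)
                 for m in range(N) for n in range(N))
             for l in range(N)] for k in range(N)]

f = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
A = dft2_rowcol(f)
B = dft2_direct(f)
```

Both methods agree; with an N logN routine for `dft1`, the row-column method is what gives the O(N² logN) total cost.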

1.4.3 2D Fourier Analysis

F(u, v) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x, y) e^{-jux} e^{-jvy} dx dy

where F is the 2D FT and u and v are frequency variables in x (u) and y (v).

Inverse 2D FT:

g(x, y) = (1/(2π)²) ∫_{-∞}^{∞} ∫_{-∞}^{∞} G(u, v) e^{jux} e^{jvy} du dv   (1.21)

2D convolution:

g(x, y) = h(x, y) * f(x, y) = ∫∫ h(x-α, y-β) f(α, β) dα dβ   (1.20)

2D complex exponentials are eigenfunctions of 2D LSI systems:

∫∫ h(x-α, y-β) e^{ju₀α} e^{jv₀β} dα dβ = ∫∫ h(α', β') e^{ju₀(x-α')} e^{jv₀(y-β')} dα' dβ'
  = e^{ju₀x} e^{jv₀y} ∫∫ h(α', β') e^{-ju₀α'} e^{-jv₀β'} dα' dβ' ≡ e^{ju₀x} e^{jv₀y} H(u₀, v₀)   (1.22)

where H(u₀, v₀) = ∫∫ h(α', β') e^{-ju₀α'} e^{-jv₀β'} dα' dβ' is the 2D Fourier transform of h(x, y) evaluated at frequencies u₀ and v₀. Consequently,

G(u, v) = H(u, v) F(u, v)

Figure 1.14

1.4.4 2D Sampling Theory

Figure 1.15: Think of the image as a 2D surface. We can sample the height of the surface using a 2D impulse array.

"Eigenfunctions of LTI Systems" <http://cnx.org/content/m10500/latest/>

Figure 1.16: Impulses spaced Δx apart in the horizontal direction and Δy in the vertical, where fs(x, y) = S(x, y) f(x, y) is the sampled image.

Figure 1.17: Multiplication in time ⇔ convolution in frequency:

Fs(u, v) = S(u, v) * F(u, v)

Figure 1.18: S(u, v) is a 2D impulse array in frequency.

Figure 1.19: F(u, v) is periodically replicated in the (u, v) frequency plane.

1.4.5 Nyquist Theorem

Assume that f(x, y) is bandlimited to ±Bx, ±By:

Figure 1.20

If we sample f(x, y) at spacings of Δx < π/Bx and Δy < π/By, then f(x, y) can be perfectly recovered from the samples by lowpass filtering:

Figure 1.21: Ideal lowpass filter: 1 inside the rectangle, 0 outside.

Aliasing in 2D:

Figure 1.22: (a) (b)

1.5 DFT as a Matrix Operation

1.5.1 Matrix Review

Recall:

• Vectors in R^N:  x = (x₀, x₁, ..., x_{N-1})^T,  xᵢ ∈ R

This content is available online at <http://cnx.org/content/m10962/2.5/>.

• Vectors in C^N:  x = (x₀, x₁, ..., x_{N-1})^T,  xᵢ ∈ C

• Transposition:
  a. transpose:  x^T = (x₀, x₁, ..., x_{N-1})
  b. conjugate transpose:  x^H = (x₀*, x₁*, ..., x_{N-1}*)

• Inner product:
  a. real:  x^T y = Σ_{i=0}^{N-1} xᵢ yᵢ
  b. complex:  x^H y = Σ_{n=0}^{N-1} xₙ* yₙ

• Matrix multiplication:  y = Ax, with entries yₖ = Σ_{n=0}^{N-1} aₖₙ xₙ, where A is the N×N matrix with entries aₖₙ.

• Matrix transposition: matrix transposition involves simply swapping the rows with columns, (A^T)ₖₙ = Aₙₖ. The Hermitian transpose additionally conjugates each entry: A^H = (A^T)*, i.e., (A^H)ₖₙ = (Aₙₖ)*.

"Inner Products" <http://cnx.org/content/m10755/latest/>

1.5.2 Representing the DFT as a Matrix Operation

Now let's represent the DFT in vector-matrix notation. Here x = (x[0], x[1], ..., x[N-1])^T is the vector of time samples and X = (X[0], X[1], ..., X[N-1])^T ∈ C^N is the vector of DFT coefficients. How are x and X related?

X[k] = Σ_{n=0}^{N-1} x[n] e^{-j(2π/N)kn}

so aₖₙ = e^{-j(2π/N)kn} = W_N^{kn}, where W_N ≡ e^{-j2π/N}. So

X = Wx

where X is the DFT vector, W is the matrix with entries W_N^{kn}, and x is the time-domain vector.

IDFT:

x[n] = (1/N) Σ_{k=0}^{N-1} X[k] e^{j(2π/N)nk}

where e^{j(2π/N)nk} = (W_N^{nk})*. So

x = (1/N) W^H X

where W^H is the matrix Hermitian transpose, and (1/N) W^H is the inverse DFT matrix.

"Discrete Fourier Transform (DFT)" <http://cnx.org/content/m10249/latest/>
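The matrix form of the DFT and its inverse can be checked with a short sketch (our own names; plain Python lists stand in for vectors and matrices):

```python
import cmath

def dft_matrix(N):
    """W with entries W[k][n] = e^{-j 2 pi k n / N}, so that X = W x."""
    return [[cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)]
            for k in range(N)]

def matvec(A, x):
    """Matrix-vector product y_k = sum_n A[k][n] x[n]."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

N = 4
W = dft_matrix(N)
x = [1, 2, 3, 4]
X = matvec(W, x)                                   # forward DFT: X = W x

# Hermitian transpose of W (W is symmetric, so this is the elementwise conjugate).
WH = [[W[n][k].conjugate() for n in range(N)] for k in range(N)]
x_back = [xi / N for xi in matvec(WH, X)]          # x = (1/N) W^H X
```

Applying (1/N) W^H to X recovers the original time samples, confirming that (1/N) W^H is the inverse DFT matrix.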

1.6 Fast Convolution Using the FFT

1.6.1 Important Application of the FFT

Exercise 1.2   (Solution on p. 59.)

How many complex multiplies and adds are required to convolve two N-pt sequences?

y[n] = Σ_{m=0}^{N-1} x[m] h[n-m]

So the total cost of direct convolution of two N-point sequences is O(N²). Instead:

1. Zero-pad these two sequences to length 2N-1 and take DFTs using the FFT algorithm: x[n] → X[k], h[n] → H[k]. The cost is O((2N-1) log(2N-1)) = O(N logN).
2. Multiply the DFTs: X[k] H[k]. The cost is O(2N-1) = O(N).
3. Take the inverse DFT using the FFT: X[k] H[k] → y[n]. The cost is O((2N-1) log(2N-1)) = O(N logN).

The total cost for convolution using the FFT algorithm is O(N logN). That is a huge savings (Figure 1.23).

1.6.2 Summary of the DFT

• x[n] is an N-point signal (Figure 1.23).

Figure 1.23

This content is available online at <http://cnx.org/content/m10963/2.6/>.

Figure 1.24

• X[k] = Σ_{n=0}^{N-1} x[n] e^{-j(2π/N)kn} = Σ_{n=0}^{N-1} x[n] W_N^{kn}

where W_N = e^{-j2π/N} is a "twiddle factor" and the first part is the basic DFT.

1.6.2.1 What is the DFT?

X[k] = X(F = k/N) = Σ_{n=0}^{N-1} x[n] e^{-j2πFn}

where X(F = k/N) is the DTFT of x[n] evaluated at digital frequency F = k/N. This is a sample of the DTFT. We can do frequency domain analysis on a computer!

1.6.2.2 Inverse DFT (IDFT)

x[n] = (1/N) Σ_{k=0}^{N-1} X[k] e^{j(2π/N)kn}

• Build x[n] using simple complex sinusoidal building-block signals.
• The amplitude of each complex sinusoidal building block in x[n] is (1/N) X[k].

1.6.2.3 Circular Convolution

x[n] ⊛ h[n] ↔(DFT) X[k] H[k]   (1.23)

1.6.2.4 Regular Convolution from Circular Convolution

• Zero pad x[n] and h[n] to length = length(x) + length(h) - 1.
• Zero padding increases frequency resolution in the DFT domain (Figure 1.25).

Figure 1.25: (a) 8-pt DFT of an 8-pt signal; (b) 16-pt DFT of the same signal padded with 8 additional zeros.

1.6.2.5 The Fast Fourier Transform (FFT)

• An efficient computational algorithm for calculating the DFT.
• "Divide and conquer": break the signal into even and odd samples; keep taking shorter and shorter DFTs, then build the N-pt DFT by cleverly combining the shorter DFTs.
• N-pt DFT: O(N²) ↓ O(N log₂N)

1.6.2.6 Fast Convolution

• O(N²) ↓ O(N log₂N)
• Use FFTs to compute the circular convolution of zero-padded signals.
• Much faster than regular convolution if the signal lengths are long.
• See Figure 1.26.

Figure 1.26

1.7 The FFT Algorithm

Definition 1.1: FFT (Fast Fourier Transform)
An efficient computational algorithm for computing the DFT.

This content is available online at <http://cnx.org/content/m10964/2.6/>.
"Discrete Fourier Transform (DFT)" <http://cnx.org/content/m10249/latest/>
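The fast-convolution recipe (zero-pad, transform, multiply, inverse-transform) can be sketched as follows. This is our own illustration, and for brevity it uses direct O(N²) DFTs where a real implementation would substitute an FFT routine; the pipeline is identical either way:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def fast_conv(x, h):
    """Linear convolution via DFT products of sequences zero-padded to
    length(x) + length(h) - 1, so circular = regular convolution."""
    L = len(x) + len(h) - 1
    xp = list(x) + [0] * (L - len(x))
    hp = list(h) + [0] * (L - len(h))
    X, H = dft(xp), dft(hp)
    y = idft([Xk * Hk for Xk, Hk in zip(X, H)])
    return [v.real for v in y]      # result of a real convolution is real

y = fast_conv([1, 2, 3], [1, 1])
```

Direct convolution of [1, 2, 3] with [1, 1] gives [1, 3, 5, 3], and the DFT route reproduces it (up to rounding), because the zero padding removed the wrap-around.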

1.7.1 The Fast Fourier Transform FFT

The DFT can be expensive to compute directly:

X[k] = Σ_{n=0}^{N-1} x[n] e^{-j2π(k/N)n},  0 ≤ k ≤ N-1

For each k, we must execute:

• N complex multiplies
• N-1 complex adds

The total cost of direct computation of an N-point DFT is:

• N² complex multiplies
• N(N-1) complex adds

How many adds and mults of real numbers are required? This "O(N²)" computation rapidly gets out of hand as N gets large:

N:   1    10    100     1000    10⁶
N²:  1    100   10,000  10⁶     10¹²

Table 1.1

Figure 1.27

The FFT provides us with a much more efficient way of computing the DFT. The FFT requires only "O(N logN)" computations to compute the N-point DFT:

N:          10    100     1000   10⁶
N²:         100   10,000  10⁶    10¹²
N log₁₀N:   10    200     3000   6×10⁶

Table 1.2

How long is 10¹² μsec? More than 10 days! How long is 6×10⁶ μsec?

Figure 1.28

The FFT and digital computers revolutionized DSP (1960-1980).

1.7.2 How does the FFT work?

• The FFT exploits the symmetries of the complex exponentials W_N^{kn} = e^{-j(2π/N)kn}
• W_N^{kn} are called "twiddle factors"

Rule 1.1: Complex Conjugate Symmetry

W_N^{k(N-n)} = W_N^{-kn} = (W_N^{kn})*

i.e., e^{-j2π(k/N)(N-n)} = e^{j2π(k/N)n} = (e^{-j2π(k/N)n})*

Rule 1.2: Periodicity in n and k

W_N^{kn} = W_N^{k(N+n)} = W_N^{(k+N)n}

i.e., e^{-j(2π/N)kn} = e^{-j(2π/N)k(N+n)} = e^{-j(2π/N)(k+N)n}, where W_N = e^{-j2π/N}.

1.7.3 Decimation in Time FFT

• Just one of many different FFT algorithms.
• The idea is to build a DFT out of smaller and smaller DFTs by decomposing x[n] into smaller and smaller subsequences.
• Assume N = 2^m (a power of 2).

1.7.3.1 Derivation

N is even, so we can compute X[k] by separating x[n] into two subsequences, each of length N/2: the even samples x[2r] and the odd samples x[2r+1], where 0 ≤ r ≤ N/2 - 1. So

X[k] = Σ_{n=0}^{N-1} x[n] W_N^{kn},  0 ≤ k ≤ N-1
     = Σ_{r=0}^{N/2-1} x[2r] W_N^{2kr} + Σ_{r=0}^{N/2-1} x[2r+1] W_N^{(2r+1)k}
     = Σ_{r=0}^{N/2-1} x[2r] (W_N²)^{kr} + W_N^k Σ_{r=0}^{N/2-1} x[2r+1] (W_N²)^{kr}   (1.24)

where W_N² = e^{-j(2π/N)·2} = e^{-j2π/(N/2)} = W_{N/2}. So

X[k] = Σ_{r=0}^{N/2-1} x[2r] W_{N/2}^{kr} + W_N^k Σ_{r=0}^{N/2-1} x[2r+1] W_{N/2}^{kr}

where the first sum is the N/2-point DFT of the even samples (G[k]) and the second is the N/2-point DFT of the odd samples (H[k]):

X[k] = G[k] + W_N^k H[k],  0 ≤ k ≤ N-1

note: Decomposition of an N-point DFT as a sum of two N/2-point DFTs.

Why would we want to do this? Because it is more efficient! Recall that the cost to compute an N-point DFT is approximately N² complex mults and adds.

But decomposition into two N/2-point DFTs plus the combination requires only

(N/2)² + (N/2)² + N = N²/2 + N

where the first two terms are the numbers of complex mults and adds for the N/2-point DFTs, G[k] and H[k], and the last term is the number for the combination.

Example 1.6: Savings

For N = 1000,

N² = 10⁶

N²/2 + N = 10⁶/2 + 1000 ≈ 10⁶/2

because 1000 is small compared to 500,000.

So why stop here?! Keep decomposing. Break each of the N/2-point DFTs into two N/4-point DFTs, etc. Replacing each N/2-point DFT with two N/4-point DFTs reduces the cost to

4(N/4)² + 2N = N²/2² + 2N

and in general, after p stages the cost is N²/2^p + pN. As we keep going, p = {3, 4, ..., m}, where m = log₂N. We can keep decomposing, N/2 → N/4 → ... → N/2^{m-1} → N/2^m = 1, i.e., m = log₂N times.

Computational cost: an N-pt DFT becomes two N/2-pt DFTs, and so on; the total cost is

N²/2^{log₂N} + N log₂N = N + N log₂N

complex adds and mults. For large N, cost ≈ N log₂N, or "O(N log₂N)", since N ≪ N log₂N for large N.

Figure 1.29: N = 8 point FFT. Summing nodes; W_N^k twiddle multiplication factors.

note: Weird order of time samples.
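The recursion just derived, X[k] = G[k] + W_N^k H[k] with G and H the N/2-point DFTs of the even and odd samples, translates into a few lines of code. A minimal sketch (our own implementation, assuming the length is a power of 2), checked against the direct DFT:

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT: X[k] = G[k] + W_N^k H[k]."""
    N = len(x)
    if N == 1:
        return list(x)
    G = fft(x[0::2])    # N/2-point DFT of even-indexed samples
    H = fft(x[1::2])    # N/2-point DFT of odd-indexed samples
    X = [0] * N
    for k in range(N // 2):
        w = cmath.exp(-2j * cmath.pi * k / N)   # twiddle factor W_N^k
        X[k] = G[k] + w * H[k]
        # Periodicity: G[k+N/2] = G[k], H[k+N/2] = H[k], and W_N^{k+N/2} = -W_N^k
        X[k + N // 2] = G[k] - w * H[k]
    return X

def dft_direct(x):
    """Direct O(N^2) DFT, for comparison."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1, 2, 3, 4, 0, 0, 0, 0]
```

The second half of the output, X[k + N/2] = G[k] - W_N^k H[k], is where the butterfly structure and the "weird order of time samples" come from.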

This is called "butterflies."

Figure 1.30

1.8 The DFT: Frequency Domain with a Computer Analysis

1.8.1 Introduction

We just covered ideal (and non-ideal) (time) sampling of CT signals (Section 1.10). This enabled DT signal processing solutions for CT applications (Figure 1.31):

Figure 1.31

Much of the theoretical analysis of such systems relied on frequency domain representations. How do we carry out these frequency domain analyses on the computer? Recall the following relationships:

x[n] ↔(DTFT) X(ω)
x(t) ↔(CTFT) X(Ω)

where ω and Ω are continuous frequency variables.

1.8.1.1 Sampling DTFT

Consider the DTFT of a discrete-time (DT) signal x[n]. Assume x[n] is of finite duration N (i.e., an N-point signal).

X(ω) = Σ_{n=0}^{N-1} x[n] e^{-jωn}   (1.25)

where X(ω) is the continuous function that is indexed by the real-valued parameter -π ≤ ω ≤ π. The other function, x[n], is a discrete function that is indexed by integers.

We want to work with X(ω) on a computer. Why not just sample X(ω)?

X[k] = X(2πk/N) = Σ_{n=0}^{N-1} x[n] e^{-j2π(k/N)n}   (1.26)

This content is available online at <http://cnx.org/content/m10992/2.3/>.

In (1.26) we sampled at ω = 2πk/N, where k = {0, 1, ..., N-1}, and X[k] for k = {0, ..., N-1} is called the Discrete Fourier Transform (DFT) of x[n].

Figure 1.32: Finite Duration DT Signal

The DTFT of the image in Figure 1.32 (Finite Duration DT Signal) is written as follows:

X(ω) = Σ_{n=0}^{N-1} x[n] e^{-jωn}   (1.27)

where ω is any 2π-interval, for example -π ≤ ω ≤ π.

Figure 1.33: Sample X(ω)

where again we sampled at ω = 2πk/N, where k = {0, 1, ..., M-1}. For example, we take M = 10. (This is precisely how we would plot X(ω) in Matlab.) In the following section (Choosing M) we will discuss in more detail how we should choose M, the number of samples in the 2π interval.

1.8.1.1.1 Choosing M

1.8.1.1.1.1 Case 1

Given N (the length of x[n]), choose M ≫ N to obtain a dense sampling of the DTFT (Figure 1.34):

Figure 1.34

1.8.1.1.1.2 Case 2

Choose M as small as possible (to minimize the amount of computation). In general, we require M ≥ N in order to represent all the information in x[n], n = {0, ..., N-1}. Let's concentrate on M = N:

x[n] ↔(DFT) X[k]

for n = {0, ..., N-1} and k = {0, ..., N-1}: N numbers ↔ N numbers.

1.8.2 Discrete Fourier Transform (DFT)

Define

X[k] ≡ X(2πk/N)   (1.28)

where N = length(x[n]) and k = {0, ..., N-1}. In this case, M = N.

DFT:

X[k] = Σ_{n=0}^{N-1} x[n] e^{-j2π(k/N)n}   (1.29)

Inverse DFT (IDFT):

x[n] = (1/N) Σ_{k=0}^{N-1} X[k] e^{j2π(k/N)n}   (1.30)

1.8.2.1 Interpretation

Represent x[n] in terms of a sum of N complex sinusoids of amplitudes X[k] and frequencies ωk = 2πk/N, k ∈ {0, ..., N-1}.

note: A Fourier Series with fundamental frequency 2π/N.

1.8.2.1.1 Remark 1

IDFT treats x[n] as though it were N-periodic.

x[n] = (1/N) Σ_{k=0}^{N-1} X[k] e^{j2π(k/N)n}   (1.31)

where n ∈ {0, ..., N-1}.

Exercise 1.3   (Solution on p. 59.)

What about other values of n?

1.8.2.1.2 Remark 2

Proof that the IDFT inverts the DFT: for n ∈ {0, ..., N-1},

(1/N) Σ_{k=0}^{N-1} X[k] e^{j2π(k/N)n} = (1/N) Σ_{k=0}^{N-1} Σ_{m=0}^{N-1} x[m] e^{-j2π(k/N)m} e^{j2π(k/N)n} = ???   (1.32)

Example 1.8: Computing DFT

Given the following discrete-time signal (Figure 1.35) with N = 4, we will compute the DFT using two different methods (the DFT Formula and Sample DTFT):

Figure 1.35

"The Complex Exponential" <http://cnx.org/content/m10060/latest/>

1. DFT Formula

X[k] = Σ_{n=0}^{N-1} x[n] e^{-j2π(k/N)n}
     = 1 + e^{-j2π(k/4)} + e^{-j2π(k/4)2} + e^{-j2π(k/4)3}
     = 1 + e^{-j(π/2)k} + e^{-jπk} + e^{-j(3π/2)k}   (1.33)

Using the above equation, we can solve and get the following results:

X[0] = 4,  X[1] = 0,  X[2] = 0,  X[3] = 0

2. Sample DTFT

Using the same figure, we will take the DTFT of the signal and get the following equations:

X(ω) = Σ_{n=0}^{3} e^{-jωn} = (1 - e^{-j4ω}) / (1 - e^{-jω})   (1.34)

Our sample points will be:

ωk = 2πk/4 = (π/2)k

where k = {0, 1, 2, 3} (Figure 1.36).

Figure 1.36
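The worked example above is easy to confirm numerically. A small sketch (ours) applying the DFT formula to the length-4 all-ones signal of the example:

```python
import cmath

def dft(x):
    """DFT formula X[k] = sum_n x[n] e^{-j 2 pi k n / N}."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# x[n] = 1 for n = 0..3, as in the example; expect X = [4, 0, 0, 0].
X = [round(abs(v), 9) for v in dft([1, 1, 1, 1])]
```

All the signal's energy lands in the DC bin X[0], matching the hand computation: the four unit-magnitude phasors cancel for every k except k = 0.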

1.8.3 Periodicity of the DFT

DFT X[k] consists of samples of the DTFT, so X(ω), a 2π-periodic DTFT signal, can be converted to X[k], an N-periodic DFT.

X[k] = Σ_{n=0}^{N-1} x[n] e^{-j2π(k/N)n}   (1.35)

where e^{-j2π(k/N)n} is an N-periodic basis function (see Figure 1.37).

Figure 1.37

Also, recall,

x[n] = (1/N) Σ_{k=0}^{N-1} X[k] e^{j2π(k/N)n}
     = (1/N) Σ_{k=0}^{N-1} X[k] e^{j2π(k/N)(n+mN)}
     = ???   (1.36)

Example 1.9: Illustration

Figure 1.38

note: When we deal with the DFT, we need to remember that, in effect, it treats the signal as an N-periodic sequence.

1.8.4 A Sampling Perspective

Think of sampling the continuous function X(ω), as depicted in Figure 1.39. S(ω) will represent the sampling function applied to X(ω) and is illustrated in Figure 1.39 as well. This will result in our discrete-time sequence, X[k].

Figure 1.39

note: Remember that multiplication in the frequency domain is equal to convolution in the time domain!

1.8.4.1 Inverse DTFT of S(ω)

S(ω) = Σ_{k=-∞}^{∞} δ(ω - 2πk/N)   (1.37)

Given the above equation, we can take the inverse DTFT and get the following equation:

s[n] = (N/2π) Σ_{m=-∞}^{∞} δ[n - mN] ≡ S[n]   (1.38)

Exercise 1.4   (Solution on p. 59.)

Why does (1.38) equal S[n]?

So, in the time domain we have (Figure 1.40):

Figure 1.40

1.8.5 Connections

Figure 1.41

Combine the signals in Figure 1.41 to get the signals in Figure 1.42.

Figure 1.42

1.9 Discrete-Time Processing of CT Signals

1.9.1 DT Processing of CT Signals

Figure 1.43: DSP System

1.9.1.1 Analysis

Y(ω) = X(ω) G(ω)   (1.39)

where we know that Y(ω) = X(ω) G(ω) and G(ω) is the frequency response of the DT LTI system.

note: ω ≡ ΩT

Also, remember that Yc(Ω) = HLP(Ω) Y(ΩT). So,

Yc(Ω) = HLP(Ω) G(ΩT) X(ΩT)   (1.40)

where Yc(Ω) and HLP(Ω) are CTFTs and G(ΩT) and X(ΩT) are DTFTs. From sampling,

X(ω) = (2π/T) Σ_{k=-∞}^{∞} Xc((ω - 2πk)/T)

or

X(ΩT) = (2π/T) Σ_{k=-∞}^{∞} Xc(Ω - kΩs)

Therefore our final output signal, Yc(Ω), will be:

Yc(Ω) = HLP(Ω) G(ΩT) (2π/T) Σ_{k=-∞}^{∞} Xc(Ω - kΩs)   (1.41)

This content is available online at <http://cnx.org/content/m10993/2.2/>.

Now, if Xc(Ω) is bandlimited to [-Ωs/2, Ωs/2] and we use the usual lowpass reconstruction filter in the D/A (Figure 1.44):

Figure 1.44

Then,

Yc(Ω) = { G(ΩT) Xc(Ω)  if |Ω| < Ωs/2
        { 0            otherwise      (1.42)

1.9.1.2 Summary

For bandlimited signals sampled at or above the Nyquist rate, we can relate the input and output of the DSP system by:

Yc(Ω) = Geff(Ω) Xc(Ω)   (1.43)

where

Geff(Ω) = { G(ΩT)  if |Ω| < Ωs/2
          { 0      otherwise

Figure 1.45

1.9.1.2.1 Note

Geff(Ω) is LTI if and only if the following two conditions are satisfied:

1. G(ω) is LTI (in DT).
2. Xc(t) is bandlimited and the sampling rate is equal to or greater than the Nyquist rate.

For example, if we had a simple pulse described by

Xc(t) = u(t - T0) - u(t - T1)

where T1 > T0. If the sampling period T > T1 - T0, then some samples might "miss" the pulse while others might not be "missed." This is what we term time-varying behavior.

Example 1.10

Figure 1.46

If 2π/T > 2B and ω1 < BT, determine and sketch Yc(Ω) using Figure 1.46.

1.9.2 Application: 60 Hz Noise Removal

Figure 1.47

Unfortunately, in real-world situations electrodes also pick up ambient 60 Hz signals from lights, computers, etc. In fact, usually this "60 Hz noise" is much greater in amplitude than the EKG signal shown in Figure 1.47. Figure 1.48 shows the EKG signal; it is barely noticeable as it has become overwhelmed by noise.

48: Our EKG signal.2. y (t). 1.1 DSP Solution Figure 1. the minimum rate should be 120 Hz. is overwhelmed by noise.2 Sampling Period/Rate First we must note that |Y (Ω) | is bandlimited to ±60 Hz.49 Figure 1.50 1.2. In order to get the best results we should set fs = 240Hz .9.9.37 Figure 1. Therefore.

Ωs = 2π (240) rad/s

1.9.2.3 Digital Filter

Therefore, we want to design a digital filter that will remove the 60Hz component and preserve the rest.

Figure 1.51

Figure 1.52

1.10 Sampling CT Signals: A Frequency Domain Perspective

1.10.1 Understanding Sampling in the Frequency Domain

We want to relate xc (t) directly to x [n]. Compute the CTFT of

xs (t) = Σ_{n=−∞}^{∞} xc (nT) δ (t − nT)

This content is available online at <http://cnx.org/content/m10994/2.2/>.

Xs (Ω) = ∫_{−∞}^{∞} Σ_{n=−∞}^{∞} xc (nT) δ (t − nT) e^{−jΩt} dt
       = Σ_{n=−∞}^{∞} xc (nT) ∫_{−∞}^{∞} δ (t − nT) e^{−jΩt} dt
       = Σ_{n=−∞}^{∞} x [n] e^{−jΩnT}
       = Σ_{n=−∞}^{∞} x [n] e^{−jωn}   (1.44)

where ω ≡ ΩT and X (ω) is the DTFT of x [n].

note:

Xs (Ω) = (1/T) Σ_{k=−∞}^{∞} Xc (Ω − kΩs)

X (ω) = (1/T) Σ_{k=−∞}^{∞} Xc (Ω − kΩs) = (1/T) Σ_{k=−∞}^{∞} Xc ((ω − 2πk) / T)   (1.45)

where this last part is 2π-periodic.

1.10.1.1 Sampling

Figure 1.53
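The 2π-periodicity of X (ω) in (1.45) is exactly aliasing: CT frequencies that differ by a multiple of Ωs produce identical sample sequences. A quick numerical check, assuming a 240 Hz sampling rate as in the EKG example:

```python
import numpy as np

fs = 240.0                  # sampling rate in Hz, so Omega_s corresponds to 240 Hz
n = np.arange(32)

x_60 = np.sin(2 * np.pi * 60 * n / fs)     # 60 Hz sinusoid, sampled
x_300 = np.sin(2 * np.pi * 300 * n / fs)   # 300 Hz = 60 + 240 Hz: a shifted replica in (1.45)

# The two CT signals differ by one multiple of the sampling rate,
# so their samples coincide exactly (up to floating-point error).
print(np.max(np.abs(x_60 - x_300)))
```

This is why the shifted copies Xc (Ω − kΩs) in (1.45) all land on the same digital frequency.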

Figure 1.54

Example 1.11: Speech
Speech is intelligible if bandlimited by a CT lowpass filter to the band ±4 kHz. We can sample speech as slowly as _____?

Figure 1.55: Note that there is no mention of T or Ωs!

1.10.2 Relating x[n] to sampled x(t)

Recall the following equality:

xs (t) = Σ_n x (nT) δ (t − nT)

41 Figure 1.47) Xs (Ω) ≡ X (ΩT ) . 1 X α Ω α (1.46) (1.56 Recall the CTFT relation: x (αt) ↔ 1 α where α is a scaling of time and is a scaling in frequency.

1.11 Filtering with the DFT

1.11.1 Introduction

Figure 1.57

y [n] = x [n] ∗ h [n] = Σ_{k=−∞}^{∞} x [k] h [n − k]   (1.48)

Y (ω) = X (ω) H (ω)   (1.49)

Assume that H (ω) is specified.

Exercise 1.5   (Solution on p. 59.)
How can we implement X (ω) H (ω) in a computer?

Recall that the DFT treats N-point sequences as if they are periodically extended (Figure 1.58).

This content is available online at <http://cnx.org/content/m11022/2.3/>.

Figure 1.58

1.11.2 Compute IDFT of Y[k]

~y [n] = (1/N) Σ_{k=0}^{N−1} Y [k] e^{j2πkn/N}
       = (1/N) Σ_{k=0}^{N−1} X [k] H [k] e^{j2πkn/N}
       = (1/N) Σ_{k=0}^{N−1} (Σ_{m=0}^{N−1} x [m] e^{−j2πkm/N}) H [k] e^{j2πkn/N}
       = Σ_{m=0}^{N−1} x [m] ((1/N) Σ_{k=0}^{N−1} H [k] e^{j2πk(n−m)/N})
       = Σ_{m=0}^{N−1} x [m] h [((n − m))N]   (1.50)

And the IDFT periodically extends h [n]: h [n − m] = h [((n − m))N]

This computes as shown in Figure 1.59.

Figure 1.59

~y [n] = Σ_{m=0}^{N−1} x [m] h [((n − m))N]   (1.51)

This is called circular convolution and is denoted by the symbol shown in Figure 1.60.

Figure 1.60: The above symbol for the circular convolution is for an N-periodic extension.

1.11.2.1 DFT Pair

Figure 1.61

Note that in general:

Figure 1.62

Example 1.12: Regular vs. Circular Convolution
To begin with, we are given the following two length-3 signals:

x [n] = {1, 2, 3}
h [n] = {1, 0, 2}

We can zero-pad these signals so that we have the following discrete sequences:

x [n] = {. . . , 0, 1, 2, 3, 0, . . . }
h [n] = {. . . , 0, 1, 0, 2, 0, . . . }

where x [0] = 1 and h [0] = 1.

• Regular Convolution:

y [n] = Σ_{m=0}^{2} x [m] h [n − m]   (1.52)

Using the above convolution formula (refer to the link if you need a review of convolution), we can calculate the resulting values y [0] to y [4]. Recall that because we have two length-3 signals, our convolved signal will be length-5.

· n = 0:   y [0] = 1 × 1 + 2 × 0 + 3 × 0 = 1   (1.53)
· n = 1:   y [1] = 1 × 0 + 2 × 1 + 3 × 0 = 2   (1.54)

"Discrete-Time Convolution" <http://cnx.org/content/m10087/latest/>

· n = 2:   y [2] = 1 × 2 + 2 × 0 + 3 × 1 = 5   (1.55)
· n = 3:   y [3] = 4   (1.56)
· n = 4:   y [4] = 6   (1.57)

Regular Convolution Result

Figure 1.63: Result is finite duration, not periodic!

• Circular Convolution:

~y [n] = Σ_{m=0}^{2} x [m] h [((n − m))N]   (1.58)

And now with circular convolution our h [n] changes and becomes a periodically extended signal:

h [((n))N] = {. . . , 1, 0, 2, 1, 0, 2, 1, 0, 2, . . . }   (1.59)

· n = 0:   ~y [0] = 1 × 1 + 2 × 2 + 3 × 0 = 5   (1.60)

· n = 1:   ~y [1] = 1 × 0 + 2 × 1 + 3 × 2 = 8   (1.61)
· n = 2:   ~y [2] = 5   (1.62)
· n = 3:   ~y [3] = 5   (1.63)
· n = 4:   ~y [4] = 8   (1.64)

Circular Convolution Result

Figure 1.64: Result is 3-periodic.

Figure 1.65 (Circular Convolution from Regular) illustrates the relationship between circular convolution and regular convolution using the previous two figures:

Circular Convolution from Regular

Figure 1.65: The left plot (the circular convolution results) has a "wrap-around" effect due to periodic extension.

1.11.2.2 Regular Convolution from Periodic Convolution

1. "Zero-pad" x [n] and h [n] to avoid the overlap (wrap-around) effect. We will zero-pad the two signals to a length-5 signal (5 being the duration of the regular convolution result):

x [n] = {1, 2, 3, 0, 0}
h [n] = {1, 0, 2, 0, 0}

2. Now take the DFTs of the zero-padded signals:

~y [n] = (1/N) Σ_{k=0}^{4} X [k] H [k] e^{j2πkn/5} = Σ_{m=0}^{4} x [m] h [((n − m))5]   (1.65)

Now we can plot this result (Figure 1.66):

Figure 1.66: The sequence from 0 to 4 (the underlined part of the sequence) is the regular convolution result. From this illustration we can see that it is 5-periodic!

note: We can compute the regular convolution result of a convolution of an M-point signal x [n] with an N-point signal h [n] by padding each signal with zeros to obtain two length-(M + N − 1) sequences and computing the circular convolution (or equivalently computing the IDFT of H [k] X [k], the product of the DFTs of the zero-padded signals) (Figure 1.67).
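This zero-padding recipe is easy to verify numerically with the length-3 signals of the example; np.fft implements the DFT/IDFT pair used here:

```python
import numpy as np

x = np.array([1, 2, 3])
h = np.array([1, 0, 2])

# Length-3 circular convolution via the DFT: exhibits the wrap-around effect.
y_circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# Zero-pad both signals to length M + N - 1 = 5 to get the regular convolution.
L = len(x) + len(h) - 1
y_reg = np.real(np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)))

print(np.round(y_circ))   # [5. 8. 5.]  -- one period of the circular result
print(np.round(y_reg))    # [1. 2. 5. 4. 6.]
print(np.convolve(x, h))  # [1 2 5 4 6] -- matches the regular convolution
```

The padded DFT product reproduces np.convolve exactly, while the unpadded product gives the 3-periodic wrap-around values computed in the example.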

Figure 1.67: Note that the lower two images are simply the top images that have been zero-padded.

1.11.3 DSP System

Figure 1.68: The system has a length-N impulse response h [n].

1. Sample the finite-duration continuous-time input x (t) to get x [n], where n = {0, . . . , M − 1}.
2. Zero-pad x [n] and h [n] to length M + N − 1.
3. Compute the DFTs X [k] and H [k].
4. Compute the IDFT of X [k] H [k] to get ~y [n]; then y [n] = ~y [n] for n = {0, . . . , M + N − 2}.
5. Reconstruct y (t).

1.12 Ideal Reconstruction of Sampled Signals

1.12.1 Reconstruction of Sampled Signals

How do we go from x [n] back to CT (Figure 1.69)?

Figure 1.69

1.12.1.1 Step 1

Place x [n] into CT on an impulse train s (t) (Figure 1.70):

xs (t) = Σ_{n=−∞}^{∞} x [n] δ (t − nT)   (1.66)

Figure 1.70

1.12.1.2 Step 2

Pass xs (t) through an ideal lowpass filter HLP (Ω) (Figure 1.71).

Figure 1.71

If we had no aliasing, then xr (t) = xc (t), where x [n] = xc (nT).

1.12.2 Ideal Reconstruction System

Figure 1.72

In Frequency Domain:

1. Xs (Ω) = X (ΩT), where X (ΩT) is the DTFT of x [n] at digital frequency ω = ΩT.
2. Xr (Ω) = HLP (Ω) Xs (Ω)

note: Xr (Ω) = HLP (Ω) X (ΩT)

In Time Domain:

1. xs (t) = Σ_{n=−∞}^{∞} x [n] δ (t − nT)
2. xr (t) = (Σ_{n=−∞}^{∞} x [n] δ (t − nT)) ∗ hLP (t)

note:

hLP (t) = sinc (πt/T) = sin (πt/T) / (πt/T)   (1.67)

so

xr (t) = Σ_{n=−∞}^{∞} x [n] sinc (π (t − nT) / T)

Figure 1.73

hLP (t) "interpolates" the values of x [n] to generate xr (t) (Figure 1.74).

Figure 1.74

Sinc Interpolator

xr (t) = Σ_{n=−∞}^{∞} x [n] sin ((π/T) (t − nT)) / ((π/T) (t − nT))   (1.68)

This content is available online at <http://cnx.org/content/m11044/2.3/>.
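A direct numerical sketch of the sinc interpolator (1.68), using a hypothetical 10 Hz sinusoid sampled at 100 Hz (well above Nyquist). With only a finite number of samples the reconstruction is approximate, but the error is small away from the edges of the record:

```python
import numpy as np

def sinc_reconstruct(x, T, t):
    """x_r(t) = sum_n x[n] sinc(pi (t - nT) / T), truncated to the available samples."""
    n = np.arange(len(x))
    # np.sinc(u) = sin(pi u) / (pi u), so np.sinc((t - nT)/T) matches Eq. (1.68)
    return np.sum(x[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T), axis=0)

T = 0.01                                   # sampling period (100 Hz)
n = np.arange(64)
x = np.sin(2 * np.pi * 10 * n * T)         # samples of a 10 Hz sinusoid

t = np.array([0.255])                      # a point between samples, mid-record
xr = sinc_reconstruct(x, T, t)
print(abs(xr[0] - np.sin(2 * np.pi * 10 * 0.255)))  # small reconstruction error
```

With infinitely many samples of a bandlimited signal the formula is exact; the residual here comes only from truncating the sum.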

1.13 Amplitude Quantization

The Sampling Theorem says that if we sample a bandlimited signal s (t) fast enough, it can be recovered without error from its samples s (nTs), n ∈ {. . . , −1, 0, 1, . . . }. Sampling is only the first phase of acquiring data into a computer: computational processing further requires that the samples be quantized: analog values are converted into digital form. In short, we will have performed analog-to-digital (A/D) conversion.

This content is available online at <http://cnx.org/content/m0051/2.22/>.
"Signals Represent Information": Section Digital Signals <http://cnx.org/content/m0001/latest/#digital>

Figure 1.75: (a)–(b) A three-bit A/D converter assigns voltage in the range [−1, 1] to one of eight integers between 0 and 7. For example, all inputs having values lying between 0.5 and 0.75 are assigned the integer value six and, upon conversion back to an analog value, they all become 0.625. The width of a single quantization interval ∆ equals 2/2^B, where B is the number of bits used in the A/D conversion process (3 in the case depicted here). The bottom panel shows a signal going through the analog-to-digital conversion: first it is sampled, then amplitude-quantized to three bits. Note how the sampled signal waveform becomes distorted after amplitude quantization. For example, the two signal values between 0.5 and 0.75 become 0.625. This distortion is irreversible; it can be reduced (but not eliminated) by using more bits in the A/D converter.

A phenomenon reminiscent of the errors incurred in representing numbers on a computer prevents signal amplitudes from being converted with no error into a binary number representation. In analog-to-digital conversion, the signal is assumed to lie within a predefined range. Assuming we can scale the signal without affecting the information it expresses, we'll define this range to be [−1, 1]. Furthermore, the A/D converter assigns amplitude values in this range to a set of integers. A B-bit converter produces one of the integers 0, 1, . . . , 2^B − 1 for each sampled input. Figure 1.75 shows how a three-bit A/D converter assigns input values to the integers. We define a quantization interval to be the range of values assigned to the same integer. Thus, for our example three-bit A/D converter, the quantization interval ∆ is 0.25; in general, it is 2/2^B.

Exercise 1.6   (Solution on p. 59.)
Recalling the plot of average daily highs in this frequency domain problem, why is this plot so jagged? Interpret this effect in terms of analog-to-digital conversion.

Because values lying anywhere within a quantization interval are assigned the same value for computer processing, the original amplitude value cannot be recovered without error. Typically, the D/A converter, the device that converts integers to amplitudes, assigns an amplitude equal to the value lying halfway in the quantization interval. The integer 6 would be assigned to the amplitude 0.625 in this scheme.

"Frequency Domain Problems", Problem 5 <http://cnx.org/content/m10350/latest/#i4>
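A minimal sketch of the B-bit A/D mapping and midpoint D/A described above; the function names are ours, not from the module:

```python
import numpy as np

def ad_convert(s, B):
    """B-bit A/D: map amplitudes in [-1, 1] to the integers 0 .. 2**B - 1."""
    delta = 2 / 2**B                          # quantization interval width
    q = np.floor((s + 1) / delta).astype(int)
    return np.clip(q, 0, 2**B - 1)            # s = 1 falls in the top interval

def da_convert(q, B):
    """D/A: assign the amplitude lying halfway in each quantization interval."""
    delta = 2 / 2**B
    return -1 + (q + 0.5) * delta

# Three-bit example from Figure 1.75: inputs in [0.5, 0.75) -> integer 6 -> 0.625
print(ad_convert(np.array([0.6]), 3))   # [6]
print(da_convert(np.array([6]), 3))     # [0.625]
```

Every input between 0.5 and 0.75 maps to the same integer and comes back as 0.625, which is the irreversible distortion noted in the figure caption.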

The error introduced by converting a signal from analog to digital form by sampling and amplitude quantization, then back again, would be half the quantization interval for each amplitude value. Thus, the so-called A/D error equals half the width of a quantization interval: 1/2^B. As we have fixed the input-amplitude range, the more bits available in the A/D converter, the smaller the quantization error.

To analyze the amplitude quantization error more deeply, we need to compute the signal-to-noise ratio, which equals the ratio of the signal power and the quantization error power. Assuming the signal is a sinusoid, the signal power is the square of the rms amplitude: power (s) = (1/√2)² = 1/2. The illustration (Figure 1.76) details a single quantization interval.

Figure 1.76: A single quantization interval is shown, along with a typical signal's value before amplitude quantization, s (nTs), and after, Q (s (nTs)). ε denotes the error thus incurred.

To find the power in the quantization error, we note that no matter into which quantization interval the signal's value falls, the error will have the same characteristics. Its width is ∆ and the quantization error is denoted by ε. To calculate the rms value, we must square the error and average it over the interval:

rms (ε) = sqrt ( (1/∆) ∫_{−∆/2}^{∆/2} ε² dε ) = sqrt ( ∆²/12 )   (1.69)

Since the quantization interval width for a B-bit converter equals 2/2^B = 2^{−(B−1)}, we find that the signal-to-noise ratio for the analog-to-digital conversion process equals

SNR = (1/2) / ( 2^{−2(B−1)} / 12 ) = (3/2) 2^{2B} = 6B + 10 log10 (1.5) dB   (1.70)

Thus, every bit increase in the A/D converter yields a 6 dB increase in the signal-to-noise ratio.

Exercise 1.7   (Solution on p. 59.)
This derivation assumed the signal's amplitude lay in the range [−1, 1]. What would the amplitude quantization signal-to-noise ratio be if it lay in the range [−A, A]?

Exercise 1.8   (Solution on p. 59.)
How many bits would be required in the A/D converter to ensure that the maximum amplitude quantization error was less than 60 dB smaller than the signal's peak value?

Exercise 1.9   (Solution on p. 59.)
Music on a CD is stored to 16-bit accuracy. To what signal-to-noise ratio does this correspond?

Once we have acquired signals with an A/D converter, we can process them using digital hardware or software. It can be shown that if the computer processing is linear, the result of sampling, computer processing, and unsampling is equivalent to some analog linear system. Why go to all the bother if the same function can be accomplished using analog techniques? Knowing when digital processing excels and when it does not is an important issue.
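The 6-dB-per-bit rule in (1.70) can be checked empirically by quantizing a full-scale sinusoid and measuring the ratio of signal power to error power:

```python
import numpy as np

def measured_snr_db(B, n=100000):
    """Quantize a full-scale sinusoid with B bits and measure the SNR in dB."""
    t = np.arange(n)
    s = np.sin(2 * np.pi * t / 1000.3)        # non-integer period avoids repeating errors
    delta = 2 / 2**B
    # A/D then midpoint D/A, as in the text
    sq = -1 + (np.floor((s + 1) / delta) + 0.5) * delta
    err = s - sq
    return 10 * np.log10(np.mean(s**2) / np.mean(err**2))

for B in (8, 16):
    print(B, measured_snr_db(B), 6 * B + 10 * np.log10(1.5))
```

The measured values track the theoretical 6B + 10 log10 (1.5) dB closely, confirming the uniform-error model behind (1.69).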

1.14 Classic Fourier Series

The classic Fourier series as derived originally expressed a periodic signal (period T) in terms of harmonically related sines and cosines:

s (t) = a0 + Σ_{k=1}^{∞} ak cos (2πkt/T) + Σ_{k=1}^{∞} bk sin (2πkt/T)   (1.71)

The complex Fourier series and the sine-cosine series are identical, each representing a signal's spectrum. The Fourier coefficients, ak and bk, express the real and imaginary parts respectively of the spectrum, while the coefficients ck of the complex Fourier series express the spectrum as a magnitude and phase. Equating the classic Fourier series (1.71) to the complex Fourier series, an extra factor of two and complex conjugate become necessary to relate the Fourier coefficients in each:

ck = (1/2) (ak − jbk)

Exercise 1.10   (Solution on p. 59.)
Derive this relationship between the coefficients of the two Fourier series.

Just as with the complex Fourier series, we can find the Fourier coefficients using the orthogonality properties of sinusoids. Note that the cosine and sine of harmonically related frequencies, even the same frequency, are orthogonal:

∫_0^T sin (2πkt/T) cos (2πlt/T) dt = 0 ,  k ∈ Z, l ∈ Z

∫_0^T sin (2πkt/T) sin (2πlt/T) dt = { T/2  if k = l and k ≠ 0 and l ≠ 0
                                     { 0    if k ≠ l or k = 0 = l

∫_0^T cos (2πkt/T) cos (2πlt/T) dt = { T/2  if k = l and k ≠ 0 and l ≠ 0
                                     { T    if k = 0 = l
                                     { 0    if k ≠ l

These orthogonality relations follow from the following important trigonometric identities:

sin (α) sin (β) = (1/2) (cos (α − β) − cos (α + β))
cos (α) cos (β) = (1/2) (cos (α + β) + cos (α − β))
sin (α) cos (β) = (1/2) (sin (α + β) + sin (α − β))   (1.72)

These identities allow you to substitute a sum of sines and/or cosines for a product of them. Each term in the sum can be integrated by noticing one of two important properties of sinusoids.

• The integral of a sinusoid over an integer number of periods equals zero.
• The integral of the square of a unit-amplitude sinusoid over a period T equals T/2.

This content is available online at <http://cnx.org/content/m0039/2.22/>.
"Complex Fourier Series", (1) <http://cnx.org/content/m0042/latest/#complexfourierseries>

To use these, let's, for example, multiply the Fourier series for a signal by the cosine of the l-th harmonic, cos (2πlt/T), and integrate. The idea is that, because integration is linear, the integration will sift out all but the term involving al:

∫_0^T s (t) cos (2πlt/T) dt = ∫_0^T a0 cos (2πlt/T) dt + Σ_{k=1}^{∞} ak ∫_0^T cos (2πkt/T) cos (2πlt/T) dt + Σ_{k=1}^{∞} bk ∫_0^T sin (2πkt/T) cos (2πlt/T) dt   (1.74)

The first and third terms are zero; in the second, the only non-zero term in the sum results when the indices k and l are equal (but not zero), in which case we obtain al T/2. If k = 0 = l, we obtain a0 T. Consequently,

al = (2/T) ∫_0^T s (t) cos (2πlt/T) dt ,  l ≠ 0

All of the Fourier coefficients can be found similarly:

a0 = (1/T) ∫_0^T s (t) dt
ak = (2/T) ∫_0^T s (t) cos (2πkt/T) dt ,  k ≠ 0
bk = (2/T) ∫_0^T s (t) sin (2πkt/T) dt   (1.75)

Exercise 1.11   (Solution on p. 60.)
The expression for a0 is referred to as the average value of s (t). Why?

Exercise 1.12   (Solution on p. 60.)
What is the Fourier series for a unit-amplitude square wave?

Example 1.13
Let's find the Fourier series representation for the half-wave rectified sinusoid:

s (t) = { sin (2πt/T)  if 0 ≤ t < T/2
        { 0            if T/2 ≤ t < T   (1.76)

Begin with the sine terms in the series. To find bk we must calculate the integral

bk = (2/T) ∫_0^{T/2} sin (2πt/T) sin (2πkt/T) dt   (1.77)

Using our trigonometric identities turns our integral of a product of sinusoids into a sum of integrals of individual sinusoids, which are much easier to evaluate:

∫_0^{T/2} sin (2πt/T) sin (2πkt/T) dt = (1/2) ∫_0^{T/2} (cos (2π (k − 1) t/T) − cos (2π (k + 1) t/T)) dt
                                      = { (1/2) (T/2)  if k = 1
                                        { 0            otherwise

Thus,

b1 = 1/2 ,  b2 = b3 = · · · = 0   (1.78)

On to the cosine terms. The average value, which corresponds to a0, equals 1/π. The remainder of the cosine coefficients are easy to find, but yield the complicated result

ak = { −(2/π) · 1/(k² − 1)  if k ∈ {2, 4, . . . }
     { 0                    if k odd   (1.79)

Thus, the Fourier series for the half-wave rectified sinusoid has non-zero terms for the average, the fundamental, and the even harmonics.
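These coefficients are easy to verify numerically from the defining integrals (1.75), approximating each integral by a Riemann sum on a dense grid:

```python
import numpy as np

T = 1.0
N = 200000
t = (np.arange(N) + 0.5) * T / N                 # midpoint grid over one period
dt = T / N
s = np.where(t < T / 2, np.sin(2 * np.pi * t / T), 0.0)   # half-wave rectified sinusoid

a0 = np.sum(s) * dt / T                                   # should be 1/pi
b1 = 2 / T * np.sum(s * np.sin(2 * np.pi * t / T)) * dt   # should be 1/2
a2 = 2 / T * np.sum(s * np.cos(4 * np.pi * t / T)) * dt   # should be -2/(pi*(2**2 - 1))

print(a0 - 1 / np.pi, b1 - 0.5, a2 + 2 / (3 * np.pi))     # all ~0
```

The numerical a0, b1, and a2 match 1/π, 1/2, and −2/(3π) to high accuracy, as (1.78) and (1.79) predict.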

Solutions to Exercises in Chapter 1

Solution to Exercise 1.1 (p. 17)
S [n] is N-periodic, so it has the following Fourier series:

ck = (1/N) ∫_{−N/2}^{N/2} δ [n] e^{−j2πkn/N} dn = 1/N   (1.81)

S [n] = Σ_{k=−∞}^{∞} e^{−j2πkn/N}   (1.82)

where the DTFT of the exponential in the above equation is equal to δ (ω − 2πk/N).

Solution to Exercise 1.2 (p. 26)
x [n + N] = ???

Solution to Exercise 1.3 (p. 30)
2D Circular Convolution:

~g [m, n] = IDFT (H [k, l] F [k, l]) = circular convolution in 2D   (1.80)

Solution to Exercise 1.4 (p. 42)
Discretize (sample) X (ω) and H (ω). In order to do this, we should take the DFTs of x [n] and h [n] to get X [k] and H [k]. Then we will compute

~y [n] = IDFT (X [k] H [k])

Does ~y [n] = y [n]?

Solution to Exercise 1.5 (p. 42)
There are 2N − 1 non-zero output points and each will be computed using N complex mults and N − 1 complex adds. Therefore,

Total Cost = (2N − 1) (N + N − 1) ≈ O (N²)

Solution to Exercise 1.6 (p. 54)
The plotted temperatures were quantized to the nearest degree. Thus, the high temperature's amplitude was quantized as a form of A/D conversion.

Solution to Exercise 1.7 (p. 55)
The signal-to-noise ratio does not depend on the signal amplitude. With an A/D range of [−A, A], the quantization interval ∆ = 2A/2^B and the signal's rms value (again assuming it is a sinusoid) is A/√2.

Solution to Exercise 1.8 (p. 55)
Solving 2^{−B} = .001 results in B = 10 bits.

Solution to Exercise 1.9 (p. 55)
A 16-bit A/D converter yields a SNR of 6 × 16 + 10 log10 (1.5) = 97.8 dB.

Solution to Exercise 1.10 (p. 56)
Write the coefficients of the complex Fourier series in Cartesian form as ck = Ak + jBk and substitute into the expression for the complex Fourier series:

Σ_{k=−∞}^{∞} ck e^{j2πkt/T} = Σ_{k=−∞}^{∞} (Ak + jBk) e^{j2πkt/T}

Simplifying each term in the sum using Euler's formula,

(Ak + jBk) e^{j2πkt/T} = (Ak + jBk) (cos (2πkt/T) + j sin (2πkt/T))
                       = Ak cos (2πkt/T) − Bk sin (2πkt/T) + j (Ak sin (2πkt/T) + Bk cos (2πkt/T))

We now combine terms that have the same frequency index in magnitude. Because the signal is real-valued, the coefficients of the complex Fourier series have conjugate symmetry: c−k = ck* or A−k = Ak and B−k = −Bk. After we add the positive-indexed and negative-indexed terms, each term in the Fourier series becomes 2Ak cos (2πkt/T) − 2Bk sin (2πkt/T). To obtain the classic Fourier series (1.71), we must have 2Ak = ak and 2Bk = −bk.

Solution to Exercise 1.11 (p. 57)
The average of a set of numbers is the sum divided by the number of terms. Viewing signal integration as the limit of a Riemann sum, the integral corresponds to the average.

Solution to Exercise 1.12 (p. 57)
We found that the complex Fourier series coefficients are given by ck = 2/(jπk). The coefficients are pure imaginary, which means ak = 0. The coefficients of the sine terms are given by bk = −2 Im (ck), so that

bk = { 4/(πk)  if k odd
     { 0       if k even

Thus, the Fourier series for the square wave is

sq (t) = Σ_{k ∈ {1,3,...}} (4/(πk)) sin (2πkt/T)   (1.83)

Chapter 2

Random Signals

2.1 Introduction to Random Signals and Processes

Before now, you have probably dealt strictly with the theory behind signals and systems, as well as looked at some of the basic characteristics of signals and systems. In doing so you have developed an important foundation; however, most electrical engineers do not get to work in this type of fantasy world. In many cases the signals of interest are very complex due to the randomness of the world around them, which leaves them noisy and often corrupted. This often causes the information contained in the signal to be hidden and distorted. For this reason, it is important to understand these random signals and how to recover the necessary information.

2.1.1 Signals: Deterministic vs. Stochastic

For this study of signals and systems, we will divide signals into two groups: those that have a fixed behavior and those that change randomly. As most of you have probably already dealt with the first type, we will focus on introducing you to random signals. Also, note that we will be dealing strictly with discrete-time signals, since they are the signals we deal with in DSP and most real-world computations, but these same ideas apply to continuous-time signals.

2.1.1.1 Deterministic Signals

Most introductions to signals and systems deal strictly with deterministic signals. Each value of these signals is fixed and can be determined by a mathematical expression, rule, or table. Because of this, future values of any deterministic signal can be calculated from past values. For this reason, these signals are relatively easy to analyze as they do not change, and we can make accurate assumptions about their past and future behavior.

This content is available online at <http://cnx.org/content/m10649/2.2/>.
"Signal Classifications and Properties" <http://cnx.org/content/m10057/latest/>
"System Classifications and Properties" <http://cnx.org/content/m10084/latest/>

Deterministic Signal

Figure 2.1: An example of a deterministic signal, the sine wave.

2.1.1.2 Stochastic Signals

Unlike deterministic signals, stochastic signals, or random signals, are not so nice. Random signals cannot be characterized by a simple, well-defined mathematical equation, and their future values cannot be predicted. Rather, we must use probability and statistics to analyze their behavior. Also, because of their randomness, average values (Section 2.5) from a collection of signals are usually studied rather than analyzing one individual signal.

Random Signal

Figure 2.2: We have taken the above sine wave and added random noise to it to come up with a noisy, or random, signal. These are the types of signals that we wish to learn how to deal with so that we can recover the original sine wave.

2.1.2 Random Process

As mentioned above, in order to study random signals, we want to look at a collection of these signals rather than just one instance of that signal. This collection of signals is called a random process.

Definition 2.1: random process
A family or ensemble of signals that correspond to every possible outcome of a certain signal measurement. Each signal in this collection is referred to as a realization or sample function of the process.

Example
As an example of a random process, let us look at the Random Sinusoidal Process below. We use f [n] = A sin (ωn + φ) to represent the sinusoid with a given amplitude and phase. Note that the

phase and amplitude of each sinusoid is based on a random number, thus making this a random process.

Random Sinusoidal Process

Figure 2.3: A random sinusoidal process, with the amplitude and phase being random numbers.

A random process is usually denoted by X (t) or X [n], with x (t) or x [n] used to represent an individual signal or waveform from this process. In many notes and books, you might see the following notation and terms used to describe different types of random processes. For a discrete random process, sometimes just called a random sequence, t represents time that has a finite number of values. If t can take on any value of time, we have a continuous random process. Often times discrete and continuous refer to the amplitude of the process, and process or sequence refer to the nature of the time variable. For this study, we often just use random process to refer to a general collection of discrete-time signals, as seen above in Figure 2.3 (Random Sinusoidal Process).
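A sketch of how such an ensemble might be generated; the amplitude and phase distributions below are illustrative assumptions, not specified by the module:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sinusoid_realization(n_samples, rng):
    """One sample function f[n] = A sin(w n + phi) with random amplitude and phase."""
    A = rng.uniform(0.5, 2.0)            # hypothetical amplitude distribution
    phi = rng.uniform(-np.pi, np.pi)     # hypothetical phase distribution
    w = 2 * np.pi / 20                   # fixed digital frequency
    n = np.arange(n_samples)
    return A * np.sin(w * n + phi)

# An ensemble: each row is one realization (sample function) of the process.
ensemble = np.array([random_sinusoid_realization(100, rng) for _ in range(500)])
print(ensemble.shape)  # (500, 100)
```

Statistics of the process (means, variances, correlations) are then estimated down the rows of this array, i.e. across realizations at each time index, rather than from any single waveform.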

2.2 Introduction to Stochastic Processes

2.2.1 Definitions, distributions, and stationarity

Definition 2.2: Stochastic Process
Given a sample space, a stochastic process is an indexed collection of random variables defined for each ω ∈ Ω:

Xt (ω) ,  t ∈ R   (2.1)

Example 2.1
Received signal at an antenna as in Figure 2.4.

Figure 2.4

For a given t, Xt (ω) is a random variable with a distribution.

First-order distribution

FXt (b) = P r [Xt ≤ b] = P r [{ω ∈ Ω | Xt (ω) ≤ b}]   (2.2)

Definition 2.3: First-order stationary process
If FXt (b) is not a function of time, then Xt is called a first-order stationary process.

Second-order distribution

FXt1,Xt2 (b1, b2) = P r [Xt1 ≤ b1, Xt2 ≤ b2]   (2.3)

for all t1 ∈ R, t2 ∈ R, b1 ∈ R, b2 ∈ R.

This content is available online at <http://cnx.org/content/m10235/2.15/>.

Nth-order distribution

FXt1,Xt2,...,XtN (b1, b2, . . . , bN) = P r [Xt1 ≤ b1, . . . , XtN ≤ bN]   (2.4)

Nth-order stationary: A random process is stationary of order N if

FXt1,Xt2,...,XtN (b1, b2, . . . , bN) = FXt1+T,Xt2+T,...,XtN+T (b1, b2, . . . , bN)   (2.5)

Strictly stationary: A process is strictly stationary if it is Nth-order stationary for all N.

Example 2.2
Xt = cos (2πf0t + Θ (ω)), where f0 is the deterministic carrier frequency and Θ (ω) : Ω → R is a random variable defined over [−π, π] and is assumed to be a uniform random variable, i.e.

fΘ (θ) = { 1/(2π)  if θ ∈ [−π, π]
         { 0       otherwise

FXt (b) = P r [Xt ≤ b] = P r [cos (2πf0t + Θ) ≤ b]   (2.6)

FXt (b) = P r [−π ≤ 2πf0t + Θ ≤ −arccos (b)] + P r [arccos (b) ≤ 2πf0t + Θ ≤ π]   (2.7)

FXt (b) = ∫_{−π−2πf0t}^{−arccos(b)−2πf0t} (1/(2π)) dθ + ∫_{arccos(b)−2πf0t}^{π−2πf0t} (1/(2π)) dθ = (1/(2π)) (2π − 2 arccos (b))   (2.8)

fXt (x) = (d/dx) (1 − (1/π) arccos (x)) = { 1/(π √(1 − x²))  if |x| ≤ 1
                                          { 0                otherwise   (2.9)

This process is stationary of order 1.

Figure 2.5

The second-order stationarity can be determined by first considering conditional densities and the joint density. Recall that

Xt = cos (2πf0t + Θ)   (2.10)

Then the relevant step is to find

P r [Xt2 ≤ b2 | Xt1 = x1]   (2.11)

Note that

Xt1 = x1 = cos (2πf0t1 + Θ) ⇒ Θ = arccos (x1) − 2πf0t1   (2.12)

Xt2 = cos (2πf0t2 + arccos (x1) − 2πf0t1) = cos (2πf0 (t2 − t1) + arccos (x1))   (2.13)

Figure 2.6

FXt2,Xt1 (b2, b1) = ∫_{−∞}^{b1} fXt1 (x1) P r [Xt2 ≤ b2 | Xt1 = x1] dx1

Note that this is only a function of t2 − t1.

Example 2.3
Every T seconds, a fair coin is tossed. If heads, then Xt = 1 for nT ≤ t < (n + 1) T. If tails, then Xt = −1 for nT ≤ t < (n + 1) T.

Figure 2.7

pXt (x) = { 1/2  if x = 1
          { 1/2  if x = −1   (2.14)

for all t ∈ R. Xt is stationary of order 1.

Second-order probability mass function:

pXt1Xt2 (x1, x2) = pXt2|Xt1 (x2 | x1) pXt1 (x1)   (2.15)

The conditional pmf, when nT ≤ t1 < (n + 1) T and nT ≤ t2 < (n + 1) T for some n, is

pXt2|Xt1 (x2 | x1) = { 0  if x2 ≠ x1
                    { 1  if x2 = x1   (2.16)

for all x1 and for all x2. When nT ≤ t1 < (n + 1) T and mT ≤ t2 < (m + 1) T with n ≠ m,

pXt2|Xt1 (x2 | x1) = pXt2 (x2)   (2.17)

so that

pXt1Xt2 (x1, x2) = { 0                    if x2 ≠ x1, for nT ≤ t1, t2 < (n + 1) T
                  { pXt1 (x1)            if x2 = x1, for nT ≤ t1, t2 < (n + 1) T
                  { pXt1 (x1) pXt2 (x2)  if n ≠ m, for nT ≤ t1 < (n + 1) T and mT ≤ t2 < (m + 1) T

2.3 Random Signals

Random signals are random variables which evolve, often with time (e.g. audio noise), but also with distance (e.g. intensity in an image of a random texture), or sometimes another parameter. They can be described as usual by their cdf and either their pmf (if the amplitude is discrete, as in a digitized signal) or their pdf (if the amplitude is continuous, as in most analogue signals).

However a very important additional property is how rapidly a random signal fluctuates. Clearly a slowly varying signal such as the waves in an ocean is very different from a rapidly varying signal such as vibrations in a vehicle. We will see later in here how to deal with these frequency-dependent characteristics of randomness. For the moment we shall assume that random signals are sampled at regular intervals and that each signal is equivalent to a sequence of samples of a given random process, as in the following examples.

This content is available online at <http://cnx.org/content/m10989/2.5/>.
"Approximation Formulae for the Gaussian Error Integral, Q(x)" <http://cnx.org/content/m11067/latest/>

Figure 2.8: Detection of signals in noise: (a) the transmitted binary signal; (b) the binary signal after filtering with a half-sine receiver filter; (c) the channel noise after filtering with the same filter; (d) the filtered signal plus noise at the detector in the receiver.

2.3.1 Example - Detection of a binary signal in noise

We now consider the example of detecting a binary signal after it has passed through a channel which adds noise. The transmitted signal is typically as shown in (a) of Figure 2.8.

In order to reduce the channel noise, the receiver will include a lowpass filter. The aim of the filter is to reduce the noise as much as possible without reducing the peak values of the signal significantly. A good filter for this has a half-sine impulse response of the form:

h (t) = { (π / (2Tb)) sin (πt/Tb)  if 0 ≤ t ≤ Tb
        { 0                        otherwise   (2.20)

where Tb = bit period.

This filter will convert the rectangular data bits into sinusoidally shaped pulses as shown in (b) of Figure 2.8, and it will also convert wide-bandwidth channel noise into the form shown in (c) of Figure 2.8. Bandlimited noise of this form will usually have an approximately Gaussian pdf.

Because this filter has an impulse response limited to just one bit period and has unit gain at zero frequency (the area under h (t) is unity), the signal values at the center of each bit period at the detector will still be ±1. Let the filtered data signal be D (t) and the filtered noise be U (t); then the detector signal is

R (t) = D (t) + U (t)   (2.21)

If we assume that +1 and −1 bits are equiprobable and the noise is a symmetric zero-mean process, the optimum detector threshold is clearly midway between these two states, i.e. at zero. If we choose to sample each bit at the detector at this optimal mid point, the pdfs of the signal plus noise at the detector will be as shown in Figure 2.9.

Figure 2.9: The pdfs of the signal plus noise at the detector for the two states ±1. The vertical dashed line is the detector threshold and the shaded area to the left of the origin represents the probability of error when data = 1.

The probability of error when the data = +1 is then given by:

P r [error | D = +1] = P r [R (t) < 0 | D = +1] = FU (−1) = ∫_{−∞}^{−1} fU (u) du   (2.22)
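Assuming the Gaussian noise model above, this error probability reduces to a Gaussian tail integral; in code it is usually evaluated through the complementary error function:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Probability of error in the binary detector as a function of the SNR v0/sigma.
for snr in (1.0, 2.0, 3.0):
    print(snr, Q(snr))
```

Q (0) = 0.5 (pure guessing at zero SNR), and the error probability falls off rapidly as the SNR grows, e.g. Q (3) ≈ 1.35 × 10⁻³.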

where FU and fU are the cdf and pdf of U. This is the shaded area in Figure 2.9. Similarly, the probability of error when the data = −1 is given by:

P r [error | D = −1] = P r [R (t) > 0 | D = −1] = 1 − FU (+1) = ∫_{1}^{∞} fU (u) du   (2.23)

Hence the overall probability of error is:

P r [error] = P r [error | D = +1] P r [D = +1] + P r [error | D = −1] P r [D = −1]
            = ∫_{−∞}^{−1} fU (u) du P r [D = +1] + ∫_{1}^{∞} fU (u) du P r [D = −1]   (2.24)

Since fU is symmetric about zero,

P r [error] = ∫_{1}^{∞} fU (u) du (P r [D = +1] + P r [D = −1]) = ∫_{1}^{∞} fU (u) du   (2.25)

To be a little more general and to account for signal attenuation over the channel, we shall assume that the signal values at the detector are ±v0 (rather than ±1) and that the filtered noise at the detector has a zero-mean Gaussian pdf with variance σ²:

fU (u) = (1/√(2πσ²)) e^{−u²/(2σ²)}   (2.26)

and so

P r [error] = ∫_{v0}^{∞} fU (u) du = ∫_{v0/σ}^{∞} fU (σu) σ du = Q (v0/σ)   (2.27)

where

Q (x) = (1/√(2π)) ∫_{x}^{∞} e^{−u²/2} du

This integral has no analytic solution, but a good approximation to it exists and is discussed in some detail in here.

From (2.27) we may obtain the probability of error in the binary detector, which is often expressed as the bit error rate or BER. For example, if P r [error] = 2 × 10⁻³, this would often be expressed as a bit error rate of 2 × 10⁻³, or alternatively as 1 error in 500 bits (on average). The argument v0/σ in (2.27) is the signal-to-noise voltage ratio (SNR) at the detector, and the BER rapidly diminishes with increasing SNR (see here).

"Approximation Formulae for the Gaussian Error Integral, Q(x)" <http://cnx.org/content/m11067/latest/>
"Approximation Formulae for the Gaussian Error Integral, Q(x)", Figure 1 <http://cnx.org/content/m11067/latest/#gure1>

2.4 Stationary and Nonstationary Random Processes

2.4.1 Introduction

From the definition of a random process (Section 2.1), we know that all random processes are composed of random variables, each at its own unique point in time. Because of this, random processes have all the properties of random variables, such as mean, correlation, variance, etc. When dealing with groups of signals,

This content is available online at <http://cnx.org/content/m10684/2.2/>.

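Although Q(x) has no closed form, it can be evaluated with the standard complementary error function via Q(x) = 0.5·erfc(x/√2), so the detector's bit error rate Pr[error] = Q(v_0/σ) is easy to tabulate against SNR. A small sketch of ours (function names are our own, not from the text):

```python
import math

def q_function(x):
    """Gaussian error integral Q(x) = (1/sqrt(2*pi)) * integral_x^inf e^{-u^2/2} du,
    computed as 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bit_error_rate(v0, sigma):
    """Pr[error] = Q(v0/sigma) for signal levels +/-v0 in zero-mean
    Gaussian noise of standard deviation sigma."""
    return q_function(v0 / sigma)

# The BER falls off rapidly as the SNR voltage ratio v0/sigma grows.
bers = [bit_error_rate(snr, 1.0) for snr in (0.0, 1.0, 2.0, 3.0)]
```

At zero SNR the detector guesses (BER = 0.5); by an SNR of 3 the BER is already near 10⁻³, matching the "1 error in 500 bits" scale discussed above.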
2.4 Stationary and Nonstationary Random Processes

(This content is available online at <http://cnx.org/content/m10684/2.2/>.)

2.4.1 Introduction

From the definition of a random process (Section 2.1), we know that all random processes are composed of random variables, each at its own unique point in time. Because of this, random processes have all the properties of random variables, such as mean, correlation, variances, etc. When dealing with groups of signals or sequences, it will be important for us to be able to show whether or not these statistical properties hold true for the entire random process. To do this, the concept of stationary processes has been developed. The general definition of a stationary process is:

Definition 2.4: stationary process
a random process where all of its statistical properties do not vary with time

Processes whose statistical properties do change are referred to as nonstationary.

Understanding the basic idea of stationarity will help you follow the more concrete and mathematical definitions below. Also, we will look at various levels of stationarity used to describe the particular stationarity characteristics a random process can have.

2.4.2 Distribution and Density Functions

In order to properly define what it means to be stationary from a mathematical standpoint, one needs to be somewhat familiar with the concepts of distribution and density functions. If you can remember your statistics, then feel free to skip this section!

Recall that when dealing with a single random variable, the probability distribution function is a simple tool used to identify the probability that our observed random variable will be less than or equal to a given number. More precisely, let X be our random variable, and let x be our given value; from this we can define the distribution function as

    F_x(x) = Pr[X ≤ x]   (2.28)

This same idea can be applied to instances where we have multiple random variables as well. There may be situations where we want to look at the probability of events X and Y both occurring. For example, below is an example of a second-order joint distribution function:

    F_x(x, y) = Pr[X ≤ x, Y ≤ y]   (2.29)

While the distribution function provides us with a full view of our variable or process's probability, it is not always the most useful for calculations. Often we will want to look at its derivative, the probability density function (pdf). We define the pdf as

    f_x(x) = (d/dx) F_x(x)   (2.30)

    f_x(x) dx = Pr[x < X ≤ x + dx]   (2.31)

(2.31) reveals some of the physical significance of the density function. This equation tells us that the probability that our random variable falls within a given interval can be approximated by f_x(x) dx. From the pdf, we can now use our knowledge of integrals to evaluate probabilities from the above approximation. Again, we can also define a joint density function which will include multiple random variables, just as was done for the distribution function. The density function is used for a variety of calculations, such as finding the expected value or proving a random variable is stationary, to name a few.

note: The above examples explain the distribution and density functions in terms of a single random variable, X. When we are dealing with signals and random processes, remember that we will have a set of random variables where a different random variable will occur at each time instance of the random process, X(t_k). In other words, the distribution and density functions will also need to take into account the choice of time.

2.4.3 Stationarity

Below we will now look at a more in-depth and mathematical definition of a stationary process. As was mentioned previously, various levels of stationarity exist and we will look at the most common types.

2.4.3.1 First-Order Stationary Process

A random process is classified as first-order stationary if its first-order probability density function remains equal regardless of any shift in time to its time origin. If we let x_{t1} represent a given value at time t_1, then we define a first-order stationary process as one that satisfies the following equation:

    f_x(x_{t1}) = f_x(x_{t1+τ})   (2.32)

The physical significance of this equation is that our density function is completely independent of t_1 and thus any time shift, τ.

The most important result of this statement, and the identifying characteristic of any first-order stationary process, is the fact that the mean is a constant, independent of any time shift. Below we show the results for a random process, X, that is a discrete-time signal, x[n]:

    X̄ = m_x[n] = E[x[n]] = constant (independent of n)   (2.33)

2.4.3.2 Second-Order and Strict-Sense Stationary Process

A random process is classified as second-order stationary if its second-order probability density function does not vary over any time shift applied to both values. In other words, for values x_{t1} and x_{t2}, we will have the following be equal for an arbitrary time shift τ:

    f_x(x_{t1}, x_{t2}) = f_x(x_{t1+τ}, x_{t2+τ})   (2.34)
From this equation we see that the absolute time does not affect our functions; rather, it only really depends on the time difference between the two variables. Looked at another way, this equation can be described as

    Pr[X(t_1) ≤ x_1, X(t_2) ≤ x_2] = Pr[X(t_1 + τ) ≤ x_1, X(t_2 + τ) ≤ x_2]   (2.35)

These random processes are often referred to as strict sense stationary (SSS) when all of the distribution functions of the process are unchanged regardless of the time shift applied to them.

For a second-order stationary process, we need to look at the autocorrelation function (Section 2.7) to see its most important property. Since we have already stated that a second-order stationary process depends only on the time difference, all of these types of processes have the following property:

    R_xx(t, t + τ) = E[X(t) X(t + τ)] = R_xx(τ)   (2.36)

2.4.3.3 Wide-Sense Stationary Process

As you begin to work with random processes, it will become evident that the strict requirements of an SSS process are more than is often necessary in order to adequately approximate our calculations on random processes. We define a final type of stationarity, referred to as wide-sense stationary (WSS), to have slightly more relaxed requirements, but ones that are still enough to provide us with adequate results. In order to be WSS, a random process only needs to meet the following two requirements:

1. X̄ = E[x[n]] = constant
2. E[X(t) X(t + τ)] = R_xx(τ)

Note that a second-order (or SSS) stationary process will always be WSS; however, the reverse will not always hold true.
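As a rough numerical illustration of the two WSS requirements (our own sketch, not from the original text), consider the classic random-phase cosine X(t) = cos(2πft + Θ) with Θ uniform on [0, 2π): its ensemble mean is 0 at every t, and its autocorrelation (1/2)cos(2πfτ) depends only on the lag τ. All names and parameters below are our own choices:

```python
import math
import random

random.seed(0)

def draw_phases(num):
    """Each realization of the process is fixed by one random phase theta."""
    return [random.uniform(0.0, 2.0 * math.pi) for _ in range(num)]

def sample(thetas, t, f=1.0):
    """Evaluate every realization X(t) = cos(2*pi*f*t + theta) at time t."""
    return [math.cos(2.0 * math.pi * f * t + th) for th in thetas]

def mean(vals):
    return sum(vals) / len(vals)

def acorr(thetas, t, tau, f=1.0):
    """Ensemble estimate of R_xx(t, t+tau) = E[X(t) X(t+tau)]."""
    a, b = sample(thetas, t, f), sample(thetas, t + tau, f)
    return mean([ai * bi for ai, bi in zip(a, b)])

thetas = draw_phases(20000)

# Requirement 1: the mean is the same constant (here 0) at any time t.
m1, m2 = mean(sample(thetas, 0.3)), mean(sample(thetas, 1.1))

# Requirement 2: the autocorrelation depends only on the lag tau,
# not on the absolute time (theory: 0.5 * cos(2*pi*f*tau)).
r1 = acorr(thetas, 0.3, 0.1)
r2 = acorr(thetas, 1.1, 0.1)
```

Both autocorrelation estimates should land near 0.5·cos(0.2π) ≈ 0.405 regardless of the starting time used.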
2.5 Random Processes: Mean and Variance

(This content is available online at <http://cnx.org/content/m10656/2.3/>.)

In order to study the characteristics of a random process (Section 2.1), let us look at some of the basic properties and operations of a random process. Below we will focus on the operations of the random signals that compose our random processes. We will denote our random process by X and a random variable from a random process or signal by x.

2.5.1 Mean Value

Finding the average value of a set of random signals or random variables is probably the most fundamental concept we use in evaluating random processes through any sort of statistical method. The mean, or average, of a random process, x(t), is given by the following equation:

    m_x(t) = μ_x(t) = X̄ = E[X] = ∫_{−∞}^{∞} x f(x) dx   (2.37)

This equation may seem quite cluttered at first glance, but we want to introduce you to the various notations used to represent the mean of a random signal or process; remember that these all equal the same thing. The symbol μ_x(t) and the X with a bar over it are often used as short-hand to represent an average. The other important notation used is E[X], which represents the "expected value of X" or the mathematical expectation. This notation is very common and will appear again.

If the random variables which make up our random process are discrete or quantized values, such as in a binary process, then the integrals become summations over all the possible values of the random variable. In this case, our expected value becomes

    E[x[n]] = Σ_α (α Pr[x[n] = α])   (2.38)

If we have two random signals or variables, their averages can reveal how the two signals interact. If the product of the two individual averages of both signals does not equal the average of the product of the two signals, then the two signals are said to be linearly independent, also referred to as uncorrelated.

In the case where we have a random process in which only one sample can be viewed at a time, we will often not have all the information available to calculate the mean using the density function as shown above. In this case we must estimate the mean through the time-average mean (Section 2.5.4: Time Averages), discussed later. For fields such as signal processing that deal mainly with discrete signals and values, these are the averages most commonly used.

2.5.1.1 Properties of the Mean

• The expected value of a constant, α, is the constant:

    E[α] = α   (2.39)

• Adding a constant, α, to each term increases the expected value by that constant:

    E[X + α] = E[X] + α   (2.40)

• Multiplying the random variable by a constant, α, multiplies the expected value by that constant:

    E[αX] = α E[X]   (2.41)

• The expected value of the sum of two or more random variables is the sum of each individual expected value:

    E[X + Y] = E[X] + E[Y]   (2.42)
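The discrete expected-value sum of (2.38) and the mean properties can be checked directly on a small pmf. A sketch of ours, using a fair six-sided die as the hypothetical discrete random variable:

```python
def expected(pmf):
    """E[X] = sum over alpha of alpha * Pr[X = alpha]; pmf maps value -> probability."""
    return sum(value * prob for value, prob in pmf.items())

# A fair six-sided die: each face has probability 1/6.
die = {face: 1.0 / 6.0 for face in range(1, 7)}

mean = expected(die)                                     # E[X] = 3.5

# Property: E[X + a] = E[X] + a (shift every value by the constant a = 2).
shifted = expected({v + 2: p for v, p in die.items()})   # 5.5

# Property: E[aX] = a * E[X] (scale every value by the constant a = 3).
scaled = expected({3 * v: p for v, p in die.items()})    # 10.5
```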
2.5.2 Mean-Square Value

If we look at the second moment of the term (we now look at x² in the integral), then we will have the mean-square value of our random process. As you would expect, this is written as

    E[X²] = ∫_{−∞}^{∞} x² f(x) dx   (2.43)

This equation is also often referred to as the average power of a process or signal.

2.5.3 Variance

Now that we have an idea about the average value or values that a random process takes, we are often interested in seeing just how spread out the different random values might be. To do this, we look at the variance, which is a measure of this spread. The variance, often denoted by σ², is written as follows:

    σ² = Var(X) = E[(X − E[X])²] = ∫_{−∞}^{∞} (x − X̄)² f(x) dx   (2.44)

Using the rules for the expected value, we can rewrite this formula in the following form, which is commonly seen:

    σ² = E[X²] − (E[X])²   (2.45)

2.5.3.1 Standard Deviation

Another common statistical tool is the standard deviation. Once you know how to calculate the variance, the standard deviation is simply the square root of the variance, σ.

2.5.3.2 Properties of Variance

• The variance of a constant, α, equals zero:

    Var(α) = 0   (2.46)

• Adding a constant, α, to a random variable does not affect the variance, because the mean increases by the same value:

    Var(X + α) = Var(X)   (2.47)

• Multiplying the random variable by a constant, α, increases the variance by the square of the constant:

    Var(αX) = α² Var(X)   (2.48)

• The variance of the sum of two random variables only equals the sum of the variances if the variables are independent:

    Var(X + Y) = Var(X) + Var(Y)   (2.49)

  Otherwise, if the random variables are not independent, then we must also include the covariance of the product of the variables as follows:

    Var(X + Y) = Var(X) + 2 Cov(X, Y) + Var(Y)   (2.50)
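Equations (2.43)-(2.45) can be exercised on the same kind of discrete distribution: compute E[X] and E[X²] from the pmf, then form the variance as their difference and the standard deviation as its square root. A sketch of ours (fair-die pmf again; all names are our own):

```python
import math

def moment(pmf, k):
    """k-th moment: E[X^k] = sum of value^k * Pr[X = value]."""
    return sum((value ** k) * prob for value, prob in pmf.items())

def variance(pmf):
    """Var(X) = E[X^2] - (E[X])^2, the rewritten form of the variance."""
    return moment(pmf, 2) - moment(pmf, 1) ** 2

die = {face: 1.0 / 6.0 for face in range(1, 7)}

mean_square = moment(die, 2)   # E[X^2] = 91/6 (the "average power")
var = variance(die)            # 91/6 - 3.5^2 = 35/12
std = math.sqrt(var)           # standard deviation = sqrt of the variance
```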
2.5.4 Time Averages

In the case where we cannot view the entire ensemble of the random process, we must use time averages to estimate the values of the mean and variance for the process. Generally, this will only give us acceptable results for independent and ergodic processes, meaning those processes in which each signal or member of the process seems to have the same statistical behavior as the entire process. The time averages will also only be taken over a finite interval, since we will only be able to see a finite part of the sample.

2.5.4.1 Estimating the Mean

For the ergodic random process, x(t), we will estimate the mean using the time-averaging function defined as

    X̄ = E[X] = (1/T) ∫_0^T X(t) dt   (2.51)

However, for most real-world situations we will be dealing with discrete values in our computations and signals, and we will represent this mean as

    X̄ = E[X] = (1/N) Σ_{n=1}^{N} X[n]   (2.52)

2.5.4.2 Estimating the Variance

Once the mean of our random process has been estimated, we can simply use those values in the following variance equation (introduced in one of the above sections):

    σ_x² = E[X²] − (X̄)²   (2.53)
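For an ergodic process, the discrete time averages of (2.52) and (2.53) can be computed from a single long realization. The sketch below is our own, using a hypothetical realization drawn as i.i.d. Gaussian samples with mean 5 and standard deviation 2, so the estimates should land near the true values 5 and 4:

```python
import random

random.seed(1)

def time_average_mean(x):
    """Discrete time-average mean: (1/N) * sum of x[n]."""
    return sum(x) / len(x)

def time_average_variance(x):
    """sigma^2 estimated as (time average of x^2) - (time average of x)^2."""
    m = time_average_mean(x)
    return time_average_mean([v * v for v in x]) - m * m

# One finite-length realization of a (hypothetical) ergodic process.
x = [random.gauss(5.0, 2.0) for _ in range(20000)]

m_hat = time_average_mean(x)       # should be close to 5
v_hat = time_average_variance(x)   # should be close to 2^2 = 4
```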
2.5.5 Example

Let us now look at how some of the formulas and concepts above apply to a simple example. We will just look at a single, continuous random variable for this example, but the calculations and methods are the same for a random process. For this example, we will consider a random variable having the probability density function described below and shown in Figure 2.10 (Probability Density Function):

    f(x) = 1/10   if 10 ≤ x ≤ 20;   f(x) = 0 otherwise   (2.54)

Figure 2.10: A uniform probability density function.

First, we will use (2.37) to solve for the mean value:

    X̄ = ∫_{10}^{20} x (1/10) dx = (1/10)(x²/2) evaluated from x = 10 to 20 = (1/10)(200 − 50) = 15   (2.55)

Using (2.43), we can obtain the mean-square value for the above density function:

    E[X²] = ∫_{10}^{20} x² (1/10) dx = (1/10)(x³/3) evaluated from x = 10 to 20 = (1/10)(8000/3 − 1000/3) = 233.33   (2.56)

And finally, let us solve for the variance of this function:

    σ² = E[X²] − (X̄)² = 233.33 − 15² = 8.33   (2.57)
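The three results of this example can be double-checked by numerically integrating the pdf (a sketch of ours; the midpoint rule and step count are arbitrary choices):

```python
def integrate(f, a, b, steps=200000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

pdf = lambda x: 1.0 / 10.0   # the uniform density on [10, 20]

mean = integrate(lambda x: x * pdf(x), 10.0, 20.0)         # (2.55): 15
mean_sq = integrate(lambda x: x * x * pdf(x), 10.0, 20.0)  # (2.56): 233.33
var = mean_sq - mean ** 2                                  # (2.57): 8.33
```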
2.6 Correlation and Covariance of a Random Signal

(This content is available online at <http://cnx.org/content/m10673/2.3/>.)

When we take the expected value (Section 2.5), or average, of a random process (Section 2.2: Random Process), we measure several important characteristics about how the process behaves in general. This proves to be a very important observation. However, suppose we have several random processes measuring different aspects of a system. The relationship between these different processes will also be an important observation. The covariance and correlation are two important tools in finding these relationships. Below we will go into more detail as to what these words mean and how these tools are helpful. Note that much of the following discussion refers to just random variables, but keep in mind that these variables can represent random signals or random processes.

2.6.1 Covariance

To begin with, when dealing with more than one random process, it should be obvious that it would be nice to be able to have a number that could quickly give us an idea of how similar the processes are. To do this, we use the covariance, which is analogous to the variance of a single variable.

Definition 2.5: Covariance
A measure of how much the deviations of two or more variables or processes match.

For two processes, X and Y, if they are not closely related then the covariance will be small, and if they are similar then the covariance will be large. Let us clarify this statement by describing what we mean by "related" and "similar." Two processes are "closely related" if their distribution spreads are almost equal and they are around the same, or a very slightly different, mean.

Mathematically, covariance is often written as σ_xy and is defined as

    cov(X, Y) = σ_xy = E[(X − X̄)(Y − Ȳ)]   (2.58)

This can also be reduced and rewritten in the following two forms:

    σ_xy = E[XY] − E[X] E[Y]   (2.59)

    σ_xy = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − X̄)(y − Ȳ) f(x, y) dx dy   (2.60)

2.6.1.1 Useful Properties

• If X and Y are independent and uncorrelated, or one of them has zero mean value, then σ_xy = 0
• If X and Y are orthogonal, then σ_xy = −(E[X] E[Y])
• The covariance is symmetric: cov(X, Y) = cov(Y, X)
• Basic covariance identity: cov(X + Y, Z) = cov(X, Z) + cov(Y, Z)
• Covariance of equal variables: cov(X, X) = Var(X)
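The reduced form (2.59), the mean of the products minus the product of the means, is the easiest one to compute from sample data. A small sketch of ours, using the same five-point data sets that appear in the example later in this section:

```python
def mean(vals):
    return sum(vals) / len(vals)

def covariance(x, y):
    """sigma_xy = (average of the products) - (product of the averages)."""
    products = [xi * yi for xi, yi in zip(x, y)]
    return mean(products) - mean(x) * mean(y)

x = [3, 1, 6, 3, 4]
y = [1, 5, 3, 4, 3]

sigma_xy = covariance(x, y)   # 10 - 3.4*3.2 = -0.88
```

The symmetry property and the identity cov(X, X) = Var(X) both fall out of the same function.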
2.6.2 Correlation

For anyone who has any kind of statistical background, you should be able to see that the idea of dependence/independence among variables and signals plays an important role when dealing with random processes. Because of this, the correlation of two variables provides us with a measure of how the two variables affect one another.

Definition 2.6: Correlation
A measure of how much one random variable depends upon the other.

This measure of association between the variables will provide us with a clue as to how well the value of one variable can be predicted from the value of the other. The correlation is equal to the average of the product of two random variables and is defined as

    cor(X, Y) = E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y f(x, y) dx dy   (2.61)

2.6.2.1 Correlation Coefficient

It is often useful to express the correlation of random variables with a range of numbers, like a percentage. For a given set of variables, we use the correlation coefficient to give us the linear relationship between our variables. The correlation coefficient of two variables is defined in terms of their covariance and standard deviations (Section 2.5.3.1: Standard Deviation), as seen below:

    ρ = cov(X, Y) / (σ_x σ_y)   (2.62)

where we will always have −1 ≤ ρ ≤ 1.

This provides us with a quick and easy way to view the correlation between our variables. If there is no relationship between the variables, then the correlation coefficient will be zero, and if there is a perfect positive match it will be one. If there is a perfect inverse relationship, where one set of variables increases while the other decreases, then the correlation coefficient will be negative one. This type of correlation is often referred to more specifically as the Pearson's Correlation Coefficient, or Pearson's Product Moment Correlation.

note: So far we have dealt with correlation simply as a number relating the relationship between any two variables. However, since our goal will be to relate random processes to each other, which deals with signals as a function of time, we will want to continue this study by looking at correlation functions (Section 2.7).
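Combining the covariance with the two standard deviations gives the coefficient in (2.62). A sketch of ours; note that the standard deviations here use the 1/N convention used throughout this chapter, not the 1/(N−1) sample convention:

```python
import math

def mean(vals):
    return sum(vals) / len(vals)

def std(vals):
    """Population (1/N) standard deviation: sqrt(E[X^2] - (E[X])^2)."""
    m = mean(vals)
    return math.sqrt(mean([v * v for v in vals]) - m * m)

def corr_coeff(x, y):
    """Pearson's correlation coefficient: rho = cov(X, Y) / (sigma_x * sigma_y)."""
    cov = mean([xi * yi for xi, yi in zip(x, y)]) - mean(x) * mean(y)
    return cov / (std(x) * std(y))

rho = corr_coeff([3, 1, 6, 3, 4], [1, 5, 3, 4, 3])   # about -0.408
```

A perfectly linear pair such as y = 2x gives ρ = 1, and ρ always stays inside [−1, 1].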
(a) Positive Correlation   (b) Negative Correlation   (c) Uncorrelated (No Correlation)

Figure 2.11: Types of Correlation

2.6.3 Example

Now let us take just a second to look at a simple example that involves calculating the covariance and correlation of two sets of random numbers. We are given the following data sets:

    x = {3, 1, 6, 3, 4}
    y = {1, 5, 3, 4, 3}

To begin with, for the covariance we will need to find the expected value (Section 2.5), or mean, of x and y:

    x̄ = (1/5)(3 + 1 + 6 + 3 + 4) = 3.4
    ȳ = (1/5)(1 + 5 + 3 + 4 + 3) = 3.2
    (xy)̄ = (1/5)(3 + 5 + 18 + 12 + 12) = 10

Next we will solve for the standard deviations of our two sets using the formula below (for a review, click here (Section 2.5.3: Variance)):

    σ = √(E[(X − E[X])²])
    σ_x = √((1/5)(0.16 + 5.76 + 6.76 + 0.16 + 0.36)) = 1.625

    σ_y = √((1/5)(4.84 + 3.24 + 0.04 + 0.64 + 0.04)) = 1.327

Now we can finally calculate the covariance. Since we calculated the three means, we will use that formula (2.59), since it will be much simpler:

    σ_xy = 10 − 3.4 × 3.2 = −0.88

And for our last calculation, we will solve for the correlation coefficient, ρ:

    ρ = −0.88 / (1.625 × 1.327) = −0.408

2.6.3.1 Matlab Code for Example

The above example can be easily calculated using Matlab. Below I have included the code to find all of the values above.

    x = [3 1 6 3 4];
    y = [1 5 3 4 3];

    mx  = mean(x)
    my  = mean(y)
    mxy = mean(x.*y)

    % Standard Dev. from built-in Matlab Functions
    std(x,1)
    std(y,1)

    % Standard Dev. from the Equation Above (same result as std(?,1))
    sqrt( 1/5 * sum((x-mx).^2))
    sqrt( 1/5 * sum((y-my).^2))

    cov(x,y)
    corrcoef(x,y)

2.7 Autocorrelation of Random Processes

(This content is available online at <http://cnx.org/content/m10676/2.4/>.)

Before diving into a more complex statistical analysis of random signals and processes, let us quickly review the idea of correlation (Section 2.6). Recall that the correlation of two signals or variables is the expected value of the product of those two variables. Since our focus will be to discover more about a random process, a collection of random signals, imagine us dealing with two samples of a random process, where each sample is taken at a different point in time.
Also recall that the key property of these random processes is that they are now functions of time; imagine them as a collection of signals. The expected value (Section 2.5) of the product of these two variables (or samples) will now depend on how quickly they change in regard to time. For example, if the two variables are taken from almost the same time period, then we should expect them to have a high correlation. We will now look at a correlation function that relates a pair of random variables from the same process to the time separations between them, where the argument to this correlation function will be the time difference. For the correlation of signals from two different random processes, look at the crosscorrelation function (Section 2.8).

2.7.1 Autocorrelation Function

The first of these correlation functions we will discuss is the autocorrelation, where each of the random variables we will deal with comes from the same random process.

Definition 2.7: Autocorrelation
the expected value of the product of a random variable or signal realization with a time-shifted version of itself

With a simple calculation and analysis of the autocorrelation function, we can discover a few important characteristics about our random process. These include:

1. How quickly our random signal or process changes with respect to the time function
2. Whether our process has a periodic component and what the expected frequency might be

As was mentioned above, the autocorrelation function is simply the expected value of a product. Assume we have a pair of random variables from the same process, X_1 = X(t_1) and X_2 = X(t_2); then the autocorrelation is often written as

    R_xx(t_1, t_2) = E[X_1 X_2] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x_1 x_2 f(x_1, x_2) dx_2 dx_1   (2.63)

The above equation is valid for stationary and nonstationary random processes. For stationary processes (Section 2.4), we can generalize this expression a little further. Given a wide-sense stationary process, it can be proven that the expected values from our random process will be independent of the origin of our time function. Therefore, we can say that our autocorrelation function will depend on the time difference and not some absolute time. For this discussion, we will let τ = t_2 − t_1, and thus we generalize our autocorrelation expression as

    R_xx(t, t + τ) = R_xx(τ) = E[X(t) X(t + τ)]   (2.64)

for the continuous-time case.

In most DSP courses we will be more interested in dealing with real signal sequences, and thus we will want to look at the discrete-time case of the autocorrelation function. The formula below will prove to be more common and useful than (2.63):

    R_xx[n, n + m] = Σ_{n=−∞}^{∞} (x[n] x[n + m])   (2.65)

And again we can generalize the notation for our autocorrelation function as

    R_xx[n, n + m] = R_xx[m] = E[X[n] X[n + m]]   (2.66)
2.7.1.1 Properties of Autocorrelation

Below we will look at several properties of the autocorrelation function that hold for stationary random processes.

• Autocorrelation is an even function of τ:

    R_xx(τ) = R_xx(−τ)

• The mean-square value can be found by evaluating the autocorrelation where τ = 0, which gives us

    R_xx(0) = E[X²]

• The autocorrelation function will have its largest value when τ = 0. This value can appear again, for example in a periodic function at the values of the equivalent periodic points, but will never be exceeded:

    R_xx(0) ≥ |R_xx(τ)|

• If we take the autocorrelation of a periodic function, then R_xx(τ) will also be periodic with the same frequency.

2.7.1.2 Estimating the Autocorrelation with Time-Averaging

Sometimes the whole random process is not available to us. In these cases, we would still like to be able to find out some of the characteristics of the stationary random process, even if we just have part of one sample function. In order to do this, we can estimate the autocorrelation from a given interval, 0 to T seconds, of the sample function:

    R̆_xx(τ) = (1/(T − τ)) ∫_0^{T−τ} x(t) x(t + τ) dt   (2.67)

However, a lot of times we will not have sufficient information to build a complete continuous-time function of one of our random signals for the above analysis. If this is the case, we can treat the information we do know about the function as a discrete signal and use the discrete-time formula for estimating the autocorrelation:

    R̆_xx[m] = (1/(N − m)) Σ_{n=0}^{N−m−1} (x[n] x[n + m])   (2.68)

2.7.2 Examples

Below we will look at a variety of examples that use the autocorrelation function. We will begin with a simple example dealing with Gaussian White Noise (GWN) and a few basic statistical properties that will prove very useful in these and future calculations.
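The discrete estimator of (2.68) is a short loop. A sketch of ours; the four-sample test sequence is arbitrary:

```python
def acorr_estimate(x, m):
    """Time-average autocorrelation estimate:
    (1/(N-m)) * sum over n of x[n]*x[n+m], valid for 0 <= m < N."""
    n_terms = len(x) - m
    return sum(x[n] * x[n + m] for n in range(n_terms)) / n_terms

x = [1.0, 2.0, 3.0, 4.0]

r0 = acorr_estimate(x, 0)   # (1+4+9+16)/4 = 7.5, an estimate of the mean-square value
r1 = acorr_estimate(x, 1)   # (1*2 + 2*3 + 3*4)/3 = 20/3
```

Note that the lag-0 value dominates the lag-1 value here, consistent with the property R_xx(0) ≥ |R_xx(τ)|.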
Example 2.4
We will let x[n] represent our GWN. For this problem, it is important to remember the following fact about the mean of a GWN function: the mean equals zero, i.e.

    E[x[n]] = 0

Figure 2.12: Gaussian density function.

Along with being zero-mean (its samples follow the Gaussian density shown in Figure 2.12), recall that GWN is always independent. With these two facts, we are now ready to do the short calculations required to find the autocorrelation:

    R_xx[n, n + m] = E[x[n] x[n + m]]

Looking at the above equation, we see that we can break it up further into two conditions: one when m and n are equal and one when they are not equal. We are left with the following piecewise function to solve:

    R_xx[n, n + m] = E[x[n]²]              if m = 0
    R_xx[n, n + m] = E[x[n]] E[x[n + m]]   if m ≠ 0

We can now solve the two parts of the above equation. Since the function, x[n], is independent, when m ≠ 0 we can take the product of the individual expected values of both functions, and we have already stated that the expected value of x[n] is zero, so this product is zero. For the case when m = 0, you should recall from statistics that the expected value of the square of a zero-mean function is equal to its variance. Thus we get the following results for the autocorrelation:

    R_xx[n, n + m] = σ²   if m = 0
    R_xx[n, n + m] = 0    if m ≠ 0

Or, in a more concise way, we can represent the results as

    R_xx[n, n + m] = σ² δ[m]
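The result R_xx[n, n+m] = σ²δ[m] can be checked empirically with the time-average estimator: for simulated white Gaussian noise, the lag-0 estimate should be near σ² and any nonzero lag should be near 0. A sketch of ours (unit-variance noise; the seed, length, and lag are arbitrary):

```python
import random

random.seed(2)

def acorr_estimate(x, m):
    """Time-average autocorrelation estimate at lag m."""
    n_terms = len(x) - m
    return sum(x[n] * x[n + m] for n in range(n_terms)) / n_terms

# Simulated Gaussian white noise: zero mean, variance sigma^2 = 1,
# independent from sample to sample.
x = [random.gauss(0.0, 1.0) for _ in range(20000)]

r0 = acorr_estimate(x, 0)   # should be close to sigma^2 = 1
r5 = acorr_estimate(x, 5)   # should be close to 0
```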
however.71) 2.8. denoted as U (t) and V (t). if our input and output. are at least jointly wide sense stationary. t − τ ) = E [U V ] = ∞ ∞ −∞ −∞ uvf (u. it does have a unique symmetry property: R (−τ ) = R (τ ) (2. where the variables are instances from two dierent wide sense stationary random processes.74) xy yx xy xx yy xy yx 14 "Discrete-Time Convolution" <http://cnx. • Crosscorrelation is not an even function.85 2. x [n] and y [n]. this may occur if more than one random signal is applied to a system.70) or if we deal with two real signal sequences. v) dvdu (2. random variable from a dierent random process Looking at the generalized formula for the crosscorrelation. This means we can simplify our writing of the above function as R (τ ) = E [U V ] (2.8: Crosscorrelation if two processes are wide sense stationary.1 Properties of Crosscorrelation Below we will look at several properties of the crosscorrelation function that hold for two wide sense stationary (WSS) random processes.69) Just as the case with the autocorrelation function. then we arrive at a more commonly seen formula for the discrete crosscorrelation function.1 Crosscorrelation Function When dealing with multiple random processes. we can prove the following property revealing to us what value the maximum cannot exceed. See the formula below and notice the similarities between it and the convolution of two signals: uv 14 Rxy (n. n − m) = Rxy (m) = ∞ n=−∞ (x [n] y [n − m]) (2. (2.org/content/m10087/latest/> . it is also important to be able to describe the relationship. then the crosscorrelation does not depend on absolute time. We will dene the crosscorrelation function as Ruv (t.73) |R (τ ) | ≤ R (0) R (0) • When two random processes are statistically independent then we have R (τ ) = R (τ ) (2.72) • The maximum value of the crosscorrelation is not always when the shift equals zero. however.1. the expected value of the product of a random variable from one random process with a time-shifted. 
In order to do this. we use the crosscorrelation function.8. Denition 2. if any. it is just a function of the time dierence. For example. we will represent our two random processes by allowing U = U (t) and V = V (t − τ ). between the processes.

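The discrete crosscorrelation sum in (2.71) can be sketched for finite-length sequences by treating every sample outside the given range as zero. The helper below is our own, and the tiny test sequences are arbitrary:

```python
def crosscorr(x, y, m):
    """R_xy(m) = sum over n of x[n] * y[n - m], with both sequences starting
    at n = 0 and taken to be zero outside their given ranges."""
    total = 0
    for n in range(len(x)):
        k = n - m
        if 0 <= k < len(y):
            total += x[n] * y[k]
    return total

x = [1, 2]
y = [3, 4]

r0 = crosscorr(x, y, 0)    # 1*3 + 2*4 = 11
r1 = crosscorr(x, y, 1)    # only x[1]*y[0] = 6 overlaps
rm1 = crosscorr(x, y, -1)  # only x[0]*y[1] = 4 overlaps
```

The symmetry property (2.72) can be checked directly: crosscorr(x, y, -m) equals crosscorr(y, x, m).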
2.8.2 Examples

Exercise 2.8.1   (Solution on p. 87.)
Let us begin by looking at a simple example showing the relationship between two sequences. Using (2.71), find the crosscorrelation of the sequences

    x[n] = {…, 0, 0, 2, −3, 6, 1, 3, 0, 0, …}
    y[n] = {…, 0, 0, 1, −2, 4, 1, −3, 0, 0, …}

for each of the following possible time shifts: m = {0, 3, −1}.
Solutions to Exercises in Chapter 2

Solution to Exercise 2.8.1 (p. 86)

1. For m = 0, we should begin by finding the product sequence s[n] = x[n] y[n]. Summing the terms of s[n], the crosscorrelation function gives the answer R_xy(0) = 22.
2. For m = 3, we will approach it the same way we did above; however, we will now shift y[n] to the right. We then find the product sequence s[n] = x[n] y[n − 3], and from the sum in our crosscorrelation function we arrive at the answer R_xy(3) = −6.
3. For m = −1, we will again take the same approach; however, we will now shift y[n] to the left. We then find the product sequence s[n] = x[n] y[n + 1], and from the crosscorrelation function we arrive at the answer R_xy(−1) = −13.
Chapter 3

Filter Design I (Z-Transform)

3.1 Difference Equation

(This content is available online at <http://cnx.org/content/m10595/2.5/>.)

3.1.1 Introduction

One of the most important concepts of DSP is to be able to properly represent the input/output relationship of a given LTI system. A linear constant-coefficient difference equation (LCCDE) serves as a way to express just this relationship in a discrete-time system. Writing the sequence of inputs and outputs, which represent the characteristics of the LTI system, as a difference equation helps in understanding and manipulating a system.

Definition 3.1: difference equation
An equation that shows the relationship between consecutive values of a sequence and the differences among them. They are often rearranged as a recursive formula so that a system's output can be computed from the input signal and past outputs.

3.1.2 General Formulas from the Difference Equation

As stated briefly in the definition above, a difference equation is a very useful tool in describing and calculating the output of the system described by the formula for a given sample n. The key property of the difference equation is its ability to help easily find the transform, H(z), of a system. In the following two subsections, we will look at the general form of the difference equation and the general conversion to a z-transform directly from the difference equation.

3.1.2.1 Difference Equation

The general form of a linear, constant-coefficient difference equation (LCCDE) is shown below:

    Σ_{k=0}^{N} (a_k y[n − k]) = Σ_{k=0}^{M} (b_k x[n − k])   (3.1)

Example:

    y[n] + 7y[n − 1] + 2y[n − 2] = x[n] − 4x[n − 1]   (3.2)
We can also write the general form to easily express a recursive output, which looks like this:

y[n] = −Σ_{k=1}^{N} (a_k y[n − k]) + Σ_{k=0}^{M} (b_k x[n − k])   (3.3)

From this equation, note that y[n − k] represents the outputs and x[n − k] represents the inputs. The value of N represents the order of the difference equation and corresponds to the memory of the system being represented. Because this equation relies on past values of the output, in order to compute a numerical solution, certain past outputs, referred to as the initial conditions, must be known.

3.1.2.2 Conversion to Z-Transform

Using the above formula, (3.1), we can easily generalize the transfer function, H(z), for any difference equation. Below are the steps taken to convert any difference equation into its transfer function, i.e. z-transform. The first step involves taking the z-transform of all the terms in (3.1). Then we use the linearity property to pull the transform inside the summation and the time-shifting property of the z-transform to change the time-shifting terms to exponentials. Once this is done, and with a_0 = 1, we arrive at the following equation:

Y(z) = −Σ_{k=1}^{N} (a_k Y(z) z^{-k}) + Σ_{k=0}^{M} (b_k X(z) z^{-k})   (3.4)

H(z) = Y(z)/X(z) = (Σ_{k=0}^{M} (b_k z^{-k})) / (1 + Σ_{k=1}^{N} (a_k z^{-k}))   (3.5)

3.1.2.3 Conversion to Frequency Response

Once the z-transform has been calculated from the difference equation, we can go one step further to define the frequency response of the system, or filter, that is being represented by the difference equation. Remember that the reason we are dealing with these formulas is to be able to aid us in filter design. A LCCDE is one of the easiest ways to represent FIR filters. By being able to find the frequency response, we will be able to look at the basic properties of any filter represented by a simple LCCDE. Below is the general formula for the frequency response of a z-transform. The conversion is simply a matter of taking the z-transform formula, H(z), and replacing every instance of z with e^{jw}:

H(w) = H(z)|_{z = e^{jw}} = (Σ_{k=0}^{M} (b_k e^{-jwk})) / (Σ_{k=0}^{N} (a_k e^{-jwk}))   (3.6)

Once you understand the derivation of this formula, look at the module concerning Filter Design from the Z-Transform (Section 3.10) for a look into how all of these ideas of the Z-transform (Section 3.2), difference equation, and pole/zero plots (Section 3.8) play a role in filter design.

2 "Derivation of the Fourier Transform" <http://cnx.org/content/m0046/latest/>
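As a quick numerical illustration of (3.6) (an added sketch, not part of the original module), the frequency response of a first-order LCCDE can be evaluated directly from its coefficients using only the standard library:

```python
import cmath

def freq_response(b, a, w):
    """Evaluate H(w) = (sum_k b_k e^{-jwk}) / (sum_k a_k e^{-jwk})
    for LCCDE coefficients b (input side) and a (output side, a_0 = 1)."""
    num = sum(bk * cmath.exp(-1j * w * k) for k, bk in enumerate(b))
    den = sum(ak * cmath.exp(-1j * w * k) for k, ak in enumerate(a))
    return num / den

# Simple first-order lowpass: y[n] = 0.5*y[n-1] + 0.5*x[n],
# i.e. b = [0.5] and a = [1, -0.5] in the general form above.
b, a = [0.5], [1.0, -0.5]
print(abs(freq_response(b, a, 0.0)))       # gain 1.0 at w = 0 (DC)
print(abs(freq_response(b, a, cmath.pi)))  # gain 1/3 at w = pi
```

Library routines such as scipy.signal.freqz perform the same computation over a grid of frequencies.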

3.1.3 Example

Example 3.1: Finding Difference Equation
Below is a basic example showing the opposite of the steps above: given a transfer function one can easily calculate the system's difference equation.

H(z) = ((z + 1)^2) / ((z − 1/2)(z + 3/4))   (3.7)

Given this transfer function of a time-domain filter, we want to find the difference equation. To begin with, expand both polynomials and divide them by the highest order z:

H(z) = ((z + 1)(z + 1)) / ((z − 1/2)(z + 3/4)) = (z^2 + 2z + 1) / (z^2 + (1/4)z − 3/8) = (1 + 2z^{-1} + z^{-2}) / (1 + (1/4)z^{-1} − (3/8)z^{-2})   (3.8)

From this transfer function, the coefficients of the two polynomials will be our a_k and b_k values found in the general difference equation formula, (3.1). Using these coefficients and the above form of the transfer function, we can easily write the difference equation:

x[n] + 2x[n − 1] + x[n − 2] = y[n] + (1/4)y[n − 1] − (3/8)y[n − 2]   (3.9)

In our final step, we can rewrite the difference equation in its more common form showing the recursive nature of the system:

y[n] = x[n] + 2x[n − 1] + x[n − 2] − (1/4)y[n − 1] + (3/8)y[n − 2]   (3.10)

3.1.4 Solving a LCCDE

In order for a linear constant-coefficient difference equation to be useful in analyzing a LTI system, we must be able to find the system's output based upon a known input, x(n), and a set of initial conditions. Two common methods exist for solving a LCCDE: the direct method and the indirect method, the latter being based on the z-transform. Below we will briefly discuss the formulas for solving a LCCDE using each of these methods.

3.1.4.1 Direct Method

The final solution to the output based on the direct method is the sum of two parts, expressed in the following equation:

y(n) = y_h(n) + y_p(n)   (3.11)

The first part, y_h(n), is referred to as the homogeneous solution and the second part, y_p(n), is referred to as the particular solution. The following method is very similar to that used to solve many differential equations, so if you have taken a differential calculus course or used differential equations before then this should seem very familiar.
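Example 3.1 can be sanity-checked numerically (an added illustration, not part of the original module): run the recursion (3.10) on an impulse, then confirm that the series Σ_n y[n] z^{-n} matches the transfer function (3.7) at a point z inside the region of convergence.

```python
def impulse_response(n_samples):
    """Run y[n] = x[n] + 2x[n-1] + x[n-2] - (1/4)y[n-1] + (3/8)y[n-2]
    for an impulse input x[n] = delta[n]."""
    x = [1.0] + [0.0] * (n_samples - 1)
    y = []
    for n in range(n_samples):
        acc = x[n]
        if n >= 1:
            acc += 2 * x[n - 1] - 0.25 * y[n - 1]
        if n >= 2:
            acc += x[n - 2] + 0.375 * y[n - 2]
        y.append(acc)
    return y

z = 2.0  # outside both poles (1/2 and -3/4), so the series converges
h = impulse_response(60)
series = sum(yn * z ** (-n) for n, yn in enumerate(h))
closed_form = (z + 1) ** 2 / ((z - 0.5) * (z + 0.75))
print(round(series, 6), round(closed_form, 6))  # both 2.181818
```

The agreement confirms that the recursive form and the rational transfer function describe the same system.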

3.1.4.1.1 Homogeneous Solution

We begin by assuming that the input is zero, x(n) = 0. Now we simply need to solve the homogeneous difference equation:

Σ_{k=0}^{N} (a_k y[n − k]) = 0   (3.12)

In order to solve this, we will make the assumption that the solution is in the form of an exponential. We will use lambda, λ, to represent our exponential terms. We now have to solve the following equation:

Σ_{k=0}^{N} (a_k λ^{n−k}) = 0   (3.13)

We can expand this equation out and factor out all of the lambda terms. This will give us a large polynomial in parenthesis, which is referred to as the characteristic polynomial. The roots of this polynomial will be the key to solving the homogeneous equation. If there are all distinct roots, then the general solution to the equation will be as follows:

y_h(n) = C_1(λ_1)^n + C_2(λ_2)^n + ··· + C_N(λ_N)^n   (3.14)

However, if the characteristic equation contains multiple roots then the above general solution will be slightly different. Below we have the modified version for an equation where λ_1 has K multiple roots:

y_h(n) = C_1(λ_1)^n + C_1 n(λ_1)^n + C_1 n^2 (λ_1)^n + ··· + C_1 n^{K−1} (λ_1)^n + C_2(λ_2)^n + ··· + C_N(λ_N)^n   (3.15)

3.1.4.1.2 Particular Solution

The particular solution, y_p(n), will be any solution that will solve the general difference equation:

Σ_{k=0}^{N} (a_k y_p(n − k)) = Σ_{k=0}^{M} (b_k x(n − k))   (3.16)

In order to solve, our guess for the solution to y_p(n) will take on the form of the input, x(n). After guessing at a solution to the above equation involving the particular solution, one only needs to plug the solution into the difference equation and solve it out.

3.1.4.2 Indirect Method

The indirect method utilizes the relationship between the difference equation and z-transform, discussed earlier (Section 3.2), to find a solution. The basic idea is to convert the difference equation into a z-transform, as described above (Section 3.1.2.2: Conversion to Z-Transform), to get the resulting output, Y(z). Then by inverse transforming this and using partial-fraction expansion, we can arrive at the solution.

3.2 The Z Transform: Definition

3.2.1 Basic Definition of the Z-Transform

The z-transform of a sequence is defined as

X(z) = Σ_{n=−∞}^{∞} x[n] z^{-n}   (3.17)

3 This content is available online at <http://cnx.org/content/m10549/2.9/>.

Sometimes this equation is referred to as the bilateral z-transform. At times the z-transform is defined as

X(z) = Σ_{n=0}^{∞} x[n] z^{-n}   (3.18)

which is known as the unilateral z-transform.

There is a close relationship between the z-transform and the Fourier transform of a discrete time signal, which is defined as

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{-(jωn)}   (3.19)

Notice that when the z is replaced with e^{jω} the z-transform reduces to the Fourier transform. When the Fourier transform exists, z = e^{jω}, which is to have the magnitude of z equal to unity.

3.2.2 The Complex Plane

In order to get further insight into the relationship between the Fourier transform and the z-transform it is useful to look at the complex plane or z-plane. Take a look at the complex plane:

Z-Plane

Figure 3.1: The z-plane is a complex plane with an imaginary and real axis referring to the complex-valued variable z. The position on the complex plane is given by re^{jω}, and the angle from the positive, real axis around the plane is denoted by ω.

X(z) is defined everywhere on this plane. X(e^{jω}), on the other hand, is defined only where |z| = 1, which is referred to as the unit circle. So for example, ω = 1 at z = 1 and ω = π at z = −1. This is useful because, by representing the Fourier transform as the z-transform on the unit circle, the periodicity of the Fourier transform is easily seen.
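For a finite-length sequence the defining sum has only finitely many terms, so X(z) can be evaluated exactly at any z ≠ 0. A small illustration with a made-up sequence (an added aside, not from the original module):

```python
def z_transform(x, z, n0=0):
    """X(z) = sum_n x[n] z^{-n} for a finite sequence x starting at index n0."""
    return sum(xn * z ** (-(n0 + n)) for n, xn in enumerate(x))

x = [1, 2, 3]                # x[0] = 1, x[1] = 2, x[2] = 3
print(z_transform(x, 2))     # 1 + 2/2 + 3/4 = 2.75
print(z_transform(x, 1j))    # evaluation on the unit circle gives the DTFT
```

Evaluating at z = e^{jω} points on the unit circle reproduces the Fourier transform relationship described above.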

3.2.3 Region of Convergence

The region of convergence, known as the ROC, is important to understand because it defines the region where the z-transform exists. The ROC for a given x[n] is defined as the range of z for which the z-transform converges. Since the z-transform is a power series, it converges when x[n]z^{-n} is absolutely summable. Stated differently,

Σ_{n=−∞}^{∞} |x[n] z^{-n}| < ∞   (3.20)

must be satisfied for convergence. This is best illustrated by looking at the different ROC's of the z-transforms of α^n u[n] and α^n u[n − 1].

Example 3.2
For

x[n] = α^n u[n]   (3.21)

Figure 3.2: x[n] = α^n u[n] where α = 0.5.

X(z) = Σ_{n=−∞}^{∞} (x[n] z^{-n}) = Σ_{n=−∞}^{∞} (α^n u[n] z^{-n}) = Σ_{n=0}^{∞} (α^n z^{-n}) = Σ_{n=0}^{∞} ((αz^{-1})^n)   (3.22)

This sequence is an example of a right-sided exponential sequence because it is nonzero for n ≥ 0. It only converges when |αz^{-1}| < 1. When it converges,

X(z) = 1/(1 − αz^{-1}) = z/(z − α)   (3.23)

If |αz^{-1}| ≥ 1, then the series Σ_{n=0}^{∞} ((αz^{-1})^n) does not converge. Thus the ROC is the range of values where

|αz^{-1}| < 1   (3.24)

or, equivalently,

|z| > |α|   (3.25)

Figure 3.3: ROC for x[n] = α^n u[n] where α = 0.5

Example 3.3
For

x[n] = (−(α^n)) u[−n − 1]   (3.26)

Figure 3.4: x[n] = (−(α^n)) u[−n − 1] where α = 0.5.

X(z) = Σ_{n=−∞}^{∞} (x[n] z^{-n}) = Σ_{n=−∞}^{∞} ((−(α^n)) u[−n − 1] z^{-n}) = −Σ_{n=−∞}^{−1} (α^n z^{-n}) = −Σ_{n=−∞}^{−1} ((α^{-1}z)^{-n}) = −Σ_{n=1}^{∞} ((α^{-1}z)^n) = 1 − Σ_{n=0}^{∞} ((α^{-1}z)^n)   (3.27)

The ROC in this case is the range of values where

|α^{-1}z| < 1   (3.28)

or, equivalently,

|z| < |α|   (3.29)

If the ROC is satisfied, then

X(z) = 1 − 1/(1 − α^{-1}z) = z/(z − α)   (3.30)

Figure 3.5: ROC for x[n] = (−(α^n)) u[−n − 1]
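The two ROCs can be seen numerically (an added sketch, not in the original module): inside its ROC, a partial sum of the right-sided series from Example 3.2 settles to z/(z − α); outside, it grows without bound.

```python
def right_sided_partial_sum(alpha, z, terms):
    """Partial sum of X(z) = sum_{n >= 0} (alpha * z^{-1})^n (Example 3.2)."""
    return sum((alpha / z) ** n for n in range(terms))

alpha = 0.5
print(right_sided_partial_sum(alpha, 1.0, 200))   # |z| > |alpha|: -> z/(z - alpha) = 2.0
print(right_sided_partial_sum(alpha, 0.25, 200))  # |z| < |alpha|: grows without bound
```

The same experiment with the left-sided series of Example 3.3 converges only for |z| < |α|, mirroring the derivation above.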

3.3 Table of Common z-Transforms

note: The table below provides a number of unilateral and bilateral z-transforms (Section 3.2). The table also specifies the region of convergence (Section 3.7). The notation for z found in the table below may differ from that found in other tables. For example, the basic z-transform of u[n] can be written as either of the following two expressions, which are equivalent:

z/(z − 1) = 1/(1 − z^{-1})   (3.31)

4 This content is available online at <http://cnx.org/content/m10119/2.14/>.

Signal                                   | Z-Transform                                    | ROC
δ[n − k]                                 | z^{-k}                                         | all z
u[n]                                     | z/(z − 1)                                      | |z| > 1
−(u[−n − 1])                             | z/(z − 1)                                      | |z| < 1
n u[n]                                   | z/(z − 1)^2                                    | |z| > 1
n^2 u[n]                                 | z(z + 1)/(z − 1)^3                             | |z| > 1
n^3 u[n]                                 | z(z^2 + 4z + 1)/(z − 1)^4                      | |z| > 1
(−(α^n)) u[−n − 1]                       | z/(z − α)                                      | |z| < |α|
α^n u[n]                                 | z/(z − α)                                      | |z| > |α|
n α^n u[n]                               | αz/(z − α)^2                                   | |z| > |α|
n^2 α^n u[n]                             | αz(z + α)/(z − α)^3                            | |z| > |α|
(Π_{k=1}^{m} (n − k + 1) / (α^m m!)) α^n u[n] | z/(z − α)^{m+1}                           | |z| > |α|
γ^n cos(αn) u[n]                         | z(z − γcos(α))/(z^2 − 2γcos(α)z + γ^2)         | |z| > |γ|
γ^n sin(αn) u[n]                         | zγsin(α)/(z^2 − 2γcos(α)z + γ^2)               | |z| > |γ|

Table 3.1
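Any row of the table can be spot-checked numerically at a point z inside its stated ROC. For example, the n α^n u[n] row (an added check, not part of the original table):

```python
# Check n*alpha^n*u[n] <-> alpha*z/(z - alpha)^2 at z = 2, inside |z| > |alpha|.
alpha, z = 0.5, 2.0
series = sum(n * alpha ** n * z ** (-n) for n in range(200))
closed_form = alpha * z / (z - alpha) ** 2
print(round(series, 9), round(closed_form, 9))  # both 0.444444444
```

Two hundred terms are far more than needed here, since the summand decays like (α/z)^n.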

3.4 Poles and Zeros

3.4.1 Introduction

It is quite difficult to qualitatively analyze the Laplace transform and Z-transform (Section 3.2), since mappings of their magnitude and phase or real part and imaginary part result in multiple mappings of 2-dimensional surfaces in 3-dimensional space. For this reason, it is very common to examine a plot of a transfer function's poles and zeros to try to gain a qualitative idea of what a system does. Given a continuous-time transfer function in the Laplace domain, H(s), or a discrete-time one in the Z-domain, H(z), a zero is any value of s or z such that the transfer function is zero, and a pole is any value of s or z such that the transfer function is infinite. To define them precisely:

Definition 3.2: zeros
1. The value(s) for z where the numerator of the transfer function equals zero
2. The complex frequencies that make the overall gain of the filter transfer function zero.

Definition 3.3: poles
1. The value(s) for z where the denominator of the transfer function equals zero
2. The complex frequencies that make the overall gain of the filter transfer function infinite.

3.4.2 Pole/Zero Plots

When we plot these in the appropriate s- or z-plane, we represent zeros with "o" and poles with "x". Refer to this module (Section 3.8) for a detailed look at plotting the poles and zeros of a z-transform on the Z-plane.

5 This content is available online at <http://cnx.org/content/m10112/2.12/>.
6 "The Laplace Transforms" <http://cnx.org/content/m10110/latest/>
7 "Transfer Functions" <http://cnx.org/content/m0028/latest/>

Example 3.4
Find the poles and zeros for the transfer function H(s) = (s^2 + 6s + 8)/(s^2 + 2) and plot the results in the s-plane.

The first thing we recognize is that this transfer function will equal zero whenever the top, s^2 + 6s + 8, equals zero. To find where this equals zero, we factor this to get (s + 2)(s + 4). This yields zeros at s = −2 and s = −4. Had this function been more complicated, it might have been necessary to use the quadratic formula.

For poles, we must recognize that the transfer function will be infinite whenever the bottom part is zero. To find this, we again look to factor the equation. This yields (s + j√2)(s − j√2), which gives purely imaginary roots of +j√2 and −j√2.

Plotting this gives Figure 3.6 (Pole and Zero Plot).

Pole and Zero Plot

Figure 3.6: Sample pole-zero plot

Now that we have found and plotted the poles and zeros, we must ask what it is that this plot gives us. Basically what we can gather from this is that the magnitude of the transfer function will be larger when it is closer to the poles and smaller when it is closer to the zeros. This provides us with a qualitative understanding of what the system does at various frequencies and is crucial to the discussion of stability.

3.4.3 Repeated Poles and Zeros

It is possible to have more than one pole or zero at any given point. For instance, the discrete-time transfer function H(z) = z^2 will have two zeros at the origin and the continuous-time function H(s) = 1/s^25 will have 25 poles at the origin.

8 "BIBO Stability" <http://cnx.org/content/m10113/latest/>
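The factoring step in Example 3.4 can also be done numerically. For quadratics the quadratic formula mentioned above suffices; this is a small added sketch (for higher-order polynomials a routine such as numpy.roots does the same job):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*s^2 + b*s + c via the quadratic formula."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return ((-b - d) / (2 * a), (-b + d) / (2 * a))

print(quadratic_roots(1, 6, 8))   # zeros of s^2 + 6s + 8: -4 and -2
print(quadratic_roots(1, 0, 2))   # poles of s^2 + 2: -j*sqrt(2) and +j*sqrt(2)
```

Using cmath.sqrt keeps the formula valid when the discriminant is negative, as it is for the poles here.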

3.5 Rational Functions

3.5.1 Introduction

When dealing with operations on polynomials, the term rational function is a simple way to describe a particular relationship between two polynomials.

Definition 3.4: rational function
For any two polynomials, A and B, their quotient is called a rational function.

Below is a simple example of a basic rational function, f(x). Note that the numerator and denominator can be polynomials of any order, but the rational function is undefined when the denominator equals zero.

f(x) = (x^2 − 4) / (2x^2 + x − 3)   (3.32)

If you have begun to study the Z-transform (Section 3.2), you should have noticed by now that all z-transforms are rational functions. Below we will look at some of the properties of rational functions and how they can be used to reveal important characteristics about a z-transform, and thus a signal or LTI system.

3.5.2 Properties of Rational Functions

In order to see what makes rational functions special, let us look at some of their basic properties and characteristics. If you are familiar with rational functions and basic algebraic properties, skip to the next section (Section 3.5.3: Rational Functions and the Z-Transform) to see how rational functions are useful when dealing with the z-transform.

3.5.2.1 Roots

To understand many of the following characteristics of a rational function, one must begin by finding the roots of the rational function. In order to do this, let us factor both of the polynomials so that the roots can be easily determined. Like all polynomials, the roots will provide us with information on many key properties. The function below shows the results of factoring the above rational function, (3.32):

f(x) = ((x + 2)(x − 2)) / ((2x + 3)(x − 1))   (3.33)

Thus, the roots of the rational function are as follows:
Roots of the numerator are: {−2, 2}

9 This content is available online at <http://cnx.org/content/m10593/2.7/>.
3.4.4 Pole-Zero Cancellation

An easy mistake to make with regards to poles and zeros is to think that a function like (s + 3)(s − 1)/(s − 1) is the same as s + 3. In theory they are equivalent, as the pole and zero at s = 1 cancel each other out in what is known as pole-zero cancellation. However, think about what may happen if this were a transfer function of a system that was created with physical circuits. In this case, it is very unlikely that the pole and zero would remain in exactly the same place. A minor temperature change, for instance, could cause one of them to move just slightly. If this were to occur a tremendous amount of volatility is created in that area, since there is a change from infinity at the pole to zero at the zero in a very small range of signals. This is generally a very bad way to try to eliminate a pole. A much better way is to use control theory to move the pole to a better place.

Roots of the denominator are: {−3/2, 1}

note: In order to understand rational functions, it is essential to know and understand the roots that make up the rational function.

3.5.2.2 Discontinuities

Because we are dealing with division of two polynomials, we must be aware of the values of the variable that will cause the denominator of our fraction to be zero. When this happens, the rational function becomes undefined, i.e. we have a discontinuity in the function. Because we have already solved for our roots, it is very easy to see when this occurs. When the variable in the denominator equals any of the roots of the denominator, the function becomes undefined.

Example 3.5
Continuing to look at our rational function above, (3.32), we can see that the function will have discontinuities at the following points: x = {−3/2, 1}. These discontinuities often appear as vertical asymptotes on the graph to represent the values where the function is undefined; we say that the discontinuities are the values along the x-axis where the function is undefined.

3.5.2.3 Domain

Using the roots that we found above, the domain of the rational function can be easily defined.

Definition 3.5: domain
The group, or set, of values that are defined by a given function.

Example
Using the rational function above, (3.32), the domain can be defined as any real number x where x does not equal 1 or −3/2. Written out mathematically, we get the following:

{ x ∈ R | x ≠ −3/2 and x ≠ 1 }   (3.34)
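The roots quoted above can be verified with the quadratic formula (an added check, not part of the original module):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c, sorted by real part."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return sorted([(-b - d) / (2 * a), (-b + d) / (2 * a)], key=lambda r: r.real)

print(quadratic_roots(1, 0, -4))   # numerator x^2 - 4:        roots -2 and 2
print(quadratic_roots(2, 1, -3))   # denominator 2x^2 + x - 3: roots -3/2 and 1
```

The denominator roots are exactly the discontinuity locations of the rational function.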
3.5.2.4 Intercepts

The x-intercept is defined as the point(s) where f(x), the output of the rational function, equals zero. Because we have already found the roots of the equation, this process is very simple. From algebra, we know that the output will be zero whenever the numerator of the rational function is equal to zero. Therefore, the function will have an x-intercept wherever x equals one of the roots of the numerator. The y-intercept occurs whenever x equals zero. This can be found by setting all the values of x equal to zero and solving the rational function.

3.5.3 Rational Functions and the Z-Transform

As we have stated above, all z-transforms can be written as rational functions, which have become the most common way of representing the z-transform. Because of this, we can use the properties above, especially those of the roots, in order to reveal certain characteristics about the signal or LTI system described by the z-transform. Below is the general form of the z-transform written as a rational function:

X(z) = (b_0 + b_1 z^{-1} + ··· + b_M z^{-M}) / (a_0 + a_1 z^{-1} + ··· + a_N z^{-N})   (3.35)

If you have already looked at the module about Understanding Pole/Zero Plots and the Z-transform (Section 3.8), you should see how the roots of the rational function play an important role in understanding the z-transform. The equation above, (3.35), can be expressed in factored form just as was done for the simple rational function above, see (3.33). Thus, we can easily find the roots of the numerator and denominator of the z-transform. The following two relationships become apparent:

Relationship of Roots to Poles and Zeros
• The roots of the numerator in the rational function will be the zeros of the z-transform.
• The roots of the denominator in the rational function will be the poles of the z-transform.

3.5.4 Conclusion

Once we have used our knowledge of rational functions to find its roots, we can manipulate a z-transform in a number of useful ways. We can apply this knowledge to representing an LTI system graphically through a Pole/Zero Plot (Section 3.8), or to analyze and design a digital filter through Filter Design from the Z-Transform (Section 3.10).

3.6 The Complex Plane

3.6.1 Complex Plane

The complex plane provides a way to express complex numbers graphically. Any complex number can be expressed as a point on the complex plane. Before looking over the rest of this module, one should be very familiar with complex numbers. Please refer to the complex number module for an explanation or review of these numbers.

Definition 3.6: Complex Plane
A two-dimensional graph where the horizontal axis maps the real part and the vertical axis maps the imaginary part of any complex number or function.

10 This content is available online at <http://cnx.org/content/m10596/2.2/>.
11 "Complex Numbers" <http://cnx.org/content/m0081/latest/>

Complex Plane

Figure 3.7: Plot of the complex number, z, as a point on the complex plane.

3.6.1.1 Rectangular Coordinates

Similar to the Cartesian plane, the complex plane allows one to plot ordered pairs of the form (a, b), where a and b are real numbers that describe a unique complex number through the following general form:

z = a + jb   (3.36)

This form is referred to as the rectangular coordinate.

3.6.1.2 Polar Form

The complex plane can also be used to plot complex numbers that are in polar form. Rather than using a and b, the polar coordinates use r and θ in their ordered pairs. The r is the distance from the origin to the complex number and θ is the angle of the complex number relative to the positive, real axis. Look at the figure above to see these variables displayed on the complex plane. The general form for polar numbers is as follows:

re^{jθ}

As a reminder, the following equations show the conversion between polar and rectangular coordinates:

r = √(a^2 + b^2)   (3.37)

θ = arctan(b/a)   (3.38)
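Python's standard library implements exactly these conversions, which makes (3.37)-(3.38) easy to experiment with (an added aside, not part of the original module):

```python
import cmath

z = 3 + 4j                     # rectangular form: a = 3, b = 4
r, theta = cmath.polar(z)      # r = sqrt(a^2 + b^2), theta = atan2(b, a)
print(r)                       # 5.0
print(round(theta, 4))         # 0.9273 radians
print(cmath.rect(r, theta))    # back to approximately (3+4j)
```

Note that cmath.polar uses atan2 rather than a bare arctan, so the angle is correct in all four quadrants.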

3.7 Region of Convergence for the Z-transform

3.7.1 The Region of Convergence

The region of convergence, known as the ROC, is important to understand because it defines the region where the z-transform (Section 3.2) exists. The z-transform of a sequence is defined as

X(z) = Σ_{n=−∞}^{∞} x[n] z^{-n}   (3.39)

The ROC for a given x[n] is defined as the range of z for which the z-transform converges. Since the z-transform is a power series, it converges when x[n]z^{-n} is absolutely summable. Stated differently,

Σ_{n=−∞}^{∞} |x[n] z^{-n}| < ∞   (3.40)

must be satisfied for convergence.

3.7.2 Properties of the Region of Convergence

The region of convergence has a number of properties that are dependent on the characteristics of the signal, x[n].

• The ROC cannot contain any poles. By definition a pole is where X(z) is infinite. Since X(z) must be finite for all z for convergence, there cannot be a pole in the ROC.
• If x[n] is a finite-duration sequence, then the ROC is the entire z-plane, except possibly z = 0 or |z| = ∞. A finite-duration sequence is a sequence that is nonzero in a finite interval n_1 ≤ n ≤ n_2. As long as each value of x[n] is finite then the sequence will be absolutely summable. When n_2 > 0 there will be a z^{-1} term and thus the ROC will not include z = 0. When n_1 < 0 then the sum will be infinite and thus the ROC will not include |z| = ∞. On the other hand, when n_2 ≤ 0 then the ROC will include z = 0, and when n_1 ≥ 0 the ROC will include |z| = ∞. With these constraints, the only signal, then, whose ROC is the entire z-plane is x[n] = cδ[n].

12 This content is available online at <http://cnx.org/content/m10622/2.5/>.

Figure 3.8: An example of a finite-duration sequence.

The next properties apply to infinite-duration sequences. As noted above, the z-transform converges when |X(z)| < ∞. So we can write

|X(z)| = |Σ_{n=−∞}^{∞} x[n]z^{-n}| ≤ Σ_{n=−∞}^{∞} |x[n]z^{-n}| = Σ_{n=−∞}^{∞} |x[n]|(|z|)^{-n}   (3.41)

We can then split the infinite sum into positive-time and negative-time portions. So

|X(z)| ≤ N(z) + P(z)   (3.42)

where

N(z) = Σ_{n=−∞}^{−1} |x[n]|(|z|)^{-n}   (3.43)

and

P(z) = Σ_{n=0}^{∞} |x[n]|(|z|)^{-n}   (3.44)

In order for |X(z)| to be finite, |x[n]| must be bounded. Let us then set

|x(n)| ≤ C_1 r_1^n for n < 0   (3.45)

and

|x(n)| ≤ C_2 r_2^n for n ≥ 0   (3.46)

From this some further properties can be derived:

• If x[n] is a right-sided sequence, then the ROC extends outward from the outermost pole in X(z). A right-sided sequence is a sequence where x[n] = 0 for n < n_1 < ∞. Looking at the positive-time portion from the above derivation, it follows that

P(z) ≤ C_2 Σ_{n=0}^{∞} r_2^n (|z|)^{-n} = C_2 Σ_{n=0}^{∞} (r_2/|z|)^n   (3.47)

Thus in order for this sum to converge, |z| > r_2, and therefore the ROC of a right-sided sequence is of the form |z| > r_2.

Figure 3.9: A right-sided sequence.

Figure 3.10: The ROC of a right-sided sequence.

• If x[n] is a left-sided sequence, then the ROC extends inward from the innermost pole in X(z). A left-sided sequence is a sequence where x[n] = 0 for n > n_2 > −∞. Looking at the negative-time portion from the above derivation, it follows that

N(z) ≤ C_1 Σ_{n=−∞}^{−1} r_1^n (|z|)^{-n} = C_1 Σ_{n=−∞}^{−1} (r_1/|z|)^n = C_1 Σ_{k=1}^{∞} (|z|/r_1)^k   (3.48)

Thus in order for this sum to converge, |z| < r_1, and therefore the ROC of a left-sided sequence is of the form |z| < r_1.

Figure 3.11: A left-sided sequence.

Figure 3.12: The ROC of a left-sided sequence.

• If x[n] is a two-sided sequence, the ROC will be a ring in the z-plane that is bounded on the interior and exterior by a pole. A two-sided sequence is a sequence with infinite duration in the positive and negative directions. From the derivation of the above two properties, it follows that if r_2 < |z| < r_1 then both the positive-time and negative-time portions converge and thus X(z) converges as well. Therefore the ROC of a two-sided sequence is of the form r_2 < |z| < r_1.

Figure 3.13: A two-sided sequence.

Figure 3.14: The ROC of a two-sided sequence.

3.7.3 Examples

To gain further insight it is good to look at a couple of examples.

Example 3.6
Let

x_1[n] = (1/2)^n u[n] + (−1/4)^n u[n]   (3.49)

The z-transform of (1/2)^n u[n] is z/(z − 1/2) with an ROC of |z| > 1/2.

Figure 3.15: The ROC of (1/2)^n u[n]

The z-transform of (−1/4)^n u[n] is z/(z + 1/4) with an ROC of |z| > 1/4.

Figure 3.16: The ROC of (−1/4)^n u[n]

Due to linearity,

X_1[z] = z/(z − 1/2) + z/(z + 1/4) = 2z(z − 1/8) / ((z − 1/2)(z + 1/4))   (3.50)

By observation it is clear that there are two zeros, at 0 and 1/8, and two poles, at 1/2 and −1/4. Following the above properties, the ROC is |z| > 1/2.

Figure 3.17: The ROC of x_1[n] = (1/2)^n u[n] + (−1/4)^n u[n]

Example 3.7
Now take

x_2[n] = (−1/4)^n u[n] − (1/2)^n u[−n − 1]   (3.51)

The z-transform and ROC of (−1/4)^n u[n] was shown in the example above (Example 3.6). The z-transform of −((1/2)^n) u[−n − 1] is z/(z − 1/2) with an ROC of |z| < 1/2.

Figure 3.18: The ROC of −((1/2)^n) u[−n − 1]

Once again, by linearity,

X_2[z] = z/(z + 1/4) + z/(z − 1/2) = 2z(z − 1/8) / ((z + 1/4)(z − 1/2))   (3.52)

By observation it is again clear that there are two zeros, at 0 and 1/8, and two poles, at 1/2 and −1/4. In this case, though, the two ROCs intersect, and the ROC is the ring 1/4 < |z| < 1/2.

Figure 3.19: The ROC of x_2[n] = (−1/4)^n u[n] − (1/2)^n u[−n − 1].

3.8 Understanding Pole/Zero Plots on the Z-Plane

3.8.1 Introduction to Poles and Zeros of the Z-Transform

Once the z-transform of a system has been determined, one can use the information contained in the function's polynomials to graphically represent the function and easily observe many defining characteristics. The z-transform will have the below structure, based on Rational Functions (Section 3.5):

X(z) = P(z)/Q(z)   (3.53)

The two polynomials, P(z) and Q(z), allow us to find the poles and zeros (Section 3.4) of the z-transform.

Definition 3.7: zeros
1. The value(s) for z where P(z) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function zero.

Definition 3.8: poles
1. The value(s) for z where Q(z) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function infinite.

Example 3.8
Below is a simple transfer function with the poles and zeros shown below it.

H(z) = (z + 1) / ((z − 1/2)(z + 3/4))

The zeros are: {−1}
The poles are: {1/2, −3/4}

13 This content is available online at <http://cnx.org/content/m10556/2.8/>.

3.8.2 The Z-Plane

Once the poles and zeros have been found for a given z-transform, they can be plotted onto the z-plane. The z-plane is a complex plane with an imaginary and real axis referring to the complex-valued variable z. The position on the complex plane is given by re^{jθ} and the angle from the positive, real axis around the plane is denoted by θ. When mapping poles and zeros onto the plane, poles are denoted by an "x" and zeros by an "o". The below figure shows the z-plane, and examples of plotting zeros and poles onto the plane can be found in the following section.

Z-Plane

Figure 3.20

3.8.3 Examples of Pole/Zero Plots

This section lists several examples of finding the poles and zeros of a transfer function and then plotting them onto the z-plane.

Example 3.9: Simple Pole/Zero Plot

H(z) = z / ((z − 1/2)(z + 3/4))

The zeros are: {0}
The poles are: {1/2, −3/4}

Pole/Zero Plot

Figure 3.21: Using the zeros and poles found from the transfer function, the one zero is mapped to zero and the two poles are placed at 1/2 and −3/4.

Example 3.10: Complex Pole/Zero Plot

H(z) = ((z − j)(z + j)) / ((z + 1)(z − (1/2 − (1/2)j))(z − (1/2 + (1/2)j)))

The zeros are: {j, −j}
The poles are: {−1, 1/2 + (1/2)j, 1/2 − (1/2)j}

Pole/Zero Plot

Figure 3.22: Using the zeros and poles found from the transfer function, the zeros are mapped to ±j, and the poles are placed at −1, 1/2 + (1/2)j and 1/2 − (1/2)j.

MATLAB: If access to MATLAB is readily available, then you can use its functions to easily create pole/zero plots. Below is a short program that plots the poles and zeros from the above example onto the z-plane.

% Set up vector for zeros
z = [j -j];

% Set up vector for poles
p = [-1 .5+.5j .5-.5j];

figure(1);
zplane(z,p);
title('Pole/Zero Plot for Complex Pole/Zero Plot Example');

3.8.4 Pole/Zero Plot and Region of Convergence

The region of convergence (ROC) for X(z) in the complex z-plane can be determined from the pole/zero plot. Although several regions of convergence may be possible, where each one corresponds to a different impulse response, there are some choices that are more practical. A ROC can be chosen to make the transfer function causal and/or stable depending on the pole/zero plot.

Filter Properties from ROC
• If the ROC extends outward from the outermost pole, then the system is causal.
• If the ROC includes the unit circle, then the system is stable.

Example 3.11
Below is a pole/zero plot with a possible ROC of the z-transform in the Simple Pole/Zero Plot (Example 3.9: Simple Pole/Zero Plot) discussed earlier. The shaded region indicates the ROC chosen for the filter. From this figure, we can see that the filter will be both causal and stable since the above listed conditions are both met.

H(z) = z / ((z − 1/2)(z + 3/4))
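The same check can be scripted without a plot. For a causal filter the ROC is chosen outside the outermost pole, so stability (the ROC containing the unit circle) reduces to every pole having magnitude less than one. A Python sketch of this reasoning (an added illustration; the MATLAB zplane call above is the module's own version):

```python
def causal_filter_is_stable(poles):
    """For a causal system (ROC outside the outermost pole), the ROC
    includes the unit circle exactly when every pole is inside it."""
    return all(abs(p) < 1 for p in poles)

print(causal_filter_is_stable([0.5, -0.75]))                  # Example 3.11 poles: True
print(causal_filter_is_stable([-1, 0.5 + 0.5j, 0.5 - 0.5j]))  # pole on |z| = 1: False
```

The second call fails because the pole at z = −1 sits on the unit circle, so no outward-extending ROC can contain it.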

Region of Convergence for the Pole/Zero Plot

Figure 3.23: The shaded area represents the chosen ROC for the transfer function.

3.8.5 Frequency Response and the Z-Plane

The reason it is helpful to understand and create these pole/zero plots is due to their ability to help us easily design a filter. Based on the location of the poles and zeros, the magnitude response of the filter can be quickly understood. Also, by starting with the pole/zero plot, one can design a filter and obtain its transfer function very easily. Refer to this module (Section 3.10) for information on the relationship between the pole/zero plot and the frequency response.

3.9 Zero Locations of Linear-Phase FIR Filters

3.9.1 Zero Locations of Linear-Phase Filters

The zeros of the transfer function H(z) of a linear-phase filter lie in specific configurations. We can write the symmetry condition

h(n) = h(N - 1 - n)

in the Z domain. Taking the Z-transform of both sides gives

H(z) = z^(-(N-1)) H(1/z)    (3.54)

Recall that we are assuming that h(n) is real-valued. If z0 is a zero of H(z), then

H(z0) = 0 and H(z0*) = 0

14 This content is available online at <http://cnx.org/content/m10700/2.2/>.

(Because the roots of a polynomial with real coefficients exist in complex-conjugate pairs.) Using the symmetry condition (3.54), it follows that

H(z0) = z0^(-(N-1)) H(1/z0) = 0

and

H(z0*) = (z0*)^(-(N-1)) H(1/z0*) = 0

or

H(1/z0) = 0 and H(1/z0*) = 0

note: If z0 is a zero of a (real-valued) linear-phase filter, then so are z0*, 1/z0, and 1/z0*.

3.9.2 ZERO LOCATIONS

It follows that

1. generic zeros of a linear-phase filter exist in sets of 4.
2. zeros on the unit circle (z0 = e^(jω0)) exist in sets of 2.
3. zeros on the real line (z0 = a) exist in sets of 2.
4. zeros at 1 and -1 (z0 = ±1) do not imply the existence of zeros at other specific points.
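The sets-of-4 rule is easy to verify numerically: build a polynomial whose roots are a generic zero z0 together with its conjugate and the two reciprocals, and confirm the resulting coefficients are real and symmetric, i.e. a linear-phase filter. A Python/NumPy sketch (illustrative; z0 is an arbitrary generic zero):

```python
import numpy as np

z0 = 0.8 * np.exp(0.4j)                    # generic zero: off the unit circle and real axis
group = [z0, np.conj(z0), 1 / z0, 1 / np.conj(z0)]

h = np.poly(group)                          # coefficients of the degree-4 polynomial
assert np.allclose(h.imag, 0)               # conjugate pairing makes them real
h = h.real
assert np.allclose(h, h[::-1])              # symmetric coefficients: linear phase

for z in group:                             # all 4 members are zeros of H(z)
    assert abs(np.polyval(h, z)) < 1e-9
print("generic zero set of 4 verified")
```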

Figure 3.24: Examples of zero sets (panels (a) and (b)).

3.9.3 ZERO LOCATIONS: AUTOMATIC ZEROS

The frequency response H^f(ω) of a Type II FIR filter always has a zero at ω = π:

h (n) = [h0 , h1 , h2 , h2 , h1 , h0 ]

H(z) = h0 + h1 z^(-1) + h2 z^(-2) + h2 z^(-3) + h1 z^(-4) + h0 z^(-5)

H(-1) = h0 - h1 + h2 - h2 + h1 - h0 = 0

H^f(π) = H(e^(jπ)) = H(-1) = 0

always for Type II filters. Similarly, we can derive the following rules for Type III and Type IV FIR filters.

note: H^f(π) = 0 always for Type II filters.

note: H^f(0) = H^f(π) = 0 always for Type III filters.

note: H^f(0) = 0 always for Type IV filters.

The automatic zeros can also be derived using the characteristics of the amplitude response A(ω) seen earlier.

Type | automatic zeros
 I   | (none)
 II  | ω = π
 III | ω = 0 and ω = π
 IV  | ω = 0

Table 3.2
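Since H^f(π) = H(-1) and H^f(0) = H(1), each automatic zero in the table can be confirmed by summing the coefficients with and without alternating signs. A short Python check with arbitrary coefficient values (the h0, h1, h2 below are illustrative, not from the text):

```python
import numpy as np

def Hf(h, w):
    """Frequency response of an FIR filter with impulse response h."""
    n = np.arange(len(h))
    return np.sum(h * np.exp(-1j * w * n))

h0, h1, h2 = 0.2, 0.5, 0.3                       # arbitrary values
type2 = np.array([h0, h1, h2, h2, h1, h0])        # symmetric, even length
type3 = np.array([h0, h1, 0.0, -h1, -h0])         # anti-symmetric, odd length
type4 = np.array([h0, h1, -h1, -h0])              # anti-symmetric, even length

assert abs(Hf(type2, np.pi)) < 1e-12              # Type II:  zero at w = pi
assert abs(Hf(type3, 0.0)) < 1e-12                # Type III: zero at w = 0 ...
assert abs(Hf(type3, np.pi)) < 1e-12              # ... and at w = pi
assert abs(Hf(type4, 0.0)) < 1e-12                # Type IV:  zero at w = 0
print("automatic zeros confirmed")
```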

3.9.4 ZERO LOCATIONS: EXAMPLES

The Matlab command zplane can be used to plot the zero locations of FIR filters.


Figure 3.25

Note that the zero locations satisfy the properties noted previously.

3.10 Filter Design using the Pole/Zero Plot of a Z-Transform

3.10.1 Estimating Frequency Response from Z-Plane


One of the motivating factors for analyzing pole/zero plots is their relationship to the frequency response of the system. Based on the positions of the poles and zeros, one can quickly determine the frequency response. This is a result of the correspondence between the frequency response and the transfer function evaluated on the unit circle in the pole/zero plot. The frequency response, or DTFT, of the system is

15 This content is available online at <http://cnx.org/content/m10548/2.9/>.

defined as:

H(w) = H(z)|_(z = e^(jw)) = (Σ_(k=0)^(M) b_k e^(-jwk)) / (Σ_(k=0)^(N) a_k e^(-jwk))    (3.55)

Next, by factoring the transfer function into poles and zeros and multiplying the numerator and denominator by e^(jw), we arrive at the following equation:

H(w) = |b0/a0| (Π_(k=1)^(M) |e^(jw) - c_k|) / (Π_(k=1)^(N) |e^(jw) - d_k|)    (3.56)

From (3.56) we have the frequency response in a form that can be used to interpret physical characteristics about the filter's frequency response. The numerator and denominator contain a product of terms of the form |e^(jw) - h|, where h is either a zero, denoted by c_k, or a pole, denoted by d_k. Vectors are commonly used to represent each term and its parts on the complex plane. The pole or zero, h, is a vector from the origin to its location anywhere on the complex plane, and e^(jw) is a vector from the origin to its location on the unit circle. The vector connecting these two points, |e^(jw) - h|, connects the pole or zero location to a place on the unit circle that depends on the value of w. From this, we can begin to understand how the magnitude of the frequency response is a ratio of the distances to the poles and zeros present in the z-plane as w goes from zero to pi.

While moving around the unit circle, the following two properties, taken from the above equations, specify how one should draw |H(w)|:

1. If a zero is on the unit circle, then the frequency response is zero at that point; if e^(jw) is close to a zero, then the magnitude is small.
2. If a pole is on the unit circle, then the frequency response goes to infinity at that point; if e^(jw) is close to a pole, then the magnitude is large.

These characteristics allow us to interpret |H(w)| as follows:

|H(w)| = |b0/a0| ("distances from zeros") / ("distances from poles")    (3.57)

In conclusion, using the distances from the unit circle to the poles and zeros, we can plot the frequency response of the system.

3.10.2 Drawing Frequency Response from Pole/Zero Plot

Let us now look at several examples of determining the magnitude of the frequency response from the pole/zero plot of a z-transform. If you have forgotten or are unfamiliar with pole/zero plots, please refer back to the Pole/Zero Plots (Section 3.8) module.

Example 3.12

In this first example we will take a look at the very simple z-transform shown below:

H(z) = z + 1 = 1 + z^(-1)

H(w) = 1 + e^(-jw)

For this example, some of the vectors represented by |e^(jw) - h|, for random values of w, are explicitly drawn onto the complex plane shown in the figure below. These vectors show how the amplitude of the frequency response changes as w goes from 0 to 2π, and also show the physical meaning of the terms in (3.56) above. Since there are no poles in the transform, there is only this one vector term rather than a ratio as seen in (3.56). One can see that when w = 0, the vector is the longest and thus the frequency response will have its largest amplitude here.
As w approaches π, the lengths of the vectors decrease, as does the amplitude of |H(w)|.

(a) Pole/Zero Plot    (b) Frequency Response: |H(w)|

Figure 3.26: The first figure represents the pole/zero plot with a few representative vectors graphed, while the second shows the frequency response with a peak at +2, graphed between plus and minus π.

Example 3.13

For this example, a more complex transfer function is analyzed in order to represent the system's frequency response.

H(z) = z / (z - 1/2) = 1 / (1 - (1/2) z^(-1))

H(w) = 1 / (1 - (1/2) e^(-jw))

Below we can see the two figures described by the above equations. Figure 3.27(a) (Pole/Zero Plot) represents the basic pole/zero plot of the z-transform, H(w). Figure 3.27(b) (Frequency Response: |H(w)|) shows the magnitude of the frequency response. From the formulas and statements in the previous section, we can see that when w = 0 the frequency response will peak, since it is at this value of w that the pole is closest to the unit circle. The ratio from (3.56) helps us see the mathematics behind this conclusion and the relationship between the distances from the unit circle and the poles and zeros. As w moves from 0 to π, we see how the zero begins to mask the effects of the pole and thus force the frequency response closer to 0.

(a) Pole/Zero Plot    (b) Frequency Response: |H(w)|

Figure 3.27: The first figure represents the pole/zero plot, while the second shows the frequency response with a peak at +2, graphed between plus and minus π.
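Relation (3.56) can be tested directly: evaluate H(z) on the unit circle and compare it with the product of distances to the zeros divided by the product of distances to the poles. A Python/NumPy sketch for the filter of Example 3.13, H(z) = z/(z - 1/2):

```python
import numpy as np

zeros = [0.0]            # H(z) = z / (z - 1/2): one zero at the origin
poles = [0.5]
b0_over_a0 = 1.0         # ratio of leading coefficients

w = np.linspace(0, np.pi, 200)
ejw = np.exp(1j * w)

# |H(w)| as a ratio of distances from e^{jw} to zeros and poles (eq. 3.56)
dist_form = np.full_like(w, b0_over_a0)
for c in zeros:
    dist_form *= np.abs(ejw - c)
for d in poles:
    dist_form /= np.abs(ejw - d)

direct = np.abs(ejw / (ejw - 0.5))    # direct evaluation on the unit circle
assert np.allclose(dist_form, direct)
print(dist_form[0])                   # 2.0: the peak at w = 0 seen in Figure 3.27
```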

Chapter 4

Filter Design II

4.1 Bilinear Transform

There is a way that we can make things a good bit easier for ourselves, however. The only drawback is that we have to do some complex analysis first, and look at a bilinear transform! Let's do one more substitution, and define another complex vector, which we can call r(s):

r(s) ≡ |Γν| e^(j(θr - 2βs))    (4.1)

The vector r(s) is just the rotating part of the crank diagram which we have been looking at (Figure 4.1 (The Vector r(s))). It has a magnitude equal to that of the reflection coefficient, and it rotates around at a rate 2βs as we move down the line. For every r(s) there is a corresponding Z(s), which is given by:

Z(s) = Z0 (1 + r(s)) / (1 - r(s))    (4.2)

The Vector r(s)

Figure 4.1

Now, it turns out to be easier if we talk about a normalized impedance, which we get by dividing Z(s) by Z0:

1 This content is available online at <http://cnx.org/content/m1057/2.13/>.

Z(s)/Z0 = (1 + r(s)) / (1 - r(s))    (4.3)

which we can solve for r(s):

r(s) = (Z(s)/Z0 - 1) / (Z(s)/Z0 + 1)    (4.4)

This relationship is called a bilinear transform. For every r(s) that we can imagine, there is one and only one Z(s)/Z0, and for every Z(s)/Z0 there is one and only one r(s). What we would like to be able to do is find Z(s)/Z0, given an r(s). The reason for this should be readily apparent. Whereas Z(s)/Z0 behaves in a most difficult manner (dividing one phasor by another), r(s) behaves in a very predictable way as we move along in s: r(s) simply rotates around on the complex plane. Given one r(s0) it is easy to find another r(s). We just rotate around!

Suppose I have a complex plane representing Z(s)/Z0 (Figure 4.2 (The Complex Impedance Plane)). And then suppose I have some point "A" on that plane and I want to know what impedance it represents. I just read along the two axes, and find that, for the example in Figure 4.2, "A" represents an impedance of Z(s)/Z0 = 4 + 2j. What I would like to do would be to get a grid similar to that on the Z(s)/Z0 plane, but on the r(s) plane instead. That way, if I knew one impedance (say Z(0)/Z0 = ZL/Z0), then I could find any other impedance, at any other s, by simply rotating r(s) around by 2βs, and then reading off the new Z(s)/Z0 from the grid I had developed. This is what we shall attempt to do.

The Complex Impedance Plane

Figure 4.2

We shall find the required relationship in a graphical manner. Let's start with (4.4) and re-write it as:

r(s) = (Z(s)/Z0 + 1 - 2) / (Z(s)/Z0 + 1) = 1 + (-2) / (Z(s)/Z0 + 1)    (4.5)

In order to use (4.5), we are going to have to interpret it in a way which might seem a little odd to you.

The way we will read the equation is to say: "Take Z(s)/Z0 and add 1 to it. Invert what you get, and multiply by -2. Then add 1 to the result." Simple, isn't it? The only hard part we have in doing this is inverting Z(s)/Z0 + 1. This, it turns out, is pretty easy once we learn one very important fact.

The one fact about algebra on the complex plane that we need is as follows. Consider a vertical line, s, on the complex plane, located a distance d away from the imaginary axis (Figure 4.3 (A Vertical Line, s, a Distance, d, Away From the Imaginary Axis)). There are a lot of ways we could express the line s, but we will choose one which will turn out to be convenient for us. Let's let:

s = d (1 - j tan(φ)),  φ ∈ (-π/2, π/2)    (4.6)

A Vertical Line, s, a Distance, d, Away From the Imaginary Axis

Figure 4.3

Now we ask ourselves the question: what is the inverse of s?

1/s = (1/d) · 1 / (1 - j tan(φ))    (4.7)

We can substitute for tan(φ):

1/s = (1/d) · 1 / (1 - j sin(φ)/cos(φ)) = (1/d) · cos(φ) / (cos(φ) - j sin(φ))    (4.8)

And then, since cos(φ) - j sin(φ) = e^(-jφ):

1/s = (1/d) · cos(φ) / e^(-jφ) = (1/d) cos(φ) e^(jφ)    (4.9)

A Plot of 1/s

Figure 4.4

A careful look at Figure 4.4 (A Plot of 1/s) should allow you to convince yourself that (4.9) is an equation for a circle on the complex plane, with a diameter 1/d. If s is not parallel to the imaginary axis, but rather has its perpendicular to the origin at some angle φ, to make a line s' (Figure 4.5 (The Line s')), then since s' = s e^(jφ), taking 1/s' simply will give us a circle with a diameter of 1/d, which has been rotated by an angle φ from the real axis (Figure 4.6 (Inverse of a Rotated Line)). And so we come to the one fact we have to keep in mind: "The inverse of a straight line on the complex plane is a circle, whose diameter is the inverse of the distance between the line and the origin."

The Line s'

Figure 4.5: The line s multiplied by e^(jφ).
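The line-inverts-to-a-circle fact can be confirmed numerically: sample the vertical line s = d(1 - j tan φ), invert, and check that every point of 1/s sits at distance 1/(2d) from the center 1/(2d) on the real axis, i.e. a circle of diameter 1/d through the origin. A Python sketch with an illustrative d:

```python
import numpy as np

d = 2.0                                     # distance of the line from the imaginary axis
phi = np.linspace(-1.4, 1.4, 500)           # stay away from +/- pi/2
s = d * (1 - 1j * np.tan(phi))              # points on the vertical line, eq. (4.6)

inv = 1 / s                                 # eq. (4.9): (1/d) cos(phi) e^{j phi}

center = 1 / (2 * d)                        # circle center on the real axis
radii = np.abs(inv - center)
assert np.max(np.abs(radii - 1 / (2 * d))) < 1e-12
print("1/s traces a circle of diameter", 1 / d)
```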

Inverse of a Rotated Line

Figure 4.6


Chapter 5

Filter Design III

5.1 Linear-Phase FIR Filters

5.1.1 THE AMPLITUDE RESPONSE

If the real and imaginary parts of H^f(ω) are given by

H^f(ω) = Re(ω) + j Im(ω)    (5.1)

the magnitude and phase are defined as

|H^f(ω)| = sqrt((Re(ω))^2 + (Im(ω))^2)

p(ω) = arctan(Im(ω) / Re(ω))

so that

H^f(ω) = |H^f(ω)| e^(jp(ω))    (5.2)

With this definition, |H^f(ω)| is never negative and p(ω) is usually discontinuous, but it can be very helpful to write H^f(ω) as

H^f(ω) = A(ω) e^(jθ(ω))    (5.3)

where A(ω) can be positive and negative, and θ(ω) continuous. A(ω) is called the amplitude response. Figure 5.1 illustrates the difference between |H^f(ω)| and A(ω).

1 This content is available online at <http://cnx.org/content/m10705/2.3/>.

Figure 5.1

A linear-phase filter is one for which the continuous phase θ(ω) is linear:

θ(ω) = -Mω + B

5.1.2 WHY LINEAR-PHASE?

We assume in the following that the impulse response h(n) is real-valued. If a discrete-time cosine signal

x1(n) = cos(ω1 n + φ1)

is processed through a discrete-time filter with frequency response H^f(ω) = A(ω) e^(jθ(ω)), then the output signal is given by

y1(n) = A(ω1) cos(ω1 n + φ1 + θ(ω1))

or

y1(n) = A(ω1) cos(ω1 (n + θ(ω1)/ω1) + φ1)

The LTI system has the effect of scaling the cosine signal and delaying it by -θ(ω1)/ω1.

Exercise 5.1 (Solution on p. 152.)
When does the system delay cosine signals with different frequencies by the same amount?

The function -θ(ω)/ω is called the phase delay. A linear-phase filter therefore has constant phase delay.

5.1.3 WHY LINEAR-PHASE: EXAMPLE

Consider a discrete-time filter described by the difference equation

y(n) = -0.1821 x(n) + 0.7865 x(n-1) - 0.6804 x(n-2) + x(n-3) + 0.6804 y(n-1) - 0.7865 y(n-2) + 0.1821 y(n-3)    (5.4)

When ω1 = 0.31π, the delay is -θ(ω1)/ω1 = 2.45. The delay is illustrated in Figure 5.2. Notice that the delay is fractional: the discrete-time samples are not exactly reproduced in the output. The fractional delay can be interpreted in this case as a delay of the underlying continuous-time cosine signal.

Figure 5.2

5.1.4 WHY LINEAR-PHASE: EXAMPLE (2)

Consider the same system given on the previous slide, but let us change the frequency of the cosine signal. When ω2 = 0.47π, the delay is -θ(ω2)/ω2 = 0.14.

Figure 5.3

note: For this example, the delay depends on the frequency, because this system does not have linear phase.

5.1.5 WHY LINEAR-PHASE: MORE

From the previous slides, we see that a filter will delay different frequency components of a signal by the same amount if the filter has linear phase (constant phase delay). In addition, when a narrow-band signal (as in AM modulation) goes through a filter, the envelope will be delayed by the group delay or envelope delay of the filter. The amount by which the envelope is delayed is independent of the carrier frequency only if the filter has linear phase. Also, in applications like image processing, filters with non-linear phase can introduce artifacts that are visually annoying.

5.2 Four Types of Linear-Phase FIR Filters

5.2.1 FOUR TYPES OF LINEAR-PHASE FIR FILTERS

Linear-phase FIR filters can be divided into four basic types.

Type | impulse response
 I   | symmetric,      length is odd
 II  | symmetric,      length is even
 III | anti-symmetric, length is odd
 IV  | anti-symmetric, length is even

Table 5.1

Figure 5.4

When h(n) is nonzero for 0 ≤ n ≤ N - 1 (the length of the impulse response h(n) is N), the symmetry of the impulse response can be written as

h(n) = h(N - 1 - n)    (5.5)

and the anti-symmetry can be written as

h(n) = -h(N - 1 - n)    (5.6)

5.2.2 TYPE I: ODD-LENGTH SYMMETRIC

The frequency response of a length N = 5 FIR Type I filter can be written as follows:

H^f(ω) = h0 + h1 e^(-jω) + h2 e^(-2jω) + h1 e^(-3jω) + h0 e^(-4jω)    (5.7)
H^f(ω) = e^(-2jω) (h0 e^(2jω) + h1 e^(jω) + h2 + h1 e^(-jω) + h0 e^(-2jω))    (5.8)
H^f(ω) = e^(-2jω) (h0 (e^(2jω) + e^(-2jω)) + h1 (e^(jω) + e^(-jω)) + h2)    (5.9)
H^f(ω) = e^(-2jω) (2 h0 cos(2ω) + 2 h1 cos(ω) + h2)    (5.10)
H^f(ω) = A(ω) e^(jθ(ω))    (5.11)

where

θ(ω) = -2ω
A(ω) = 2 h0 cos(2ω) + 2 h1 cos(ω) + h2

Note that A(ω) is real-valued and can be both positive and negative. In general, for a Type I FIR filter of length N:

H^f(ω) = A(ω) e^(jθ(ω))
A(ω) = h(M) + 2 Σ_(n=0)^(M-1) h(n) cos((M - n)ω)    (5.12)
θ(ω) = -Mω,  M = (N - 1)/2    (5.13)

5.2.3 TYPE II: EVEN-LENGTH SYMMETRIC

The frequency response of a length N = 4 FIR Type II filter can be written as follows:

H^f(ω) = h0 + h1 e^(-jω) + h1 e^(-2jω) + h0 e^(-3jω)    (5.14)
H^f(ω) = e^(-(3/2)jω) (h0 e^((3/2)jω) + h1 e^((1/2)jω) + h1 e^(-(1/2)jω) + h0 e^(-(3/2)jω))    (5.15)
H^f(ω) = e^(-(3/2)jω) (h0 (e^((3/2)jω) + e^(-(3/2)jω)) + h1 (e^((1/2)jω) + e^(-(1/2)jω)))    (5.16)
H^f(ω) = e^(-(3/2)jω) (2 h0 cos((3/2)ω) + 2 h1 cos((1/2)ω))    (5.17)
H^f(ω) = A(ω) e^(jθ(ω))    (5.18)

where

θ(ω) = -(3/2)ω
A(ω) = 2 h0 cos((3/2)ω) + 2 h1 cos((1/2)ω)

In general, for a Type II FIR filter of length N:

H^f(ω) = A(ω) e^(jθ(ω))
A(ω) = 2 Σ_(n=0)^(N/2 - 1) h(n) cos((M - n)ω)
θ(ω) = -Mω,  M = (N - 1)/2

5.2.4 TYPE III: ODD-LENGTH ANTI-SYMMETRIC

The frequency response of a length N = 5 FIR Type III filter can be written as follows:

H^f(ω) = h0 + h1 e^(-jω) - h1 e^(-3jω) - h0 e^(-4jω)    (5.19)
H^f(ω) = e^(-2jω) (h0 e^(2jω) + h1 e^(jω) - h1 e^(-jω) - h0 e^(-2jω))    (5.20)
H^f(ω) = e^(-2jω) (h0 (e^(2jω) - e^(-2jω)) + h1 (e^(jω) - e^(-jω)))    (5.21)
H^f(ω) = e^(-2jω) (2j h0 sin(2ω) + 2j h1 sin(ω))    (5.22)
H^f(ω) = e^(-2jω) e^(jπ/2) (2 h0 sin(2ω) + 2 h1 sin(ω))    (5.23)
H^f(ω) = A(ω) e^(jθ(ω))    (5.24)

where

θ(ω) = -2ω + π/2
A(ω) = 2 h0 sin(2ω) + 2 h1 sin(ω)

In general, for a Type III FIR filter of length N:

H^f(ω) = A(ω) e^(jθ(ω))
A(ω) = 2 Σ_(n=0)^(M-1) h(n) sin((M - n)ω)    (5.25)
θ(ω) = -Mω + π/2,  M = (N - 1)/2    (5.26)

5.2.5 TYPE IV: EVEN-LENGTH ANTI-SYMMETRIC

The frequency response of a length N = 4 FIR Type IV filter can be written as follows:

H^f(ω) = h0 + h1 e^(-jω) - h1 e^(-2jω) - h0 e^(-3jω)    (5.27)
H^f(ω) = e^(-(3/2)jω) (h0 (e^((3/2)jω) - e^(-(3/2)jω)) + h1 (e^((1/2)jω) - e^(-(1/2)jω)))    (5.28)
H^f(ω) = e^(-(3/2)jω) (2j h0 sin((3/2)ω) + 2j h1 sin((1/2)ω))    (5.29)
H^f(ω) = e^(-(3/2)jω) e^(jπ/2) (2 h0 sin((3/2)ω) + 2 h1 sin((1/2)ω))    (5.30)
H^f(ω) = A(ω) e^(jθ(ω))    (5.31)

where

θ(ω) = -(3/2)ω + π/2
A(ω) = 2 h0 sin((3/2)ω) + 2 h1 sin((1/2)ω)

In general, for a Type IV FIR filter of length N:

H^f(ω) = A(ω) e^(jθ(ω))
A(ω) = 2 Σ_(n=0)^(N/2 - 1) h(n) sin((M - n)ω)    (5.33)
θ(ω) = -Mω + π/2,  M = (N - 1)/2    (5.34)

2 This content is available online at <http://cnx.org/content/m10706/2.2/>.

5.3 Design of Linear-Phase FIR Filters by DFT-Based Interpolation

5.3.1 DESIGN OF FIR FILTERS BY DFT-BASED INTERPOLATION

One approach to the design of FIR filters is to ask that A(ω) pass through a specified set of values. If the number of specified interpolation points is the same as the number of filter parameters, then the filter is totally determined by the interpolation conditions, and the filter can be found by solving a system of linear equations. When the interpolation points are equally spaced between 0 and 2π, then this interpolation problem can be solved very efficiently using the DFT.

3 This content is available online at <http://cnx.org/content/m10701/2.2/>.

To derive the DFT solution to the interpolation problem, recall the formula relating the samples of the frequency response to the DFT. In the case we are interested in here, the number of samples is to be the same as the length of the filter (L = N):

H(2πk/N) = Σ_(n=0)^(N-1) h(n) e^(-j(2π/N)nk) = DFT_N(h(n))    (5.35)

5.3.1.1 Types I and II

Recall the relation between A(ω) and H^f(ω) for a Type I or II filter, to obtain

A(2πk/N) = H(2πk/N) e^(jM(2π/N)k) = DFT_N(h(n)) W_N^(Mk)    (5.36)

where W_N = e^(j2π/N). Now we can relate the N-point DFT of h(n) to the samples of A(ω):

DFT_N(h(n)) = A(2πk/N) W_N^(-Mk)    (5.37)

Finally, if the values A(2πk/N) are specified, we can obtain the filter coefficients h(n) that satisfy the interpolation conditions by using the inverse DFT:

h(n) = DFT_N^(-1)(A(2πk/N) W_N^(-Mk))    (5.38)

It is important to note, however, that the specified values A(2πk/N) must possess the appropriate symmetry in order for the result of the inverse DFT to be a real Type I or II FIR filter.

5.3.1.2 Types III and IV

For Type III and IV filters, we have

A(2πk/N) = (-j) H(2πk/N) e^(jM(2π/N)k) = (-j) DFT_N(h(n)) W_N^(Mk)

Then we can relate the N-point DFT of h(n) to the samples of A(ω):

DFT_N(h(n)) = j A(2πk/N) W_N^(-Mk)

Solving for the filter coefficients h(n) gives:

h(n) = DFT_N^(-1)(j A(2πk/N) W_N^(-Mk))    (5.39)

5.3.2 EXAMPLE: DFT-INTERPOLATION (TYPE I)

The following Matlab code fragment illustrates how to use this approach to design a length 11 Type I FIR filter for which A(2πk/N) = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1], 0 ≤ k ≤ N - 1, and N = 11.

N = 11;
M = (N-1)/2;
Ak = [1 1 1 0 0 0 0 0 0 1 1];   % samples of A(w)
k = 0:N-1;
W = exp(j*2*pi/N);
h = ifft(Ak .* W.^(-M*k));
h'

ans =
   0.0694  -0.0540  -0.1094   0.0474   0.3194   0.4545   0.3194   0.0474  -0.1094  -0.0540   0.0694

Observe that the filter coefficients h are real and symmetric, and that a Type I filter is obtained as desired. The plot of A(ω) for this filter illustrates the interpolation points.

L = 512;
H = fft([h zeros(1,L-N)]);
k = 0:L-1;
W = exp(j*2*pi/L);
A = H .* W.^(M*k);
A = real(A);
w = k*2*pi/L;
plot(w/pi, A, 2*[0:N-1]/N, Ak, 'o')
xlabel('\omega/\pi')
title('A(\omega)')
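The same length-11 design can be reproduced in Python with NumPy's FFT. This is an illustrative translation of the Matlab fragment above, checking both the Type I symmetry and the interpolation conditions:

```python
import numpy as np

N = 11
M = (N - 1) // 2
Ak = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1], dtype=float)   # samples of A(w)

k = np.arange(N)
W = np.exp(2j * np.pi / N)
h = np.fft.ifft(Ak * W ** (-M * k))        # h(n) = IDFT( A(2*pi*k/N) W^(-Mk) )

assert np.allclose(h.imag, 0)              # real, because Ak has the right symmetry
h = h.real
assert np.allclose(h, h[::-1])             # symmetric: a Type I filter
assert abs(h[M] - 5 / 11) < 1e-12          # center tap = (sum of the five 1's) / N

A = np.fft.fft(h) * W ** (M * k)           # re-evaluate A at the interpolation points
assert np.allclose(A.real, Ak, atol=1e-12)
print(np.round(h, 4))
```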

Figure 5.5

An exercise for the student: develop this DFT-based interpolation approach for Type II, III, and IV FIR filters. Modify the Matlab code above for each case.

5.3.3 SUMMARY: IMPULSE AND AMP RESPONSE

For an N-point linear-phase FIR filter h(n), we summarize:

1. The formulas for evaluating the amplitude response A(ω) at L equally spaced points from 0 to 2π (L ≥ N):

TYPE I and II:
A(2πk/L) = DFT_L([h(n), 0_(L-N)]) W_L^(Mk)    (5.40)

TYPE III and IV:
A(2πk/L) = (-j) DFT_L([h(n), 0_(L-N)]) W_L^(Mk)    (5.41)

2. The formulas for the DFT-based interpolation design of h(n):

TYPE I and II:
h(n) = DFT_N^(-1)(A(2πk/N) W_N^(-Mk))    (5.42)

TYPE III and IV:
h(n) = DFT_N^(-1)(j A(2πk/N) W_N^(-Mk))    (5.43)
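Formula (5.40), zero-pad then DFT then multiply by the exponential, is straightforward to check in Python for a small symmetric (Type I) filter; the coefficients below are illustrative:

```python
import numpy as np

h = np.array([3, 4, 5, 6, 5, 4, 3]) / 30.0   # a length-7 Type I (symmetric) filter
N, L = len(h), 512
M = (N - 1) // 2

k = np.arange(L)
W = np.exp(2j * np.pi / L)
A = np.fft.fft(h, L) * W ** (M * k)          # zero-pad, DFT, multiply: eq. (5.40)

assert np.max(np.abs(A.imag)) < 1e-10        # amplitude is real-valued for Type I
A = A.real
assert abs(A[0] - h.sum()) < 1e-12           # A(0) is the coefficient sum
assert np.allclose(np.abs(A), np.abs(np.fft.fft(h, L)))   # |A| = |H| everywhere
print("A(w) evaluated at", L, "frequencies")
```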

5.4 Design of Linear-Phase FIR Filters by General Interpolation

5.4.1 DESIGN OF FIR FILTERS BY GENERAL INTERPOLATION

If the desired interpolation points are not uniformly spaced between 0 and π, then we can not use the DFT. We must take a different approach. Recall that for a Type I FIR filter,

A(ω) = h(M) + 2 Σ_(n=0)^(M-1) h(n) cos((M - n)ω)

For convenience, it is common to write this as

A(ω) = Σ_(n=0)^(M) a(n) cos(nω)

where h(M) = a(0) and h(n) = a(M - n)/2 for 0 ≤ n ≤ M - 1. Note that there are M + 1 parameters. Suppose it is desired that A(ω) interpolate a set of specified values:

A(ωk) = Ak,  0 ≤ k ≤ M

To obtain a Type I FIR filter satisfying these interpolation equations, one can set up a linear system of equations:

Σ_(n=0)^(M) a(n) cos(n ωk) = Ak,  0 ≤ k ≤ M

In matrix form, we have

[ 1  cos(ω0)  cos(2ω0)  ...  cos(Mω0) ] [ a(0) ]   [ A(ω0) ]
[ 1  cos(ω1)  cos(2ω1)  ...  cos(Mω1) ] [ a(1) ] = [ A(ω1) ]
[ ...                                 ] [ ...  ]   [ ...   ]
[ 1  cos(ωM)  cos(2ωM)  ...  cos(MωM) ] [ a(M) ]   [ A(ωM) ]

Once a(n) is found, the filter h(n) is formed as

{h(n)} = (1/2) {a(M), a(M-1), ..., a(1), 2a(0), a(1), ..., a(M-1), a(M)}

4 This content is available online at <http://cnx.org/content/m10704/2.2/>.

5.4.2 EXAMPLE

In the following example, we design a length 19 Type I FIR filter. Then M = 9 and we have 10 parameters; we can therefore have 10 interpolation equations. We choose:

A(ωk) = 1,  ωk = {0, 0.1π, 0.2π, 0.3π},  0 ≤ k ≤ 3
A(ωk) = 0,  ωk = {0.5π, 0.6π, 0.7π, 0.8π, 0.9π, 1.0π},  4 ≤ k ≤ 9

To solve this interpolation problem in Matlab, note that the matrix can be generated by a single multiplication of a column vector and a row vector. This is done with the command C = cos(wk*[0:M]), where wk is a column vector containing the frequency points. To solve the linear system of equations, we can use the Matlab backslash command.

N = 19;
M = (N-1)/2;
wk = [0 .1 .2 .3 .5 .6 .7 .8 .9 1]'*pi;
Ak = [1 1 1 1 0 0 0 0 0 0]';
C = cos(wk*[0:M]);
a = C\Ak;                        % solve C*a = Ak
h = (1/2)*[a([M:-1:1]+1); 2*a([0]+1); a([1:M]+1)];
[A,w] = firamp(h,1);
plot(w/pi, A, wk/pi, Ak, 'o')
title('A(\omega)')
xlabel('\omega/\pi')

Figure 5.6

The general interpolation problem is much more flexible than the uniform interpolation problem that the DFT solves. For example, by leaving a gap between the pass-band and stop-band as in this example, the ripple near the band edge is reduced (but the transition between the pass- and stop-bands is not as sharp). The general interpolation problem also arises as a subproblem in the design of optimal minimax (or Chebyshev) FIR filters.

5.4.3 LINEAR-PHASE FIR FILTERS: PROS AND CONS

FIR digital filters have several desirable properties:
• They can have exactly linear phase.
• They can not be unstable.
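The same non-uniform design can be carried out in Python: build the cosine matrix, solve the 10-by-10 system, form the symmetric impulse response, and verify the interpolation conditions. This is an illustrative translation; the amplitude check below stands in for the notes' firamp helper:

```python
import numpy as np

N = 19
M = (N - 1) // 2
wk = np.array([0, .1, .2, .3, .5, .6, .7, .8, .9, 1]) * np.pi   # interpolation points
Ak = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=float)      # desired A(wk)

C = np.cos(np.outer(wk, np.arange(M + 1)))    # C[k, n] = cos(n * wk[k])
a = np.linalg.solve(C, Ak)                    # sum_n a(n) cos(n w_k) = A_k

# h = 1/2 * {a(M), ..., a(1), 2 a(0), a(1), ..., a(M)}
h = 0.5 * np.concatenate((a[M:0:-1], [2 * a[0]], a[1:M + 1]))
assert len(h) == N and np.allclose(h, h[::-1])

def amp(h, w):   # Type I amplitude: A(w) = H^f(w) e^{jMw}
    n = np.arange(len(h))
    return (np.sum(h * np.exp(-1j * w * n)) * np.exp(1j * M * w)).real

for w, target in zip(wk, Ak):
    assert abs(amp(h, w) - target) < 1e-8     # all 10 conditions satisfied
print("length-19 Type I design meets all interpolation conditions")
```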

• There are several very effective methods for designing linear-phase FIR digital filters.

On the other hand:
• Linear-phase filters can have long delay between input and output.
• If the phase need not be linear, then IIR filters can be more efficient.

5.5 Linear-Phase FIR Filters: Amplitude Formulas

5.5.1 SUMMARY: AMPLITUDE FORMULAS

Type | θ(ω)          | A(ω)
 I   | -Mω           | h(M) + 2 Σ_(n=0)^(M-1) h(n) cos((M-n)ω)
 II  | -Mω           | 2 Σ_(n=0)^(N/2-1) h(n) cos((M-n)ω)
 III | -Mω + π/2     | 2 Σ_(n=0)^(M-1) h(n) sin((M-n)ω)
 IV  | -Mω + π/2     | 2 Σ_(n=0)^(N/2-1) h(n) sin((M-n)ω)

where M = (N - 1)/2.

Table 5.2

5 This content is available online at <http://cnx.org/content/m10707/2.5/>.

5.5.2 AMPLITUDE RESPONSE CHARACTERISTICS

To analyze or design linear-phase FIR filters, we need to know the characteristics of the amplitude response A(ω).

Type | Properties
 I   | A(ω) is even about ω = 0:   A(ω) = A(-ω)
     | A(ω) is even about ω = π:   A(π + ω) = A(π - ω)
     | A(ω) is periodic with 2π:   A(ω + 2π) = A(ω)
 II  | A(ω) is even about ω = 0:   A(ω) = A(-ω)
     | A(ω) is odd about ω = π:    A(π + ω) = -A(π - ω)
     | A(ω) is periodic with 4π:   A(ω + 4π) = A(ω)
 III | A(ω) is odd about ω = 0:    A(ω) = -A(-ω)
     | A(ω) is odd about ω = π:    A(π + ω) = -A(π - ω)
     | A(ω) is periodic with 2π:   A(ω + 2π) = A(ω)

 IV  | A(ω) is odd about ω = 0:    A(ω) = -A(-ω)
     | A(ω) is even about ω = π:   A(π + ω) = A(π - ω)
     | A(ω) is periodic with 4π:   A(ω + 4π) = A(ω)

Table 5.3

Figure 5.7

5.5.3 EVALUATING THE AMPLITUDE RESPONSE

The frequency response H^f(ω) of an FIR filter can be evaluated at L equally spaced frequencies between 0 and π using the DFT. Consider a causal FIR filter with an impulse response h(n) of length N, with N ≤ L. Samples of the frequency response of the filter can be written as

H(2πk/L) = Σ_(n=0)^(N-1) h(n) e^(-j(2π/L)nk)

Define the L-point signal {g(n) | 0 ≤ n ≤ L - 1} as

g(n) = h(n) for 0 ≤ n ≤ N - 1,  g(n) = 0 for N ≤ n ≤ L - 1

Then

H(2πk/L) = G(k) = DFT_L(g(n))

where G(k) is the L-point DFT of g(n).

5.5.3.1 Types I and II

Suppose the FIR filter h(n) is either a Type I or a Type II FIR filter. Then we have from above

H^f(ω) = A(ω) e^(-jMω)

or

A(ω) = H^f(ω) e^(jMω)

Samples of the real-valued amplitude A(ω) can be obtained from samples of H^f(ω) as:

A(2πk/L) = H(2πk/L) e^(jM(2π/L)k) = G(k) W_L^(Mk)

Therefore, the samples of the real-valued amplitude function can be obtained by zero-padding h(n), taking the DFT, and multiplying by the complex exponential. This can be written as:

A(2πk/L) = DFT_L([h(n), 0_(L-N)]) W_L^(Mk)    (5.46)

5.5.3.2 Types III and IV

For Type III and Type IV FIR filters, we have

H^f(ω) = j e^(-jMω) A(ω)

or

A(ω) = (-j) H^f(ω) e^(jMω)

Therefore, samples of the real-valued amplitude A(ω) can be obtained by zero-padding h(n), taking the DFT, and multiplying by the complex exponential:

A(2πk/L) = (-j) DFT_L([h(n), 0_(L-N)]) W_L^(Mk)    (5.47)

Example 5.1: EVALUATING THE AMP RESP (TYPE I)

In this example, the filter is a Type I FIR filter of length 7. An accurate plot of A(ω) can be obtained with zero padding.

Figure 5.8

The following Matlab code fragment produces a plot of A(ω) for a Type I FIR filter.

h = [3 4 5 6 5 4 3]/30;
N = 7;
M = (N-1)/2;
L = 512;
H = fft([h zeros(1,L-N)]);
k = 0:L-1;
W = exp(j*2*pi/L);
A = H .* W.^(M*k);
A = real(A);

figure(1)
w = [0:L-1]*2*pi/(L-1);
subplot(2,1,1)
plot(w/pi, abs(H))
ylabel('|H(\omega)| = |A(\omega)|')
xlabel('\omega/\pi')
subplot(2,1,2)
plot(w/pi, A)
ylabel('A(\omega)')
xlabel('\omega/\pi')

print -deps type1

The command A = real(A) removes the imaginary part, which is equal to zero to within computer precision. Without this command, Matlab takes A to be a complex vector and the following plot command will not be right. Observe the symmetry of A(ω) due to h(n) being real-valued. Because of this symmetry, A(ω) is usually plotted for 0 ≤ ω ≤ π only.

Example 5.2: EVALUATING THE AMP RESP (TYPE II)

Figure 5.9

The following Matlab code fragment produces a plot of A(ω) for a Type II FIR filter.

h = [3 5 6 7 7 6 5 3]/42;
N = 8;
M = (N-1)/2;
L = 512;
H = fft([h zeros(1,L-N)]);
k = 0:L-1;
W = exp(j*2*pi/L);
A = H .* W.^(M*k);
A = real(A);

figure(1)
w = [0:L-1]*2*pi/(L-1);
subplot(2,1,1)
plot(w/pi, abs(H))
ylabel('|H(\omega)| = |A(\omega)|')
xlabel('\omega/\pi')
subplot(2,1,2)
plot(w/pi, A)
ylabel('A(\omega)')
xlabel('\omega/\pi')
print -deps type2

The imaginary part of the amplitude is zero. Notice that A(π) = 0; in fact this will always be the case for a Type II FIR filter. An exercise for the student: Describe how to obtain samples of A(ω) for Type III and Type IV FIR filters. Modify the Matlab code above for these types. Do you notice that A(ω) = 0 always for special values of ω?

5.5.4 Modules for Further Study
1. Zero Locations of Linear-Phase Filters (Section 3.9)
2. Design of Linear-Phase FIR Filters by Interpolation (Section 5.3)
3. Linear-Phase FIR Filter Design by Least Squares

5.6 FIR Filter Design using MATLAB

5.6.1 FIR Filter Design Using MATLAB

5.6.1.1 Design by windowing

The MATLAB function fir1() designs conventional lowpass, highpass, bandpass, and bandstop linear-phase FIR filters based on the windowing method. The command

b = fir1(N,Wn)

returns in vector b the impulse response of a lowpass filter of order N. The cut-off frequency Wn must be between 0 and 1, with 1 corresponding to the half sampling rate. The command

b = fir1(N,Wn,'high')

6 "Linear-Phase Fir Filter Design By Least Squares" <http://cnx.org/content/m10577/latest/>
7 This content is available online at <http://cnx.org/content/m10917/2.2/>.

returns the impulse response of a highpass filter of order N with normalized cutoff frequency Wn. Similarly, b = fir1(N,Wn,'stop'), with Wn a two-element vector designating the stopband, designs a bandstop filter. Without explicit specification, the Hamming window is employed in the design. Other windowing functions can be used by specifying the windowing function as an extra argument of the function. For example, the Blackman window can be used instead by the command b = fir1(N, Wn, blackman(N)).

5.6.1.2 Parks-McClellan FIR filter design

The MATLAB command

b = remez(N,F,A)

returns the impulse response of the length N+1 linear-phase FIR filter of order N designed by the Parks-McClellan algorithm. F is a vector of frequency band edges in ascending order between 0 and 1, with 1 corresponding to the half sampling rate. A is a real vector of the same size as F which specifies the desired amplitude of the frequency response at the points (F(k),A(k)) and (F(k+1),A(k+1)) for odd k. For odd k, the bands between F(k+1) and F(k+2) are considered transition bands.

5.7 MATLAB FIR Filter Design

5.7.1 FIR Filter Design MATLAB Exercise

5.7.1.1 Design by windowing

Exercise 5.2 (Solution on p. 152.)
Assuming a sampling rate of 48kHz, design an order-40 low-pass filter having cut-off frequency 10kHz by the windowing method. In your design, use the Hamming window as the windowing function.

5.7.1.2 Parks-McClellan Optimal Design

Exercise 5.3 (Solution on p. 152.)
Assuming a sampling rate of 48kHz, design an order-40 lowpass filter having transition band 10kHz-11kHz using the Parks-McClellan optimal FIR filter design algorithm.

5.8 Parks-McClellan Optimal FIR Filter Design

8 This content is available online at <http://cnx.org/content/m10914/2.2/>.
9 This content is available online at <http://cnx.org/content/m10918/2.2/>.
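The window design asked for in the exercise can also be built from first principles, since fir1's window method is just "truncate the ideal lowpass impulse response and multiply by the window". A Python/NumPy sketch (illustrative; note the cutoff here is converted to radians per sample directly from the 48 kHz rate, independent of MATLAB's 0-to-1 normalization):

```python
import numpy as np

fs = 48000.0
fc = 10000.0                          # desired cutoff (Hz)
order = 40                            # filter order -> 41 taps
wc = 2 * np.pi * fc / fs              # cutoff in radians/sample

n = np.arange(order + 1) - order / 2            # centered time index
ideal = (wc / np.pi) * np.sinc(wc / np.pi * n)  # ideal lowpass impulse response
b = ideal * np.hamming(order + 1)               # apply the Hamming window

assert np.allclose(b, b[::-1])                  # symmetric: linear phase (Type I)

Hmag = lambda w: abs(np.sum(b * np.exp(-1j * w * np.arange(len(b)))))
assert abs(Hmag(0.0) - 1.0) < 0.01              # near-unity gain in the passband
assert Hmag(0.9 * np.pi) < 0.01                 # strong attenuation in the stopband
```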

Solutions to Exercises in Chapter 5

Solution to Exercise 5.1 (p. 133)
θ(ω) = Kω, i.e. the phase is a constant times ω: the phase is linear.

Solution to Exercise 5.2 (p. 151)
b = fir1(40, 10.0/48.0)

Solution to Exercise 5.3 (p. 151)
b = remez(40, [0 10/48 11/48 1], [1 1 0 0])
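For readers working outside MATLAB, both solutions have close SciPy counterparts. The sketch below uses scipy.signal.firwin and scipy.signal.remez (these functions and the fs keyword are SciPy conventions, not part of the original text); band edges are given in Hz here, whereas MATLAB's fir1/remez normalize frequencies to the Nyquist rate, so translating between the two conventions needs care.

```python
import numpy as np
from scipy.signal import firwin, remez, freqz

# Python/SciPy sketch of the two chapter designs at fs = 48 kHz.
# Order 40 -> 41 taps. firwin uses a Hamming window by default,
# matching fir1; remez is the Parks-McClellan design.
b_win = firwin(41, 10000, fs=48000)                           # windowed lowpass
b_pm = remez(41, [0, 10000, 11000, 24000], [1, 0], fs=48000)  # equiripple lowpass

w, H = freqz(b_pm, worN=2048, fs=48000)
print(len(b_win), len(b_pm))  # 41 41
```

Both impulse responses are symmetric, so both designs are linear phase, as the chapter requires.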

Chapter 6 Wiener Filter Design


Chapter 7 Adaptive Filtering

7.1 Adaptive Filtering: LMS Algorithm

7.1.1 Introduction

Figure 7.1 is a block diagram of system identification using adaptive filtering. The objective is to change (adapt) the coefficients of an FIR filter, W, to match as closely as possible the response of an unknown system, H. The unknown system and the adapting filter process the same input signal x[n] and have outputs d[n] (also referred to as the desired signal) and y[n].

Figure 7.1: System identification block diagram. [x[n] feeds both H, producing d[n], and W, producing y[n]; the error is e[n] = d[n] − y[n].]

7.1.1.1 Gradient-descent adaptation

The adaptive filter, W, is adapted using the least mean-square (LMS) algorithm, which is the most widely used adaptive filtering algorithm. First the error signal, e[n], is computed as e[n] = d[n] − y[n], which measures the difference between the output of the adaptive filter and the output of the unknown system. On the basis of this measure, the adaptive filter will change its coefficients in an attempt to reduce the error.

The coefficient update relation is a function of the error signal squared and is given by

h_{n+1}[i] = h_n[i] + (µ/2) ( −∂(|e|²)/∂h_n[i] )    (7.1)

The term inside the parentheses represents the gradient of the squared error with respect to the i-th coefficient. The gradient is a vector pointing in the direction of the change in filter coefficients that will cause the greatest increase in the error signal. Because the goal is to minimize the error, however, the coefficients must move opposite the gradient; that is why the gradient term is negated. The constant µ is a step-size, which controls the amount of gradient information used to update each coefficient. After repeatedly adjusting each coefficient in the direction opposite to the gradient of the error, the adaptive filter should converge; that is, the difference between the unknown and adaptive systems should get smaller and smaller.

To express the gradient-descent coefficient update equation in a more usable manner, we can rewrite the derivative of the squared-error term as

∂(|e|²)/∂h[i] = 2 (∂e/∂h[i]) e
             = 2 (∂(d − y)/∂h[i]) e
             = 2 ( ∂/∂h[i] [ d − Σ_{i=0}^{N−1} h[i] x[n−i] ] ) e    (7.2)

∂(|e|²)/∂h[i] = 2 ( −x[n−i] ) e    (7.3)

which in turn gives us the final LMS coefficient update,

h_{n+1}[i] = h_n[i] + µ e x[n−i]    (7.4)

The step-size µ directly affects how quickly the adaptive filter will converge toward the unknown system. If µ is very small, then the coefficients change only a small amount at each update, and the filter converges slowly. With a larger step-size, more gradient information is included in each update, and the filter converges more quickly; however, when the step-size is too large, the coefficients may change too quickly and the filter will diverge. (It is possible in some cases to determine analytically the largest value of µ ensuring convergence.)

7.1.2 MATLAB Simulation

Simulate the system identification block diagram shown in Figure 7.1. Previously in MATLAB, you used the filter command or the conv command to implement shift-invariant filters. Those commands will not work here because adaptive filters are shift-varying, since the coefficient update equation changes the filter's impulse response at every sample time. Therefore, implement the system identification block on a sample-by-sample basis with a do loop, similar to the way you might implement a time-domain FIR filter on a DSP. For the "unknown" system, use the fourth-order, low-pass, elliptical IIR filter designed for the IIR Filtering: Filter-Design Exercise in MATLAB.

Use Gaussian random noise as your input, which can be generated in MATLAB using the command randn. Random white noise provides signal at all digital frequencies to train the adaptive filter. Simulate the system with an adaptive filter of length 32 and a step-size of 0.02. From your simulation, plot the error (or squared error) as it evolves over time, and plot the frequency response of the adaptive filter coefficients at the end of the simulation. How well does your adaptive filter match the "unknown" filter? How long does it take to converge? Once your simulation is working, experiment with different step-sizes and adaptive filter lengths.
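The sample-by-sample loop can also be sketched in Python with NumPy. This is only an illustration: the "unknown" system below is an arbitrary short FIR filter chosen so the example is self-contained, not the elliptic IIR filter the exercise specifies, and all sizes are the chapter's suggested values.

```python
import numpy as np

# LMS system identification, a sketch of the simulation described
# above. Update rule: h[i] <- h[i] + mu * e * x[n-i], eq. (7.4).
# The "unknown" system here is an illustrative FIR filter.
rng = np.random.default_rng(0)
h_unknown = np.array([0.5, -0.3, 0.2, 0.1, -0.05])  # hypothetical system
L, mu, N = 32, 0.02, 5000
w = np.zeros(L)              # adaptive coefficients, start at zero
xbuf = np.zeros(L)           # delay line of recent input samples
err = np.zeros(N)
x = rng.standard_normal(N)   # Gaussian white noise input (like randn)
for n in range(N):
    xbuf = np.roll(xbuf, 1)              # shift the delay line
    xbuf[0] = x[n]
    d = h_unknown @ xbuf[:h_unknown.size]   # unknown system output d[n]
    y = w @ xbuf                            # adaptive filter output y[n]
    err[n] = d - y                          # error e[n]
    w += mu * err[n] * xbuf                 # LMS update, eq. (7.4)
print(float(np.mean(err[-500:] ** 2)))   # near zero after convergence
```

With no measurement noise and an adaptive filter long enough to contain the unknown response, the error decays toward zero and the leading coefficients of w approach the unknown filter.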
Initialize all of the adaptive filter coefficients to zero.

7.1.3 Processor Implementation

Use the same "unknown" filter as you used in the MATLAB simulation. Although the coefficient update equation is relatively straightforward, consider using the lms instruction available on the TI processor, which is designed for this application and yields a very efficient implementation of the coefficient update equation.

To generate noise on the DSP, you can use the PN generator from the Digital Transmitter: Introduction to Quadrature Phase-Shift Keying, but shift the PN register contents up to make the sign bit random. (If the sign bit is always zero, then the noise will not be zero-mean and this will affect convergence.) Send the desired signal, d[n], the output of the adaptive filter, y[n], and the error to the D/A for display on the oscilloscope.

When using the step-size suggested in the MATLAB simulation section, you should notice that the error converges very quickly. Try an extremely small µ so that you can actually watch the amplitude of the error signal decrease towards zero.

7.1.4 Extensions

If your project requires some modifications to the implementation here, refer to Haykin [1] and consider some of the following questions regarding such modifications:

• How would the system in Figure 7.1 change for different applications? (noise cancellation, equalization, etc.)
• What happens to the error when the step-size is too large or too small?
• How does the length of an adaptive FIR filter affect convergence?
• What types of coefficient update relations are possible besides the described LMS algorithm?
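On the last question, one standard alternative covered in Haykin [1] is the normalized LMS (NLMS) update, which divides the step by the instantaneous input power so that convergence speed is insensitive to the input signal level. A sketch, with a hypothetical unknown system and deliberately scaled-up input:

```python
import numpy as np

# Normalized LMS: w <- w + (mu / (eps + ||x||^2)) * e * x.
# Unlike plain LMS, the effective step adapts to the input power,
# so the large input scale below does not cause divergence.
rng = np.random.default_rng(1)
h_unknown = np.array([0.4, -0.2, 0.1])   # hypothetical system
L, mu, eps, N = 8, 0.5, 1e-8, 3000
w, xbuf = np.zeros(L), np.zeros(L)
for n in range(N):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = 5.0 * rng.standard_normal()    # deliberately large input
    e = h_unknown @ xbuf[:3] - w @ xbuf      # error signal
    w += (mu / (eps + xbuf @ xbuf)) * e * xbuf   # normalized update
print(np.round(w[:3], 3))   # close to the unknown coefficients
```

With plain LMS, the same µ would have to be rescaled by hand whenever the input level changes; NLMS does this automatically.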


Chapter 8 Wavelets and Filterbanks

8.1 Haar Wavelet Basis

8.1.1 Introduction

Fourier series is a useful orthonormal representation on L²([0,T]), especially for inputs into LTI systems. However, it is ill suited for some applications, e.g. image processing (recall Gibbs's phenomenon). Wavelets, discovered in the last 15 years, are another kind of basis for L²([0,T]) and have many nice properties.

8.1.2 Basis Comparisons

Fourier series: the coefficients c_n give frequency information, but the basis functions last the entire interval.

Figure 8.1: Fourier basis functions

Wavelets: the basis functions give frequency information but are local in time.

Figure 8.2: Wavelet basis functions

In the Fourier basis, the basis functions are harmonic multiples of e^(jω₀t):

Figure 8.3: basis = { (1/√T) e^(jω₀nt) }

In the Haar wavelet basis (Section 8.5), the basis functions are scaled and translated versions of a "mother wavelet" ψ(t).

Figure 8.4

Basis functions {ψ_{j,k}(t)} are indexed by a scale j and a shift k.

Let φ(t) = 1 for 0 ≤ t < T, and consider the set of functions

{ φ(t), 2^(j/2) ψ(2^j t − k) | j ∈ Z, k = 0, 1, 2, ..., 2^j − 1 }

Figure 8.5

ψ(t) = 1 for 0 ≤ t < T/2,  ψ(t) = −1 for T/2 ≤ t < T    (8.1)

Figure 8.6

Let ψ_{j,k}(t) = 2^(j/2) ψ(2^j t − k).

Figure 8.7

Larger j gives a "skinnier" basis function, with j = {0, 1, 2, ...} and 2^j shifts at each scale: k = 0, 1, ..., 2^j − 1.

Check: each ψ_{j,k}(t) has unit energy.

Figure 8.8

∫ ψ_{j,k}²(t) dt = 1  ⇒  ||ψ_{j,k}(t)||² = 1    (8.2)

Any two basis functions are orthogonal.

Figure 8.9: Integral of product = 0. (a) Same scale. (b) Different scale.
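The unit-energy and orthogonality claims are easy to check numerically. The sketch below discretizes ψ_{j,k} on [0, 1), taking T = 1; the grid size and the particular (j, k) indices are arbitrary choices for the illustration.

```python
import numpy as np

# Numerical check of (8.2) and orthogonality for the Haar basis on
# [0, T) with T = 1 (assumed): each psi_{j,k} has unit energy, and
# distinct basis functions have zero inner product.
M = 2**12                        # samples on [0, 1)
t = (np.arange(M) + 0.5) / M     # midpoint grid avoids breakpoints
dt = 1.0 / M

def haar(u):                     # mother wavelet, eq. (8.1) with T = 1
    return np.where(u < 0.5, 1.0, -1.0) * ((u >= 0) & (u < 1))

def psi(j, k, u):                # psi_{j,k}(t) = 2^{j/2} psi(2^j t - k)
    return 2.0**(j / 2) * haar(2.0**j * u - k)

e_energy = np.sum(psi(3, 2, t)**2) * dt              # unit energy
g_same = np.sum(psi(3, 2, t) * psi(3, 5, t)) * dt    # same scale
g_diff = np.sum(psi(3, 2, t) * psi(5, 9, t)) * dt    # different scale
print(round(e_energy, 6), round(g_same, 6), round(g_diff, 6))  # 1.0 0.0 0.0
```

The same-scale product vanishes because the supports are disjoint; the different-scale product vanishes because the narrower wavelet sits inside a constant piece of the wider one and integrates to zero.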

Also, the set { ψ_{j,k}(t), φ(t) } spans L²([0,T]).

8.1.3 Haar Wavelet Transform

Using what we know about Hilbert spaces: for any f(t) ∈ L²([0,T]), we can write

Synthesis:
f(t) = Σ_j Σ_k w_{j,k} ψ_{j,k}(t) + c₀ φ(t)    (8.3)

Analysis:
w_{j,k} = ∫₀^T f(t) ψ_{j,k}(t) dt    (8.4)
c₀ = ∫₀^T f(t) φ(t) dt    (8.5)

Note: the w_{j,k} are real. The Haar transform is super useful, especially in image compression.

Example 8.1: This demonstration lets you create a signal by combining Haar basis functions, illustrating the synthesis equation of the Haar Wavelet Transform. (A LabVIEW demo; to view, see http://cnx.org/content/m10764/latest/HaarSyn.llb.)

8.2 Orthonormal Wavelet Basis

An orthonormal wavelet basis is an orthonormal basis of the form

B = { 2^(j/2) ψ(2^j t − k) | j ∈ Z and k ∈ Z }    (8.6)

The function ψ(t) is called the wavelet. The problem is how to find a function ψ(t) so that the set B is an orthonormal set.

Example 8.2: Haar Wavelet. The Haar basis (Section 8.1), described by Haar in 1910, is an orthonormal basis with wavelet

ψ(t) = 1 if 0 ≤ t ≤ 1/2,  −1 if 1/2 ≤ t ≤ 1,  0 otherwise    (8.7)

For the Haar wavelet, it is easy to verify that the set B is an orthonormal set (Figure 8.10).
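The synthesis/analysis pair above has a simple discrete analogue (a sketch, not the continuous-time transform itself): on sampled data, pairwise averages play the role of the scaling coefficient and pairwise differences the role of the wavelet coefficients, and orthonormality gives perfect reconstruction.

```python
import numpy as np

# Single-level discrete Haar analysis/synthesis on a sample vector.
# avg ~ scaling (approximation) coefficients, dif ~ wavelet (detail)
# coefficients; the 1/sqrt(2) factors make the transform orthonormal.
x = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0])
avg = (x[0::2] + x[1::2]) / np.sqrt(2)    # analysis: averages
dif = (x[0::2] - x[1::2]) / np.sqrt(2)    # analysis: differences
xr = np.empty_like(x)                     # synthesis: invert transform
xr[0::2] = (avg + dif) / np.sqrt(2)
xr[1::2] = (avg - dif) / np.sqrt(2)
print(np.allclose(xr, x))   # True: perfect reconstruction
```

Because the transform is orthonormal, it also preserves energy (a Parseval relation), which is what makes thresholding small detail coefficients a sensible compression strategy.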

Figure 8.10

8.2.1 Wavelet Series

If B is an orthonormal set then we have the wavelet series, where j is an index of scale and k is an index of location:

ψ_{j,k}(t) = 2^(j/2) ψ(2^j t − k)

x(t) = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} d(j,k) ψ_{j,k}(t)    (8.8)

d(j,k) = ∫_{−∞}^{∞} x(t) ψ_{j,k}(t) dt

8.3 Continuous Wavelet Transform

The STFT provided a means of (joint) time-frequency analysis with the property that spectral/temporal widths (or resolutions) were the same for all basis elements. Let's now take a closer look at the implications of uniform resolution.

Consider two signals composed of sinusoids with frequencies 1 Hz and 1.001 Hz, respectively, a 0.1% difference. It may be difficult to distinguish between these two signals in the presence of background noise unless many cycles are observed, implying the need for a many-second observation. Now consider two signals with pure frequencies of 1000 Hz and 1001 Hz, again a 0.1% difference. Here it should be possible to distinguish the two signals in an interval of much less than one second. In other words, good frequency resolution requires longer observation times as frequency decreases. Thus, it might be more convenient to construct a basis whose elements have larger temporal width at low frequencies.

The previous example motivates a multi-resolution time-frequency tiling of the form shown in Figure 8.11.

Figure 8.11 [multi-resolution time-frequency tiling; axes t and ω, with scales a_small and a_large]

The Continuous Wavelet Transform (CWT) accomplishes the above multi-resolution tiling by time-scaling and time-shifting a prototype function ψ(t), often called the mother wavelet. The a-scaled and τ-shifted basis elements are given by

ψ_{a,τ}(t) = (1/√|a|) ψ((t − τ)/a)

where a and τ ∈ R, and

∫_{−∞}^{∞} ψ(t) dt = 0

C_ψ = ∫_{−∞}^{∞} |Ψ(Ω)|² / |Ω| dΩ < ∞

The conditions above imply that ψ(t) is bandpass and sufficiently smooth. Assuming that ||ψ(t)|| = 1, the definition above ensures that ||ψ_{a,τ}(t)|| = 1 for all a and τ. The CWT is then defined by the transform pair

X_CWT(a, τ) = ∫_{−∞}^{∞} x(t) ψ*_{a,τ}(t) dt

x(t) = (1/C_ψ) ∫_{−∞}^{∞} ∫_{−∞}^{∞} X_CWT(a, τ) ψ_{a,τ}(t) (dτ da / a²)

In basis terms, the CWT says that a waveform can be decomposed into a collection of shifted and stretched versions of the mother wavelet ψ(t). As such, it is usually said that wavelets perform a "time-scale" analysis rather than a time-frequency analysis.

The Morlet wavelet is a classic example of the CWT. It employs a windowed complex exponential as the mother wavelet:

ψ(t) = (1/√(2π)) e^(−jΩ₀t) e^(−t²/2)

Ψ(Ω) = e^(−(Ω−Ω₀)²/2)

where it is typical to select Ω₀ = π √(2/log 2). (See the illustration in Figure 8.12.) While this wavelet does not exactly satisfy the conditions established earlier, since Ψ(0) ≈ 7 × 10⁻⁷ ≠ 0, it can be corrected, though in practice the correction is negligible and usually ignored.

Figure 8.12

While the CWT discussed above is an interesting theoretical and pedagogical tool, the discrete wavelet transform (DWT) is much more practical. Before shifting our focus to the DWT, we take a step back and review some of the basic concepts from the branch of mathematics known as Hilbert space theory (vector spaces, normed vector spaces, inner product spaces, Hilbert spaces, and the projection theorem). These concepts will be essential in our development of the DWT.
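The Ψ(0) figure quoted above can be reproduced directly from the formula for Ψ(Ω) evaluated at Ω = 0 with the stated choice of Ω₀ (a quick numerical check, not part of the original text):

```python
import numpy as np

# Morlet admissibility check: with Omega0 = pi * sqrt(2 / ln 2),
# Psi(0) = exp(-Omega0**2 / 2) is on the order of 7e-7, i.e. small
# but nonzero, so the admissibility condition only holds
# approximately, as the text notes.
omega0 = np.pi * np.sqrt(2.0 / np.log(2.0))
psi_at_0 = np.exp(-omega0**2 / 2.0)
print(f"{psi_at_0:.1e}")   # on the order of 7e-07
```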

8.4 Discrete Wavelet Transform: Main Concepts

8.4.1 Main Concepts

The discrete wavelet transform (DWT) is a representation of a signal x(t) ∈ L² using an orthonormal basis consisting of a countably-infinite set of wavelets. Denoting the wavelet basis as { ψ_{k,n}(t) | k ∈ Z and n ∈ Z }, the DWT transform pair is

x(t) = Σ_{k=−∞}^{∞} Σ_{n=−∞}^{∞} d_{k,n} ψ_{k,n}(t)    (8.9)

d_{k,n} = <ψ_{k,n}(t), x(t)> = ∫_{−∞}^{∞} ψ*_{k,n}(t) x(t) dt    (8.10)

where { d_{k,n} } are the wavelet coefficients. Note the relationship to Fourier series and to the sampling theorem: in both cases we can perfectly describe a continuous-time signal x(t) using a countably-infinite (i.e., discrete) set of coefficients. Specifically, Fourier series enabled us to describe periodic signals using Fourier coefficients { X[k] | k ∈ Z }, while the sampling theorem enabled us to describe bandlimited signals using signal samples { x[n] | n ∈ Z }. In both cases, signals within a limited class are represented using a coefficient set with a single countable index. The DWT can describe any signal in L² using a coefficient set parameterized by two countable indices: { d_{k,n} | k ∈ Z and n ∈ Z }.

Wavelets are orthonormal functions in L² obtained by shifting and stretching a mother wavelet ψ(t) ∈ L². For example,

ψ_{k,n}(t) = 2^(−k/2) ψ(2^(−k) t − n),  k and n ∈ Z    (8.11)

defines a family of wavelets { ψ_{k,n}(t) | k ∈ Z and n ∈ Z } related by power-of-two stretches. As k increases, the wavelet stretches by a factor of two; as n increases, the wavelet shifts right. When ||ψ(t)|| = 1, the normalization ensures that ||ψ_{k,n}(t)|| = 1 for all k ∈ Z, n ∈ Z. Power-of-two stretching is a convenient, though somewhat arbitrary, choice, and in our treatment of the discrete wavelet transform we will focus on it. Even with power-of-two stretches, there are various possibilities for ψ(t), each giving a different flavor of DWT.

Wavelets are constructed so that { ψ_{k,n}(t) | n ∈ Z } (i.e., the set of all shifted wavelets at fixed scale k) describes a particular level of 'detail' in the signal. As k becomes smaller (i.e., closer to −∞), the wavelets become more "fine grained" and the level of detail increases. In this way, the DWT can give a multi-resolution description of a signal, which is very useful in analyzing "real-world" signals. Essentially, the DWT gives us a discrete multi-resolution description of a continuous-time signal in L².
Normed Vector Space .org/content/m10435/latest/> 14 This content is available online at <http://cnx. while the sampling theorem enabled us to describe bandlimited signals using signal samples { x [n] |n ∈ Z}. each giving a dierent avor of DWT.n i. Inner Product Space . the set of all shifted wavelets at xed scale k).org/content/m10434/latest/> 13 "Projection Theorem" <http://cnx.org/content/m10430/latest/> 12 "Hilbert Spaces" <http://cnx. these DWT concepts will be developed "from scratch" using Hilbert space principles. signals within a limited class are represented using a coecient set with a single countable index. Power-of-two stretching is a convenient.n − k 2 −k 9 "Vector Space" <http://cnx. Projection Theorem ). In our treatment of the discrete wavelet transform. a family of scaling functions can be constructed via shifts and power-of-two stretches φ (t) = 2 ( ) φ 2 t − n . As k increases.n 14 The discrete wavelet transform (DWT) is a representation of a signal x (t) ∈ L using an orthonormal basis consisting of a countably-innite set of wavelets. the wavelet stretches by a factor of two. k and n ∈ Z (8. the DWT transform pair is x (t) = (d ψ (t)) (8. we will focus on this choice. which will be used to approximate the signal up to a particular level of detail.11) ψ (t) = 2 ( ) ψ 2 t − n . i. the DWT gives us a discrete multi-resolution description of a continuous-time signal in L . Wavelets are constructed so that { ψ (t) |n ∈ Z} ( .n 2 2 k.

The relationships between wavelets and scaling functions will be elaborated upon later via theory and example (Section 8. that the Haar basis is only an example. (8. For the Haar case.n The Haar basis is perhaps the simplest example of a DWT basis. we will assume real-valued signals and wavelets.14) 2k 15 "The Scaling Equation" <http://cnx.5).org/content/m10437/2. and we will frequently refer to it in our DWT development.13) and Figure 8. there are many other ways of constructing a DWT decomposition. Keep in mind.10/>.10) is written for the general complex-valued case.n (t)} (8.13 From the mother scaling function.org/content/m10476/latest/> 16 This content is available online at <http://cnx.13) 0 otherwise 8. we omit the complex conjugations in the remainder of our DWT discussions 15 note: k.171 given mother scaling function φ (t).14 φk. The inner-product expression for d . 1 if 0 ≤ t < 1 φ (t) = (8. the mother scaling function is dened by (8. however. however. we dene a family of shifted and stretched scaling functions {φ according to (8.13.5 The Haar System as an Example of DWT 16 Figure 8. For this reason. In our treatment of the discrete wavelet transform. . k ∈ Z n ∈ Z k 2−( 2 ) φ 1 t − n2k k k.14) and Figure 8.n (t) = = 2−( 2 ) φ 2−k t − n .

15 that { φ (t) |n ∈ Z} is orthonormal for each k ( .14 which are illustrated in Figure 8.e. WAVELETS AND FILTERBANKS Figure 8. Figure 8.14) makes clear the principle that incrementing by one shifts the pulse one place to the right.172 CHAPTER 8.n i.15 . (8. n k. along each row of gures).15 for various k and n. Observe from Figure 8.

Fourier Transform (DFT)" <http://cnx. A measure of how much one random variable depends upon the other. y [n] + 7y [n − 1] + 2y [n − 2] = x [n] − 4x [n − 1] (3.32).GLOSSARY 173 Glossary A Autocorrelation C Complex Plane Correlation Covariance the expected value of the product of a random variable or signal realization with a time-shifted version of itself A two-dimensional graph where the horizontal axis maps the real part and the vertical axis maps the imaginary part of any complex number or function. the domain can be dened as any real number x where x does not equal 1 or negative 3.1) Example: D dierence equation domain The group. (3. They are often rearranged as a recursive formula so that a systems output can be computed from the input signal and past outputs. we get the following: { x ∈ R |x = −3 and x = 1} (3. Crosscorrelation if two processes are wide sense stationary. the expected value of the product of a random variable from one random process with a time-shifted. random variable from a dierent random process An equation that shows the relationship between consecutive values of a sequence and the dierences among them.org/content/m10249/latest/> . t P poles 17 "Discrete 1. or set. Using the rational function above. A measure of how much the deviations of two or more variables or processes match.34) Example: F FFT First-order stationary process (Fast Fourier Transform) An ecient computational algorithm for computing the DFT . 17 Xt If F (b) is not a function of time then X is called a rst-order stationary process. of values that are dened by a given function. Written out mathematical. The value(s) for z where Q (z) = 0.

The complex frequencies that make the overall gain of the lter transfer function zero. A and B. X (ω) . 2. Note that the phase and amplitude of each sinusoid is based on a random number.32) S stationary process Stochastic Process a random process where all of its statistical properties do not vary with time Given a sample space. The value(s) for z where P (z) = 0. As an example of a random process. A family or ensemble of signals that correspond to every possible outcome of a certain signal measurement. t ∈ R (2. . Below is a simple example of a basic rational function. a stochastic process is an indexed collection of random variables dened for each ω ∈ Ω. The complex frequencies that make the overall gain of the lter transfer function innite. Each signal in this collection is referred to as a realization or sample function of the process. We use f [n] = Asin (ωn + φ) to represent the sinusoid with a given amplitude and phase. Note that the numerator and denominator can be polynomials of any order. their quotient is called a rational function. The complex frequencies that make the overall gain of the lter transfer function innite.174 poles GLOSSARY 2. let us look at the Random Sinusoidal Process below. The value(s) for z where the denominator of the transfer function equals zero 2. Example: Example: R random process rational function For any two polynomials. 1. f (x). but the rational function is undened when the denominator equals zero. The complex frequencies that make the overall gain of the lter transfer function zero.1) t Z zeros zeros 1. 1. f (x) = 2x2 x2 − 4 +x−3 (3. thus making this a random process. The value(s) for z where the numerator of the transfer function equals zero 2.

Prentice Hall. 3rd edition edition. 1996. Haykin.Bibliography [1] S. Adaptive Filter Theory . 175 .

8(23). 1. 24 discrete random process. 37. 1. 1.1(1). 1. 63 continuous time. 78 Crosscorrelation. Keywords do not necessarily appear in the text of the page. 1. 5. 1.8(113) complex plane.5(145) analog. 116 characteristic polynomial. 61 DFT.9(33) envelop delay.4(71).5(14). 3.4(11) 2D sampling. 76 Euler. 92 circular convolution.4(11) 2D-DFT. 1. 3.1(155) amplitude response.6(17). 1. 2.6(17) .4(98) control theory.4(71). 1.11(42) deblurring. 91 discrete fourier transform.11(42). 125. 78. 79.7(19).8(23). 2. 75 bandlimited.4(170).8(23). 93 bilinear transform. 3.8(23). 3. 82 average. 1.8(23). 89 digital.1(1). 2 A B C 2d convolution. 5.1(1) density function.2(3) 2d FT.9(33) analog-to-digital (A/D) conversion.4(71) deterministic.7(81). 1. 1. 7. 2. apples. 8.1(159) BER.2(3) 2D FTs. 1. 1. Ex.9(33).9(33). 2.3(7) A/D. 8.7(151) expansion. 8. 2. 1. 170.3(139) dierence equation.1(131). 1. 1. 1. 1.4(11) 2d fourier transforms. 3.176 Index of Keywords and Terms INDEX Keywords are listed by the section with that keyword (page numbers are in parentheses). 1.7(19). 63 discrete time. 100 convolution.13(53).3(7). 126 bit error rate. 1.5(74) average power.3(7).7(19). 2. 53 autocorrelation. 1.8(23).7(19) FFT. 4. 1. 1. 85 D E F CT. 1. 1.3(168) D/A. 5. 135 ergodic.8(23) Discrete Wavelet Transform.9(33) digital lters. 2. 1. 1 deconvolution. 102. 173 continuous random process.11(42) correlation. 85 crosscorrelation function. 7. 1. 1. 1. 1. 1.6(102). 1. 2.7(81). 1. 1.1(1). 1. 1. 131. 2.10(38) CTFT. 71 blur. 1. 19 lter.12(50).4(98) discrete time fourier transform.1(1) causal.7(81) correlation coecient. 3.10(121) ltering. 1. 79.5(74). 5.6(17). 8. 44 complex. 1.11(42) DFT-based interpolation.4(71) domain.1(61) deterministic signals. 101 dsp.6(17).1 (1) Terms are referenced by the page they appear on. 2.1(159) fast fourier transform. 71 bilateral z-transform. 1. 79 correlation functions. 3. 1.9(33) CWT. 80 covariance.3(7).5(14).2(64) distribution function. 89.1(125). 1.2(3). 8. 1.3(97). 1.3(97). 
102 computational algorithm. 82.1(155) DTFT. 1.11(42) adaptive ltering. 1. 1. 1. 2. 1.5(171) distribution.7(19). They are merely associated with that section.14(56) exercise. 3.11(42) Discrete Fourier Transform (DFT). 170 basis. 1 Ex. 1.8(23). 101. 1. apples.6(102).6(17) direct method.

133 plane.4(11) imaginary. 91 initial conditions. 3. 1.12(50) rectangular coordinate.1(1). 91 pdf. 1. 2. 74 LMS.9(33). 1. 174 reconstruction.4(71) Pearson's Correlation Coecient.1(1). 100 poles. 94.4(71). 1.6(102) independent.14(56).14(56) frequency domain. 1.4(71). 3. 8. 103 pole.1(155) group delay. 72 joint distribution function. 2. 3.10(38) frequency. 169 mother wavelet. 8.4(71) probability density function (pdf). 64 fourier. 1. 72 probability distribution function. 3. 5.1(61). 54 quantized. 1.5(74).5(145) linearly independent. 3.1(159) hilbert spaces. 2. 135 H haar.5(14) mean. 1.5(74) random signals. 1.8(113).14(56). 1. 170 multiresolution. 3.10(38) function.4(71). 170 multi-resolution. 93 fourier transforms. 106.3(68).2(92). 94. 56 Fourier series.1(61).6(17). 8. 2. 100. 5. 2. 2. 1. 53 random.10(121) pole-zero cancellation. 1. 2.6(150). 1. 75 Morlet wavelet. 107 ROC. 63 random signal. 2.2(92). 104 FIR. 63. 1. 7.5(100) realization. 56 particular solution. 5. 11 polar form. 5.7(151) FIR Filters.14(56). 1. 3. 62. 90 orthogonal wavelet basis. 2. 5.14(56) gradient descent.10(38). 1.4(170) nonstationary. 79 periodic. 3. 1.9(33) M MATLAB.7(81) random process. 104 probability.1(155) LPF. 72 L laplace transform. 1. 2.11(42). 72 normalized impedance.1(61).13(53). 113 polynomial. 72 probability function.5(74). 3. 91 I ideal low pass lter. 1.2(135). 2. 3.5(74).1(159) Fourier Transform. 2. 2.7(151) matrix. 3.7(19) FT.6(150).5(100) rational function.7(81) rational. 2. 62. 5. 5.13(53).12(50) LTI systems. 104 lters.5(100). 168. 76 indirect method.4(98) linear algebra. 74 N O P Q R . 8.5(100) G Gauss. 62.3(7). 2. 73 First-order stationary process.5(14) linear-phase. 1. 8. 90 interpolation. 8. 8. 8.5(74). 2.4(98). 3.12(50) image. 5. 2.1(159) Fourier coecients.7(81) random sequence.1(131).INDEX 177 mean-square value.1(159). 75 moment.3(139) linear-phase FIR lters.12(50) J joint density function. 170 phase delay. 7.1(159) homogeneous solution. 
125 nyquist.11(42) nite-duration sequence. 5. 1. 8.6(102) point spread function.4(71) rst-order stationary. 1. 100 rational functions. 2.1(1) right-sided sequence. 1.10(38) order. 1. 3.3(139) rst order stationary. 2.5(171) haar wavelet. 1.2(166) orthogonality.4(71) quantization interval. 1.5(100) power series.1(61).2(64). 98. 103 restoration.1(159) hilbert. 2. 1. 102. 2. 1.3(7) image processing. 1.

2.8(113) z-transform. 98. 8.5(74).5(171) wavelets.2(92). 1.10(121) zeros. 166. 2. 74 unilateral. 1.1(159). 2. 3.4(98) time. 2. 104 z-transforms. 101 W wavelet. 1.1(159). 3. 2.4(71) stable. 73 signal-to-noise.3(97) twiddle factors. 116 stationarity. 55 signals.178 S INDEX T sample function. 10 WSS.6(102). 8. 3. 92. 1.8(23).4(71) wide-sense stationary (WSS). 170. 1.4(98).4(71) X x-intercept.4(98) z-plane.5(14) vertical asymptotes.4(71). 3. 7. 90 transform pairs.7(81) stationary process.4(98) sinc. 35 transfer function.12(50) sinusoid.5(171).2(64). 8.3(168) time-varying behavior.1(155) systems. 1. 108 U uncertainty principle. 72 stochastic. 159. 3. 174 sampling. 72 stationary processes. 3.10(38) scaling function. 171 second order stationary. 2. 2. 1. 8. 8.14(56) square wave. 97 zero. 1. 1. 3. 75 vector.4(71) second-order stationary. 2. 3. 93 V variance. 73 wrap-around.5(100).3(7) two-sided sequence. 20 two-dimensional dft. 73 symmetries.3(97). 3.4(71) strict sense stationary (SSS).4(170). 2. 3.3(168) uncorrelated.3(97) unilateral z-transform. 2.1(61) stochastic process.2(3). 2. 3.4(71) stationary.2(64). 3.8(113). 113 . 8. 101 Y y-intercept. 8.13(53). 101 Z z transform. 102. 3. 170 wide sense stationary. 2.10(38) time-frequency analysis.14(56) SSS. 20 system identication. 62 strict sense stationary.3(97). 93. 1. 170. 3. 64 stochastic signals.

2/ Pages: 1-3 Copyright: Robert Nowak License: http://creativecommons.org/content/m10987/2.org/licenses/by/1.org/content/m10963/2.0 Module: "DFT as a Matrix Operation" By: Robert Nowak URL: http://cnx.7/ Pages: 11-14 Copyright: Robert Nowak License: http://creativecommons.org/content/m10961/2.6/ Pages: 17-19 Copyright: Robert Nowak License: http://creativecommons.0 Module: "Images: 2D signals" By: Robert Nowak URL: http://cnx.4/ Pages: 7-11 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.org/licenses/by/1.0 Module: "Fast Convolution Using the FFT" By: Robert Nowak URL: http://cnx.org/licenses/by/1.org/licenses/by/1.org/content/m10972/2.0 Module: "Digital Image Processing Basics" By: Robert Nowak URL: http://cnx.4/ License: http://creativecommons.org/content/m10962/2.ATTRIBUTIONS 179 Attributions Collection: Edited by: Robert Nowak URL: http://cnx.org/licenses/by/1.org/licenses/by/1.2/ Pages: 3-6 Copyright: Robert Nowak License: http://creativecommons.org/content/col10203/1.0 Module: "2D DFT" By: Robert Nowak URL: http://cnx.org/content/m10973/2.0 Module: "Image Restoration Basics" By: Robert Nowak URL: http://cnx.0 Intro to Digital Signal Processing .5/ Pages: 14-16 Copyright: Robert Nowak License: http://creativecommons.

0 Module: "Filtering with the DFT" By: Robert Nowak URL: http://cnx.org/licenses/by/1.org/licenses/by/1.0 Module: "Ideal Reconstruction of Sampled Signals" By: Robert Nowak URL: http://cnx.0 ATTRIBUTIONS .0 Module: "Sampling CT Signals: A Frequency Domain Perspective" By: Robert Nowak URL: http://cnx.org/content/m0039/2.org/licenses/by/1.2/ Pages: 33-38 Copyright: Robert Nowak License: http://creativecommons.0 Module: "Discrete-Time Processing of CT Signals" By: Robert Nowak URL: http://cnx.6/ Pages: 19-23 Copyright: Robert Nowak License: http://creativecommons.3/ Pages: 23-32 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0 Module: "Amplitude Quantization" By: Don Johnson URL: http://cnx.org/content/m0051/2.org/content/m11022/2.org/content/m11044/2.org/licenses/by/1.0 Module: "The DFT: Frequency Domain with a Computer Analysis" By: Robert Nowak URL: http://cnx.org/licenses/by/1.org/licenses/by/1.org/content/m10993/2.org/content/m10964/2.22/ Pages: 53-55 Copyright: Don Johnson License: http://creativecommons.3/ Pages: 42-49 Copyright: Robert Nowak License: http://creativecommons.2/ Pages: 38-41 Copyright: Robert Nowak License: http://creativecommons.0 Module: "Classic Fourier Series" By: Don Johnson URL: http://cnx.3/ Pages: 50-53 Copyright: Robert Nowak License: http://creativecommons.org/content/m10992/2.org/licenses/by/1.org/content/m10994/2.22/ Pages: 56-58 Copyright: Don Johnson License: http://creativecommons.180 Module: "The FFT Algorithm" By: Robert Nowak URL: http://cnx.

0 Module: "Correlation and Covariance of a Random Signal" By: Michael Haag URL: http://cnx.org/licenses/by/1.org/content/m10989/2.org/content/m10673/2.org/content/m10684/2.0 Module: "Stationary and Nonstationary Random Processes" By: Michael Haag URL: http://cnx.0 181 .15/ Pages: 64-67 Copyright: Behnaam Aazhang License: http://creativecommons.0 Module: "Random Signals" By: Nick Kingsbury URL: http://cnx.3/ Pages: 74-77 Copyright: Michael Haag License: http://creativecommons.3/ Pages: 78-81 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.org/licenses/by/1.org/licenses/by/1.org/content/m10649/2.org/content/m10656/2.2/ Pages: 84-86 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "Crosscorrelation of Random Processes" By: Michael Haag URL: http://cnx.2/ Pages: 71-73 Copyright: Michael Haag License: http://creativecommons.2/ Pages: 61-63 Copyright: Michael Haag License: http://creativecommons.ATTRIBUTIONS Module: "Introduction to Random Signals and Processes" By: Michael Haag URL: http://cnx.org/licenses/by/1.0 Module: "Introduction to Stochastic Processes" By: Behnaam Aazhang URL: http://cnx.org/content/m10235/2.org/content/m10676/2.org/licenses/by/1.5/ Pages: 68-71 Copyright: Nick Kingsbury License: http://creativecommons.org/licenses/by/1.0 Module: "Autocorrelation of Random Processes" By: Michael Haag URL: http://cnx.0 Module: "Random Processes: Mean and Variance" By: Michael Haag URL: http://cnx.4/ Pages: 81-84 Copyright: Michael Haag License: http://creativecommons.org/content/m10686/2.

org/licenses/by/1.org/content/m10622/2.org/content/m10593/2.0 Module: "The Z Transform: Denition" By: Benjamin Fite URL: http://cnx.org/licenses/by/1.org/licenses/by/1.org/licenses/by/1.5/ Pages: 89-92 Copyright: Michael Haag License: http://creativecommons. Richard Baraniuk License: http://creativecommons.182 Module: "Dierence Equation" By: Michael Haag URL: http://cnx.0 Module: "Rational Functions" By: Michael Haag URL: http://cnx.org/licenses/by/1.12/ Pages: 98-100 Copyright: Richard Baraniuk License: http://creativecommons.0 ATTRIBUTIONS .0 Module: "Poles and Zeros" By: Richard Baraniuk URL: http://cnx.0 Module: "The Complex Plane" By: Michael Haag URL: http://cnx.7/ Pages: 100-102 Copyright: Michael Haag License: http://creativecommons.5/ Pages: 104-113 Copyright: Benjamin Fite License: http://creativecommons.9/ Pages: 92-97 Copyright: Benjamin Fite License: http://creativecommons.2/ Pages: 102-103 Copyright: Michael Haag License: http://creativecommons.0 Module: "Table of Common z-Transforms" By: Melissa Selik.org/content/m10549/2.0 Module: "Region of Convergence for the Z-transform" By: Benjamin Fite URL: http://cnx.org/licenses/by/1.14/ Pages: 97-98 Copyright: Melissa Selik.org/content/m10112/2.org/content/m10119/2.org/content/m10596/2. Richard Baraniuk URL: http://cnx.org/content/m10595/2.org/licenses/by/1.

Module: "Understanding Pole/Zero Plots on the Z-Plane"
By: Michael Haag
URL: http://cnx.org/content/m10556/2.8/
Pages: 113-117
Copyright: Michael Haag
License: http://creativecommons.org/licenses/by/1.0

Module: "Linear-Phase FIR Filters"
By: Ivan Selesnick
URL: http://cnx.org/content/m10701/2.2/
Pages: 117-121
Copyright: Ivan Selesnick
License: http://creativecommons.org/licenses/by/1.0

Module: "Filter Design using the Pole/Zero Plot of a Z-Transform"
By: Michael Haag
URL: http://cnx.org/content/m10548/2.9/
Pages: 121-124
Copyright: Michael Haag
License: http://creativecommons.org/licenses/by/1.0

Module: "Bilinear Transform"
By: Bill Wilson
URL: http://cnx.org/content/m1057/2.13/
Pages: 125-129
Copyright: Bill Wilson
License: http://creativecommons.org/licenses/by/1.0

Module: "Four Types of Linear-Phase FIR Filters"
By: Ivan Selesnick
URL: http://cnx.org/content/m10706/2.3/
Pages: 131-135
Copyright: Ivan Selesnick
License: http://creativecommons.org/licenses/by/1.0

Module: "Zero Locations of Linear-Phase FIR Filters"
By: Ivan Selesnick
URL: http://cnx.org/content/m10700/2.2/
Pages: 135-139
Copyright: Ivan Selesnick
License: http://creativecommons.org/licenses/by/1.0

Module: "Design of Linear-Phase FIR Filters by DFT-Based Interpolation"
By: Ivan Selesnick
URL: http://cnx.org/content/m10705/2.2/
Pages: 139-142
Copyright: Ivan Selesnick
License: http://creativecommons.org/licenses/by/1.0

Module: "Design of Linear-Phase FIR Filters by General Interpolation"
By: Ivan Selesnick
URL: http://cnx.org/content/m10704/2.2/
Pages: 143-145
Copyright: Ivan Selesnick
License: http://creativecommons.org/licenses/by/1.0

Module: "Linear-Phase FIR Filters: Amplitude Formulas"
By: Ivan Selesnick
URL: http://cnx.org/content/m10707/2.5/
Pages: 145-150
Copyright: Ivan Selesnick
License: http://creativecommons.org/licenses/by/1.0

Module: "Parks-McClellan Optimal FIR Filter Design"
By: Hyeokho Choi
URL: http://cnx.org/content/m10914/2.2/
Pages: 150-151
Copyright: Hyeokho Choi
License: http://creativecommons.org/licenses/by/1.0

Module: "FIR Filter Design using MATLAB"
By: Hyeokho Choi
URL: http://cnx.org/content/m10917/2.2/
Page: 151
Copyright: Hyeokho Choi
License: http://creativecommons.org/licenses/by/1.0

Module: "MATLAB FIR Filter Design Exercise"
By: Hyeokho Choi
URL: http://cnx.org/content/m10918/2.2/
Page: 151
Copyright: Hyeokho Choi
License: http://creativecommons.org/licenses/by/1.0

Module: "Adaptive Filtering: LMS Algorithm"
By: Douglas L. Jones, Swaroop Appadwedula, Matthew Berry, Mark Haun, Jake Janovetz, Michael Kramer, Dima Moussa, Daniel Sachs, Brian Wade
URL: http://cnx.org/content/m10481/2.14/
Pages: 155-157
Copyright: Douglas L. Jones, Swaroop Appadwedula, Matthew Berry, Mark Haun, Jake Janovetz, Michael Kramer, Dima Moussa, Daniel Sachs, Brian Wade
License: http://creativecommons.org/licenses/by/1.0

Module: "Haar Wavelet Basis"
By: Roy Ha, Justin Romberg
URL: http://cnx.org/content/m10764/2.7/
Pages: 159-166
Copyright: Roy Ha, Justin Romberg
License: http://creativecommons.org/licenses/by/1.0

Module: "Orthonormal Wavelet Basis"
By: Ivan Selesnick
URL: http://cnx.org/content/m10957/2.3/
Pages: 166-167
Copyright: Ivan Selesnick
License: http://creativecommons.org/licenses/by/1.0

Module: "Continuous Wavelet Transform"
By: Phil Schniter
URL: http://cnx.org/content/m10436/2.14/
Pages: 168-170
Copyright: Phil Schniter
License: http://creativecommons.org/licenses/by/1.0

Module: "Discrete Wavelet Transform: Main Concepts"
By: Phil Schniter
URL: http://cnx.org/content/m10437/2.12/
Pages: 170-171
Copyright: Phil Schniter
License: http://creativecommons.org/licenses/by/1.0

Module: "The Haar System as an Example of DWT"
By: Phil Schniter
URL: http://cnx.org/content/m10418/2.10/
Pages: 171-172
Copyright: Phil Schniter
License: http://creativecommons.org/licenses/by/1.0

Intro to Digital Signal Processing

The course provides an introduction to the concepts of digital signal processing (DSP). Some of the main topics covered include DSP systems, image restoration, z-transform, FIR filters, adaptive filters, wavelets, and filterbanks.

About Connexions

Since 1999, Connexions has been pioneering a global system where anyone can create course materials and make them fully accessible and easily reusable free of charge. We are a Web-based authoring, teaching and learning environment open to anyone interested in education, including students, teachers, professors and lifelong learners. We connect ideas and facilitate educational communities.

Connexions's modular, interactive courses are in use worldwide by universities, community colleges, K-12 schools, distance learners, and lifelong learners. Connexions materials are in many languages, including English, Chinese, French, Italian, Japanese, Portuguese, Spanish, Thai, and Vietnamese. Connexions is part of an exciting new information distribution system that allows for Print on Demand Books. Connexions has partnered with innovative on-demand publisher QOOP to accelerate the delivery of printed course materials and textbooks into classrooms worldwide at lower prices than traditional academic publishers.
