
Fundamentals of Electrical Engineering I

By: Don Johnson


Online: <http://cnx.org/content/col10040/1.9>

Rice University, Houston, Texas

This selection and arrangement of content as a collection is copyrighted by Don Johnson. It is licensed under the Creative Commons Attribution 1.0 license. Collection structure revised: August 6, 2008. PDF generated: December 22, 2012. For copyright and attribution information for the modules contained in this collection, see p. 312.

Table of Contents

1 Introduction
   1.1 Themes  1
   1.2 Signals Represent Information  2
   1.3 Structure of Communication Systems  5
   1.4 The Fundamental Signal  7
   1.5 Introduction Problems  8
   Solutions  11

2 Signals and Systems
   2.1 Complex Numbers  13
   2.2 Elemental Signals  17
   2.3 Signal Decomposition  22
   2.4 Discrete-Time Signals  22
   2.5 Introduction to Systems  25
   2.6 Simple Systems  27
   2.7 Signals and Systems Problems  30
   Solutions  37

3 Analog Signal Processing
   3.1 Voltage, Current, and Generic Circuit Elements  39
   3.2 Ideal Circuit Elements  40
   3.3 Ideal and Real-World Circuit Elements  43
   3.4 Electric Circuits and Interconnection Laws  43
   3.5 Power Dissipation in Resistor Circuits  46
   3.6 Series and Parallel Circuits  48
   3.7 Equivalent Circuits: Resistors and Sources  53
   3.8 Circuits with Capacitors and Inductors  58
   3.9 The Impedance Concept  59
   3.10 Time and Frequency Domains  60
   3.11 Power in the Frequency Domain  63
   3.12 Equivalent Circuits: Impedances and Sources  64
   3.13 Transfer Functions  66
   3.14 Designing Transfer Functions  69
   3.15 Formal Circuit Methods: Node Method  71
   3.16 Power Conservation in Circuits  76
   3.17 Electronics  77
   3.18 Dependent Sources  77
   3.19 Operational Amplifiers  80
   3.20 The Diode  86
   3.21 Analog Signal Processing Problems  88
   Solutions  116

4 Frequency Domain
   4.1 Introduction to the Frequency Domain  119
   4.2 Complex Fourier Series  119
   4.3 Classic Fourier Series  124
   4.4 A Signal's Spectrum  126
   4.5 Fourier Series Approximation of Signals  128
   4.6 Encoding Information in the Frequency Domain  133
   4.7 Filtering Periodic Signals  135
   4.8 Derivation of the Fourier Transform  137
   4.9 Linear Time Invariant Systems  142
   4.10 Modeling the Speech Signal  145
   4.11 Frequency Domain Problems  152
   Solutions  167

5 Digital Signal Processing
   5.1 Introduction to Digital Signal Processing  169
   5.2 Introduction to Computer Organization  169
   5.3 The Sampling Theorem  173
   5.4 Amplitude Quantization  176
   5.5 Discrete-Time Signals and Systems  179
   5.6 Discrete-Time Fourier Transform (DTFT)  181
   5.7 Discrete Fourier Transforms (DFT)  186
   5.8 DFT: Computational Complexity  188
   5.9 Fast Fourier Transform (FFT)  189
   5.10 Spectrograms  191
   5.11 Discrete-Time Systems  195
   5.12 Discrete-Time Systems in the Time-Domain  196
   5.13 Discrete-Time Systems in the Frequency Domain  199
   5.14 Filtering in the Frequency Domain  200
   5.15 Efficiency of Frequency-Domain Filtering  204
   5.16 Discrete-Time Filtering of Analog Signals  207
   5.17 Digital Signal Processing Problems  208
   Solutions  221

6 Information Communication
   6.1 Information Communication  225
   6.2 Types of Communication Channels  226
   6.3 Wireline Channels  226
   6.4 Wireless Channels  231
   6.5 Line-of-Sight Transmission  232
   6.6 The Ionosphere and Communications  233
   6.7 Communication with Satellites  234
   6.8 Noise and Interference  234
   6.9 Channel Models  235
   6.10 Baseband Communication  236
   6.11 Modulated Communication  237
   6.12 Signal-to-Noise Ratio of an Amplitude-Modulated Signal  239
   6.13 Digital Communication  240
   6.14 Binary Phase Shift Keying  241
   6.15 Frequency Shift Keying  244
   6.16 Digital Communication Receivers  245
   6.17 Digital Communication in the Presence of Noise  247
   6.18 Digital Communication System Properties  249
   6.19 Digital Channels  250
   6.20 Entropy  251
   6.21 Source Coding Theorem  252
   6.22 Compression and the Huffman Code  254
   6.23 Subtleties of Coding  256
   6.24 Channel Coding  259
   6.25 Repetition Codes  259
   6.26 Block Channel Coding  261
   6.27 Error-Correcting Codes: Hamming Distance  262
   6.28 Error-Correcting Codes: Channel Decoding  264
   6.29 Error-Correcting Codes: Hamming Codes  266
   6.30 Noisy Channel Coding Theorem  268
   6.31 Capacity of a Channel  269
   6.32 Comparison of Analog and Digital Communication  270
   6.33 Communication Networks  271
   6.34 Message Routing  272
   6.35 Network architectures and interconnection  273
   6.36 Ethernet  274
   6.37 Communication Protocols  276
   6.38 Information Communication Problems  278
   Solutions  294

7 Appendix
   7.1 Decibels  299
   7.2 Permutations and Combinations  301
   7.3 Frequency Allocations  302
   Solutions  304

Index  305
Attributions  312

Available for free at Connexions <http://cnx.org/content/col10040/1.9>

Chapter 1

Introduction

1.1 Themes

From its beginnings in the late nineteenth century, electrical engineering has blossomed from focusing on electrical circuits for power, telegraphy and telephony to focusing on a much broader range of disciplines. However, the underlying themes are relevant today: power creation and transmission and information have been the underlying themes of electrical engineering for a century and a half. This course concentrates on the latter theme: the representation, manipulation, transmission, and reception of information by electrical means. This course describes what information is, how engineers quantify information, and how electrical signals represent information.

Information can take a variety of forms. When you speak to a friend, your thoughts are translated by your brain into motor commands that cause various vocal tract components (the jaw, the tongue, the lips) to move in a coordinated fashion. Information arises in your thoughts and is represented by language, which must have a well defined, broadly known structure so that someone else can understand what you say. Utterances convey information in sound pressure waves, which propagate to your friend's ear. There, sound energy is converted back to neural activity, and, if what you say makes sense, she understands what you say. Your words could have been recorded on a compact disc (CD), mailed to your friend and listened to by her on her stereo. Information can take the form of a text file you type into your word processor. You might send the file via e-mail to a friend, who reads it and understands it. From an information theoretic viewpoint, all of these scenarios are equivalent, although the forms of the information representation (sound waves, plastic, and computer files) are very different.

Engineers, who don't care about information content, categorize information into two different forms: analog and digital. Analog information is continuous valued; examples are audio and video. Digital information is discrete valued; examples are text (like what you are reading now) and DNA sequences.

The conversion of information-bearing signals from one energy form into another is known as energy conversion or transduction. All conversion systems are inefficient since some input energy is lost as heat, but this loss does not necessarily mean that the conveyed information is lost. Conceptually we could use any form of energy to represent information, but electric signals are uniquely well-suited for information representation, transmission (signals can be broadcast from antennas or sent through wires), and manipulation (circuits can be built to reduce noise and computers can be used to modify information). Thus, we will be concerned with how to

• represent all forms of information with electrical signals,
• encode information as voltages, currents, and electromagnetic waves,
• manipulate information-bearing electric signals with circuits and computers, and
• receive electric signals and convert the information expressed by electric signals back into a useful form.

Telegraphy represents the earliest electrical information system, and it dates from 1837. At that time, electrical science was largely empirical, and only those with experience and intuition could develop telegraph systems. Electrical science came of age when James Clerk Maxwell proclaimed in 1864 a set of equations that he claimed governed all electrical phenomena. These equations predicted that light was an electromagnetic wave, and that energy could propagate. Because of the complexity of Maxwell's presentation, the development of the telephone in 1876 was due largely to empirical work. Once Heinrich Hertz confirmed Maxwell's prediction of what we now call radio waves in about 1882, Maxwell's equations were simplified by Oliver Heaviside and others, and were widely read. This understanding of fundamentals led to a quick succession of inventions (the wireless telegraph in 1899, the vacuum tube in 1905, and radio broadcasting) that marked the true emergence of the communications age.

During the first part of the twentieth century, circuit theory and electromagnetic theory were all an electrical engineer needed to know to be qualified and produce first-rate designs. Consequently, circuit theory served as the foundation and the framework of all of electrical engineering education. At mid-century, three "inventions" changed the ground rules. These were the first public demonstration of the first electronic computer (1946), the invention of the transistor (1947), and the publication of A Mathematical Theory of Communication by Claude Shannon (1948). Although conceived separately, these creations gave birth to the information age, in which digital and analog communication systems interact and compete for design preferences. About twenty years later, the laser was invented, which opened even more design possibilities. Thus, the primary focus shifted from how to build communication systems (the circuit theory era) to what communications systems were intended to accomplish. Only once the intended system is specified can an implementation be designed. Today's electrical engineer must be mindful of the system's ultimate goal, and understand the tradeoffs between digital and analog alternatives, and between hardware and software configurations in designing information systems.

note: Thanks to the translation efforts of Rice University's Disability Support Services (http://www.dss.rice.edu/), this collection is now available in a Braille-printable version. Please click here to download a .zip file containing all the necessary .dxb and image files.

1.2 Signals Represent Information

Whether analog or digital, information is represented by the fundamental quantity in electrical engineering: the signal. Stated in mathematical terms, a signal is merely a function. Analog signals are continuous-valued; digital signals are discrete-valued. The independent variable of the signal could be time (speech, for example), space (images), or the integers (denoting the sequencing of letters and numbers in the football score).

1.2.1 Analog Signals

Analog signals are usually signals defined over continuous independent variable(s). Speech (Section 4.10) is produced by your vocal cords exciting acoustic resonances in your vocal tract. The result is pressure waves propagating in the air, and the speech signal thus corresponds to a function having independent variables of space and time and a value corresponding to air pressure: s(x, t) (here we use vector notation x to denote spatial coordinates). When you record someone talking, you are evaluating the speech signal at a particular spatial location, x0 say. An example of the resulting waveform s(x0, t) is shown in Figure 1.1 (Speech Example).
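The analog/digital distinction can be made concrete with a few lines of code. The sketch below (Python with NumPy; the 440 Hz tone and the sample message are invented for illustration, not taken from the text) models an analog signal as a function of a real-valued time variable and a digital signal as a finite-valued sequence indexed by integers:

```python
import numpy as np

# Analog signal model: a function of continuous time t (in seconds).
# Any real-valued t is a legal argument, and the value varies continuously.
def s(t):
    return np.sin(2 * np.pi * 440.0 * t)  # a 440 Hz tone, standing in for speech

# Digital signal: an integer-valued independent variable AND a finite set
# of values -- here, the seven-bit ASCII codes of a typed message.
message = "speech"
digital = [ord(c) for c in message]       # s[n] for n = 0, 1, 2, ...

print(s(0.001))    # the analog signal evaluated at one instant
print(digital)     # [115, 112, 101, 101, 99, 104]
```

The hourly temperature readings mentioned above sit in between: integer-indexed like `digital`, but continuous-valued like `s(t)`.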

[Figure 1.1 (Speech Example): A speech signal's amplitude relates to tiny air pressure variations. Shown is a recording of the vowel "e" (as in "speech"); the plot shows amplitude versus time.]

Photographs are static, and are continuous-valued signals defined over space. Black-and-white images have only one value at each point in space, which amounts to its optical reflection properties. In Figure 1.2 (Lena), an image is shown, demonstrating that it (and all other images as well) are functions of two independent spatial variables.

[Figure 1.2 (Lena): (a) On the left is the classic Lena image, which is used ubiquitously as a test image. It contains straight and curved lines, complicated texture, and a face. (b) On the right is a perspective display of the Lena image as a signal: a function of two spatial variables. The colors merely help show what signal values are about the same size.]

Color images have values that express how reflectivity depends on the optical spectrum; why is that? Painters long ago found that mixing together combinations of the so-called primary colors (red, yellow and blue) can produce very realistic color images. Thus, images today are usually thought of as having three values at every point in space, but a different set of colors is used: how much of red, green and blue is present. Mathematically, color pictures are multivalued (vector-valued) signals: s(x) = (r(x), g(x), b(x)).

Interesting cases abound where the analog signal depends not on a continuous variable, such as time, but on a discrete variable. For example, temperature readings taken every hour have continuous (analog) values, but the signal's independent variable is (essentially) the integers.

1.2.2 Digital Signals

The word "digital" means discrete-valued and implies the signal has an integer-valued independent variable. Digital information includes numbers and symbols (characters typed on the keyboard, for example). Computers rely on the digital representation of information to manipulate and transform information. Symbols do not have a numeric value, and each is represented by a unique number. The ASCII character code has the upper- and lowercase characters, the numbers, punctuation marks, and various other symbols represented by a seven-bit integer. For example, the ASCII code represents the letter a as the number 97 and the letter A as 65. Table 1.1 (ASCII Table) shows the international convention on associating characters with integers.
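The seven-bit ASCII convention is easy to verify programmatically. The short sketch below (Python; the particular characters chosen are arbitrary) prints each character's code in decimal, hexadecimal, and binary:

```python
# ASCII assigns each keyboard character a seven-bit integer.
for ch in "Aa0 ":
    code = ord(ch)
    print(f"{ch!r}: decimal {code:3d}  hex {code:02X}  bits {code:07b}")

# The examples quoted in the text: 'A' is 65 and 'a' is 97.
assert ord("A") == 65 and ord("a") == 97

# A seven-bit code can represent 2**7 = 128 distinct characters,
# answering the question posed in the table caption.
assert 2 ** 7 == 128
```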

ASCII Table

00 nul  08 bs   10 dle  18 can  20 sp  28 (  30 0  38 8  40 @  48 H  50 P  58 X  60 `  68 h  70 p  78 x
01 soh  09 ht   11 dc1  19 em   21 !   29 )  31 1  39 9  41 A  49 I  51 Q  59 Y  61 a  69 i  71 q  79 y
02 stx  0A nl   12 dc2  1A sub  22 "   2A *  32 2  3A :  42 B  4A J  52 R  5A Z  62 b  6A j  72 r  7A z
03 etx  0B vt   13 dc3  1B esc  23 #   2B +  33 3  3B ;  43 C  4B K  53 S  5B [  63 c  6B k  73 s  7B {
04 eot  0C np   14 dc4  1C fs   24 $   2C ,  34 4  3C <  44 D  4C L  54 T  5C \  64 d  6C l  74 t  7C |
05 enq  0D cr   15 nak  1D gs   25 %   2D -  35 5  3D =  45 E  4D M  55 U  5D ]  65 e  6D m  75 u  7D }
06 ack  0E so   16 syn  1E rs   26 &   2E .  36 6  3E >  46 F  4E N  56 V  5E ^  66 f  6E n  76 v  7E ~
07 bel  0F si   17 etb  1F us   27 '   2F /  37 7  3F ?  47 G  4F O  57 W  5F _  67 g  6F o  77 w  7F del

Table 1.1: The ASCII translation table shows how standard keyboard characters are represented by integers. In pairs of columns, this table displays first the so-called 7-bit code (how many characters in a seven-bit code?), then the character the number represents. The numeric codes are represented in hexadecimal (base-16) notation. Mnemonic characters correspond to control characters, some of which may be familiar (like cr for carriage return) and some not (bel means a "bell").

1.3 Structure of Communication Systems

[Figure 1.3 (Fundamental model of communication): Source → Transmitter → Channel → Receiver → Sink. The source produces the message s(t); the transmitter produces the modulated message x(t); the channel delivers the corrupted modulated message r(t); the receiver produces the demodulated message ŝ(t), which is passed to the sink.]

[Figure 1.4 (Definition of a system): A system operates on its input signal x(t) to produce an output y(t).]

The fundamental model of communications is portrayed in Figure 1.3 (Fundamental model of communication). In this fundamental model, each message-bearing signal, exemplified by s(t), is analog and is a function of time. A system operates on zero, one, or several signals to produce more signals or to simply absorb them (Figure 1.4 (Definition of a system)). In electrical engineering, we represent a system as a box, receiving input signals (usually coming from the left) and producing from them new output signals. This graphical representation is known as a block diagram. We denote input signals by lines having arrows pointing into the box, output signals by arrows pointing away. As typified by the communications model, how information flows, how it is corrupted and manipulated, and how it is ultimately received is summarized by interconnecting block diagrams: The outputs of one or more systems serve as the inputs to others.

In the communications model, the source produces a signal that will be absorbed by the sink. Examples of time-domain signals produced by a source are music, speech, and characters typed on a keyboard. Signals can also be functions of two variables (an image is a signal that depends on two spatial variables) or more (television pictures, i.e. video signals, are functions of two spatial variables and time). Thus, information sources produce signals. In physical systems, each signal corresponds to an electrical voltage or current. To be able to design systems, we must understand electrical science and technology. However, we first need to understand the big picture to appreciate the context in which the electrical engineer works.

In communication systems, messages (signals produced by sources) must be recast for transmission. The block diagram has the message signal s(t) passing through a block labeled transmitter that produces the signal x(t). In the case of a radio transmitter, it accepts an input audio signal and produces a signal that physically is an electromagnetic wave radiated by an antenna and propagating as Maxwell's equations predict. In the case of a computer network, typed characters are encapsulated in packets, attached with a destination address, and launched into the Internet. From the communication systems "big picture" perspective, the same block diagram applies although the systems can be very different. In any case, the transmitter should not operate in such a way that the message s(t) cannot be recovered from x(t). In the mathematical sense, the inverse system must exist, else the communication system cannot be considered reliable. (It is ridiculous to transmit a signal in such a way that no one can recover the original. However, clever systems exist that transmit signals so that only the "in crowd" can recover them. Such cryptographic systems underlie secret communications.)

Transmitted signals next pass through the next stage, the evil channel. Nothing good happens to a signal in a channel: It can become corrupted by noise, distorted, and attenuated among many possibilities. The channel cannot be escaped (the real world is cruel), and transmitter design and receiver design focus on how best to jointly fend off the channel's effects on signals. The channel is another system in our block diagram, and produces r(t), the signal received by the receiver. If the channel were benign (good luck finding such a channel in the real world), the receiver would serve as the inverse system to the transmitter, and yield the message with no distortion. However, because of the channel, the receiver must do its best to produce a received message ŝ(t) that resembles s(t) as much as possible. Shannon showed in his 1948 paper that reliable (for the moment, take this word to mean error-free) digital communication was possible over arbitrarily noisy channels. It is this result that modern communications systems exploit, and why many communications systems are going "digital." The module on Information Communication (Section 6.1) details how communication systems work.

Understanding signal generation and how systems work amounts to understanding signals, the nature of the information they represent, how information is transformed between analog and digital forms, and how information can be processed by systems operating on information-bearing signals. This understanding demands two different fields of knowledge. One is electrical science: How are signals represented and manipulated electrically? The second is signal science: What is the structure of signals, no matter what their source, what is their information content, and what capabilities does this structure force upon communication systems?

1.4 The Fundamental Signal

1.4.1 The Sinusoid

The most ubiquitous and important signal in electrical engineering is the sinusoid.

Sine Definition:
s(t) = A cos(2πft + φ) or A cos(ωt + φ)   (1.1)

A is known as the sinusoid's amplitude, and determines the sinusoid's size. The amplitude conveys the sinusoid's physical units (volts, lumens, etc). The frequency f has units of Hz (Hertz) or s^(-1), and determines how rapidly the sinusoid oscillates per unit time. The temporal variable t always has units of seconds, and thus the frequency determines how many oscillations/second the sinusoid has. AM radio stations have carrier frequencies of about 1 MHz (one megahertz or 10^6 Hz), while FM stations have carrier frequencies of about 100 MHz. Frequency can also be expressed by the symbol ω, which has units of radians/second; clearly, ω = 2πf. In communications, we most often express frequency in Hertz. Finally, φ is the phase, and determines the sine wave's behavior at the origin (t = 0). It has units of radians, but we can express it in degrees, realizing that in computations we must convert from degrees to radians. Note that if φ = −π/2, the sinusoid corresponds to a sine function, having a zero value at the origin.

A sin(2πft + φ) = A cos(2πft + φ − π/2)   (1.2)

Thus, the only difference between a sine and cosine signal is the phase; we term either a sinusoid.

We can also define a discrete-time variant of the sinusoid: A cos(2πfn + φ). Here, the independent variable is n and represents the integers. Frequency now has no dimensions, and takes on values between 0 and 1.

Exercise 1.4.1 (Solution on p. 11.)
Show that cos(2πfn) = cos(2π(f + 1)n), which means that a sinusoid having a frequency larger than one corresponds to a sinusoid having a frequency less than one.

note: Notice that we shall call either sinusoid an analog signal. Only when the discrete-time signal takes on a finite set of values can it be considered a digital signal.

Exercise 1.4.2 (Solution on p. 11.)
Can you think of a simple signal that has a finite number of values but is defined in continuous time? Such a signal is also an analog signal.
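Relation (1.2) between sine and cosine phase is easy to verify numerically. The sketch below (Python; the amplitude, frequency, and phase values are illustrative choices, not from the text) evaluates both sides on a grid of times:

```python
import math

def sinusoid(A, f, phi, t):
    """Evaluate s(t) = A*cos(2*pi*f*t + phi) at time t (seconds)."""
    return A * math.cos(2 * math.pi * f * t + phi)

# A sine is a cosine whose phase is retarded by pi/2:
# A*sin(2*pi*f*t + phi) = A*cos(2*pi*f*t + phi - pi/2)
A, f, phi = 2.0, 5.0, 0.3
for k in range(20):
    t = k / 100.0  # sample every 10 ms
    lhs = A * math.sin(2 * math.pi * f * t + phi)
    rhs = sinusoid(A, f, phi - math.pi / 2, t)
    assert abs(lhs - rhs) < 1e-12
```

Any choice of A, f, and φ works, since the identity holds pointwise in t.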

1.4.2 Communicating Information with Signals

The basic idea of communication engineering is to use a signal's parameters to represent either real numbers or other signals. The technical term is to modulate the carrier signal's parameters to transmit information from one place to another.

To explore the notion of modulation, we can send a real number (today's temperature, for example) by changing a sinusoid's amplitude accordingly. If we wanted to send the daily temperature, we would keep the frequency constant (so the receiver would know what to expect) and change the amplitude at midnight. We could relate temperature to amplitude by the formula A = A0(1 + kT), where A0 and k are constants that the transmitter and receiver must both know. This modulation scheme assumes we can estimate the sinusoid's amplitude and frequency; we shall learn that this is indeed possible. If we had two numbers we wanted to send at the same time, we could modulate the sinusoid's frequency as well as its amplitude.

Now suppose we have a sequence of parameters to send. We have exploited all of the sinusoid's two parameters. What we can do is modulate them for a limited time (say T seconds), and send two parameters every T. This simple notion corresponds to how a modem works. Here, typed characters are encoded into eight bits, and the individual bits are encoded into a sinusoid's amplitude and frequency. We'll learn how this is done in subsequent modules, and more importantly, we'll learn what the limits are on such digital communication schemes.

1.5 Introduction Problems

Problem 1.1: RMS Values
The rms (root-mean-square) value of a periodic signal is defined to be

s = sqrt( (1/T) * integral from 0 to T of s^2(t) dt )

where T is defined to be the signal's period: the smallest positive number such that s(t) = s(t + T).

a) What is the period of s(t) = A sin(2πf0 t + φ)?
b) What is the rms value of this signal? How is it related to the peak value?
c) What is the period and rms value of the depicted (Figure 1.5) square wave, generically denoted by sq(t)?
d) By inspecting any device you plug into a wall socket, you'll see that it is labeled "110 volts AC". What is the expression for the voltage provided by a wall socket? What is its rms value?

Figure 1.5: The square wave sq(t), which alternates between the values A and −A.

Problem 1.2: Modems
The word "modem" is short for "modulator-demodulator." Modems are used not only for connecting computers to telephone lines, but also for connecting digital (discrete-valued) sources to generic channels. In this problem, we explore a simple kind of modem, in which binary information is represented by the presence or absence of a sinusoid (presence representing a "1" and absence a "0"). Consequently, the modem's transmitted signal that represents a single bit has the form

x(t) = A sin(2πf0 t), 0 ≤ t ≤ T

Within each bit interval T, the amplitude is either A or zero.

a) What is the smallest transmission interval that makes sense with the frequency f0?
b) Assuming that ten cycles of the sinusoid comprise a single bit's transmission interval, what is the datarate of this transmission scheme?
c) Now suppose instead of using "on-off" signaling, we allow one of several different values for the amplitude during any transmission interval. If N amplitude values are used, what is the resulting datarate?
d) The classic communications block diagram applies to the modem. Discuss how the transmitter must interface with the message source since the source is producing letters of the alphabet, not bits.

Problem 1.3: Advanced Modems
To transmit symbols, such as letters of the alphabet, RU computer modems use two frequencies (1600 and 1800 Hz) and several amplitude levels. A transmission is sent for a period of time T (known as the transmission or baud interval) and equals the sum of two amplitude-weighted carriers.

x(t) = A1 sin(2πf1 t) + A2 sin(2πf2 t), 0 ≤ t ≤ T

We send successive symbols by choosing an appropriate frequency and amplitude combination, and sending them one after another.

a) What is the smallest transmission interval that makes sense to use with the frequencies given above? In other words, what should T be so that an integer number of cycles of the carrier occurs?
b) Sketch (using Matlab) the signal that the modem produces over several transmission intervals. Make sure the axes are labeled.
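Problem 1.2's on-off signaling can also be sketched in code. The problems use Matlab for plotting; here is a Python version that just builds the sample values (the carrier frequency, sampling density, and bit pattern below are arbitrary choices for illustration):

```python
import math

def ook_waveform(bits, A=1.0, f0=1000.0, cycles_per_bit=10, samples_per_cycle=20):
    """On-off keying: a '1' is A*sin(2*pi*f0*t) for T = cycles_per_bit/f0 seconds,
    a '0' is silence for the same interval (a sketch of Problem 1.2's modem)."""
    T = cycles_per_bit / f0                      # bit interval: whole carrier cycles
    n_samp = cycles_per_bit * samples_per_cycle  # samples per bit interval
    dt = T / n_samp
    x = []
    for b in bits:
        for n in range(n_samp):
            t = n * dt
            x.append(A * math.sin(2 * math.pi * f0 * t) if b else 0.0)
    return x, T

x, T = ook_waveform([1, 0, 1])
# With f0 = 1000 Hz and ten cycles per bit, T = 0.01 s, so 100 bits/s.
```

The choice of an integer number of carrier cycles per bit interval mirrors part (a) of the problem: the carrier then starts and ends each interval at zero phase.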

c) Using your signal transmission interval, how many amplitude levels are needed to transmit ASCII characters at a datarate of 3,200 bits/s? Assume use of the extended (8-bit) ASCII code.

note: We use a discrete set of values for A1 and A2. If we have N1 values for amplitude A1, and N2 values for A2, we have N1*N2 possible symbols that can be sent during each T second interval. To convert this number into bits (the fundamental unit of information engineers use to quantify things), compute log2(N1*N2).

Solutions to Exercises in Chapter 1

Solution to Exercise 1.4.1 (p. 7)
As cos(α + β) = cos(α)cos(β) − sin(α)sin(β), cos(2π(f + 1)n) = cos(2πfn)cos(2πn) − sin(2πfn)sin(2πn) = cos(2πfn).

Solution to Exercise 1.4.2 (p. 7)
A square wave takes on the values 1 and −1 alternately. See the plot in the module Elemental Signals (Section 2.2.6: Square Wave).

Chapter 2: Signals and Systems

2.1 Complex Numbers

While the fundamental signal used in electrical engineering is the sinusoid, it can be expressed mathematically in terms of an even more fundamental signal: the complex exponential. Representing sinusoids in terms of complex exponentials is not a mathematical oddity. Fluency with complex numbers and rational functions of complex variables is a critical skill all engineers master. Understanding information and power system designs and developing new systems all hinge on using complex numbers. In short, they are critical to modern electrical engineering, a realization made over a century ago.

2.1.1 Definitions

The notion of the square root of −1 originated with the quadratic formula: the solution of certain quadratic equations mathematically exists only if the so-called imaginary quantity sqrt(−1) could be defined. Euler first used i for the imaginary unit, but that notation did not take hold until roughly Ampère's time. Ampère used the symbol i to denote current (intensité de courant). By then, using i for current was entrenched, and electrical engineers chose j for writing complex numbers. It wasn't until the twentieth century that the importance of complex numbers to circuit theory became evident.

An imaginary number has the form jb = sqrt(−b^2). A complex number, z, consists of the ordered pair (a, b), where a is the real component and b is the imaginary component (the j is suppressed because the imaginary component of the pair is always in the second position). The imaginary number jb equals (0, b). Note that a and b are real-valued numbers.

Figure 2.1 (The Complex Plane) shows that we can locate a complex number in what we call the complex plane. Here, a, the real part, is the x-coordinate and b, the imaginary part, is the y-coordinate.

Figure 2.1: The Complex Plane. A complex number is an ordered pair (a, b) that can be regarded as coordinates in the plane.

From analytic geometry, we know that locations in the plane can be expressed as the sum of vectors, with the vectors corresponding to the x and y directions. Consequently, a complex number z can be expressed as the (vector) sum z = a + jb, where j indicates the y-coordinate. This representation is known as the Cartesian form of z. An imaginary number can't be numerically added to a real number; rather, this notation for a complex number represents vector addition, but it provides a convenient notation when we perform arithmetic manipulations.

Some obvious terminology. The real part of the complex number z = a + jb, written as Re(z), equals a. We consider the real part as a function that works by selecting that component of a complex number not multiplied by j. The imaginary part of z, Im(z), equals b: that part of a complex number that is multiplied by j. Again, both the real and imaginary parts of a complex number are real-valued.

The complex conjugate of z, written as z*, has the same real part as z but an imaginary part of the opposite sign.

z = Re(z) + j Im(z)
z* = Re(z) − j Im(z)   (2.1)

Using Cartesian notation, the following properties easily follow.

• If we add two complex numbers, the real part of the result equals the sum of the real parts and the imaginary part equals the sum of the imaginary parts. This property follows from the laws of vector addition: a1 + jb1 + a2 + jb2 = (a1 + a2) + j(b1 + b2). In this way, the real and imaginary parts remain separate.

• The product of j and a real number is an imaginary number: ja. The product of j and an imaginary number is a real number: j(jb) = −b because j^2 = −1. Consequently, multiplying a complex number by j rotates the number's position by 90 degrees.

Exercise 2.1.1 (Solution on p. 37.)
Use the definition of addition to show that the real and imaginary parts can be expressed as a sum/difference of a complex number and its conjugate: Re(z) = (z + z*)/2 and Im(z) = (z − z*)/(2j).

Complex numbers can also be expressed in an alternate form, polar form, which we will find quite useful. Polar form arises from the geometric interpretation of complex numbers. The Cartesian form of a complex number can be re-written as

a + jb = sqrt(a^2 + b^2) * ( a/sqrt(a^2 + b^2) + j*b/sqrt(a^2 + b^2) )

By forming a right triangle having sides a and b, we see that the real and imaginary parts correspond to the cosine and sine of the triangle's base angle. We thus obtain the polar form for complex numbers, written r∠θ:

z = a + jb = r∠θ
r = |z| = sqrt(a^2 + b^2)
a = r cos(θ), b = r sin(θ)
θ = arctan(b/a)

The quantity r is known as the magnitude of the complex number z, and is frequently written as |z|. The quantity θ is the complex number's angle. In using the arc-tangent formula to find the angle, we must take into account the quadrant in which the complex number lies.

Exercise 2.1.2 (Solution on p. 37.)
Convert 3 − 2j to polar form.

2.1.2 Euler's Formula

Surprisingly, the polar form of a complex number z can be expressed mathematically as

z = r e^(jθ)   (2.2)

To show this result, we use Euler's relations that express exponentials with imaginary arguments in terms of trigonometric functions.

e^(jθ) = cos(θ) + j sin(θ)   (2.3)

cos(θ) = (e^(jθ) + e^(−jθ))/2,   sin(θ) = (e^(jθ) − e^(−jθ))/(2j)   (2.4)

The first of these is easily derived from the Taylor's series for the exponential.

e^x = 1 + x/1! + x^2/2! + x^3/3! + ...

Substituting jθ for x, we find that

e^(jθ) = 1 + j θ/1! − θ^2/2! − j θ^3/3! + ...

because j^2 = −1, j^3 = −j, and j^4 = 1. Grouping separately the real-valued terms and the imaginary-valued ones,

e^(jθ) = (1 − θ^2/2! + ...) + j (θ/1! − θ^3/3! + ...)

The real-valued terms correspond to the Taylor's series for cos(θ), the imaginary ones to sin(θ), and Euler's first relation results. The remaining relations are easily derived from the first. We see that multiplying the exponential in (2.3) by a real constant corresponds to setting the radius of the complex number to the constant.
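Euler's relation and the Cartesian-to-polar conversion can both be checked numerically with Python's standard cmath module (the test angles below are arbitrary choices; note that Python writes the imaginary unit as j, just as electrical engineers do):

```python
import cmath
import math

# Euler's relation: e^{j*theta} = cos(theta) + j*sin(theta)
for theta in (0.0, 0.7, 2.0, -1.2, math.pi / 3):
    assert abs(cmath.exp(1j * theta) - (math.cos(theta) + 1j * math.sin(theta))) < 1e-12

# Cartesian-to-polar conversion, using Exercise 2.1.2's number z = 3 - 2j:
z = 3 - 2j
r, theta = cmath.polar(z)          # r = |z|, theta = angle in radians
assert abs(r - math.sqrt(13)) < 1e-12
# math.atan2 takes the quadrant into account, unlike a bare arctan(b/a):
assert abs(theta - math.atan2(-2.0, 3.0)) < 1e-12
# Going back: z = r * e^{j*theta}
assert abs(r * cmath.exp(1j * theta) - z) < 1e-12
```

The atan2 form automates the "take the quadrant into account" caveat above: for z = 3 − 2j the angle lands in the fourth quadrant (a negative angle), where arctan(b/a) alone would be ambiguous.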

2.1.3 Calculating with Complex Numbers

Adding and subtracting complex numbers expressed in Cartesian form is quite easy: You add (subtract) the real parts and imaginary parts separately.

z1 ± z2 = (a1 ± a2) + j(b1 ± b2)   (2.5)

To multiply two complex numbers in Cartesian form is not quite as easy, but follows directly from following the usual rules of arithmetic.

z1 z2 = (a1 + jb1)(a2 + jb2) = a1 a2 − b1 b2 + j(a1 b2 + a2 b1)   (2.6)

Note that we are, in a sense, multiplying two vectors to obtain another vector. Complex arithmetic provides a unique way of defining vector multiplication.

Exercise 2.1.3 (Solution on p. 37.)
What is the product of a complex number and its conjugate?

Division requires mathematical manipulation. We convert the division problem into a multiplication problem by multiplying both the numerator and denominator by the conjugate of the denominator.

z1/z2 = (a1 + jb1)/(a2 + jb2)
      = ((a1 + jb1)(a2 − jb2)) / ((a2 + jb2)(a2 − jb2))
      = (a1 a2 + b1 b2 + j(a2 b1 − a1 b2)) / (a2^2 + b2^2)   (2.7)

Because the final result is so complicated, it's best to remember how to perform division (multiplying numerator and denominator by the complex conjugate of the denominator) than trying to remember the final result.

The properties of the exponential make calculating the product and ratio of two complex numbers much simpler when the numbers are expressed in polar form.

z1 z2 = r1 e^(jθ1) r2 e^(jθ2) = r1 r2 e^(j(θ1+θ2))   (2.8)

z1/z2 = (r1 e^(jθ1)) / (r2 e^(jθ2)) = (r1/r2) e^(j(θ1−θ2))   (2.9)

To multiply, the radius equals the product of the radii and the angle the sum of the angles. To divide, the radius equals the ratio of the radii and the angle the difference of the angles. When the original complex numbers are in Cartesian form, it's usually worth translating into polar form, then performing the multiplication or division (especially in the case of the latter). Addition and subtraction of polar forms amounts to converting to Cartesian form, performing the arithmetic operation, and converting back to polar form.

Example 2.1
When we solve circuit problems, the crucial quantity, known as a transfer function, will always be expressed as the ratio of polynomials in the variable s = j2πf. What we'll need to understand the circuit's effect is the transfer function in polar form. For instance, suppose the transfer function equals

(s + 2) / (s^2 + s + 1), where s = j2πf   (2.10)
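The polar-form rules, and Exercise 2.1.3's product of a number with its conjugate, can be illustrated numerically (the particular radii and angles below are arbitrary choices):

```python
import cmath

# In polar form, radii multiply and angles add under multiplication:
z1 = cmath.rect(2.0, 0.4)   # the complex number 2 at angle 0.4 rad
z2 = cmath.rect(3.0, 1.1)   # the complex number 3 at angle 1.1 rad
r, theta = cmath.polar(z1 * z2)
assert abs(r - 6.0) < 1e-12 and abs(theta - 1.5) < 1e-12

# ...and under division, radii divide and angles subtract:
r, theta = cmath.polar(z1 / z2)
assert abs(r - 2.0 / 3.0) < 1e-12 and abs(theta + 0.7) < 1e-12

# A number times its conjugate gives the squared magnitude (Exercise 2.1.3):
z = 3 - 2j
assert abs(z * z.conjugate() - 13) < 1e-12
```

cmath.rect and cmath.polar convert between the r∠θ and Cartesian descriptions, so the assertions restate (2.8) and (2.9) directly.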

Performing the required division is most easily accomplished by first expressing the numerator and denominator each in polar form, then calculating the ratio.

(s + 2)/(s^2 + s + 1) = (j2πf + 2) / (1 − 4π^2 f^2 + j2πf)   (2.11)

= ( sqrt(4 + 4π^2 f^2) e^(j arctan(πf)) ) / ( sqrt((1 − 4π^2 f^2)^2 + 4π^2 f^2) e^(j arctan(2πf/(1 − 4π^2 f^2))) )   (2.12)

= sqrt( (4 + 4π^2 f^2) / (1 − 4π^2 f^2 + 16π^4 f^4) ) e^( j( arctan(πf) − arctan(2πf/(1 − 4π^2 f^2)) ) )   (2.13)

2.2 Elemental Signals

Elemental signals are the building blocks with which we build complicated signals. By definition, elemental signals have a simple structure. Exactly what we mean by the "structure of a signal" will unfold in this section of the course. Signals are nothing more than functions defined with respect to some independent variable, which we take to be time for the most part. Very interesting signals are not functions solely of time; one great example is an image. For an image, the independent variables are x and y (two-dimensional space). Video signals are functions of three variables: two spatial dimensions and time. Fortunately, most of the ideas underlying modern signal theory can be exemplified with one-dimensional signals.

2.2.1 Sinusoids

Perhaps the most common real-valued signal is the sinusoid.

s(t) = A cos(2πf0 t + φ)   (2.14)

For this signal, A is its amplitude, f0 its frequency, and φ its phase.

2.2.2 Complex Exponentials

The most important signal is complex-valued, the complex exponential.

s(t) = A e^(j(2πf0 t + φ)) = A e^(jφ) e^(j2πf0 t)   (2.15)

Here, j denotes sqrt(−1). A e^(jφ) is known as the signal's complex amplitude. Considering the complex amplitude as a complex number in polar form, its magnitude is the amplitude A and its angle the signal phase. The complex amplitude is also known as a phasor. The complex exponential cannot be further decomposed into more elemental signals, and is the most important signal in electrical engineering! Mathematical manipulations at first appear to be more difficult because complex-valued numbers are introduced. In fact, early in the twentieth century, mathematicians thought engineers would not be sufficiently sophisticated to handle complex exponentials even though they greatly simplified solving circuit problems. Steinmetz introduced complex exponentials to electrical engineering, and demonstrated that "mere" engineers could use them to good effect and even obtain right answers! See Complex Numbers (Section 2.1) for a review of complex numbers and complex arithmetic.
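Example 2.1's transfer function can be evaluated directly with complex arithmetic rather than entirely by hand. The sketch below (Python; the frequency value is an arbitrary choice) compares the machine-computed magnitude with the hand-derived polar-form magnitude from (2.12):

```python
import cmath
import math

def H(f):
    """The example transfer function (s + 2)/(s^2 + s + 1) at s = j*2*pi*f."""
    s = 2j * math.pi * f
    return (s + 2) / (s * s + s + 1)

# Polar form of the response at, say, f = 1 Hz:
mag, phase = cmath.polar(H(1.0))

# Check against the hand-derived magnitude
#   sqrt(4 + 4*pi^2*f^2) / sqrt((1 - 4*pi^2*f^2)^2 + 4*pi^2*f^2)
f = 1.0
num = math.sqrt(4 + 4 * (math.pi * f) ** 2)
den = math.sqrt((1 - 4 * (math.pi * f) ** 2) ** 2 + 4 * (math.pi * f) ** 2)
assert abs(mag - num / den) < 1e-12
```

At f = 0, s = 0 and the transfer function reduces to 2/1, a quick sanity check on the algebra.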

The complex exponential defines the notion of frequency: it is the only signal that contains only one frequency component. The sinusoid consists of two frequency components: one at the frequency f0 and the other at −f0. This decomposition of the sinusoid can be traced to Euler's relation.

cos(2πft) = (e^(j2πft) + e^(−j2πft))/2   (2.16)

sin(2πft) = (e^(j2πft) − e^(−j2πft))/(2j)   (2.17)

e^(j2πft) = cos(2πft) + j sin(2πft)   (2.18)

Decomposition: The complex exponential signal can thus be written in terms of its real and imaginary parts using Euler's relation. Thus, sinusoidal signals can be expressed as either the real or the imaginary part of a complex exponential signal (the choice depending on whether cosine or sine phase is needed), or as the sum of two complex exponentials. These two decompositions are mathematically equivalent to each other.

A cos(2πft + φ) = Re( A e^(jφ) e^(j2πft) )   (2.19)

A sin(2πft + φ) = Im( A e^(jφ) e^(j2πft) )   (2.20)

Figure 2.2: Graphically, the complex exponential scribes a circle in the complex plane as time evolves. Its real and imaginary parts are sinusoids. The rate at which the signal goes around the circle is the frequency f, and the time taken to go around is the period T. A fundamental relationship is T = 1/f.

Using the complex plane, we can envision the complex exponential's temporal variations as seen in the above figure (Figure 2.2). The magnitude of the complex exponential is A, and the initial value of the complex exponential at t = 0 has an angle of φ. As time increases, the locus of points traced by the complex exponential is a circle (it has constant magnitude A). The number of times per second we go around the circle equals the frequency f. The time taken for the complex exponential to go around the circle once is known as its period T, and equals 1/f. The projections onto the real and imaginary axes of the rotating vector representing the complex exponential signal are the cosine and sine signals of Euler's relation ((2.16)).
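Equations (2.19) and (2.20) say the cosine- and sine-phase sinusoids are the real and imaginary parts of the rotating phasor A e^(jφ) e^(j2πft). A short numerical check (the parameter values are illustrative):

```python
import cmath
import math

A, f0, phi = 1.5, 4.0, 0.8
phasor = A * cmath.exp(1j * phi)   # the complex amplitude A*e^{j*phi}
for k in range(12):
    t = k / 64.0
    s = phasor * cmath.exp(2j * math.pi * f0 * t)   # the rotating vector at time t
    # Real and imaginary parts recover the cosine- and sine-phase sinusoids:
    assert abs(s.real - A * math.cos(2 * math.pi * f0 * t + phi)) < 1e-12
    assert abs(s.imag - A * math.sin(2 * math.pi * f0 * t + phi)) < 1e-12
```

The constant magnitude |phasor| = A is the radius of the circle the signal traces in the complex plane.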

2.2.3 Real Exponentials

As opposed to complex exponentials which oscillate, real exponentials (Figure 2.3) decay.

s(t) = e^(−t/τ)   (2.21)

Figure 2.3: The real exponential.

The quantity τ is known as the exponential's time constant, and corresponds to the time required for the exponential to decrease by a factor of 1/e, which approximately equals 0.368. A decaying complex exponential is the product of a real and a complex exponential.

s(t) = A e^(jφ) e^(−t/τ) e^(j2πft) = A e^(jφ) e^((−1/τ + j2πf)t)   (2.22)

In the complex plane, this signal corresponds to an exponential spiral. For such signals, we can define complex frequency as the quantity multiplying t.

2.2.4 Unit Step

The unit step function (Figure 2.4) is denoted by u(t), and is defined to be

u(t) = 0 if t < 0; 1 if t > 0   (2.23)

Figure 2.4: The unit step.

Origin warning: This signal is discontinuous at the origin. Its value at the origin need not be defined, and doesn't matter in signal theory.
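The time-constant statement is easy to confirm: after t = τ the real exponential has fallen to 1/e ≈ 0.368 of its initial value. A minimal sketch (the value of τ is arbitrary):

```python
import math

tau = 2.0  # an arbitrary time constant, in seconds

def s(t):
    """Decaying real exponential e^{-t/tau}."""
    return math.exp(-t / tau)

# After one time constant, the signal is 1/e of its initial value:
assert abs(s(tau) - 1 / math.e) < 1e-12
assert round(s(tau), 3) == 0.368
```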

Available for free at Connexions <http://cnx. ∆ seconds. to mathematically represent turning on an oscillator. t We will nd that this is the second most important signal in communications.6) sq (t) is a periodic signal like the sinusoid. then p∆ (t) =   0   1 0    if if if t<0 0<t<∆ t>∆ (2. simpler signal than the square wave. which must be specied to characterize the signal. It too has an amplitude and a period.6: The square wave. We nd subsequently that the sine wave is a Square Wave A t T Figure 2. we can write it as the product of a sinusoid and a step: s (t) = Asin (2πf t) u (t).9> .5 Pulse The unit pulse (Figure 2. 2.5) describes turning a unit-amplitude signal on for a duration of turning it o.24) 1 p∆(t) ∆ Figure 2. For example. 2.21 This kind of signal is used to describe signals that "turn on" suddenly.2.2.5: The pulse.6 Square Wave The square wave (Figure

2.3 Signal Decomposition

A signal's complexity is not related to how wiggly it is. Rather, a signal expert looks for ways of decomposing a given signal into a sum of simpler signals, which we term the signal decomposition. Though we will never compute a signal's complexity, it essentially equals the number of terms in its decomposition. In writing a signal as a sum of component signals, we can change the component signal's gain by multiplying it by a constant and by delaying it. More complicated decompositions could contain derivatives or integrals of simple signals.

Example 2.2
As an example of signal complexity, we can express the pulse pΔ(t) as a sum of delayed unit steps.

pΔ(t) = u(t) − u(t − Δ)   (2.25)

Thus, the pulse is a more complex signal than the step. Be that as it may, the pulse is very useful to us.

Exercise 2.3.1 (Solution on p. 37.)
Express a square wave having period T and amplitude A as a superposition of delayed and amplitude-scaled pulses.

Because the sinusoid is a superposition of two complex exponentials, the sinusoid is more complex. We could not prevent ourselves from the pun in this statement. Clearly, the word "complex" is used in two different ways here. The complex exponential can also be written (using Euler's relation (2.16)) as a sum of a sine and a cosine. We will discover that virtually every signal can be decomposed into a sum of complex exponentials, and that this decomposition is very useful. Thus, the complex exponential is more fundamental, and Euler's relation does not adequately reveal its complexity.

2.4 Discrete-Time Signals

So far, we have treated what are known as analog signals and systems. Mathematically, analog signals are functions having continuous quantities as their independent variables, such as space and time. Discrete-time signals (Section 5.5) are functions defined on the integers; they are sequences. One of the fundamental results of signal theory (Section 5.3) will detail conditions under which an analog signal can be converted into a discrete-time one and retrieved without error. This result is important because discrete-time signals can be manipulated by systems instantiated as computer programs. Subsequent modules describe how virtually all analog signal processing can be performed with software.

As important as such results are, discrete-time signals are more general, encompassing signals derived from analog ones and signals that aren't. For example, the characters forming a text file form a sequence, which is also a discrete-time signal. We must deal with such symbolic valued signals and systems as well.

As with analog signals, we seek ways of decomposing real-valued discrete-time signals into simpler components. With this approach leading to a better understanding of signal structure, we can exploit that structure to represent information (create ways of representing information with signals) and to extract information (retrieve the information thus represented). For symbolic-valued signals, the approach is different: We develop a common representation of all symbolic-valued signals so that we can embody the information they contain in a unified way. From an information representation perspective, the most important issue becomes, for both real-valued and symbolic-valued signals, what is the most parsimonious and compact way to represent information so that it can be extracted later.
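Example 2.2's decomposition of the pulse into delayed unit steps can be spot-checked in code (the step's value exactly at the origin is chosen as 1 here, which, as noted earlier, doesn't matter in signal theory):

```python
def u(t):
    """Unit step; the value exactly at t = 0 is immaterial (chosen as 1 here)."""
    return 1.0 if t >= 0 else 0.0

def pulse(t, delta):
    """Example 2.2: p_delta(t) = u(t) - u(t - delta)."""
    return u(t) - u(t - delta)

delta_width = 1.0
assert pulse(-0.5, delta_width) == 0.0   # before the pulse turns on
assert pulse(0.5, delta_width) == 1.0    # inside the pulse
assert pulse(1.5, delta_width) == 0.0    # after it turns off
```

The second step is delayed by Δ and subtracted, which is exactly the "delay and gain" manipulation the decomposition discussion allows.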

2.4.1 Real- and Complex-valued Signals

A discrete-time signal is represented symbolically as s(n), where n = {..., −1, 0, 1, ...}. We usually draw discrete-time signals as stem plots to emphasize the fact they are functions defined only on the integers. We can delay a discrete-time signal by an integer just as with analog ones. A delayed unit sample has the expression δ(n − m), and equals one when n = m.

Figure 2.7: The discrete-time cosine signal is plotted as a stem plot. Can you find the formula for this signal?

2.4.2 Complex Exponentials

The most important signal is, of course, the complex exponential sequence.

s(n) = e^(j2πfn)   (2.26)

2.4.3 Sinusoids

Discrete-time sinusoids have the obvious form s(n) = A cos(2πfn + φ). As opposed to analog complex exponentials and sinusoids that can have their frequencies be any real value, frequencies of their discrete-time counterparts yield unique waveforms only when f lies in the interval (−1/2, 1/2]. This property can be easily understood by noting that adding an integer to the frequency of the discrete-time complex exponential has no effect on the signal's value.

e^(j2π(f+m)n) = e^(j2πfn) e^(j2πmn) = e^(j2πfn)   (2.27)

This derivation follows because the complex exponential evaluated at an integer multiple of 2π equals one.

2.4.4 Unit Sample

The second-most important discrete-time signal is the unit sample, which is defined to be

δ(n) = 1 if n = 0; 0 otherwise   (2.28)
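The periodicity-in-frequency property (2.27) is easily verified numerically (the fractional frequency 0.3 is an arbitrary choice):

```python
import cmath
import math

# Adding an integer to a discrete-time frequency leaves the signal unchanged,
# since e^{j*2*pi*m*n} = 1 for integers m and n:
f = 0.3
for n in range(-8, 9):
    z1 = cmath.exp(2j * math.pi * f * n)
    z2 = cmath.exp(2j * math.pi * (f + 1) * n)
    assert abs(z1 - z2) < 1e-9
```

This is why discrete-time frequencies only need to range over an interval of unit length, such as (−1/2, 1/2].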

Figure 2.8: The unit sample.

Examination of a discrete-time signal's plot, like that of the cosine signal shown in Figure 2.7 (Discrete-Time Cosine Signal), reveals that all signals consist of a sequence of delayed and scaled unit samples. Because the value of a sequence at each integer m is denoted by s(m), and the unit sample delayed to occur at m is written δ(n − m), we can decompose any signal as a sum of unit samples delayed to the appropriate location and scaled by the signal value.

s(n) = sum over m from −∞ to ∞ of s(m) δ(n − m)   (2.29)

This kind of decomposition is unique to discrete-time signals, and will prove useful subsequently.

2.4.5 Symbolic-valued Signals

Another interesting aspect of discrete-time signals is that their values do not need to be real numbers. We do have real-valued discrete-time signals like the sinusoid, but we also have signals that denote the sequence of characters typed on the keyboard. Such characters certainly aren't real numbers, and as a collection of possible signal values, they have little mathematical structure other than that they are members of a set. More formally, each element of the symbolic-valued signal s(n) takes on one of the values {a1, ..., aK} which comprise the alphabet A. This technical terminology does not mean we restrict symbols to being members of the English or Greek alphabet. They could represent keyboard characters, bytes (8-bit quantities), or integers that convey daily temperature.

Discrete-time systems can act on discrete-time signals in ways similar to those found in analog signals and systems. Because of the role of software in discrete-time systems, many more different systems can be envisioned and constructed with programs than can be with analog signals. In fact, a special class of analog signals can be converted into discrete-time signals, processed with software, and converted back into an analog signal, all without the incursion of error. For such signals, systems can be easily produced in software, with equivalent analog realizations difficult, if not impossible, to design. Whether controlled by software or not, discrete-time systems are ultimately constructed from digital circuits, which consist entirely of analog circuit elements. Furthermore, the transmission and reception of discrete-time signals, like e-mail, is accomplished with analog signals and systems. Understanding how discrete-time and analog signals and systems intertwine is perhaps the main goal of this course.
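Decomposition (2.29) can be illustrated with a short finite-length example (the signal values below are arbitrary; outside the listed range the signal is taken as zero):

```python
def delta(n):
    """Discrete-time unit sample: 1 at n = 0, 0 elsewhere."""
    return 1 if n == 0 else 0

# An arbitrary short signal s(0)..s(3), zero elsewhere:
s = [2.0, -1.0, 0.5, 3.0]

# Decomposition (2.29): s(n) = sum over m of s(m) * delta(n - m)
for n in range(-3, 7):
    recon = sum(s[m] * delta(n - m) for m in range(len(s)))
    expected = s[n] if 0 <= n < len(s) else 0.0
    assert recon == expected
```

Each term contributes at exactly one index, so the sum reassembles the signal value by value.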

2.5 Introduction to Systems

Signals are manipulated by systems. Mathematically, we represent what a system does by the notation y(t) = S(x(t)), with x representing the input signal and y the output signal.

Figure 2.9: Definition of a system. The system depicted has input x(t) and output y(t). The notation y(t) = S(x(t)) corresponds to this block diagram. We term S(·) the input-output relation for the system.

This notation mimics the mathematical symbology of a function: A system's input is analogous to an independent variable and its output the dependent variable. For the mathematically inclined, a system is a functional: a function of a function (signals are functions). In many ways, systems are like functions, rules that yield a value for the dependent variable (our output signal) for each value of its independent variable (its input signal). Simple systems can be connected together (one system's output becomes another's input) to accomplish some overall design. Interconnection topologies can be quite complicated, but usually consist of weaves of three basic interconnection forms.

2.5.1 Cascade Interconnection

Figure 2.10: The cascade configuration, x(t) → S1[·] → w(t) → S2[·] → y(t). The most rudimentary ways of interconnecting systems are shown in the figures in this section.

The simplest form is when one system's output is connected only to another's input. Mathematically, w(t) = S1(x(t)) and y(t) = S2(w(t)), with the information contained in x(t) processed by the first, then the second system. In some cases, the ordering of the systems matters; in others it does not. For example, in the fundamental model of communication (Figure 1.3: Fundamental model of communication) the ordering most certainly matters.

2.5.2 Parallel Interconnection

Figure 2.11: The parallel configuration.

A signal x(t) is routed to two (or more) systems, with this signal appearing as the input to all systems simultaneously and with equal strength. Block diagrams have the convention that signals going to more than one system are not split into pieces along the way. Two or more systems operate on x(t), and their outputs are added together to create the output y(t). Thus, y(t) = S1(x(t)) + S2(x(t)), and the information in x(t) is processed separately by both systems.

2.5.3 Feedback Interconnection

Figure 2.12: The feedback configuration, with e(t) = x(t) − S2(y(t)) and y(t) = S1(e(t)).

The subtlest interconnection configuration has a system's output also contributing to its input. The mathematical statement of the feedback interconnection (Figure 2.12: feedback) is that the feed-forward system produces the output, y(t) = S1(e(t)), and that the input e(t) equals the input signal minus the output of some other system applied to y(t): e(t) = x(t) − S2(y(t)). Engineers would say the output is "fed back" to the input through system 2, hence the name. Feedback systems are omnipresent in control problems, with the error signal used to adjust the output to achieve some condition defined by the input (controlling) signal.
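The cascade and parallel interconnections can be sketched directly as function composition. This is an illustrative example (Python, not from the text); the feedback case is omitted because e(t) = x(t) − S2(y(t)) must be solved jointly rather than composed:

```python
def cascade(S1, S2):
    """y = S2(S1(x)): one system's output feeds the next one's input."""
    return lambda x: S2(S1(x))

def parallel(S1, S2):
    """y = S1(x) + S2(x): both systems act on x and the outputs are added."""
    return lambda x: S1(x) + S2(x)

double = lambda x: 2 * x   # hypothetical system 1
add3 = lambda x: x + 3     # hypothetical system 2

assert cascade(double, add3)(5) == 13    # 2*5 + 3
assert cascade(add3, double)(5) == 16    # (5+3)*2: ordering can matter
assert parallel(double, add3)(5) == 18   # 10 + 8
```

The two cascade results differ, illustrating that the ordering of systems matters for some system pairs.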

For example, in a car's cruise control system, x(t) is a constant representing what speed you want, and y(t) is the car's speed as measured by a speedometer. In this application, system 2 is the identity system (output equals input).

2.6 Simple Systems

Systems manipulate signals, creating output signals derived from their inputs. Why the following are categorized as "simple" will only become evident towards the end of the course.

2.6.1 Sources

Sources produce signals without having input. We like to think of these as having controllable parameters, like amplitude and frequency. Examples would be oscillators that produce periodic signals like sinusoids and square waves and noise generators that yield signals with erratic waveforms (more about noise subsequently). Simply writing an expression for the signals they produce specifies sources. A sine wave generator might be specified by y(t) = A sin(2πf0 t) u(t), which says that the source was turned on at t = 0 to produce a sinusoid of amplitude A and frequency f0.

2.6.2 Amplifiers

An amplifier (Figure 2.13) multiplies its input by a constant known as the amplifier gain.

y(t) = G x(t)   (2.30)

Figure 2.13: An amplifier with gain G.

The gain can be positive or negative (if negative, the amplifier actually inverts its input), and its magnitude can be greater than one or less than one. If less than one, the amplifier attenuates. A real-world example of an amplifier is your home stereo. You control the gain by turning the volume control.

2.6.3 Delay

A system serves as a time delay (Figure 2.14) when the output signal equals the input signal at an earlier time.

y(t) = x(t − τ)   (2.31)

Figure 2.14: A delay.

Here, τ is the delay. The way to understand this system is to focus on the time origin: The output at time t = τ equals the input at time t = 0. Thus, if the delay is positive, the output emerges later than the input, and plotting the output amounts to shifting the input plot to the right. The delay can be negative, in which case we say the system advances its input. Such systems are difficult to build (they would have to produce signal values derived from what the input will be), but we will have occasion to advance signals in time.

2.6.4 Time Reversal

Here, the output signal equals the input signal flipped about the time origin.

y(t) = x(−t)   (2.32)

Figure 2.15: A time reversal system.

Again, such systems are difficult to build, but the notion of time reversal occurs frequently in communications systems.

Exercise 2.6.1 (Solution on p. 37.)
Mentioned earlier was the issue of whether the ordering of systems mattered. In other words, if we have two systems in cascade, does the output depend on which comes first? Determine if the ordering matters for the cascade of an amplifier and a delay and for the cascade of a time-reversal system and a delay.
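Delay and time reversal are easy to experiment with when a signal is represented as a function of time. A small sketch (Python, not from the text; the quadratic test signal is invented):

```python
def delay(x, tau):
    """y(t) = x(t - tau): the output at t equals the input tau seconds earlier."""
    return lambda t: x(t - tau)

def time_reverse(x):
    """y(t) = x(-t): the signal flipped about the time origin."""
    return lambda t: x(-t)

x = lambda t: t ** 2 + t            # an arbitrary test signal
y = delay(time_reverse(x), 3.0)     # time-reverse, then delay: x(-(t-3))
z = time_reverse(delay(x, 3.0))     # delay, then time-reverse: x(-t-3)
assert y(1.0) == x(2.0)             # x(-(1-3)) = x(2)
assert z(1.0) == x(-4.0)            # x(-1-3) = x(-4): ordering matters here

amp = lambda s: (lambda t: 2.0 * s(t))
assert delay(amp(x), 3.0)(1.0) == amp(delay(x, 3.0))(1.0)   # gain and delay commute
```

Evaluating both orderings at the same time instant makes the distinction between the two cascades concrete.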

2.6.5 Derivative Systems and Integrators

Systems that perform calculus-like operations on their inputs can produce waveforms significantly different than present in the input. Derivative systems operate in a straightforward way: A first-derivative system would have the input-output relationship y(t) = d/dt x(t). Integral systems have the complication that the integral's limits must be defined. It is a signal theory convention that the elementary integral operation have a lower limit of −∞, and that the value of all signals at t = −∞ equals zero. A simple integrator would have input-output relation

y(t) = ∫_{−∞}^{t} x(α) dα   (2.33)

2.6.6 Linear Systems

Linear systems are a class of systems rather than having a specific input-output relation. They have the property that when the input is expressed as a weighted sum of component signals, the output equals the same weighted sum of the outputs produced by each component. When S(·) is linear,

S(G1 x1(t) + G2 x2(t)) = G1 S(x1(t)) + G2 S(x2(t))   (2.34)

for all choices of signals and gains. This general input-output relation property can be manipulated to indicate specific properties shared by all linear systems.

• S(G x(t)) = G S(x(t)). The colloquialism summarizing this property is "Double the input, you double the output." Note that this property is consistent with alternate ways of expressing gain changes: Since 2x(t) also equals x(t) + x(t), the linear system definition provides the same output no matter which of these is used to express a given signal.

• S(0) = 0. If the input is identically zero for all time, the output of a linear system must be zero. This property follows from the simple derivation S(0) = S(x(t) − x(t)) = S(x(t)) − S(x(t)) = 0.

Just why linear systems are so important is related not only to their properties, which are divulged throughout this course, but also because they lend themselves to relatively simple mathematical analysis. Said another way, "They're the only systems we thoroughly understand!" We can find the output of any linear system to a complicated input by decomposing the input into simple signals. The equation above (2.34) says that when a system is linear, its output to a decomposed input is the sum of outputs to each input. For example, if x(t) = e^{−t} + sin(2πf0 t), the output S(x(t)) of any linear system equals

y(t) = S(e^{−t}) + S(sin(2πf0 t))   (2.35)

2.6.7 Time-Invariant Systems

Systems that don't change their input-output relation with time are said to be time-invariant. The mathematical way of stating this property is to use the signal delay concept described in Simple Systems (Section 2.6.3: Delay).

(y(t) = S(x(t))) ⇒ (y(t − τ) = S(x(t − τ)))

If you delay (or advance) the input, the output is similarly delayed (advanced). Thus, a time-invariant system responds to an input you may supply tomorrow the same way it responds to the same input applied today; today's output is merely delayed to occur tomorrow.
a) b) c) d) Complex Number Arithmetic 11 Find the real part. a) What are the cube-roots of 27? In other words. the magnitude and angle of the complex numbers given by the following −1√ 1 + j + ej 2 π π ej 3 + ejπ + e−(j 3 ) 1+ 3j 2 π Problem 2.7 Signals and Systems Problems Problem 2. Time-Invariant Table Input-Output Relation y (t) = y (t) = y (t) = y (t) = d dt (x) d2 dt2 (x) 2 d dt (x) dx dt + x Linear yes yes no yes yes yes yes yes no no no Time-Invariant yes yes yes yes yes yes no no yes yes yes y (t) = x1 + x2 y (t) = x (t − τ ) y (t) = cos (2πf t) x (t) y (t) = x (−t) y (t) = x2 (t) y (t) = |x (t) | y (t) = mx (t) + b Table 2.3: 11 Cool Exponentials Simplify the following (cool) expressions. but characterizing them so that you can predict their behavior for any input remains an unsolved problem. Available for free at Connexions <http://cnx. linear and time-invariant.1: expressions. etc. what is b) What are the fth roots of 3 (3 5 )? c) What are the fourth roots of one? 1 27 3 ? 1 Problem 2.2: Discovering Roots Complex numbers expose all the roots of real (and complex) numbers. there should be two square-roots.9> . Linear. Much of the signal processing and system theory discussed here concentrates on such systems. For example. This content is available online at <http://cnx. imaginary part. Find the following roots. For example. SIGNALS AND SYSTEMS The collection of linear.29/>. time-invariant systems are the most thoroughly understood systems. for the most 2. three cube-roots. electric circuits are.30 CHAPTER 2. Nonlinear ones abound. of any

prove it.16).org/content/col10040/1. that when the input is the real part of a complex exponential. are your answers unique? If so. nd an alternative answer for the complex exponential representation. and indicate the location of the frequencies in the complex s-plane. a) b) c) d) e) f) g) h) v (t) = cos (5t) v (t) = sin 8t + π 4 v (t) = e−t v (t) = e−(3t) sin 4t + 3π 4 v (t) = 5e(2t) sin (8t + 2π) v (t) = −2 v (t) = 4sin (2t) + 3cos (2t) √ v (t) = 2cos 100πt + π − 3sin 100πt + 6 π 2 Available for free at Connexions <http://cnx. S Re Aej2πf t = Re S Aej2πf t Aej2πft Re[•] Aej2πft S[•] Re[•] S[•] S[Re[Aej2πft]] Re[S[ Aej2πft]] Figure 2.9> . complex exponentials is much easier than for sinusoids.31 a) b) c) jj j 2j j jj Problem 2.5: For each of the indicated voltages. and linear systems analysis is particularly easy. write it as the real part of a complex exponential (v Explicitly indicate the value of the complex amplitude complex amplitude as a vector in the (t) = Re (V est )). if not. i) ii) iii) 3sin (24t) √ 2cos 2π60t + π 4 2cos t + π + 4sin t − 6 π 3 b) Show that for linear systems having real-valued outputs for real inputs. the output is the real part of the system's output to the complex exponential (see Figure 2. Represent each V and the complex frequency s. What is the frequency (in Hz) of each? In general.16 Problem 2.4: Complex-valued Signals Solving systems for Complex numbers and phasors play a very important role in electrical engineering. V -plane. a) Find the phasor representation for each. and re-express each as the real and imaginary parts of a complex exponential.

17) as a linear combination of delayed and weighted step functions and ramps (the integral of a step).6: Express each of the following signals (Figure 2. Available for free at Connexions <http://cnx.9> .org/content/col10040/1.32 CHAPTER 2. SIGNALS AND SYSTEMS Problem 2.

9>… 1 2 3 4 -1 .33 s(t) 10 1 (a) t s(t) 10 1 2 (b) t s(t) 10 1 2 (c) t 2 s(t) –1 1 –1 (d) t s(t) 1 t Available for free at Connexions <

Available for free at Connexions <http://cnx. x(t) 1 0.9> .org/content/col10040/1.7: ure 2. time-invariant system is the signal x (t). time-invariant system yields the output y (t).19 3 t Problem 2.18 a) Find and sketch this system's output when the input is the depicted signal (Figure 2. b) Find and sketch this system's output when the input is a unit step.8: Linear Systems The depicted input (Figure 2. SIGNALS AND SYSTEMS Problem 2.20) x (t) to a linear.18).19). Linear. the output is the signal y (t) (Fig- x(t) 1 1 y(t) 1 2 3 t 1 –1 2 3 t Figure 2.5 1 2 Figure 2.34 CHAPTER 2. Time-Invariant Systems When the input to a linear.

21)? x(t) 2 ••• 1 –2 Figure 2. time-invariant system.35 x(t) 1 1/2 1 t –1/2 Figure 2 t a) What will be the received signal when the transmitter sends the pulse sequence (Figure 2. When the transmitted signal x (t) is a pulse.23) x1 (t)? Available for free at Connexions <http://cnx.22).21 2 3 4 t Problem 2. x(t) 1 t 1 r(t) 1 1 Figure 2.20 y(t) 1 2 1 t a) What is the system's output to a unit step input u (t)? b) What will the output be when the input is the depicted square wave (Figure 2.9: Communication Channel A particularly interesting communication channel can be modeled as a linear.9> . the received signal r (t) is as shown (Figure 2.

a is a constant. a) When the input is a unit step (x (t) = u (t)). Suppose we are given the following dierential equation to solve.23) has half the duration as the original? x2 (t) that 1 x1(t) 1 1 x2(t) 2 3 t 1/2 1 t Figure 2. SIGNALS AND SYSTEMS b) What will be the received signal when the transmitter sends the pulse signal (Figure 2.36 CHAPTER 2. particularly when they involve dierential equations.10: Analog Computers So-called analog computers use circuits to solve mathematical problems. Available for free at Connexions <http://cnx. suppose the input is a unit pulse (unit-amplitude.23 Problem 2.9> . unit-duration) delivered to the circuit at time t = dy (t) + ay (t) = x (t) dt In this equation. What is the total energy expended by the input? b) Instead of a unit step. What is the output voltage in this case? Sketch the waveform. the output is given by y (t) = 1 − e−(at) u (t).

1 (p. "Delay" means means t → t − τ.37 Solutions to Exercises in Chapter 2 Solution to Exercise 2. 28) (−1) ApT /2 t − n T 2 zz ∗ = r2 = (|z|) 2 . Time-reverse then delay: y (t) = x (− (t − τ )) = x (−t + τ ).3. we rst locate the number in the complex plane in the fourth quadrant. 22) Solution to Exercise 2. 13∠ (−33. In the rst case.7) degrees.1 (p. The nal answer is √ 13 = 32 + (−2) 2 . Im (z)) √ to polar form. Solution to Exercise 2. 15) 3 − 2j The angle equals z + z ∗ = a + jb + a − jb = 2a = 2Re (z).6. 14) To convert Solution to Exercise 2.1. z − z ∗ = a + jb − (a − jb) = 2jb = 2 (j.2 (p. which equals Solution to Exercise 2.1 (p.588 radians (−33. Case 2 and the way we apply the gain and delay the signal gives the same result.7 degrees).1. sq (t) = ∞ n=−∞ n −arctan 2 3 or −0. "Time-reverse" t → −t Case 1 y (t) = Gx (t − τ ). x ((−t) − τ ). Similarly.9> . order does not matter. 16) zz ∗ = (a + jb) (a − jb) = a2 + b2 .org/content/col10040/1.3 (p. Thus. in the second it does. The distance from the origin to the complex number is the magnitude r.1. Delay then time-reverse: y (t) = Available for free at Connexions <http://cnx.


Chapter 3: Analog Signal Processing

3.1 Voltage, Current, and Generic Circuit Elements

We know that information can be represented by signals; now we need to understand how signals are physically realized. Over the years, electric signals have been found to be the easiest to use. Voltage and currents comprise the electric instantiations of signals. Thus, we need to delve into the world of electricity and electromagnetism. The systems used to manipulate electric signals directly are called circuits, and they refine the information representation or extract information from the voltage or current. In many cases, they make nice examples of linear systems.

A generic circuit element places a constraint between the classic variables of a circuit: voltage and current. Voltage is electric potential and represents the "push" that drives electric charge from one place to another. What causes charge to move is a physical separation between positive and negative charge. A battery generates, through electrochemical means, excess positive charge at one terminal and negative charge at the other, creating an electric field. Electric charge can arise from many sources, the simplest being the electron. When we say that "electrons flow through a conductor," what we mean is that the conductor's constituent atoms freely give up electrons from their outer shells. "Flow" thus means that electrons hop from atom to atom driven along by the applied electric potential. A missing electron, however, is a virtual positive charge. Electrical engineers call these holes, and in some materials, particularly certain semiconductors, current flow is actually due to holes. Current flow also occurs in nerve cells found in your brain. Here, neurons "communicate" using propagating voltage pulses that rely on the flow of positive ions (potassium and sodium primarily, and to some degree calcium) across the neuron's outer wall. Thus, current can come from many sources, and circuit theory can be used to understand how current flows in reaction to electric fields.

Electrons comprise current flow in many cases. Because electrons have a negative charge, electrons move in the opposite direction of positive current flow: Negative charge flowing to the right is equivalent to positive charge moving to the left. It is important to understand the physics of current flow in conductors to appreciate the innovation of new electronic devices.
Voltage is dened generates. now we need to understand how signals are physically realized. current ows. current can come from many sources. however.14/>. The systems used to manipulate electric signals directly are called make nice examples of linear systems. other. and in some materials. Electrons comprise current ow in many cases. electric signals have been found to be the easiest to use. Voltage and currents comprise the electric instantiations of signals. In many cases.1 Voltage. we need to delve into the world of electricity and electromagnetism.9> 39 ." what we mean is that the conductor's constituent atoms freely give up electrons from their outer shells. Here. Over the years. the simplest being the electron. What causes charge to move is a physical separation between positive and negative charge.

dcs. and is named for the French physicist Ampère . Is this really a unit of energy? If so. The units of energy are joules since a watt equals joules/second.21/>. Voltage has units of volts. Current ows through circuit elements.40 CHAPTER 3. which we indicate by lines in circuit diagrams. In v-i relation. dening the The element has a v-i relation dened by the element's physical properties.1 (Generic Circuit instantaneous power at each moment of time consumed by the element p (t) = v (t) i (t) is given by the product of the voltage and current. Current has power.bioanalytical. power is the rate at which energy is A positive value for power indicates that at time means it is consumed or produced.html 4 This content is available online at <http://cnx. Exercise 3. the drop. With voltage expressed in volts and current in amperes. a negative value producing energy is the integral of power. Voltages and currents also carry Element) for circuit elements.1 (Solution on power dened this way has units of watts. 116.1: The generic circuit element.1 (Generic Circuit Element).uk/∼history/Mathematicians/ we have the convention that positive current ows from positive to negative voltage 3 units of amperes.2 Ideal Circuit Elements voltage and Residential energy bills typically state a home's energy usage in kilowatt-hours. Consequently. such as that depicted in Figure 3. t the circuit element is consuming power. t E (t) = −∞ p (α) dα Again.htm http://www-groups. positive energy corresponds to consumed energy and negative energy corresponds to energy production. 2 3 4 The elementary circuit elementsthe resistor. Note that a circuit element having a power prole that is both positive and negative over some time interval could consume or produce energy according to the sign of the integral of power. and both the unit and the quantity are named for Volta 2 . and inductor impose linear relationships between http://www. Available for free at Connexions <http://cnx.1. 
and through conductors. Again using the convention shown in Figure 3. For every circuit element we dene a voltage and a current. ANALOG SIGNAL PROCESSING Generic Circuit Element i + v – Figure 3. capacitor.9> .org/content/col10040/1. how many joules equals one kilowatt-hour? 3. Just as in all areas of physics and chemistry.
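The relations p(t) = v(t) i(t) and E(t) = ∫ p(α) dα are easy to exercise numerically. The following sketch (Python, not from the text; the constant 60 W load is an invented example) integrates power over time and converts between joules and kilowatt-hours:

```python
# A kilowatt-hour is a power (1000 W) times a time (3600 s), hence an energy:
joules_per_kWh = 1000.0 * 3600.0
assert joules_per_kWh == 3.6e6

def energy(p, t0, t1, steps=100000):
    """Numerically integrate power p(t) from t0 to t1 (trapezoidal rule)."""
    dt = (t1 - t0) / steps
    total = 0.5 * (p(t0) + p(t1))
    total += sum(p(t0 + k * dt) for k in range(1, steps))
    return total * dt

# A constant 60 W load running for one hour consumes 216 kJ = 0.06 kWh:
E = energy(lambda t: 60.0, 0.0, 3600.0)
assert abs(E - 216000.0) < 1e-3
assert abs(E / joules_per_kWh - 0.06) < 1e-9
```

For time-varying p(t), the same integrator applies; only the constant-power case has the simple product form energy = power × time.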

3.2.1 Resistor

Figure 3.2: Resistor.

The resistor is far and away the simplest circuit element. In a resistor, the voltage is proportional to the current, with the constant of proportionality R, known as the resistance:

v(t) = R i(t)

Resistance has units of ohms, denoted by Ω, and is named for the German electrical scientist Georg Ohm. Sometimes, the v-i relation for the resistor is written i = Gv, with G, the conductance, equal to 1/R. Conductance has units of Siemens (S), and is named for the German electronics industrialist Werner von Siemens.

When resistance is positive, as it is in most cases, a resistor consumes power. A resistor's instantaneous power consumption can be written one of two ways:

p(t) = R i²(t) = (1/R) v²(t)

As the resistance approaches infinity, we have what is known as an open circuit: No current flows, but a non-zero voltage can appear across the open circuit. As the resistance becomes zero, the voltage goes to zero for a non-zero current flow. This situation corresponds to a short circuit. A superconductor physically realizes a short circuit.

3.2.2 Capacitor

Figure 3.3: Capacitor.

The capacitor stores charge and the relationship between the charge stored and the resultant voltage is q = Cv. The constant of proportionality, the capacitance, has units of farads (F), and is named for the English experimental physicist Michael Faraday. As current is the rate of change of charge, the v-i relation can be expressed in differential or integral form:

i(t) = C dv(t)/dt   or   v(t) = (1/C) ∫_{−∞}^{t} i(α) dα

If the voltage across a capacitor is constant, then the current flowing into it equals zero. In this situation, the capacitor is equivalent to an open circuit. The power consumed/produced by a voltage applied to a capacitor depends on the product of the voltage and its derivative:

p(t) = C v(t) dv(t)/dt

This result means that a capacitor's total energy expenditure up to time t is concisely given by

E(t) = (1/2) C v²(t)

This expression presumes the fundamental assumption of circuit theory: all voltages and currents in any circuit were zero in the far distant past (t = −∞).

3.2.3 Inductor

Figure 3.4: Inductor.

The inductor stores magnetic flux, with larger valued inductors capable of storing more flux. Inductance has units of henries (H), and is named for the American physicist Joseph Henry. The differential and integral forms of the inductor's v-i relation are

v(t) = L di(t)/dt   or   i(t) = (1/L) ∫_{−∞}^{t} v(α) dα

The power consumed/produced by an inductor depends on the product of the inductor current and its derivative,

p(t) = L i(t) di(t)/dt

and its total energy expenditure up to time t is given by

E(t) = (1/2) L i²(t)
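The derivative forms of the capacitor and inductor v-i relations can be checked with a finite-difference approximation. A small sketch (Python, not from the text; the 1 µF value and 50 Hz test voltage are invented):

```python
import math

def derivative(f, t, dt=1e-7):
    """Centered finite-difference approximation to df/dt."""
    return (f(t + dt) - f(t - dt)) / (2 * dt)

C = 1e-6                                      # a hypothetical 1 uF capacitor
v = lambda t: math.sin(2 * math.pi * 50 * t)  # test voltage across it

# Capacitor: i(t) = C dv/dt. At t = 0 the sine's slope is 2*pi*50.
i_cap = C * derivative(v, 0.0)
assert abs(i_cap - C * 2 * math.pi * 50) < 1e-6

# A constant voltage gives zero current: the capacitor is an open circuit to DC.
assert C * derivative(lambda t: 5.0, 0.3) == 0.0
```

The same pattern applies to the inductor with the roles of voltage and current exchanged, v(t) = L di/dt.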

3.2.4 Sources

Figure 3.5: The voltage source (left) and current source (right) are like all circuit elements in that they have a particular relationship between the voltage and current defined for them.

Sources of voltage and current are also circuit elements, but they are not linear in the strict sense of linear systems. Another name for a constant-valued voltage source is a battery, and it can be purchased in any supermarket. Current sources, on the other hand, are much harder to acquire; we'll learn why later.

3.3 Ideal and Real-World Circuit Elements

Source and linear circuit elements are ideal circuit elements. One central notion of circuit theory is combining the ideal elements to describe how physical elements operate in the real world. For example, the 1 kΩ resistor you can hold in your hand is not exactly an ideal 1 kΩ resistor. First of all, physical devices are manufactured to close tolerances (the tighter the tolerance, the more money you pay), but never have exactly their advertised values. The fourth band on resistors specifies their tolerance; 10% is common. More pertinent to the current discussion is another deviation from the ideal: If a sinusoidal voltage is placed across a physical resistor, the current will not be exactly proportional to it as frequency becomes high, say above 1 MHz. At very high frequencies, the way the resistor is constructed introduces inductance and capacitance effects. Thus, the smart engineer must be aware of the frequency ranges over which his ideal models match reality well. On the other hand, physical circuit elements can be readily found that well approximate the ideal, but they will always deviate from the ideal in some way.
i = −is v-i relation is v = vs regardless of what the current might be. we'll learn why later.9/>. say above 1 MHz. for the current source. 10% is common. The fourth band on resistors species their tolerance. but they are not linear in the strict sense of linear systems.4 Electric Circuits and Interconnection Laws A 10 circuit 9 connects circuit elements together in a specic conguration designed to transform the source signal (originating from a voltage or current source) into another signalthe outputthat corresponds to the current or voltage dened for a particular circuit element. the current will not be exactly proportional to it as frequency becomes high. 3. One central notion of circuit theory is combining For example. the more money you pay). Another name for a constant-valued voltage source is a battery.30/>.

Recasting this problem mathematically.6: The circuit shown in the top two gures is perhaps the simplest circuit that performs a On the bottom is the block diagram that corresponds to the circuit.44 CHAPTER 3. current variables (Section 3. It would be simplea little too simple at this pointif we could instantly write down the one equation that relates these two voltages. this Available for free at Connexions <http://cnx. and then solving the circuit and element equations. they may be positive or negative according to your denition. ANALOG SIGNAL PROCESSING This circuit is the electrical embodiment of a system having its input provided by a source system producing vin (t). we have one. By specifying the source.9> . input is provided by the voltage source vin and the output is the voltage vout across the resistor label R2 . we have a total of six voltages and currents that must be either specied or determined. Once the values for the voltages and When two people currents are calculated. current ow and voltage drop values for each element will agree. To understand what this circuit accomplishes. The signal processing function. we need to solve some set of equations so that we relate the output voltage vout to the source voltage. Because we have a three-element circuit. but v-i relations for the elements presume that positive current ow is in Do recall in dening your voltage and the same direction as positive voltage drop. Once you dene voltages and currents. You can dene the directions for positive current ow and positive voltage drop any way you like. we need six nonredundant equations to solve for the six unknown voltages and currents.2) that the dene variables according to their individual preferences. As shown in the middle. the signs of their variables may not agree. Until we have more knowledge about how circuits work. we want to determine the voltage across the resistor labeled by its value R2 . 
we analyze the circuitunderstand what it accomplishesby dening currents and voltages for all circuit we must write a set of equations that allow us to nd all the voltages and currents that can be dened for every circuit element. vin + – R1 R2 + vout – (a) i 1 + v1 – i vin + – + v – R1 R2 iout + vout – (b) Source vin(t) (c) System vout(t) Figure 3.

Given any two of these KCL equations.9> . together. 11 http://en.1 In writing KCL equations. one for voltage (Section 3.4. we need the laws that govern the electrical connection of circuit elements. the sum of all currents entering or leaving a node must equal zero. Said another way. resistor fashion.6.6. This line simply means that the two elements are connected Kirchho's Laws. First of all. In the example. the places where circuit elements attach to each other are called nodes. These laws are essential to analyzing this and any circuit. 3. Exercise 3. The v-i relations for the resistors give us two more.45 amounts to providing the source's v-i relation. we can nd the other by adding or subtracting Available for free at Connexions <http://cnx. Figure 3. Electrical engineers tend to draw circuit diagramsschematics in a rectilinear Thus the long line connecting the bottom of the voltage source with the bottom of the resistor is intended to make the diagram look pretty. Thus. exactly one of them is always redundant. one of them is redundant and. determine what a connection among circuit elements means.2: Kirchho 's Voltage Law (KVL)) and one for current (Section 3. 116. What this law means physically is that charge cannot accumulate in a node.wikipedia. vout across the resistor R2 . Two nodes are explicitly indicated in Figure 3.4.7: The circuit shown is perhaps the simplest circuit that performs a signal processing function. a third is at the bottom where the voltage source and R2 are connected. Can you sketch a proof of why this might be true? Hint: It has to do with the fact that charge won't accumulate in one place on its own. below we have a three-node circuit and thus have three KCL equations. where do we get the other three equations we need? What we need to solve every circuit problem are mathematical statements that express how the circuit elements are interconnected. 11 . 
The convention is to discard the equation for the (unlabeled) node at the bottom of the circuit. in mathematical terms. We are only halfway there.) n-node circuit.1 Kirchho's Current Law At every we can discard any one of them. i1 + v1 – vin + – R1 R2 + i vout – vin + – + v – R1 R2 iout + vout – (a) The input is provided by the voltage source labelled (b) vin and the output is the voltage Figure 3.4. what goes in must come out. They are named for Gustav Kirchho a nineteenth century German physicist.1: Kirchho 's Current Law).4. (−i) − i1 = 0 i1 − i2 = 0 i + i2 = 0 Note that the current entering a node is the negative of the current leaving the node. you will nd that in an (Solution on p.

3.4.2 Kirchhoff's Voltage Law (KVL)

The voltage law says that the sum of voltages around every closed loop in the circuit must equal zero. A closed loop has the obvious definition: starting at a node, trace a path through the circuit that returns you to the origin node. KVL expresses the fact that electric fields are conservative: the total work performed in moving a test charge around a closed path is zero. The KVL equation for our circuit is

v1 + v2 − v = 0

In writing KVL equations, we follow the convention that an element's voltage enters with a plus sign when, traversing the closed path, we go from the positive to the negative of the voltage's definition.

For the example circuit (Figure 3.7), we have three v-i relations, two KCL equations, and one KVL equation for solving for the circuit's six voltages and currents:

v-i: v = vin, v1 = R1 i1, vout = R2 iout
KCL: (−i) − i1 = 0, i1 − iout = 0
KVL: −v + v1 + vout = 0

We have exactly the right number of equations! Eventually, we will discover shortcuts for solving circuit problems; for now, we want to eliminate all the variables but vout and determine how it depends on vin and on resistor values. The KVL equation can be rewritten as vin = v1 + vout. Substituting into it the resistors' v-i relations, we have vin = R1 i1 + R2 iout. One of the KCL equations says i1 = iout, which means that vin = R1 iout + R2 iout = (R1 + R2) iout. Solving for the current in the output resistor, iout = vin/(R1 + R2). We have now solved the circuit: we have expressed one voltage or current in terms of sources and circuit-element values. To find any other circuit quantities, we can back substitute this answer into our original equations or ones we developed along the way. Though we temporarily eliminated the quantity we seek, using the v-i relation for the output resistor we obtain it:

vout = R2/(R1 + R2) · vin

Exercise 3.4.2 (Solution on p. 116.)
Referring back to the original circuit, a circuit should serve some useful purpose. What kind of system does our circuit realize and, in terms of element values, what are the system's parameter(s)?

3.5 Power Dissipation in Resistor Circuits

We can find voltages and currents in simple circuits containing resistors and voltage or current sources. We should examine whether these circuit variables obey the Conservation of Power principle: since a circuit is a closed system, it should not dissipate or create energy. For the moment, our approach is to investigate first a resistor circuit's power consumption/creation. Later, we will prove that because of KVL and KCL, all circuits conserve power.
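The elimination carried out above can be sketched numerically. The values R1 = 1 Ω, R2 = 2 Ω, and vin = 3 V below are assumptions chosen for illustration, not from the text.

```python
# Back-substitution of the v-i relations into KVL gives
# vin = (R1 + R2) * iout, from which everything else follows.
R1, R2, vin = 1.0, 2.0, 3.0

iout = vin / (R1 + R2)   # solved current in the output resistor
vout = R2 * iout         # output resistor's v-i relation
v1 = R1 * iout           # voltage across R1 (i1 = iout by KCL)

# KVL around the loop and the voltage-divider result both check out:
assert abs((v1 + vout) - vin) < 1e-12
assert abs(vout - R2 / (R1 + R2) * vin) < 1e-12
```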
Available for free at Connexions <http://cnx.org/content/col10040/1.9>

The instantaneous power consumed/created by every circuit element equals the product of its voltage and current. The total power consumed/created by a circuit equals the sum of each element's power:

P = Σk vk ik

Recall that each element's current and voltage must obey the convention that positive current is defined to enter the positive-voltage terminal. With this convention, a positive value of vk ik corresponds to consumed power, a negative value to created power. Because the total power in a circuit must be zero (P = 0), some circuit elements must create power while others consume it.

Exercise 3.5.1 (Solution on p. 116.)
Calculate the power consumed/created by the resistor R1 in our simple circuit example.

Since resistors are positive-valued, resistors always dissipate power. But where does a resistor's power go? By Conservation of Power, the dissipated power must be absorbed somewhere. The answer is not directly predicted by circuit theory, but is by physics: current flowing through a resistor makes it hot, and its power is dissipated by heat.

note: A physical wire has a resistance and hence dissipates power (it gets warm just like a resistor in a circuit). In fact, the resistance of a wire of length L and cross-sectional area A is given by

R = ρL/A

The quantity ρ is known as the resistivity and presents the resistance of a unit-length, unit-cross-sectional-area wire made of the material constituting the wire. Resistivity has units of ohm-meters. Most materials have a positive value for ρ, which means the longer the wire, the greater the resistance and thus the power dissipated. The thicker the wire, the smaller the resistance. Superconductors have zero resistivity and hence do not dissipate power. If a room-temperature superconductor could be found, electric power could be sent through power lines without loss!

Exercise 3.5.2 (Solution on p. 116.)
Confirm that the source produces exactly the total power consumed by both resistors.
In performing our calculations, we defined the current iout to flow through the positive-voltage terminals of both resistors and found it to equal iout = vin/(R1 + R2). The voltage across the resistor R2 is the output voltage, and we found it to equal vout = R2/(R1 + R2) · vin. Consequently, calculating the power for this resistor yields

P2 = R2/(R1 + R2)² · vin²

This resistor dissipates power because we showed that the power consumed by any resistor equals either of the following:

v²/R or i²R    (3.3)

Since resistors are positive-valued, we conclude that both resistors in our example circuit consume power. The current flowing into the source's positive terminal is −iout, so the power calculation for the source yields

−(vin iout) = −(1/(R1 + R2)) vin²

We conclude that the source provides the power consumed by the resistors, no more, no less. This result should not be surprising, and it is quite general: sources produce power and the circuit elements, especially resistors, consume it. But where do sources get their power? Again, circuit theory does not model how sources are constructed, but the theory decrees that all sources must be provided energy to work.
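The power bookkeeping above can be checked numerically. As before, R1 = 1 Ω, R2 = 2 Ω, and vin = 3 V are assumed illustration values, not from the text.

```python
R1, R2, vin = 1.0, 2.0, 3.0
iout = vin / (R1 + R2)
vout = R2 / (R1 + R2) * vin

p_r1 = R1 * iout**2        # i^2 R form for R1
p_r2 = vout**2 / R2        # v^2 / R form for R2
p_source = vin * (-iout)   # current entering the source's + terminal is -iout

# Conservation of Power: consumed (positive) and created (negative) sum to zero.
assert abs(p_source + p_r1 + p_r2) < 1e-12
```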

3.6 Series and Parallel Circuits

Figure 3.8: The circuit shown is perhaps the simplest circuit that performs a signal processing function. (a) The input is provided by the voltage source labelled vin; (b) the output is the voltage vout across the resistor R2.

The results shown in other modules (circuit elements, KVL and KCL, interconnection laws) with regard to this circuit (Figure 3.8), and the values of other currents and voltages in this circuit as well, have profound implications. Resistors connected in such a way that current from one must flow only into another (currents in all resistors connected this way have the same magnitude) are said to be connected in series. For the two series-connected resistors in the example, the voltage across one resistor equals the ratio of that resistor's value and the sum of resistances times the voltage across the series combination. This concept is so pervasive it has a name: voltage divider:

vout/vin = R2/(R1 + R2)

The input-output relationship for this system, found in this particular case by voltage divider, takes the form of a ratio of the output voltage to the input voltage. In this way, we express how the components used to build the system affect the input-output relationship. Because this analysis was made with ideal circuit elements, we might expect this relation to break down if the input amplitude is too high (will the circuit survive if the input changes from 1 volt to one million volts?) or if the source's frequency becomes too high. In any case, this important way of expressing input-output relationships, as a ratio of output to input, pervades circuit and system theory.
The current i1 is the current flowing out of the voltage source. Because it equals i2, we have that vin/i1 = R1 + R2.

Resistors in series: The series combination of two resistors acts, as far as the voltage source is concerned, as a single resistor having a value equal to the sum of the two resistances. This result is the first of several equivalent circuit ideas: in many cases, a complicated circuit, when viewed from its terminals (the two places to which you might attach a source), appears to be a single circuit element (at best) or a simple combination of elements at worst. Thus, the equivalent circuit for a series combination of resistors is a single resistor having a resistance equal to the sum of its component resistances.

Figure 3.9: The resistor (on the right) is equivalent to the two resistors (on the left) and has a resistance equal to the sum of the resistances of the other two resistors.

Thus, the circuit the voltage source "feels" (through the current drawn from it) is a single resistor having resistance R1 + R2. Note that in making this equivalent circuit, the output voltage can no longer be defined: the output resistor labeled R2 no longer appears. Thus, this equivalence is made strictly from the voltage source's viewpoint.
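The combination rules can be captured in a pair of helper functions. This is a sketch (function names and the numeric values R1 = 1 Ω, R2 = 2 Ω are my own, not from the text) showing the series rule and the voltage-divider relation together.

```python
def series(*resistances):
    """Equivalent resistance of resistors in series: the sum."""
    return sum(resistances)

def parallel(*resistances):
    """Equivalent resistance of resistors in parallel: reciprocal of summed reciprocals."""
    return 1.0 / sum(1.0 / r for r in resistances)

def voltage_divider(vin, r_top, r_bottom):
    """Voltage across r_bottom when r_top and r_bottom are in series across vin."""
    return r_bottom / (r_top + r_bottom) * vin

# With assumed values R1 = 1 ohm and R2 = 2 ohms, the source "feels" 3 ohms:
R1, R2 = 1.0, 2.0
assert series(R1, R2) == 3.0
assert abs(parallel(R1, R2) - R1 * R2 / (R1 + R2)) < 1e-12
assert abs(voltage_divider(3.0, R1, R2) - 2.0) < 1e-12
```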

3.6.1 Parallel Circuits

One interesting simple circuit (Figure 3.10) has two resistors connected side-by-side, what we will term a parallel connection, rather than in series. Here, applying KVL reveals that all the voltages are identical: v1 = v and v2 = v. This result typifies parallel connections. To write the KCL equation, note that the top node consists of the entire upper interconnection section. The KCL equation is iin − i1 − i2 = 0. Using the v-i relations, we find that

iout = R1/(R1 + R2) · iin

Figure 3.10: A simple parallel circuit.

Exercise 3.6.1 (Solution on p. 116.)
Suppose that you replaced the current source in Figure 3.10 by a voltage source. How would iout be related to the source voltage? Based on this result, what purpose does this revised circuit have?

This circuit highlights some important properties of parallel circuits. You can easily show that the parallel combination of R1 and R2 has the v-i relation of a resistor having resistance (1/R1 + 1/R2)⁻¹ = R1R2/(R1 + R2). A shorthand notation for this quantity is R1 ∥ R2. As the reciprocal of resistance is conductance, we can say that, for a parallel combination of resistors, the equivalent conductance is the sum of the conductances.

Figure 3.11: A current divider: the current i splits between the parallel resistors R1 and R2.

Similar to voltage divider for series resistances, we have current divider for parallel resistances. The current through a resistor in parallel with another is the ratio of the conductance of the first to the sum of the conductances:

i2 = G2/(G1 + G2) · i

Expressed in terms of resistances, the current divider takes the form of the resistance of the other resistor divided by the sum of resistances:

i2 = R1/(R1 + R2) · i

Figure 3.12: The parallel combination of R1 and R2 is equivalent to a single resistor of resistance R1R2/(R1 + R2).

Figure 3.13: The simple attenuator circuit (Figure 3.8) is attached to an oscilloscope's input, forming a cascade of source, system, and sink. The input-output relation for the circuit without a load is vout = R2/(R1 + R2) · vin.

In system-theory terms, we have a complete system built from a cascade of three systems: a source, a signal processing system (simple as it is), and a sink. A sink is called a load in circuits; we describe a system-theoretic sink as a load resistance RL. Thus, we want to pass our circuit's output to a sink. We must analyze afresh how this revised circuit, shown in Figure 3.13, works. Rather than defining eight variables and solving for the current in the load resistor, let's take a hint from our other analyses (the series rules and the parallel rules). Resistors R2 and RL are in a parallel configuration: the voltages across each resistor are the same, while the currents are not. Because the voltages are the same, we can find the current through each from their v-i relations: i2 = vout/R2 and iL = vout/RL. Considering the node where all three resistors join, KCL says that the sum of the three currents must equal zero. Said another way, the current entering the node through R1 must equal the sum of the other two currents leaving the node: i1 = i2 + iL, which means that i1 = vout (1/R2 + 1/RL). Let Req = (1/R2 + 1/RL)⁻¹ denote the equivalent resistance of the parallel combination of R2 and RL. Using R1's v-i relation, the voltage across it is v1 = R1 vout/Req. The KVL equation written around the leftmost loop is vin = v1 + vout; substituting for v1, we find

vin = vout (R1/Req + 1), or vout/vin = Req/(R1 + Req)
We cannot measure voltages reliably unless the measurement device has little effect on what we are trying to measure. Suppose we want to pass the output signal into a voltage measurement device, such as an oscilloscope or a voltmeter. For most applications, we can represent these measurement devices as a resistor, with the current passing through it driving the measurement device through some type of display. Thus, the input-output relation with the load attached has the form of voltage divider, but it does not equal the input-output relation of the circuit without the voltage measurement device. We should look more carefully to determine whether any values for the load resistance would lessen its impact on the circuit. Comparing the input-output relations before and after, what we need is Req ≈ R2. As Req = (1/R2 + 1/RL)⁻¹, the approximation would apply if 1/R2 ≫ 1/RL, that is, if RL ≫ R2. This is the condition we seek:

Voltage measurement: Voltage measurement devices must have large resistances compared with that of the resistor across which the voltage is to be measured.
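The loading condition can be illustrated numerically. The sketch below (assumed values R1 = 1 Ω, R2 = 2 Ω, and the function name are mine, not from the text) shows that a load much larger than R2 barely moves the gain, while a load equal to R2 changes it substantially.

```python
def loaded_gain(r1, r2, rl):
    """Gain vout/vin of the attenuator with a load rl across r2."""
    req = 1.0 / (1.0 / r2 + 1.0 / rl)   # parallel combination of R2 and RL
    return req / (r1 + req)

R1, R2 = 1.0, 2.0
unloaded = R2 / (R1 + R2)

# A load 100x the output resistor perturbs the gain by well under 1%:
assert abs(loaded_gain(R1, R2, 100 * R2) - unloaded) / unloaded < 0.01
# A load equal to R2 perturbs it by far more than 10%:
assert abs(loaded_gain(R1, R2, R2) - unloaded) / unloaded > 0.1
```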

Exercise 3.6.2 (Solution on p. 116.)
Let's be more precise: how much larger would a load resistance need to be to affect the input-output relation by less than 10%? By less than 1%?

Example 3.1
Figure 3.14: We want to find the total resistance of the example circuit (resistors R1, R2, R3, R4).

To apply the series and parallel combination rules, it is best to first determine the circuit's structure: what is in series with what and what is in parallel with what, at both small- and large-scale views. We have R2 in parallel with R3; this combination is in series with R4. This series combination is in parallel with R1. Note that in determining this structure, we started away from the terminals and worked toward them. In most cases, this approach works well; try it first. The total resistance expression mimics the structure:

RT = R1 ∥ (R2 ∥ R3 + R4)
RT = (R1R2R3 + R1R2R4 + R1R3R4) / (R1R2 + R1R3 + R2R3 + R2R4 + R3R4)

Such complicated expressions typify circuit "simplifications." A simple check for accuracy is the units: each component of the numerator should have the same units (here Ω³), as should each component of the denominator (Ω²). The entire expression is to have units of resistance; thus, the ratio of the numerator's and denominator's units should be ohms. Checking units does not guarantee accuracy, but can catch many errors.
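The structural simplification and the closed form can be checked against each other numerically. The component values below are assumptions for illustration only.

```python
def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

# Assumed values, not from the text:
R1, R2, R3, R4 = 1.0, 2.0, 3.0, 4.0

# Mimic the structure: R2 || R3, in series with R4, in parallel with R1.
RT = parallel(R1, parallel(R2, R3) + R4)

# Closed form from the example; note the units check: ohms^3 over ohms^2.
numerator = R1*R2*R3 + R1*R2*R4 + R1*R3*R4
denominator = R1*R2 + R1*R3 + R2*R3 + R2*R4 + R3*R4
assert abs(RT - numerator / denominator) < 1e-12
```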
Another valuable lesson emerges from this example concerning the difference between cascading systems and cascading circuits. In system theory, systems can be cascaded without changing the input-output relation of intermediate systems. In cascading circuits, this ideal is rarely true unless the circuits are so designed. Design is in the hands of the engineer; he or she must recognize what have come to be known as loading effects. In our simple circuit, you might think that making the load resistance RL large enough would do the trick. Because the resistors R1 and R2 can have virtually any value, you can never make the resistance of your voltage measurement device big enough. Said another way, a circuit cannot be designed in isolation that will work in cascade with all other circuits. Electrical engineers deal with this situation through the notion of specifications: under what conditions will the circuit perform as designed? Thus, you will find that oscilloscopes and voltmeters have their internal resistances clearly stated, enabling you to determine whether the voltage you measure closely equals what was present before they were attached to your circuit. Furthermore, since our resistor circuit functions as an attenuator, with the attenuation (a fancy word for gains less than one) depending only on the ratio of the two resistor values, R2/(R1 + R2) = (1 + R1/R2)⁻¹, we can select virtually any values for the two resistances to achieve the desired attenuation.

The designer of this circuit must thus specify not only what the attenuation is, but also the resistance values employed, so that integrators (people who put systems together from component systems) can combine systems together and have a chance of the combination working.

We have found that the way to think about circuits is to locate and group parallel and series resistor combinations. Those resistors not involved with variables of interest can be collapsed into a single resistance. This result is known as an equivalent circuit: from the viewpoint of a pair of terminals, a group of resistors functions as a single resistor, the resistance of which can usually be found by applying the parallel and series rules.

Figure 3.15 (series and parallel combination rules) summarizes the series and parallel combination results. These results are easy to remember and very useful. Keep in mind that for series combinations, voltage and resistance are the key quantities, while for parallel combinations current and conductance are more important. In series combinations, the currents through each element are the same; in parallel ones, the voltages are the same.

(a) Series combination rule: RT = Σn Rn, with vn = (Rn/RT) v.
(b) Parallel combination rule: GT = Σn Gn, with in = (Gn/GT) i.

Exercise 3.6.3 (Solution on p. 116.)
Contrast a series combination of resistors with a parallel one. Which variable (voltage or current) is the same for each and which differs? What are the equivalent resistances? When resistors are placed in series, is the equivalent resistance bigger, in between, or smaller than the component resistances? What is this relationship for a parallel combination?

3.7 Equivalent Circuits: Resistors and Sources

This result generalizes to include sources in a very interesting and useful way. Let's consider our simple attenuator circuit (Figure 3.16) from the viewpoint of the output terminals. We want to find the v-i relation for the output terminal pair, and then find the equivalent circuit for the boxed circuit. To perform this calculation, use the circuit laws and element relations, but do not attach anything to the output terminals. We seek the relation between v and i that describes the kind of element that lurks within the dashed box. The result is

v = (R1 ∥ R2) i + R2/(R1 + R2) · vin    (3.5)

If the source were zero, it could be replaced by a short circuit, which would confirm that the circuit does indeed function as a parallel combination of resistors. However, the source's presence means that the circuit is not well modeled as a resistor.

Figure 3.17: The Thévenin equivalent circuit.

If we consider the simple circuit of Figure 3.17, we find it has the v-i relation at its terminals of

v = Req i + veq    (3.6)

Comparing the two v-i relations, we find that they have the same form. In this case, the Thévenin equivalent resistance is Req = R1 ∥ R2 and the Thévenin equivalent source has voltage veq = R2/(R1 + R2) · vin. Thus, from the viewpoint of the terminals, you cannot distinguish the two circuits. Because the Thévenin equivalent circuit has fewer elements, it is easier to analyze and understand than any other alternative. For any circuit containing resistors and sources, the v-i relation will be of the form

v = Req i + veq
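A numeric sketch of the attenuator's Thévenin parameters, with assumed values R1 = 1 Ω, R2 = 2 Ω, vin = 3 V (not from the text): the equivalent reproduces the terminal v-i relation, giving veq with the terminals open and zero volts at the short-circuit current.

```python
R1, R2, vin = 1.0, 2.0, 3.0
Req = R1 * R2 / (R1 + R2)          # R1 || R2
veq = R2 / (R1 + R2) * vin         # open-circuit (divider) voltage

def terminal_voltage(i):
    """Terminal v-i relation of the Thevenin equivalent: v = Req*i + veq."""
    return Req * i + veq

assert abs(terminal_voltage(0.0) - veq) < 1e-12   # open circuit: v = veq
i_sc = -veq / Req                                 # current that forces v = 0
assert abs(terminal_voltage(i_sc)) < 1e-12        # short circuit
```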

and the Thévenin equivalent circuit for any such circuit is that of Figure 3.17. This equivalence applies no matter how many sources or resistors may be present in the circuit. In the example below, we know the circuit's construction and element values, and derive the equivalent source and resistance. Because Thévenin's theorem applies in general, we should also be able to make measurements or calculations only from the terminals to determine the equivalent circuit.

Starting with the open/short-circuit approach, let's first find the open-circuit voltage voc. Let the terminals be open-circuited, which has the effect of setting the current i to zero. Because no current flows through the resistor, the voltage across it is zero (remember, Ohm's Law says that v = Ri). Consequently, by applying KVL we have that the so-called open-circuit voltage voc equals the Thévenin equivalent voltage. Now consider the situation when we set the terminal voltage to zero (short-circuit it) and measure the resulting current. Referring to the equivalent circuit, the source voltage now appears entirely across the resistor, leaving the short-circuit current to be isc = −veq/Req. From this property, we can determine the equivalent resistance:

veq = voc    (3.7)
Req = −voc/isc    (3.8)

Exercise 3.7.1 (Solution on p. 116.)
Use the open/short-circuit approach to derive the Thévenin equivalent of the circuit shown in Figure 3.18.

Figure 3.18: A voltage source vin in series with R1, with R2 across the output terminals.

Example 3.2
Figure 3.19: A current source iin with resistors R1, R2, and R3.

For the circuit depicted in Figure 3.19, let's derive its Thévenin equivalent two different ways, starting with the open-circuit voltage.

With the terminals open-circuited, R1 is in parallel with the series combination of R2 and R3, and no current flows through the terminals. We have a current divider relationship for the current through R3, and the open-circuit voltage across R3 is

voc = (R1 R3 / (R1 + R2 + R3)) · iin

When we short-circuit the terminals, no voltage appears across R3, and thus no current flows through it. In short, R3 does not affect the short-circuit current and can be eliminated. We again have a current divider relationship:

isc = −(R1/(R1 + R2)) · iin

Thus, the Thévenin equivalent resistance is Req = −voc/isc = R3(R1 + R2)/(R1 + R2 + R3), and the Thévenin equivalent source is veq = voc.

To verify, let's find the equivalent resistance by reaching inside the circuit and setting the current source to zero. Because the current is now zero, we can replace the current source by an open circuit. From the viewpoint of the terminals, resistor R3 is now in parallel with the series combination of R1 and R2. Thus, Req = R3 ∥ (R1 + R2), and we obtain the same result.

As you might expect, equivalent circuits come in two forms: the voltage-source oriented Thévenin equivalent and the current-source oriented Mayer-Norton equivalent (Figure 3.20). To derive the latter, the v-i relation for the Thévenin equivalent can be written as

v = Req i + veq    (3.9)

or

i = v/Req − ieq    (3.10)

where ieq = veq/Req is the Mayer-Norton equivalent source. The Mayer-Norton equivalent shown in Figure 3.20 can easily be shown to have this v-i relation. Note that both variations have the same equivalent resistance, and the short-circuit current equals the negative of the Mayer-Norton equivalent source.

Figure 3.20: Mayer-Norton equivalent. All circuits containing sources and resistors can be described by simpler equivalent circuits. Choosing the one to use depends on the application.

Exercise 3.7.2 (Solution on p. 116.)
Find the Mayer-Norton equivalent circuit for the circuit below (Figure 3.21).

Equivalent circuits can be used in two basic ways. The first is to simplify the analysis of a complicated circuit by realizing that any portion of a circuit can be described by either a Thévenin or a Mayer-Norton equivalent. Which one is used depends on whether what is attached to the terminals is a series configuration (making the Thévenin equivalent the best choice) or a parallel one (making the Mayer-Norton the best).

Another application is modeling. When we buy a flashlight battery, either equivalent circuit can accurately describe it. These models help us understand the limitations of a battery. Since batteries are labeled with a voltage specification, they should serve as voltage sources, and the Thévenin equivalent serves as the natural choice. If a load resistance RL is placed across its terminals, the voltage output can be found using voltage divider: v = veq RL/(RL + Req). If we have a load resistance much larger than the battery's equivalent resistance, then, to a good approximation, the battery does serve as a voltage source. If the load resistance is comparable to the equivalent resistance, we certainly don't have a voltage source (the output voltage depends directly on the load resistance). Consider now the Mayer-Norton equivalent: the current through the load resistance is given by current divider and equals i = −ieq Req/(RL + Req). For a current that does not vary with the load resistance, the load resistance should be much smaller than the equivalent resistance. Thus, when you buy a battery, you get a voltage source if its equivalent resistance is much smaller than the equivalent resistance of the circuit to which you attach it.
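The two equivalents carry the same information, so converting between them is a one-line calculation. A small sketch (function names are mine, not from the text) using ieq = veq/Req:

```python
def thevenin_to_norton(veq, req):
    """Convert (veq, Req) to the Mayer-Norton pair (ieq, Req)."""
    return veq / req, req

def norton_to_thevenin(ieq, req):
    """Convert (ieq, Req) back to the Thevenin pair (veq, Req)."""
    return ieq * req, req

# Round trip with assumed values veq = 2 V, Req = 0.5 ohm:
veq, req = 2.0, 0.5
ieq, _ = thevenin_to_norton(veq, req)
assert ieq == 4.0
assert norton_to_thevenin(ieq, req) == (veq, req)
```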
On the other hand, if you attach it to a circuit having a small equivalent resistance, you bought a current source. If the load resistance is comparable to the battery's equivalent resistance, the battery serves as neither a voltage source nor a current source; whether the voltage-source or current-source description applies depends on the resistance of the circuit to which you attach it.

Léon Charles Thévenin: He was an engineer with France's Postes, Télégraphe et Téléphone. In 1883, he published (twice!) a proof of what is now called the Thévenin equivalent while developing ways of teaching electrical engineering concepts at the École Polytechnique. He did not realize that the same result had been published by Hermann Helmholtz, the renowned nineteenth-century physicist, thirty years earlier.

Hans Ferdinand Mayer: After earning his doctorate in physics in 1920, he turned to communications engineering when he joined Siemens & Halske in 1922. In 1926, he published in a German technical journal the Mayer-Norton equivalent. During his interesting career, he rose to lead Siemens' Central Laboratory in 1936, surreptitiously leaked to the British all he knew of German warfare capabilities a month after the Nazis invaded Poland, was arrested by the Gestapo in 1943 for listening to BBC radio broadcasts, spent two years in Nazi concentration camps, and went to the United States for four years, working for the Air Force and Cornell University before returning to Siemens in 1950. He rose to a position on Siemens' Board of Directors before retiring.

Edward Norton: Edward Norton was an electrical engineer who worked at Bell Laboratory from its inception in 1922. In the same month when Mayer's paper appeared, Norton wrote in an internal technical memorandum a paragraph describing the current-source equivalent. No evidence suggests Norton knew of Mayer's publication.

3.8 Circuits with Capacitors and Inductors

Figure 3.22: A simple RC circuit.

Let's consider a circuit having something other than resistors and sources. Because of KVL, we know that vin = vR + vout. The current through the capacitor is given by i = C dvout/dt, and this current equals that passing through the resistor. Substituting vR = Ri into the KVL equation and using the v-i relation for the capacitor, we arrive at

RC dvout/dt + vout = vin    (3.11)

The input-output relation for circuits involving energy storage elements takes the form of an ordinary differential equation, which we must solve to determine what the output voltage is for a given input. In contrast to resistive circuits, where we obtain an explicit input-output relation, we now have an implicit relation that requires more work to obtain answers.

At this point, we could learn how to solve differential equations. Note first that even finding the differential equation relating an output variable to a source is often very tedious: we would have to slog our way through the circuit equations, simplifying them until we finally found the equation that related the source(s) to the output. At the turn of the twentieth century, a method was discovered that not only made finding the differential equation easy, but also simplified the solution process in the most common situation. Although not original with him, Charles Steinmetz presented the key paper describing the impedance approach in 1893. It allows circuits containing capacitors and inductors to be solved with the same methods we have learned to solve resistor circuits.
The parallel and series combination rules that apply to resistors don't directly apply when capacitors and inductors occur. To use impedances, we must master complex numbers. Though the arithmetic of complex numbers is mathematically more complicated than that of real numbers, the increased insight into circuit behavior and the ease with which circuits are solved with impedances is well worth the diversion. But more importantly, the impedance concept is central to engineering, having a reach far beyond just circuits.

3.9 The Impedance Concept

Figure 3.23: A simple RC circuit.

Rather than solving the differential equation that arises in circuits containing capacitors and inductors, let's pretend that all sources in the circuit are complex exponentials having the same frequency. Although this pretense can only be mathematically true, this fiction will greatly ease solving the circuit no matter what the source really is. To appreciate why this should be true, let's investigate how each circuit element behaves when either the voltage or current is a complex exponential. For the resistor, v = Ri. When v = V e^(j2πft), then i = (V/R) e^(j2πft). Thus, if the resistor's voltage is a complex exponential, so is the current, with an amplitude I = V/R (determined by the resistor's v-i relation) and a frequency the same as the voltage. Clearly, if the current were assumed to be a complex exponential, so would the voltage. For the capacitor, i = C dv/dt. Letting the voltage be a complex exponential, we have i = CV j2πf e^(j2πft); the amplitude of this complex exponential is I = CV j2πf. Finally, for the inductor, where v = L di/dt, assuming the current to be a complex exponential results in the voltage having the form v = LI j2πf e^(j2πft), making its complex amplitude V = LI j2πf.
For the above example amplitude RC circuit (Figure 3. Finally. having v-i relations and the same frequency as the source.

and has a phase of decreases with frequency. Consequently. ANALOG SIGNAL PROCESSING Impedance i R (a) Figure 3. we use impedance.10 Time and Frequency Domains circuit in what is known as the 21 When we nd the dierential equation relating the source and the output.13) the complex amplitudes of the voltages obey KVL. What we emphasize here is that it is often easier to nd the output if we use impedances. the magnitude of the capacitor's impedance is inversely related to frequency. Available for free at Connexions <http://cnx.24: (a) Resistor: + v – i C (b) + v – i L (c) + v – ZR = R (b) Capacitor: ZC = 1 (c) Inductor: j2πf C ZL = j2πf L The impedance is. Because impedances depend only on frequency. What we have discovered is that source(s) equaling a complex exponential of the same frequency forces all circuit variables to be complex exponentials of the same frequency. suppose we had a circuit element where the voltage equaled the square of the current: v (t) = Ki2 (t). Because for linear circuit elements the complex amplitude of voltage is proportional to the complex amplitude of current Because complex amplitudes for voltage and current also obey Kircho's laws. For example. as if they were resistors. 2 This observation means that if the current is a complex exponential and has constant amplitude. meaning that voltage and current no longer had the same frequency and that their ratio was time-dependent. we have −π. For example. When voltages around a loop are all complex exponentials of the same frequency. This situation occurs because the circuit elements are linear and time-invariant. frequency-dependent quantity. which depends only on the source's frequency and element values. we are faced with solving the time V = ZI  assuming complex exponential sources means circuit elements behave 3. Let's consider Kircho 's circuit laws. If i (t) = Iej2πf t .9> . where instead of resistance. 
the ratio of voltage to current for each element equals the ratio of their complex we nd ourselves in the 21 This content is available online at <http://cnx. we can solve circuits using voltage and current divider and the series and parallel combination rules by considering the elements to be impedances. in general. the amplitude of the voltage nn vn = = 0 nn Vn ej2πf t (3.12) which means Vn = 0 nn (3. v (t) = KI e 2 j2π2f t . a complex-valued.60 CHAPTER 3. We can easily imagine that the complex amplitudes of the currents obey KCL.10/>.

3.10 Time and Frequency Domains

When we find the differential equation relating the source and the output, we are faced with solving the circuit in what is known as the time domain. Because for a linear circuit element the complex amplitude of the voltage is proportional to the complex amplitude of the current (V = ZI), assuming complex exponential sources means circuit elements behave as if they were resistors, where instead of resistance we use impedance: the ratio of voltage to current for each element equals the ratio of their complex amplitudes. When we use impedances, we find ourselves in what is known as the frequency domain. Because of the importance of this approach, let's go over how it works.

To appreciate how the time domain, the frequency domain, and impedances fit together, consider the time domain and frequency domain to be two work rooms (Figure 3.25 (Two Rooms)). Since you can't be two places at the same time, you are faced with solving your circuit problem in one of the two rooms at any point in time. Impedances and complex exponentials are the way you get between the two rooms. Security guards make sure you don't try to sneak time-domain variables into the frequency-domain room and vice versa. In the time domain, signals can have any form. Passing into the frequency-domain work room, signals are represented entirely by complex amplitudes.

Two Rooms
Figure 3.25: The time and frequency domains are linked by assuming signals are complex exponentials. The time-domain room contains only signals, and its tools are differential equations, KVL, KCL, and superposition (vout(t) = …). The frequency-domain room contains only complex amplitudes (v(t) = V e^(j2πft), i(t) = I e^(j2πft)), and its tools are impedances, transfer functions, voltage and current divider, KVL, KCL, and superposition (Vout = Vin·H(f)).

The entire point of using impedances is to get rid of time and concentrate on the complex amplitudes. A common error in using impedances is keeping the time-dependent part, the complex exponential, in the fray. Only after we find the result in the frequency domain do we go back to the time domain and put things back together again. With a source equaling a complex exponential, all variables in a linear circuit will also be complex exponentials having the same frequency; the circuit's only remaining "mystery" is what each variable's complex amplitude might be. Even if the source is not a complex exponential (say, a voltage source having voltage vin = p(t), a pulse), we still let vin = Vin e^(j2πft); this pretense greatly simplifies solving the circuit, and we'll learn how to "get the pulse back" later. As we unfold the impedance concept, we'll see that the powerful use of impedances suggested by Steinmetz alleviates us from solving differential equations and suggests a general way of thinking about circuits. We do this because the impedance approach simplifies finding how input and output are related.
Example 3.3

To illustrate the impedance approach, we refer to the RC circuit (Figure 3.26) below, and we assume that vin = Vin e^(j2πft).

Simple Circuits
Figure 3.26: (a) A simple RC circuit. (b) The impedance counterpart for the RC circuit. Note that the source and output voltage are now complex amplitudes.

Using impedances, we consider the source to be a complex number (Vin here) and the elements to be impedances. The complex amplitude of the output voltage Vout can then be found using voltage divider:

Vout = (ZC/(ZC + ZR)) Vin = ((1/(j2πfC))/((1/(j2πfC)) + R)) Vin = (1/(j2πfRC + 1)) Vin

If we refer to the differential equation for this circuit (shown in Circuits with Capacitors and Inductors (Section 3.8) to be RC dvout/dt + vout = vin), letting the output and input voltages be complex exponentials, we obtain the same relationship between their complex amplitudes. Thus, using impedances is equivalent to using the differential equation and solving it when the source is a complex exponential.

In the process of defining impedances, note that the factor j2πf arises from the derivative of a complex exponential. We can reverse the impedance process, and revert back to the differential equation. If we cross-multiply the relation Vout (j2πfRC + 1) = Vin and then put the complex exponentials back in, we have

RC j2πf Vout e^(j2πft) + Vout e^(j2πft) = Vin e^(j2πft)

which is nothing more than

RC dvout/dt + vout = vin

This is the same equation that was derived much more tediously in Circuits with Capacitors and Inductors (Section 3.8). Finding the differential equation relating output to input is far simpler when we use impedances than with any other technique. Once we assume complex exponential sources, we can solve, using series and parallel combination rules, how the complex amplitude of any variable relates to the source's complex amplitude.

Exercise 3.10.1 (Solution on p. 116.)
Suppose you had an expression where a complex amplitude was divided by j2πf. What time-domain operation corresponds to this division?

3.11 Power in the Frequency Domain

Recalling that the instantaneous power consumed by a circuit element, or by an equivalent circuit that represents a collection of elements, equals the voltage times the current entering the positive-voltage terminal, p(t) = v(t) i(t), what is the equivalent expression using impedances? The resulting calculation reveals more about power consumption in circuits and introduces the concept of average power.

When all sources produce sinusoids of frequency f, the voltage and current for any circuit element or collection of elements are sinusoids of the same frequency:

v(t) = |V| cos(2πft + φ)
i(t) = |I| cos(2πft + θ)

Here, the complex amplitude of the voltage V equals |V| e^(jφ) and that of the current is |I| e^(jθ). We can also write the voltage and current in terms of their complex amplitudes using Euler's formula (Section 2.1):

v(t) = (1/2)(V e^(j2πft) + V* e^(−j2πft))
i(t) = (1/2)(I e^(j2πft) + I* e^(−j2πft))

Multiplying these two expressions and simplifying gives

p(t) = (1/4)(V I* + V* I + V I e^(j4πft) + V* I* e^(−j4πft))
     = (1/2) Re(V I*) + (1/2) Re(V I e^(j4πft))
     = (1/2) Re(V I*) + (1/2) |V||I| cos(4πft + φ + θ)

We define (1/2) V I* to be complex power. The real part of complex power is the first term, and since it does not change with time, it represents the power consistently consumed/produced by the circuit. The second term varies with time at a frequency twice that of the source; conceptually, this term details how power "sloshes" back and forth in the circuit because of the sinusoidal source. From another viewpoint, energy is the integral of power, and as the integration interval increases, the first term appreciates while the time-varying term "sloshes." Consequently, the real part of complex power represents long-term energy consumption/production, and the most convenient definition of the average power consumed/produced by any circuit is in terms of complex amplitudes:

Pave = (1/2) Re(V I*)   (3.14)

Exercise 3.11.1 (Solution on p. 116.)
Suppose the complex amplitudes of the voltage and current have fixed magnitudes. What phase relationship between voltage and current maximizes the average power? In other words, how are φ and θ related for maximum power dissipation?
Because the complex amplitudes of the voltage and current are related by the equivalent impedance, average power can also be written as

Pave = (1/2) Re(Z) |I|² = (1/2) Re(1/Z) |V|²

These expressions generalize the results (3.3) we obtained for resistor circuits. We have derived a fundamental result: Only the real part of impedance contributes to long-term power dissipation. Of the circuit elements, only the resistor dissipates power; capacitors and inductors dissipate no power in the long term. It is important to realize that these statements apply only for sinusoidal sources: if you turn on a constant voltage source in an RC circuit, charging the capacitor does consume power.

Exercise 3.11.2 (Solution on p. 116.)
In an earlier problem (Section 1.5.1: RMS Values), we found that the rms value of a sinusoid was its amplitude divided by √2. What is average power expressed in terms of the rms values of the voltage and current (Vrms and Irms respectively)?

3.12 Equivalent Circuits: Impedances and Sources

When we have circuits with capacitors and/or inductors as well as resistors and sources, Thévenin and Mayer-Norton equivalent circuits can still be defined by using impedances and complex amplitudes for voltages and currents, as shown in Figure 3.27 (Equivalent Circuits). For any circuit containing sources, resistors, capacitors, and inductors, the input-output relation for the complex amplitudes of the terminal voltage and current is

V = Zeq I + Veq
I = V/Zeq − Ieq

with Veq = Zeq Ieq.
Equivalent Circuits

Figure 3.27: (a) Equivalent circuits with resistors (sources and resistors): the Thévenin equivalent is veq in series with Req; the Mayer-Norton equivalent is ieq in parallel with Req. (b) Equivalent circuits with impedances (sources, resistors, capacitors, inductors): the Thévenin equivalent is Veq in series with Zeq; the Mayer-Norton equivalent is Ieq in parallel with Zeq.

Comparing the first, simpler, figure with the slightly more complicated second figure, we see two differences. First of all, the terminal and source variables are now complex amplitudes, which carries the implicit assumption that the voltages and currents are single complex exponentials, all having the same frequency. Secondly, more circuits (all those containing linear elements, in fact) have equivalent circuits.
Simple RC Circuit
Figure 3.28: A simple RC circuit with source Vin, resistor R, capacitor C, and terminal variables I and V.

Example 3.4

Let's find the Thévenin and Mayer-Norton equivalent circuits for Figure 3.28 (Simple RC Circuit). The open-circuit voltage and short-circuit current techniques still work, except we use impedances and complex amplitudes. The open-circuit voltage corresponds to the transfer function we have already found. When we short the terminals, the capacitor no longer has any effect on the circuit, and the short-circuit current Isc equals Vin/R. The equivalent impedance can be found by setting the source to zero and finding the impedance using series and parallel combination rules. In our case, the resistor and capacitor are in parallel once the voltage source is removed (setting it to zero amounts to replacing it with a short-circuit). Thus, Zeq = R ‖ (1/(j2πfC)) = R/(1 + j2πfRC). Consequently, we have

Veq = (1/(1 + j2πfRC)) Vin
Ieq = (1/R) Vin
Zeq = R/(1 + j2πfRC)

Again, we should check the units of our answer. Note in particular that j2πfRC must be dimensionless. Is it?

3.13 Transfer Functions

The ratio of the output and input amplitudes for Figure 3.29 (Simple Circuit), known as the transfer function or the frequency response, is given by

Vout/Vin = H(f) = 1/(j2πfRC + 1)   (3.15)

Implicit in using the transfer function is that the input is a complex exponential, and the output is also a complex exponential having the same frequency. The transfer function reveals how the circuit modifies the input amplitude in creating the output amplitude. Thus, the transfer function completely describes how the circuit processes the input complex exponential to produce the output complex exponential; the circuit's function is summarized by its transfer function. In fact, circuits are often designed to meet transfer function specifications.
Simple Circuit
Figure 3.29: A simple RC circuit.

Because transfer functions are complex-valued, frequency-dependent quantities, we can better appreciate a circuit's function by examining the magnitude and phase of its transfer function (Figure 3.30 (Magnitude and phase of the transfer function)).

Magnitude and phase of the transfer function
Figure 3.30: Magnitude and phase of the transfer function of the RC circuit shown in Figure 3.29 (Simple Circuit) when 1/(RC) = 1. (a) |H(f)| = 1/√((2πfRC)² + 1). (b) ∠H(f) = −arctan(2πfRC). Do note that the magnitude has even symmetry: the negative-frequency portion is a mirror image of the positive-frequency portion, |H(−f)| = |H(f)|. The phase has odd symmetry: ∠H(−f) = −∠H(f).

This transfer function has many important properties and provides all the insights needed to determine how the circuit functions. First of all, note that we can compute the frequency response for both positive and negative frequencies. Recall that sinusoids consist of the sum of two complex exponentials, one having the negative frequency of the other. The symmetry properties noted in the figure mean that we don't need to plot the negative frequency component; we know what it is from the positive frequency part. These properties of this specific example apply for all transfer functions associated with circuits. We will consider how the circuit acts on a sinusoid soon.
Secondly, the magnitude equals 1/√2 of its maximum gain (1 at f = 0) when 2πfRC = 1 (the two terms in the denominator of the magnitude are then equal). This frequency,

fc = 1/(2πRC)

defines the boundary between two operating ranges.

• For frequencies below this frequency, the circuit does not much alter the amplitude of the complex exponential source.
• For frequencies greater than fc, the circuit strongly attenuates the amplitude. Thus, when the source frequency is in this range, the circuit's output has a much smaller amplitude than that of the source.

For these reasons, this frequency is known as the cutoff frequency. In this circuit the cutoff frequency depends only on the product of the resistance and the capacitance. Thus, a cutoff frequency of 1 kHz occurs when 1/(2πRC) = 10³, or RC = 10⁻³/(2π) = 1.59 × 10⁻⁴. Thus resistance-capacitance combinations of 1.59 kΩ and 100 nF or 10 Ω and 15.9 µF result in the same cutoff frequency.

The phase shift caused by the circuit at the cutoff frequency precisely equals −π/4. Thus, below the cutoff frequency, phase is little affected, but at higher frequencies, the phase shift caused by the circuit approaches −π/2. This phase shift corresponds to the difference between a cosine and a sine.

We can use the transfer function to find the output when the input voltage is a sinusoid, for two reasons. First of all, a sinusoid is the sum of two complex exponentials, each having a frequency equal to the negative of the other. Secondly, because the circuit is linear, superposition applies. If the source is a sine wave, we know that

vin(t) = A sin(2πft) = (A/(2j)) (e^(j2πft) − e^(−j2πft))   (3.16)

Since the input is the sum of two complex exponentials, we know that the output is also a sum of two similar complex exponentials, the only difference being that the complex amplitude of each is multiplied by the transfer function evaluated at each exponential's frequency:

vout(t) = (A/(2j)) H(f) e^(j2πft) − (A/(2j)) H(−f) e^(−j2πft)   (3.17)

As noted earlier, the transfer function is most conveniently expressed in polar form: H(f) = |H(f)| e^(j∠H(f)). Furthermore, |H(−f)| = |H(f)| (even symmetry of the magnitude) and ∠H(−f) = −∠H(f) (odd symmetry of the phase). The output voltage expression simplifies to

vout(t) = (A/(2j)) |H(f)| e^(j(2πft + ∠H(f))) − (A/(2j)) |H(f)| e^(−j(2πft + ∠H(f)))
        = A |H(f)| sin(2πft + ∠H(f))   (3.18)

The circuit's output to a sinusoidal input is also a sinusoid, having a gain equal to the magnitude of the circuit's transfer function evaluated at the source frequency and a phase equal to the phase of the transfer function at the source frequency. It will turn out that this input-output relation description applies to any linear circuit having a sinusoidal source.

Exercise 3.13.1 (Solution on p. 117.)
This input-output property is a special case of a more general result. Show that if the source can be written as the imaginary part of a complex exponential, vin(t) = Im(V e^(j2πft)), the output is given by vout(t) = Im(V H(f) e^(j2πft)). Show that a similar result also holds for the real part.
As we have argued, the notion of impedance arises when we assume the sources are complex exponentials. This assumption may seem restrictive; what would we do if the source were a unit step? When we use impedances to find the transfer function between the source and the output variable, we can derive from it the differential equation that relates input and output. The differential equation applies no matter what the source may be. Furthermore, it is far simpler to use impedances to find the differential equation (because we can use series and parallel combination rules) than any other method. In this sense, we have not lost anything by temporarily pretending the source is a complex exponential. In fact we can also solve the differential equation using impedances! Thus, despite the apparent restrictiveness of impedances, assuming complex exponential sources is actually quite general.

3.14 Designing Transfer Functions

If the source consists of two (or more) signals, each of which has a frequency different from the others, we know from linear system theory that the output voltage equals the sum of the outputs produced by each signal alone. In short, linear circuits are a special case of linear systems, and therefore superposition applies. In particular, suppose these component signals are complex exponentials. Once we find the transfer function, we can write the output directly as indicated by the output of a circuit for a sinusoidal input (3.18). The transfer function portrays how the circuit affects the amplitude and phase of each component, allowing us to understand how the circuit works on a complicated signal. The circuit is said to act as a filter, filtering the source signal based on the frequency of each component complex exponential. Those components having a frequency less than the cutoff frequency pass through the circuit with little modification, while those having higher frequencies are suppressed. Because low frequencies pass through the filter, we call it a lowpass filter to express more precisely its function.

Example 3.5

RL circuit
Figure 3.31: An RL circuit in which the input is a voltage source vin and the output is the inductor current iout.

Let's apply these results to a final example, in which the input is a voltage source and the output is the inductor current. The source voltage equals Vin = 2cos(2π60t) + 3. We want the circuit to pass the constant (offset) voltage essentially unaltered (save for the fact that the output is a current rather than a voltage) and remove the 60 Hz term. Because the input is the sum of two sinusoids (a constant is a zero-frequency cosine), our approach is to

1. find the transfer function using impedances,
2. use it to find the output due to each input component,
3. add the results,
4. find element values that accomplish our design criteria.
Because the circuit is a series combination of elements, let's use voltage divider to find the transfer function between Vin and V, the voltage across the inductor, and then use the v-i relation of the inductor to find its current:

Iout/Vin = (j2πfL/(R + j2πfL)) · (1/(j2πfL)) = 1/(j2πfL + R) = H(f)   (3.19)

where the first factor is the voltage divider and the second is the inductor admittance. [Do the units check?] The form of this transfer function should be familiar: it is a lowpass filter, and it will perform our desired function once we choose element values properly.

The constant term is easiest to handle. The output is given by 3|H(0)| = 3/R. Thus, the value we choose for the resistance will determine the scaling factor of how voltage is converted into current. For the 60 Hz component signal, the output current is 2|H(60)| cos(2π60t + ∠H(60)). The total output due to our source is

iout = 2|H(60)| cos(2π60t + ∠H(60)) + 3·H(0)   (3.20)

The cutoff frequency for this filter occurs when the real and imaginary parts of the transfer function's denominator equal each other: 2πfc L = R, which gives fc = R/(2πL). We want this cutoff frequency to be much less than 60 Hz. Suppose we place it at, say, 10 Hz. This specification would require the component values to be related by R/L = 20π = 62.8. The transfer function at 60 Hz would then be

|H(60)| = |1/(j2π·60·L + R)| = (1/R) |1/(6j + 1)| = (1/R)(1/√37) ≈ 0.16 × (1/R)   (3.21)

which yields an attenuation (relative to the gain at zero frequency) of about 1/6, and results in an output amplitude of 0.3/R relative to the constant term's amplitude of 3/R. A factor of 10 relative size between the two components seems reasonable. Having a 100 mH inductor would require a 6.28 Ω resistor. An easily available resistor value is 6.8 Ω; thus, this choice results in cheaply and easily purchased parts. To make the resistance bigger would require a proportionally larger inductor. Unfortunately, even a 1 H inductor is physically large; consequently low cutoff frequencies require small-valued resistors and large-valued inductors. The choice made here represents only one compromise.

The phase of the 60 Hz component will very nearly be −π/2, leaving the 60 Hz output term to be (0.3/R) cos(2π60t − π/2) = (0.3/R) sin(2π60t). The waveforms for the input and output are shown in Figure 3.32 (Waveforms).
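The design numbers above can be reproduced in a few lines (a check on the arithmetic, not part of the original text):

```python
import math

R, L = 6.28, 0.1   # ohms and henries: the values chosen in the example

fc = R / (2 * math.pi * L)   # cutoff frequency, targeted at 10 Hz

def H(f):
    return 1 / complex(R, 2 * math.pi * f * L)   # Iout/Vin for the RL circuit

attenuation = abs(H(60.0)) / abs(H(0.0))  # 60 Hz gain relative to DC gain
print(fc, attenuation)  # about 10 Hz and about 0.164 (roughly 1/6)
```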


Waveforms

Figure 3.32: Input and output waveforms for the example circuit when the element values are R = 6.28 Ω and L = 100 mH. The vertical axis shows voltage (V) or current (A); the horizontal axis shows time (s).

Note that the sinusoid's phase has indeed shifted; the lowpass filter not only reduced the 60 Hz signal's amplitude, but also shifted its phase by 90°.

3.15 Formal Circuit Methods: Node Method
In some (complicated) cases, we cannot use the simplification techniques, such as parallel or series combination rules, to solve for a circuit's input-output relation. In other modules, we wrote v-i relations and Kirchhoff's laws haphazardly, solving them more on intuition than procedure. We need a formal method that produces a small, easy set of equations that lead directly to the input-output relation we seek. One such technique is the node method.





Node Voltage
Figure 3.33: The source vin sets node voltage e1; R1 connects e1 to e2, and R2 and R3 connect e2 to the reference node.

The node method begins by finding all nodes (places where circuit elements attach to each other) in the circuit. We call one of the nodes the reference node; the choice of reference node is arbitrary, but it is usually chosen to be a point of symmetry or the "bottom" node. For the remaining nodes, we define node voltages en that represent the voltage between the node and the reference.


These node voltages constitute the only unknowns; all we need is a sufficient number of equations to solve for them. In our example, we have two node voltages. The very act of defining node voltages is equivalent to using all the KVL equations at your disposal. The reason for this simple, but astounding, fact is that a node voltage is uniquely defined regardless of what path is traced between the node and the reference. Because two paths between a node and reference have the same voltage, the sum of voltages around the loop equals zero.

In some cases, a node voltage corresponds exactly to the voltage across a voltage source. In such cases, the node voltage is specified by the source and is not an unknown. For example, in our circuit, e1 = vin; thus, we need only to find one node voltage.

The equations governing the node voltages are obtained by writing KCL equations at each node having an unknown node voltage, using the v-i relations for each element. In our example, the only circuit equation is

(e2 − vin)/R1 + e2/R2 + e2/R3 = 0


A little reflection reveals that when writing the KCL equations for the sum of currents leaving a node, that node's voltage will appear with a plus sign, and all other node voltages with a minus sign.

Systematic application of this procedure makes it easy to write node equations and to check them before solving them. Also remember to check units at this point: Every term should have units of current. In our example, solving for the unknown node voltage is easy:

e2 = (R2 R3/(R1 R2 + R1 R3 + R2 R3)) vin
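A quick numerical check of this solution (the resistor and source values below are made up for illustration): substituting e2 back into the KCL equation should give a zero residual.

```python
# Check the node-method solution for Figure 3.33 (illustrative values).
R1, R2, R3, vin = 2.0, 3.0, 6.0, 5.0

e2 = R2 * R3 * vin / (R1 * R2 + R1 * R3 + R2 * R3)

# The KCL equation at node 2 must be satisfied by this solution
residual = (e2 - vin) / R1 + e2 / R2 + e2 / R3
print(e2, residual)  # 2.5 and ~0
```

Note that the residual has units of amperes, matching the units check recommended above.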


Have we really solved the circuit with the node method? Along the way, we have used KVL, KCL, and the v-i relations. Previously, we indicated that the set of equations resulting from applying these laws is necessary and sufficient. This result guarantees that the node method can be used to "solve" any circuit. One fallout of this result is that we must be able to find any circuit variable given the node voltages and sources. All circuit variables can be found using the v-i relations and voltage divider. For example, the current through R3 equals e2/R3.
Figure 3.34: Circuit with a current source iin and resistors R1, R2, and R3; the node voltages are e1 and e2, and i is the indicated current through R3.

The presence of a current source in the circuit does not affect the node method greatly; just include it in writing KCL equations as a current leaving the node.

The circuit has three nodes, requiring us to define two node voltages. The node equations are

e1/R1 + (e1 − e2)/R2 − iin = 0   (Node 1)

(e2 − e1)/R2 + e2/R3 = 0   (Node 2)

Note that the node voltage corresponding to the node for which we are writing KCL enters with a positive sign, the others with a negative sign, and that the units of each term are amperes. Rewrite these equations in the standard set-of-linear-equations form.


e1 (1/R1 + 1/R2) − e2 (1/R2) = iin
−e1 (1/R2) + e2 (1/R2 + 1/R3) = 0

Solving these equations gives

e1 = ((R2 + R3)/R3) e2
e2 = (R1 R3/(R1 + R2 + R3)) iin

To find the indicated current, we simply use i = e2/R3.
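These node equations form a 2×2 linear system, so they can be solved mechanically. The sketch below (illustrative element values) uses Cramer's rule and confirms the closed-form expression for e2:

```python
# Solve the two node equations for the current-source circuit (illustrative values).
R1, R2, R3, iin = 1.0, 2.0, 3.0, 4.0

# e1 (1/R1 + 1/R2) - e2 / R2 = iin
# -e1 / R2 + e2 (1/R2 + 1/R3) = 0
a, b = 1 / R1 + 1 / R2, -1 / R2
c, d = -1 / R2, 1 / R2 + 1 / R3
det = a * d - b * c
e1 = iin * d / det          # Cramer's rule, right-hand side (iin, 0)
e2 = -c * iin / det

i = e2 / R3                 # the indicated current
print(e2, R1 * R3 * iin / (R1 + R2 + R3))  # both ~2.0: matches the closed form
```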


Example 3.6: Node Method Example





Figure 3.35: Example circuit; resistor values are in ohms. A 1 Ω resistor connects vin to e1, a 1 Ω resistor ties e1 to the reference, a 1 Ω resistor joins e1 and e2, a 2 Ω resistor connects vin to e2, and the output current i flows through a 1 Ω resistor from e2 to the reference.

In this circuit (Figure 3.35), we cannot use the series/parallel combination rules: The vertical resistor at node 1 keeps the two horizontal 1 Ω resistors from being in series, and the 2 Ω resistor prevents the two 1 Ω resistors at node 2 from being in series. We really do need the node method to solve this circuit! Despite having six elements, we need only define two node voltages. The node equations are

(e1 − vin)/1 + e1/1 + (e1 − e2)/1 = 0   (Node 1)

(e2 − vin)/2 + (e2 − e1)/1 + e2/1 = 0   (Node 2)

Solving these equations yields e1 = (6/13) vin and e2 = (5/13) vin. The output current equals e2/1 = (5/13) vin. One unfortunate consequence of using the element's numeric values from the outset is that it becomes impossible to check units while setting up and solving the equations.
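Because the element values are numeric, the two node equations reduce to a 2×2 integer system (3e1 − e2 = vin and −2e1 + 5e2 = vin); exact rational arithmetic confirms the stated solution:

```python
from fractions import Fraction

# Node 1: (e1 - vin) + e1 + (e1 - e2) = 0    ->   3 e1 -  e2 = vin
# Node 2: (e2 - vin)/2 + (e2 - e1) + e2 = 0  ->  -2 e1 + 5 e2 = vin
vin = Fraction(1)
det = Fraction(3 * 5 - (-1) * (-2))   # = 13
e1 = (5 * vin + vin) / det            # Cramer's rule
e2 = (3 * vin + 2 * vin) / det
print(e1, e2)  # 6/13 5/13
```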

Exercise 3.15.1

(Solution on p. 117.)

What is the equivalent resistance seen by the voltage source?

Node Method and Impedances
Figure 3.36: Modification of the circuit shown on the left to illustrate the node method and the effect of adding the resistor R2. The node voltage is E; R1 connects Vin to the node, C and R2 run from the node to the reference, and Vout is taken across R2.

The node method applies to RLC circuits, without significant modification from the methods used on simple resistive circuits, if we use complex amplitudes. We rely on the fact that complex amplitudes satisfy KVL, KCL, and impedance-based v-i relations. In the example circuit, we define complex amplitudes for the input and output variables and for the node voltages. We need only one node voltage here, and its KCL equation is

(E − Vin)/R1 + E j2πfC + E/R2 = 0

with the result

E = (R2/(R1 + R2 + j2πf R1 R2 C)) Vin

To find the transfer function between input and output voltages, we compute the ratio E/Vin. The transfer function's magnitude and angle are

|H(f)| = R2/√((R1 + R2)² + (2πf R1 R2 C)²)

∠H(f) = −arctan(2πf R1 R2 C/(R1 + R2))
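The magnitude and angle expressions follow directly from the complex-valued E/Vin; a short check with illustrative values:

```python
import cmath
import math

def H(f, R1, R2, C):
    """E/Vin for the circuit of Figure 3.36, from the node equation."""
    return R2 / (R1 + R2 + 1j * 2 * math.pi * f * R1 * R2 * C)

R1 = R2 = C = 1.0
f = 0.5
val = H(f, R1, R2, C)
mag = R2 / math.sqrt((R1 + R2) ** 2 + (2 * math.pi * f * R1 * R2 * C) ** 2)
ang = -math.atan2(2 * math.pi * f * R1 * R2 * C, R1 + R2)
print(abs(val) - mag, cmath.phase(val) - ang)  # both ~0
```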

This circuit differs from the one shown previously (Figure 3.29: Simple Circuit) in that the resistor R2 has been added across the output. What effect has it had on the transfer function, which in the original circuit was a lowpass filter having cutoff frequency fc = 1/(2πR1C)? As shown in Figure 3.37 (Transfer Function), adding the second resistor has two effects: it lowers the gain in the passband (the range of frequencies for which the filter has little effect on the input) and increases the cutoff frequency.

Transfer Function

Figure 3.37: Transfer functions of the circuits shown in Figure 3.36 (Node Method and Impedances), with and without R2, for R1 = 1, R2 = 1, and C = 1.






When R2 = R1, as shown on the plot, the passband gain becomes half of the original, and the cutoff frequency increases by the same factor. Thus, adding R2 provides a 'knob' by which we can trade passband gain for cutoff frequency.

Exercise 3.15.2

(Solution on p. 117.)

We can change the cutoff frequency without affecting passband gain by changing the resistance in the original circuit. Does the addition of the R2 resistor help in circuit design?

3.16 Power Conservation in Circuits
Now that we have a formal method, the node method, for solving circuits, we can use it to prove a powerful result: KVL and KCL are all that are required to show that all circuits conserve power, regardless of what elements are used to build the circuit.

Part of a general circuit to prove Conservation of Power
a i1 b
1 3 2

i2 i3 c

Figure 3.38

First of all, define node voltages for all nodes in a given circuit. Any node chosen as the reference will do. For example, in the portion of a large circuit (Figure 3.38: Part of a general circuit to prove Conservation of Power) depicted here, we define node voltages for nodes a, b and c. With these node voltages, we can express the voltage across any element in terms of them. For example, the voltage across element 1 is given by

v1 = eb − ea .

The instantaneous power for element 1 becomes

v1 i1 = (eb − ea) i1 = eb i1 − ea i1

Writing the power for the other elements, we have

v2 i2 = ec i2 − ea i2
v3 i3 = ec i3 − eb i3

When we add together the element power terms, we discover that once we collect terms involving a particular node voltage, it is multiplied by the sum of currents leaving the node minus the sum of currents entering. For example, for node b, we have eb (i3 − i1). We see that the currents multiplying each node voltage will obey KCL. Consequently, we conclude that the sum of element powers must equal zero in any circuit regardless of the elements used to construct the circuit:

Σk vk ik = 0
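This element-independence is easy to check numerically. The sketch below (an illustration with an arbitrary example graph, not a circuit from the text) builds branch voltages from arbitrary node voltages, so KVL holds by construction, picks branch currents satisfying KCL, and confirms that the element powers sum to zero without ever specifying what the elements are.

```python
import numpy as np

rng = np.random.default_rng(0)

# Incidence matrix for a 5-branch, 4-node graph (node 0 is the reference).
# Row k describes branch k: +1 at the node the current leaves, -1 where it enters.
A = np.array([
    [ 1, -1,  0],
    [ 1,  0, -1],
    [ 0,  1, -1],
    [ 1,  0,  0],
    [ 0,  0,  1],
], dtype=float)

# Branch voltages derived from arbitrary node voltages obey KVL automatically.
e = rng.normal(size=3)
v = A @ e

# Any current vector with A.T @ i = 0 obeys KCL at every node.
# The null space of A.T has dimension 5 - 3 = 2; take a member of it.
_, _, Vt = np.linalg.svd(A.T)
i = 2.0 * Vt[3] - 1.5 * Vt[4]

# Element powers sum to zero, whatever elements occupy the branches:
print(abs(v @ i))  # numerically zero
```

The one-line reason: v·i = (Ae)·i = e·(Aᵀi) = 0, which is exactly the argument in the text.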


The simplicity and generality with which we proved this result generalizes to other situations as well. In particular, note that the complex amplitudes of voltages and currents obey KVL and KCL, respectively. Consequently, we have that Σk Vk Ik = 0. Furthermore, the complex-conjugate of the currents also satisfies KCL, which means we also have Σk Vk Ik* = 0. And finally, we know that evaluating the real part of an expression is linear. Finding the real part of this power conservation gives the result that average power is also conserved in any circuit:

Σk (1/2) Re (Vk Ik*) = 0

note: This proof of power conservation can be generalized in another very interesting way. All we need is a set of voltages that obey KVL and a set of currents that obey KCL. Thus, for a given circuit topology (the specific way elements are interconnected), the voltages and currents can be measured at different times and the sum of v-i products is zero:

Σk vk (t1) ik (t2) = 0

Even more interesting is the fact that the elements don't matter. We can take a circuit and measure all the voltages. We can then make element-for-element replacements and, if the topology has not changed, measure a set of currents. The sum of the product of element voltages and currents will also be zero!

3.17 Electronics

So far we have analyzed electrical circuits: The source signal has more power than the output variable, be it a voltage or a current. Power has not been explicitly defined, but no matter. Resistors, inductors, and capacitors as individual elements certainly provide no power gain, and circuits built of them will not magically do so either. Such circuits are termed electrical in distinction to those that do provide power gain: electronic circuits. Providing power gain, such as your stereo reading a CD and producing sound, is accomplished by semiconductor circuits that contain transistors. The basic idea of the transistor is to let the weak input signal modulate a strong current provided by a source of electrical power, the power supply, to produce a more powerful signal. A physical analogy is a water faucet: By turning the faucet back and forth, the water flow varies accordingly, and has much more power than expended in turning the handle. The waterpower results from the static pressure of the water in your plumbing created by the water utility pumping the water up to your local water tower. The power supply is like the water tower, and the faucet is the transistor, with the turning achieved by the input signal. Just as in this analogy, a power supply is a source of constant voltage as the water tower is supposed to provide a constant water pressure.

A device that is much more convenient for providing gain (and other useful features as well) than the transistor is the operational amplifier, also known as the op-amp. An op-amp is an integrated circuit (a complicated circuit involving several transistors constructed on a chip) that provides a large voltage gain if you attach the power supply. We can model the op-amp with a new circuit element: the dependent source.

3.18 Dependent Sources

A dependent source is either a voltage or current source whose value is proportional to some other voltage or current in the circuit. Thus, there are four different kinds of dependent sources; to describe an op-amp, we need a voltage-dependent voltage source. However, the standard circuit-theoretical model for a transistor ("Small Signal Model for Bipolar Transistor") contains a current-dependent current source.

dependent sources

[A voltage-dependent voltage source of value kv, controlled by a voltage v measured elsewhere, embedded in a generic circuit.]

Figure 3.39: Of the four possible dependent sources, depicted is a voltage-dependent voltage source in the context of a generic circuit.

Dependent sources do not serve as inputs to a circuit like independent sources. They are used to model active circuits: those containing electronic elements. The RLC circuits we have been considering so far are known as passive circuits.

op-amp

[Left: the op-amp circuit symbol, with inputs a and b and output c. Right: its equivalent circuit, containing Rin, Rout, and the dependent source G(ea − eb).]

Figure 3.40: The op-amp has four terminals to which connections can be made. Inputs attach to nodes a and b, and the output is node c. As the circuit model on the right shows, the op-amp serves as an amplifier for the difference of the input node voltages.

Figure 3.40 (op-amp) shows the circuit symbol for the op-amp and its equivalent circuit in terms of a voltage-dependent voltage source. Here, the output voltage equals an amplified version of the difference of node voltages appearing across its inputs. The dependent source model portrays how the op-amp works quite well. As in most active circuit schematics, the power supply is not shown, but must be present for the circuit model to be accurate. Most operational amplifiers require both positive and negative supply voltages for proper operation.

Because dependent sources cannot be described as impedances, and because the dependent variable cannot "disappear" when you apply parallel/series combining rules, circuit simplifications such as current and voltage divider should not be applied in most cases. Analysis of circuits containing dependent sources essentially requires use of formal methods, like the node method (Section 3.15). The node method can deal with dependent sources, with node voltages defined across the source treated as if they were known (as with independent sources). Using the node method for such circuits is not difficult.

feedback op-amp

[Top: an op-amp in the feedback amplifier configuration, with source vin, input resistor R, feedback resistor RF, and load RL. Bottom: the equivalent circuit, with the op-amp replaced by its model (Rin, Rout, and the dependent source −Gv).]

Figure 3.41: The top circuit depicts an op-amp in a feedback amplifier configuration. On the bottom is the equivalent circuit, which integrates the op-amp circuit model into the circuit.

Consider the circuit shown on the top in Figure 3.41 (feedback op-amp). Note that the op-amp is placed in the circuit "upside-down," with its inverting input at the top and serving as the only input. As we explore op-amps in more detail in the next section, this configuration, known as the standard feedback configuration, will appear again and again and its usefulness demonstrated. To determine how the output voltage is related to the input voltage, we apply the node method. Only two node voltages, v and vout, need be defined; the remaining nodes are across sources or serve as the reference. The node equations are

(v − vin)/R + v/Rin + (v − vout)/RF = 0                                    (3.24)

(vout − (−G) v)/Rout + (vout − v)/RF + vout/RL = 0                         (3.25)

Note that no special considerations were used in applying the node method to this dependent-source circuit. Solving these to learn how vout relates to vin yields

( (RF Rout)/(Rout − G RF) (1/Rout + 1/RF + 1/RL) (1/R + 1/Rin + 1/RF) − 1/RF ) vout = (1/R) vin      (3.26)

This expression represents the general input-output relation for this circuit. Once we learn more about op-amps (Section 3.19), in particular what its typical element values are, the expression will simplify greatly. Do note that the units check, and that the parameter G of the dependent source is a dimensionless gain.

3.19 Operational Amplifiers

Op-Amp

[Left: the op-amp circuit symbol with inputs a and b and output c. Right: the equivalent circuit containing Rin, Rout, and the dependent source G(ea − eb).]

Figure 3.42: The op-amp has four terminals to which connections can be made. Inputs attach to nodes a and b, and the output is node c. As the circuit model on the right shows, the op-amp serves as an amplifier for the difference of the input node voltages.

Op-amps not only have the circuit model shown in Figure 3.42 (Op-Amp), but their element values are very special.

• The input resistance, Rin, is typically large, on the order of 1 MΩ.
• The output resistance, Rout, is small, usually less than 100 Ω.
• The voltage gain, G, is large, exceeding 10^5.

The large gain catches the eye; it suggests that an op-amp could turn a 1 mV input signal into a 100 V one. If you were to build such a circuit, attaching a voltage source to node a, attaching node b to the reference, and looking at the output, you would be disappointed. In dealing with electronic components, you cannot forget the unrepresented but needed power supply.

Unmodeled limitations imposed by power supplies: It is impossible for electronic components to yield voltages that exceed those provided by the power supply or for them to yield currents that exceed the power supply's rating. Typical power supply voltages required for op-amp circuits are ±15 V.

Attaching the 1 mV signal not only would fail to produce a 100 V signal, the resulting waveform would be severely distorted. While a desirable outcome if you are a rock & roll aficionado, high-quality stereos should not distort signals. Another consideration in designing circuits with op-amps is that these element values are typical: Careful control of the gain can only be obtained by choosing a circuit so that its element values dictate the resulting gain, which must be smaller than that provided by the op-amp.
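With these typical element values in hand, the node equations (3.24)-(3.25) of the feedback configuration can be solved numerically. The sketch below (an illustration; the resistor values R, RF, RL are hypothetical design choices, not values from the text) shows that the exact output lies within about 0.01% of the classic inverting-amplifier value −(RF/R) vin.

```python
import numpy as np

# Typical op-amp model values quoted in this section
G, Rin, Rout = 1e5, 1e6, 100.0
# Hypothetical designer-chosen resistors
R, RF, RL = 1e3, 1e4, 1e5
vin = 1e-3  # 1 mV input

# Node equations (3.24)-(3.25) rewritten as a linear system in (v, vout):
#   (v - vin)/R + v/Rin + (v - vout)/RF = 0
#   (vout + G v)/Rout + (vout - v)/RF + vout/RL = 0
A = np.array([
    [1/R + 1/Rin + 1/RF, -1/RF],
    [G/Rout - 1/RF, 1/Rout + 1/RF + 1/RL],
])
b = np.array([vin / R, 0.0])
v, vout = np.linalg.solve(A, b)

print(vout)             # very close to -(RF/R)*vin = -10 mV
print(-RF / R * vin)
print(v)                # the op-amp's input voltage: nearly zero
```

Note how tiny v is; this observation underlies the intuitive analysis technique developed later in this section.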

opamp

[The feedback amplifier of Figure 3.41, repeated: source vin, input resistor R, feedback resistor RF, load RL, and, on the bottom, the equivalent circuit using the op-amp model.]

Figure 3.43: The top circuit depicts an op-amp in a feedback amplifier configuration. On the bottom is the equivalent circuit, which integrates the op-amp circuit model into the circuit.

3.19.1 Inverting Amplifier

The feedback configuration shown in Figure 3.43 (opamp) is the most common op-amp circuit for obtaining what is known as an inverting amplifier. The exact input-output relationship is

( (RF Rout)/(Rout − G RF) (1/Rout + 1/RF + 1/RL) (1/R + 1/Rin + 1/RF) − 1/RF ) vout = (1/R) vin      (3.27)

In choosing element values with respect to op-amp characteristics, we can simplify the expression dramatically.

• Make the load resistance, RL, much larger than Rout. This situation drops the term 1/RL from the second factor of (3.27).
• Make the resistor, R, smaller than Rin, which means that the 1/Rin term in the third factor is negligible.

With these two design criteria, the expression (3.27) becomes

( (RF Rout)/(Rout − G RF) (1/Rout + 1/RF) (1/R + 1/RF) − 1/RF ) vout = (1/R) vin      (3.28)

Because the gain is large and the resistance Rout is small, the first term becomes −1/G, leaving us with

( −(1/G) (1/R + 1/RF) − 1/RF ) vout = (1/R) vin                             (3.29)

• If we select the values of RF and R so that GR ≫ RF, this factor will no longer depend on the op-amp's inherent gain, and it will equal −(1/RF).

Under these conditions, we obtain the classic input-output relationship for the op-amp-based inverting amplifier:

vout = −(RF/R) vin                                                          (3.30)

Consequently, the gain provided by our circuit is entirely determined by our choice of the feedback resistor RF and the input resistor R. It is always negative, and can be less than one or greater than one in magnitude. It cannot exceed the op-amp's inherent gain and should not produce such large outputs that distortion results (remember the power supply!). Interestingly, note that this relationship does not depend on the load resistance. This effect occurs because we use load resistances large compared to the op-amp's output resistance. This observation means that, if careful, we can place op-amp circuits in cascade without incurring the effect of succeeding circuits changing the behavior (transfer function) of previous ones; see this problem (Problem 3.44).

3.19.2 Active Filters

As long as design requirements are met, the input-output relation for the inverting amplifier also applies when the feedback and input circuit elements are impedances (resistors, capacitors, and inductors).

opamp

[The inverting amplifier drawn with general impedances: input impedance Z and feedback impedance ZF.]

Figure 3.44: Vout/Vin = −ZF/Z

Example 3.7
Let's design an op-amp circuit that functions as a lowpass filter. We want the transfer function between the output and input voltage to be

H(f) = K / (1 + jf/fc)

where K equals the passband gain and fc is the cutoff frequency. Let's assume that the inversion (negative gain) does not matter. With the transfer function of the above op-amp circuit in mind, let's consider some choices.

• ZF = K, Z = 1 + jf/fc. This choice means the feedback impedance is a resistor and that the input impedance is a series combination of an inductor and a resistor. In circuit design, we try to avoid inductors because they are physically bulkier than capacitors.

• ZF = 1/(1 + jf/fc), Z = 1/K. Consider the reciprocal of the feedback impedance (its admittance): ZF⁻¹ = 1 + jf/fc. Since this admittance is a sum of admittances, this expression suggests the parallel combination of a resistor (value = 1 Ω) and a capacitor (value = 1/fc F). We have the right idea, but the values (like 1 Ω) are not right. Consider the general RC parallel combination; its admittance is 1/RF + j2πfC. Letting the input resistance equal R, the transfer function of the op-amp inverting amplifier now is

H(f) = −(RF/R) / (1 + j2πf RF C)

Thus, we have the gain equal to RF/R and the cutoff frequency 1/(2πRF C).

Creating a specific transfer function with op-amps does not have a unique answer. As opposed to design with passive circuits, electronics is more flexible (a cascade of circuits can be built so that each has little effect on the others; see Problem 3.44) and gain (increase in power and amplitude) can result. To complete our example, let's assume we want a lowpass filter that emulates what the telephone companies do. Signals transmitted over the telephone have an upper frequency limit of about 3 kHz. Let's also desire a voltage gain of ten: RF/R = 10, which means R = RF/10. Recall that we must have R < Rin. As the op-amp's input impedance is about 1 MΩ, we don't want R too large, and this requirement means that the last choice for resistor/capacitor values won't work. We also need to ask for less gain than the op-amp can provide itself. Because the feedback "element" is an impedance (a parallel resistor-capacitor combination), we need to examine the gain requirement more carefully. We must have |ZF|/R < 10^5 for all frequencies of interest. Thus, RF/(R |1 + j2πf RF C|) < 10^5. As this impedance decreases with frequency, the design specification of RF/R = 10 means that this criterion is easily met.

For the second design choice, the 3 kHz cutoff frequency requires RF C = 5.3 × 10⁻⁵. Many choices for resistance and capacitance values are possible. A 1 µF capacitor and a 330 Ω resistor, 10 nF and 33 kΩ, and 10 pF and 33 MΩ would all theoretically work; the first two choices (as well as many others in this range) will work well, but the last won't. Additional considerations like parts cost might enter into the picture. Unless you have a high-power application (this isn't one) or ask for high-precision components, costs don't depend heavily on component values as long as you stay close to standard values. For resistors, easily obtained values have the form r10^d, where r equals 1, 1.4, 3.3, 4.7, and 6.8, and the decades d span 0-8.

Exercise 3.19.1                                                    (Solution on p. 117.)
What is special about the resistor values; why these rather odd-appearing values for r?

3.19.3 Intuitive Way of Solving Op-Amp Circuits

When we meet op-amp design specifications, we can simplify our circuit calculations greatly, so much so that we don't need the op-amp's circuit model to determine the transfer function.
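Returning to the lowpass design of Example 3.7, the numbers can be checked in a few lines. The sketch below (an illustration; RF = 33 kΩ is an assumed standard value, and C is derived from the cutoff specification rather than taken from the text) verifies the passband gain of ten and the 3 kHz cutoff.

```python
import numpy as np

gain, fc = 10.0, 3e3            # passband gain and cutoff (Hz) from the example
RF = 33e3                       # feedback resistor (hypothetical standard value)
R = RF / gain                   # input resistor sets the gain RF/R
C = 1 / (2 * np.pi * fc * RF)   # feedback capacitor from fc = 1/(2 pi RF C)

def H(f):
    # Inverting-amplifier lowpass: H(f) = -(RF/R) / (1 + j 2 pi f RF C)
    return -(RF / R) / (1 + 1j * 2 * np.pi * f * RF * C)

print(C)                        # about 1.6 nF
print(abs(H(0.0)))              # passband gain: 10
print(abs(H(fc)))               # 10/sqrt(2) ≈ 7.07 at the cutoff frequency
```

Any RC pair with the same RF C product (and R small enough compared with Rin) meets the same specification.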

opamp

[The equivalent circuit of the inverting amplifier, with the current iin into the op-amp's input resistance Rin and the feedback current iF through RF labeled.]

Figure 3.45

When we take advantage of the op-amp's characteristics (large input impedance, large gain, and small output impedance) we note the two following important facts.

• The current iin must be very small. The voltage produced by the dependent source is 10^5 times the voltage v. Thus, the voltage v must be small, which means that iin = v/Rin must be tiny. For example, if the output is about 1 V, the voltage v = 10⁻⁵ V, making the current iin = 10⁻¹¹ A. Consequently, we can ignore iin in our calculations and assume it to be zero.
• Because of this assumption (essentially no current flows through Rin) the voltage v must also be essentially zero. This means that in op-amp circuits, the voltage across the op-amp's input is basically zero.

Armed with these approximations, let's return to our original circuit as shown in Figure 3.46 (opamp). The node voltage e is essentially zero, meaning that it is essentially tied to the reference node.

opamp

[The inverting amplifier redrawn with iin = 0: the node e joining R and RF sits at the reference potential, and the feedback resistor appears in parallel with the load resistor RL.]

Figure 3.46

Thus, the current through the resistor R equals vin/R. Because the current going into the op-amp is essentially zero, all of the current flowing through R flows through the feedback resistor (iF = i)! The voltage across the feedback resistor therefore equals vin RF / R. Because the left end of the feedback resistor is essentially attached to the reference node, the voltage across it equals the negative of that across the output resistor: vout = −vin RF / R. Using this approach makes analyzing new op-amp circuits much easier. When using this technique, check to make sure the results you obtain are consistent with the assumptions of essentially zero current entering the op-amp and nearly zero voltage across the op-amp's inputs.

Example 3.8
Let's try this analysis technique on a simple extension of the inverting amplifier configuration, the single-output op-amp circuit shown in Figure 3.47 (Two Source Circuit).

Two Source Circuit

[Two sources, vin(1) and vin(2), drive the inverting input through resistors R1 and R2; RF is the feedback resistor and RL the load.]

Figure 3.47: Two-source, single-output op-amp circuit example.

If either of the source-resistor combinations were not present, the inverting amplifier remains, and we know that transfer function. By superposition, we know that the input-output relation is

vout = −(RF/R1) vin(1) − (RF/R2) vin(2)                                     (3.31)

When we start from scratch, the node joining the three resistors is at the same potential as the reference, e ≈ 0, and the sum of currents flowing into that node is zero. Thus, the current i flowing in the resistor RF equals vin(1)/R1 + vin(2)/R2. Because the feedback resistor is essentially in parallel with the load resistor, the voltages must satisfy v = −vout. In this way, we obtain the input-output relation given above. What utility does this circuit have? Can the basic notion of the circuit be extended without bound?

3.20 The Diode

Diode

[Plot of the diode's v-i relation: the current i (µA) rises exponentially for positive v and equals the tiny leakage for negative v. The schematic symbol is an arrowhead.]

Figure 3.48: v-i relation and schematic symbol for the diode. Here, the diode parameters were room temperature and I0 = 1 µA.

The resistor, capacitor, and inductor are linear circuit elements in that their v-i relations are linear in the mathematical sense. Voltage and current sources are (technically) nonlinear devices: stated simply, doubling the current through a voltage source does not double the voltage. A more blatant, and very useful, nonlinear circuit element is the diode (learn more: "P-N Junction: Part II"). Its input-output relation has an exponential form:

i(t) = I0 ( e^((q/kT) v(t)) − 1 )                                           (3.32)

Here, the quantity q represents the charge of a single electron in coulombs, k is Boltzmann's constant, and T is the diode's temperature in K. At room temperature, the ratio kT/q = 25 mV. The constant I0 is the leakage current, and is usually very small. Viewing this v-i relation in Figure 3.48 (Diode), the nonlinearity becomes obvious. When the voltage is positive, current flows easily through the diode; this situation is known as forward biasing. When we apply a negative voltage, the current is quite small, essentially equaling −I0, known as the leakage or reverse-bias current. A less detailed model for the diode has any positive current flowing through the diode when it is forward biased, and no current when negative biased. Note that the diode's schematic symbol looks like an arrowhead; the direction of current flow corresponds to the direction the arrowhead points.
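The exponential v-i relation (3.32) is easy to explore numerically. The sketch below (an illustration; the 100 mV test voltage is arbitrary) uses the room-temperature parameters of Figure 3.48.

```python
import math

I0 = 1e-6           # leakage current: 1 microamp, as in Figure 3.48
kT_over_q = 0.025   # thermal voltage at room temperature: 25 mV

def diode_current(v):
    # i = I0 * (exp(q v / kT) - 1), equation (3.32)
    return I0 * (math.exp(v / kT_over_q) - 1.0)

print(diode_current(0.1))    # forward bias by 100 mV: about 5.4e-5 A (54 µA)
print(diode_current(-0.5))   # reverse bias: about -1e-6 A, the leakage current
```

Each additional 25 mV of forward bias multiplies the current by e ≈ 2.72, which is the steep rise visible in Figure 3.48.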

diode circuit

[Series circuit: source vin drives the diode, which feeds the resistor R; the output voltage vout is taken across R.]

Figure 3.49

Because of the diode's nonlinear nature, we cannot use impedances nor series/parallel combination rules to analyze circuits containing them. The reliable node method can always be used; it only relies on KVL for its application, and KVL is a statement about voltage drops around a closed path whether the elements are linear or not. Thus, for this simple circuit we have

vout/R = I0 ( e^((q/kT)(vin − vout)) − 1 )                                  (3.33)

This equation cannot be solved in closed form. We must understand what is going on from basic principles, using computational and graphical aids. As an approximation, when vin is positive, current flows through the diode so long as the voltage vout is smaller than vin (so the diode is forward biased). If the source is negative or vout "tries" to be bigger than vin, the diode is reverse-biased, and the reverse-bias current flows through the diode. Thus, at this level of analysis, positive input voltages result in positive output voltages with negative ones resulting in vout = −(RI0).

[Graphical solution: for each input value v'in, v''in, the intersection of the diode curve with the resistor's straight line gives the output v'out, v''out; the resulting output waveform vout(t) is a distorted version of the input.]

Figure 3.50

We need to detail the exponential nonlinearity to determine how the circuit distorts the input voltage waveform. We can of course numerically solve Figure 3.49 (diode circuit) to determine the output voltage when the input is a sinusoid.
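Although (3.33) is transcendental, a numeric root-finder handles it directly. The sketch below (an illustration; the 1 kΩ resistor is an arbitrary choice) solves for vout by bisection, exploiting the fact that the residual is monotonic in vout.

```python
import math

I0, kT_over_q, R = 1e-6, 0.025, 1e3   # leakage current, thermal voltage, resistor

def vout_for(vin):
    # Solve vout/R = I0*(exp((vin - vout)*q/kT) - 1), equation (3.33), by bisection.
    def residual(vout):
        return vout / R - I0 * math.expm1((vin - vout) / kT_over_q)
    # The solution always lies between -R*I0 (reverse bias) and vin (forward bias).
    lo, hi = -R * I0 - 1e-9, max(vin, 0.0) + 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(vout_for(0.5))    # forward bias: roughly 0.35 V out for 0.5 V in
print(vout_for(-0.5))   # reverse bias: about -1 mV, i.e. -R*I0
```

Sweeping vin over a sinusoid reproduces the clipped waveform sketched in Figure 3.50: positive half-cycles pass (reduced by the diode drop) while negative half-cycles are flattened at −RI0.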

To learn more, let's express this equation graphically. We plot each term as a function of vout for various values of the input voltage vin; where they intersect gives us the output voltage. The left side, the current through the output resistor, does not vary itself with vin, and thus we have a fixed straight line. As for the right side, which expresses the diode's v-i relation, the point at which the curve crosses the vout axis gives us the value of vin. Clearly, the two curves will always intersect just once for any value of vin, and for positive vin the intersection occurs at a value for vout smaller than vin. This reduction is smaller if the straight line has a shallower slope, which corresponds to using a bigger output resistor. What utility might this simple circuit have? The diode's nonlinearity cannot be escaped here, and the clearly evident distortion must have some practical application if the circuit were to be useful. This circuit, known as a half-wave rectifier, is present in virtually every AM radio twice, and each serves very different functions! We'll learn what functions later.

diode circuit

[An op-amp circuit with the diode in the feedback path and an input resistor R; the output voltage vout is taken at the op-amp's output.]

Figure 3.51

Here is a circuit involving a diode that is actually simpler to analyze than the previous one. We know that the current through the resistor must equal that through the diode. Thus, the diode's current is proportional to the input voltage. As the voltage across the diode is related to the logarithm of its current, we see that the input-output relation is

vout = −(kT/q) ln( vin/(RI0) + 1 )                                          (3.34)

Clearly, the name logarithmic amplifier is justified for this circuit.

3.21 Analog Signal Processing Problems

Problem 3.1: Simple Circuit Analysis

[(a) Circuit a and (b) Circuit b: a current source i driving two elements, with voltages v1 and v2; (c) Circuit c: the current source driving a capacitor C and inductor L in series, with voltage v.]

Figure 3.52

For each circuit shown in Figure 3.52, the current i equals cos(2πt).

a) What is the voltage across each element and what is the voltage v in each case?
b) For the last circuit, are there element values that make the voltage v equal zero for all time? If so, what element values work?
c) Again, for the last circuit, if zero voltage were possible, what circuit element could substitute for the capacitor-inductor series combination that would yield the same voltage?

Problem 3.2: Solving Simple Circuits

a) Write the set of equations that govern Circuit A's (Figure 3.53) behavior.
b) Solve these equations for i1: In other words, express this current in terms of element and source values by eliminating non-source voltages and currents.
c) For Circuit B, find the value for RL that results in a current of 5 A passing through it.
d) What is the power dissipated by the load resistor RL in this case?

[(a) Circuit A: voltage source vin with resistors R1 and R2 and currents iin and i1; (b) Circuit B: a 15 A current source, a 20 Ω resistor, and the load resistor RL.]

Figure 3.53

Problem 3.3: Equivalent Resistance

For each of the following circuits (Figure 3.54), find the equivalent resistance using series and parallel combination rules.

[(a) circuit a, (b) circuit b, (c) circuit c: resistor networks built from R1 through R5; (d) circuit d: a network of 1 Ω resistors.]

Figure 3.54

Calculate the conductance seen at the terminals for circuit (c) in terms of each element's conductance. Compare this equivalent conductance formula with the equivalent resistance formula you found for circuit (b). How is the circuit (c) derived from circuit (b)?

Problem 3.4: Superposition Principle

One of the most important consequences of circuit laws is the Superposition Principle: The current or voltage defined for any element equals the sum of the currents or voltages produced in the element by the independent sources. This Principle has important consequences in simplifying the calculation of circuit variables in multiple source circuits.

[Source vin in series with a 1/2 Ω resistor drives a 1 Ω resistor carrying the current i, in parallel with a 1/2 Ω resistor and the current source iin.]

Figure 3.55

Available for free at Connexions <http://cnx.6: Thévenin and Mayer-Norton Equivalents Find the Thévenin and Mayer-Norton equivalent circuits for the following circuits (Figure 3. We can nd each component by setting the other sources to zero.9> .5: Current and Voltage Divider Use current or voltage divider rules to calculate the indicated circuit variables in Figure 3.56. Thus. Calculate the total current i using the Superposition Principle.56 Problem 3.57). each of which is due to a source. b) You should have found that the current i is a linear combination of the two source values: i = C1 vin +C2 iin . to nd the voltage source component. To nd the current source component.55).org/content/col10040/1. This result means that we can think of the current as a superposition of two components. nd the indicated current using any technique you like (you should use the simplest).91 a) For the depicted circuit (Figure 3. 3 + – 6 + 1 1 i 6 2 7sin 5t vout 4 – (a) circuit a 6 (b) circuit b 6 120 + – 180 12 5 48 i 20 (c) circuit c Figure 3. you would set the voltage source to zero (a short circuit) and nd the resulting current. Is applying the Superposition Principle easier than the technique you used in part (1)? Problem 3. you can set the current source to zero (an open circuit) and use the usual tricks.

57 Problem 3.8: Bridge Circuits Circuits having the form of Figure 3. is = 2. ANALOG SIGNAL PROCESSING π 3 1 2 2 1 1.59 are termed bridge circuits.58). R i1 + N1 5 + – 1 v1 – is N2 Figure 3. determine R such that i1 = −1.58 Problem 3.5 v + – 1 + – 1 1 (a) circuit a (b) circuit b 3 10 + – + – 20 20 sin 5t 2 6 (c) circuit c Figure 3. Available for free at Connexions <http://cnx.9> .7: b) With Detective Work In the depicted circuit (Figure CHAPTER 3. the circuit N1 has the v-i relation v1 = 3i1 + 7 when is = 2. a) Find the Thévenin equivalent circuit for circuit N2 .

[Bridge circuit: the current source iin drives the bridge formed by R1, R2, R3, and R4; vout is taken across the bridge and the current i is indicated.]

Figure 3.59

a) What resistance does the current source see when nothing is connected to the output terminals?
b) What resistor values, if any, will result in a zero voltage for vout?
c) Assume R1 = 1Ω, R2 = 2Ω, R3 = 2Ω and R4 = 4Ω. Find the current i when the current source iin is Im((4 + 2j) e^(j2π20t)). Express your answer as a sinusoid.

Problem 3.9: Cartesian to Polar Conversion

Convert the following expressions into polar form. Plot their location in the complex plane ("The Complex Plane").

a) (1 + √−3)²
b) 3 + j4
c) 6/(2 − j√3)
d) 6/(2 + j√3)
e) (4 − j3)(1 + j(1/2))
f) 3e^(jπ) + 4e^(jπ/2)
g) (√3 + j√2) × 2e^(−jπ/4) and 3/(1 + j3π)

Problem 3.10: The Complex Plane

The complex variable z is related to the real variable u according to

z = 1 + e^(ju)

• Sketch the contour of values z takes on in the complex plane.
• What are the maximum and minimum values attainable by |z|?
• Sketch the contour the rational function (z − 1)/(z + 1) traces in the complex plane.

Problem 3.11: Cool Curves

In the following expressions, the variable x runs from zero to infinity. What geometric shapes do the following trace in the complex plane?

a) e^(jx)
b) 1 + e^(jx)
c) e^(−x) e^(jx)

d) e^(jx) + e^(j(x + π/4))

Problem 3.12: Trigonometric Identities and Complex Exponentials

Show the following trigonometric identities using complex exponentials. In many cases, they were derived using this approach.

a) sin(2u) = 2 sin(u) cos(u)
b) cos²(u) = (1 + cos(2u))/2
c) cos²(u) + sin²(u) = 1
d) d/du (sin(u)) = cos(u)

Problem 3.13: Transfer Functions

Find the transfer function relating the complex amplitudes of the indicated variable and the source shown in Figure 3.60. Plot the magnitude and phase of the transfer function.

[(a) circuit a through (d) circuit d: networks of 1 Ω, 1/2 Ω, and 1/4 Ω elements driven by vin, with the indicated variable v or iout.]

Figure 3.60

Problem 3.14: Using Impedances

Find the differential equation relating the indicated variable to the source(s) using impedances for each circuit shown in Figure 3.61.
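Answers to complex-arithmetic exercises such as Problem 3.9 can be checked with Python's cmath module. This sketch (an illustration, not part of the problem set) converts a Cartesian complex number to polar form and back, and evaluates a sum of polar-form terms.

```python
import cmath
import math

# Cartesian -> polar: cmath.polar returns (magnitude, angle in radians)
z = 3 + 4j
r, theta = cmath.polar(z)
print(r, theta)          # 5.0 and about 0.927 radians

# Polar -> Cartesian: cmath.rect(r, theta) computes r * e^{j theta}
z2 = cmath.rect(r, theta)
print(z2)                # recovers (3+4j) up to roundoff

# Sums of polar-form terms, e.g. 3e^{j pi} + 4e^{j pi/2} = -3 + j4
w = cmath.rect(3, math.pi) + cmath.rect(4, math.pi / 2)
print(w)                 # about (-3+4j)
```

The same two functions suffice for every conversion direction these problems require.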

[(a) circuit a: vin driving R1, L, and C with iout indicated; (b) circuit b: iin driving C, R1, L, and R2; (c) circuit c: iin driving L1, R, L2, and C; (d) circuit d: a two-source circuit of unit elements with indicated v and i.]

Figure 3.61

Problem 3.15: Measurement Chaos

The following simple circuit (Figure 3.62) was constructed but the signal measurements were made haphazardly. When the source was sin(2πf0t), the current i(t) equaled (2/3) sin(2πf0t + π/4) and the voltage v2(t) = (1/3) sin(2πf0t).

[Source vin(t) drives the series connection of impedances Z1 and Z2; v1(t) appears across Z1, v2(t) across Z2, and i(t) is the series current.]

Figure 3.62

a) What is the voltage v1(t)?
b) Find the impedances Z1 and Z2.
c) Construct these impedances from elementary circuit elements.

Problem 3.16: Transfer Functions

In the following circuit (Figure 3.63), the voltage source equals vin(t) = 10 sin(t/2).

[Source vin drives a network of 1 Ω, 1/2 Ω, and 4 F elements, with the output voltage vout indicated.]

Figure 3.63

a) Find the transfer function between the source and the indicated output voltage.
b) For the given source, find the output voltage.

Problem 3.17: A Simple Circuit

You are given this simple circuit (Figure 3.64).

[Current source iin drives 1 Ω and 1/2 Ω elements, with the output current iout indicated.]

Figure 3.64

a) What is the transfer function between the source and the indicated output current?
b) If the output current is measured to be cos(2t), what was the source?

Problem 3.18: Circuit Design

[Circuit for Problem 3.18: source vin, resistor R, capacitor C, and inductor L, with output voltage vout.]

Figure 3.65

a) Find the transfer function between the input and the output voltages for the circuits shown in Figure 3.65.
b) At what frequency does the transfer function have a phase shift of zero? What is the circuit's gain at this frequency?
c) Specifications demand that this circuit have an output impedance (its equivalent impedance) less than 8Ω for frequencies above 1 kHz. Find element values that satisfy this criterion.

Problem 3.19: Equivalent Circuits and Power

Suppose we have an arbitrary circuit of resistors that we collapse into an equivalent resistor using the series and parallel rules. Is the power dissipated by the equivalent resistor equal to the sum of the powers dissipated by the actual resistors comprising the circuit? Let's start with simple cases and build up to a complete proof.

a) Suppose resistors R1 and R2 are connected in parallel. Show that the power dissipated by the combination equals the sum of the powers dissipated by the component resistors.
b) Now suppose R1 and R2 are connected in series. Show the same result for this combination.
c) Use these two results to prove the general result we seek.

Problem 3.20: Power Transmission

The network shown in the figure represents a simple power transmission system. The generator produces 60 Hz and is modeled by a simple Thévenin equivalent. The transmission line consists of a long length of copper wire and can be accurately described as a 50Ω resistor.

a) Determine the load current IL and the average power the generator must produce so that the load receives 1,000 watts of average power. Why does the generator need to generate more than 1,000 watts of average power to meet this requirement?
b) Suppose the load is changed to that shown in the second figure. Now how much power must the generator produce to meet the same power requirement? Why is it more than it had to produce to deliver the required power to the resistive load?
c) The load can be compensated to have a unity power factor (see Exercise 3.2) so that the voltage and current are in phase for maximum power efficiency. The compensation technique is to place a circuit in parallel to the load circuit. What element works and what is its value?
d) With this compensated circuit, how much power must the generator produce to deliver 1,000 watts of average power to the load?
Problem 3. a) Determine the load current RL and the average power the generator must produce so that the load receives 1. Is the power dissipated by the equivalent resistor equal to the sum of the powers dissipated by the actual resistors comprising the circuit? Let's start with simple cases and build up to a complete proof.000 watts of average power.65.000 watts of average power to meet this requirement? b) Suppose the load is changed to that shown in the second gure.

98 CHAPTER 3. ANALOG SIGNAL PROCESSING

Figure 3.66: (a) Simple power transmission system: the power generator (Vg, Rs) feeds a lossy power transmission line (RT) and the 100Ω load; (b) modified load circuit.

Problem 3.21: Optimal Power Transmission
The following figure (Figure 3.67) shows a general model for power transmission. The power generator is represented by a Thévenin equivalent and the load by a simple impedance. In most applications, the source components are fixed while there is some latitude in choosing the load.
a) Suppose we wanted to maximize "voltage transmission": make the voltage across the load as large as possible. What choice of load impedance creates the largest load voltage? What is the largest load voltage?
b) If we wanted the maximum current to pass through the load, what would we choose the load impedance to be? What is this largest current?
c) What choice for the load impedance maximizes the average power dissipated in the load? What is the most power the generator can deliver?
note: One way to maximize a function of a complex variable is to write the expression in terms of the variable's real and imaginary parts, evaluate derivatives with respect to each, set both derivatives to zero, and solve the two equations simultaneously.

Figure 3.67: [generator Vg with source impedance Zg driving the load impedance ZL]
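For Problem 3.21(c), the classical answer is the conjugate match ZL = Zg*, which delivers |Vg|²/(8 Re(Zg)). A quick numeric sketch illustrates this; the source values below are arbitrary assumptions, not taken from the problem.

```python
# Numeric sketch for Problem 3.21(c): with a Thevenin source (Vg, Zg),
# the load maximizing average power P = 0.5*|Vg/(Zg+ZL)|^2 * Re(ZL)
# is the conjugate match ZL = Zg*, giving Pmax = |Vg|^2 / (8*Re(Zg)).
# The source values below are arbitrary illustrative assumptions.
Vg = 10.0            # peak source voltage (volts)
Zg = 50.0 + 30.0j    # source (Thevenin) impedance (ohms)

def load_power(ZL):
    I = Vg / (Zg + ZL)                  # complex amplitude of load current
    return 0.5 * abs(I) ** 2 * ZL.real  # average power dissipated in ZL

P_match = load_power(Zg.conjugate())    # power into the conjugate-matched load
P_max_formula = Vg ** 2 / (8 * Zg.real)

# Moving the load away from the match in any direction lowers the power.
neighbors = [Zg.conjugate() + d for d in (5, -5, 5j, -5j, 3 + 4j)]
```

Any perturbation of the load impedance away from Zg* reduces the delivered power, consistent with the derivative argument suggested in the problem's note.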

Problem 3.22: Big is Beautiful
Sammy wants to choose speakers that produce very loud music. He has an amplifier and notices that the speaker terminals are labeled "8Ω source."
a) What does this mean in terms of the amplifier's equivalent circuit?
b) Any speaker Sammy attaches to the terminals can be well-modeled as a resistor. Choosing a speaker amounts to choosing the values for the resistor. What choice would maximize the voltage across the speakers?
c) Sammy decides that maximizing the power delivered to the speaker might be a better choice. What values for the speaker resistor should be chosen to maximize the power delivered to the speaker?

Problem 3.23: Sharing a Channel
Two transmitter-receiver pairs want to share the same digital communications channel. Each transmitter signal has the form xi(t) = A sin(2πfi t), 0 ≤ t ≤ T, where the amplitude is either zero or A and each transmitter uses its own frequency fi. Each frequency is harmonically related to the bit interval duration T, where transmitter 1 uses the frequency 1/T. The datarate is 10 Mbps.
a) Draw a block diagram that expresses this communication scenario. The transmitter signals will be added together by the channel.
b) Find circuits that the receivers could employ to separate unwanted transmissions. Assume the received signal is a voltage and the output is to be a voltage as well. Receiver design is greatly simplified if first we remove the unwanted transmission (as much as possible).
c) Find the second transmitter's frequency so that the receivers can suppress the unwanted transmission by at least a factor of ten.

Problem 3.24: Circuit Detective Work
In the lab, the open-circuit voltage measured across an unknown circuit's terminals equals sin(t). When a 1Ω resistor is placed across the terminals, a voltage of (1/√2) sin(t + π/4) appears.
a) What is the Thévenin equivalent circuit?
b) What voltage will appear if we place a 1F capacitor across the terminals?

Problem 3.25: Mystery Circuit
We want to determine as much as we can about the circuit lurking in the impenetrable box shown in Figure 3.68. A voltage source vin = 2 volts has been attached to the left-hand terminals, leaving the right terminals for tests and measurements.

Figure 3.68: [vin drives a box of resistors; the right-hand terminals expose the current i and the voltage v]

a) Sammy measures v = 10 volts when a 1Ω resistor is attached to the terminals. Samantha says he is wrong. Who is correct and why?
b) When nothing is attached to the right-hand terminals, a voltage of v = 1 volt is measured. What circuit could produce this output?
c) When a current source is attached so that i = 2 amp, the voltage v is now 3 volts. What resistor circuit would be consistent with this and the previous part?

Problem 3.26: More Circuit Detective Work
The left terminal pair of a two terminal-pair circuit is attached to a testing circuit. The test source is vin(t) = sin(t) (Figure 3.69).

Figure 3.69: [test source vin with a series 1Ω resistor drives the circuit's left terminal pair; the right pair exposes the current i and the voltage v]

We make the following measurements:
• With nothing attached to the terminals on the right, the voltage v(t) equals (1/√2) cos(t + π/4).
• When a wire is placed across the terminals on the right, the current i(t) was −sin(t).
a) What is the impedance seen from the terminals on the right?
b) Find the voltage v(t) if a current source is attached to the terminals on the right so that i(t) = sin(t).

Problem 3.27: Linear, Time-Invariant Systems
For a system to be completely characterized by a transfer function, it needs not only to be linear, but also to be time-invariant. A system is said to be time-invariant if delaying the input delays the output by the same amount. Mathematically, if S(x(t)) = y(t), meaning y(t) is the output of a system S(•) when x(t) is the input, S(•) is time-invariant if S(x(t − τ)) = y(t − τ) for all delays τ and all inputs x(t). Note that both linear and nonlinear systems have this property. For example, a system that squares its input is time-invariant.
a) Show that if a circuit has fixed circuit elements (their values don't change over time), its input-output relationship is time-invariant. Hint: Consider the differential equation that describes a circuit's input-output relationship. What is its general form? Examine the derivative(s) of delayed signals.
b) Show that impedances cannot characterize time-varying circuit elements (R, L, and C). Consequently, show that linear, time-varying systems do not have a transfer function.
c) Determine the linearity and time-invariance of the following. Find the transfer function of the linear, time-invariant (LTI) one(s).
i) diode
ii) y(t) = x(t) sin(2πf0 t)
iii) y(t) = x(t − τ0)
iv) y(t) = x(t) + N(t)

Problem 3.28: Long and Sleepless Nights
Sammy went to lab after a long, sleepless night, and constructed the circuit shown in Figure 3.70. He cannot remember what the circuit, represented by the impedance Z, was. Clearly, this forgotten circuit is important as the output is the current passing through it.

Figure 3.70: [source vin drives an R-C network; the unknown impedance Z carries the output current iout]

a) What is the Thévenin equivalent circuit seen by the impedance?
b) In searching his notes, Sammy finds that the circuit is to realize the transfer function H(f) = 1/(j10πf + 2). Find the impedance Z as well as values for the other circuit elements.

Problem 3.29: A Testing Circuit
The simple circuit here (Figure 3.71) was given on a test.

Figure 3.71: [source vin drives the impedance Z in series with a 1Ω resistor; vout is taken across the resistor, and i(t) is the source current]
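The time-invariance definition in Problem 3.27 can be tested numerically: apply the system to a delayed input and compare against the delayed output. The sketch below (arbitrary test signal, delay, and evaluation time) shows that y(t) = x(t) sin(2πf0 t) fails the test, because its multiplier changes with time even though the system is linear.

```python
# Numeric check for Problem 3.27(c), case y(t) = x(t)*sin(2*pi*f0*t):
# the system is linear but NOT time-invariant, because its multiplier
# depends on t. Input signal, delay, and test time are arbitrary choices.
import math

f0, tau = 1.0, 0.3
x = lambda t: math.cos(2 * math.pi * 0.25 * t)            # arbitrary input
S = lambda xf, t: xf(t) * math.sin(2 * math.pi * f0 * t)  # the system

t = 0.7
response_to_delayed_input = S(lambda u: x(u - tau), t)    # S{x(t - tau)}
delayed_response = S(x, t - tau)                          # y(t - tau)
time_invariant = abs(response_to_delayed_input - delayed_response) < 1e-12
```

A single counterexample time t suffices to disprove time-invariance; proving it, as in part (a), requires the differential-equation argument.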

When the voltage source is √5 sin(t), the current i(t) = √2 cos(t − arctan(2) − π/4).
a) What is the voltage vout(t)?
b) What is the impedance Z at the frequency of the source?

Problem 3.30: Black-Box Circuit
You are given a circuit (Figure 3.72) that has two terminals for attaching circuit elements. When you attach a voltage source equaling sin(t) to the terminals, the current equals 4 sin(t + π/4) − 2 sin(4t). When no source is attached (open-circuited terminals), the voltage across the terminals has the form A sin(4t + φ).

Figure 3.72: [two-terminal circuit; the terminal voltage is v(t) and the terminal current is i(t)]

a) What will the terminal current be when you replace the source by a short circuit?
b) If you were to build a circuit that was identical (from the viewpoint of the terminals) to the given one, what would your circuit be?
c) For your circuit, what are A and φ?

Problem 3.31: Solving a Mystery Circuit
Sammy must determine as much as he can about a mystery circuit by attaching elements to the terminals and measuring the resulting voltage. When he attaches a 1Ω resistor to the circuit's terminals, he measures the voltage across the terminals to be 3 sin(t). When he attaches a 1F capacitor across the terminals, the voltage is now 3×√2 sin(t − π/4).
a) What voltage should he measure when he attaches nothing to the mystery circuit?

b) What voltage should Sammy measure if he doubled the size of the capacitor to 2 F and attached it to the circuit?

Problem 3.32: Find the Load Impedance
The depicted circuit (Figure 3.73) has a transfer function between the output voltage and the source equal to

H(f) = −8π²f² / (8π²f² + 4 + j6πf)

Figure 3.73: [vin drives a network containing elements with values 1/2 and 4; the load ZL appears across the output vout]

a) Sketch the magnitude and phase of the transfer function.
b) At what frequency does the phase equal π/2?
c) Find a circuit that corresponds to this load impedance. Is your answer unique? If so, show it to be so; if not, give another example.

Problem 3.33: Analog Hum Rejection
Hum refers to corruption from wall socket power that frequently sneaks into circuits. Hum gets its name because it sounds like a persistent humming sound. We want to find a circuit that will remove hum from any signal. A Rice engineer suggests using a simple voltage divider circuit (Figure 3.74) consisting of two series impedances.

Figure 3.74: [voltage divider: Vin drives Z1 in series with Z2; Vout is taken across Z2]

a) The impedance Z1 is a resistor. The Rice engineer must decide between two circuits (Figure 3.75) for the impedance Z2. Which of these will work?
b) Picking the one circuit that works, find the transfer function and choose circuit element values that will remove hum.
c) Sketch the magnitude of the resulting frequency response.

Figure 3.75: [the two candidate circuits for Z2: a series LC combination and a parallel LC combination]

Problem 3.34: An Interesting Circuit

Figure 3.76: [current source iin drives a network with element values 6, 2, 3, and 1; the output voltage is vout]

a) For the circuit shown in Figure 3.76, find the transfer function.
b) What is the output voltage when the input has the form iin = 5 sin(2000πt)?

Problem 3.35: A Simple Circuit
You are given the depicted circuit (Figure 3.77).
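For Problem 3.33, the divider H(f) = Z2/(Z1 + Z2) with Z1 = R nulls a chosen frequency when Z2 is a series LC branch, since a series LC has zero impedance at its resonance. The sketch below uses illustrative component values (assumptions, not given in the problem) chosen so the notch lands at 60 Hz hum.

```python
# Sketch for Problem 3.33: with Z1 = R, choosing Z2 as a series LC
# branch nulls the divider output at f0 = 1/(2*pi*sqrt(L*C)), because a
# series LC has zero impedance at resonance. Values are illustrative,
# picked so the notch sits at 60 Hz.
import math

L = 1.0                                     # henries (illustrative)
C = 1.0 / ((2 * math.pi * 60.0) ** 2 * L)   # sets resonance at 60 Hz
R = 1e3                                     # Z1 is a plain resistor

def H(f):
    w = 2 * math.pi * f
    Z2 = 1j * (w * L - 1.0 / (w * C))       # series L and C
    return Z2 / (R + Z2)

f0 = 1.0 / (2 * math.pi * math.sqrt(L * C))
```

Away from resonance the LC branch's impedance grows, so signals well below or above 60 Hz pass with nearly unity gain, which is the shape part (c) asks you to sketch.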

Figure 3.77: [the source iin drives a network of unit-valued elements; the output is vout]

a) What is the transfer function between the source and the output voltage?
b) What will the voltage be when the source equals sin(t)?
c) Many function generators produce a constant offset in addition to a sinusoid. If the source equals 1 + sin(t), what is the output voltage?

Problem 3.36: An Interesting and Useful Circuit
The depicted circuit (Figure 3.78) has interesting properties, which are exploited in high-performance oscilloscopes. A probe is a device to attach an oscilloscope to a circuit.

Figure 3.78: [probe: R1 in parallel with C1, in series with the oscilloscope input; the oscilloscope input impedance is R2 in parallel with C2; the output is vout]

The portion of the circuit labeled "Oscilloscope" represents the scope's input impedance. R2 = 1MΩ and C2 = 30pF (note the label under the channel 1 input in the lab's oscilloscopes).
a) Suppose for a moment that the probe is merely a wire and that the oscilloscope is attached to a circuit that has a resistive Thévenin equivalent impedance. What would be the effect of the oscilloscope's input impedance on measured voltages?
b) Using the node method, find the transfer function relating the indicated voltage to the source when the probe is used.

c) Plot the magnitude and phase of this transfer function when R1 = 9MΩ and C1 = ….
d) For a particular relationship among the element values, the transfer function is quite simple. Find that relationship and describe what is so special about it.
e) The arrow through C1 indicates that its value can be varied. Select the value for this capacitor to make the special relationship valid. What is the impedance seen by the circuit being measured for this special value?

Problem 3.37: A Circuit Problem
You are given the depicted circuit (Figure 3.79).

Figure 3.79: [source vin drives a network with element values 1/6, 4, 1/3, and 2; v is the indicated voltage]

a) Find the differential equation relating the output voltage to the source.
b) What is the impedance seen by the capacitor?

Problem 3.38: Analog Computers
Because the differential equations arising in circuits resemble those that describe mechanical motion, we can use circuit models to describe mechanical systems. An ELEC 241 student wants to understand the suspension system on his car. A well-designed suspension system will smooth out bumpy roads, reducing the car's vertical motion. If the bumps are very gradual (think of a hill as a large but very gradual bump), the car's vertical motion should follow that of the road. Without a suspension, the car's body moves in concert with the bumps in the road. The student wants to find a simple circuit that will model the car's motion.
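The compensation condition sought in Problem 3.36(d)-(e) can be sketched numerically: each arm of the probe divider is a resistor in parallel with a capacitor, and when R1·C1 = R2·C2 the divider ratio R2/(R1+R2) holds at every frequency. R2 and C2 below follow the problem statement; C1 is computed from the compensation condition (about 10/3 pF), which is an assumption about the answer rather than a value given in the text.

```python
# Sketch for Problem 3.36(d)-(e): when R1*C1 = R2*C2 the R-C divider's
# transfer function is the frequency-independent constant R2/(R1+R2).
# R2, C2 follow the problem statement; C1 comes from the compensation
# condition (an assumed answer, about 3.3 pF).
import math

R1, R2, C2 = 9e6, 1e6, 30e-12
C1 = R2 * C2 / R1        # compensation: R1*C1 = R2*C2

def Zpar(R, C, f):
    # Impedance of R in parallel with C at frequency f.
    w = 2 * math.pi * f
    return R / (1 + 1j * w * R * C)

def H(f):
    return Zpar(R2, C2, f) / (Zpar(R1, C1, f) + Zpar(R2, C2, f))
```

With these values the divider attenuates by exactly 10 at all frequencies, which is why such "10x" compensated probes are standard lab equipment.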

He is trying to decide between two circuit models (Figure 3.80). Here, road and car displacements are represented by the voltages vroad(t) and vcar(t), respectively.

Figure 3.80: [two candidate circuits, each with vroad driving unit-valued elements and vcar as the output]

a) Which circuit would you pick? Why?
b) For the circuit you picked, what will be the amplitude of the car's motion if the road has a displacement given by vroad(t) = 1 + sin(2t)?

Problem 3.39: Transfer Functions and Circuits
You are given the depicted network (Figure 3.81).

Figure 3.81: [source vin drives a network with element values 1/4, 2, and 3/4; the output is vout]

a) Find the transfer function between Vin and Vout.
b) Sketch the magnitude and phase of your transfer function. Label important frequency, amplitude and phase values.
c) Find vout(t) when vin(t) = sin(t/2 + π/4).

Problem 3.40: Fun in the Lab
You are given an unopenable box that has two terminals sticking out. You assume the box contains a circuit. You measure the voltage sin(t + π/4) across the terminals when nothing is connected to them and the current √2 cos(t) when you place a wire across the terminals.
a) Find a circuit that has these characteristics.
b) You attach a 1 H inductor across the terminals. What voltage do you measure?

Problem 3.41: Dependent Sources
Find the voltage vout in each of the depicted circuits (Figure 3.82).

Figure 3.82: (a) circuit a [current source iin drives R1, R2, and RL, with a current-controlled current source βib]; (b) circuit b [source drives elements with values 1/3, 3, 1, and −6, with a dependent source 3i]

Problem 3.42: Operational Amplifiers
Find the transfer function between the source voltage(s) and the indicated output voltage for the circuits shown in Figure 3.83.

Figure 3.83: (a) op-amp a [source vin with resistors R1 and R2]; (b) op-amp b [sources V(1)in and V(2)in with resistors R1, R2, R3, R4; output Vout]; (c) op-amp c [source Vin with elements of values 5, 10, and 5; output Vout]; (d) op-amp d [source vin with elements of values 1, 1, 2, 1, 2, and 4; output vout]

Problem 3.43: Op-Amp Circuit
The following circuit (Figure 3.84) is claimed to serve a useful purpose.

Figure 3.84: [Vin drives an R-C network loaded by RL; the output is the current Iout]

a) What is the transfer function relating the complex amplitude of the output signal, the current Iout, to the complex amplitude of the input, the voltage Vin?
b) What equivalent circuit does the load resistor RL see?
c) Find the output current when vin = V0 e^{−t/τ}.

Problem 3.44: Why Op-Amps are Useful
The circuit (Figure 3.85) of a cascade of op-amp circuits illustrates the reason why op-amp realizations of transfer functions are so useful.

Figure 3.85: [two cascaded inverting op-amp stages built from the impedances Z1, Z2 and Z3, Z4; source Vin, output Vout]

a) Find the transfer function relating the complex amplitude of the voltage vout(t) to the source. Show that this transfer function equals the product of each stage's transfer function.
b) What is the load impedance appearing across the first op-amp's output?
c) Figure 3.86 illustrates that sometimes designs can go wrong. Find the transfer function for this op-amp circuit (Figure 3.86), and then show that it can't work! Why can't it?

Figure 3.86: [op-amp circuit with a 1 µF capacitor, a 1 kΩ resistor, a 10 nF capacitor, and a 4.7 kΩ resistor]

Problem 3.45: Operational Amplifiers
Consider the depicted circuit (Figure 3.87).

Figure 3.87: [two-stage op-amp circuit with R1, C1, R2, R3, R4, and C2; source Vin, output Vout]

a) Find the transfer function relating the voltage vout(t) to the source.
b) In particular, R1 = 530Ω, C1 = 1µF, R2 = 5.3kΩ, C2 = 0.01µF, and R3 = R4 = 5.3kΩ. Characterize the resulting transfer function and determine what use this circuit might have.

Problem 3.46: Designing a Bandpass Filter
We want to design a bandpass filter that has the transfer function

H(f) = 10 (j2πf) / ((j(f/fl) + 1)(j(f/fh) + 1))

Here, fl is the cutoff frequency of the low-frequency edge of the passband and fh is the cutoff frequency of the high-frequency edge. We want fl = 1 kHz and fh = 10 kHz.
a) Plot the magnitude and phase of this frequency response. Label important amplitude and phase values and the frequencies at which they occur.
b) Design a bandpass filter that meets these specifications. Specify component values.

Problem 3.47: Pre-emphasis or De-emphasis?
In audio applications, prior to analog-to-digital conversion, signals are passed through what is known as a pre-emphasis circuit that leaves the low frequencies alone but provides increasing gain at increasingly higher frequencies beyond some frequency f0. De-emphasis circuits do the opposite and are applied after digital-to-analog conversion. After pre-emphasis, digitization, conversion back to analog and de-emphasis, the signal's spectrum should be what it was. The op-amp circuit here (Figure 3.88) has been designed for pre-emphasis or de-emphasis (Samantha can't recall which).

Figure 3.88: [inverting op-amp circuit with RF = 1 kΩ, R = 1 kΩ, and C = 80 nF]

a) Is this a pre-emphasis or de-emphasis circuit? Find the frequency f0 that defines the transition from low to high frequencies.
b) What is the circuit's output when the input voltage is sin(2πft), with f = 4 kHz?
c) What circuit could perform the opposite function to your answer for the first part?

Problem 3.48: Active Filter
Find the transfer function of the depicted active filter (Figure 3.89).

Figure 3.89: [active filter built around two op-amps with resistors R, R1, R2, Rf and capacitors C1, C2]

Problem 3.49: This is a filter?
You are given a circuit (Figure 3.90).

Figure 3.90: [op-amp circuit with Rin, R1, R2, and C; source Vin, output Vout]

a) What is this circuit's transfer function? Plot the magnitude and phase.
b) If the input signal is the sinusoid sin(2πf0t), what will the output be when f0 is larger than the filter's cutoff frequency?

Problem 3.50: Optical Receivers
In your optical telephone, the receiver circuit had the form shown (Figure 3.91).

Figure 3.91: [photodiode drives an inverting op-amp with feedback impedance Zf; the output is Vout]

Thus. a) Find the transfer function relating light intensity to lowpass lter? c) A clever engineer suggests an alternative circuit (Figure 3. If not. Available for free at Connexions <http://cnx. the op-amp stage serves to boost the signal and to lter out-of-band noise. Assume the diode is ideal. If it does. As is often the case in this crucial stage. representing RU Electronics. converting light energy into a voltage vout .114 CHAPTER 3. have discovered the schematic and want to gure out the intended application. we. vout .92) to accomplish the same task. producing a current proportional to the light intesity falling upon it. The photodiode acts as a current source. show why it does not work. Zf be so that the transducer acts as a 5 kHz b) What should the circuit realizing the feedback impedance Zin that accomplishes the lowpass ltering Zin – + 1 Vout Figure 3. the signals are small and noise can be a problem.93) has been developed by the TBBG Electronics design group.9> .51: Reverse Engineering The depicted circuit (Figure 3. ANALOG SIGNAL PROCESSING Zf – + Vout Figure This circuit served as a transducer.92 Problem 3. nd the impedance task. Determine whether the idea works or not. They are trying to keep its use secret.

Figure 3.93: [two-stage op-amp circuit with R1 = 1 kΩ, R2 = 1 kΩ, C = 31.8 nF, and a diode; source Vin, output Vout]

a) Assuming the diode is a short-circuit (it has been removed from the circuit), what is the circuit's transfer function?
b) With the diode in place, what is the circuit's output when the input voltage is sin(2πf0t)?
c) What function might this circuit have?

Solutions to Exercises in Chapter 3

Solution (p. 40): KCL says that the sum of currents entering or leaving a node must be zero. If we consider two nodes together as a "supernode," KCL applies as well to currents entering the combination. Since no currents enter an entire circuit, the sum of all the KCL equations must be zero. We can combine all but one node in a circuit into a supernode; KCL for the supernode must be the negative of the remaining node's KCL equation. Consequently, specifying n−1 KCL equations always specifies the remaining one. If we had a two-node circuit, the KCL equation of one node must be the negative of the other.

Solution (p. 45): One kilowatt-hour equals 3,600,000 watt-seconds, which indeed directly corresponds to 3,600,000 joules.

Solution (p. 46): The power consumed by the resistor R1 can be expressed as (vin − vout) iout = (R1/(R1 + R2)²) vin².

Solution (p. 47): We have (1/(R1+R2)) vin² = (R1/(R1+R2)²) vin² + (R2/(R1+R2)²) vin²: the total power equals the sum of the powers dissipated in the component resistors.

Solution (p. 47): The circuit serves as an amplifier having a gain of R2/(R1+R2).

Solution (p. 49): Replacing the current source by a voltage source does not change the fact that the voltages are identical. Consequently, vin = R2 iout, or iout = vin/R2. This result does not depend on the resistor R1, which means that we simply have a resistor (R2) across a voltage source. The two-resistor circuit has no apparent use.

Solution (p. 51): Req = (R2 RL)/(R2 + RL) = RL/(1 + RL/R2). Thus, a 10% change means that the ratio RL/R2 must be less than 0.1; a 1% change means that RL/R2 must be less than 0.01.

Solution (p. 53): In a series combination of resistors, the current is the same in each; in a parallel combination, the voltage is the same. For a series combination, the equivalent resistance is the sum of the resistances, which will be larger than any component resistor's value. For a parallel combination, the equivalent conductance is the sum of the component conductances, which is larger than any component conductance; the equivalent resistance is therefore smaller than any component resistance.

Solution (p. 55): veq = (R2/(R1+R2)) vin and Req = R3 + R1 ∥ R2.

Solution (p. 56): voc = (R2/(R1+R2)) vin and isc = −vin/R1 (resistor R2 is shorted out in this case). Thus, veq = (R2/(R1+R2)) vin and Req = (R1 R2)/(R1 + R2).

Solution (p. 63): ieq = (R1/(R1+R2)) iin and Req = R3 ∥ (R1 + R2).

Solution (p. 63): Division by j2πf arises from integrating a complex exponential: (1/(j2πf)) V e^{j2πft} ⇔ ∫ V e^{j2πft} dt.

Solution (p. 63): For maximum power dissipation, the imaginary part of complex power should be zero. As the complex power is given by V I* = |V||I| e^{j(φ−θ)}, zero imaginary part occurs when the phases of the voltage and current agree.
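The Thévenin reductions quoted in these solutions are easy to verify numerically. The sketch below (arbitrary illustrative element values) checks that a loaded divider's terminal behavior matches veq = R2/(R1+R2)·vin in series with Req = R1R2/(R1+R2).

```python
# Numeric check of the Thevenin reduction quoted in these solutions:
# a source vin in series with R1, with R2 across the output terminals,
# is equivalent to veq = vin*R2/(R1+R2) in series with
# Req = R1*R2/(R1+R2). Element values are arbitrary choices.
vin, R1, R2 = 10.0, 4.0, 12.0
veq = vin * R2 / (R1 + R2)     # expected open-circuit voltage
Req = R1 * R2 / (R1 + R2)      # expected equivalent resistance

def terminal_voltage(i_load):
    # Node equation for the original circuit when a current i_load is
    # drawn from the terminal node: (v - vin)/R1 + v/R2 + i_load = 0.
    return (vin / R1 - i_load) / (1.0 / R1 + 1.0 / R2)
```

For any drawn current, the exact node-equation solution agrees with the straight line v = veq − Req·i, which is precisely what the Thévenin equivalent asserts.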

Solution (p. 68): Pave = Vrms Irms cos(φ − θ). The cosine term is known as the power factor.

Solution (p. 74): The key notion is writing the imaginary part as the difference between a complex exponential and its complex conjugate:

Im(V e^{j2πft}) = (V e^{j2πft} − V* e^{−(j2πft)}) / 2j   (3.35)

The response to V e^{j2πft} is V H(f) e^{j2πft}, which means the response to V* e^{−(j2πft)} is V* H(−f) e^{−(j2πft)}. As H(−f) = H(f)*, the Superposition Principle says that the output to the imaginary part is Im(V H(f) e^{j2πft}). The same argument holds for the real part: Re(V e^{j2πft}) → Re(V H(f) e^{j2πft}).

Solution (p. 76): The ratio between adjacent values is about √2.

Solution (p. 83): Not necessarily, especially if we desire individual knobs for adjusting the gain and the cutoff frequency.

Solution (p. 83): To find the equivalent resistance, we need to find the current flowing through the voltage source. This current equals the current we have just found plus the current flowing through the other vertical 1Ω resistor. That current equals e1/1 = (6/13) vin, making the total current through the voltage source (flowing out of it) (11/13) vin. Thus, the equivalent resistance is 13/11 Ω.


Chapter 4 Frequency Domain

4.1 Introduction to the Frequency Domain [1]

In developing ways of analyzing linear circuits, we invented the impedance method because it made solving circuits easier. Along the way, we developed the notion of a circuit's frequency response or transfer function. This notion, which also applies to all linear, time-invariant systems, describes how the circuit responds to a sinusoidal input when we express it in terms of a complex exponential. We also learned the Superposition Principle for linear systems: the system's output to an input consisting of a sum of two signals is the sum of the system's outputs to each individual component.

The study of the frequency domain combines these two notions (a system's sinusoidal response is easy to find, and a linear system's output to a sum of inputs is the sum of the individual outputs) to develop the crucial idea of a signal's spectrum. We begin by finding that the class of signals that can be represented as a sum of sinusoids is very large. In fact, all signals can be expressed as a superposition of sinusoids.

As this story unfolds, we'll see that information systems rely heavily on spectral ideas. For example, radio, television, and cellular telephones transmit over different portions of the spectrum. The spectrum is so important that communications systems are regulated as to which portions of the spectrum they can use, by the Federal Communications Commission in the United States and by International Treaty for the world (see Frequency Allocations (Section 7.3)). Calculating the spectrum is easy: the Fourier transform defines how we can find a signal's spectrum.

4.2 Complex Fourier Series [2]

In an earlier module (Exercise 2.1), we showed that a square wave could be expressed as a superposition of pulses. As useful as this decomposition was in this example, it does not generalize well to other periodic signals: How can a superposition of pulses equal a smooth signal like a sinusoid? Because of the importance of sinusoids to linear systems, you might wonder whether they could be added together to represent a large number of periodic signals. You would be right and in good company as well. Euler [3] and Gauss [4] in particular worried about this problem, and Jean Baptiste Fourier [5] got the credit even though tough mathematical issues were not settled until later. They worked on what is now known as the Fourier series: representing any periodic signal as a superposition of sinusoids. But the Fourier series goes well beyond being another signal decomposition method. Rather, the Fourier series begins our journey to appreciate how a signal can be described in either the time domain or the frequency domain with no compromise.

[1] This content is available online at <http://cnx.org/content/m0038/2.10/>.
[2] This content is available online at <http://cnx.org/content/m10687/2.31/>.
[3] http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Euler.html
[4] http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Gauss.html
[5] http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Fourier.html

Let s(t) be a periodic signal with period T. We want to show that periodic signals, even those that have constant-valued segments like a square wave, can be expressed as a sum of harmonically related sine waves: sinusoids having frequencies that are integer multiples of the fundamental frequency. Because the signal has period T, the fundamental frequency is 1/T. The complex Fourier series expresses the signal as a superposition of complex exponentials having frequencies k/T, k = {..., −1, 0, 1, ...}:

s(t) = Σ_{k=−∞}^{∞} ck e^{j2πkt/T}   (4.1)

with ck = (1/2)(ak − jbk). The real and imaginary parts of the Fourier coefficients ck are written in this unusual way for convenience in defining the classic Fourier series. The zeroth coefficient equals the signal's average value and is real-valued for real-valued signals: c0 = a0. The family of functions e^{j2πkt/T} are called basis functions and form the foundation of the Fourier series. No matter what the periodic signal might be, these functions are always present and form the representation's building blocks. They depend on the signal period T and are indexed by k.

Key point: Assuming we know the period, knowing the Fourier coefficients is equivalent to knowing the signal. Thus, it makes no difference if we have a time-domain or a frequency-domain characterization of the signal.

Exercise 4.2.1 (Solution on p. 167.)
What is the complex Fourier series for a sinusoid?

To find the Fourier coefficients, we note the orthogonality property of harmonically related complex exponentials:

∫_0^T e^{j2πkt/T} e^{−j2πlt/T} dt = T if k = l, and 0 if k ≠ l   (4.2)

Assuming for the moment that the complex Fourier series "works," we can find a signal's complex Fourier coefficients, its spectrum, by exploiting this orthogonality. Simply multiply each side of (4.1) by e^{−j2πlt/T} and integrate over the interval [0, T]:

ck = (1/T) ∫_0^T s(t) e^{−j2πkt/T} dt
c0 = (1/T) ∫_0^T s(t) dt   (4.3)

note: When integrating an expression containing j, treat it just like any other constant.
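Exercise 4.2.1 can be previewed numerically using (4.3): for s(t) = cos(2πt/T), direct integration gives c1 = c−1 = 1/2 and every other coefficient zero, matching Euler's formula cos(x) = (e^{jx} + e^{−jx})/2. The sketch below approximates the integral with a Riemann sum, which is essentially exact for band-limited periodic signals.

```python
# Numeric evaluation of (4.3) for a sinusoid: the complex Fourier series
# of cos(2*pi*t/T) has c_1 = c_{-1} = 1/2 and all other coefficients 0.
import cmath, math

T, N = 2.0, 1024          # period and number of integration samples

def fourier_coeff(s, k):
    # Riemann-sum approximation of ck = (1/T) * integral over one period
    # of s(t) * exp(-j*2*pi*k*t/T).
    dt = T / N
    return sum(s(n * dt) * cmath.exp(-2j * math.pi * k * n * dt / T)
               for n in range(N)) * dt / T

s = lambda t: math.cos(2 * math.pi * t / T)
c = {k: fourier_coeff(s, k) for k in range(-2, 3)}
```

The same routine works for any periodic signal sampled densely enough, and is a handy check on hand-computed spectra.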
Example 4.1
Finding the Fourier series coefficients for the square wave sqT(t) is very simple. Mathematically, this signal can be expressed as

sqT(t) = { 1 if 0 < t < T/2; −1 if T/2 < t < T }

The expression for the Fourier coefficients has the form

ck = (1/T) ∫_0^{T/2} e^{−j2πkt/T} dt − (1/T) ∫_{T/2}^T e^{−j2πkt/T} dt   (4.4)
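Before carrying out the integral in (4.4) analytically, the answer can be previewed numerically: the square wave's coefficients come out as 2/(jπk) for odd k and 0 for even nonzero k. This sketch evaluates the defining integral with a midpoint Riemann sum.

```python
# Numeric preview of the square-wave coefficients: c_k = 2/(j*pi*k) for
# odd k and 0 for even k (k != 0), computed by midpoint integration.
import cmath, math

T, N = 1.0, 20000   # period and number of midpoint samples

def sq(t):
    # One period of the square wave: +1 on (0, T/2), -1 on (T/2, T).
    return 1.0 if (t % T) < T / 2 else -1.0

def coeff(k):
    dt = T / N
    return sum(sq((n + 0.5) * dt) * cmath.exp(-2j * math.pi * k * (n + 0.5) * dt / T)
               for n in range(N)) * dt / T
```

Midpoint sampling keeps the jump discontinuities between sample points, so the numerical coefficients agree with the closed form to a few parts in ten thousand.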

The two integrals are very similar, one equaling the negative of the other. The final expression becomes

ck = (−2/(j2πk)) ((−1)^k − 1) = { 2/(jπk) if k odd; 0 if k even }   (4.5)

Thus, the complex Fourier series for the square wave is

sq(t) = Σ_{k ∈ {..., −3, −1, 1, 3, ...}} (2/(jπk)) e^{j2πkt/T}   (4.6)

Consequently, the square wave equals a sum of complex exponentials, but only those having frequencies equal to odd multiples of the fundamental frequency 1/T. The coefficients decay slowly as the frequency index k increases. This index corresponds to the k-th harmonic of the signal's period.

A signal's Fourier series spectrum ck has interesting properties.

Property 4.1: If s(t) is real, ck = c−k* (real-valued periodic signals have conjugate-symmetric spectra). This result follows from the integral that calculates the ck from the signal. Furthermore, this result means that Re(ck) = Re(c−k): the real part of the Fourier coefficients for real-valued signals is even. Similarly, Im(ck) = −Im(c−k): the imaginary parts of the Fourier coefficients have odd symmetry. Consequently, if you are given the Fourier coefficients for positive indices and zero and are told the signal is real-valued, you can find the negative-indexed coefficients, hence the entire spectrum. This kind of symmetry, ck = c−k*, is known as conjugate symmetry. A real-valued Fourier expansion amounts to an expansion in terms of only the coefficients for nonnegative indices.

Property 4.2: If s(−t) = s(t), which says the signal has even symmetry about the origin, c−k = ck. Given the previous property for real-valued signals, the Fourier coefficients of even signals are real-valued.

Property 4.3: If s(−t) = −s(t), which says the signal has odd symmetry, c−k = −ck. Therefore, the Fourier coefficients are purely imaginary. The square wave is a great example of an odd-symmetric signal.

Property 4.4: The spectral coefficients for a periodic signal delayed by τ, s(t − τ), are ck e^{−j2πkτ/T}, where ck denotes the spectrum of s(t). Delaying a signal by τ seconds results in a spectrum having a linear phase shift of −2πkτ/T in comparison to the spectrum of the undelayed signal. Note that the spectral magnitude is unaffected. Showing this property is easy.

Proof:
(1/T) ∫_0^T s(t − τ) e^{−j2πkt/T} dt = (1/T) ∫_{−τ}^{T−τ} s(t) e^{−j2πk(t+τ)/T} dt = (1/T) e^{−j2πkτ/T} ∫_{−τ}^{T−τ} s(t) e^{−j2πkt/T} dt   (4.7)

Note that the range of integration extends over a period of the integrand. Consequently, it should not matter how we integrate over a period, which means that ∫_{−τ}^{T−τ} (·) dt = ∫_0^T (·) dt, and we have our result.

Parseval's t The pulse width is is given by ∆. The complex Fourier spectrum of this signal ck = 1 T Ae− 0 j2πkt T dt = − j2πk∆ A e− T − 1 j2πk At this point. one of the most important results in signal analysis.1: Periodic pulse signal. consequently.9> .122 CHAPTER 4. p(t) A … ∆ ∆ … T Figure 4. to plot it we need to calculate its magnitude and phase. FREQUENCY DOMAIN The complex Fourier series obeys or the frequency domain. we nd that the coecients do indeed have conjugate symmetry: ck = c−k ∗ .9) Because this signal is real-valued. ∆ and the amplitude A.8) This result is a (simpler) re-expression of how to calculate a signal's power than with the real-valued Fourier series expression for power. Let's calculate the Fourier coecients of the periodic pulse signal shown here (Figure 4. This general mathematical result says you can calculate a signal's power in either the time domain Theorem 4. Because the spectrum is complex valued. we can simply express the Fourier series coecients for our pulse sequence. The periodic pulse signal has neither even nor odd symmetry. 1 T T ∞ s (t) dt = 0 k=−∞ 2 (|ck |) 2 (4. simplifying this expression requires knowing an interesting property. no additional symmetry exists in the spectrum. |ck | = A| sin πk∆ T πk sin | πk∆ T (4.10) ∠ (ck ) = − πk∆ + πneg T πk sign (k) Available for free at Connexions <http://cnx. the period T.1: Parseval's Theorem Average power calculated in the time domain equals the power calculated in the frequency domain. jπk∆ T ck = Ae− sin πk∆ T πk (4. 1 − e−(jθ) = e− 2 jθ e 2 − e− 2 jθ jθ = e− 2 2jsin jθ θ 2 Armed with this result.

In Available for free at Connexions <http://cnx. does your answer make sense? The phase plot shown in Figure 4. magnitudes must be positive. k T ).2 and A = 1. −π in the phase between indices 4 and 6. Here T = 0.123 The function neg (·) equals -1 if its argument is negative and zero otherwise. The somewhat complicated expression for the phase results because the sine term can be negative. leaving the occasional negative values to be accounted for as a phase shift of π. 2π can be added to a phase We see that at frequency index 4 The phase at index 5 is undened because the magnitude is zero in this we expect a shift of the formula is a little less than −π . Thus.10) suggests. 167. ∆ Comparing this term with that predicted from delaying a signal. Periodic Pulse Sequence Figure 4. Advancing the signal by this amount centers the pulse about the origin. with a jump of the sinusoidal term changes sign.) What is the value of c0 ? Recalling that this spectral coecient corresponds to the signal's average value. a delay of 2 is present in our signal. Thus. There. which in Also note the presence of a linear phase term (the rst term in ∠ (ck ) is proportional to frequency turn means that its spectrum is real-valued. At index 6. Exercise 4.2.2 (Solution on p. the formula and the plot do agree. leaving an even signal. Thus. the formula suggests that the phase of the linear term should be less than (more negative) than In addition. We must realize that any integer multiple of at each frequency the phase is nearly π every time without aecting the value of the complex spectrum. Because we can add 2π without aecting the value of the spectrum at index 6.2: The magnitude and phase of the periodic pulse sequence's spectrum is shown for positive∆ frequency indices.9> . −π . our calculated spectrum is consistent with the properties of the Fourier spectrum. the result is a slightly negative number as shown. the phase has a linear component. 
the phase value predicted by − (2π).2 (Periodic Pulse Sequence) requires some explanation as it does not seem to agree with what (4.

Available for free at Connexions <http://cnx. Equating the classic Fourier series (4. properties of sinusoids.1). we can nd the Fourier coecients using the frequency. each representing a signal's Fourier coecients. ak and bk . values are usually conned to the range some (possibly negative) multiple of [−π. are These identities allow you to substitute a sum of sines and/or cosines for a product of them. Just as with the complex Fourier series. 6 The classic Fourier series as derived originally expressed a periodic signal (period T ) in terms of harmonically ∞ s (t) = a0 + k=1 ak cos 2πkt T ∞ + k=1 bk sin 2πkt T (4. Note that the cosine and sine of harmonically related Each term in the sum can be integrating by noticing one of two important properties of sinusoids.11) to the complex Fourier series (4. • • 6 The integral of a sinusoid over an The integral of the integer number of periods equals zero.) Exercise 4.1 Derive this relationship between the coecients of the two Fourier series. sin (α) sin (β) = cos (α) cos (β) = sin (α) cos (β) = 1 2 (cos (α − β) − cos (α + β)) 1 2 (cos (α + β) + cos (α − β)) 1 2 (sin (α + β) + sin (α − β)) (4. even the orthogonality same (4.124 CHAPTER 4. k ∈ Z l ∈ Z sin 0 2πkt T sin dt = (k = l) and (k = 0) and (l = 0) (k = l) or (k = 0 = l) (k = l) and (k = 0) and (l = 0) k=0=l k=l  0        T 2 T cos 0 2πkt T cos 2πlt T if if if dt = T 0 These orthogonality relations follow from the following important trigonometric identities.12) T sin 0 T 2πkt T 2πlt T cos   2πlt T T 2 if if dt = 0 . The The complex Fourier series and the sine-cosine series are identical.3.3 Classic Fourier Series related sines and cosines. 4.11) spectrum. 167. This content is available online at <http://cnx. square of a unit-amplitude sinusoid over a period T equals T 2. express the real and imaginary parts respectively of the ck of the complex Fourier series express the spectrum as a magnitude and spectrum while the coecients phase.23/>.9> . 
an extra factor of two and complex conjugate become necessary to relate the Fourier coecients in each. ck = 1 (ak − jbk ) 2 (Solution on p. π) by adding 2π to each phase value. FREQUENCY DOMAIN phase calculations like those made in MATLAB.

Consequently. the integration will sift out all but T the term involving al .3 Example 4.14) The rst and third terms are zero. l = 0 All of the Fourier coecients can be found similarly.18)  1 2 k=1  0 Thus. we obtain a0 T . in which case we obtain al T 2 .16) bk we must calculate the integral bk = 2 T T 2 sin 0 2πt T sin 2πkt T dt (4.3.) Why? (Solution on p. which are much easier to evaluate. a0 = ak = bk = 1 T 2 T 2 T T 0 T 0 T 0 s (t) dt s (t) cos s (t) sin 2πkt T 2πkt T dt . The idea is that.9> . 167. to nd T 2 (4. (Solution on p.) What is the Fourier series for a unit-amplitude square wave? Let's nd the Fourier series representation for the half-wave rectied sinusoid. let's. T 2 0 sin 2πt T sin 2πkt T dt = = 1 2 2 0 T cos if 2π(k−1)t T − cos 2π(k+1)t T dt (4.   sin 2πt if 0 ≤ t < T s (t) =  0 if T ≤ t < T 2 Begin with the sine terms in the The expression for a0 is referred to as the average value of s (t). If k = 0 = l. for example. al = 2 T T s (t) cos 0 2πlt T dt .125 cos l harmonic 2πlt and integrate.17) Using our trigonometric identities turns our integral of a product of sinusoids into a sum of integrals of individual sinusoids.3. 167. multiply the Fourier series for a signal by the cosine of the th T s (t) cos 0 T ∞ k=1 bk 0 dt = 2πkt sin T cos l 2πlt T T a0 cos 0 2πlt dt T 2πlt T dt + ∞ k=1 ak T 0 cos 2πkt T cos 2πlt T dt + (4.2 Exercise 4. To use these. the only non-zero term in the sum results when the indices k and are equal (but not zero). because integration is linear. in the second.15) Exercise 4. otherwise b1 = 1 2 b2 = b3 = · · · = 0 Available for free at Connexions <http://cnx. k = 0 dt (4.

and the even 0 k -0. Each coecient is directly related k T . equals 1 π . here to a sinusoid having a frequency of to 1 kHz.3 (Fourier Series spectrum of a half-wave rectied sine wave). The remainder of the cosine coecients are easy to nd.21/>. the Fourier series for the half-wave rectied sinusoid has non-zero terms for the average. The average value.4 A Signal's Spectrum that the independent variable. Thus. A plot of the Fourier coecients as a function of the frequency index. domain expression is its spectrum. 4. .9> . etc. Thus. . 7 A periodic signal. Available for free at Connexions <http://cnx. 7 A signal's frequency A periodic signal can be dened either in the time domain (as a function) or in the frequency domain (as a spectrum).5 0 0 Figure 4. The word "spectrum" implies k. 4. displays the signal's spectrum. FREQUENCY DOMAIN On to the cosine terms. which corresponds to a0 . the fundamental.19) Thus. } (4. corresponds somehow to frequency.5 bk 0. such as shown in Figure 4. consists of a sum of elemental sinusoids. The index indicates the multiple of the fundamental frequency at which the signal has energy.3: k 2 4 6 8 10 The Fourier series spectrum of a half-wave rectied sinusoid is shown. but very important. aspect of the Fourier spectrum is its uniqueness: You can unambiguously nd the spectrum from the signal (decomposition (4. Fourier Series spectrum of a half-wave rectied sine wave ak 0.126 CHAPTER 4. . if we half-wave rectied a 1 kHz sinusoid. k=1 corresponds k=2 to 2 kHz. any aspect of the signal can be found from the spectrum and vice versa. but yield the complicated result   − ak =  0 2 1 π k2 −1 if k odd if k ∈ {2. such as the half-wave rectied sinusoid.15)) and the signal from the spectrum (composition). A subtle. This content is available online at <http://cnx.

A signal's instantaneous power is dened to be its square. The uniqueness property says that either domain can provide the right answer. The survivors leave a rather simple expression for the power we seek. 167. (Solution on p. For a periodic signal. For a periodic signal. the natural time interval is clearly its period.21) Exercise 4. the average power is the square of its root-mean-squared (rms) value. a better choice would be entire time or time from onset. power (s) = a0 2 + 1 2 ∞ ak 2 + bk 2 k=1 (4.1 What is the rms value of the half-wave rectied sinusoid? signal into this expression. form the signal from the spectrum and calculate the maximum.12) say that most of these crossterms integrate to zero. we want to know the (periodic) signal's maximum value.20) and thus its average power is power (s) = = rms2 (s) 1 T T 0 s2 (t) dt (4. suppose Clearly the time domain provides the answer directly.127 A fundamental aspect of solving electrical engineering problems is whether the time or frequency domain provides the most understanding of a signal's properties and the simplest way of manipulating it. we need to substitute the spectral representation of the power (s) = 1 T T ∞ a0 + 0 k=1 ak cos 2πkt T ∞ + k=1 bk sin 2πkt T 2 dt The square inside the integral will contain all possible pairwise products. To use a frequency domain approach would require us to nd the spectrum. we're back in the time domain! Another feature of a signal is its average power. for nonperiodic signals.22) Available for free at Connexions < the orthogonality properties (4. The average power is the average of the instantaneous power over some time interval. We dene the rms value of a periodic signal to be rms (s) = 1 T T 0 s2 (t) dt (4.) To nd the average power in the frequency domain. However.4.9> . As a simple example.

2 Ps(k) 0. contribution to the signal's average power.2 (Solution on p. Is this calculation most easily performed in the time or frequency domain? Exercise 4. It could well be that computing this sum is easier than integrating the signal's square.5 Fourier Series Approximation of Signals signal containing 8 It is interesting to consider the sequence of signals that we obtain as we incorporate more terms into the Fourier series approximation of the half-wave rectied sine wave (Example 4. Ps (k). deviation of a sine wave from the ideal is measured by the in the fundamental. plots each harmonic's contribution to the total power. 167.4: 2 4 6 8 10 k Power spectrum of a half-wave rectied sinusoid.) 4.5 ( Fourier Series spectrum of a half-wave rectied sine wave ) shows how this sequence of signals portrays the signal more accurately as more terms are added.128 CHAPTER 4. power spectrum total harmonic distortion. 8 This content is available online at <http://cnx. which equals the total power in the harmonics higher than the rst compared to power In high-end audio. the power contained in a signal at its k th harmonic is ak 2 +bk 2 .10/>.org/content/m10687/2.4 (Power Spectrum of a Half-Wave 2 Rectied Sinusoid). K sK (t) = a0 + k=1 ak cos 2πkt T K + k=1 bk sin 2πkt T (4. Furthermore.9> .1 0 0 Figure 4. Find an expression for the total harmonic distortion for any periodic signal. The . Dene sK (t) to be the K +1 Fourier terms. Available for free at Connexions <http://cnx. the contribution of each term in the Fourier series toward representing the signal can be measured by its Thus. such as shown in Figure 4. FREQUENCY DOMAIN Power Spectrum of a Half-Wave Rectied Sinusoid 0.23) Figure


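The coefficient formulas and spectral properties above can be checked numerically. The sketch below is not part of the original text (the helper name `coeff` is ours): it approximates the complex Fourier coefficients of the square wave by a Riemann sum and compares them with the closed form c_k = 2/(jπk) for odd k, also confirming conjugate symmetry (Property 4.1) for this real-valued signal.

```python
import numpy as np

# Approximate c_k = (1/T) * integral_0^T sq(t) exp(-j 2 pi k t / T) dt by a
# Riemann sum on a fine grid, then compare with the closed-form coefficients.

T = 1.0
t = np.linspace(0.0, T, 200_000, endpoint=False)
sq = np.where(t < T / 2, 1.0, -1.0)       # sq(t) = +1 on (0, T/2), -1 on (T/2, T)

def coeff(k):
    """Riemann-sum approximation of the k-th complex Fourier coefficient."""
    return np.mean(sq * np.exp(-2j * np.pi * k * t / T))

for k in range(1, 6):
    exact = 2 / (1j * np.pi * k) if k % 2 == 1 else 0.0
    print(k, coeff(k), exact)

# Property 4.1: conjugate symmetry for a real-valued signal.
print(np.allclose(coeff(-3), np.conj(coeff(3))))
# Property 4.3: the square wave is odd, so its coefficients are (nearly) purely imaginary.
print(abs(coeff(3).real))
```

The small residual real parts come from the Riemann-sum approximation at the discontinuities, not from the theory.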
Fourier Series spectrum of a half-wave rectified sine wave

[Figure 4.5: the upper panel plots the a_k and b_k coefficients against k = 0, ..., 10; the lower panels show the approximations for K = 0, 1, 2, and 4 over 0 ≤ t ≤ 2.]

Figure 4.5: The Fourier series spectrum of a half-wave rectified sinusoid is shown in the upper portion. The index indicates the multiple of the fundamental frequency at which the signal has energy. The cumulative effect of adding terms to the Fourier series for the half-wave rectified sine wave is shown in the bottom portion. The dashed line is the actual signal, with the solid line showing the finite series approximation to the indicated number of terms, K + 1.

We need to assess quantitatively the accuracy of the Fourier series approximation so that we can judge how rapidly the series approaches the signal. When we use a K + 1-term series, the error (the difference between the signal and the K + 1-term series) corresponds to the unused terms from the series:

$$ \epsilon_K(t) = \sum_{k=K+1}^{\infty}\left(a_k\cos\frac{2\pi kt}{T} + b_k\sin\frac{2\pi kt}{T}\right) \qquad (4.24) $$

To find the rms error, we must square this expression and integrate it over a period. Again, the integral of most cross-terms is zero, leaving

$$ \mathrm{rms}(\epsilon_K) = \sqrt{\frac{1}{2}\sum_{k=K+1}^{\infty}\left(a_k^2 + b_k^2\right)} \qquad (4.25) $$

Figure 4.6 (Approximation error for a half-wave rectified sinusoid) shows how the error in the Fourier series for the half-wave rectified sinusoid decreases as more terms are incorporated. In particular, the use of four terms, as shown in the bottom plot of Figure 4.5 (Fourier Series spectrum of a half-wave rectified sine wave), has an rms error (relative to the rms value of the signal) of about 3%. The Fourier series in this case converges quickly to the signal.
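The rms-error formula (4.25) can be evaluated directly from the half-wave rectified sinusoid's coefficients. A small sketch (helper names are ours) that truncates the infinite sum at a large index and normalizes by the signal's rms value of 1/2:

```python
import numpy as np

# Coefficients of the half-wave rectified sinusoid: a_0 = 1/pi, b_1 = 1/2,
# a_k = -2/(pi*(k^2 - 1)) for even k; all other coefficients are zero.

def a(k):
    if k == 0:
        return 1.0 / np.pi
    return -2.0 / (np.pi * (k * k - 1)) if k % 2 == 0 else 0.0

def b(k):
    return 0.5 if k == 1 else 0.0

def rel_rms_error(K, kmax=20_000):
    # rms(eps_K) from (4.25), truncated at kmax, normalized by rms(s) = 1/2.
    tail = 0.5 * sum(a(k) ** 2 + b(k) ** 2 for k in range(K + 1, kmax))
    return np.sqrt(tail) / 0.5

for K in (0, 1, 2, 4):
    print(K, rel_rms_error(K))   # K = 4 gives roughly 0.03, the ~3% cited above
```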

Approximation error for a half-wave rectified sinusoid

[Figure 4.6: the relative rms error is plotted against the number of terms K = 0, ..., 10.]

Figure 4.6: The rms error calculated according to (4.25) is shown as a function of the number of terms in the series for the half-wave rectified sinusoid. The error has been normalized by the rms value of the signal.

We can look at Figure 4.7 (Power spectrum and approximation error for a square wave) to see the power spectrum and the rms approximation error for the square wave.



Power spectrum and approximation error for a square wave

[Figure 4.7: the upper panel plots P_s(k) for k = 0, ..., 10; the lower panel plots the relative rms error against K, with an asterisk marking K = 99.]

Figure 4.7: The upper plot shows the power spectrum of the square wave, and the lower plot the rms error of the finite-length Fourier series approximation to the square wave. The asterisk denotes the rms error when the number of terms K in the Fourier series equals 99.

Because the Fourier coefficients decay more slowly here than for the half-wave rectified sinusoid, the rms error is not decreasing quickly. Said another way, the square wave's spectrum contains more power at higher frequencies than does the half-wave rectified sinusoid. This difference between the two Fourier series results because the half-wave rectified sinusoid's Fourier coefficients are proportional to 1/k² while those of the square wave are proportional to 1/k. In fact, after 99 terms of the square wave's approximation, the error is bigger than 10 terms of the approximation for the half-wave rectified sinusoid. Mathematicians have shown that no signal has an rms approximation error that decays more slowly than it does for the square wave.

Exercise 4.5.1 (Solution on p. 167.)
Calculate the harmonic distortion for the square wave.

More than just decaying slowly, the Fourier series approximation shown in Figure 4.8 (Fourier series approximation of a square wave) exhibits interesting behavior.
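The harmonic-distortion calculation asked for above can be sketched numerically (helper names are ours). Note that the number depends on the normalization: relative to the fundamental's power the distortion is π²/8 − 1 ≈ 23%, while relative to the total power it is 1 − 8/π² ≈ 19%. The sketch uses the fundamental as reference, matching the definition of total harmonic distortion given earlier.

```python
import numpy as np

# Square-wave sine coefficients from the classic series: b_k = 4/(pi*k), k odd.
def bk(k):
    return 4.0 / (np.pi * k) if k % 2 == 1 else 0.0

fundamental = 0.5 * bk(1) ** 2                         # (1/2) b_1^2 = 8/pi^2
harmonics = 0.5 * sum(bk(k) ** 2 for k in range(3, 100_001, 2))

print(harmonics / fundamental)   # approaches pi^2/8 - 1, about 0.234
```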





Fourier series approximation of a square wave

[Figure 4.8: partial sums for K = 1, 5, 11, and 49, each plotted over two periods.]

Figure 4.8: Fourier series approximation to sq(t). The number of terms in the Fourier sum is indicated in each plot, and the square wave is shown as a dashed line over two periods.

Although the square wave's Fourier series requires more terms for a given representation accuracy, when comparing plots it is not clear that the two are equal. Does the Fourier series really equal the square wave at all values of t? In particular, at each step-change in the square wave, the Fourier series exhibits a peak followed by rapid oscillations. As more terms are added to the series, the oscillations seem to become more rapid and smaller, but the peaks are not decreasing. For the Fourier series approximation of the half-wave rectified sinusoid (Figure 4.5: Fourier Series spectrum of a half-wave rectified sine wave), no such behavior occurs. What is happening?

Consider this mathematical question intuitively: Can a discontinuous function, like the square wave, be expressed as a sum, even an infinite one, of continuous signals? One should at least be suspicious, and in fact, it can't be thus expressed. This issue brought Fourier⁹ much criticism from the French Academy of Science (Laplace, Lagrange, Monge and LaCroix comprised the review committee) for several years after its presentation in 1807. It was not resolved for almost a century, and its resolution is interesting and important to understand from a practical viewpoint.

⁹ ∼history/Mathematicians/Fourier.html

The extraneous peaks in the square wave's Fourier series never disappear; they are termed the Gibbs phenomenon after the American physicist Josiah Willard Gibbs. They occur whenever the signal is discontinuous, and will always be present whenever the signal has jumps.

Let's return to the question of equality; how can the equal sign in the definition of the Fourier series be justified? The partial answer is that pointwise, at each and every value of t, equality is not guaranteed. However, mathematicians later in the nineteenth century showed that the rms error of the Fourier series was always zero:

$$ \lim_{K\to\infty} \mathrm{rms}(\epsilon_K) = 0 $$

What this means is that the error between a signal and its Fourier series approximation may not be zero, but its rms value will be zero! It is through the eyes of the rms value that we redefine equality. The usual definition of equality is called pointwise equality: two signals s₁(t), s₂(t) are said to be equal pointwise if s₁(t) = s₂(t) for all values of t. A new definition of equality is mean-square equality: two signals are said to be equal in the mean square if rms(s₁ − s₂) = 0. For Fourier series, the Gibbs phenomenon peaks have finite height and zero width. The error differs from zero only at isolated points (whenever the periodic signal contains discontinuities) and equals about 9% of the size of the discontinuity. The value of a function at a finite set of points does not affect its integral. This effect underlies the reason why defining the value of a discontinuous function at its discontinuity, like we refrained from doing in defining the step function (Section 2.2.4: Unit Step), is meaningless. Whatever you pick for a value has no practical relevance for either the signal's spectrum or for how a system responds to the signal. The Fourier series value "at" the discontinuity is the average of the values on either side of the jump.
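The Gibbs behavior described above is easy to reproduce: the peak of the partial sums near a jump settles near 1.18 (an overshoot of about 9% of the jump of size 2) instead of shrinking as terms are added. A short sketch with our own helper names, using the square wave of amplitude ±1:

```python
import numpy as np

# Partial sums s_K(t) = sum over odd k <= K of (4/(pi*k)) sin(2 pi k t), T = 1.
def partial_sum(t, K):
    ks = np.arange(1, K + 1, 2)                        # odd harmonics only
    return (4.0 / (np.pi * ks) * np.sin(2 * np.pi * np.outer(t, ks))).sum(axis=1)

t = np.linspace(0.0, 0.5, 20_001)                      # half period after the jump at t = 0
for K in (11, 49, 199):
    print(K, partial_sum(t, K).max())                  # the peak stays near 1.18
```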

4.6 Encoding Information in the Frequency Domain

To emphasize the fact that every periodic signal has both a time and frequency domain representation, we can exploit both to encode information into a signal. Refer to the Fundamental Model of Communication (Figure 1.3: Fundamental model of communication). We have an information source, and want to construct a transmitter that produces a signal x(t). For the source, let's assume we have information to encode every T seconds. For example, we want to represent typed letters produced by an extremely good typist (a key is struck every T seconds). Let's consider the complex Fourier series formula in the light of trying to encode information:

$$ x(t) = \sum_{k=-K}^{K} c_k\, e^{j\frac{2\pi kt}{T}} \qquad (4.26) $$

We use a finite sum here merely for simplicity (fewer parameters to determine). An important aspect of the spectrum is that each frequency component can be manipulated separately: Instead of finding the Fourier spectrum from a time-domain specification, let's construct it in the frequency domain by selecting the c_k according to some rule that relates coefficient values to the alphabet. In defining this rule, we want to always create a real-valued signal x(t). Because of the Fourier spectrum's properties (Property 4.1, p. 121), the spectrum must have conjugate symmetry. This requirement means that we can only assign positive-indexed coefficients (positive frequencies), with negative-indexed ones equaling the complex conjugate of the corresponding positive-indexed ones.

Assume we have N letters to encode: {a_1, ..., a_N}. One simple encoding rule could be to make a single Fourier coefficient be non-zero and all others zero for each letter. For example, if a_n occurs, we make c_n = 1 and c_k = 0, k ≠ n. In this way, the n-th harmonic of the frequency 1/T is used to represent a letter. Note that the bandwidth (the range of frequencies required for the encoding) equals N/T. Another possibility is to consider the binary representation of the letter's index. For example, if the letter a_13 occurs, converting 13 to its base-2 representation, we have 13 = 1101_2. We can use the pattern of zeros and ones to represent directly which Fourier coefficients we "turn on" (set equal to one) and which we "turn off."
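The binary-index rule can be sketched in a few lines. The mapping of bit positions to coefficient indices below is an assumed convention, since the text does not fix it: bit i of the letter's index, least-significant bit first, drives c_{i+1}.

```python
# Turn a letter's index into an on/off pattern of Fourier coefficients.
def coefficients(index, nbits):
    bits = [(index >> i) & 1 for i in range(nbits)]    # LSB first
    return {k + 1: float(bit) for k, bit in enumerate(bits)}

print(coefficients(13, 4))   # 13 = 1101 in base 2 -> {1: 1.0, 2: 0.0, 3: 1.0, 4: 1.0}
```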




Exercise 4.6.1 (Solution on p. 168.)
Compare the bandwidth required for the direct encoding scheme (one nonzero Fourier coefficient for each letter) to the binary number scheme. Compare the bandwidths for a 128-letter alphabet. Since both schemes represent information without loss (we can determine the typed letter uniquely from the signal's spectrum) both are viable. Which makes more efficient use of bandwidth and thus might be preferred?

Exercise 4.6.2 (Solution on p. 168.)
Can you think of an information-encoding scheme that makes even more efficient use of the spectrum? In particular, can we use only one Fourier coefficient to represent N letters uniquely?

We can create an encoding scheme in the frequency domain (p. 133) to represent an alphabet of letters. But, as this information-encoding scheme stands, we can represent one letter for all time. However, we note that the Fourier coefficients depend only on the signal's characteristics over a single period. We could change the signal's spectrum every T as each letter is typed. In this way, we turn spectral coefficients on and off as letters are typed, thereby encoding the entire typed document. For the receiver (see the Fundamental Model of Communication (Figure 1.3: Fundamental model of communication)) to retrieve the typed letter, it would simply use the Fourier formula for the complex Fourier spectrum¹¹ for each T-second interval to determine what each typed letter was. Figure 4.9 (Encoding Signals) shows such a signal in the time-domain.
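A toy transmitter/receiver pair for the single-coefficient rule can be sketched as follows. All names and parameter values here are ours: letter n maps to c_n = 1 with a conjugate-symmetric negative index, so x(t) = 2cos(2πnt/T) is real-valued, and the receiver approximates the Fourier-coefficient integral by an average over one T-second interval.

```python
import numpy as np

T = 1.0
N = 8                                                  # alphabet size (assumption)
t = np.linspace(0.0, T, 10_000, endpoint=False)

def transmit(n):
    # c_n = 1 plus its conjugate partner c_{-n} = 1 gives 2 cos(2 pi n t / T).
    return 2 * np.cos(2 * np.pi * n * t / T)

def receive(x):
    # c_k = (1/T) * integral x(t) exp(-j 2 pi k t / T) dt, approximated by a mean.
    coeffs = [np.mean(x * np.exp(-2j * np.pi * k * t / T)) for k in range(1, N + 1)]
    return 1 + int(np.argmax(np.abs(coeffs)))          # decoded letter index

print(receive(transmit(5)))   # -> 5
```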

Encoding Signals

[Figure 4.9: the waveform x(t) is plotted over three T-second intervals, with the spectral magnitudes used in each interval shown beneath it.]

Figure 4.9: The encoding of signals via the Fourier spectrum is shown over three "periods." In this example, only the third and fourth harmonics are used, as shown by the spectral magnitudes corresponding to each T-second interval plotted below the waveforms. Can you determine the phase of the harmonics from the waveform?

In this Fourier-series encoding scheme, we have used the fact that spectral coefficients can be independently specified and that they can be uniquely recovered from the time-domain signal over one "period." Do

11 "Complex Fourier Series and Their Properties", (2) <>

11/>. Furthermore. if x (t) is periodic thereby having a Fourier series. a linear circuit's output to this signal will be the superposition of the output to each x (t) = ej 2πkt T . the output has a Fourier series. T -second interval. we simply multiply the input spectrum by the frequency response . time-invariant in general. Available for free at Connexions <http://cnx. we can exploit the superposition we found for linear circuits that their output to a complex exponential input is just the frequency response evaluated at the signal's frequency times the complex exponential. The fundamental property of a linear system is that its input-output Because the Fourier series relation obeys superposition: L (a1 s1 (t) + a2 s2 (t)) = a1 L (s1 (t)) + a2 L (s2 (t)). Note ck H k T .7 Filtering Periodic Signals lter reshapes such signals 12 The Fourier series representation of a periodic signal makes it easy to determine how a linear.9> . we can construct a communications system. 12 Thus. Thus. ∞ y (t) = k=−∞ ck H k T ej 2πkt T ( The circuit modies the magnitude and phase of each Fourier coecient. Its Fourier coecients equal This content is available online at <http://cnx.135 note that the signal representing the entire document is no longer periodic.27) To obtain the spectrum of the output. which means that the circuit's output will dier as the period varies. Said mathematically. if k k j 2πkt T because f = T e T . then the output y (t) = H component. especially that while the Fourier coecients do not depend on the signal's period. By understanding the Fourier series' properties (in particular that coecients are determined only over a that they transmit over telephone lines. This approach represents a simplication of how modern modems represent text 4. represents a periodic signal as a linear combination of complex exponentials. which means that it too is periodic. the circuit's transfer function does depend on frequency.

(a) Periodic pulse signal (b) Top plots show the pulse signal's spectrum for various cuto frequencies. Exercise 4.2 fc: 100 Hz 0.) What is the average value of each output waveform? The correct answer may surprise you. Bottom plots show the lter's output signals.2). FREQUENCY DOMAIN Filtering a periodic signal p(t) A … ∆ ∆ … T t (a) Spectral Magnitude such as shown on the left part ( T an ∆ = 0.1 (Solution on p.7. which display the output signal's spectrum and the lter's transfer function.2 fc: 10 kHz 0 1 0 10 20 Frequency (kHz) 0 1 0 10 20 Frequency (kHz) 0 1 0 10 20 Frequency (kHz) Amplitude 0 0 1 Time (ms) 2 0 0 1 Time (ms) 2 0 0 1 Time (ms) 2 (b) Figure 4. As the cuto frequency decreases (center.9> . The bottom row shows the output signal derived from the Fourier series coecients shown in the top row. then left). Example 4.10: A periodic pulse signal.28) Figure 4. the rounding becomes more prominent. Note how the signal's spectrum extends well above its fundamental frequency. The input's period was 1 ms (millisecond). Having a cuto frequency ten times higher than the fundamental does perceptibly change the output waveform.2 fc: 1 kHz 0. serves as the input to RC lowpass lter.30: Magnitude and phase of the transfer function)) H (f ) = 1 1 + j2πf RC (4. with the leftmost waveform showing a small ripple.3 The periodic pulse signal shown on the left above serves as the input to a RC -circuit that has the transfer function (calculated elsewhere (Figure 3. Available for free at Connexions <http://cnx. rounding the leading and trailing edges.10 (Filtering a periodic signal) shows the output changes as we vary the lter's cuto frequency.136 CHAPTER 4. The lter's cuto frequency was set to the various values indicated in the top row. 168.

29) −T 2 where we have used a symmetric placement of the integration interval about the origin for subsequent derivational convenience.31) As the period increases. without writing. which means that higher harmonics are smoothly suppressed. the spectral lines become closer together.9> .30) −T 2 making the corresponding Fourier series ∞ sT (t) = k=−∞ ST (f ) ej2πf t 1 T (4. Let's calculate the Fourier transform of the pulse signal (Section 2. We denote the spectrum for any assumed value of the period by ck (T ). we will describe lters that have much more rapidly varying frequency responses. Furthermore. Dene T 2 sT (t) e−(j2πf t) dt (4. we vary the frequency index k proportionally as we increase the period. We want to consider what happens to this signal's spectrum as we let the period become longer and Later. both periodic and nonperiodic ones.8 Derivation of the Fourier Transform and systems respond to 13 Fourier series clearly open the frequency domain as an interesting and useful way of determining how circuits periodic input signals. the dierential equation governing the circuit's behavior. we have calculated the output of a circuit to a periodic input calculations entirely in the frequency domain. we can calculate how will respond to a periodic input. Using Fourier series. Fourier transform. More importantly. ∆ P (f ) = −∞ 13 p (t) e−(j2πf t) dt = 0 e−(j2πf t) dt = 1 e−(j2πf ∆) − 1 − (j2πf ) This content is available online at <http://cnx. Let f be a xed frequency equaling ST (f ) ≡ T ck (T ) = k T . Available for free at Connexions <http://cnx.33)) converges. We calculate the spectrum according to the familiar formula ck (T ) = 1 T T 2 sT (t) e− j2πkt T dt (4.32) S (f ) = −∞ s (t) e−(j2πf t) dt (4. much less solving. we made these any linear circuit 4.21/>.137 This example also illustrates the impact a lowpass lter can have on a waveform. becoming a continuum. Can we use similar techniques for nonperiodic signals? 
What is the response of the lter to a single pulse? Addressing these issues requires us to nd the Fourier spectrum of all signals.33) S (f ) is the Fourier transform of s (t) (the Fourier transform is symbolically denoted by the uppercase version of the signal's symbol) and is dened for Example allowing a much more dramatic selection of the input's Fourier coecients. ∞ limit sT (t) ≡ s (t) = T →∞ −∞ ∞ with S (f ) ej2πf t df (4.5: Pulse).4 ∞ any signal for which the integral ((4. Therefore. sT (t) be a periodic signal having period T. The simple RC lter used here has a rather gradual frequency response. p (t). This spectrum is calculated by what is known as the Let the Fourier spectrum of a signal. We need a denition for periodic or not.2.

138 CHAPTER 4.11 (Spectrum) shows how increasing the period does indeed lead to a continuum of coecients.11: to The upper plot shows the magnitude of the Fourier series spectrum for the case of T =1 with the Fourier transform of p (t) shown as a dashed line. and that the Fourier transform does correspond to what the continuum becomes. The inverse Fourier transform ((4.2 T=1 0 0. and is denoted by sinc (t).34)). respectively.10).2 T=5 Spectral Magnitude 0 -20 -10 0 Frequency (Hz) 10 20 Figure 4. we expanded the period T = 5.2. we often symbolically express these transform calculations as F (s) and F −1 (S). Figure 4.35)) nds the time-domain representation from the frequency domain. The direct Fourier transform (or simply the Fourier transform) calculates a signal's frequency domain representation from its time-domain variant ((4. The quantity special name. The Fourier transform relates a signal's time and frequency domain representations to each other. F (s) = S (f ) = ∞ −∞ s (t) e−(j2πf t) dt (4. Spectrum Spectral Magnitude FREQUENCY DOMAIN P (f ) = e−(jπf ∆) sin (πf ∆) πf Note how closely this result resembles the expression for Fourier series coecients of the periodic pulse signal (4. keeping the pulse's duration xed at 0. For the bottom panel. the magnitude of the pulse's Fourier transform equals |∆sinc (πf ∆) |.34) Available for free at Connexions <http://cnx. Thus. Rather than explicitly writing the required integral. the sinc sin(t) has a t (pronounced "sink") function.9> . and computed its Fourier series coecients.

The inverse Fourier transform finds the time-domain representation from the frequency domain.

  F⁻¹(S) = s(t) = ∫_{−∞}^{∞} S(f) e^{j2πft} df    (4.35)

We must have s(t) = F⁻¹(F(s(t))) and S(f) = F(F⁻¹(S(f))), and these results are indeed valid with minor exceptions. Showing that you "get back to where you started" is difficult from an analytic viewpoint, and we won't try here.

note: Recall that the Fourier series for a square wave gives a value for the signal at the discontinuities equal to the average value of the jump. This value may differ from how the signal is defined in the time domain, but being unequal at a point is indeed minor.

Note that the direct and inverse transforms differ only in the sign of the exponent.

Exercise 4.8.1 (Solution on p. 168.)
The differing exponent signs mean that some curious results occur when we use the wrong sign. What is F(S(f))? In other words, what happens if we use the wrong exponent sign in evaluating the inverse Fourier transform?

Properties of the Fourier transform and some useful transform pairs are provided in the accompanying tables (Table 4.1: Short Table of Fourier Transform Pairs and Table 4.2: Fourier Transform Properties). Especially important among these properties is Parseval's Theorem, which states that power computed in either domain equals the power in the other.

  ∫_{−∞}^{∞} s²(t) dt = ∫_{−∞}^{∞} |S(f)|² df    (4.36)

Of practical importance is the conjugate symmetry property: when s(t) is real-valued, the spectrum at negative frequencies equals the complex conjugate of the spectrum at the corresponding positive frequencies. Consequently, we need only plot the positive frequency portion of the spectrum (we can easily determine the remainder of the spectrum).

Exercise 4.8.2 (Solution on p. 168.)
How many Fourier transform operations need to be applied to get the original signal back: F(···(F(s))) = s(t)?

Note that the mathematical relationships between the time domain and frequency domain versions of the same signal are termed transforms. We are transforming (in the nontechnical meaning of the word) a signal from one representation to another. We express Fourier transform pairs as s(t) ↔ S(f). A signal's time and frequency domain representations are uniquely related to each other. A signal thus "exists" in both the time and frequency domains, with the Fourier transform bridging between the two. We can define an information-carrying signal in either the time or frequency domains; it behooves the wise engineer to use the simpler of the two.

A common misunderstanding is that while a signal exists in both the time and frequency domains, a single formula expressing a signal must contain only time or frequency: both cannot be present simultaneously. This situation mirrors what happens with complex amplitudes in circuits: impedances depend on frequency and the time variable cannot appear. Thus a signal, though most familiarly defined in the time domain, really can be defined equally as well (and sometimes more easily) in the frequency domain. As we reveal how communications systems work and are designed, we will define signals entirely in the frequency domain without explicitly finding their time domain variants; this happened, for example, when we defined Fourier series coefficients according to the letter to be transmitted.

We will learn (Section 4.9) that a linear, time-invariant system's output can be most easily calculated by determining the input signal's spectrum, performing a simple calculation in the frequency domain, and inverse transforming the result. Furthermore, understanding communications and information processing systems requires a thorough understanding of signal structure and of how systems work in both the time and frequency domains.

Table 4.1: Short Table of Fourier Transform Pairs

  s(t)                          S(f)
  e^{−at} u(t)                  1/(j2πf + a)
  e^{−a|t|}                     2a/(4π²f² + a²)
  p(t) = 1 for |t| < ∆/2,       sin(πf∆)/(πf)
         0 for |t| > ∆/2
  sin(2πWt)/(πt)                S(f) = 1 for |f| < W,
                                       0 for |f| > W

Table 4.2: Fourier Transform Properties

  Property                         Time-Domain               Frequency Domain
  Linearity                        a1 s1(t) + a2 s2(t)       a1 S1(f) + a2 S2(f)
  Conjugate Symmetry               s(t) ∈ R                  S(f) = S*(−f)
  Even Symmetry                    s(t) = s(−t)              S(f) = S(−f)
  Odd Symmetry                     s(t) = −s(−t)             S(f) = −S(−f)
  Scale Change                     s(at)                     (1/|a|) S(f/a)
  Time Delay                       s(t − τ)                  e^{−j2πfτ} S(f)
  Complex Modulation               e^{j2πf0t} s(t)           S(f − f0)
  Amplitude Modulation by Cosine   s(t) cos(2πf0t)           (S(f − f0) + S(f + f0))/2
  Amplitude Modulation by Sine     s(t) sin(2πf0t)           (S(f − f0) − S(f + f0))/(2j)
  Differentiation                  (d/dt) s(t)               j2πf S(f)
  Integration                      ∫_{−∞}^{t} s(α) dα        (1/(j2πf)) S(f) if S(0) = 0
  Multiplication by t              t s(t)                    (1/(−j2π)) dS(f)/df
  Area                             ∫_{−∞}^{∞} s(t) dt        S(0)
  Value at Origin                  s(0)                      ∫_{−∞}^{∞} S(f) df
  Parseval's Theorem               ∫_{−∞}^{∞} |s(t)|² dt     ∫_{−∞}^{∞} |S(f)|² df

The only difficulty in calculating the Fourier transform of any signal occurs when we have periodic signals (in either domain). Realizing that the Fourier series is a special case of the Fourier transform, we simply calculate the Fourier series coefficients instead, and plot them along with the spectra of nonperiodic signals on the same frequency axis.
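Parseval's theorem (4.36) can be verified directly on sampled data. Here is a sketch in Python/NumPy (the text's exercises use Matlab; the sampling values below are illustrative) using the two-sided exponential e^{−a|t|} from Table 4.1: with S ≈ dt·FFT(s) and frequency spacing df = 1/(N·dt), the discrete sums of s²(t)·dt and |S(f)|²·df agree.

```python
import numpy as np

a, dt, N = 1.0, 1e-3, 2**14
t = (np.arange(N) - N // 2) * dt        # symmetric time axis
s = np.exp(-a * np.abs(t))              # e^(-a|t|), a pair from Table 4.1

S = dt * np.fft.fft(s)                  # Riemann-sum approximation of S(f)
df = 1.0 / (N * dt)

power_time = np.sum(s**2) * dt          # approximates the integral of s(t)^2
power_freq = np.sum(np.abs(S)**2) * df  # approximates the integral of |S(f)|^2

print(power_time, power_freq)           # both near 1/a = 1.0
```

The two sums are equal to machine precision because the discrete Fourier transform obeys its own exact Parseval relation; both approximate the continuous-time power 1/a.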

Example 4.5
In communications, a very important operation on a signal s(t) is to amplitude modulate it. Using this operation more as an example rather than elaborating the communications aspects here, we want to compute the Fourier transform, the spectrum, of

  (1 + s(t)) cos(2πfc t)

Thus,

  (1 + s(t)) cos(2πfc t) = cos(2πfc t) + s(t) cos(2πfc t)

For the spectrum of cos(2πfc t), we use the Fourier series. Its period is 1/fc, and its only nonzero Fourier coefficients are c±1 = 1/2. The second term is not periodic unless s(t) has the same period as the sinusoid; treating it as a nonperiodic signal, its spectrum can be derived as follows.

  s(t) cos(2πfc t) = (∫_{−∞}^{∞} S(f) e^{j2πft} df) cos(2πfc t)

Using Euler's relation for the cosine,

  s(t) cos(2πfc t) = (1/2) ∫_{−∞}^{∞} S(f) e^{j2π(f+fc)t} df + (1/2) ∫_{−∞}^{∞} S(f) e^{j2π(f−fc)t} df
                   = (1/2) ∫_{−∞}^{∞} S(f − fc) e^{j2πft} df + (1/2) ∫_{−∞}^{∞} S(f + fc) e^{j2πft} df
                   = ∫_{−∞}^{∞} ((S(f − fc) + S(f + fc))/2) e^{j2πft} df

Exploiting the uniqueness property of the Fourier transform, we have

  F(s(t) cos(2πfc t)) = (S(f − fc) + S(f + fc))/2    (4.37)

This component of the spectrum consists of the original signal's spectrum delayed and advanced in frequency. Once amplitude modulated, the resulting spectrum has "lines" corresponding to the Fourier series components at ±fc and the original spectrum shifted to components at ±fc and scaled by 1/2. The spectrum of the amplitude modulated signal is shown in Figure 4.12.

[Figure 4.12: A signal having a triangular shaped spectrum S(f), nonzero for |f| < W, is shown in the top plot; its highest frequency, the largest frequency containing power, is W Hz. The bottom plot shows the modulated spectrum X(f): the copy S(f + fc) occupies −fc − W to −fc + W and the copy S(f − fc) occupies fc − W to fc + W.]

Note how in this figure the signal s(t) is defined in the frequency domain. To find its time-domain representation, we simply use the inverse Fourier transform.

Exercise 4.8.3 (Solution on p. 168.)
What is the signal s(t) that corresponds to the spectrum shown in the upper panel of Figure 4.12?
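The modulation property (4.37) can be demonstrated with a discrete Fourier transform: multiplying a signal by cos(2πfc t) splits its spectrum into half-amplitude copies centered at ±fc. In the Python/NumPy sketch below (the smooth baseband signal and the carrier frequency are illustrative choices), the carrier falls exactly on a DFT bin, so the shifted copies can be formed with circular shifts and the match is exact.

```python
import numpy as np

N, fs = 1024, 1024.0                    # 1 s of data, so bins are spaced 1 Hz apart
t = np.arange(N) / fs
s = np.exp(-((t - 0.5) ** 2) / 0.01)    # smooth baseband signal (illustrative)
fc = 100.0                              # carrier placed on an exact DFT bin
x = s * np.cos(2 * np.pi * fc * t)

S = np.fft.fft(s)
X = np.fft.fft(x)

# (4.37): X(f) = (S(f - fc) + S(f + fc)) / 2, i.e. circular shifts by 100 bins
X_predicted = 0.5 * (np.roll(S, 100) + np.roll(S, -100))

print(np.max(np.abs(X - X_predicted)))  # at the level of machine precision
```

Because the shift identity is exact for the DFT, this also previews why fc must exceed the baseband bandwidth W: otherwise the two shifted copies would overlap.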

In this example, we call the signal s(t) a baseband signal because its power is contained at low frequencies. Signals such as speech and the Dow Jones averages are baseband signals. The baseband signal's bandwidth equals W, the highest frequency at which it has power. Since x(t)'s spectrum is confined to a frequency band not close to the origin (we assume fc ≫ W), we have a bandpass signal. The bandwidth of a bandpass signal is not its highest frequency, but the range of positive frequencies where the signal has power. Thus, in this example, the bandwidth is 2W Hz. Why a signal's bandwidth should depend on its spectral shape will become clear once we develop communications systems.

Exercise 4.8.4 (Solution on p. 168.)
What is the power in x(t), the amplitude-modulated signal? Try the calculation in both the time and frequency domains.

4.9 Linear Time Invariant Systems
14 This content is available online at <http://cnx.org/content/m0048/2.18/>.

When we apply a periodic input to a linear, time-invariant system, the output is periodic and has Fourier series coefficients equal to the product of the system's frequency response and the input's Fourier coefficients (Filtering Periodic Signals (4.27)). The way we derived the spectrum of nonperiodic signals from periodic ones makes it clear that the same kind of result works when the input is not periodic: if x(t) serves as the input to a linear, time-invariant system having frequency response H(f), the spectrum of the output is X(f)H(f).

Example 4.6
Let's use this frequency-domain input-output relationship for linear, time-invariant systems to find a formula for the RC-circuit's response to a pulse input. We have expressions for the input's spectrum and the system's frequency response.

  P(f) = e^{−jπf∆} sin(πf∆)/(πf)    (4.38)

  H(f) = 1/(1 + j2πfRC)    (4.39)

Thus, the output's Fourier transform equals

  Y(f) = e^{−jπf∆} (sin(πf∆)/(πf)) (1/(1 + j2πfRC))    (4.40)

You won't find this Fourier transform in our table, and the required integral is difficult to evaluate as the expression stands. This situation requires cleverness and an understanding of the Fourier transform's properties. In particular, recall Euler's relation for the sinusoidal term and note the fact that multiplication by a complex exponential in the frequency domain amounts to a time delay. Let's momentarily make the expression for Y(f) more complicated.

  e^{−jπf∆} sin(πf∆)/(πf) = e^{−jπf∆} (e^{jπf∆} − e^{−jπf∆})/(j2πf)
                          = (1 − e^{−j2πf∆})/(j2πf)

Consequently,

  Y(f) = (1/(j2πf)) (1 − e^{−j2πf∆}) (1/(1 + j2πfRC))    (4.41)
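Before decomposing Y(f) analytically, the underlying relation Y(f) = X(f)H(f) can be checked numerically against time-domain convolution with the RC circuit's impulse response h(t) = (1/RC) e^{−t/RC} u(t). This Python/NumPy sketch (the sampling rate, ∆, and RC values are illustrative choices, not the text's) filters the pulse both ways.

```python
import numpy as np

fs = 1000.0
dt = 1.0 / fs
N = 1000                                  # a 1 s window; both signals decay inside it
t = np.arange(N) * dt

Delta, RC = 0.1, 0.05                     # pulse width and time constant (illustrative)
x = (t < Delta).astype(float)             # the pulse input p(t)
h = (1.0 / RC) * np.exp(-t / RC)          # RC impulse response for t >= 0

# Time domain: discrete approximation of the convolution y = x * h
y_time = np.convolve(x, h)[:N] * dt

# Frequency domain: Y(f) = X(f) H(f); the DFT product gives circular
# convolution, but both signals have died out before the window ends,
# so the wrap-around contribution is negligible
y_freq = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h))) * dt

print(np.max(np.abs(y_time - y_freq)))    # tiny wrap-around error
```

The two computations agree closely, which is the discrete counterpart of the frequency-domain input-output relation used throughout this section.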

The table of Fourier transform properties (Table 4.2: Fourier Transform Properties) suggests thinking about this expression as a product of terms.

• Multiplication by 1/(j2πf) means integration.
• Multiplication by the complex exponential e^{−j2πf∆} means delay by ∆ seconds in the time domain.
• The term 1 − e^{−j2πf∆} means, in the time domain, subtract the time-delayed signal from its original.
• The inverse transform of the frequency response is (1/RC) e^{−t/RC} u(t).

We can translate each of these frequency-domain products into time-domain operations in any order we like because the order in which multiplications occur doesn't affect the result. Let's start with the product of 1/(j2πf) (integration in the time domain) and the transfer function:

  (1/(j2πf)) (1/(1 + j2πfRC)) ↔ (1 − e^{−t/RC}) u(t)    (4.43)

The middle term in the expression for Y(f) (4.41) consists of the difference of two terms: the constant 1 and the complex exponential e^{−j2πf∆}. Because of the Fourier transform's linearity, we simply subtract the results.

  Y(f) ↔ (1 − e^{−t/RC}) u(t) − (1 − e^{−(t−∆)/RC}) u(t − ∆)    (4.44)

Note how we carefully included the unit step in delaying the signal: the second term in this result does not begin until t = ∆. Thus, the waveforms shown in the Filtering Periodic Signals example (Figure 4.10: Filtering a periodic signal) mentioned above are exponentials. We say that the time constant of an exponentially decaying signal equals the time it takes to decrease by 1/e of its original value. Thus, the time constants of the rising and falling portions of the output equal the product of the circuit's resistance and capacitance.

Exercise 4.9.1 (Solution on p. 168.)
Derive the filter's output by considering the terms in (4.41) in the order given. Integrate last rather than first. You should get the same answer.

4.9.1 Transfer Functions

The notion of a transfer function applies well beyond linear circuits. Although we don't have all we need to demonstrate the result as yet, all linear, time-invariant systems have a frequency-domain input-output relation given by the product of the input's Fourier transform and the system's transfer function. Thus, linear circuits are a special case of linear, time-invariant systems. In this example, we used the table extensively to find the inverse Fourier transform, relying mostly on what multiplication by certain factors, like 1/(j2πf) and e^{−j2πf∆}, meant. We essentially treated multiplication by these factors as if they were transfer functions of some fictitious circuit: the transfer function 1/(j2πf) corresponded to a circuit that integrated, and e^{−j2πf∆} to one that delayed. We even implicitly interpreted the circuit's transfer function as the input's spectrum! In the rule Y(f) = X(f)H(f), which term is the input and which is the transfer function is merely a notational matter (we labeled one factor with an X and the other with an H).

This approach to finding inverse transforms, breaking down a complicated expression into products and sums of simple components, is the engineer's way of breaking the problem into several subproblems that are much easier to solve and then gluing the results together. At this point, you may be concerned that this approach is glib, and rightly so. Later we'll show that by involving software we really don't need to be concerned about constructing a transfer function from circuit elements and op-amps. As we tackle more sophisticated problems in transmitting, manipulating, and receiving information, we will assume linear systems having certain properties (transfer functions) without worrying about what circuit has the desired property.
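The closed-form output (4.44) is easy to sanity-check by direct evaluation. In the Python/NumPy sketch below (the ∆ and RC values are illustrative), the output is zero at t = 0, rises toward 1 during the pulse with time constant RC, peaks at t = ∆ with value 1 − e^{−∆/RC}, and then decays exponentially.

```python
import numpy as np

Delta, RC = 0.1, 0.05                     # illustrative pulse width and time constant

def u(t):
    """Unit step u(t)."""
    return (t >= 0).astype(float)

def y(t):
    """Equation (4.44): the RC filter's response to a width-Delta pulse."""
    t = np.asarray(t, dtype=float)
    return (1 - np.exp(-t / RC)) * u(t) \
         - (1 - np.exp(-(t - Delta) / RC)) * u(t - Delta)

t = np.linspace(0.0, 0.5, 5001)
out = y(t)

print(out[0])                             # exactly 0 at t = 0
print(float(y(np.asarray([Delta]))[0]))   # the peak value, 1 - exp(-Delta/RC)
print(out[-1])                            # nearly 0: both exponentials have decayed
```

Note how the second term contributes nothing before t = ∆ because of the unit step, just as the derivation emphasized.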

4.9.2 Commutative Transfer Functions

Another interesting notion arises from the commutative property of multiplication (exploited in an example above (Example 4.6)): we can rather arbitrarily choose an order in which to apply each product. Consider a cascade of two linear, time-invariant systems. Because the Fourier transform of the first system's output is X(f)H1(f) and it serves as the second system's input, the cascade's output spectrum is X(f)H1(f)H2(f). Thus, the cascade acts like a single linear system having transfer function H1(f)H2(f). Because this product also equals X(f)H2(f)H1(f), the cascade having the linear systems in the opposite order yields the same result. This result applies to other configurations of linear, time-invariant systems as well (see this Frequency Domain Problem).

Engineers exploit this property by determining what transfer function they want, then breaking it down into components arranged according to standard configurations. Using the fact that op-amp circuits can be connected in cascade with the transfer function equaling the product of its components' transfer functions (see this analog signal processing problem), we find a ready way of realizing designs. We now understand why op-amp implementations of transfer functions are so important.
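The commutativity of cascaded linear, time-invariant systems is easy to see numerically: convolving an input with two impulse responses in either order gives the same output. A Python/NumPy sketch with two arbitrary, illustrative impulse responses:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)                 # an arbitrary input signal
h1 = np.array([1.0, 0.5, 0.25])             # impulse response of system 1 (illustrative)
h2 = np.array([1.0, -1.0])                  # impulse response of system 2 (illustrative)

y12 = np.convolve(np.convolve(x, h1), h2)   # x -> H1 -> H2
y21 = np.convolve(np.convolve(x, h2), h1)   # x -> H2 -> H1

print(np.max(np.abs(y12 - y21)))            # at the level of machine precision
```

The same outputs arise because convolution in time corresponds to the commutative product H1(f)H2(f) in frequency.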

org/content/col10040/1. produce pus of air that excite resonances in the vocal and nasal cavities. What are not shown are the brain and the musculature that control the entire speech production process. when under tension.13: The vocal tract is shown in cross-section. Air pressure produced by the lungs forces air through the vocal cords that.10 Modeling the Speech Signal 15 Vocal Tract Nasal Cavity Lips Teeth Tongue Vocal Cords Oral Cavity Air Flow Lungs Figure 4. Available for free at Connexions <http://cnx.9> .145 4.29/>.org/content/m0049/2. 15 This content is available online at <http://cnx.

[Figure 4.14: Model of the Vocal Tract. The systems model for the vocal tract. The signals l(t), pT(t), and s(t) are the air pressure provided by the lungs, the periodic pulse output provided by the vocal cords, and the speech output respectively. Control signals from the brain, labeled "neural control," are shown as entering the systems from the top.]

The information contained in the spoken word is conveyed by the speech signal. Because we shall analyze several speech transmission and processing schemes, we need to understand the speech signal's structure, what's special about the speech signal, and how we can describe and model speech production. This modeling effort consists of finding a system's description of how relatively unstructured signals, arising from simple sources, are given structure by passing them through an interconnection of systems to yield speech. For speech, and for many other situations, system choice is governed by the physics underlying the actual production process. Because the fundamental equation of acoustics, the wave equation, applies here and is linear, we can use linear systems in our model with a fair amount of accuracy.

The naturalness of linear system models for speech does not extend to other situations. In many cases, the underlying mathematics governed by the physics, biology, and/or chemistry of the problem are nonlinear, leaving linear systems models as approximations. Nonlinear models are far more difficult at the current state of knowledge to understand, and information engineers frequently prefer linear models because they provide a greater level of comfort, but not necessarily a sufficient level of accuracy.

Figure 4.13 (Vocal Tract) shows the actual speech production system and Figure 4.14 (Model of the Vocal Tract) shows the model speech production system. The characteristics of the model depend on whether you are saying a vowel or a consonant. We concentrate first on the vowel production mechanism. When the vocal cords are placed under tension by the surrounding musculature, air pressure from the lungs causes the vocal cords to vibrate. To visualize this effect, take a rubber band and hold it in front of your lips. If held open when you blow through it, the air passes through more or less freely; this situation corresponds to "breathing mode". If held tautly and close together, blowing through the opening causes the sides of the rubber band to vibrate. This effect works best with a wide rubber band. You can imagine what the airflow is like on the opposite side of the rubber band or the vocal cords. Your lung power is the simple source referred to earlier; it can be modeled as a constant supply of air pressure. The vocal cords respond to this input by vibrating, which means the output of this system is some periodic function.

Exercise 4.10.1 (Solution on p. 168.)
Note that the vocal cord system takes a constant input and produces a periodic airflow that corresponds to its output signal. Is this system linear or nonlinear? Justify your answer.

Singers modify vocal cord tension to change the pitch to produce the desired musical note. Vocal cord tension is governed by a control input to the musculature; in system's models we represent control inputs as signals coming into the top or bottom of the system. Clearly, the sources of signals are the lungs and the vocal cords, but for modeling purposes we describe them separately since they control different aspects of the speech signal. It is the control input that carries information, impressing it on the system's output. The change of signal structure resulting from varying the control input enables information to be conveyed by the signal, a process generically known as modulation. In singing, musicality is largely conveyed by pitch; in western speech, pitch is much less important.

A sentence can be read in a monotone fashion without completely destroying the information expressed by the sentence. However, the difference between a statement and a question is frequently expressed by pitch changes. For example, note the sound differences between "Let's go to the park." and "Let's go to the park?".

The vocal cords' periodic output can be well described by the periodic pulse train pT(t), with T denoting the pitch period. The spectrum of this signal (4.9) contains harmonics of the frequency 1/T, what is known as the pitch frequency or the fundamental frequency F0. The primary difference between adult male and female/prepubescent speech is pitch. Before puberty, the pitch frequency for normal speech ranges between 150-400 Hz for both males and females. After puberty, the vocal cords of males undergo a physical change, which has the effect of lowering their pitch frequency to the range 80-160 Hz. If we could examine the vocal cord output, we could probably discern whether the speaker was male or female. This difference is also readily apparent in the speech signal itself.

To simplify our speech modeling effort, we shall assume that the pitch period is constant. With this simplification, we collapse the vocal-cord-lung system as a simple source that produces the periodic pulse signal (Figure 4.14 (Model of the Vocal Tract)).

The sound pressure signal thus produced enters the mouth behind the tongue, creates acoustic disturbances, and exits primarily through the lips and to some extent through the nose. Speech specialists tend to name the mouth, tongue, teeth, lips, and nasal cavity the vocal tract. The physics governing the sound disturbances produced in the vocal tract and those of an organ pipe are quite similar. Whereas the organ pipe has the simple physical structure of a straight tube, the cross-section of the vocal tract "tube" varies along its length because of the positions of the tongue, teeth, and lips. It is these positions that are controlled by the brain to produce the vowel sounds. Spreading the lips, bringing the teeth together, and bringing the tongue toward the front portion of the roof of the mouth produces the sound "ee." Rounding the lips, spreading the teeth, and positioning the tongue toward the back of the oral cavity produces the sound "oh." These variations result in a linear, time-invariant system that has a frequency response typified by several peaks, as shown in Figure 4.15 (Speech Spectrum).

For some consonants, the vocal cords vibrate just as in vowels; for example, the so-called nasal sounds "n" and "m" have this property. For others, the vocal cords do not produce a periodic output. When consonants such as "f" are produced, the vocal cords are placed under much less tension, which results in turbulent flow. The resulting output airflow is quite erratic, so much so that we describe it as being noise. We define noise carefully later when we delve into communication problems.

[Figure 4.15: Speech Spectrum. The ideal frequency response of the vocal tract as it produces the sounds "oh" and "ee" are shown on the top left and top right, respectively, over 0-5000 Hz; the spectral peaks are labeled F1 through F5. The bottom plots show 20 ms speech waveforms corresponding to these sounds, where the periodicity is quite apparent.]

The spectral peaks are known as formants, and are numbered consecutively from low to high frequency. Thus, speech signal processors would say that the sound "oh" has a higher first formant frequency than the sound "ee," with F2 being much higher during "ee." F2 and F3 (the second and third formants) have more energy in "ee" than in "oh." Rather than serving as a filter, rejecting high or low frequencies, the vocal tract serves to shape the spectrum of the vocal cords.

Exercise 4.10.2 (Solution on p. 168.)
From the waveform plots shown in Figure 4.15 (Speech Spectrum), determine the pitch period and the pitch frequency.

Since speech signals are periodic, speech has a Fourier series representation given by a linear circuit's response to a periodic signal (4.27). Because the acoustics of the vocal tract are linear, we know that the spectrum of the output equals the product of the pitch signal's spectrum and the vocal tract's frequency response. In the time domain, we have a periodic signal, the pitch, serving as the input to a linear system. We know that the output, the speech signal we utter and that is heard by others and ourselves, will also be periodic. Example time-domain speech signals are shown in Figure 4.15 (Speech Spectrum).

We thus obtain the fundamental model of speech production.

  S(f) = PT(f) HV(f)    (4.45)

Here, PT(f) is the Fourier series spectrum of the vocal cords' periodic pulse output, whose coefficients, derived in this equation (p. 122), are

  ck = A e^{−jπk∆/T} sin(πk∆/T)/(πk)    (4.46)

and HV(f) is the transfer function of the vocal tract system. The vocal tract's transfer function, shown as the thin, smooth line, is superimposed on the spectrum of actual male speech corresponding to the sound "oh." If we had, for example, a male speaker with about a 110 Hz pitch (T ≈ 9.1 ms) saying the vowel "oh", the spectrum of his speech predicted by our model is shown in Figure 4.16(b) (voice spectrum).

[Figure 4.16: voice spectrum. (a) The vocal cords' output spectrum PT(f). (b) The vocal tract's transfer function HV(f), shown as the thin, smooth line, and the speech spectrum, plotted as spectral amplitude in dB over 0-5000 Hz; the pitch lines corresponding to harmonics of the pitch frequency are indicated.]
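The fundamental model S(f) = PT(f)HV(f) can be illustrated by passing a periodic pulse train through a resonant filter: the output spectrum consists of harmonic lines at multiples of the pitch frequency 1/T, with amplitudes shaped by the filter's frequency response. In the Python/NumPy sketch below, the 100 Hz pitch and the single-resonance "vocal tract" (a second-order recursion standing in for one formant) are illustrative assumptions, not the book's data.

```python
import numpy as np

fs = 8000          # sampling rate in Hz (illustrative)
T = 0.01           # pitch period: a 100 Hz pitch
N = 8000           # one second of signal, so DFT bins are 1 Hz apart

# Periodic pulse train p_T(t): one impulse every T seconds
p = np.zeros(N)
p[:: int(T * fs)] = 1.0

# A crude one-resonance stand-in for the vocal tract: a second-order
# recursion with a resonance near 500 Hz (hypothetical formant)
f0, r = 500.0, 0.95
a1 = 2 * r * np.cos(2 * np.pi * f0 / fs)
a2 = -r * r
s = np.zeros(N)
for n in range(N):
    s[n] = p[n] + a1 * s[n - 1] + a2 * s[n - 2]

# Nearly all the spectral energy should sit on the pitch harmonics
mag2 = np.abs(np.fft.rfft(s)) ** 2
lines = np.arange(0, 4001, 100)          # DC and harmonics of 100 Hz
frac = mag2[lines].sum() / mag2.sum()
print(frac)                               # close to 1: a line spectrum
```

The harmonic lines play the role of PT(f) and the resonance shapes them the way HV(f) shapes the pitch harmonics in Figure 4.16.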

The model spectrum idealizes the measured spectrum, and captures all the important features. The measured spectrum certainly demonstrates what are known as pitch lines, and we realize from our model that they are due to the vocal cords' periodic excitation of the vocal tract. The vocal tract's shaping of the line spectrum is clearly evident, but difficult to discern exactly, especially at the higher frequencies. The model transfer function for the vocal tract makes the formants much more readily evident.

Exercise 4.10.3 (Solution on p. 168.)
The Fourier series coefficients for speech are related to the vocal tract's transfer function only at the frequencies k/T, k ∈ {1, 2, ...}; see previous result (4.10). Would male or female speech tend to have a more clearly identifiable formant structure when its spectrum is computed? Consider, for example, how the spectrum shown on the right in Figure 4.16 (voice spectrum) would change if the pitch were twice as high (≈ 300 Hz).

When we speak, the pitch and the vocal tract's transfer function are not static; they change according to their control signals to produce speech. Engineers typically display how the speech spectrum changes over time with what is known as a spectrogram (Section 5.10), an example of which is shown in Figure 4.17 (spectrogram). Note how the line spectrum, which indicates how the pitch changes, is visible during the vowels, but not during the consonants (like the ce in "Rice").
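A spectrogram like Figure 4.17 is computed by taking Fourier transforms of short, windowed segments of the signal, so the spectrum can be tracked over time. A minimal short-time Fourier transform sketch in Python/NumPy (the frame length, hop, and the two-tone test signal are illustrative choices):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                      # 1 s of signal
# Test signal: a 1 kHz tone in the first half, a 2 kHz tone in the second
x = np.where(t < 0.5,
             np.sin(2 * np.pi * 1000 * t),
             np.sin(2 * np.pi * 2000 * t))

frame, hop = 256, 128                       # window length and overlap step
window = np.hanning(frame)
starts = range(0, len(x) - frame + 1, hop)
stft = np.array([np.abs(np.fft.rfft(window * x[i:i + frame])) for i in starts])
freqs = np.fft.rfftfreq(frame, d=1 / fs)    # 31.25 Hz frequency bins

print(freqs[np.argmax(stft[0])])            # ~1000 Hz in an early frame
print(freqs[np.argmax(stft[-1])])           # ~2000 Hz in a late frame
```

Plotting `stft` as an image with time on one axis and `freqs` on the other yields exactly the kind of display shown in the figure; in real speech the peak positions would trace the moving pitch harmonics and formants.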

[Figure 4.17: spectrogram. Displayed is the spectrogram of the author saying "Rice University," plotted as frequency (0-5000 Hz) against time (0-1.2 s). Blue indicates low-energy portions of the spectrum, with red indicating the most energetic portions. Below the spectrogram is the time-domain speech signal, where the periodicities can be seen.]

We see from Figure 4.17 (spectrogram), for example, that speech contains significant energy from zero frequency up to around 5 kHz. Effective speech transmission systems must be able to cope with signals having this bandwidth.

The fundamental model for speech indicates how engineers use the physics underlying the signal generation process and exploit its structure to produce a systems model that suppresses the physics while emphasizing how the signal is "constructed." From everyday life, we know that speech contains a wealth of information. We want to determine how to transmit and receive it. Efficient and effective speech transmission requires us to know the signal's properties and its structure (as expressed by the fundamental model of speech production).

It is interesting that one system that does not support this 5 kHz bandwidth is the telephone: telephone systems act like a bandpass filter passing energy between about 200 Hz and 3.2 kHz. The most important consequence of this filtering is the removal of high frequency energy. In our sample utterance, the "ce" sound in "Rice" contains most of its energy above 3.2 kHz; this filtering effect is why it is extremely difficult to distinguish the sounds "s" and "f" over the telephone. Try this yourself: Call a friend and determine if they

can distinguish between the words "six" and "fix". If you say these words in isolation so that no context provides a hint about which word you are saying, your friend will not be able to tell them apart. Radio does support this bandwidth (see more about AM and FM radio systems).

We shall learn later that digital transmission of any 5 kHz bandwidth signal requires about 80 kbps (thousands of bits per second). Speech signals, however, can be transmitted using less than 1 kbps because of their special structure. To reduce the "digital bandwidth" so drastically means that engineers spent many years developing signal processing and coding methods that could capture the special characteristics of speech without destroying how it sounds. Many speech transmission systems work by finding the speaker's pitch and the formant frequencies.

Efficient speech transmission systems exploit the speech signal's special structure: what makes speech speech? You can conjure many signals that span the same frequencies as speech (car engine sounds, violin music, dog barks) but don't sound at all like speech. If you used a speech transmission system to send a violin sound, it would arrive horribly distorted; speech transmitted the same way would sound fine. Exploiting the special structure of speech requires going beyond the capabilities of analog signal processing systems. Fundamentally, we need to manipulate signals in more ways than are possible with analog systems; we need to do more than filtering to determine the speech signal's structure. Such flexibility is achievable (but not without some loss) with programmable digital systems.

4.11 Frequency Domain Problems
16 This content is available online at <http://cnx.org/content/m10350/2.42/>.

Problem 4.1: Simple Fourier Series
Find the complex Fourier series representations of the following signals without explicitly calculating Fourier integrals. What is the signal's period in each case?

a) s(t) = sin(t)
b) s(t) = sin²(t)
c) s(t) = cos(t) + 2cos(2t)
d) s(t) = cos(2t) cos(t)
e) s(t) = cos(10πt + π/6) (1 + cos(2πt))
f) s(t) given by the depicted waveform (Figure 4.18)

[Figure 4.18: a staircase waveform s(t) of unit amplitude with breakpoints at t = 1/8, 3/8, and 1.]

Problem 4.2: Fourier Series
Find the Fourier series representation for the following periodic signals (Figure 4.19). For the third signal, find the complex Fourier series for the triangle wave without performing the usual Fourier integrals.

Hint: How is this signal related to one for which you already have the series?

[Figure 4.19: three unit-amplitude periodic waveforms plotted against t: (a) a square-like wave over 0-3, (b) a sawtooth-like wave over 0-3, and (c) a triangle wave over 0-4.]

Problem 4.3: Phase Distortion
We can learn about phase distortion by returning to circuits and investigating the following circuit (Figure 4.20).

[Figure 4.20: a two-port circuit with input vin(t), output vout(t), and unit-valued elements.]

b) Instead of truncating the series. we want to approximate a reference signal by a somewhat simpler signal.21). Plot the mean-squared error as a function of for both s (t).154 CHAPTER 4. How would you characterize this circuit? c) Let vin (t) be a square-wave of period T. the most frequently used error measure is the mean-squared error. ˜ K for periodic signals is to truncate their Fourier series. let's generalize the nature of the approximation to including any set of 2K + 1 terms: We'll always include the c0 and the negative indexed term corresponding to ck .01 and T = 2. always minimizes the meansquared error). To assess the quality of an approximation. Approximating Periodic Signals Often. c) Find the Fourier series for the depicted signal (Figure 4. Problem 4. Show that the square wave is applies a linear phase shift to the signal's spectrum. FREQUENCY DOMAIN 1 + vin(t) – 1 + – vout(t) 1 1 Figure 4. What value of T fourier2. the square wave is passed through a system that delays its input. a) Find a frequency-domain expression for the approximation error when we use the truncated Fourier series as the approximation. Use the transfer function of a delay to compute using Matlab the Fourier series of the output. τ be delineates e) Instead of the depicted circuit. What selection of terms minimizes the mean-squared error? Find an expression for the mean-squared error resulting from your choice. What is the Fourier series for the output voltage? d) Use Matlab to nd the output's waveform for the cases the two kinds of results you found? The software in T = 0.20 a) Find this lter's transfer function. Use Matlab to nd the truncated approximation and best approximation involving two terms..m might be useful.e. which T 4 . b) Find the magnitude and phase of this transfer function. For a periodic signal 2 = 1 T T 0 (s (t) − s (t)) dt ˜ One convenient way of nding approximations 2 where s (t) is the reference signal and s (t) its approximation. 
K Available for free at Connexions <http://cnx. s (t) = ˜ k=−K ck ej 2πk T t The point of this problem is to analyze whether this approach is the best (i. Let the delay indeed delayed.9> .
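A numerical sketch of the mean-squared-error computation may help. The helper below is an illustrative construction, not from the text (the function name and grid choices are ours): it truncates a unit square wave's Fourier series and evaluates the error, which shrinks as K grows.

```python
import numpy as np

def fourier_mse(s, t, T, K):
    """Mean-squared error of the K-term truncated Fourier series of s(t)."""
    dt = t[1] - t[0]
    err = s.astype(complex).copy()
    for k in range(-K, K + 1):
        # Riemann-sum approximation of the Fourier coefficient c_k
        ck = np.sum(s * np.exp(-2j * np.pi * k * t / T)) * dt / T
        err -= ck * np.exp(2j * np.pi * k * t / T)
    return np.mean(np.abs(err) ** 2)

T = 1.0
t = np.arange(0, T, T / 4096)
square = np.where(t < T / 2, 1.0, -1.0)   # unit-amplitude square wave

errors = [fourier_mse(square, t, T, K) for K in (1, 3, 5, 7)]
# the error shrinks monotonically as more harmonics are kept
```

For the square wave, the K = 1 error is 1 − 8/π², since the first harmonic carries 8/π² of the unit signal power.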

Problem 4.5: Long, Hot Days
The daily temperature is a consequence of several effects, one of them being the sun's heating. If this were the dominant effect, then daily temperatures would be proportional to the number of daylight hours. The plot (Figure 4.22) shows that the average daily high temperature does not behave that way. In this problem, we want to understand the temperature component of our environment using Fourier series and linear system theory. The file temperature.mat contains these data (daylight hours in the first row, corresponding average daily highs in the second) for Houston, Texas.

[Figure 4.22: daylight hours (roughly 10 to 14 hours) and average daily high temperature (roughly 50 to 95 degrees) plotted against the day of the year]

a) Let the length of day serve as the sole input to a system having an output equal to the average daily temperature. Examining the plots of input and output, would you say that the system is linear or not? How did you reach your conclusion?
quently.24(a)). give a physical explanation for the phase shift.7: Duality in Fourier Transforms Conse- "Duality" means that the Fourier transform and the inverse Fourier transform are very similar. that are very similar. Would days be hotter? If so. the waveform s (t) in the time domain and the spectrum s (f ) have a Fourier transform and an inverse Fourier transform.9> . In particular.. d) Because the harmonic distortion is small. What are their spectral a) Calculate the Fourier transform of the single pulse shown below (Figure 4.23 1 (b) Problem 4. Available for free at Connexions <http://cnx. respectively.156 CHAPTER 4. b) Calculate the inverse Fourier transform of the spectrum shown below (Figure 4. a) Calculate the Fourier transform of the signal shown below (Figure 4. What is the e) Find the transfer function of the simplest possible linear model that would describe the data.23(b)). f ) Predict what the output would be if the model had no phase properties? Spectra of Pulse Sequences Pulse sequences occur often in digital communication and in other elds as well. Characterize and interpret the structure of this model.6: a) b) c) Fourier Transform Pairs Find the Fourier or inverse Fourier transform of the following.. x (t) = e−(a|t|) x (t) = te−(at) u (t)   1 if |f | < W X (f ) =  0 if |f | > W x (t) = e−(at) cos (2πf0 t) u (t) d) Problem 4. . c4 ) of the complex Fourier series for each signal. c) How are these answers related? What is the general relationship between the Fourier transform of and the inverse transform of s (t) s (f )? 1 s(t) 1 S(f) t f 1 (a) Figure 4. c) What is the harmonic distortion in the two signals? Exclude phase shift between input and output signals? c0 from this calculation. . FREQUENCY DOMAIN b) Find the rst ve terms (c0 . let's concentrate only on the rst harmonic. by how much? Problem 4.23(a)).

b) Calculate the Fourier transform of the two-pulse sequence shown below (Figure 4.24(b)).
c) Calculate the Fourier transform for the ten-pulse sequence shown below (Figure 4.24(c)). You should look for a general expression that holds for sequences of any length. Describe how the spectra change as the number of repeated pulses increases.
d) Using Matlab, plot the magnitudes of the three spectra.

[Figure 4.24: (a) a single unit pulse between t = 1 and t = 2; (b) a two-pulse sequence; (c) a ten-pulse sequence]

Problem 4.9: Spectra of Digital Communication Signals
One way to represent bits with signals is shown in Figure 4.25. If the value of a bit is a 1, it is represented by a positive pulse of duration T. If it is a 0, it is represented by a negative pulse of the same duration. To represent a sequence of bits, the appropriately chosen pulses are placed one after the other.

[Figure 4.25: the two bit waveforms of duration T, one positive and one negative]

a) What is the spectrum of the waveform that represents the alternating bit sequence ...01010101...?
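Part (d) of the pulse-sequence problem suggests Matlab; an equivalent sketch in Python with NumPy (all parameters are illustrative choices of ours) shows how the spectra evolve as pulses repeat.

```python
import numpy as np

def pulse_train_spectrum(n_pulses, width=1.0, period=2.0, fs=100.0, nfft=8192):
    """Approximate magnitude spectrum of n_pulses unit pulses, one per period."""
    t = np.arange(0, n_pulses * period, 1 / fs)
    x = ((t % period) < width).astype(float)
    X = np.fft.rfft(x, nfft) / fs          # Riemann-sum approximation of the FT
    f = np.fft.rfftfreq(nfft, 1 / fs)
    return f, np.abs(X)

f, mag1 = pulse_train_spectrum(1)
f, mag10 = pulse_train_spectrum(10)
# repeating the pulse concentrates energy near multiples of the repetition
# rate: the peak grows roughly tenfold while the spectral lobes narrow
```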
b) This signal's bandwidth is defined to be the frequency range over which 90% of the power is contained. What is this signal's bandwidth?
c) Suppose the bit sequence becomes ...00110011... Now what is the bandwidth?

Problem 4.10: Lowpass Filtering a Square Wave
Let a square wave (period T) serve as the input to a first-order lowpass system constructed as an RC filter. We want to derive an expression for the time-domain response of the filter to this input.
a) First, consider the response of the filter to a simple pulse, having unit amplitude and width T/2. Derive an expression for the filter's output to this pulse.
b) Noting that the square wave is a superposition of a sequence of these pulses, what is the filter's response to the square wave?
c) The nature of this response should change as the relation between the square wave's period and the filter's cutoff frequency changes. How long must the period be so that the response does not achieve a relatively constant value between transitions in the square wave? What is the relation of the filter's cutoff frequency to the square wave's spectrum in this case?

Problem 4.11: Mathematics with Circuits
Simple circuits can implement simple mathematical operations, such as integration and differentiation. We want to develop an active circuit (it contains an op-amp) having an output that is proportional to the integral of its input. For example, you could use an integrator in a car to determine distance traveled from the speedometer.
a) What is the transfer function of an integrator?
b) Find an op-amp circuit so that its voltage output is proportional to the integral of its input for all signals.

Problem 4.12: Where is that sound coming from?
We determine where sound is coming from because we have two ears and a brain. Sound travels at a relatively slow speed and our brain uses the fact that sound will arrive at one ear before the other. As shown here (Figure 4.26), a sound coming from the right arrives at the left ear τ seconds after it arrives at the right ear.

[Figure 4.26: a sound wave arriving at the right ear as s(t) and at the left ear as s(t − τ)]

Once the brain finds this propagation delay, it can determine the sound direction. In an attempt to model what the brain might do, RU signal processors want to design an optimal system that delays each ear's signal by some amount then adds them together. Δl and Δr are the delays applied to the left and right signals respectively. The idea is to determine the delay values according to some criterion that is based on what is measured by the two ears.
a) What is the transfer function between the sound signal s(t) and the processor output y(t)?
b) One way of determining the delay τ is to choose Δl and Δr to maximize the power in y(t). How are these maximum-power processing delays related to τ?

Problem 4.13: Arrangements of Systems
Architecting a system of modular components means arranging them in various configurations to achieve some overall input-output relation. For each of the following (Figure 4.27), determine the overall transfer function between x(t) and y(t).

[Figure 4.27: (a) system a, a cascade of H1(f) and H2(f); (b) system b, a parallel connection of H1(f) and H2(f); (c) system c, a feedback connection with error signal e(t), H1(f) in the forward path, and H2(f) in the feedback path]

The overall transfer function for the cascade (first depicted system) is particularly interesting. What does it say about the effect of the ordering of linear, time-invariant systems in a cascade?

Problem 4.14: Filtering
Let the signal s(t) = sin(πt)/(πt) be the input to a linear, time-invariant filter having the transfer function shown below (Figure 4.28). Find the expression for y(t), the filter's output.

[Figure 4.28: the transfer function H(f), equal to one for |f| < 1/4 and zero otherwise]


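The sinc-filtering problem above can be cross-checked numerically. In the following sketch the grid sizes are arbitrary choices of ours, and the closed form being tested, y(t) = sin(πt/2)/(πt), is our candidate answer rather than something given in the text.

```python
import numpy as np

fs = 16.0
t = np.arange(-2048, 2048) / fs          # time grid centered on t = 0
s = np.sinc(t)                           # np.sinc(x) = sin(pi x)/(pi x)

# filter with the ideal lowpass H(f) = 1 for |f| < 1/4 via the FFT
f = np.fft.fftfreq(len(t), 1 / fs)
H = (np.abs(f) < 0.25).astype(float)
S = np.fft.fft(np.fft.ifftshift(s))      # ifftshift puts t = 0 first
y = np.fft.fftshift(np.real(np.fft.ifft(S * H)))

y_expected = 0.5 * np.sinc(t / 2)        # candidate: sin(pi t/2)/(pi t)
max_err = np.max(np.abs(y - y_expected))
```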
Problem 4.15: Circuits Filter!
A unit-amplitude pulse with duration of one second serves as the input to an RC-circuit having transfer function

H(f) = j2πf / (4 + j2πf)

a) How would you categorize this transfer function: lowpass, highpass, bandpass, other?
b) Find a circuit that corresponds to this transfer function.
c) Find an expression for the filter's output.
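A numerical cross-check of part (c) is easy to set up. The closed form compared against, e^(−4t)u(t) − e^(−4(t−1))u(t−1), is our candidate answer (an assumption to verify), and FFT-based filtering only approximates the continuous-time system, so the comparison is made away from the pulse edges.

```python
import numpy as np

fs = 2000.0
t = np.arange(0, 4, 1 / fs)
x = (t < 1.0).astype(float)              # unit pulse, one second long

f = np.fft.fftfreq(len(t), 1 / fs)
H = (2j * np.pi * f) / (4 + 2j * np.pi * f)
y = np.real(np.fft.ifft(np.fft.fft(x) * H))

# candidate closed form, checked at interior points (Gibbs ringing sits
# near the discontinuities at t = 0 and t = 1)
y_analytic = np.exp(-4 * t) - np.exp(-4 * (t - 1)) * (t >= 1)
```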

Problem 4.16: Reverberation
Reverberation corresponds to adding to a signal its delayed version.
a) Assuming τ represents the delay, what is the input-output relation for a reverberation system? Is the system linear and time-invariant? If so, find the transfer function; if not, what linearity or time-invariance criterion does reverberation violate?
b) A music group known as the ROwls is having trouble selling its recordings. The record company's engineer gets the idea of applying different delays to the low and high frequencies and adding the result to create a new musical effect. Thus, the ROwls' audio would be separated into two parts (one less than the frequency f₀, the other greater than f₀), these would be delayed by their respective amounts, and the resulting signals added. Draw a block diagram for this new audio processing system, showing its various components.
c) How does the magnitude of the system's transfer function depend on the two delays?
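The basic reverberation relation y(t) = x(t) + a·x(t − τ) is simple to state in discrete time; a minimal sketch (function name, gain, and delay values are illustrative):

```python
import numpy as np

def reverberate(x, delay_samples, gain):
    """y[n] = x[n] + gain * x[n - delay]: the basic reverberation relation."""
    y = x.copy().astype(float)
    y[delay_samples:] += gain * x[:-delay_samples]
    return y

x = np.zeros(16)
x[0] = 1.0                               # unit impulse
y = reverberate(x, delay_samples=4, gain=0.5)
# impulse response: 1 at n = 0 and 0.5 at n = 4, so the system is linear
# and time-invariant with transfer function 1 + gain * e^(-j 2 pi f tau)
```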

Problem 4.17: Echoes in Telephone Systems
A frequently encountered problem in telephones is echo. Here, because of acoustic coupling between the ear piece and microphone in the handset, what you hear is also sent to the person talking. That person thus not only hears you, but also hears her own speech delayed (because of propagation delay over the telephone network) and attenuated (the acoustic coupling gain is less than one). Furthermore, the same problem applies to you as well: the acoustic coupling occurs in her handset as well as yours.
a) Develop a block diagram that describes this situation.
b) Find the transfer function between your voice and what the listener hears.
c) Each telephone contains a system for reducing echoes using electrical means. What simple system could null the echoes?

Problem 4.18: Effective Drug Delivery
In most patients, it takes time for the concentration of an administered drug to achieve a constant level in the blood stream. Typically, if the drug concentration in the patient's intravenous line is Cd·u(t), the concentration in the patient's blood stream is Cp(1 − e^(−at))u(t).
a) Assuming the relationship between drug concentration in the patient's blood and the delivered concentration can be described as a linear, time-invariant system, what is the transfer function?
b) Sometimes, the drug delivery system goes awry and delivers drugs with little control. What would the patient's drug concentration be if the delivered concentration were a ramp? More precisely, what if it were Cd·t·u(t)?
c) A clever doctor wants to have the flexibility to slow down or speed up the patient's drug concentration. In other words, the concentration is to be Cp(1 − e^(−bt))u(t), with b bigger or smaller than a. How should the delivered drug concentration signal be changed to achieve this concentration profile?
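A first-order model consistent with the given step response can be simulated directly. This sketch assumes the system y' = a(K·x − y) with gain K = Cp/Cd, which reproduces the stated Cp(1 − e^(−at))u(t) behavior; the Euler integration and parameter values are illustrative.

```python
import numpy as np

def blood_concentration(x, a, K, fs):
    """Euler simulation of y' = a*(K*x - y), a first-order LTI model."""
    y = np.zeros_like(x)
    dt = 1 / fs
    for n in range(1, len(x)):
        y[n] = y[n - 1] + dt * a * (K * x[n - 1] - y[n - 1])
    return y

fs, a, K = 1000.0, 2.0, 1.0
t = np.arange(0, 5, 1 / fs)
step = np.ones_like(t)                   # delivered concentration Cd u(t)
y = blood_concentration(step, a, K, fs)
# y should track K*(1 - exp(-a t)) and settle near K for large t
```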





Problem 4.19: Catching Speeders with Radar
RU Electronics has been contracted to design a Doppler radar system. Radar transmitters emit a signal that bounces off any conducting object. Signal differences between what is sent and the radar return are processed and features of interest extracted. In Doppler systems, the object's speed along the direction of the radar beam is the feature the design must extract. The transmitted signal is a sinusoid: x(t) = A·cos(2πfc·t). The measured return signal equals B·cos(2π((fc + Δf)t + ϕ)), where the Doppler offset frequency Δf equals 10v, where v is the car's velocity coming toward the transmitter.
a) Design a system that uses the transmitted and return signals as inputs and produces Δf.
b) One problem with designs based on overly simplistic design goals is that they are sensitive to unmodeled assumptions. How would you change your design, if at all, so that whether the car is going away from or toward the transmitter could be determined?
c) Suppose two objects traveling at different speeds provide returns. How would you change your design, if at all, to accommodate multiple returns?
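One way to sketch part (a) numerically is to multiply the transmitted and return signals and locate the difference-frequency peak; every numerical value below is a hypothetical choice, not from the problem.

```python
import numpy as np

fs, fc, df = 100_000.0, 10_000.0, 350.0   # sample rate, carrier, Doppler offset
t = np.arange(0, 1.0, 1 / fs)
sent = np.cos(2 * np.pi * fc * t)
ret = np.cos(2 * np.pi * (fc + df) * t + 0.7)

mixed = sent * ret                        # contains df and 2*fc + df components
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
low = freqs < fc                          # crude lowpass: ignore the 2*fc term
estimated_df = freqs[low][np.argmax(spectrum[low])]
```

The multiply-and-lowpass structure mirrors the trigonometric identity cos(A)cos(B) = (cos(A − B) + cos(A + B))/2, so the surviving low-frequency term oscillates at exactly Δf.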

Problem 4.20: Demodulating an AM Signal
Let m(t) denote the signal that has been amplitude modulated:

x(t) = A(1 + m(t)) sin(2πfc·t)

Radio stations try to restrict the amplitude of the signal m(t) so that it is less than one in magnitude. The frequency fc is very large compared to the frequency content of the signal. What we are concerned about here is not transmission, but reception.
a) The so-called coherent demodulator simply multiplies the signal x(t) by a sinusoid having the same frequency as the carrier and lowpass filters the result. Assume the lowpass filter is ideal. Analyze this receiver and show that it works.
b) One issue in coherent reception is the phase of the sinusoid used by the receiver relative to that used by the transmitter. Assuming that the sinusoid of the receiver has a phase φ, how does the output depend on φ? What is the worst possible value for this phase?
c) The incoherent receiver is more commonly used because of the phase sensitivity problem inherent in coherent reception. Here, the receiver full-wave rectifies the received signal and lowpass filters the result (again ideally). Analyze this receiver. Does its output differ from that of the coherent receiver in a significant way?
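Coherent demodulation and its phase sensitivity can be checked numerically. This sketch uses a crude moving-average lowpass and illustrative parameters, not a prescribed receiver design.

```python
import numpy as np

fs, fc, fm = 200_000.0, 10_000.0, 100.0
t = np.arange(0, 0.05, 1 / fs)
m = 0.5 * np.cos(2 * np.pi * fm * t)       # message, |m| < 1
x = (1 + m) * np.sin(2 * np.pi * fc * t)

def demodulate(x, phase):
    product = x * np.sin(2 * np.pi * fc * t + phase)
    kernel = np.ones(400) / 400            # moving average spanning 20 carrier cycles
    return np.convolve(product, kernel, mode="same")

in_phase = demodulate(x, 0.0)              # tracks (1 + m)/2: the message survives
quadrature = demodulate(x, np.pi / 2)      # collapses toward 0: worst-case phase
```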

Problem 4.21: Unusual Amplitude Modulation
We want to send a band-limited signal having the depicted spectrum (Figure 4.29(a)) with amplitude modulation in the usual way. I.B. Different suggests using the square-wave carrier shown below (Figure 4.29(b)). Well, it is different, but his friends wonder if any technique can demodulate it.
a) Find an expression for X(f), the Fourier transform of the modulated signal.
b) Sketch the magnitude of X(f), being careful to label important magnitudes and frequencies.
c) What demodulation technique obviously works?
d) I.B. challenges three of his friends to demodulate x(t) some other way. One friend suggests modulating x(t) with cos(3πt/2), another wants to try modulating with cos(πt), and the third thinks cos(πt/2) will work. Sketch the magnitude of the Fourier transform of the signal each student's approach produces. Which student comes closest to recovering the original signal? Why?

[Figure 4.29: (a) the band-limited spectrum S(f) with unit peak; (b) the unit-amplitude square-wave carrier]

Problem 4.22: Sammy Falls Asleep...
While sitting in ELEC 241 class, Sammy falls asleep during a critical time when an AM receiver is being described. The received signal has the form r(t) = A(1 + m(t)) cos(2πfc·t + φ) where the phase φ is unknown. The message signal is m(t); it has a bandwidth of W Hz and a magnitude less than 1 (|m(t)| < 1). The instructor drew a diagram (Figure 4.30) for a receiver on the board; Sammy slept through the description of what the unknown systems were.

[Figure 4.30: r(t) is multiplied by cos(2πfc·t) and by sin(2πfc·t); each product passes through a lowpass filter of bandwidth W Hz, producing xc(t) and xs(t), which feed the unknown systems marked "?"]

a) What are the signals xc(t) and xs(t)?
b) What would you put in for the unknown systems that would guarantee that the final output contained the message regardless of the phase? Think of a trigonometric identity that would prove useful.
c) Sammy may have been asleep, but he can think of a far simpler receiver. What is it?
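One standard fix for part (b), assuming the two lowpass outputs are xc = (A/2)(1 + m)cos(φ) and xs = −(A/2)(1 + m)sin(φ), combines them as a magnitude; a one-sample sketch with made-up values:

```python
import numpy as np

A, phi = 2.0, 1.1                        # amplitude and an unknown phase
m = 0.3                                  # one message sample, |m| < 1
xc = (A / 2) * (1 + m) * np.cos(phi)     # cos-branch lowpass output
xs = -(A / 2) * (1 + m) * np.sin(phi)    # sin-branch lowpass output
recovered = np.hypot(xc, xs)             # = A(1 + m)/2, independent of phi
```

The identity at work is cos²(φ) + sin²(φ) = 1, which removes the unknown phase entirely.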





Problem 4.23:
Sid Richardson college decides to set up its own AM radio station KSRR. The resident electrical engineer decides that she can choose any carrier frequency and message bandwidth for the station. A rival college decides to jam its transmissions by transmitting a high-power signal that interferes with radios that try to receive KSRR. The jamming signal jam(t) is what is known as a sawtooth wave (depicted in Figure 4.31) having a period known to KSRR's engineer.

[Figure 4.31: the periodic sawtooth jamming signal jam(t), with amplitude A and period T]

a) Find the spectrum of the jamming signal.
b) Can KSRR entirely circumvent the attempt to jam it by carefully choosing its carrier frequency and transmission bandwidth? If so, find the station's carrier frequency and transmission bandwidth in terms of T, the period of the jamming signal; if not, show why not.
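Part (a)'s sawtooth spectrum can be checked numerically. The 1/k falloff asserted here (c₀ = A/2 and |cₖ| = A/(2πk) for a 0-to-A ramp) is the standard result for this waveform, verified with a DFT; the parameters are arbitrary.

```python
import numpy as np

A, T, N = 1.0, 1.0, 65536
t = np.arange(N) * T / N
saw = A * t / T                          # one period of the sawtooth
ck = np.fft.fft(saw) / N                 # DFT bins approximate the c_k

c0 = ck[0].real                          # should be near A/2
mag1 = abs(ck[1])                        # should be near A/(2*pi)
mag2 = abs(ck[2])                        # should be near A/(4*pi)
```

The harmonics sit at every multiple of 1/T, which is what makes the jamming question in part (b) interesting: the jammer's energy is spread across the whole band.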

Problem 4.24: AM Stereo
A stereophonic signal consists of a "left" signal l(t) and a "right" signal r(t) that convey sounds coming from an orchestra's left and right sides, respectively. To transmit these two signals simultaneously, the transmitter first forms the sum signal s₊(t) = l(t) + r(t) and the difference signal s₋(t) = l(t) − r(t). Then, the transmitter amplitude-modulates the difference signal with a sinusoid having frequency 2W, where W is the bandwidth of the left and right signals. The sum signal and the modulated difference signal are added, the sum amplitude-modulated to the radio station's carrier frequency fc, and transmitted. Assume the spectra of the left and right signals are as shown (Figure 4.32).

[Figure 4.32: the assumed spectra of the left and right signals, each bandlimited to W Hz]

a) What is the expression for the transmitted signal? Sketch its spectrum.
b) Show the block diagram of a stereo AM receiver that can yield the left and right signals as separate outputs.
c) What signal would be produced by a conventional coherent AM receiver that expects to receive a standard AM signal conveying a message signal having bandwidth W?


Problem 4.25: Novel AM Stereo Method
A clever engineer has submitted a patent for a new method for transmitting two signals in the same transmission bandwidth as commercial AM radio. As shown (Figure 4.33), her approach is to modulate the positive portion of the carrier with one signal and the negative portion with a second.

[Figure 4.33: an example transmitter waveform, oscillating between about −1.5 and 1.5 over the plotted time interval]

In detail, the two message signals m₁(t) and m₂(t) are bandlimited to W Hz and have maximal amplitudes equal to 1. The carrier has a frequency fc much greater than W. The transmitted signal x(t) is given by

x(t) = A(1 + a·m₁(t)) sin(2πfc·t)  if sin(2πfc·t) ≥ 0
x(t) = A(1 + a·m₂(t)) sin(2πfc·t)  if sin(2πfc·t) < 0

In all cases, 0 < a < 1. The plot shows the transmitted signal when the messages are sinusoids: m₁(t) = sin(2πfm·t) and m₂(t) = sin(2π·2fm·t) where 2fm < W. You, as the patent examiner, must determine whether the scheme meets its claims and is useful.
a) Provide a more concise expression for the transmitted signal x(t) than given above.
b) What is the receiver for this scheme? It would yield both m₁(t) and m₂(t) from x(t).
c) Find the spectrum of the positive portion of the transmitted signal.
d) Determine whether this scheme satisfies the design criteria, allowing you to grant the patent. Explain your reasoning.




Problem 4.26: A Radical Radio Idea
An ELEC 241 student has the bright idea of using a square wave instead of a sinusoid as an AM carrier. The transmitted signal would have the form

x(t) = A(1 + m(t)) sqT(t)

where the message signal m(t) would be amplitude-limited: |m(t)| < 1.
a) Assuming the message signal is lowpass and has a bandwidth of W Hz, what values for the square wave's period T are feasible? In other words, do some combinations of W and T prevent reception?
b) Assuming reception is possible, can standard radios receive this innovative AM transmission? If so, show how a coherent receiver could demodulate it; if not, show how the coherent receiver's output would be corrupted. Assume that the message bandwidth W = 5.

Problem 4.27: Secret Communication
An amplitude-modulated secret message m(t) has the following form:

r(t) = A(1 + m(t)) cos(2π(fc + f0)t)

The message signal has a bandwidth of W Hz and a magnitude less than 1 (|m(t)| < 1). The idea is to offset the carrier frequency by f0 Hz from standard radio carrier frequencies. Thus, "off-the-shelf" coherent demodulators would assume the carrier frequency has fc Hz. Here, f0 < W.
a) Sketch the spectrum of the demodulated signal produced by a coherent demodulator tuned to fc Hz.
b) Will this demodulated signal be a scrambled version of the original? If so, how so; if not, why not?
c) Can you develop a receiver that can demodulate the message without knowing the offset frequency f0?

Problem 4.28: Signal Scrambling
An excited inventor announces the discovery of a way of using analog technology to render music unlistenable without knowing the secret recovery method. The idea is to modulate the bandlimited message m(t) by a special periodic signal s(t) that is zero during half of its period, which renders the message unlistenable and, superficially at least, unrecoverable (Figure 4.34).

[Figure 4.34: the periodic signal s(t), equal to one over part of its period (markings at T/4 and T/2) and zero during half of its period]

a) What is the Fourier series for the periodic signal?
b) What are the restrictions on the period T so that the message signal can be recovered from m(t)s(t)?
c) ELEC 241 students think they have "broken" the inventor's scheme and are going to announce it to the world. How would they recover the original message without having detailed knowledge of the modulating signal?

Solutions to Exercises in Chapter 4

Solution to Exercise 4.2.1 (p. 120)
Because of Euler's relation,

sin(2πft) = (1/2j) e^(j2πft) − (1/2j) e^(−j2πft)

Thus, c₁ = 1/2j, c₋₁ = −1/2j, and the other coefficients are zero.

Solution to Exercise 4.2.2 (p. 123)
Write the coefficients of the complex Fourier series in Cartesian form as cₖ = Aₖ + jBₖ and substitute into the expression for the complex Fourier series:

Σ_{k=−∞}^{∞} cₖ e^(j2πkt/T) = Σ_{k=−∞}^{∞} (Aₖ + jBₖ) e^(j2πkt/T)

Simplifying each term in the sum using Euler's formula,

(Aₖ + jBₖ) e^(j2πkt/T) = (Aₖ + jBₖ)(cos(2πkt/T) + j sin(2πkt/T))
= Aₖ cos(2πkt/T) − Bₖ sin(2πkt/T) + j(Aₖ sin(2πkt/T) + Bₖ cos(2πkt/T))

We now combine terms that have the same frequency index in magnitude. Because the signal is real-valued, the coefficients of the complex Fourier series have conjugate symmetry: c₋ₖ = cₖ*, or A₋ₖ = Aₖ and B₋ₖ = −Bₖ. After we add the positive-indexed and negative-indexed terms, each term in the Fourier series becomes 2Aₖ cos(2πkt/T) − 2Bₖ sin(2πkt/T). To obtain the classic Fourier series (4.11), we must have 2Aₖ = aₖ and 2Bₖ = −bₖ.

Solution to Exercise 4.2.3 (p. 124)
c₀ = AΔ/T. This quantity clearly corresponds to the periodic pulse signal's average value.

Solution to Exercise 4.3.1 (p. 125)
The average of a set of numbers is the sum divided by the number of terms. Viewing signal integration as the limit of a Riemann sum, the integral corresponds to the average.

Solution to Exercise 4.3.2 (p. 125)
We found that the complex Fourier series coefficients are given by cₖ = 2/(jπk). The coefficients are pure imaginary, which means aₖ = 0. The coefficients of the sine terms are given by bₖ = −2 Im(cₖ), so that bₖ = 4/(πk) if k is odd and 0 if k is even. Thus, the Fourier series for the square wave is

sq(t) = Σ_{k ∈ {1,3,...}} (4/(πk)) sin(2πkt/T)

Solution to Exercise 4.3.3 (p. 127)
The rms value of a sinusoid equals its amplitude divided by √2. As a half-wave rectified sine wave is zero during half of the period, its rms value is A/2, since the integral of the squared half-wave rectified sine wave equals half that of a squared sinusoid.

Solution to Exercise 4.3.4 (p. 128)
Total harmonic distortion equals (Σ_{k=2}^{∞} (aₖ² + bₖ²)) / (a₁² + b₁²). Clearly, the numerator equals the square of the signal's rms value minus the power in the average and the power in the first harmonic. However, this quantity is most easily computed in the frequency domain.

Solution to Exercise 4.6.1 (p. 131)
We can use N different amplitude values at only one frequency to represent the various letters.

Solution to Exercise 4.6.2 (p. 133)
N signals directly encoded require a bandwidth of N/T. Using a binary representation, we need log₂N/T. For N = 128, log₂N = 7, so the binary-encoding scheme has a factor of 7/128 = 0.05 smaller bandwidth. Clearly, binary encoding is superior.

Solution to Exercise 4.7.1 (p. 134)
Total harmonic distortion in the square wave is 1 − (1/2)(4/π)² ≈ 20%.

Solution to Exercise 4.7.2 (p. 136)
Because the filter's gain at zero frequency equals one, the average output values equal the respective average input values.

Solution to Exercise 4.8.1 (p. 139)
F(S(f)) = ∫_{−∞}^{∞} S(f) e^(−j2πft) df = ∫_{−∞}^{∞} S(f) e^(j2πf(−t)) df = s(−t)

Solution to Exercise 4.8.2 (p. 139)
F(F(F(F(s(t))))) = s(t). We know that F(S(f)) = s(−t). Therefore, two Fourier transforms applied to s(t) yield s(−t). We need two more to get us back where we started.

Solution to Exercise 4.8.3 (p. 141)
The signal is the inverse Fourier transform of the triangularly shaped spectrum, and equals s(t) = W (sin(πWt)/(πWt))².

Solution to Exercise 4.8.4 (p. 142)
The result is most easily found in the spectrum's formula: the power in the signal-related part of x(t) is half the power of the signal s(t).

Solution to Exercise 4.9.1 (p. 143)
The inverse transform of the frequency response is (1/RC) e^(−t/RC) u(t). Multiplying the frequency response by 1 − e^(−j2πfΔ) means subtracting from the original signal its time-delayed version. Delaying the frequency response's time-domain version by Δ results in (1/RC) e^(−(t−Δ)/RC) u(t − Δ). Subtracting from the undelayed signal yields (1/RC) e^(−t/RC) u(t) − (1/RC) e^(−(t−Δ)/RC) u(t − Δ). Now we integrate this sum. Because the integral of a sum equals the sum of the component integrals (integration is linear), we can consider each separately. Because integration and signal delay are linear, the integral of a delayed signal equals the delayed version of the integral. The integral is provided in the example (4.44), and equals (1 − e^(−t/RC)) u(t).

Solution to Exercise 4.10.1 (p. 148)
In the bottom-left panel, the period is about 0.009 s, which equals a frequency of 111 Hz. The bottom-right panel has a period of about 0.0065 s, a frequency of 154 Hz.

Solution to Exercise 4.10.2 (p. 148)
Because males have a lower pitch frequency, the spacing between spectral lines is smaller. This closer spacing more accurately reveals the formant structure. Doubling the pitch frequency to 300 Hz for Figure 4.16 (voice spectrum) would amount to removing every other spectral line.

Solution to Exercise 4.10.3 (p. 150)
If the glottis were linear, a constant input (a zero-frequency sinusoid) should yield a constant output. The periodic output indicates nonlinear behavior.
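The square wave's 20% figure can be verified numerically. Here total harmonic distortion is taken to be the fraction of (non-DC) signal power lying outside the first harmonic, matching the 1 − (1/2)(4/π)² ≈ 20% value stated in the text.

```python
import numpy as np

N = 4096
t = np.arange(N) / N
sq = np.where(t < 0.5, 1.0, -1.0)        # one period of a unit square wave
ck = np.fft.fft(sq) / N                  # DFT approximation of the c_k

power = np.sum(np.abs(ck[1:]) ** 2)      # total power minus the DC term
first = 2 * np.abs(ck[1]) ** 2           # power in the k = +/-1 harmonic
thd = 1 - first / power                  # fraction outside the fundamental
```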

Chapter 5: Digital Signal Processing

5.1 Introduction to Digital Signal Processing

Not only do we have analog signals (signals that are real- or complex-valued functions of a continuous variable such as time or space), we can define digital ones as well. Digital signals are sequences, functions defined only for the integers. We thus use the notation s(n) to denote a discrete-time one-dimensional signal such as a digital music recording and s(m, n) for a discrete-"time" two-dimensional signal like a photo taken with a digital camera. Sequences are fundamentally different than continuous-time signals. For example, continuity has no meaning for sequences.

Despite such fundamental differences, the theory underlying digital signal processing mirrors that for analog signals: Fourier transforms, linear filtering, and linear systems parallel what previous chapters described. These similarities make it easy to understand the definitions and why we need them, but the similarities should not be construed as "analog wannabes." We will discover that digital signal processing is not an approximation to analog processing. We must explicitly worry about the fidelity of converting analog signals into digital ones. The music stored on CDs, the speech sent over digital cellular telephones, and the video carried by digital television all evidence that analog signals can be accurately converted to digital ones and back again.

The key reason why digital signal processing systems have a technological advantage today is the computer: computations, like the Fourier transform, can be performed quickly enough to be calculated as the signal is produced, and programmability means that the signal processing system can be easily changed. This flexibility has obvious appeal, and has been widely accepted in the marketplace. Programmability means that we can perform signal processing operations impossible with analog systems (circuits). We will also discover that digital systems enjoy an algorithmic advantage that contributes to rapid processing speeds: computations can be restructured in non-obvious ways to speed the processing. This flexibility comes at a price, a consequence of how computers work.

Taking a systems viewpoint for the moment, a system that produces its output as rapidly as the input arises is said to be a real-time system. All analog systems operate in real time; digital ones that depend on a computer to perform system computations may or may not work in real time. Clearly, we need real-time signal processing systems. Only recently have computers become fast enough to meet real-time requirements while performing non-trivial signal processing. How do computers perform signal processing?

5.2 Introduction to Computer Organization

5.2.1 Computer Architecture

To understand digital signal processing systems, we must understand a little about how computers compute. The modern definition of a computer is an electronic device that performs calculations on data, presenting the results to humans or other computers in a variety of (hopefully useful) ways.

The generic computer contains input devices (keyboard, mouse, A/D (analog-to-digital) converter, etc.), a computational unit, and output devices (monitors, printers, D/A converters). The computational unit is the computer's heart, and usually consists of a central processing unit (CPU), a memory, and an input/output (I/O) interface. What I/O devices might be present on a given computer vary greatly.

[Figure 5.1: Generic computer hardware organization: a CPU, memory, and I/O interface, with keyboard, CRT, disks, and network attached]

Computer calculations can be numeric (obeying the laws of arithmetic), logical (obeying the laws of an algebra), or symbolic (obeying any law you like). An example of a symbolic computation is sorting a list of names.

• A simple computer operates fundamentally in discrete time. Computers are clocked devices, in which computational steps occur periodically according to ticks of a clock. This description belies clock speed: when you say "I have a 1 GHz computer," you mean that your computer takes 1 nanosecond to perform each step. That is incredibly fast! A "step" does not necessarily mean a computation like an addition; computers break such computations down into several stages, which means that the clock speed need not express the computational speed. Computational speed is expressed in units of millions of instructions/second (Mips). Your 1 GHz computer (clock speed) may have a computational speed of 200 Mips.
• Computers perform integer (discrete-valued) computations. Each computer instruction that performs an elementary numeric calculation (an addition, a multiplication, or a division) does so only for integers. The sum or product of two integers is also an integer, but the quotient of two integers is likely to not be an integer. How does a computer deal with numbers that have digits to the right of the decimal point? This problem is addressed by using the so-called floating-point representation of real numbers.

5.2.2 Representing Numbers

Focusing on numbers, all numbers can be represented by the positional notation system. The b-ary positional representation system uses the position of digits ranging from 0 to b − 1 to denote a number. Alternative number representation systems exist. For example, we could use stick figure counting or Roman numerals. These were useful in ancient times, but very limiting when it comes to arithmetic calculations: ever tried to divide two Roman numerals?

The quantity b is known as the base of the number system. Mathematically, positional systems represent the positive integer n as

n = Σ_{k=0}^{∞} d_k b^k ,  d_k ∈ {0, . . . , b − 1}    (5.1)

and we succinctly express n in base-b as n_b = d_N d_{N−1} . . . d_0. The number 25 in base 10 equals 2 × 10^1 + 5 × 10^0, so that the digits representing this number are d_0 = 5, d_1 = 2, and all other d_k equal zero. This same number in binary (base 2) equals 11001 (1 × 2^4 + 1 × 2^3 + 0 × 2^2 + 0 × 2^1 + 1 × 2^0) and 19 in hexadecimal (base 16). Fractions between zero and one are represented the same way:

f = Σ_{k=−∞}^{−1} d_k b^k    (5.2)

For example, if d_{−1} = 1 and all other digits equal zero, the base-2 fractional representation 0.1 equals the number 0.5. All numbers can be represented by their sign, integer and fractional parts. Complex numbers (Section 2.1) can be thought of as two real numbers that obey special rules to manipulate them.

Humans use base 10, commonly assumed to be due to us having ten fingers. Digital computers use the binary number representation, base 2. Here, each bit is represented as a voltage that is either "high" or "low," thereby representing "1" or "0," respectively. The computer's memory consists of an ordered sequence of bytes, a byte being a collection of eight bits, each digit of which is known as a bit (binary digit). A byte can therefore represent an unsigned number ranging from 0 to 255. To represent signed values, we tack on a special bit, the sign bit, to express the sign: if we take one of the bits and make it the sign bit, we can make the same byte represent numbers ranging from −128 to 127. Since we want to store many numbers in a computer's memory, we are restricted to those that have a finite binary representation. Large integers can be represented by an ordered sequence of bytes. Common lengths, usually expressed in terms of the number of bits, are 16, 32, and 64. Thus, an unsigned 32-bit number can represent integers ranging between 0 and 2^32 − 1 (4,294,967,295), a number almost big enough to enumerate every human in the world! [You need one more bit to do that.]

Exercise 5.2.1 (Solution on p. 221.)
For both 32-bit and 64-bit integer representations, what are the largest numbers that can be represented if a sign bit must also be included?

Figure 5.2: The various ways numbers are represented in binary are illustrated. [The figure shows an unsigned 8-bit integer with digits d7 . . . d0, a signed 8-bit integer with a sign bit s and digits d6 . . . d0, and a floating-point format with sign, exponent, and mantissa fields.]

The number of bytes used for the exponent and mantissa components of floating-point numbers varies, as described next.
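The unsigned and signed integer ranges above can be checked mechanically. This short sketch is not from the text; it assumes the usual two's-complement convention for signed integers, which matches the −128 to 127 range the text cites for a signed byte.

```python
def unsigned_range(bits):
    # An unsigned B-bit integer spans 0 .. 2**B - 1
    return (0, 2**bits - 1)

def signed_range(bits):
    # Devoting one bit to the sign leaves B-1 magnitude bits (two's complement)
    return (-(2**(bits - 1)), 2**(bits - 1) - 1)

print(unsigned_range(8))    # (0, 255)
print(signed_range(8))      # (-128, 127)
print(unsigned_range(32))   # (0, 4294967295)
```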

So long as the integers aren't too large, they can be represented exactly in a computer using the binary positional notation. Electronic circuits that make up the physical computer can add and subtract integers without error. (This statement isn't quite true; when does addition cause problems?) A computer's representation of integers is either perfect or only approximate, the latter situation occurring when the integer exceeds the range of numbers that a limited set of bytes can represent.

While this system represents integers well, how about numbers having nonzero digits to the right of the decimal point? In other words, how are numbers that have fractional parts represented? For such numbers, the floating-point system uses a number of bytes, typically 4 or 8, to represent the number, but with one byte (sometimes two) reserved to represent the exponent e of a power-of-two multiplier for the number, and the remaining bytes the mantissa m:

x = m 2^e    (5.3)

The mantissa is usually taken to be a binary fraction having a magnitude in the range [1/2, 1), which means that the binary representation is such that d_{−1} = 1. [In some computers, this normalization is taken to an extreme: the leading binary digit is not explicitly expressed, providing an extra bit to represent the mantissa a little more accurately. This convention is known as the hidden-ones notation.] The number zero is an exception to this rule, and it is the only floating point number having a zero fraction. The sign of the mantissa represents the sign of the number and the exponent can be a signed integer.

Floating point representations have similar representation problems: if the number x can be multiplied/divided by enough powers of two to yield a fraction lying between 1/2 and 1 that has a finite binary-fraction representation, the number is represented exactly in floating point. Otherwise, we can only represent the number approximately, not catastrophically in error as with integers. For example, the number 2.5 equals 0.625 × 2^2, the fractional part of which has an exact binary representation. However, the number 2.6 does not have an exact binary representation, and can only be represented approximately in floating point. In single precision floating point numbers, which require 32 bits (one byte for the exponent and the remaining 24 bits for the mantissa), the number 2.6 will be represented as 2.600000079... Note that this approximation has a much longer decimal expansion. This level of accuracy may not suffice in numerical calculations. Double precision floating point numbers consume 8 bytes, and quadruple precision 16 bytes. The more bits used in the mantissa, the greater the accuracy. This increasing accuracy means that more numbers can be represented exactly, but there are always some that cannot. Such inexact numbers have an infinite binary representation. [See if you can find this representation.] If you were thinking that base 10 numbers would solve this inaccuracy, note that 1/3 = 0.333... has an infinite representation in decimal (and binary for that matter), but has finite representation in base 3. [There will always be numbers that have an infinite representation in any chosen positional system; the choice of base defines which do and which don't.] Realizing that real numbers can be only represented approximately is quite important, and underlies the entire field of numerical analysis, which seeks to predict the numerical accuracy of any computation.

Exercise 5.2.2 (Solution on p. 221.)
What are the largest and smallest numbers that can be represented in 32-bit floating point? In 64-bit floating point that has sixteen bits allocated to the exponent? Note that both exponent and mantissa require a sign bit.
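The exact-versus-inexact distinction above can be demonstrated directly. This snippet is not from the text; it uses Python's standard library to expose the m 2^e decomposition (via `math.frexp`, which returns a mantissa in [0.5, 1)) and the binary fraction a double-precision machine actually stores for 2.6.

```python
from fractions import Fraction
import math

# 2.5 = 0.625 * 2**2 has an exact binary representation...
m, e = math.frexp(2.5)    # frexp returns (m, e) with x = m * 2**e and 0.5 <= m < 1
print(m, e)               # 0.625 2

# ...while 2.6 does not: the stored double is a nearby binary fraction.
print(Fraction(2.6))                       # the exact rational value actually stored
print(Fraction(2.6) == Fraction(13, 5))    # False: 13/5 is not representable in binary
```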

5.2.3 Computer Arithmetic and Logic

The binary addition and multiplication tables are

0 + 0 = 0 ,  0 + 1 = 1 ,  1 + 0 = 1 ,  1 + 1 = 10
0 × 0 = 0 ,  0 × 1 = 0 ,  1 × 0 = 0 ,  1 × 1 = 1    (5.4)

Note that if carries are ignored, subtraction of two single-digit binary numbers yields the same bit as addition. [A carry means that a computation performed at a given position affects other positions as well. 1 + 1 = 10 is an example of a computation that involves a carry.] Computers use high and low voltage values to express a bit, and an array of such voltages express numbers akin to positional notation. Logic circuits perform arithmetic operations.

Exercise 5.2.3 (Solution on p. 221.)
Add twenty-five and seven in base 2. Note the carries that might occur. Why is the result "nice"?

The variables of logic indicate truth or falsehood. A ∩ B, the AND of A and B, represents a statement that both A and B must be true for the statement to be true. You use this kind of statement to tell search engines that you want to restrict hits to cases where both of the events A and B occur. A ∪ B, the OR of A and B, yields a value of truth if either is true. Note that if we represent truth by a "1" and falsehood by a "0," binary multiplication corresponds to AND, and addition (ignoring carries) corresponds to XOR, the exclusive or operator, which equals the union A ∪ B with the intersection A ∩ B excluded. This fact makes an integer-based computational device much more powerful than might be apparent. More importantly, any computer using base-2 representations and arithmetic can also easily evaluate logical statements. The Irish mathematician George Boole discovered this equivalence in the mid-nineteenth century. It laid the foundation for what we now call Boolean algebra, which expresses as equations logical statements.

5.3 The Sampling Theorem

5.3.1 Analog-to-Digital Conversion

Because of the way computers are organized, a signal must be represented by a finite number of bytes. This restriction means that both the time axis and the amplitude axis must be quantized: They must each be a multiple of the integers. [We assume that we do not use floating-point A/D converters.] Quite surprisingly, the Sampling Theorem allows us to quantize the time axis without error for some signals. The signals that can be sampled without introducing error are interesting, and as described in the next section, we can make a signal "samplable" by filtering. In contrast, no one has found a way of performing the amplitude quantization step without introducing an unrecoverable error. Thus, a signal's value can no longer be any real number. Signals processed by digital computers must be discrete-valued: their values must be proportional to the integers. Consequently, analog-to-digital conversion introduces error.
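Boole's equivalence between arithmetic and logic can be verified exhaustively on single bits. This check is not from the text; it simply confirms that, with 1 for truth and 0 for falsehood, binary multiplication behaves as AND and carry-free binary addition behaves as XOR.

```python
# Truth-table check: multiplication <-> AND, addition without carries <-> XOR.
for a in (0, 1):
    for b in (0, 1):
        assert (a * b) == (a & b)          # binary multiplication is AND
        assert ((a + b) % 2) == (a ^ b)    # addition mod 2 (carries ignored) is XOR
print("multiplication matches AND; carry-free addition matches XOR")
```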

5.3.2 The Sampling Theorem

Digital transmission of information and digital signal processing all require signals to first be "acquired" by a computer. One of the most amazing and useful results in electrical engineering is that signals can be converted from a function of time into a sequence of numbers without error: We can convert the numbers back into the signal with (theoretically) no error. Harold Nyquist, a Bell Laboratories engineer, first derived this result, known as the Sampling Theorem, in the 1920s. It found no real application back then. Claude Shannon, also at Bell Laboratories, revived the result once computers were made public after World War II.

The sampled version of the analog signal s(t) is s(nTs), with Ts known as the sampling interval. Clearly, the value of the original signal at the sampling times is preserved; the issue is how the signal values between the samples can be reconstructed, since they are lost in the sampling process. To characterize sampling, we approximate it as the product x(t) = s(t) pTs(t), with pTs(t) being the periodic pulse signal. The resulting signal, as shown in Figure 5.3 (Sampled Signal), has nonzero values only during the time intervals (nTs − ∆/2, nTs + ∆/2), n ∈ {. . . , −1, 0, 1, . . .}.

Figure 5.3 (Sampled Signal): The waveform of an example signal is shown in the top plot and its sampled version s(t)pTs(t) in the bottom.

For our purposes here, we center the periodic pulse signal about the origin so that its Fourier series coefficients are real (the signal is even):

pTs(t) = Σ_{k=−∞}^{∞} c_k e^{j2πkt/Ts}    (5.5)

c_k = sin(πk∆/Ts) / (πk)    (5.6)

If the properties of s(t) and the periodic pulse signal are chosen properly, we can recover s(t) from x(t) by filtering.
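The Fourier series coefficients of the centered pulse train can be computed numerically. This sketch is not from the text; it assumes a unit-amplitude pulse, and handles the k = 0 coefficient by its limiting value ∆/Ts.

```python
import math

def pulse_coeff(k, delta, Ts):
    """Fourier series coefficient c_k of the centered periodic pulse signal,
    following c_k = sin(pi k delta / Ts) / (pi k) for a unit-amplitude pulse."""
    if k == 0:
        return delta / Ts   # limit of sin(pi k delta/Ts)/(pi k) as k -> 0
    return math.sin(math.pi * k * delta / Ts) / (math.pi * k)

Ts, delta = 1.0, 0.1
print([round(pulse_coeff(k, delta, Ts), 4) for k in range(4)])
# c_0 equals delta/Ts = 0.1; the coefficients are real because the pulse is centered
```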

To understand how signal values between the samples can be "filled in," we need to calculate the sampled signal's spectrum. Using the Fourier series representation of the periodic sampling signal,

x(t) = Σ_{k=−∞}^{∞} c_k e^{j2πkt/Ts} s(t)    (5.7)

Considering each term in the sum separately, we need to know the spectrum of the product of the complex exponential and the signal. Evaluating this transform directly is quite easy:

∫_{−∞}^{∞} s(t) e^{j2πkt/Ts} e^{−j2πft} dt = ∫_{−∞}^{∞} s(t) e^{−j2π(f − k/Ts)t} dt = S(f − k/Ts)    (5.8)

Thus, the spectrum of the sampled signal consists of weighted (by the coefficients c_k) and delayed versions of the signal's spectrum (Figure 5.4: aliasing):

X(f) = Σ_{k=−∞}^{∞} c_k S(f − k/Ts)    (5.9)

In general, the terms in this sum overlap each other in the frequency domain, rendering recovery of the original signal impossible. This unpleasant phenomenon is known as aliasing. If the sampling interval Ts is chosen too large relative to the bandwidth W, aliasing will occur.

Figure 5.4 (aliasing): The spectrum of some bandlimited (to W Hz) signal is shown in the top plot. When the sampling interval is chosen sufficiently small (1/Ts > 2W, middle plot), the component spectra c_k S(f − k/Ts) do not overlap; when 1/Ts < 2W (bottom plot), they do, and aliasing occurs. Note that if the signal were not bandlimited, the component spectra would always overlap.

Aliasing will not occur if we satisfy two conditions:
• The signal s(t) is bandlimited (has power in a restricted frequency range) to W Hz, and

• the sampling interval Ts is small enough so that the individual components in the sum do not overlap: Ts < 1/(2W).

These two conditions ensure the ability to recover a bandlimited signal from its sampled version: We thus have the Sampling Theorem.

Exercise 5.3.1 (Solution on p. 221.)
What is the simplest bandlimited signal? Using this signal, convince yourself that less than two samples/period will not suffice to specify it. If the sampling rate 1/Ts is not high enough, what signal would your resulting undersampled signal become?

The frequency 1/(2Ts), known today as the Nyquist frequency and the Shannon sampling frequency, corresponds to the highest frequency at which a signal can contain energy and remain compatible with the Sampling Theorem. High-quality sampling systems ensure that no aliasing occurs by unceremoniously lowpass filtering the signal (cutoff frequency being slightly lower than the Nyquist frequency) before sampling. Such systems therefore vary the anti-aliasing filter's cutoff frequency as the sampling rate varies. Because such quality features cost money, many sound cards do not have anti-aliasing filters or, for that matter, post-sampling filters. They sample at high frequencies, 44.1 kHz for example, and hope the signal contains no frequencies above the Nyquist frequency (22.05 kHz in our example). If, however, the signal contains frequencies beyond the sound card's Nyquist frequency, the resulting aliasing can be impossible to remove.

Exercise 5.3.2 (Solution on p. 221.)
To gain a better appreciation of aliasing, sketch the spectrum of a sampled square wave. For simplicity consider only the spectral repetitions centered at −1/Ts, 0, and 1/Ts. Let the sampling interval Ts be 1; consider two values for the square wave's period: 3.5 and 4. Note in particular where the spectral lines go as the period decreases; some will move to the left and some to the right. What property characterizes the ones going the same direction?

If we satisfy the Sampling Theorem's conditions, the signal will change only slightly during each pulse. As we narrow the pulse, making ∆ smaller and smaller, the nonzero values of the signal s(t)pTs(t) will simply be s(nTs), the signal's samples. If indeed the Nyquist frequency equals the signal's highest frequency, at least two samples will occur within the period of the signal's highest frequency sinusoid. In these ways, the sampling signal captures the sampled signal's temporal variations in a way that leaves all the original signal's structure intact.

Exercise 5.3.3 (Solution on p. 221.)
The Sampling Theorem (as stated) does not mention the pulse width ∆. What is the effect of this parameter on our ability to recover a signal from its samples (assuming the Sampling Theorem's two conditions are met)?

Sampling is only the first phase of acquiring data into a computer: Computational processing further requires that the samples be quantized: analog values are converted into digital (Section 1.2.2: Digital Signals) form. In short, we will have performed analog-to-digital (A/D) conversion.

5.4 Amplitude Quantization

The Sampling Theorem says that if we sample a bandlimited signal s(t) fast enough, it can be recovered without error from its samples s(nTs), n ∈ {. . . , −1, 0, 1, . . .}.
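Aliasing can be demonstrated concretely: two sinusoids whose frequencies differ by exactly the sampling rate 1/Ts produce identical samples, so nothing done after sampling can tell them apart. This demonstration is not from the text; the 1 kHz sampling rate and 100 Hz signal are assumed values chosen for illustration.

```python
import math

Ts = 1e-3            # assumed sampling interval: 1 kHz sampling rate
f0 = 100.0           # a 100 Hz sinusoid, well below the 500 Hz Nyquist frequency
f1 = f0 + 1 / Ts     # a 1100 Hz sinusoid: same frequency plus the sampling rate

samples0 = [math.cos(2 * math.pi * f0 * n * Ts) for n in range(8)]
samples1 = [math.cos(2 * math.pi * f1 * n * Ts) for n in range(8)]

# The two analog signals differ, yet their samples coincide: f1 aliases onto f0.
print(all(abs(a - b) < 1e-9 for a, b in zip(samples0, samples1)))   # True
```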

A phenomenon reminiscent of the errors incurred in representing numbers on a computer prevents signal amplitudes from being converted with no error into a binary number representation. In analog-to-digital conversion, the signal is assumed to lie within a predefined range. Assuming we can scale the signal without affecting the information it expresses, we'll define this range to be [−1, 1]. Furthermore, the A/D converter assigns amplitude values in this range to a set of integers. A B-bit converter produces one of the integers {0, 1, . . . , 2^B − 1} for each sampled input. Figure 5.5 shows how a three-bit A/D converter assigns input values to the integers. We define a quantization interval to be the range of values assigned to the same integer. Thus, for our example three-bit A/D converter, the quantization interval ∆ is 0.25; in general, it is 2/2^B.

Exercise 5.4.1 (Solution on p. 221.)
Recalling the plot of average daily highs in this frequency domain problem (Problem 4.5), why is this plot so jagged? Interpret this effect in terms of analog-to-digital conversion.

Because values lying anywhere within a quantization interval are assigned the same value for computer processing, the original amplitude value cannot be recovered without error. Typically, the D/A converter, the device that converts integers to amplitudes, assigns an amplitude equal to the value lying halfway in the quantization interval. The integer 6 would be assigned to the amplitude 0.625 in this scheme. The error introduced by converting a signal from analog to digital form by sampling and amplitude quantization then back again would be half the quantization interval for each amplitude value. Thus, the so-called A/D error equals half the width of a quantization interval: 1/2^B. As we have fixed the input-amplitude range, the more bits available in the A/D converter, the smaller the quantization error.

Figure 5.5: A three-bit A/D converter assigns voltage in the range [−1, 1] to one of eight integers between 0 and 7. For example, all inputs having values lying between 0.5 and 0.75 are assigned the integer value six and, upon conversion back to an analog value, they all become 0.625. The width of a single quantization interval ∆ equals 2/2^B, where B is the number of bits used in the A/D conversion process (3 in the case depicted here). The bottom panels show a signal going through analog-to-digital conversion: first it is sampled (a), then amplitude-quantized to three bits (b). Note how the sampled signal waveform becomes distorted after amplitude quantization. For example, the two signal values between 0.5 and 0.75 become 0.625. This distortion is irreversible; it can be reduced (but not eliminated) by using more bits in the A/D converter.
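The three-bit converter of Figure 5.5 can be modeled in a few lines. This sketch is not from the text; the function names `adc` and `dac` are illustrative, and the endpoint v = 1 is clamped into the top interval so every input in [−1, 1] maps to an integer 0 . . . 7.

```python
def adc(v, B=3):
    """Map v in [-1, 1] to one of the integers 0 .. 2**B - 1 (uniform quantizer)."""
    delta = 2 / 2**B                   # quantization interval width, 0.25 for B = 3
    i = int((v + 1) / delta)           # index of the interval containing v
    return min(max(i, 0), 2**B - 1)    # clamp v = 1 into the top interval

def dac(i, B=3):
    """Assign the amplitude lying halfway in the quantization interval."""
    delta = 2 / 2**B
    return -1 + (i + 0.5) * delta

print(adc(0.6))        # 6: every value in [0.5, 0.75) maps to integer six
print(dac(adc(0.6)))   # 0.625: the halfway point of that interval
```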

To analyze the amplitude quantization error more deeply, we need to compute the signal-to-noise ratio, which equals the ratio of the signal power and the quantization error power. Assuming the signal is a sinusoid, the signal power is the square of the rms amplitude: power(s) = (1/√2)^2 = 1/2. The illustration (Figure 5.6) details a single quantization interval, along with a typical signal value before amplitude quantization, s(nTs), and after, Q(s(nTs)); ε denotes the error thus incurred.

Figure 5.6: A single quantization interval is shown, along with a typical signal's value before amplitude quantization, s(nTs), and after, Q(s(nTs)). Its width is ∆, and the quantization error is denoted by ε.

To find the power in the quantization error, we note that no matter into which quantization interval the signal's value falls, the error will have the same characteristics. To calculate the rms value, we must square the error and average it over the interval:

rms(ε) = sqrt( (1/∆) ∫_{−∆/2}^{∆/2} ε^2 dε ) = (∆^2/12)^{1/2}    (5.10)

Since the quantization interval width for a B-bit converter equals 2/2^B = 2^{−(B−1)}, we find that the signal-to-noise ratio for the analog-to-digital conversion process equals

SNR = (1/2) / (2^{−2(B−1)}/12) = (3/2) 2^{2B} , which in decibels is 6B + 10 log10 1.5 dB    (5.11)

Thus, every bit increase in the A/D converter yields a 6 dB increase in the signal-to-noise ratio. The constant term 10 log10 1.5 equals 1.76.

Exercise 5.4.2 (Solution on p. 221.)
This derivation assumed the signal's amplitude lay in the range [−1, 1]. What would the amplitude quantization signal-to-noise ratio be if it lay in the range [−A, A]?

Exercise 5.4.3 (Solution on p. 222.)
How many bits would be required in the A/D converter to ensure that the maximum amplitude quantization error was less than 60 db smaller than the signal's peak value?

Exercise 5.4.4 (Solution on p. 222.)
Music on a CD is stored to 16-bit accuracy. To what signal-to-noise ratio does this correspond?

Once we have acquired signals with an A/D converter, we can process them using digital hardware or software. It can be shown that if the computer processing is linear, the result of sampling, computer processing, and unsampling is equivalent to some analog linear system. Why go to all the bother if the same function can be accomplished using analog techniques? Knowing when digital processing excels and when it does not is an important issue.
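The 6-dB-per-bit rule can be evaluated directly. This helper is not from the text; it implements the text's approximate decibel formula 6B + 10 log10 1.5 for a full-scale sinusoid through a B-bit converter.

```python
import math

def adc_snr_db(B):
    """Approximate quantization SNR in dB for a full-scale sinusoid
    through a B-bit converter: 6B + 10*log10(1.5)."""
    return 6 * B + 10 * math.log10(1.5)

print(round(adc_snr_db(3), 2))    # 19.76: the three-bit converter of the example
print(round(adc_snr_db(16), 2))   # 97.76: 16-bit CD audio
```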

5.5 Discrete-Time Signals and Systems

[This content is available online at <http://cnx.org/content/m10342/2.15/>.]

Mathematically, analog signals are functions having as their independent variables continuous quantities, such as space and time. Discrete-time signals are functions defined on the integers; they are sequences. As with analog signals, we seek ways of decomposing discrete-time signals into simpler components. Because this approach leads to a better understanding of signal structure, we can exploit that structure to represent information (create ways of representing information with signals) and to extract information (retrieve the information thus represented). For symbolic-valued signals, the approach is different: We develop a common representation of all symbolic-valued signals so that we can embody the information they contain in a unified way. From an information representation perspective, the most important issue becomes, for both real-valued and symbolic-valued signals, efficiency: what is the most parsimonious and compact way to represent information so that it can be extracted later?

5.5.1 Real- and Complex-valued Signals

A discrete-time signal is represented symbolically as s(n), where n = {. . . , −1, 0, 1, . . .}. We usually draw discrete-time signals as stem plots to emphasize the fact they are functions defined only on the integers. We can delay a discrete-time signal by an integer just as with analog ones. A signal delayed by m samples has the expression s(n − m).

Figure 5.7 (Cosine): The discrete-time cosine signal is plotted as a stem plot. Can you find the formula for this signal?

5.5.2 Complex Exponentials

The most important signal is, of course, the complex exponential sequence:

s(n) = e^{j2πfn}    (5.12)

Note that the frequency variable f is dimensionless and that adding an integer to the frequency of the discrete-time complex exponential has no effect on the signal's value:

e^{j2π(f+m)n} = e^{j2πfn} e^{j2πmn} = e^{j2πfn}    (5.13)

This derivation follows because the complex exponential evaluated at an integer multiple of 2π equals one. Thus, we need only consider frequency to have a value in some unit-length interval.
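The frequency-periodicity identity above is easy to confirm numerically. This check is not from the text; the values f = 0.3 and n = 5 are arbitrary choices.

```python
import cmath

f, n = 0.3, 5   # arbitrary dimensionless frequency and integer time index
z1 = cmath.exp(2j * cmath.pi * f * n)
z2 = cmath.exp(2j * cmath.pi * (f + 1) * n)   # add an integer to the frequency

# e^{j 2 pi m n} = 1 for integer m and n, so the sequence value is unchanged:
print(abs(z1 - z2) < 1e-9)   # True
```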

5.5.3 Sinusoids

Discrete-time sinusoids have the obvious form s(n) = A cos(2πfn + φ). As opposed to analog complex exponentials and sinusoids, which can have their frequencies be any real value, frequencies of their discrete-time counterparts yield unique waveforms only when f lies in the interval (−1/2, 1/2]. This choice of frequency interval is arbitrary; we can also choose the frequency to lie in the interval [0, 1). How to choose a unit-length interval for a sinusoid's frequency will become evident later.

5.5.4 Unit Sample

The second-most important discrete-time signal is the unit sample, which is defined to be

δ(n) = 1 if n = 0 ; 0 otherwise    (5.14)

Figure 5.8 (Unit sample): The unit sample.

Examination of a discrete-time signal's plot, like that of the cosine signal shown in Figure 5.7 (Cosine), reveals that all signals consist of a sequence of delayed and scaled unit samples. Because the value of a sequence at each integer m is denoted by s(m), and the unit sample delayed to occur at m is written δ(n − m), we can decompose any signal as a sum of unit samples delayed to the appropriate location and scaled by the signal value:

s(n) = Σ_{m=−∞}^{∞} s(m) δ(n − m)    (5.15)

This kind of decomposition is unique to discrete-time signals, and will prove useful subsequently.

5.5.5 Unit Step

The unit step in discrete-time is well-defined at the origin, as opposed to the situation with analog signals:

u(n) = 1 if n ≥ 0 ; 0 if n < 0    (5.16)

5.5.6 Symbolic Signals

An interesting aspect of discrete-time signals is that their values do not need to be real numbers. We do have real-valued discrete-time signals like the sinusoid, but we also have signals that denote the sequence of characters typed on the keyboard. Such characters certainly aren't real numbers, and as a collection of possible signal values, they have little mathematical structure other than that they are members of a set. More formally, each element of the symbolic-valued signal s(n) takes on one of the values {a1, . . . , aK} which comprise the alphabet A. This technical terminology does not mean we restrict symbols to being members of the English or Greek alphabet. They could represent keyboard characters, bytes (8-bit quantities), integers that convey daily temperature, etc.

5.5.7 Discrete-Time Systems

Discrete-time systems can act on discrete-time signals in ways similar to those found in analog signals and systems. Because of the role of software in discrete-time systems, many more different systems can be envisioned and "constructed" with programs than can be with analog signals. In fact, a special class of analog signals can be converted into discrete-time signals, processed with software, and converted back into an analog signal, all without the incursion of error. For such signals, systems can be easily produced in software, with equivalent analog realizations difficult, if not impossible, to design. Whether controlled by software or not, discrete-time systems are ultimately constructed from digital circuits, which consist entirely of analog circuit elements. Furthermore, the transmission and reception of discrete-time signals, like e-mail, is accomplished with analog signals and systems. Understanding how discrete-time and analog signals and systems intertwine is perhaps the main goal of this course.

5.6 Discrete-Time Fourier Transform (DTFT)

The Fourier transform of the discrete-time signal s(n) is defined to be

S(e^{j2πf}) = Σ_{n=−∞}^{∞} s(n) e^{−j2πfn}    (5.17)

Frequency here has no units. As should be expected, this definition is linear, with the transform of a sum of signals equaling the sum of their transforms. Real-valued signals have conjugate-symmetric spectra: S(e^{−j2πf}) = S(e^{j2πf})*.

Exercise 5.6.1 (Solution on p. 222.)
A special property of the discrete-time Fourier transform is that it is periodic with period one: S(e^{j2π(f+1)}) = S(e^{j2πf}). Derive this property from the definition of the DTFT.

Because of this periodicity, we need only plot the spectrum over one period to understand completely the spectrum's structure; typically, we plot the spectrum over the frequency range [−1/2, 1/2]. When the signal is real-valued, we can further simplify our plotting chores by showing the spectrum only over [0, 1/2]; the spectrum at negative frequencies can be derived from positive-frequency spectral values.

When we obtain the discrete-time signal via sampling an analog signal, the Nyquist frequency (p. 176) corresponds to the discrete-time frequency 1/2. To show this, note that a sinusoid having a frequency equal to the Nyquist frequency 1/(2Ts) has a sampled waveform that equals cos(2π × (1/(2Ts)) × nTs) = cos(πn) = (−1)^n. The exponential in the DTFT at frequency 1/2 equals e^{−j2πn/2} = e^{−jπn} = (−1)^n, meaning that discrete-time frequency equals analog frequency multiplied by the sampling interval:

fD = fA Ts    (5.18)

fD and fA represent discrete-time and analog frequency variables, respectively. The aliasing figure (Figure 5.4: aliasing) provides another way of deriving this result. As the duration of each pulse in the periodic sampling signal pTs(t) narrows, the amplitudes of the signal's spectral repetitions, which are governed by the Fourier series coefficients of pTs(t), become increasingly equal. Examination of the periodic pulse signal reveals that as ∆ decreases, the value of c0, the largest Fourier coefficient, decreases to zero: |c0| = A∆/Ts. Thus, to maintain a mathematically viable Sampling Theorem, the amplitude A must increase as 1/∆, becoming infinitely large as the pulse duration decreases. Practical systems use a small value of ∆, say 0.1·Ts, and use amplifiers to rescale the signal. Thus, the sampled signal's spectrum becomes periodic with period 1/Ts, and the Nyquist frequency 1/(2Ts) corresponds to the frequency 1/2.

Example 5.1
Let's compute the discrete-time Fourier transform of the exponentially decaying sequence s(n) = a^n u(n), where u(n) is the unit-step sequence. Simply plugging the signal's expression into the Fourier transform formula,

S(e^{j2πf}) = Σ_{n=−∞}^{∞} a^n u(n) e^{−j2πfn} = Σ_{n=0}^{∞} (a e^{−j2πf})^n    (5.19)

This sum is a special case of the geometric series:

Σ_{n=0}^{∞} α^n = 1/(1 − α) ,  |α| < 1    (5.20)

Thus, as long as |a| < 1, we have our Fourier transform:

S(e^{j2πf}) = 1 / (1 − a e^{−j2πf})    (5.21)

Using Euler's relation, we can express the magnitude and phase of this spectrum:

|S(e^{j2πf})| = 1 / sqrt( (1 − a cos(2πf))^2 + a^2 sin^2(2πf) )    (5.22)

∠S(e^{j2πf}) = −tan^{−1}( a sin(2πf) / (1 − a cos(2πf)) )    (5.23)

No matter what value of a we choose, the above formulae clearly demonstrate the periodic nature of the spectra of discrete-time signals. Figure 5.9 (Spectrum of exponential signal) shows indeed that the spectrum is a periodic function. We need only consider the spectrum between −1/2 and 1/2 to unambiguously define it. When a > 0, we have a lowpass spectrum (the spectrum diminishes as frequency increases from 0 to 1/2), with increasing a leading to a greater low-frequency content; for a < 0, we have a highpass spectrum (Figure 5.10: Spectra of exponential signals).
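The closed-form transform of the decaying exponential can be checked against a truncated version of the defining sum. This check is not from the text; with a = 0.5 the series converges so quickly that 200 terms agree with the closed form to machine precision.

```python
import cmath

a, f = 0.5, 0.2   # |a| < 1 so the geometric series converges; f is arbitrary
closed = 1 / (1 - a * cmath.exp(-2j * cmath.pi * f))
partial = sum(a**n * cmath.exp(-2j * cmath.pi * f * n) for n in range(200))

print(abs(closed - partial) < 1e-12)   # True: the truncated DTFT sum matches
```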

Figure 5.9 (Spectrum of exponential signal): The spectrum of the exponential signal (a = 0.5) is shown over the frequency range [−2, 2], clearly demonstrating the periodicity of all discrete-time spectra. The angle has units of degrees. [The figure plots the magnitude |S(e^{j2πf})| and the angle ∠S(e^{j2πf}) versus frequency f.]

Spectra of exponential signals

Figure 5.10: The spectra of several exponential signals are shown (a = 0.5, a = 0.9, and a = -0.5), with spectral magnitude in dB and angle in degrees.

Exercise 5.6.1 (Solution on p. 222.)
What is the apparent relationship between the spectra for a = 0.5 and a = -0.5?

Example 5.2
Analogous to the analog pulse signal, let's find the spectrum of the length-N pulse sequence.

s(n) = 1 if 0 ≤ n ≤ N-1, 0 otherwise    (5.24)

The Fourier transform of this sequence has the form of a truncated geometric series. For the so-called finite geometric series, we know that

Σ_{n=n0}^{N+n0-1} α^n = α^{n0} (1 - α^N) / (1 - α)    (5.25)

for all values of α.

Exercise 5.6.2 (Solution on p. 222.)
Derive this formula for the finite geometric series sum. The "trick" is to consider the difference between the series' sum and the sum of the series multiplied by α.

S(e^{j2πf}) = Σ_{n=0}^{N-1} e^{-j2πfn}    (5.26)

Applying this result yields (Figure 5.11: Spectrum of length-ten pulse)

S(e^{j2πf}) = (1 - e^{-j2πfN}) / (1 - e^{-j2πf}) = e^{-jπf(N-1)} sin(πfN) / sin(πf)    (5.27)
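The finite geometric series result can be verified numerically. A small check (Python; the function names are mine) that the direct sum over n = 0, ..., N-1 matches the e^{-jπf(N-1)} sin(πfN)/sin(πf) form:

```python
import cmath
import math

def pulse_dtft_sum(N, f):
    """Direct evaluation of S(e^{j2pi f}) = sum_{n=0}^{N-1} e^{-j2pi f n}."""
    return sum(cmath.exp(-2j * cmath.pi * f * n) for n in range(N))

def pulse_dtft_closed(N, f):
    """Closed form e^{-j pi f (N-1)} sin(pi f N)/sin(pi f), for non-integer f."""
    return (cmath.exp(-1j * cmath.pi * f * (N - 1))
            * math.sin(math.pi * f * N) / math.sin(math.pi * f))

# Length-ten pulse, an arbitrary frequency away from the sin(pi f) = 0 points.
N, f = 10, 0.13
assert abs(pulse_dtft_sum(N, f) - pulse_dtft_closed(N, f)) < 1e-9
```

The ratio sin(πfN)/sin(πf) is the dsinc shape discussed next; its zeros at multiples of 1/N produce the ripples visible in Figure 5.11.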

The ratio of sine functions has the generic form of sin(Nx)/sin(x), which is known as the discrete-time sinc function, dsinc(x). Thus, our transform can be concisely expressed as S(e^{j2πf}) = e^{-jπf(N-1)} dsinc(πf). The discrete-time pulse's spectrum contains many ripples, the number of which increase with N, the pulse's duration.

Spectrum of length-ten pulse

Figure 5.11: The spectrum of a length-ten pulse is shown. Can you explain the rather complicated appearance of the phase?

The inverse discrete-time Fourier transform is easily derived from the following relationship:

∫_{-1/2}^{1/2} e^{-j2πfm} e^{j2πfn} df = 1 if m = n, 0 if m ≠ n
                                       = δ(m - n)    (5.28)

Therefore, we find that

∫_{-1/2}^{1/2} S(e^{j2πf}) e^{j2πfn} df = ∫_{-1/2}^{1/2} Σ_m s(m) e^{-j2πfm} e^{j2πfn} df = Σ_m s(m) ∫_{-1/2}^{1/2} e^{-j2πf(m-n)} df = s(n)    (5.29)

The Fourier transform pairs in discrete-time are

S(e^{j2πf}) = Σ_{n=-∞}^{∞} s(n) e^{-j2πfn}
s(n) = ∫_{-1/2}^{1/2} S(e^{j2πf}) e^{j2πfn} df    (5.30)

The properties of the discrete-time Fourier transform mirror those of the analog Fourier transform. The DTFT properties table (see footnote 17) shows similarities and differences. One important common property is Parseval's Theorem.

Σ_{n=-∞}^{∞} |s(n)|² = ∫_{-1/2}^{1/2} |S(e^{j2πf})|² df    (5.31)

To show this important property, we simply substitute the Fourier transform expression into the frequency-domain expression for power.

∫_{-1/2}^{1/2} |S(e^{j2πf})|² df = ∫_{-1/2}^{1/2} (Σ_n s(n) e^{-j2πfn}) (Σ_m s(m) e^{-j2πfm})* df = Σ_{n,m} s(n) s(m)* ∫_{-1/2}^{1/2} e^{j2πf(m-n)} df    (5.32)

Using the orthogonality relation (5.28), the integral equals δ(m - n), where δ(n) is the unit sample (Figure 5.8: Unit sample). Thus, the double sum collapses into a single sum because nonzero values occur only when n = m, giving Parseval's Theorem as a result. We term Σ_n s²(n) the energy in the discrete-time signal s(n) in spite of the fact that discrete-time signals don't consume (or produce, for that matter) energy. This terminology is a carry-over from the analog world.

Exercise 5.6.3 (Solution on p. 222.)
Suppose we obtained our discrete-time signal from values of the product s(t) pTs(t), where the duration of the component pulses in pTs(t) is Δ. How is the discrete-time signal energy related to the total energy contained in s(t)? Assume the signal is bandlimited and that the sampling rate was chosen appropriate to the Sampling Theorem's conditions.

5.7 Discrete Fourier Transforms (DFT)

The discrete-time Fourier transform (and the continuous-time transform as well) can be evaluated when we have an analytic expression for the signal. Suppose we just have a signal, such as the speech signal used in the previous chapter, for which there is no formula. How then would you compute the spectrum? For example, how did we compute a spectrogram such as the one shown in the speech signal example (Figure 4.17: spectrogram)? The Discrete Fourier Transform (DFT) allows the computation of spectra from discrete-time data. While in discrete-time we can exactly calculate spectra, for analog signals no similar exact spectrum computation exists. For analog-signal spectra, one must build special devices, which turn out in most cases to consist of A/D converters and discrete-time computations. Certainly discrete-time spectral analysis is more flexible than continuous-time spectral analysis.

The formula for the DTFT (5.17) is a sum, which conceptually can be easily computed save for two issues. First, the sum extends over the signal's duration, which must be finite to compute the signal's spectrum. It is exceedingly difficult to store an infinite-length signal in any case, so we'll assume that the signal extends over [0, N-1]. Second, and subtler than the signal duration issue, is the fact that the frequency variable is continuous: It may only need to span one period, like [-1/2, 1/2] or [0, 1], but the DTFT formula as it stands requires evaluating the spectra at all frequencies within a period. Let's compute the spectrum at a few frequencies; the most obvious ones are the equally spaced ones f = k/K, k ∈ {0, ..., K-1}.

17 "Properties of the DTFT" <http://cnx.org/content/m10249/2.28/>
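Parseval's Theorem can be checked for a short signal by approximating the frequency-domain integral with a Riemann sum over one period. A sketch (Python; names are mine, not the text's):

```python
import cmath

def dtft(s, f):
    """DTFT of a finite signal s (supported on indices 0..len(s)-1) at frequency f."""
    return sum(x * cmath.exp(-2j * cmath.pi * f * n) for n, x in enumerate(s))

s = [1.0, -2.0, 3.0, 0.5]
time_energy = sum(x * x for x in s)          # sum of s^2(n): 14.25

# Riemann-sum approximation of the integral of |S(e^{j2pi f})|^2 over [-1/2, 1/2].
M = 4096
freq_energy = sum(abs(dtft(s, -0.5 + k / M)) ** 2 for k in range(M)) / M

assert abs(time_energy - freq_energy) < 1e-6
```

Because |S(e^{j2πf})|² here is a trigonometric polynomial of low order, the uniform Riemann sum actually integrates it essentially exactly, so the two energies agree to machine precision.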

We thus define the discrete Fourier transform (DFT) to be

S(k) = Σ_{n=0}^{N-1} s(n) e^{-j2πnk/K}, k ∈ {0, ..., K-1}    (5.33)

Here, S(k) is shorthand for S(e^{j2πk/K}). We can compute the spectrum at as many equally spaced frequencies as we like. Note that you can think about this computationally motivated choice as sampling the spectrum; more about this interpretation later. The issue now is how many frequencies are enough to capture how the spectrum changes with frequency. One way of answering this question is determining an inverse discrete Fourier transform formula: given S(k), k = {0, ..., K-1}, how do we find s(n), n = {0, ..., N-1}? Presumably, the formula will be of the form s(n) = Σ_{k=0}^{K-1} S(k) e^{j2πnk/K}. Substituting the DFT formula in this prototype inverse transform yields

s(n) = Σ_{k=0}^{K-1} Σ_{m=0}^{N-1} s(m) e^{-j2πmk/K} e^{j2πnk/K}    (5.34)

Note that the orthogonality relation we use so often has a different character now.

Σ_{k=0}^{K-1} e^{-j2πkm/K} e^{j2πkn/K} = K if m = {n, n ± K, n ± 2K, ...}, 0 otherwise    (5.35)

We obtain nonzero value whenever the two indices differ by multiples of K. We can express this result as K Σ_l δ(m - n - lK). Thus, our formula becomes

s(n) = Σ_{m=0}^{N-1} s(m) K Σ_{l=-∞}^{∞} δ(m - n - lK)    (5.36)

The integers n and m both range over {0, ..., N-1}. To have an inverse transform, we need the sum to be a single unit sample for m, n in this range. If it did not, then s(n) would equal a sum of values, and we would not have a valid transform: Once going into the frequency domain, we could not get back unambiguously! Clearly, the term l = 0 always provides a unit sample (we'll take care of the factor of K soon). If we evaluate the spectrum at fewer frequencies than the signal's duration, the term corresponding to m = n + K will also appear for some values of m, n = {0, ..., N-1}. This situation means that our prototype transform equals s(n) + s(n + K) for some values of n. The only way to eliminate this problem is to require K ≥ N: We must have at least as many frequency samples as the signal's duration. In this way, we can return from the frequency domain we entered via the DFT.

Exercise 5.7.1 (Solution on p. 222.)
When we have fewer frequency samples than the signal's duration, some discrete-time signal values equal the sum of the original signal values. Given the sampling interpretation of the spectrum, characterize this effect a different way.

Another way to understand this requirement is to use the theory of linear equations. If we write out the expression for the DFT as a set of linear equations,

s(0) + s(1) + ··· + s(N-1) = S(0)
s(0) + s(1) e^{-j2π/K} + ··· + s(N-1) e^{-j2π(N-1)/K} = S(1)    (5.37)
⋮
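The K ≥ N requirement, and the aliasing that occurs when it is violated, can be demonstrated directly from (5.33) and the prototype inverse. A sketch (Python; names are mine, and the inverse includes the 1/K normalization that the factor-of-K discussion above calls for):

```python
import cmath

def dft(s, K):
    """S(k) = sum_{n=0}^{N-1} s(n) e^{-j2pi nk/K}, k = 0..K-1  (eq. 5.33)."""
    return [sum(s[n] * cmath.exp(-2j * cmath.pi * n * k / K) for n in range(len(s)))
            for k in range(K)]

def inverse(S):
    """Prototype inverse: (1/K) sum_k S(k) e^{j2pi nk/K}."""
    K = len(S)
    return [sum(S[k] * cmath.exp(2j * cmath.pi * n * k / K) for k in range(K)).real / K
            for n in range(K)]

s = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]          # N = 6

# K = N frequency samples: the signal comes back exactly.
rec = inverse(dft(s, 6))
assert all(abs(r - x) < 1e-9 for r, x in zip(rec, s))

# K = 3 < N: each recovered value is s(n) + s(n+3), the aliasing predicted by (5.36).
rec3 = inverse(dft(s, 3))
assert all(abs(rec3[n] - (s[n] + s[n + 3])) < 1e-9 for n in range(3))
```

With K = 3 the inverse yields 5, 7, 9: each the sum of two original samples spaced K apart, so the signal cannot be recovered unambiguously.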

s(0) + s(1) e^{-j2π(K-1)/K} + ··· + s(N-1) e^{-j2π(N-1)(K-1)/K} = S(K-1)

we have K equations in N unknowns if we want to find the signal from its sampled spectrum. This requirement is impossible to fulfill if K < N; we must have K ≥ N. Our orthogonality relation essentially says that if we have a sufficient number of equations (frequency samples), the resulting set of equations can indeed be solved.

By convention, the number of DFT frequency values K is chosen to equal the signal's duration N. The discrete Fourier transform pair consists of

The Discrete Fourier Transform Pair
S(k) = Σ_{n=0}^{N-1} s(n) e^{-j2πnk/N}
s(n) = (1/N) Σ_{k=0}^{N-1} S(k) e^{j2πnk/N}    (5.38)

Example 5.3
Use this demonstration to perform DFT analysis of a signal. This media object is a LabVIEW VI. Please view or download it at <DFTanalysis.llb>

Example 5.4
Use this demonstration to synthesize a signal from a DFT sequence. This media object is a LabVIEW VI. Please view or download it at <DFT_Component_Manipulation.llb>

5.8 DFT: Computational Complexity

We now have a way of computing the spectrum for an arbitrary signal: The Discrete Fourier Transform (DFT) computes the spectrum at N equally spaced frequencies from a length-N sequence. An issue that never arises in analog "computation," like that performed by a circuit, is how much work it takes to perform the signal processing operation such as filtering. In computation, this consideration translates to the number of basic computational steps required to perform the needed processing. The number of steps, known as the complexity, becomes equivalent to how long the computation takes (how long must we wait for an answer). Complexity is not so much tied to specific computers or programming languages but to how many steps are required on any computer. Thus, a procedure's stated complexity says that the time taken will be proportional to some function of the amount of data used in the computation and the amount demanded.

To compute the complexity, consider the formula for the discrete Fourier transform. For each frequency we chose, we must multiply each signal value by a complex number and add together the results. For a real-valued signal, each real-times-complex multiplication requires two real multiplications, meaning we have 2N multiplications to perform. To add the results together, we must keep the real and imaginary parts separate. Adding N numbers requires N-1 additions. Consequently, each frequency requires 2N + 2(N-1) = 4N - 2 basic computational steps. As we have N frequencies, the total number of computations is N(4N-2).

In complexity calculations, we only worry about what happens as the data lengths increase, and take the dominant term (here the 4N² term) as reflecting how much work is involved in making the computation. As multiplicative constants don't matter since we are making a "proportional to" evaluation, we find the DFT is an O(N²) computational procedure. This notation is read "order N-squared". Thus, if we double the length of the data, we would expect the computation time to approximately quadruple.
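The operation count just derived can be tabulated. A small sketch (Python; the function name is mine) of the N(4N - 2) bookkeeping, confirming the roughly quadrupling behavior:

```python
def dft_real_ops(N):
    """Real-arithmetic operation count for an N-point DFT of a real signal:
    per frequency, 2N real multiplications plus 2(N-1) real additions,
    repeated for each of the N frequencies."""
    per_frequency = 2 * N + 2 * (N - 1)     # 4N - 2
    return N * per_frequency                # N(4N - 2)

assert dft_real_ops(8) == 8 * 30
# Doubling the data length roughly quadruples the work.
ratio = dft_real_ops(1000) / dft_real_ops(500)
assert 3.9 < ratio < 4.1
```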

Exercise 5.8.1 (Solution on p. 222.)
In making the complexity evaluation for the DFT, we assumed the data to be real. Three questions emerge. First of all, the spectra of such signals have conjugate symmetry, meaning that negative frequency components (k = N/2 + 1, ..., N - 1 in the DFT (5.33)) can be computed from the corresponding positive frequency components. Does this symmetry change the DFT's complexity? Secondly, suppose the data are complex-valued; what is the DFT's complexity now? Finally, a less important but interesting question is suppose we want K frequency values instead of N; now what is the complexity?

5.9 Fast Fourier Transform (FFT)

One wonders if the DFT can be computed faster: Does another computational procedure, an algorithm, exist that can compute the same quantity, but more efficiently? We could seek methods that reduce the constant of proportionality, but do not change the DFT's complexity O(N²). Here, we have something more dramatic in mind: Can the computations be restructured so that a smaller complexity results? In 1965, IBM researcher Jim Cooley and Princeton faculty member John Tukey developed what is now known as the Fast Fourier Transform (FFT). It is an algorithm for computing that DFT that has order O(N logN) for certain length inputs. Now when the length of data doubles, the spectral computational time will not quadruple as with the DFT algorithm; instead, it approximately doubles. Later research showed that no algorithm for computing the DFT could have a smaller complexity than the FFT. Surprisingly, historical work has shown that Gauss in the early nineteenth century developed the same algorithm, but did not publish it! After the FFT's rediscovery, not only was the computation of a signal's spectrum greatly speeded, but the fact that it is an algorithm meant that computations had flexibility not available to analog implementations.

Exercise 5.9.1 (Solution on p. 222.)
Before developing the FFT, let's try to appreciate the algorithm's impact. Suppose a short-length transform takes 1 ms. We want to calculate a transform of a signal that is 10 times longer. Compare how much longer a straightforward implementation of the DFT would take in comparison to an FFT.

To derive the FFT, we assume that the signal's duration is a power of two: N = 2^L. Consider what happens to the even-numbered and odd-numbered elements of the sequence in the DFT calculation.

S(k) = [s(0) + s(2) e^{-j2π(2k)/N} + ··· + s(N-2) e^{-j2π(N-2)k/N}] + [s(1) e^{-j2πk/N} + s(3) e^{-j2π(3k)/N} + ··· + s(N-1) e^{-j2π(N-1)k/N}]
     = [s(0) + s(2) e^{-j2πk/(N/2)} + ··· + s(N-2) e^{-j2π(N/2-1)k/(N/2)}] + [s(1) + s(3) e^{-j2πk/(N/2)} + ··· + s(N-1) e^{-j2π(N/2-1)k/(N/2)}] e^{-j2πk/N}    (5.39)

Each term in square brackets has the form of a N/2-length DFT. The first one is a DFT of the even-numbered elements, and the second of the odd-numbered elements. The first DFT is combined with the second multiplied by the complex exponential e^{-j2πk/N}. The half-length transforms are each evaluated at frequency indices k = 0, ..., N-1. Normally, the number of frequency indices in a DFT calculation range between zero and the transform length minus one. The computational advantage of the FFT comes from recognizing the periodic nature of the discrete Fourier transform. The FFT simply reuses the computations made in the half-length transforms and combines them through additions and the multiplication by e^{-j2πk/N}, which is not periodic over N/2.

21 http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Gauss.html
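The even/odd split of (5.39) can be confirmed numerically for a length-8 example. A sketch (Python; names are mine): the full DFT equals the even-indexed half-length DFT plus e^{-j2πk/N} times the odd-indexed one, with each half read out periodically (index k mod N/2).

```python
import cmath

def dft(s):
    """Direct O(N^2) DFT from the definition."""
    N = len(s)
    return [sum(s[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

s = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5, -2.0, 1.5]
N = len(s)
full = dft(s)
even = dft(s[0::2])     # DFT of the even-numbered elements (length N/2)
odd = dft(s[1::2])      # DFT of the odd-numbered elements (length N/2)

for k in range(N):
    combined = even[k % (N // 2)] + cmath.exp(-2j * cmath.pi * k / N) * odd[k % (N // 2)]
    assert abs(full[k] - combined) < 1e-9
```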

As it stands, we now compute two length-N/2 transforms (complexity 2O(N²/4)), multiply one of them by the complex exponential (complexity O(N)), and add the results (complexity O(N)). At this point, the total complexity is still dominated by the half-length DFT calculations, but the proportionality coefficient has been reduced. Now for the fun. Because N = 2^L, each of the half-length transforms can be reduced to two quarter-length transforms, each of these to two eighth-length ones, etc. This decomposition continues until we are left with length-2 transforms. This transform is quite simple, involving only additions. Thus, the first stage of the FFT has N/2 length-2 transforms (see the bottom part of Figure 5.12: Length-8 DFT decomposition). Pairs of these transforms are combined by adding one to the other multiplied by a complex exponential. Each pair requires 4 additions and 2 multiplications, giving a total number of computations equaling 6·(N/4) = 3N/2. This number of computations does not change from stage to stage. Because the number of stages, the number of times the length can be divided by two, equals log₂ N, the number of arithmetic operations equals (3N/2) log₂ N, which makes the complexity of the FFT O(N log₂ N).

Length-8 DFT decomposition

Figure 5.12: The initial decomposition of a length-8 DFT into the terms using even- and odd-indexed inputs marks the first phase of developing the FFT algorithm. Panel (a) shows two length-4 DFTs combined through the exponentials e^{-j2πk/8}; panel (b) shows the result when these half-length transforms are successively decomposed, leaving the diagram that depicts the length-8 FFT computation built from four length-2 DFTs feeding two length-4 DFTs.
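The complete decomposition amounts to the classic recursive radix-2 decimation-in-time FFT. A compact sketch (Python; this is not the text's code), checked against the O(N²) definition:

```python
import cmath

def fft(s):
    """Recursive radix-2 decimation-in-time FFT; len(s) must be a power of two."""
    N = len(s)
    if N == 1:
        return list(s)
    even = fft(s[0::2])
    odd = fft(s[1::2])
    out = [0j] * N
    for k in range(N // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + twiddle             # butterfly, top output
        out[k + N // 2] = even[k] - twiddle    # butterfly, bottom output
    return out

def dft(s):
    """Direct O(N^2) DFT for comparison."""
    N = len(s)
    return [sum(s[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

s = [1.0, -1.0, 2.0, 0.0, 0.5, 3.0, -2.0, 1.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(s), dft(s)))
```

The two lines inside the loop are exactly the butterfly discussed next: one complex multiplication (the twiddle factor) shared between an addition and a subtraction.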

Doing an example will make computational savings more obvious. Let's look at the details of a length-8 DFT. As shown on Figure 5.12 (Length-8 DFT decomposition), we first decompose the DFT into two length-4 DFTs, with the outputs added and subtracted together in pairs. Considering Figure 5.12 as the frequency index goes from 0 through 7, we recycle values from the length-4 DFTs into the final calculation because of the periodicity of the DFT output. Examining how pairs of outputs are collected together, we create the basic computational element known as a butterfly (Figure 5.13: Butterfly).

Butterfly

Figure 5.13: The basic computational element of the fast Fourier transform is the butterfly. It takes two complex numbers, represented by a and b, and forms the quantities a + b e^{-j2πk/N} and a - b e^{-j2πk/N}. Each butterfly requires one complex multiplication and two complex additions.

By considering together the computations involving common output frequencies from the two half-length DFTs, we see that the two complex multiplies are related to each other, and we can reduce our computational work even further. By further decomposing the length-4 DFTs into two length-2 DFTs and combining their outputs, we arrive at the diagram summarizing the length-8 fast Fourier transform (Figure 5.12). Although most of the complex multiplies are quite simple (multiplying by e^{-jπ/2} means swapping real and imaginary parts and changing their signs), let's count those for purposes of evaluating the complexity as full complex multiplies. We have N/2 = 4 complex multiplies and N = 8 complex additions for each stage and log₂ N = 3 stages, making the number of basic computations (3N/2) log₂ N as predicted.

Exercise 5.9.2 (Solution on p. 222.)
Note that the ordering of the input sequence in the two parts of Figure 5.12 aren't quite the same. Why not? How is the ordering determined?

Other "fast" algorithms were discovered, all of which make use of how many common factors the transform length N has. In number theory, the number of prime factors a given integer has measures how composite it is. The numbers 16 and 81 are highly composite (equaling 2⁴ and 3⁴ respectively), the number 18 is less so (2¹·3²), and 17 not at all (it's prime). In over thirty years of Fourier transform algorithm development, the original Cooley-Tukey algorithm is far and away the most frequently used. It is so computationally efficient that power-of-two transform lengths are frequently used regardless of what the actual length of the data is.

Exercise 5.9.3 (Solution on p. 222.)
Suppose the length of the signal were 500? How would you compute the spectrum of this signal using the Cooley-Tukey algorithm? What would the length N of the transform be?

5.10 Spectrograms

We know how to acquire analog signals for digital processing (pre-filtering (Section 5.3), sampling (Section 5.3), and A/D conversion (Section 5.4)) and to compute spectra of discrete-time signals (using the FFT

algorithm (Section 5.9)). Let's put these various components together to learn how the spectrogram shown in Figure 5.14 (Speech Spectrogram), which is used to analyze speech (Section 4.10), is calculated. The speech was sampled at a rate of 11.025 kHz and passed through a 16-bit A/D converter.

Point of interest: Music compact discs (CDs) encode their signals at a sampling rate of 44.1 kHz. We'll learn the rationale for this number later. The 11.025 kHz sampling rate for the speech is 1/4 of the CD sampling rate, and was the lowest available sampling rate commensurate with speech signal bandwidths available on my computer.

Exercise 5.10.1 (Solution on p. 222.)
Looking at Figure 5.14 (Speech Spectrogram), the signal lasted a little over 1.2 seconds. How long was the sampled signal (in terms of samples)? What was the datarate during the sampling process in bps (bits per second)? Assuming the computer storage is organized in terms of bytes (8-bit quantities), how many bytes of computer memory does the speech consume?

Speech Spectrogram

Figure 5.14: Spectrogram of the phrase "Rice University" (frequency axis 0 to 5000 Hz; time axis 0 to 1.2 s).
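The sort of bookkeeping this exercise asks for (sample counts, data rates, storage) is easy to script. A sketch (Python; names are mine), shown here for one second of single-channel CD-rate audio rather than the speech example, so as not to give the exercise away:

```python
def sampled_signal_footprint(duration_s, rate_hz, bits_per_sample):
    """Return (sample count, data rate in bits/s, storage in 8-bit bytes)."""
    samples = round(duration_s * rate_hz)
    bps = rate_hz * bits_per_sample
    total_bytes = samples * bits_per_sample // 8
    return samples, bps, total_bytes

# One second of single-channel audio at the CD rate (44.1 kHz, 16-bit samples):
samples, bps, nbytes = sampled_signal_footprint(1.0, 44100, 16)
assert samples == 44100
assert bps == 705600       # 44100 samples/s x 16 bits/sample
assert nbytes == 88200     # two bytes per sample
```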

The resulting discrete-time signal, shown in the bottom of Figure 5.14 (Speech Spectrogram), clearly changes its character with time. To display these spectral changes, the long signal was sectioned into frames: comparatively short, contiguous groups of samples. Conceptually, a Fourier transform of each frame is calculated using the FFT. Each frame is not so long that significant signal variations are retained within a frame, but not so short that we lose the signal's spectral character. Roughly speaking, the speech signal's spectrum is evaluated over successive time segments and stacked side by side so that the x-axis corresponds to time and the y-axis frequency, with color indicating the spectral amplitude.

An important detail emerges when we examine each framed signal (Figure 5.15: Spectrogram Hanning vs. Rectangular). At the frame's edges, the signal may change very abruptly, a feature not present in the original signal. A transform of such a segment reveals a curious oscillation in the spectrum: an artifact directly related to this sharp amplitude change. In sectioning the signal, we essentially applied a rectangular window to the signal: w(n) = 1, 0 ≤ n ≤ N-1. A better way to frame signals for spectrograms is to apply a window: Shape the signal values within a frame so that the signal decays gracefully as it nears the edges. This shaping is accomplished by multiplying the framed signal by the sequence w(n). A much more graceful window is the Hanning window; it has the cosine shape w(n) = (1/2)(1 - cos(2πn/N)). As shown in Figure 5.15, this shaping greatly reduces spurious oscillations in each frame's spectrum.

Spectrogram: Hanning vs. Rectangular

Figure 5.15: The top waveform is a segment 1024 samples long taken from the beginning of the "Rice University" phrase. Computing Figure 5.14 (Speech Spectrogram) involved creating frames, here demarked by the vertical lines, that were 256 samples long and finding the spectrum of each with a length-512 FFT. If a rectangular window is applied (corresponding to extracting a frame from the signal), oscillations appear in the spectrum (middle of bottom row). Applying a Hanning window gracefully tapers the signal toward frame edges, thereby yielding a more accurate computation of the signal's spectrum at that moment of time.

Considering the spectrum of the Hanning windowed frame, we find that the oscillations resulting from applying the rectangular window obscured a formant (the one located at a little more than half the Nyquist frequency).
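The Hanning window itself is one line of code. A sketch (Python; the function name is mine) that builds w(n) = (1/2)(1 - cos(2πn/N)) and confirms its tapering and symmetry:

```python
import math

def hanning(N):
    """Hanning window w(n) = (1/2)(1 - cos(2 pi n / N)), n = 0..N-1."""
    return [0.5 * (1 - math.cos(2 * math.pi * n / N)) for n in range(N)]

w = hanning(256)
assert w[0] == 0.0                                # tapers to zero at the left edge
assert abs(w[128] - 1.0) < 1e-12                  # peaks at the frame's center
assert all(abs(w[n] - w[256 - n]) < 1e-12 for n in range(1, 256))   # symmetric
```

Multiplying a 256-sample frame pointwise by this sequence produces the gracefully tapered segments shown in Figure 5.15.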

Exercise 5.10.2 (Solution on p. 222.)
What might be the source of these oscillations? To gain some insight, what is the length-2N discrete Fourier transform of a length-N pulse? The pulse emulates the rectangular window, and certainly has edges. Compare your answer with the length-2N transform of a length-N Hanning window.

If you examine the windowed signal sections in sequence to examine windowing's effect on signal amplitude, we see that we have managed to amplitude-modulate the signal with the periodically repeated window (Figure 5.16: Non-overlapping windows).

Non-overlapping windows

Figure 5.16: In comparison with the original speech segment shown in the upper plot, the non-overlapped Hanning windowed version shown below it is very ragged. Clearly, spectral information extracted from the bottom plot could well miss important features present in the original.

To alleviate this problem, frames are overlapped (typically by half a frame duration). This solution requires more Fourier transform calculations than needed by rectangular windowing, but the spectra are much better behaved and spectral changes are much better captured. The speech signal, such as shown in the speech spectrogram (Figure 5.14: Speech Spectrogram), is sectioned into overlapping, equal-length frames, with a Hanning window applied to each frame. The spectra of each of these is calculated, and displayed in spectrograms with frequency extending vertically, window time location running horizontally, and spectral magnitude color-coded. Figure 5.17 (Overlapping windows for computing spectrograms) illustrates these computations.
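Half-overlapped framing can be sketched directly (Python; names are mine; a full spectrogram would additionally window each frame and FFT it, as Figure 5.17 depicts):

```python
def frames_half_overlap(signal, frame_len):
    """Split a signal into frames of length frame_len with half-frame overlap."""
    hop = frame_len // 2
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

x = list(range(1024))
fr = frames_half_overlap(x, 256)
assert len(fr) == 7                      # frames start at 0, 128, 256, ..., 768
assert fr[1][0] == 128                   # consecutive frames share half their samples
assert all(len(f) == 256 for f in fr)
```

Each sample (away from the ends) lands in two frames, so no portion of the signal is suppressed by the window taper the way it is in Figure 5.16.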

org/content/col10040/1. (Solution on p. with the magnitude of the rst 257 FFT values displayed vertically. with spectral amplitude values color-coded.9> . shift-invariant systems.10. A length-512 FFT of each frame was computed. we will discover that frequency-domain implementation of systems. We begin with discussing the underlying mathematical structure of linear. but also a computationally ecient one.5/>.11 Discrete-Time Systems Exercise 5.195 Overlapping windows for computing spectrograms n FFT FFT FFT FFT FFT FFT FFT Log Spectral Magnitude f Figure 5. In discrete-time signal processing. wherein we multiply the input signal's Fourier transform by a frequency response.) N and 512 for K? Another issue is how was the length-512 transform of each length-256 windowed frame computed? 5. In fact. Frames were 256 samples long and a Hanning window was applied with a half-frame overlap. 222. and devise how software lters can be constructed.) We One of the rst analog systems we described was the amplier (Section 2.17: The original speech segment and the sequence of overlapping Hanning windows applied to it are shown in the upper portion. What is the discrete-time implementation of an amplier? Is this especially hard or easy? found that implementing an amplier was dicult in analog systems.3 Why the specic values of 256 for (Solution on p. is not only a viable alternative.1 23 When we developed analog systems. Exercise we are not limited by hardware considerations but by what can be constructed in software. Available for free at Connexions <http://cnx. 23 This content is available online at <http://cnx.11. 222. interconnecting the circuit elements provided a natural starting place for constructing useful devices. requiring an op-amp at least.2: Ampliers).

5.12 Discrete-Time Systems in the Time-Domain

A discrete-time signal s(n) is delayed by n0 samples when we write s(n - n0), with n0 > 0. Choosing n0 to be negative advances the signal along the integers. As opposed to analog delays (Section 2.6.3: Delay), discrete-time delays can only be integer valued. In the frequency domain, delaying a signal corresponds to a linear phase shift of the signal's discrete-time Fourier transform: s(n - n0) ↔ e^{-j2πfn0} S(e^{j2πf}).

Linear discrete-time systems have the superposition property.

S(a1 x1(n) + a2 x2(n)) = a1 S(x1(n)) + a2 S(x2(n))    (5.40)

A discrete-time system is called shift-invariant (analogous to time-invariant analog systems (p. 29)) if delaying the input delays the corresponding output. If S(x(n)) = y(n), then a shift-invariant system has the property

S(x(n - n0)) = y(n - n0)    (5.41)

We use the term shift-invariant to emphasize that delays can only have integer values in discrete-time, while in analog signals, delays can be arbitrarily valued.

We want to concentrate on systems that are both linear and shift-invariant. It will be these that allow us the full power of frequency-domain analysis and implementations. Because we have no physical constraints in "constructing" such systems, we need only a mathematical specification. In analog systems, the differential equation specifies the input-output relationship in the time-domain. The corresponding discrete-time specification is the difference equation.

y(n) = a1 y(n-1) + ··· + ap y(n-p) + b0 x(n) + b1 x(n-1) + ··· + bq x(n-q)    (5.42)

Here, the output signal y(n) is related to its p past values y(n-l), l = {1, ..., p}, and to the current and q past values of the input signal x(n). The system's characteristics are determined by the choices for the number of coefficients p and q and the coefficients' values {a1, ..., ap} and {b0, b1, ..., bq}.

aside: There is an asymmetry in the coefficients: where is a0? This coefficient would multiply the y(n) term in (5.42). We have essentially divided the equation by it, which does not change the input-output relationship. We have thus created the convention that a0 is always one.

As opposed to differential equations, which only provide an implicit description of a system (we must somehow solve the differential equation), difference equations provide an explicit way of computing the output for any input. We simply express the difference equation by a program that calculates each output from the previous output values, and the current and previous inputs. A MATLAB program that would compute the first 1000 values of the output has the form

  for n=1:1000
    y(n) = sum(a.*y(n-1:-1:n-p)) + sum(b.*x(n:-1:n-q));
  end

An important detail emerges when we consider making this program work; in fact, as written it has (at least) two bugs. What input and output values enter into the computation of y(1)? We need values for y(0), y(-1), ..., values we have not yet computed. To compute them, we would need more previous values of the output, which we have not yet computed; we would need even earlier values, ad infinitum. The way out of this predicament is to specify the system's initial conditions: we must provide the p output values that occurred before the input started. These values can be arbitrary, but the choice does impact how the system responds to a given input.
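For comparison, here is a Python sketch (names are mine, not the text's) of the same difference-equation loop with zero initial conditions built in, so it runs from n = 0:

```python
def difference_equation(a, b, x):
    """Evaluate y(n) = sum_l a[l] y(n-1-l) + sum_k b[k] x(n-k) per (5.42),
    with zero initial conditions. a = [a1, ..., ap], b = [b0, ..., bq]."""
    y = []
    for n in range(len(x)):
        out = sum(a[l] * y[n - 1 - l] for l in range(len(a)) if n - 1 - l >= 0)
        out += sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        y.append(out)
    return y

# y(n) = 0.5 y(n-1) + x(n) driven by a unit sample yields 0.5^n.
y = difference_equation([0.5], [1.0], [1.0, 0.0, 0.0, 0.0])
assert all(abs(y[n] - 0.5 ** n) < 1e-12 for n in range(4))
```

The guards `if n - 1 - l >= 0` and `if n - k >= 0` simply drop terms that reach before the input started, which is exactly what zero initial conditions mean.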
One choice gives rise to a linear system: choose the initial conditions to be zero.

Exercise 5.12.1 (Solution on p. 223.)
The initial condition issue resolves making sense of the difference equation for inputs that start at some index. However, the program will not work because of a programming, not conceptual, error. What is it? How can it be "fixed?"

Example 5.5
Let's consider the simple system having p = 1 and q = 0.

y(n) = a y(n-1) + b x(n)    (5.43)

To compute the output at some index, this difference equation says we need to know what the previous output y(n-1) and what the input signal x(n) are at that moment of time. In more detail, let's compute this system's output to a unit-sample input: x(n) = δ(n). Because the input is zero for negative indices, we start by trying to compute the output at n = 0.

y(0) = a y(-1) + b    (5.44)

What is the value of y(-1)? Because we have used an input that is zero for all negative indices, it is reasonable to assume that the output is also zero. Certainly, the difference equation would not describe a linear system (Section 2.6.6: Linear Systems) if an input that is zero for all time did not produce a zero output. The reason lies in the definition of a linear system: The only way that the output to a sum of signals can be the sum of the individual outputs occurs when the initial conditions in each case are zero. With this assumption, y(-1) = 0, leaving y(0) = b. For n > 0, the input unit-sample is zero, which leaves us with the difference equation y(n) = a y(n-1), n > 0. We can envision how the filter responds to this input by making a table.

y(n) = a y(n-1) + b δ(n)    (5.45)

 n  | x(n) | y(n)
 -1 |  0   |  0
  0 |  1   |  b
  1 |  0   |  ba
  2 |  0   |  ba²
  : |  0   |  :
  n |  0   |  baⁿ

Table 5.1

Coefficient values determine how the output behaves. The parameter b can be any value, and serves as a gain. The effect of the parameter a is more complicated (Table 5.1). If it equals zero, the output simply equals the input times the gain b. For all non-zero values of a, the output lasts forever; such systems are said to be IIR (Infinite Impulse Response). The reason for this terminology is that the unit sample is also known as the impulse (especially in analog situations), and the system's response to the "impulse" lasts forever. If a is positive and less than one, the output is a decaying exponential. When a = 1, the output is a unit step. If a is negative and greater than -1, the output oscillates while decaying exponentially. When a = -1, the output changes sign forever, alternating between b and -b.

the population ourishes. y (n) = a1 y (n − 1) + · · · + ap y (n − p) + b0 x (n) + b1 x (n − 1) + · · · + bq x (n − q) does not involve terms like y (n + 1) or x (n + 1) on the equation's right side.18: The input to the simple example system. the population becomes extinct. b = 1 1 y(n) a = –0.5. The dierence equation says that the number The same dierence indexes the times at in the next generation is some multiple of the previous one. Here. growing etc. alternating between b and −b. and n a equals the compound interest rate plus one.12.9> . For our example. b = 1 n n -1 n Figure 5. whether positive x(n) n 1 n y(n) a = 0.42).) Note that the dierence equation (5. which compounding occurs (daily.198 CHAPTER 5. DIGITAL SIGNAL PROCESSING sign forever. |a| > 1. Here.1.). if greater than one. Positive values of over time. we typically require that the output remain bounded for any input. equation also describes the eect of compound interest on deposits. 223. is shown at the top. |a| < 1 Exercise 5.5. b=1 (the bank provides no gain). with the outputs for several system parameter values shown below. a unit sample. the output signal becomes larger and larger. Can such terms also be included? Why or why not? Available for free at Connexions <http://cnx.2 (Solution on p. b = 1 4 2 0 y(n) a = 1. In signal processing applications. If this multiple is less than one. monthly. More dramatic eects when or negative. a are used in population models to describe how population size increases n might correspond to generation. that means that we restrict and chose values for it and the gain according to the application.
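The table's pattern is easy to check numerically. Below is a minimal sketch (in Python rather than the chapter's MATLAB; the function name is mine) of the difference equation y(n) = a y(n-1) + b x(n), driven by a unit sample with the zero initial condition discussed above.

```python
def difference_filter(a, b, x):
    """First-order difference equation y(n) = a*y(n-1) + b*x(n),
    with the zero initial condition y(-1) = 0."""
    y, prev = [], 0.0
    for xn in x:
        prev = a * prev + b * xn   # current output becomes the next "previous output"
        y.append(prev)
    return y

# Unit-sample input: x(n) = 1 at n = 0, zero elsewhere.
x = [1.0] + [0.0] * 9
a, b = 0.5, 1.0
y = difference_filter(a, b, x)
# The output follows Table 5.1: y(n) = b * a**n.
print(y[:4])  # [1.0, 0.5, 0.25, 0.125]
```

With a = 0.5 the output is the decaying exponential the table predicts; replacing a with 1.1 would show the exponential growth case.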

Example 5.6
A somewhat different system has no "a" coefficients. Consider the difference equation

y(n) = (1/q) (x(n) + ... + x(n-q+1))    (5.46)

Because this system's output depends only on current and previous input values, we need not be concerned with initial conditions. When the input is a unit sample, the output equals 1/q for n = {0, ..., q-1}, then equals zero thereafter. Such systems are said to be FIR (Finite Impulse Response) because their unit-sample responses have finite duration. Plotting this response (Figure 5.19) shows that the unit-sample response is a pulse of width q and height 1/q. This waveform is also known by another name, the boxcar, hence the name boxcar filter given to this system. We'll derive its frequency response and develop its filtering interpretation in the next section. For now, note that the difference equation says that each output value equals the average of the input's current and previous values. Thus, the output equals the running average of the input's most recent q values. Such a system could be used to produce the average weekly temperature (q = 7) that could be updated daily.

Figure 5.19: The plot shows the unit-sample response of a length-5 boxcar filter.

5.13 Discrete-Time Systems in the Frequency Domain

As with analog linear systems, we need to find the frequency response of discrete-time systems. We used impedances to derive the frequency response directly from the circuit's structure. The only structure we have so far for a discrete-time system is the difference equation. We proceed as when we used impedances: let the input be a complex exponential signal. When we have a linear, shift-invariant system, the output should also be a complex exponential of the same frequency, changed in amplitude and phase. These amplitude and phase changes comprise the frequency response we seek. The complex exponential input signal is x(n) = X e^(j2πfn). Note that this input occurs for all values of n; no need to worry about initial conditions here. Assume the output has a similar form: y(n) = Y e^(j2πfn). Plugging these signals into the fundamental difference equation (5.42), we have

Y e^(j2πfn) = a1 Y e^(j2πf(n-1)) + ... + ap Y e^(j2πf(n-p)) + b0 X e^(j2πfn) + b1 X e^(j2πf(n-1)) + ... + bq X e^(j2πf(n-q))    (5.47)

The assumed output does indeed satisfy the difference equation if the output complex amplitude is related to the input amplitude by

Y = ((b0 + b1 e^(-j2πf) + ... + bq e^(-j2πqf)) / (1 - a1 e^(-j2πf) - ... - ap e^(-j2πpf))) X    (5.48)

This relationship corresponds to the system's frequency response or, by another name, its transfer function. We find that any discrete-time system defined by a difference equation has a transfer function given by

H(e^(j2πf)) = (b0 + b1 e^(-j2πf) + ... + bq e^(-j2πqf)) / (1 - a1 e^(-j2πf) - ... - ap e^(-j2πpf))    (5.49)

Furthermore, because any discrete-time signal can be expressed as a superposition of complex exponential signals and because linear discrete-time systems obey the Superposition Principle, the transfer function relates the discrete-time Fourier transform of the system's output to the input's Fourier transform:

Y(e^(j2πf)) = X(e^(j2πf)) H(e^(j2πf))    (5.50)

Example 5.7
The frequency response of the simple IIR system (difference equation given in a previous example (Example 5.5)) is given by

H(e^(j2πf)) = b / (1 - a e^(-j2πf))

This Fourier transform occurred in a previous example; the exponential signal spectrum (Figure 5.10: Spectra of exponential signals) portrays the magnitude and phase of this transfer function. When the filter coefficient a is positive, we have a lowpass filter; negative a results in a highpass filter. The larger the coefficient in magnitude, the more pronounced the lowpass or highpass filtering.

Example 5.8
The length-q boxcar filter (difference equation found in a previous example (Example 5.6)) has the frequency response

H(e^(j2πf)) = (1/q) Σ_{m=0}^{q-1} e^(-j2πfm)    (5.51)

This expression amounts to the Fourier transform of the boxcar signal (Figure 5.19). There we found that this frequency response has a magnitude equal to the absolute value of dsinc(πf); see the length-10 filter's frequency response (Figure 5.11: Spectrum of length-ten pulse). We see that boxcar filters (length-q signal averagers) have a lowpass behavior, having a cutoff frequency of 1/q.

Exercise 5.13.1 (Solution on p. 223.)
Suppose we multiply the boxcar filter's coefficients by a sinusoid: bm = (1/q) cos(2πf0 m). Use Fourier transform properties to determine the transfer function. How would you characterize this system: Does it act like a filter? If so, what kind of filter, and how do you control its characteristics with the filter's coefficients?

These examples illustrate the point that systems described (and implemented) by difference equations serve as filters for discrete-time signals. The filter's order is given by the number p of denominator coefficients in the transfer function (if the system is IIR) or by the number q of numerator coefficients if the filter is FIR. When a system's transfer function has both terms, the system is usually IIR, and its order equals p regardless of q. By selecting the coefficients and filter type, filters having virtually any frequency response desired can be designed. This design flexibility can't be found in analog systems. In the next section, we detail how analog signals can be filtered by computers, offering a much greater range of filtering possibilities than is possible with circuits.

5.14 Filtering in the Frequency Domain

Because we are interested in actual computations rather than analytic calculations, we must consider the details of the discrete Fourier transform. To compute the length-N DFT, we assume that the signal has a duration less than or equal to N. Because frequency responses have an explicit frequency-domain specification (5.49) in terms of filter coefficients, we don't have a direct handle on which signal has a Fourier transform equaling a given frequency response. Finding this signal is quite easy. First of all, note that the discrete-time Fourier transform of a unit sample equals one for all frequencies. Because the input and output of linear, shift-invariant systems are related to each other by Y(e^(j2πf)) = H(e^(j2πf)) X(e^(j2πf)), a unit-sample input, which has X(e^(j2πf)) = 1, results in the output's Fourier transform equaling the system's transfer function.
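The transfer function (5.49) can be evaluated numerically for any coefficient choice. The sketch below (Python rather than the text's MATLAB; the helper name is mine) computes H(e^(j2πf)) from the difference-equation coefficients and confirms the lowpass behavior of the simple IIR example when a is positive.

```python
import cmath

def freq_response(b_coefs, a_coefs, f):
    """Transfer function of y(n) = sum(a_l y(n-l)) + sum(b_m x(n-m)):
    H = (b0 + b1 e^{-j2pi f} + ...) / (1 - a1 e^{-j2pi f} - ...)."""
    z = cmath.exp(-2j * cmath.pi * f)
    num = sum(bm * z**m for m, bm in enumerate(b_coefs))
    den = 1 - sum(al * z**l for l, al in enumerate(a_coefs, start=1))
    return num / den

# Simple IIR example: y(n) = a y(n-1) + b x(n), with a = 0.8, b = 1.
H0 = freq_response([1.0], [0.8], 0.0)     # DC gain: b/(1-a) = 5
Hhalf = freq_response([1.0], [0.8], 0.5)  # gain at the highest discrete-time frequency
print(abs(H0), abs(Hhalf))
```

The magnitude at f = 0 exceeds that at f = 1/2, the lowpass behavior claimed in Example 5.7; a negative a would reverse the comparison.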

Exercise 5.14.1 (Solution on p. 223.)
This statement is a very important result. Derive it yourself.

In the time domain, the output for a unit-sample input is known as the system's unit-sample response and is denoted by h(n). Combining the frequency-domain and time-domain interpretations of a linear, shift-invariant system's unit-sample response, we have that h(n) and the transfer function are Fourier transform pairs in terms of the discrete-time Fourier transform:

h(n) <-> H(e^(j2πf))    (5.52)

Returning to the issue of how to use the DFT to perform filtering, we can analytically specify the frequency response and derive the corresponding length-N DFT by sampling the frequency response:

H(k) = H(e^(j2πk/N)), k = {0, ..., N-1}    (5.53)

Computing the inverse DFT yields a length-N signal no matter what the actual duration of the unit-sample response might be. If the unit-sample response has a duration less than or equal to N (it's a FIR filter), computing the inverse DFT of the sampled frequency response indeed yields the unit-sample response. If, however, the duration exceeds N, errors are encountered. The nature of these errors is easily explained by appealing to the Sampling Theorem. By sampling in the frequency domain, we have the potential for aliasing in the time domain (sampling in one domain, be it time or frequency, can result in aliasing in the other) unless we sample fast enough. Here, the duration of the unit-sample response determines the minimal sampling rate that prevents aliasing. For FIR systems (they by definition have finite-duration unit-sample responses), the number of required DFT samples equals the unit-sample response's duration: N ≥ q.

Exercise 5.14.2 (Solution on p. 223.)
Derive the minimal DFT length for a length-q unit-sample response using the Sampling Theorem. Because sampling in the frequency domain causes repetitions of the unit-sample response in the time domain, sketch the time-domain result for various choices of the DFT length N.

Exercise 5.14.3 (Solution on p. 223.)
Express the unit-sample response of a FIR filter in terms of difference equation coefficients. Note that the corresponding question for IIR filters is far more difficult to answer: consider the example (Example 5.5).

For IIR systems, we cannot use the DFT to find the system's unit-sample response: aliasing of the unit-sample response will always occur. Consequently, we can only implement an IIR filter accurately in the time domain with the system's difference equation. Frequency-domain implementations are restricted to FIR filters.

Another issue arises in frequency-domain filtering that is related to time-domain aliasing, this time when we consider the output. Assume we have an input signal having duration Nx that we pass through a FIR filter having a length-(q+1) unit-sample response. What is the duration of the output signal? The difference equation for this filter is

y(n) = b0 x(n) + ... + bq x(n-q)    (5.54)

This equation says that the output depends on current and past input values, with the input value q samples previous defining the extent of the filter's memory of past input values. For example, the output at index Nx depends on x(Nx) (which equals zero), x(Nx-1), through x(Nx-q). Thus, the output returns to zero only after the last input value passes through the filter's memory. As the input signal's last value occurs at index Nx-1, the last nonzero output value occurs when n-q = Nx-1, or n = q+Nx-1. Thus, the output signal's duration equals q+Nx.

Exercise 5.14.4 (Solution on p. 223.)
In words, we express this result as "The output's duration equals the input's duration plus the filter's duration minus one." Demonstrate the accuracy of this statement.
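The duration result can be checked by direct computation. A sketch (Python; the function name is mine) filters a length-Nx input with a length-(q+1) unit-sample response and confirms that the output has duration q + Nx.

```python
def fir_filter(b, x):
    """y(n) = b0 x(n) + ... + bq x(n-q), evaluated for n = 0 .. len(x)+len(b)-2,
    treating the input as zero outside its duration."""
    q = len(b) - 1
    Nx = len(x)
    out = []
    for n in range(Nx + q):
        acc = 0.0
        for m, bm in enumerate(b):
            if 0 <= n - m < Nx:
                acc += bm * x[n - m]
        out.append(acc)
    return out

b = [0.2] * 5   # length-5 averager, so q = 4
x = [1.0] * 10  # input duration Nx = 10
y = fir_filter(b, x)
print(len(y))   # q + Nx = 14; the last nonzero value sits at index q + Nx - 1
```

The output length, 14, equals the input's duration plus the filter's duration minus one, as the exercise above asks you to show.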

The main theme of this result is that a filter's output extends longer than either its input or its unit-sample response. Thus, to avoid aliasing when we use DFTs, the number of values at which we must evaluate the frequency response's DFT must be at least q+Nx, and we must compute a DFT of the same length for the input. To accommodate a signal shorter than the DFT length, we simply zero-pad the input: ensure that for indices extending beyond the signal's duration the signal is zero. Frequency-domain filtering, diagrammed in Figure 5.20, is accomplished by storing the filter's frequency response as the DFT H(k), computing the input's DFT X(k), multiplying them to create the output's DFT Y(k) = H(k)X(k), and computing the inverse DFT of the result to yield y(n).

x(n) -> DFT -> X(k) -> [multiply by H(k)] -> Y(k) -> IDFT -> y(n)

Figure 5.20: To filter a signal in the frequency domain, first compute the DFT of the input, multiply the result by the sampled frequency response, and finally compute the inverse DFT of the product. The DFT's length must be at least the sum of the input's and unit-sample response's durations minus one. We calculate these discrete Fourier transforms using the fast Fourier transform algorithm.

Before detailing this procedure, let's clarify why so many new issues arose in trying to develop a frequency-domain implementation of linear filtering. The frequency-domain relationship between a filter's input and output is always true: Y(e^(j2πf)) = H(e^(j2πf)) X(e^(j2πf)). The Fourier transforms in this result are discrete-time Fourier transforms; for example, X(e^(j2πf)) = Σ_n x(n) e^(-j2πfn). Unfortunately, using this relationship to perform filtering is restricted to situations in which we have analytic formulas for the frequency response and the input signal. That's fine for analytic calculation, but computationally we would have to make an uncountably infinite number of computations.

note: Did you know that two kinds of infinities can be meaningfully defined? A countably infinite quantity means that it can be associated with a limiting process involving the integers. The number of rational numbers is countably infinite (the numerator and denominator correspond to locating the rational by row and column; the total number so located can be counted, voila!); the number of irrational numbers is uncountably infinite. An uncountably infinite quantity cannot be so associated. Guess which is "bigger?"

The DFT computes the Fourier transform at a finite set of frequencies (it samples the true spectrum), which can lead to aliasing in the time domain unless we sample sufficiently fast. The sampling interval here is 1/K for a length-K DFT: faster sampling to avoid aliasing thus requires a longer transform calculation. The reason why we had to "invent" the discrete Fourier transform (DFT) has the same origin: the spectrum resulting from the discrete-time Fourier transform depends on the continuous frequency variable f. Since the longest signal among the input, unit-sample response, and output is the output, it is that signal's duration that determines the transform length.
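The procedure of Figure 5.20 can be sketched with a naive O(N^2) DFT in Python (a real implementation would use the FFT; the helper names are mine). The transform length is chosen to be at least q + Nx, so no time-domain aliasing occurs and the result matches direct convolution.

```python
import cmath

def dft(x, N):
    """Length-N DFT of x, zero-padding the input to length N."""
    x = list(x) + [0.0] * (N - len(x))
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

h = [0.2] * 5              # length-5 averager (q = 4)
x = [1.0, 2.0, 3.0, 4.0]   # Nx = 4
N = len(x) + len(h) - 1    # at least q + Nx; here N = 8

# Multiply the sampled frequency response by the input's DFT, then invert.
Y = [Hk * Xk for Hk, Xk in zip(dft(h, N), dft(x, N))]
y = [v.real for v in idft(Y)]  # imaginary parts are roundoff

# Direct time-domain computation for comparison.
direct = [sum(h[m] * x[n - m] for m in range(len(h)) if 0 <= n - m < len(x))
          for n in range(N)]
```

Because N equals the output's duration, the circular result produced by the DFT coincides with the true output; a shorter N would wrap the tail back onto the beginning.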

Example 5.9
Suppose we want to average daily stock prices taken over last year to yield a running weekly average (the average over five trading sessions). The filter we want is a length-5 averager (as shown in the unit-sample response (Figure 5.19)), and the input's duration is 253 (365 calendar days minus weekend days and holidays). The output duration will be 253 + 5 - 1 = 257, and this determines the transform length we need to use. Because we want to use the FFT, we are restricted to power-of-two transform lengths, so we need to choose an FFT length that exceeds the required DFT length. As it turns out, 256 is a power of two (2^8 = 256), and this length just undershoots our required length. To use frequency-domain techniques, we must use length-512 fast Fourier transforms.

Figure 5.21 shows the input and the filtered output. The MATLAB programs that compute the filtered output in the time and frequency domains are

Time Domain
h = [1 1 1 1 1]/5;
y = filter(h,1,[djia zeros(1,4)]);

Frequency Domain
h = [1 1 1 1 1]/5;
DJIA = fft(djia,512);
H = fft(h,512);
Y = H.*DJIA;
y = ifft(Y);

note: The filter program has the feature that the length of its output equals the length of its input. To force it to produce a signal having the proper length, the program zero-pads the input appropriately. MATLAB's fft function automatically zero-pads its input if the specified transform length (its second argument) exceeds the signal's length. The frequency-domain result will have a small imaginary component (largest value 2.2 x 10^-11) because of the inherent finite-precision nature of computer arithmetic. Because of the unfortunate misfit between signal lengths and favored FFT lengths, the number of arithmetic operations in the time-domain implementation is far less than those required by the frequency-domain version: 514 versus 62,271. If the input signal had been one sample shorter, the frequency-domain computations would have been more than a factor of two less (28,696), but still far more than in the time-domain implementation.

Figure 5.21: The blue line shows the Dow Jones Industrial Average from 1997, and the red one the length-5 boxcar-filtered result that provides a running weekly average of this market index. Note the "edge" effects in the filtered output.

An interesting signal processing aspect of this example is demonstrated at the beginning and end of the output. The ramping up and down that occurs can be traced to assuming the input is zero before it begins and after it ends. The filter "sees" these initial and final values as the difference equation passes over the input. These artifacts can be handled in two ways: we can just ignore the edge effects, or the data from the previous and succeeding years' last and first weeks, respectively, can be placed at the ends.

5.15 Efficiency of Frequency-Domain Filtering

To determine for what signal and filter durations a time- or frequency-domain implementation would be the most efficient, we need only count the computations required by each. For the time-domain, difference-equation approach, we need Nx(2q + 1) computations. The frequency-domain approach requires three Fourier transforms, each requiring K log2(K) computations for a length-K FFT, and the multiplication of two spectra (6K computations). The output-signal-duration-determined length must be at least Nx + q. Thus, we must compare

Nx(2q + 1) <-> 6(Nx + q) + 3(Nx + q) log2(Nx + q)

Exact analytic evaluation of this comparison is quite difficult (we have a transcendental equation to solve). Insight is best obtained by dividing by Nx:

2q + 1 <-> 6(1 + q/Nx) + 3(1 + q/Nx) log2(Nx + q)

With this manipulation, we are evaluating the number of computations per sample. For any given value of the filter's order q, the right side, the number of frequency-domain computations, will exceed the left if the signal's duration is long enough. However, for filter durations greater than about 10, as long as the input is at least 10 samples long, the frequency-domain approach is faster, so long as the FFT's power-of-two constraint is advantageous.

The frequency-domain approach is not yet viable, however; what will we do when the input signal is infinitely long? The difference-equation scenario fits perfectly with the envisioned digital filtering structure (Figure 5.24), but so far we have required the input to have limited duration (so that we could calculate its Fourier transform). The solution to this problem is quite simple: section the input into frames, filter each, and add the results together. To section a signal means expressing it as a linear combination of length-Nx non-overlapping "chunks." Because the filter is linear, filtering a sum of terms is equivalent to summing the results of filtering each term:

x(n) = Σ_{m=-∞}^{∞} x(n - mNx)  =>  y(n) = Σ_{m=-∞}^{∞} y(n - mNx)    (5.55)

As illustrated in Figure 5.22, note that each filtered section has a duration longer than the input. Consequently, we must literally add the filtered sections together, not just butt them together. Computation counting changes as well, because we need only compute the filter's frequency response H(k) once, which amounts to a fixed overhead. We need only compute two DFTs and multiply them to filter a section. Letting Nx denote a section's length, the number of computations for a section amounts to (Nx + q) log2(Nx + q) + 6(Nx + q). In addition, we must add the filtered sections together; the number of terms to add corresponds to the excess duration of the output compared with the input (q).
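The sectioning idea in (5.55) can be sketched as follows (Python; direct convolution stands in for the per-section frequency-domain filtering, which does not change the result): filter each length-Nx section, then add the overlapping outputs.

```python
def conv(h, x):
    """Full convolution; output length is len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for m, hm in enumerate(h):
            y[n + m] += hm * xn
    return y

h = [0.25, 0.5, 0.25]                  # q = 2
x = [float(n % 7) for n in range(20)]  # stand-in for a long input
Nx = 5                                 # section length

# Filter each length-Nx section, then overlap-add the results.
y = [0.0] * (len(x) + len(h) - 1)
for start in range(0, len(x), Nx):
    section = x[start:start + Nx]
    for i, v in enumerate(conv(h, section)):
        y[start + i] += v              # the q extra samples overlap the next section

full = conv(h, x)                      # filtering the whole input at once
```

Because the filter is linear, the overlap-added output equals the output of filtering the entire input, which is the content of (5.55).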

The frequency-domain approach thus requires (1 + q/Nx) log2(Nx + q) + 7q/Nx + 6 computations per output value. The number of computations for a time-domain implementation essentially remains constant whether we section the input or not; the number of computations for each output is 2q + 1. Thus, for even modest filter orders, the frequency-domain approach is much faster.

Exercise 5.15.1 (Solution on p. 223.)
Show that as the section length increases, the frequency-domain approach becomes increasingly more efficient.

Note that the choice of section duration is arbitrary. Once the filter is chosen, we should section so that the required FFT length is precisely a power of two: choose Nx so that Nx + q = 2^L.

Figure 5.22: The noisy input signal is sectioned into length-48 frames, each of which is filtered using frequency-domain techniques. Each filtered section is added to the other, overlapping, filtered sections to create a signal equivalent to having filtered the entire input. The sinusoidal component of the signal is shown as the red dashed line.
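As a numerical check on this comparison, a sketch (Python) evaluates the two per-output counts, 2q + 1 for the time domain and (1 + q/Nx) log2(Nx + q) + 7q/Nx + 6 for the sectioned frequency domain, at q = 16 and Nx = 48 (so that Nx + q = 64, a power of two).

```python
from math import log2

def time_domain_ops(q):
    """Difference-equation cost per output value."""
    return 2 * q + 1

def freq_domain_ops(q, Nx):
    """Sectioned frequency-domain cost per output value."""
    return (1 + q / Nx) * log2(Nx + q) + 7 * q / Nx + 6

q, Nx = 16, 48
print(time_domain_ops(q))      # 33
print(freq_domain_ops(q, Nx))  # about 16.3
```

These are the figures quoted in the noise-removal example below: 33 computations per output for the difference equation versus a little over 16 for the frequency-domain implementation.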

Frequency-domain approaches don't operate on a sample-by-sample basis; instead, they operate on sections. A real-time, time-domain filter could accept each sample as it becomes available, calculate the difference equation, and produce the output value, all in less than the sampling interval Ts. To operate in real time, a frequency-domain approach must produce Nx outputs for the same number of inputs faster than Nx Ts. Because such implementations generally take longer to produce an output section than the sampling-interval duration, we must filter one section while accepting into memory the next section to be filtered. In programming, the operation of building up sections while computing on previous ones is known as buffering. Buffering can also be used in time-domain filters, but isn't required there.

Example 5.10
We want to lowpass filter a signal that contains a sinusoid and a significant amount of noise. The example shown in Figure 5.22 shows a portion of the noisy signal's waveform. If it weren't for the overlaid sinusoid, discerning the sine wave in the signal would be virtually impossible. One of the primary applications of linear filters is noise removal: preserve the signal by matching the filter's passband with the signal's spectrum, and greatly reduce all other frequency components that may be present in the noisy signal. A smart Rice engineer has selected a FIR filter having a unit-sample response corresponding to a period-17 sinusoid: h(n) = (1/17)(1 - cos(2πn/17)), n = {0, ..., 16}, which makes q = 16. Its frequency response (determined by computing the discrete Fourier transform) is shown in Figure 5.23. To apply the filter, we can select the length of each section so that the frequency-domain filtering approach is maximally efficient: choose the section length Nx so that Nx + q is a power of two. To use a length-64 FFT, each section must be 48 samples long. Filtering with the difference equation would require 33 computations per output, while the frequency domain requires a little over 16; this frequency-domain implementation is over twice as fast! Figure 5.22 shows how frequency-domain filtering works. We note that the noise has been dramatically reduced, with a sinusoid now clearly visible in the filtered output. Some residual noise remains because noise components within the filter's passband appear in the output as well as the signal.

Figure 5.23: The figure shows the unit-sample response of a length-17 Hanning filter on the left and the frequency response on the right. This filter functions as a lowpass filter having a cutoff frequency of about 0.1.
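The example's coefficients are easy to generate. A sketch (Python) computes h(n) = (1/17)(1 - cos(2πn/17)) and checks that the coefficients sum to one, so the filter passes DC unchanged, consistent with its lowpass character.

```python
from math import cos, pi

L = 17
h = [(1 - cos(2 * pi * n / L)) / L for n in range(L)]  # length-17 Hanning filter

# The DC gain H(e^{j0}) is just the coefficient sum; the cosine terms
# sum to zero over a full period, leaving exactly one.
dc_gain = sum(h)
print(round(dc_gain, 12))
```

All 17 coefficients are nonnegative and taper to zero at n = 0, the smooth shape that gives the filter its low sidelobes compared with the boxcar averager.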

Exercise 5.15.2 (Solution on p. 223.)
Note that when compared to the input signal's sinusoidal component, the output's sinusoidal component seems to be delayed. What is the source of this delay? Can it be removed?

5.16 Discrete-Time Filtering of Analog Signals

Because of the Sampling Theorem (Section 5.3: The Sampling Theorem), we can process, in particular filter, analog signals "with a computer" by constructing the system shown in Figure 5.24. To use this system, the signal x(t) must be filtered with an anti-aliasing filter (to ensure a bandlimited signal) before A/D conversion. This lowpass filter (LPF) has a cutoff frequency of W Hz, which determines the allowable sampling intervals Ts. The greater the number of bits in the amplitude quantization portion Q[.] of the A/D converter, the greater the accuracy of the entire system. The resulting digital signal x(n) can now be filtered in the time domain with a difference equation or in the frequency domain with Fourier transforms. The resulting output y(n) then drives a D/A converter and a second anti-aliasing filter (having the same bandwidth as the first one).

Figure 5.24: To process an analog signal digitally, the signal x(t) must be filtered with an anti-aliasing filter (to ensure a bandlimited signal) before A/D conversion. The chain is: x(t) -> LPF (cutoff W) -> sampling at t = nTs with 1/Ts > 2W -> Q[.] -> x(n) = Q[x(nTs)] -> Digital Filter -> y(n) -> D/A -> LPF (cutoff W) -> y(t).

note: We are assuming that the input signal has a lowpass spectrum and can be bandlimited without affecting important signal aspects. Bandpass signals can also be filtered digitally, but require a more complicated system. Highpass signals cannot be filtered digitally. Although anti-aliasing filters add complexity, trying to operate without them can lead to potentially very inaccurate digitization.

Another implicit assumption is that the digital filter can operate in real time: the computer and the filtering algorithm must be sufficiently fast so that outputs are computed faster than input values arrive. The sampling interval, which is determined by the analog signal's bandwidth, thus determines how long our program has to compute each output y(n). The computational complexity for calculating each output with a difference equation (5.42) is O(p + q). Frequency-domain implementation of the filter is also possible. The idea begins by computing the Fourier transform of a length-N portion of the input x(n), multiplying it by the filter's transfer function, and computing the inverse transform of the result. This approach seems overly complex and potentially inefficient. Detailing the complexity, however, we have O(N logN) for the two transforms (computed using the FFT algorithm) and O(N) for the multiplication by the transfer function, which makes the total complexity O(N logN) for N input values. A frequency-domain implementation thus requires O(logN) computational complexity for each output value. The complexities of time-domain and frequency-domain implementations depend on different aspects of the filtering: the time-domain implementation depends on the combined orders of the filter, while the frequency-domain implementation depends on the logarithm of the Fourier transform's length. It could well be that in some problems the time-domain version is more efficient (more easily satisfies the real-time requirement), while in others the frequency-domain approach is faster. Because complexity considerations only express how algorithm running time increases with system parameter choices, we need to detail both implementations to determine which will be more suitable for any given filtering problem.
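The quantizer Q[.] in the A/D portion of Figure 5.24 can be sketched as a uniform rounding quantizer (Python; the level spacing 2Vmax/2^B used here is an assumption of this sketch, not a specification from the text). With rounding, the error never exceeds half a level spacing.

```python
from math import sin, pi

def quantize(sample, B, Vmax):
    """Round `sample` to one of 2**B uniformly spaced levels spanning [-Vmax, Vmax]
    (assumed level spacing: delta = 2*Vmax / 2**B)."""
    delta = 2 * Vmax / 2**B
    level = round(sample / delta)
    # Clamp to the representable range of a B-bit two's-complement code.
    level = max(-(2**(B - 1)), min(2**(B - 1) - 1, level))
    return level * delta

B, Vmax = 8, 1.0
delta = 2 * Vmax / 2**B
# A sinusoid kept safely inside the converter's range, so clamping never occurs.
samples = [0.9 * sin(2 * pi * n / 50) for n in range(50)]
errors = [abs(quantize(s, B, Vmax) - s) for s in samples]
print(max(errors))  # never exceeds delta/2
```

The greater B is, the smaller the level spacing and the quantization error, which is the sense in which more bits give the system greater accuracy.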

Filtering with a difference equation is straightforward, and the number of computations that must be made for each output value is 2(p + q).

Exercise 5.16.1 (Solution on p. 223.)
Derive this value for the number of computations for the general difference equation (5.42).

5.17 Digital Signal Processing Problems

Problem 5.1: Sampling and Filtering
The signal s(t) is bandlimited to 4 kHz. We want to sample it, but it has been subjected to various signal processing manipulations.

a) What sampling frequency (if any works) can be used to sample the result of passing s(t) through an RC highpass filter with R = 10 kΩ and C = 8 nF?
b) What sampling frequency (if any works) can be used to sample the derivative of s(t)?
c) The signal s(t) has been modulated by an 8 kHz sinusoid having an unknown phase: the resulting signal is s(t) sin(2πf0 t + φ), with f0 = 8 kHz and φ unknown. Can the modulated signal be sampled so that the original signal can be recovered from the modulated signal regardless of the phase value φ? If so, show how and find the smallest sampling rate that can be used; if not, show why not.

Problem 5.2: Non-Standard Sampling
Using the properties of the Fourier series can ease finding a signal's spectrum.

a) Suppose a signal s(t) is periodic with period T. If ck represents the signal's Fourier series coefficients, what are the Fourier series coefficients of s(t - T/2)?
b) Find the Fourier series of the signal p(t) shown in Figure 5.25 (Pulse Signal).
c) Suppose this signal is used to sample a signal bandlimited to 1/T Hz. Find an expression for, and sketch, the spectrum of the sampled signal.
d) Does aliasing occur? If so, can a change in sampling rate prevent aliasing; if not, show how the signal can be recovered from these samples.

Figure 5.25: Pulse Signal. The periodic signal p(t): width-Δ pulses of amplitude A at multiples of T, and width-Δ pulses of amplitude -A at odd multiples of T/2.

Problem 5.3: A Different Sampling Scheme
A signal processing engineer from Texas A&M claims to have developed an improved sampling scheme. He multiplies the bandlimited signal by the depicted periodic pulse signal to perform sampling (Figure 5.26).

a) Find the Fourier spectrum of this signal.
b) Will this scheme work? If so, how should Ts be related to the signal's bandwidth? If not, why not?

Figure 5.26: The periodic pulse signal used in the proposed sampling scheme: pulses of width Δ and amplitude A, with the time axis marked at Ts/4, Ts, and 5Ts/4.

Problem 5.4: Bandpass Sampling
The signal s(t) has the indicated spectrum.

[Figure 5.27: the spectrum S(f), nonzero only for W ≤ |f| ≤ 2W.]

a) What is the minimum sampling rate for this signal suggested by the Sampling Theorem?
b) Because of the particular structure of this spectrum, one wonders whether a lower sampling rate could be used. Show that this is indeed the case, and find the system that reconstructs s(t) from its samples.

Problem 5.5: Sampling Signals
If a signal is bandlimited to W Hz, we can sample it at any rate 1/Ts > 2W and recover the waveform exactly. This statement of the Sampling Theorem can be taken to mean that all information about the original signal can be extracted from the samples. While true in principle, you do have to be careful how you do so. In addition to the rms value of a signal, an important aspect of a signal is its peak value, which equals max{|s(t)|}.
a) Let s(t) be a sinusoid having frequency W Hz. If we sample it at precisely the Nyquist rate, how accurately do the samples convey the sinusoid's amplitude? In other words, find the worst case example.
b) How fast would you need to sample for the amplitude estimate to be within 5% of the true value?
c) Another issue in sampling is the inherent amplitude quantization produced by A/D converters. Assume the maximum voltage allowed by the converter is Vmax volts and that it quantizes amplitudes to b bits. We can express the quantized sample Q(s(nTs)) as s(nTs) + ε(t), where ε(t) represents the quantization error at the nth sample. Assuming the converter rounds, how large is the maximum quantization error?
d) We can describe the quantization error as noise, with a power proportional to the square of the maximum error. What is the signal-to-noise ratio of the quantization error for a full-range sinusoid? Express your result in decibels.

Problem 5.6: Hardware Error
An A/D converter has a curious hardware problem: Every other sampling pulse is half its normal amplitude (Figure 5.28).

[Figure 5.28: the sampling pulse signal p(t): pulses of width ∆ at t = 0, T, 2T, 3T, ..., with every other pulse of amplitude A/2 instead of A.]

a) Find the Fourier series for this signal.
b) Can this signal be used to sample a bandlimited signal having highest frequency W = 1/(2T)?

Problem 5.7: Simple D/A Converter
Commercial digital-to-analog converters don't work this way, but a simple circuit illustrates how they work. Let's assume we have a B-bit converter. Thus, we want to convert numbers having a B-bit representation into a voltage proportional to that number. The first step taken by our simple converter is to represent the number by a sequence of B pulses occurring at multiples of a time interval T. The presence of a pulse indicates a 1 in the corresponding bit position, and pulse absence means a 0 occurred. For a 4-bit converter, the number 13 has the binary representation 1101 (13₁₀ = 1×2³ + 1×2² + 0×2¹ + 1×2⁰) and would be represented by the depicted pulse sequence. Note that the pulse sequence is backwards from the binary representation. We'll see why that is.

[Figure 5.29: the pulse sequence representing 13: pulses of width ∆ and amplitude A at t = T, 3T, and 4T, none at 2T (bit pattern 1 0 1 1 read from T to 4T).]

This signal (Figure 5.29) serves as the input to a first-order RC lowpass filter. We want to design the filter and the parameters ∆ and T so that the output voltage at time 4T (for a 4-bit converter) is proportional to the number.

This combination of pulse creation and filtering constitutes our simple D/A converter. The requirements are:
• The voltage at time t = 4T should diminish by a factor of 2 the further the pulse occurs from this time. That is, the voltage due to a pulse at 3T should be twice that of a pulse produced at 2T, which in turn is twice that of a pulse at T, etc.
• The 4-bit D/A converter must support a 10 kHz sampling rate.
a) Show the circuit that works.
b) How do the converter's parameters change with sampling rate and number of bits in the converter?

Problem 5.8: Discrete-Time Fourier Transforms
Find the Fourier transforms of the following sequences, where s(n) is some sequence having Fourier transform S(e^(j2πf)).
a) (−1)^n s(n)
b) s(n) cos(2πf0·n)
c) x(n) = s(n/2) if n is even, and 0 if n is odd
d) n·s(n)

Problem 5.9: Spectra of Finite-Duration Signals
Find the indicated spectra for the following signals.
a) The discrete-time Fourier transform of s(n) = cos²(πn/4) for n = {−1, 0, 1}, and 0 otherwise
b) The discrete-time Fourier transform of s(n) = sin(πn/4) for n = {0, ..., 7}, and 0 otherwise
c) The discrete-time Fourier transform of s(n) = n for n = {−2, −1, 0, 1, 2}, and 0 otherwise
d) The length-8 DFT of the previous signal

Problem 5.10: Just Whistlin'
Sammy loves to whistle and decides to record and analyze his whistling in lab. He is a very good whistler; his whistle is a pure sinusoid that can be described by sa(t) = sin(4000t). To analyze the spectrum, he samples his recorded whistle with a sampling interval of TS = 2.5 × 10⁻⁴ to obtain s(n) = sa(nTS). Sammy (wisely) decides to analyze a few samples at a time, so he grabs 30 consecutive, but arbitrarily chosen, samples. He calls this sequence x(n) and realizes he can write it as
x(n) = sin(4000·n·TS + θ), n = {0, ..., 29}.
a) Did Sammy under- or over-sample his whistle?
b) What is the discrete-time Fourier transform of x(n) and how does it depend on θ?
c) How does the 32-point DFT of x(n) depend on θ?
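The whistle numbers can be checked directly. Taking TS = 2.5 × 10⁻⁴ s (the sampling interval suggested by the fragments of the problem statement; treat it as an assumption), the discrete-time frequency is 1 radian/sample, well under π, so the whistle is oversampled, and a 32-point DFT of the 30 windowed samples should peak near bin 32/(2π) ≈ 5. A sketch using only the standard library, with an arbitrary phase θ:

```python
import cmath, math

Ts, theta = 2.5e-4, 0.7            # sampling interval and an arbitrary phase
x = [math.sin(4000 * n * Ts + theta) for n in range(30)]   # 30 samples
x += [0.0, 0.0]                    # zero-pad to 32 points

def dft(seq):
    """Direct (non-FFT) DFT, adequate for a 32-point check."""
    N = len(seq)
    return [sum(seq[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

X = dft(x)
mags = [abs(v) for v in X[:17]]    # positive-frequency half
peak = mags.index(max(mags))
# 4000 rad/s at fs = 4000 Hz is 1/(2*pi) cycles/sample, so 32/(2*pi) ~ 5.09
assert peak == 5
```

Because 32/(2π) is not an integer, the DFT values (though not the peak location) do depend on θ, which is the point of part (c).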

Problem 5.11: Discrete-Time Filtering
We can find the input-output relation for a discrete-time filter much more easily than for analog filters. The key idea is that a sequence can be written as a weighted linear combination of unit samples.
a) Show that x(n) = Σᵢ x(i) δ(n − i), where δ(n) is the unit-sample: δ(n) = 1 if n = 0, and 0 otherwise.
b) If h(n) denotes the unit-sample response (the output of a discrete-time linear, shift-invariant filter to a unit-sample input), find an expression for the output.
c) In particular, assume our filter is FIR, with the unit-sample response having duration q + 1. If the input has duration N, what is the duration of the filter's output to this signal?
d) Let the filter be a boxcar averager: h(n) = 1/(q+1) for n = {0, ..., q} and zero otherwise, and let the input be a pulse of unit height and duration N. Find the filter's output when N = (q+1)/2, q an odd integer.

Problem 5.12: A Digital Filter
A digital filter has the depicted unit-sample response.

[Figure 5.30: the unit-sample response h(n), plotted over n = −1, ..., 4, with peak amplitude 2.]

a) What is the difference equation that defines this filter's input-output relationship?
b) What is this filter's transfer function?
c) What is the filter's output when the input is sin(πn/4)?

Problem 5.13: A Special Discrete-Time Filter
Consider a FIR filter governed by the difference equation
y(n) = (1/3)x(n+2) + (2/3)x(n+1) + x(n) + (2/3)x(n−1) + (1/3)x(n−2)
a) Find this filter's unit-sample response.

b) Find this filter's transfer function. Characterize this transfer function (i.e., what classic filter category does it fall into?).
c) Suppose we take a sequence and stretch it out by a factor of three:
x(n) = s(n/3) if n = 3m, m = {..., −1, 0, 1, ...}, and 0 otherwise.
Sketch the sequence x(n) for some example s(n). What is the filter's output to this input? In particular, what is the output at the indices where the input x(n) is intentionally zero? Now how would you characterize this system?

Problem 5.14: Simulating the Real World
Much of physics is governed by differential equations, and we want to use signal processing methods to simulate physical problems. The idea is to replace the derivative with a discrete-time approximation and solve the resulting difference equation. For example, suppose we have the differential equation
dy(t)/dt + a·y(t) = x(t)
and we approximate the derivative by
dy(t)/dt at t = nT ≈ (y(nT) − y((n−1)T)) / T,
where T essentially amounts to a sampling interval.
a) What is the difference equation that must be solved to approximate the differential equation?
b) When x(t) = u(t), the unit step, what will be the simulated output?
c) Assuming x(t) is a sinusoid, how should the sampling interval T be chosen so that the approximation works well?

Problem 5.15: Derivatives
The derivative of a sequence makes little sense, but still, we can approximate it. The digital filter described by the difference equation y(n) = x(n) − x(n−1) resembles the derivative formula. We want to explore how well it works.
a) What is this filter's transfer function?
b) What is the filter's output to the depicted triangle input (Figure 5.31)?

[Figure 5.31: the triangle input x(n), rising from 0 at n = 0 to a peak of 3 and falling back to 0 by n = 6.]

c) Suppose the signal x(n) is a sampled analog signal: x(n) = x(nTs). Under what conditions will the filter act like a differentiator? In other words, when will y(n) be proportional to (d/dt)x(t) at t = nTs?
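The discretization in the simulation problem can be tried out numerically. Solving the approximated equation for y(n) gives y(n) = (y(n−1) + T·x(n)) / (1 + aT); a sketch (the values a = 1 and T = 10⁻³ are chosen for illustration, not taken from the problem) compares the simulated unit-step response with the analytic answer (1 − e^(−at))/a:

```python
import math

a, T = 1.0, 1e-3      # illustrative parameter values
N = 1000              # simulate out to t = N*T = 1 second
y = 0.0
for n in range(1, N + 1):
    x = 1.0           # unit-step input: u(t) = 1 for t >= 0
    y = (y + T * x) / (1 + a * T)   # difference equation solved for y(n)

exact = (1 - math.exp(-a * N * T)) / a   # analytic step response at t = 1
assert abs(y - exact) < 0.01             # close when T is small
```

Shrinking T tightens the agreement, which previews part (c): the approximation works well when T is small compared to the signal's time scale.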

Problem 5.16: The DFT
Let's explore the DFT and its properties.
a) What is the length-K DFT of the length-N boxcar sequence, where N < K?
b) Consider the special case where K = 4. Find the inverse DFT of the product of the DFTs of two length-3 boxcars.
c) If we could use DFTs to perform linear filtering, it should be true that the product of the input's DFT and the unit-sample response's DFT equals the output's DFT. So that you can use what you just calculated, let the input be a boxcar signal and the unit-sample response also be a boxcar. The result of part (b) would then be the filter's output if we could implement the filter with length-4 DFTs. Does the actual output of the boxcar filter equal the result found in the previous part?
d) What would you need to change so that the product of the DFTs of the input and unit-sample response in this case equaled the DFT of the filtered output?

Problem 5.17: DSP Tricks
Sammy is faced with computing lots of discrete Fourier transforms. He will, of course, use the FFT algorithm, but he is behind schedule and needs to get his results as quickly as possible. He gets the idea of computing two transforms at one time by computing the transform of s(n) = s1(n) + j·s2(n), where s1(n) and s2(n) are two real-valued signals of which he needs to compute the spectra. The issue is whether he can retrieve the individual DFTs from the result or not.
a) What will be the DFT S(k) of this complex-valued signal in terms of S1(k) and S2(k), the DFTs of the original signals?
b) Sammy's friend, an Aggie who knows some signal processing, says that retrieving the wanted DFTs is easy: Just find the real and imaginary parts of S(k). Show that this approach is too simplistic.
c) While his friend's idea is not correct, it does give him an idea. What approach will work? Hint: Use the symmetry properties of the DFT.
d) How does the number of computations change with this approach? Will Sammy's idea ultimately lead to a faster computation of the required DFTs?

Problem 5.18: Discrete Cosine Transform (DCT)
The discrete cosine transform of a length-N sequence is defined to be
Sc(k) = Σ_{n=0}^{N−1} s(n) cos(2πnk / 2N)
Note that the number of frequency terms is 2N − 1: k = {0, ..., 2N − 1}.
a) Find the inverse DCT.
b) Does a Parseval's Theorem hold for the DCT?
c) You choose to transmit information about the signal s(n) according to the DCT coefficients. If you could only send one, which one would you send?

Problem 5.19: A Digital Filter
A digital filter is described by the following difference equation:
y(n) = a·y(n−1) + a·x(n) − x(n−1), a = 1/√2.
a) What is this filter's unit sample response?
b) What is this filter's transfer function?
c) What is this filter's output when the input is sin(πn/4)?
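Sammy's two-transforms-at-once idea from the DSP Tricks problem can be verified numerically. The symmetry-based retrieval that the hint points toward is S1(k) = (S(k) + S*((N−k) mod N))/2 and S2(k) = (S(k) − S*((N−k) mod N))/(2j); a sketch with two arbitrary real test signals:

```python
import cmath, math

def dft(seq):
    """Direct DFT, fine for short verification-sized signals."""
    N = len(seq)
    return [sum(seq[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

s1 = [1.0, 2.0, -1.0, 0.5, 0.0, 3.0, -2.0, 1.5]   # arbitrary real signals
s2 = [0.5, -1.0, 2.0, 2.0, -3.0, 1.0, 0.0, 4.0]
N = len(s1)

S = dft([a + 1j * b for a, b in zip(s1, s2)])      # one complex-valued DFT
# Conjugate symmetry of real-signal spectra splits the two DFTs apart:
S1 = [(S[k] + S[(N - k) % N].conjugate()) / 2 for k in range(N)]
S2 = [(S[k] - S[(N - k) % N].conjugate()) / (2j) for k in range(N)]

ref1, ref2 = dft(s1), dft(s2)
assert max(abs(a - b) for a, b in zip(S1, ref1)) < 1e-9
assert max(abs(a - b) for a, b in zip(S2, ref2)) < 1e-9
```

Simply taking real and imaginary parts of S(k) (the friend's suggestion) would fail whenever S1(k) or S2(k) is itself complex, which is the generic case.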

Problem 5.20: Another Digital Filter
A digital filter is determined by the following difference equation:
y(n) = y(n−1) + x(n) − x(n−4)
a) Find this filter's unit sample response.
b) What is the filter's transfer function? How would you characterize this filter (lowpass, highpass, special purpose, ...)?
c) Find the filter's output when the input is the sinusoid sin(πn/2).
d) In another case, the input sequence is zero for n < 0, then becomes nonzero. Sammy measures the output to be y(n) = δ(n) + δ(n−1). Can his measurement be correct? In other words, is there an input that can yield this output? If so, find the input x(n) that gives rise to this output; if not, why not?

Problem 5.21: Yet Another Digital Filter
A filter has an input-output relationship given by the difference equation
y(n) = (1/4)x(n) + (1/2)x(n−1) + (1/4)x(n−2)
a) What is the filter's transfer function? How would you characterize it?
b) What is the filter's output when the input equals cos(πn/2)?
c) What is the filter's output when the input is the depicted discrete-time square wave (Figure 5.32)?

[Figure 5.32: the discrete-time square wave x(n), alternating between 1 and −1.]
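For the filter y(n) = (1/4)x(n) + (1/2)x(n−1) + (1/4)x(n−2), evaluating the transfer function at f = 1/4 gives H = 1/4 − j/2 − 1/4 = −j/2, so the steady-state response to cos(πn/2) should be (1/2)cos(πn/2 − π/2) = (1/2)sin(πn/2). A quick check (a sketch, not part of the problem set):

```python
import math

x = [math.cos(math.pi * n / 2) for n in range(40)]
# Apply the difference equation directly, starting once x(n-2) exists:
y = [0.25 * x[n] + 0.5 * x[n - 1] + 0.25 * x[n - 2] for n in range(2, 40)]

# After the 2-sample start-up transient, output matches (1/2)sin(pi*n/2)
expected = [0.5 * math.sin(math.pi * n / 2) for n in range(2, 40)]
assert max(abs(a - b) for a, b in zip(y, expected)) < 1e-9
```

The half-amplitude, quarter-period-delayed sinusoid is exactly what the transfer-function magnitude (1/2) and phase (−π/2) at f = 1/4 predict.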

Problem 5.22: A Digital Filter in the Frequency Domain
We have a filter with the transfer function
H(e^(j2πf)) = e^(−j2πf) cos(2πf)
operating on the input signal x(n) = δ(n) − δ(n−2) that yields the output y(n).
a) What is the filter's unit-sample response?
b) What is the discrete-Fourier transform of the output?
c) What is the time-domain expression for the output?

Problem 5.23: Digital Filters
A discrete-time system is governed by the difference equation
y(n) = y(n−1) + (x(n) + x(n−1))/2
a) Find the transfer function for this system.
b) What is this system's output when the input is sin(πn/2)?
c) If the output is observed to be y(n) = δ(n) + δ(n−1), then what is the input?

Problem 5.24: Digital Filtering
A digital filter has an input-output relationship expressed by the difference equation
y(n) = (x(n) + x(n−1) + x(n−2) + x(n−3))/4
a) Plot the magnitude and phase of this filter's transfer function.
b) What is this filter's output when x(n) = cos(πn/2) + 2 sin(2πn/3)?

Problem 5.25: Detective Work
The signal x(n) equals δ(n) − δ(n−1).
a) Find the length-8 DFT (discrete Fourier transform) of this signal.
b) You are told that when x(n) served as the input to a linear FIR (finite impulse response) filter, the output was y(n) = δ(n) − δ(n−1) + 2δ(n−2). Is this statement true? If so, indicate why and find the system's unit sample response; if not, show why not.

Problem 5.26
A discrete-time, shift-invariant, linear system produces an output y(n) = {1, −1, 0, 0, ...} when its input x(n) equals a unit sample.
a) Find the difference equation governing the system.
b) Find the output when x(n) = cos(2πf0·n).
c) How would you describe this system's function?
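The difference equation y(n) = y(n−1) + x(n) − x(n−4) from Problem 5.20 above looks recursive, but computing its unit-sample response shows it is actually FIR, a length-4 boxcar. A sketch:

```python
N = 12
x = [1] + [0] * (N - 1)              # unit-sample input
y = []
for n in range(N):
    prev = y[n - 1] if n >= 1 else 0     # y(n-1), zero before the input starts
    x4 = x[n - 4] if n >= 4 else 0       # x(n-4)
    y.append(prev + x[n] - x4)

assert y == [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]   # a length-4 boxcar
```

The pole of the accumulator is cancelled by the zero of 1 − z⁻⁴, so the "recursive" filter has a finite-duration unit-sample response.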

Problem 5.27: Time Reversal Has Uses
A discrete-time system has transfer function H(e^(j2πf)). A signal x(n) is passed through this system to yield the signal w(n). The time-reversed signal w(−n) is then passed through the system to yield the time-reversed output y(−n). What is the transfer function between x(n) and y(n)?

Problem 5.28: Removing Hum
The slang word hum represents power line waveforms that creep into signals because of poor circuit construction. Usually, the 60 Hz signal (and its harmonics) are added to the desired signal. What we seek are filters that can remove hum. In this problem, the signal and the accompanying hum have been sampled; we want to design a digital filter for hum removal.
a) Find filter coefficients for the length-3 FIR filter that can remove a sinusoid having digital frequency f0 from its input.
b) Assuming the sampling rate is fs, to what analog frequency does f0 correspond?
c) A more general approach is to design a filter having a frequency response magnitude proportional to the absolute value of a cosine: |H(e^(j2πf))| ∝ |cos(πfN)|. In this way, not only the fundamental but also its first few harmonics can be removed. Select the parameter N and the sampling rate so that the frequencies at which the cosine equals zero correspond to 60 Hz and its odd harmonics through the fifth.
d) Find the difference equation that defines this filter.

Problem 5.29: Digital AM Receiver
Thinking that digital implementations are always better, our clever engineer wants to design a digital AM receiver. The receiver would bandpass the received signal, pass the result through an A/D converter, perform all the demodulation with digital signal processing systems, and end with a D/A converter to produce the analog message signal. Assume in this problem that the carrier frequency is always a large even multiple of the message signal's bandwidth W.
a) What is the smallest sampling rate that would be needed?
b) Show the block diagram of the least complex digital AM receiver.
c) Assuming the channel adds white noise and that a b-bit A/D converter is used, what is the output's signal-to-noise ratio?

Problem 5.30: DFTs
A problem on Samantha's homework asks for the 8-point DFT of the discrete-time signal δ(n−1) + δ(n−7).
a) What answer should Samantha obtain?
b) As a check, her group partner Sammy says that he computed the inverse DFT of her answer and got δ(n+1) + δ(n−1). Does Sammy's result mean that Samantha's answer is wrong?
c) The homework problem says to lowpass-filter the sequence by multiplying its DFT by
H(k) = 1 if k = {0, 1, 7}, and 0 otherwise,
and then computing the inverse DFT. Will this filtering algorithm work? If so, find the filtered output; if not, why not?
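Samantha's homework answer can be computed directly: the 8-point DFT of δ(n−1) + δ(n−7) is e^(−j2πk/8) + e^(−j2π·7k/8) = 2cos(2πk/8), a purely real spectrum. A sketch:

```python
import cmath, math

x = [0, 1, 0, 0, 0, 0, 0, 1]       # delta(n-1) + delta(n-7)
N = len(x)
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

for k in range(N):
    assert abs(X[k].imag) < 1e-9                       # spectrum is real
    assert abs(X[k].real - 2 * math.cos(2 * math.pi * k / N)) < 1e-9
```

The key step is that e^(−j2π·7k/8) = e^(+j2πk/8), so the two exponentials combine into a cosine; a real, even spectrum is exactly what a signal that is even about n = 0 (modulo 8) should produce.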

Problem 5.31: Stock Market Data Processing
Because a trading week lasts five days, stock markets frequently compute running averages each day over the previous five trading days to smooth price fluctuations. The technical stock analyst at the Buy-Lo-Sell-Hi brokerage firm has heard that FFT filtering techniques work better than any others (in terms of producing more accurate averages).
a) What is the difference equation governing the five-day averager for daily stock prices?
b) Design an efficient FFT-based filtering algorithm for the broker. How much data should be processed at once to produce an efficient algorithm? What length transform should be used?
c) Is the analyst's information correct that FFT techniques produce more accurate averages than any others? Why or why not?

Problem 5.32: Echoes
Echoes not only occur in canyons, but also in auditoriums and telephone circuits. In one situation where the echoed signal has been sampled, the input signal x(n) emerges as
x(n) + a1·x(n − n1) + a2·x(n − n2).
a) Find the difference equation of the system that models the production of echoes.
b) To simulate this echo system, ELEC 241 students are asked to write the most efficient (quickest) program that has the same input-output relationship. Suppose the duration of x(n) is 1,000 and that a1 = 1/2, n1 = 10, a2 = 1/5, and n2 = 25. Half the class votes to just program the difference equation while the other half votes to program a frequency domain approach that exploits the speed of the FFT. Because of the undecided vote, you must break the tie. Which approach is more efficient and why?
c) Find the transfer function and difference equation of the system that suppresses the echoes. In other words, with the echoed signal as the input, what system's output is the signal x(n)?

Problem 5.33: Digital Filtering of Analog Signals
RU Electronics wants to develop a filter that would be used in analog applications, but that is implemented digitally. The filter is to operate on signals that have a 10 kHz bandwidth, and will serve as a lowpass filter.
a) What is the block diagram for your filter implementation? Explicitly denote which components are analog, which are digital (a computer performs the task), and which interface between the analog and digital worlds.
b) What sampling rate must be used and how many bits must be used in the A/D converter for the acquired signal's signal-to-noise ratio to be at least 60 dB? For this calculation, assume the signal is a sinusoid.
c) If the filter is a length-128 FIR filter (the duration of the filter's unit-sample response equals 128), should it be implemented in the time or frequency domain?
d) Assuming H(e^(j2πf)) is the transfer function of the digital filter, what is the transfer function of your system?

Problem 5.34: Signal Compression
Because of the slowness of the Internet, lossy signal compression becomes important if you want signals to be received quickly. An enterprising 241 student has proposed a scheme based on frequency-domain processing. First of all, he would section the signal into length-N blocks and compute its N-point DFT. He then would discard (zero the spectrum at) half of the frequencies, quantize the rest to b bits, and send these over the network. The receiver would assemble the transmitted spectrum and compute the inverse DFT, thus reconstituting an N-point block.

a) At what frequencies should the spectrum be zeroed to minimize the error in this lossy compression scheme?
b) The nominal way to represent a signal digitally is to use simple b-bit quantization of the time-domain waveform. How long should a section be in the proposed scheme so that the required number of bits/sample is smaller than that nominally required?
c) Assuming that effective compression can be achieved, would the proposed scheme yield satisfactory results?
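The echo system of the Echoes problem above and its suppressor can be simulated directly: running the echoed signal through the recursive inverse x̂(n) = y(n) − a1·x̂(n−n1) − a2·x̂(n−n2) recovers the original exactly. A sketch using the stated parameters a1 = 1/2, n1 = 10, a2 = 1/5, n2 = 25 (the test input is arbitrary):

```python
import math

a1, n1, a2, n2 = 0.5, 10, 0.2, 25
x = [math.sin(0.1 * n) for n in range(200)]       # an arbitrary test input

def delayed(s, n, d):
    """s(n - d), treating samples before the signal starts as zero."""
    return s[n - d] if n >= d else 0.0

# Echo production: y(n) = x(n) + a1*x(n-n1) + a2*x(n-n2)
y = [x[n] + a1 * delayed(x, n, n1) + a2 * delayed(x, n, n2)
     for n in range(len(x))]

# Echo suppression: xh(n) = y(n) - a1*xh(n-n1) - a2*xh(n-n2)
xh = []
for n in range(len(y)):
    xh.append(y[n] - a1 * delayed(xh, n, n1) - a2 * delayed(xh, n, n2))

assert max(abs(a - b) for a, b in zip(xh, x)) < 1e-9
```

The suppressor's transfer function is the reciprocal of the echo system's, so the FIR echo model has an IIR (recursive) inverse, which is why the suppressor must feed back its own past outputs.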

Solutions to Exercises in Chapter 5

Solution to Exercise 5.2.1 (p. 171)
For b-bit signed integers, the largest number is 2^(b−1) − 1. For b = 32, we have 2,147,483,647 and for b = 64, we have 9,223,372,036,854,775,807 or about 9.2 × 10^18.

Solution to Exercise 5.2.2 (p. 172)
In floating point, the number of bits in the exponent determines the largest and smallest representable numbers. For 32-bit floating point, the largest (smallest) numbers are 2^(±127) = 1.7 × 10^38 (5.9 × 10^−39). For 64-bit floating point, the largest number is about 10^9863.

Solution to Exercise 5.2.3 (p. 173)
25 = 11001₂ and 7 = 111₂. We find that 11001₂ + 111₂ = 100000₂ = 32.

Solution to Exercise 5.3.1 (p. 176)
The simplest bandlimited signal is the sine wave. At the Nyquist frequency, exactly two samples/period would occur. Reducing the sampling rate would result in fewer samples/period, and these samples would appear to have arisen from a lower frequency sinusoid.

Solution to Exercise 5.3.2 (p. 176)
The square wave's spectrum is shown by the bolder set of lines centered about the origin. The dashed lines correspond to the frequencies about which the spectral repetitions (due to sampling with Ts = 1) occur. As the square wave's period decreases, the negative frequency lines move to the left and the positive ones to the right.

[Figure 5.33: the square wave's spectrum and its spectral repetitions about f = −1 and f = 1, for T = 4 and T = 3.5.]

Solution to Exercise 5.3.3 (p. 176)
The only effect of pulse duration is to unequally weight the spectral repetitions. Because we are only concerned with the repetition centered about the origin, the pulse duration has no significant effect on recovering a signal from its samples.

Solution to Exercise 5.4.1 (p. 177)
The plotted temperatures were quantized to the nearest degree. Thus, the high temperature's amplitude was quantized as a form of A/D conversion.

Solution to Exercise 5.4.2 (p. 178)
The signal-to-noise ratio does not depend on the signal amplitude. With an A/D range of [−A, A], the quantization interval is ∆ = 2A/2^B and the signal's rms value (again assuming it is a sinusoid) is A/√2.

Solution to Exercise 5.4.3 (p. 178)
A 16-bit A/D converter yields a SNR of 6 × 16 + 10·log10(1.5) = 97.8 dB.

Solution to Exercise 5.4.4 (p. 181)
Solving 2^−B = .001 results in B = 10 bits.

Solution to Exercise 5.6.1 (p. 186)
If the sampling frequency exceeds the Nyquist frequency, the spectrum of the samples equals the analog spectrum, but over the normalized analog frequency fT. Thus, the energy in the sampled signal equals the original signal's energy multiplied by 1/Ts.

Solution to Exercise 5.7.1 (p. 184)
α · Σ_{n=n0}^{N+n0−1} α^n − Σ_{n=n0}^{N+n0−1} α^n = α^(N+n0) − α^(n0), which, after manipulation, yields the geometric sum formula.

Solution to Exercise 5.8.1 (p. 187)
S(e^(j2π(f+1))) = Σ_{n=−∞}^{∞} s(n) e^(−j2π(f+1)n) = Σ_{n=−∞}^{∞} e^(−j2πn) s(n) e^(−j2πfn) = Σ_{n=−∞}^{∞} s(n) e^(−j2πfn) = S(e^(j2πf))   (5.56)

Solution to Exercise 5.8.2 (p. 188)
The result is again the same.

Solution to Exercise 5.9.1 (p. 189)
When only K frequencies are needed, the complexity is O(KN). When the signal is real-valued, we may only need half the spectral values, but the complexity remains unchanged.

Solution to Exercise 5.9.2 (p. 191)
The transform can have any length greater than or equal to the actual duration of the signal. We simply pad the signal with zero-valued samples until a computationally advantageous signal length results. To use the Cooley-Tukey algorithm, the length of the resulting zero-padded signal can be 512, 1024, etc. samples long, and the FFT algorithm can be exploited with these lengths. Extending the length of the signal this way merely means we are sampling the frequency axis more finely than required; using a shorter transform than the signal's duration amounts to aliasing in the time-domain.

Solution to Exercise 5.9.3 (p. 191)
The upper panel has not used the FFT algorithm to compute the length-4 DFTs while the lower one has. The ordering is determined by the algorithm.

Solution to Exercise 5.10.1 (p. 192)
The number of samples equals 1.2 × 11025 = 13230. With a 16-bit A/D converter, the storage required would be 26460 bytes, and the datarate is 11025 × 16 = 176.4 kbps. These numbers are powers-of-two multiples only approximately; the transform lengths used would be chosen accordingly.

Solution to Exercise 5.10.2 (p. 192)
If a DFT required 1 ms to compute, a signal having ten times the duration would require 100 ms to compute. Using the FFT, a 1 ms computing time would increase by a factor of about 10·log₂10 = 33, a factor of 3 less than the DFT would have needed.

Solution to Exercise 5.10.3 (p. 195)
The oscillations are due to the boxcar window's Fourier transform, which equals the sinc function.
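The roughly 6-dB-per-bit rule used in these solutions can be checked empirically. For a full-range sinusoid quantized by rounding with a B-bit converter, SNR ≈ 6.02B + 1.76 dB; a sketch for B = 8 (expected value near 49.9 dB):

```python
import math

B = 8
delta = 2.0 / 2 ** B                      # quantization interval over [-1, 1]
phi = (math.sqrt(5) - 1) / 2              # irrational step so phases fill [0, 1)

err_power = sig_power = 0.0
M = 100000
for k in range(M):
    s = math.sin(2 * math.pi * phi * k)   # full-range sinusoid samples
    q = round(s / delta) * delta          # rounding quantizer
    sig_power += s * s
    err_power += (q - s) ** 2

snr_db = 10 * math.log10(sig_power / err_power)
assert 48.0 < snr_db < 52.0               # near 6.02*8 + 1.76 = 49.9 dB
```

Each extra bit halves the quantization interval, quartering the error power, which is where the 6 dB/bit rule of thumb comes from.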

Solution to Exercise 5.11.1 (p. 195)
In discrete-time signal processing, an amplifier amounts to a multiplication, a very easy operation to perform.

Solution to Exercise 5.12.1 (p. 197)
The indices can be negative, and this condition is not allowed in MATLAB. To fix it, we must start the signals later in the array.

Solution to Exercise 5.12.2 (p. 198)
Such terms would require the system to know what future input or output values would be before the current value was computed. Thus, such terms can cause difficulties.

Solution to Exercise 5.13.1 (p. 200)
It now acts like a bandpass filter with a center frequency of f0 and a bandwidth equal to twice that of the original lowpass filter.

Solution to Exercise 5.13.2 (p. 201)
The DTFT of the unit sample equals a constant (equaling 1). Thus, the Fourier transform of the output equals the transfer function.

Solution to Exercise 5.13.3 (p. 201)
In sampling a discrete-time signal's Fourier transform L times equally over [0, 2π) to form the DFT, the corresponding signal equals the periodic repetition of the original signal:
S(k) ↔ Σ_{i=−∞}^{∞} s(n − iL)   (5.57)
To avoid aliasing (in the time domain), the transform length must equal or exceed the signal's duration.

Solution to Exercise 5.13.4 (p. 202)
The difference equation for an FIR filter has the form
y(n) = Σ_{m=0}^{q} bm · x(n − m)   (5.58)
The unit-sample response equals
h(n) = Σ_{m=0}^{q} bm · δ(n − m)   (5.59)
which corresponds to the representation described in a problem (Example 5.6) of a length-q boxcar filter.

Solution to Exercise 5.13.5 (p. 202)
The unit-sample response's duration is q + 1 and the signal's is Nx. Thus the statement is correct.

Solution to Exercise 5.14.1 (p. 205)
Let N denote the input's total duration. The time-domain implementation requires a total of N(2q + 1) computations, or 2q + 1 computations per input value. In the frequency domain, we split the input into N/Nx sections, each of which requires (1 + q/Nx) log₂(Nx + q) + 7q/Nx + 6 computations per input in the section. Because we divide again by Nx to find the number of computations per input value in the entire input, this quantity decreases as Nx increases. For the time-domain implementation, it stays constant.

Solution to Exercise 5.14.2 (p. 207)
The delay is not computational delay here (the plot shows the first output value is aligned with the filter's first input), although in real systems this is an important consideration. Rather, the delay is due to the filter's phase shift: a phase-shifted sinusoid is equivalent to a time-delayed one: cos(2πfn − φ) = cos(2πf(n − φ/(2πf))). All filters have phase shifts. This delay could be removed if the filter introduced no phase shift. Such filters do not exist in analog form, but digital ones can be programmed; however, they do not work in real time. Doing so would require the output to emerge before the input arrives!

Solution to Exercise 5.16.1 (p. 208)
We have p + q + 1 multiplications and p + q − 1 additions. Thus, the total number of arithmetic operations equals 2(p + q).


Chapter 6: Information Communication

6.1 Information Communication [1]

As far as a communications engineer is concerned, signals express information. Because systems manipulate signals, they also affect the information content. Information comes neatly packaged in both analog and digital forms. Speech, for example, is clearly an analog signal. Computer files consist of a sequence of bytes, a form of "discrete-time" signal despite the fact that the index sequences byte position, not time sample. Communication systems endeavor not to manipulate information, but to transmit it from one place to another, so-called point-to-point communication; from one place to many others, broadcast communication; or from many to many, like a telephone conference call or a chat room. Communication systems can be fundamentally analog, like radio, or digital, like computer networks.

This chapter develops a common theory that underlies how such systems work. We describe and analyze several such systems, some old like AM radio, some new like computer networks. The question as to which is better, analog or digital communication, has been answered, because of Claude Shannon's fundamental work on a theory of information published in 1948, the development of high-speed, high-performance computers, and the creation of high-bandwidth communication systems: the answer is to use a digital communication strategy. In most cases, you should convert all information-bearing signals into discrete-time, amplitude-quantized signals. Fundamentally digital signals, like computer files (which are a special case of symbolic signals), are already in the proper form. Because of the Sampling Theorem, we know how to convert analog signals into digital ones. Shannon showed that once in this form, a properly engineered system can communicate digital information with no error despite the fact that the communication channel thrusts noise onto all transmissions. This startling result has no counterpart in analog systems; AM radio will remain noisy. The convergence of these theoretical and engineering results on communications systems has had important consequences in other arenas. The audio compact disc (CD) and the digital videodisk (DVD) are now considered digital communications systems, with communication design considerations used throughout.

Go back to the fundamental model of communication (Figure 1.3: Fundamental model of communication). Communications design begins with two fundamental considerations.
1. What is the nature of the information source, and to what extent can the receiver tolerate errors in the received information?
2. What are the channel's characteristics and how do they affect the transmitted signal?
In short, what are we going to send and how are we going to send it? Interestingly, digital as well as analog transmission are accomplished using analog signals, like voltages in Ethernet (an example of wireline communications) and electromagnetic radiation (wireless) in cellular telephone.

[1] This content is available online at <http://cnx.org/...>.

6.2 Types of Communication Channels

Electrical communications channels are either wireline or wireless channels. Wireline channels physically connect transmitter to receiver with a "wire," which could be a twisted pair, coaxial cable, or optic fiber. Consequently, wireline channels are more private and much less prone to interference. Simple wireline channels connect a single transmitter to a single receiver: a point-to-point connection, as with the telephone. Listening in on a conversation requires that the wire be tapped and the voltage measured. Some wireline channels operate in broadcast modes: one or more transmitters are connected to several receivers. One simple example of this situation is cable television. Computer networks can be found that operate in point-to-point or in broadcast modes.

Wireless channels are much more public, with a transmitter's antenna radiating a signal that can be received by any antenna sufficiently close by. In contrast to wireline channels, where the receiver takes in only the transmitter's signal, the receiver's antenna will react to electromagnetic radiation coming from any source. This feature has two faces: The smiley face says that a receiver can take in transmissions from any source, letting receiver electronics select wanted signals and disregard others, thereby allowing portable transmission and reception; the frowny face says that interference and noise are much more prevalent than in wireline situations. A noisier channel subject to interference compromises the flexibility of wireless communication.

note: You will hear the term tetherless networking applied to completely wireless computer networks.

Maxwell's equations neatly summarize the physics of all electromagnetic phenomena, including circuits, radio, and optic fiber transmission.

∇ × E = −∂(µH)/∂t
div(εE) = ρ
∇ × H = σE + ∂(εE)/∂t
div(µH) = 0        (6.1)

where E is the electric field, H the magnetic field, ε the dielectric permittivity, µ the magnetic permeability, σ the electrical conductivity, and ρ the charge density. Kirchhoff's Laws represent special cases of these equations for circuits. We are not going to solve Maxwell's equations here; do bear in mind that a fundamental understanding of communications channels ultimately depends on fluency with Maxwell's equations. Perhaps the most important aspect of them is that they are linear with respect to the electrical and magnetic fields. Thus, the fields (and therefore the voltages and currents) resulting from two or more sources will add.

note: Nonlinear electromagnetic media do exist. The equations as written here are simpler versions that apply to free-space propagation and conduction in metals. Nonlinear media are becoming increasingly important in optic fiber communications, which are also governed by Maxwell's equations.

6.3 Wireline Channels

Wireline channels were the first used for electrical communications, in the mid-nineteenth century for the telegraph. Here, the channel is one of several wires connecting transmitter to receiver.

The transmitter simply creates a voltage related to the message signal and applies it to the wire(s). We must have a circuit, a closed path, that supports current flow. In the case of single-wire communications, the earth is used as the current's return path. In fact, the term ground for the reference node in circuits originated in single-wire telegraphs. You can imagine that the earth's electrical characteristics are highly variable, and they are. Single-wire metallic channels cannot support high-quality signal transmission having a bandwidth beyond a few hundred Hertz over any appreciable distance.

Consequently, most wireline channels today essentially consist of pairs of conducting wires, and the transmitter applies a message-related voltage across the pair. How these pairs of wires are physically configured greatly affects their transmission characteristics. One example is twisted pair, wherein the wires are wrapped about each other; telephone cables are one example of a twisted pair channel. Another is coaxial cable, fondly called "co-ax" by engineers, where a concentric conductor surrounds a central wire with a dielectric material in between (Figure 6.1: Coaxial Cable Cross-section). Coaxial cable is what Ethernet uses as its channel. This type of cable supports broader bandwidth signals than twisted pair, and finds use in cable television and Ethernet. As we shall find subsequently, several transmissions can share the circuit by amplitude modulation techniques; commercial cable TV is an example. These information-carrying circuits are designed so that interference from nearby electromagnetic sources is minimized. Thus, by the time signals arrive at the receiver, they are relatively interference- and noise-free.

Figure 6.1: Coaxial Cable Cross-section. A central conductor (radius ri, conductivity σ) is surrounded by a dielectric (outer radius rd, properties σd, εd, µd), an outer conductor, and insulation.

Both twisted pair and co-ax are examples of transmission lines, which all have the circuit model shown in Figure 6.2 (Circuit Model for a Transmission Line) for an infinitesimally small length. This circuit model arises from solving Maxwell's equations for the particular transmission line geometry.

Figure 6.2: Circuit Model for a Transmission Line. The so-called distributed parameter model for two-wire cables has the depicted circuit model structure: each infinitesimal length ∆x contributes a series resistance R̃∆x and inductance L̃∆x, and a shunt conductance G̃∆x and capacitance C̃∆x, relating the voltages V(x−∆x), V(x), V(x+∆x) and currents I(x−∆x), I(x), I(x+∆x). Element values depend on geometry and the properties of materials used to construct the transmission line.

The voltage between the two conductors and the current flowing through them will depend on distance x along the transmission line as well as time. We express this dependence as v(x, t) and i(x, t). When we place a sinusoidal source at one end of the transmission line, these voltages and currents will also be sinusoidal because the transmission line model consists of linear circuit elements.

Note that all the circuit elements have values expressed by the product of a constant times a length; this notation represents that element values here have per-unit-length units. For example, the series resistance R̃ has units of ohms/meter.

The series resistance comes from the conductor used in the wires and from the conductor's geometry. The inductance and the capacitance derive from transmission line geometry, and the parallel conductance from the medium between the wire pair. For coaxial cable, the element values depend on the inner conductor's radius ri, the outer radius of the dielectric rd, the conductivity of the conductors σ, and the conductivity σd, dielectric constant εd, and magnetic permeability µd of the dielectric as

R̃ = (1/(2πδσ)) (1/rd + 1/ri)
C̃ = 2πεd / ln(rd/ri)
G̃ = 2πσd / ln(rd/ri)
L̃ = (µd/(2π)) ln(rd/ri)        (6.2)

For twisted pair, having a separation d between the conductors that have conductivity σ and common radius r and that are immersed in a medium having dielectric and magnetic properties, the element values are then

R̃ = 1/(πrδσ)
C̃ = πεd / arccosh(d/(2r))
G̃ = πσd / arccosh(d/(2r))
L̃ = (µ/π) (δ/(2r) + arccosh(d/(2r)))        (6.3)

Here δ denotes the conductors' skin depth at the signal frequency.
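As a numeric sanity check of the per-unit-length formulas in (6.2), the sketch below evaluates them for an illustrative coaxial geometry. The dimensions and material constants are loosely RG-58-like assumptions for the sake of example, not values from the text.

```python
import math

def coax_parameters(r_i, r_d, sigma, sigma_d, eps_d, mu_d, f):
    """Per-unit-length R, C, G, L for coaxial cable, equation (6.2).
    delta is the conductors' skin depth at frequency f (the conductor is
    assumed non-magnetic, so its permeability is taken as mu0)."""
    mu0 = 4e-7 * math.pi
    delta = 1.0 / math.sqrt(math.pi * f * mu0 * sigma)               # skin depth, m
    R = (1.0 / (2 * math.pi * delta * sigma)) * (1 / r_d + 1 / r_i)  # ohm/m
    C = 2 * math.pi * eps_d / math.log(r_d / r_i)                    # F/m
    G = 2 * math.pi * sigma_d / math.log(r_d / r_i)                  # S/m
    L = (mu_d / (2 * math.pi)) * math.log(r_d / r_i)                 # H/m
    return R, C, G, L

eps0, mu0 = 8.854e-12, 4e-7 * math.pi
R, C, G, L = coax_parameters(
    r_i=0.45e-3, r_d=1.475e-3,         # inner-conductor and dielectric radii
    sigma=5.8e7,                       # copper conductivity
    sigma_d=1e-15, eps_d=2.25 * eps0,  # polyethylene-like dielectric
    mu_d=mu0, f=10e6)
print(R, C, G, L)   # C comes out near 100 pF/m, typical of 50-ohm coax
```

With these assumed numbers, C̃ is about 105 pF/m and L̃ about 0.24 µH/m, consistent with the roughly 50 Ω characteristic impedance commonly quoted for such cable.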

As is customary in analyzing linear circuits, we express voltages and currents as the real part of complex exponential signals, and write circuit variables as a complex amplitude, here dependent on distance, times a complex exponential: v(x, t) = Re(V(x)e^(j2πft)) and i(x, t) = Re(I(x)e^(j2πft)). Using the transmission line circuit model, we find from KCL, KVL, and v-i relations the equations governing the complex amplitudes.

KCL at center node:
I(x) = I(x−∆x) − V(x)(G̃ + j2πfC̃)∆x        (6.4)

V-I relation for the series elements:
V(x) − V(x+∆x) = I(x)(R̃ + j2πfL̃)∆x        (6.5)

Rearranging and taking the limit ∆x → 0 yields the so-called transmission line equations:

dI(x)/dx = −(G̃ + j2πfC̃)V(x)
dV(x)/dx = −(R̃ + j2πfL̃)I(x)        (6.6)

By combining these equations, we can obtain a single equation that governs how the voltage's or the current's complex amplitude changes with position along the transmission line. Taking the derivative of the second equation and plugging the first equation into the result yields the equation governing the voltage:

d²V(x)/dx² = (G̃ + j2πfC̃)(R̃ + j2πfL̃)V(x)        (6.7)

This equation's solution is

V(x) = V₊e^(−γx) + V₋e^(γx)        (6.8)

Calculating its second derivative and comparing the result with our equation for the voltage can check this solution:

d²V(x)/dx² = γ²(V₊e^(−γx) + V₋e^(γx)) = γ²V(x)        (6.9)

Our solution works so long as the quantity γ satisfies

γ = ±√((G̃ + j2πfC̃)(R̃ + j2πfL̃)) = ±(a(f) + jb(f))        (6.10)

Thus, γ depends on frequency, and we express it in terms of real and imaginary parts as indicated. The quantities V₊ and V₋ are constants determined by the source and physical considerations. For example, let the spatial origin be the middle of the transmission line model (Figure 6.2: Circuit Model for a Transmission Line). Because the voltage cannot increase without limit, physically possible solutions for the voltage amplitude cannot increase with distance along the transmission line. Expressing γ in terms of its real and imaginary parts in our solution shows that such increases are a (mathematical) possibility: the first term of V(x) = V₊e^(−(a+jb)x) + V₋e^((a+jb)x) will increase exponentially for x < 0 unless V₊ = 0 in this region, and a similar result applies to V₋ for x > 0 (because a(f) is always positive). Consequently, we must segregate the solution for negative and positive x. These physical constraints give us a cleaner solution:

V(x) = V₊e^(−(a+jb)x) if x > 0; V₋e^((a+jb)x) if x < 0        (6.11)

This solution suggests that voltages (and currents too) will decrease exponentially along a transmission line. The space constant, also known as the attenuation constant, is the distance over which the voltage decreases by a factor of 1/e. It equals the reciprocal of a(f), which depends on frequency, and is expressed by manufacturers in units of dB/m.

The presence of the imaginary part of γ, b(f), also provides insight into how transmission lines work. Because the solution for x > 0 is proportional to e^(−jbx), we know that the voltage's complex amplitude will vary sinusoidally in space. The complete solution for the voltage has the form

v(x, t) = Re(V₊ e^(−ax) e^(j(2πft − bx)))        (6.12)

The complex exponential portion has the form of a propagating wave. If we could take a snapshot of the voltage (take its picture at t = t₁), we would see a sinusoidally varying waveform along the transmission line. One period of this variation, known as the wavelength, equals λ = 2π/b. If we were to take a second picture at some later time t₂, we would also see a sinusoidal voltage, but delayed, shifted to the right, in space. Because 2πft₂ − bx = 2πf(t₁ + t₂ − t₁) − bx = 2πft₁ − b(x − 2πf(t₂ − t₁)/b), the second waveform appears to be the first one, and the voltage appeared to move to the right with a speed equal to 2πf/b (assuming b > 0). We denote this propagation speed by c, and it equals

c = | 2πf / Im(√((G̃ + j2πfC̃)(R̃ + j2πfL̃))) |        (6.13)

In the high-frequency region where j2πfL̃ ≫ R̃ and j2πfC̃ ≫ G̃, the quantity under the radical simplifies to −4π²f²L̃C̃, and we find the propagation speed to be

lim (f→∞) c = 1/√(L̃C̃)        (6.14)

For typical coaxial cable, this propagation speed is a fraction (one-third to two-thirds) of the speed of light.

Exercise 6.3.1 (Solution on p. 294.)
Find the propagation speed in terms of physical parameters for both the coaxial cable and twisted pair examples.

By using the second of the transmission line equations (6.6), we can solve for the current's complex amplitude. Considering the spatial region x > 0, for example, we find that dV(x)/dx = −γV(x) = −(R̃ + j2πfL̃)I(x), which means that the ratio of voltage and current complex amplitudes does not depend on distance:

V(x)/I(x) = √((R̃ + j2πfL̃)/(G̃ + j2πfC̃)) = Z₀        (6.15)

The quantity Z₀ is known as the transmission line's characteristic impedance. Note that when the signal frequency is sufficiently high, the characteristic impedance is real, which means the transmission line appears resistive in this high-frequency regime:

lim (f→∞) Z₀ = √(L̃/C̃)        (6.16)
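These relationships are easy to explore numerically. The sketch below computes γ = a + jb, the propagation speed 2πf/b, and Z₀ from per-unit-length values; the RLGC numbers are illustrative coax-like assumptions, not values from the text.

```python
import cmath
import math

def line_behavior(R, L, G, C, f):
    """Attenuation a(f), phase constant b(f), propagation speed, and Z0
    from per-unit-length parameters, following (6.10), (6.13), (6.15)."""
    series = complex(R, 2 * math.pi * f * L)   # R + j 2 pi f L
    shunt = complex(G, 2 * math.pi * f * C)    # G + j 2 pi f C
    gamma = cmath.sqrt(series * shunt)
    if gamma.real < 0:                         # pick the root with a(f) > 0
        gamma = -gamma
    a, b = gamma.real, gamma.imag
    speed = 2 * math.pi * f / b                # c = 2 pi f / b
    Z0 = cmath.sqrt(series / shunt)            # characteristic impedance
    return a, b, speed, Z0

# Illustrative values: L = 0.24 uH/m, C = 105 pF/m, small losses
a, b, speed, Z0 = line_behavior(R=0.4, L=0.24e-6, G=1e-9, C=105e-12, f=100e6)
print(speed / 3e8)   # fraction of the free-space speed of light (about 2/3 here)
print(abs(Z0))       # near sqrt(L/C), about 48 ohms in this regime
```

At 100 MHz these values are well into the high-frequency regime, so the computed speed and impedance essentially match the limits 1/√(L̃C̃) and √(L̃/C̃).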

Typical values for characteristic impedance are 50 and 75 Ω.

A related transmission line is the optic fiber. Here, the electromagnetic field is light, and it propagates down a cylinder of glass. In this situation, we don't have two conductors (in fact we have none), and the energy is propagating in what corresponds to the dielectric material of the coaxial cable. Optic fiber communication has exactly the same properties as other transmission lines: Signal strength decays exponentially according to the fiber's space constant and propagates at some speed less than light would in free space. From the encompassing view of Maxwell's equations, the only difference is the electromagnetic signal's frequency. Because no electric conductors are present and the fiber is protected by an opaque insulator, optic fiber transmission is interference-free.

Exercise 6.3.2 (Solution on p. 294.)
From tables of physical constants, find the frequency of a sinusoid in the middle of the visible light range. Compare this frequency with that of a mid-frequency cable television signal.

To summarize, we use transmission lines for high-frequency wireline signal communication. In wireline communication, we have a direct, physical connection, a circuit, between transmitter and receiver. When we select the transmission line characteristics and the transmission frequency so that we operate in the high-frequency regime, signals are not filtered as they propagate along the transmission line: The characteristic impedance is real-valued (the transmission line's equivalent impedance is a resistor) and all the signal's components at various frequencies propagate at the same speed. Transmitted signal amplitude does decay exponentially along the transmission line, but in the high-frequency regime the attenuation constant a(f) is small for a well-made cable, which means the attenuation is quite small.

Exercise 6.3.3 (Solution on p. 294.)
What is the limiting value of the space constant in the high frequency regime?

6.4 Wireless Channels

Wireless channels exploit the prediction made by Maxwell's equations that electromagnetic fields propagate in free space like light. When a voltage is applied to an antenna, it creates an electromagnetic field that propagates in all directions (although antenna geometry affects how much power flows in any given direction) and that induces electric currents in the receiver's antenna. Antenna geometry determines how energetic a field a voltage of a given frequency creates. For most antenna-based wireless systems, the dominant factor is the relation of the antenna's size to the field's wavelength. The fundamental equation relating frequency and wavelength for a propagating wave is

λf = c

Thus, wavelength and frequency are inversely related: High frequency corresponds to small wavelengths. For example, a 1 MHz electromagnetic field has a wavelength of 300 m. Antennas having a size or distance from the ground comparable to the wavelength radiate fields most efficiently. Consequently, the lower the frequency the bigger the antenna must be. Because most information signals are baseband signals, having spectral energy at low frequencies, they must be modulated to higher frequencies to be transmitted over wireless channels.

In general terms, how the signal diminishes as the receiver moves further from the transmitter derives by considering how radiated power changes with distance from the transmitting antenna. An antenna radiates a given amount of power into free space, and ideally this power propagates without loss in all directions. Considering a sphere centered at the transmitter, the total power, which is found by integrating the radiated power over the surface of the sphere, must be constant regardless of the sphere's radius. This requirement results from the conservation of energy.
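The relation λf = c is worth having at one's fingertips. A small sketch (the quarter-wave antenna-sizing rule in the comment is a common rule of thumb, an assumption rather than a claim from the text):

```python
def wavelength(f, c=3e8):
    """Free-space wavelength in meters, from lambda * f = c."""
    return c / f

for f in (1e6, 100e6, 1e9):
    lam = wavelength(f)
    # Rule of thumb (an assumption, not from the text): efficient antennas
    # are a sizable fraction of a wavelength, e.g. a quarter wave.
    print(f"{f/1e6:6.0f} MHz -> lambda = {lam:7.2f} m, lambda/4 = {lam/4:6.3f} m")
```

The 1 MHz case reproduces the 300 m wavelength quoted above, which is why AM broadcast antennas are so large.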
Thus, if p(d) represents the power integrated with respect to direction at a distance d from the antenna, the total power will be p(d)·4πd². For this quantity to be a constant, we must have

p(d) ∝ 1/d²

which means that the received signal amplitude A_R must be proportional to the transmitter's amplitude A_T and inversely related to distance from the transmitter:

A_R = kA_T/d        (6.17)

for some value of the constant k. Thus, the further from the transmitter the receiver is located, the weaker the received signal. Whereas the attenuation found in wireline channels can be controlled by physical parameters and choice of transmission frequency, the inverse-distance attenuation found in wireless channels persists across all frequencies.

Exercise 6.4.1 (Solution on p. 294.)
Why don't signals attenuate according to the inverse-square law in a conductor? What is the difference between the wireline and wireless cases?

The speed of propagation is governed by the dielectric permittivity ε₀ and magnetic permeability µ₀ of free space:

c = 1/√(µ₀ε₀) = 3 × 10⁸ m/s        (6.18)

Known familiarly as the speed of light, it sets an upper limit on how fast signals can propagate from one place to another. Because signals travel at a finite speed, a receiver senses a transmitted signal only after a time delay directly related to the propagation speed:

∆t = d/c

At the speed of light, a signal travels across the United States in 16 ms, a reasonably small time delay. If a lossless (zero space constant) coaxial cable connected the East and West coasts, this delay would be two to three times longer because of the slower propagation speed.

6.5 Line-of-Sight Transmission

Long-distance transmission over either kind of channel encounters attenuation problems. Losses in wireline channels are explored in the Circuit Models module (Section 6.3), where repeaters can extend the distance between transmitter and receiver beyond what passive losses the wireline channel imposes. In wireless channels, not only does radiation loss occur (p. 231), but also one antenna may not "see" another because of the earth's curvature.
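The two wireless-channel effects just described, inverse-distance amplitude decay and finite propagation delay, can be sketched in a few lines. The constant k and the distances are illustrative assumptions.

```python
C_LIGHT = 3e8  # m/s, free-space propagation speed

def received_amplitude(A_T, d, k=1.0):
    """A_R = k * A_T / d, equation (6.17); k lumps antenna and frequency
    effects and is purely illustrative here."""
    return k * A_T / d

def propagation_delay(d):
    """Delta t = d / c: time for the field to travel distance d."""
    return d / C_LIGHT

# Doubling the distance halves the amplitude (and quarters the power).
a1 = received_amplitude(1.0, 1000.0)
a2 = received_amplitude(1.0, 2000.0)
print(a1 / a2)                                 # 2.0

# Coast-to-coast delay quoted in the text: roughly 4800 km at light speed
print(propagation_delay(4.8e6) * 1e3, "ms")    # ~16 ms
```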

Figure 6.3: Two antennae are shown, each having the same height h above the earth's surface, where R is the earth's radius. Line-of-sight transmission means the transmitting and receiving antennae can "see" each other as shown. The maximum distance at which they can see each other, dLOS, occurs when the sighting line just grazes the earth's surface.

At the usual radio frequencies, propagating electromagnetic energy does not follow the earth's surface. Line-of-sight communication has the transmitter and receiver antennas in visual contact with each other. Assuming both antennas have height h above the earth's surface, the maximum line-of-sight distance is

dLOS = 2√(2hR + h²) ≈ 2√(2Rh)        (6.19)

where R is the earth's radius (6.38 × 10⁶ m).

Exercise 6.5.1 (Solution on p. 294.)
Derive the expression of line-of-sight distance using only the Pythagorean Theorem. Generalize it to the case where the antennas have different heights (as is the case with commercial radio and cellular telephone). What is the range of cellular telephone where the handset antenna has essentially zero height?

Exercise 6.5.2 (Solution on p. 295.)
Can you imagine a situation wherein global wireless communication is possible with only one transmitting antenna? In particular, what happens to wavelength when carrier frequency decreases?

Using a 100 m antenna would provide line-of-sight transmission over a distance of 71.4 km. Using such very tall antennas would provide wireless communication within a town or between closely spaced population centers. Consequently, networks of antennas sprinkle the countryside (each located on the highest hill possible) to provide long-distance wireless communications: Each antenna receives energy from one antenna and retransmits to another. This kind of network is known as a relay network.

6.6 The Ionosphere and Communications

If we were limited to line-of-sight communications, long distance wireless communication, like ship-to-shore communication, would be impossible. At the turn of the century, Marconi, the inventor of wireless telegraphy, boldly tried such long distance communication without any evidence, either empirical or theoretical, that it was possible. When the experiment worked, but only at night, physicists scrambled to determine why (using Maxwell's equations, of course). It was Oliver Heaviside, a mathematical physicist with strong engineering interests, who hypothesized that an invisible electromagnetic "mirror" surrounded the earth.
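Equation (6.19) is easy to check numerically; the sketch below reproduces the 71.4 km figure quoted above for 100 m antennas.

```python
import math

R_EARTH = 6.38e6  # m, earth radius value used in the text

def d_los(h, R=R_EARTH):
    """Maximum line-of-sight distance for two antennas of equal height h,
    equation (6.19): d_LOS = 2*sqrt(2*h*R + h^2) ~ 2*sqrt(2*R*h) for h << R."""
    return 2 * math.sqrt(2 * h * R + h * h)

print(d_los(100) / 1e3, "km")   # about 71.4 km for 100 m antennas
```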

What he meant was that at optical frequencies (and others, as it turned out), the mirror was transparent, but at the frequencies Marconi used, it reflected electromagnetic radiation back to earth. He had predicted the existence of the ionosphere, a plasma that encompasses the earth at altitudes hi between 80 and 180 km that reacts to solar radiation: It becomes transparent at Marconi's frequencies during the day, but becomes a mirror at night when solar radiation diminishes. The maximum distance along the earth's surface that can be reached by a single ionospheric reflection is 2R·arccos(R/(R + hi)), which ranges between 2,010 and 3,000 km when we substitute minimum and maximum ionospheric altitudes. This distance does not span the United States or cross the Atlantic; for transatlantic communication, at least two reflections would be required. The communication delay encountered with a single reflection in this channel is 2√(2Rhi + hi²)/c, which ranges between 6.8 and 10 ms, again a small time interval.

6.7 Communication with Satellites

Global wireless communication relies on satellites. Here, ground stations transmit to orbiting satellites that amplify the signal and retransmit it back to earth. Satellites will move across the sky unless they are in geosynchronous orbits, where the time for one revolution about the equator exactly matches the earth's rotation time of one day. TV satellites would require the homeowner to continually adjust his or her antenna if the satellite weren't in geosynchronous orbit. Newton's equations applied to orbiting bodies predict that the time T for one orbit is related to distance from the earth's center R as

R = ∛(GMT²/(4π²))        (6.20)

where G is the gravitational constant and M the earth's mass. Calculations yield R = 42200 km, which corresponds to an altitude of 35700 km. This altitude greatly exceeds that of the ionosphere, requiring satellite transmitters to use frequencies that pass through it. Of great importance in satellite communications is the transmission delay. The time for electromagnetic fields to propagate to a geosynchronous satellite and return is 0.24 s, a significant delay.

Exercise 6.7.1 (Solution on p. 295.)
In addition to delay, the propagation attenuation encountered in satellite communication far exceeds what occurs in ionospheric-mirror based communication. Calculate the attenuation incurred by radiation going to the satellite (one-way loss) with that encountered by Marconi (total going up and down). Note that the attenuation calculation in the ionospheric case, assuming the ionosphere acts like a perfect mirror, is not a straightforward application of the propagation loss formula (p. 231).

6.8 Noise and Interference

We have mentioned that communications are, to varying degrees, subject to interference and noise. It's time to be more precise about what these quantities are and how they differ.

Interference represents man-made signals. Telephone lines are subject to power-line interference (in the United States a distorted 60 Hz sinusoid). Cellular telephone channels are subject to adjacent-cell phone conversations using the same signal frequency. The problem with such interference is that it occupies the same frequency band as the desired communication signal, and has a similar structure.

Exercise 6.8.1 (Solution on p. 295.)
Suppose interference occupied a different frequency band; how would the receiver remove it?
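Equation (6.20) and the quoted round-trip delay can be verified with standard physical constants; the G and M values below are textbook constants, not numbers taken from this text.

```python
import math

G_CONST = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24    # earth mass, kg
C_LIGHT = 3e8         # m/s
R_EARTH = 6.38e6      # m

def geosync_radius(T=86400.0):
    """Orbit radius from R^3 = G*M*T^2 / (4*pi^2), equation (6.20).
    T is taken as one day, matching the text's description."""
    return (G_CONST * M_EARTH * T ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)

R_orbit = geosync_radius()
altitude = R_orbit - R_EARTH
round_trip = 2 * altitude / C_LIGHT   # up to the satellite and back down
print(R_orbit / 1e3, "km")            # ~42,200 km, as quoted
print(round_trip, "s")                # ~0.24 s
```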

We use the notation i(t) to represent interference. Because interference has man-made structure, we can write an explicit expression for it that may contain some unknown aspects (how large it is, for example).

Noise signals have little structure and arise from both human and natural sources. Satellite channels are subject to deep space noise arising from electromagnetic radiation pervasive in the galaxy. Thermal noise plagues all electronic circuits that contain resistors. Thus, in receiving small amplitude signals, receiver amplifiers will most certainly add noise as they boost the signal's power. We use the notation n(t) to represent a noise signal's waveform, and we need a way of describing such signals despite the fact we can't write a formula for the noise signal like we can for interference.

Because of the emphasis here on frequency-domain power, we are led to define the power spectrum. Because of Parseval's Theorem, we define the power spectrum Ps(f) of a non-noise signal s(t) to be the magnitude-squared of its Fourier transform:

Ps(f) ≡ (|S(f)|)²        (6.21)

Integrating the power spectrum over any range of frequencies equals the power the signal contains in that band. Because signals must have negative frequency components that mirror positive frequency ones, we routinely calculate the power in a spectral band as the integral over positive frequencies multiplied by two:

Power in [f1, f2] = 2 ∫ from f1 to f2 of Ps(f) df        (6.22)

With this definition of power spectrum, we define noise in terms of its power spectrum. The most widely used noise model is white noise. It is defined entirely by its frequency-domain characteristics.

• White noise has constant power at all frequencies: at each frequency, the power spectrum equals N0/2.
• When noise signals arising from two different sources add, the resultant noise signal has a power equal to the sum of the component powers.
• The phase of the noise spectrum is totally uncertain: It can be any value between 0 and 2π, and its value at any frequency is unrelated to the phase at any other frequency.

For white noise, the power in a frequency band [f1, f2] equals N0(f2 − f1).

When we pass a signal through a linear, time-invariant system, the output's spectrum equals the product (p. 142) of the system's frequency response and the input's spectrum. Thus, the power spectrum of the system's output is given by

Py(f) = (|H(f)|)² Px(f)        (6.23)

This result applies to noise signals as well. When we pass white noise through a filter, the output is also a noise signal but with power spectrum (|H(f)|)² N0/2.
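The statement that filtered white noise has power spectrum |H(f)|²·N0/2 can be checked by numerical integration. The first-order lowpass H(f) = 1/(1 + j2πfRC) below is an illustrative choice; for it, the total output noise power has the closed form N0/(4RC).

```python
import math

def output_noise_power(N0, RC, f_max=1e7, n=200000):
    """Integrate Py(f) = |H(f)|^2 * N0/2 over frequency for the lowpass
    H(f) = 1 / (1 + j*2*pi*f*RC), using the text's convention of twice
    the integral over positive frequencies (midpoint rule)."""
    df = f_max / n
    total = 0.0
    for i in range(n):
        f = (i + 0.5) * df
        H2 = 1.0 / (1.0 + (2 * math.pi * f * RC) ** 2)   # |H(f)|^2
        total += H2 * (N0 / 2) * df
    return 2 * total

N0, RC = 2e-9, 1e-6
power = output_noise_power(N0, RC)
print(power, "vs closed form", N0 / (4 * RC))   # agree to about 1%
```

The small discrepancy comes from truncating the integral at f_max; extending the upper limit shrinks it further.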

6.9 Channel Models

Both wireline and wireless channels share characteristics, allowing us to use a common model for how the channel affects transmitted signals.

• The transmitted signal is usually not filtered by the channel.
• The signal can be attenuated.
• The signal propagates through the channel at a speed equal to or less than the speed of light, which means that the channel delays the transmission.
• The channel may introduce additive interference and/or noise.

Letting α represent the attenuation introduced by the channel, the receiver's input signal is related to the transmitted one by

r(t) = αx(t − τ) + i(t) + n(t)        (6.24)

This expression corresponds to the system model for the channel shown in Figure 6.4. In this book, we shall assume that the noise is white.

Figure 6.4: The channel component of the fundamental model of communication (Figure 1.3: Fundamental model of communication) has the depicted form: the transmitted signal x(t) is delayed by τ and attenuated by α, then summed with the interference i(t) and noise n(t) to produce the received signal r(t).

Exercise 6.9.1 (Solution on p. 295.)
Is this model for the channel linear?

As expected, the signal that emerges from the channel is corrupted, but does contain the transmitted signal. Communication system design begins with detailing the channel model, then developing the transmitter and receiver that best compensate for the channel's corrupting behavior. We characterize the channel's quality by the signal-to-interference ratio (SIR) and the signal-to-noise ratio (SNR). The ratios are computed according to the relative power of each within the transmitted signal's bandwidth. Assuming the signal x(t)'s spectrum spans the frequency interval [fl, fu], these ratios can be expressed in terms of power spectra:

SIR = (2α² ∫ from 0 to ∞ of Px(f) df) / (2 ∫ from fl to fu of Pi(f) df)        (6.25)

SNR = (2α² ∫ from 0 to ∞ of Px(f) df) / (N0(fu − fl))        (6.26)

In most cases, the interference and noise powers do not vary for a given receiver. Variations in signal-to-interference and signal-to-noise ratios arise from the attenuation because of transmitter-to-receiver distance, for example. Adding the interference and noise is justified by the linearity property of Maxwell's equations.

6.10 Baseband Communication

Point of Interest: We use analog communication techniques for analog message signals, like music, speech, and television. Transmission and reception of analog signals using analog results in an inherently noisy received signal (assuming the channel adds noise, which it almost certainly does).

The simplest form of analog communication is baseband communication.
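The channel model r(t) = αx(t − τ) + i(t) + n(t) of Section 6.9 can be sketched in discrete time. The integer-sample delay and the tiny example sequences are illustrative simplifications, not part of the text's development.

```python
def channel(x, alpha, delay, interference, noise):
    """Discrete-time sketch of r(t) = alpha*x(t - tau) + i(t) + n(t).
    delay is an integer number of samples (an illustrative simplification);
    the signal is taken as zero before transmission starts."""
    r = []
    for k in range(len(x)):
        xd = x[k - delay] if k >= delay else 0.0
        r.append(alpha * xd + interference[k] + noise[k])
    return r

r = channel([1.0, 2.0, 3.0, 4.0], alpha=0.5, delay=1,
            interference=[0.0] * 4, noise=[0.1, -0.1, 0.1, -0.1])
print(r)   # an attenuated, delayed copy of x plus the additive terms
```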

Here, the transmitted signal equals the message times a transmitter gain:

x(t) = Gm(t)        (6.27)

An example, which is somewhat out of date, is the wireline telephone system. You don't use baseband communication in wireless systems simply because low-frequency signals do not radiate well. The receiver in a baseband system can't do much more than filter the received signal to remove out-of-band noise (interference is small in wireline channels). Assuming the signal occupies a bandwidth of W Hz (the signal's spectrum extends from zero to W), the receiver applies a lowpass filter having the same bandwidth, as shown in Figure 6.5.

Figure 6.5: The receiver for baseband communication systems is quite simple: a lowpass filter of bandwidth W applied to the received signal r(t) produces the message estimate m̂(t).

We use the signal-to-noise ratio of the receiver's output m̂(t) to evaluate any analog-message communication system. Assume that the channel introduces an attenuation α and white noise of spectral height N0/2. The filter does not affect the signal component (we assume its gain is unity) but does filter the noise, removing frequency components above W Hz. In the filter's output, the received signal power equals α²G²·power(m) and the noise power N0·W, which gives a signal-to-noise ratio of

SNR_baseband = α²G²·power(m) / (N0·W)        (6.28)

The signal power power(m) will be proportional to the bandwidth W; thus, in baseband communication the signal-to-noise ratio varies only with transmitter gain and channel attenuation and noise level.

6.11 Modulated Communication

Especially for wireless channels, like commercial radio and television, but also for wireline systems like cable television, an analog message signal must be modulated: The transmitted signal's spectrum occurs at much higher frequencies than those occupied by the signal.

Point of Interest: You don't use baseband communication in wireless systems simply because low-frequency signals do not radiate well.

The key idea of modulation is to affect the amplitude, frequency or phase of what is known as the carrier sinusoid. Frequency modulation (FM) and less frequently used phase modulation (PM) are not discussed here; we focus on amplitude modulation (AM). The amplitude modulated message signal has the form

x(t) = Ac(1 + m(t))cos(2πfc t)        (6.29)

where fc is the carrier frequency and Ac the carrier amplitude.

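The baseband signal-to-noise ratio formula (6.27) is simple enough to evaluate directly. The sketch below is ours, not the text's: every channel value in it (attenuation, gain, message power, noise level, bandwidth) is a made-up illustration.

```python
# Illustrative evaluation of SNR_baseband = alpha^2 G^2 power(m) / (N0 W).
# All parameter values are hypothetical.
import math

def baseband_snr(alpha, G, power_m, N0, W):
    """SNR at the output of the baseband receiver's lowpass filter."""
    return (alpha**2 * G**2 * power_m) / (N0 * W)

# attenuation 0.01, transmitter gain 100, unit message power,
# white noise of spectral height N0/2 with N0 = 1e-6, bandwidth 4 kHz
snr = baseband_snr(alpha=1e-2, G=100.0, power_m=1.0, N0=1e-6, W=4e3)
snr_db = 10 * math.log10(snr)
print(snr, snr_db)   # 250.0, about 24 dB
```

Doubling the gain G quadruples the ratio, while doubling the bandwidth W (with fixed message power) halves it, matching the dependence visible in the formula.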
From our previous exposure to amplitude modulation (see the Fourier Transform example (Example 4.5)), we know that the transmitted signal's spectrum occupies the frequency range [fc − W, fc + W], assuming the signal's bandwidth is W Hz (see Figure 6.6). The carrier frequency is usually much larger than the signal's highest frequency:

fc >> W    (6.29)

which means that the transmitter antenna and carrier frequency are chosen jointly during the design process.

Ignoring the attenuation and noise introduced by the channel for the moment, reception of an amplitude modulated signal is quite easy (see Problem 4.20). The so-called coherent receiver multiplies the input signal by a sinusoid and lowpass-filters the result (Figure 6.6).

m̂(t) = LPF(x(t) cos(2π fc t)) = LPF(Ac (1 + m(t)) cos²(2π fc t))    (6.30)

Because of our trigonometric identities, we know that

cos²(2π fc t) = (1/2) (1 + cos(2π · 2fc t))    (6.31)

At this point, the message signal is multiplied by a constant and by a sinusoid at twice the carrier frequency. Multiplication by the constant term returns the message signal to baseband (where we want it to be!) while multiplication by the double-frequency term yields a very high frequency signal. The lowpass filter removes this high-frequency signal, leaving only the baseband signal. Thus, the received signal is

m̂(t) = Ac (1 + m(t)) / 2    (6.32)

Exercise 6.11.1 (Solution on p. 295.)
This derivation relies solely on the time domain; derive the same result in the frequency domain. You won't need the trigonometric identity with this approach.

Figure 6.6: The AM coherent receiver, along with the spectra of key signals, is shown for the case of a triangular-shaped signal spectrum. The dashed line indicates the white noise level. Note that the filters' characteristics (cutoff frequency, and center frequency for the bandpass filter) must be matched to the modulation and message parameters.

Because it is so easy to remove the constant term by electrical means (we insert a capacitor in series with the receiver's output), we typically ignore it and concentrate on the signal portion of the receiver's output when calculating signal-to-noise ratio.

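As a numerical check of the derivation in (6.30)-(6.32), this simulation (ours, not the text's; the sample rate, frequencies, and the crude moving-average lowpass filter are all illustrative assumptions) modulates a slow sinusoidal message, applies the coherent receiver, and compares the output against Ac (1 + m(t)) / 2.

```python
# Coherent AM receiver check: LPF( x(t) cos(2 pi fc t) ) ~= Ac (1 + m(t)) / 2
import numpy as np

fs = 100_000                       # sample rate (Hz), an arbitrary choice
fc = 5_000                         # carrier frequency
fm = 100                           # message frequency, fm << fc
t = np.arange(0, 0.05, 1 / fs)
Ac = 1.0
m = 0.5 * np.sin(2 * np.pi * fm * t)            # |m(t)| < 1, as required

x = Ac * (1 + m) * np.cos(2 * np.pi * fc * t)   # AM signal, eq. (6.28)
v = x * np.cos(2 * np.pi * fc * t)              # coherent multiplication

# Crude lowpass filter: averaging over one carrier period removes the
# double-frequency term while barely touching the slow message.
k = fs // fc
mhat = np.convolve(v, np.ones(k) / k, mode="same")

expected = Ac * (1 + m) / 2
err = np.max(np.abs(mhat[k:-k] - expected[k:-k]))   # ignore filter edges
print(err)   # small compared to the output's swing
```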
6.12 Signal-to-Noise Ratio of an Amplitude-Modulated Signal

When we consider the much more realistic situation of a channel that introduces attenuation and noise, we can make use of the just-described receiver's linear nature to directly derive the receiver's output. The attenuation affects the output in the same way as it affects the transmitted signal: It scales the output signal by the same amount. The white noise, on the other hand, should be filtered from the received signal before demodulation. We must thus insert a bandpass filter having bandwidth 2W and center frequency fc: This filter has no effect on the received signal-related component, but does remove out-of-band noise power. We then apply the coherent receiver to this filtered signal (Figure 6.6), with the result that the demodulated output contains noise that cannot be removed: It lies in the same spectral band as the signal.

As we derive the signal-to-noise ratio in the demodulated signal, let's also calculate the signal-to-noise ratio of the bandpass filter's output r̃(t). The signal component of r̃(t) equals α Ac m(t) cos(2π fc t). This signal's Fourier transform equals

(α Ac / 2) (M(f + fc) + M(f − fc))    (6.33)

making the signal power spectrum

(α² Ac² / 4) ((|M(f + fc)|)² + (|M(f − fc)|)²)    (6.34)

Exercise 6.12.1 (Solution on p. 295.)
If you calculate the magnitude-squared of the first equation, you don't obtain the second unless you make an assumption. What is it?

Thus, the total signal-related power in r̃(t) is α² Ac² power(m) / 2. The noise power equals the integral of the noise power spectrum; because the power spectrum is constant over the transmission band, this integral equals the noise amplitude N0/2 times the filter's total passband width (2W at positive frequencies and 2W at negative frequencies), giving a noise power of 2 N0 W. The so-called received signal-to-noise ratio (the signal-to-noise ratio after the de rigueur front-end bandpass filter and before demodulation) thus equals

SNR_r = (α² Ac² power(m)) / (4 N0 W)    (6.35)

The demodulated signal is m̂(t) = α Ac m(t) / 2 + n_out(t). Clearly, the signal power equals α² Ac² power(m) / 4. To determine the noise power, we must understand how the coherent demodulator affects the bandpass noise found in r̃(t). Because we are concerned with noise, we must deal with the power spectrum, since we don't have the Fourier transform available to us. Letting P(f) denote the power spectrum of r̃(t)'s noise component, the power spectrum after multiplication by the carrier has the form

(P(f + fc) + P(f − fc)) / 4    (6.36)

The delay and advance in frequency indicated here result in two spectral noise bands falling in the low-frequency region of the lowpass filter's passband. Thus, the total noise power in the lowpass filter's output equals 2 · (N0/2) · W · 2 · (1/4) = N0 W / 2. The signal-to-noise ratio of the receiver's output thus equals

SNR_m̂ = (α² Ac² power(m)) / (2 N0 W) = 2 SNR_r    (6.37)

Let's break down the components of this signal-to-noise ratio to better appreciate how the channel and the transmitter parameters affect communications performance.

• More transmitter power (increasing Ac) increases the signal-to-noise ratio proportionally.
• The carrier frequency fc has no effect on SNR, but we have assumed that fc >> W.
• The signal bandwidth W enters the signal-to-noise expression in two places: implicitly through the signal power and explicitly in the expression's denominator. If the signal spectrum had a constant amplitude as we increased the bandwidth, signal power would increase proportionally. On the other hand, our transmitter enforced the criterion that signal amplitude was constant (Section 6.7). Signal amplitude essentially equals the integral of the magnitude of the signal's spectrum. This result isn't exact, but we do know that m(0) = ∫₋∞^∞ M(f) df. Enforcing the signal amplitude specification means that as the signal's bandwidth increases we must decrease the spectral amplitude, with the result that the signal power remains constant. Thus, increasing signal bandwidth does indeed decrease the signal-to-noise ratio of the receiver's output.
• Increasing channel attenuation (moving the receiver farther from the transmitter) decreases the signal-to-noise ratio as the square: The signal-to-noise ratio decreases as distance-squared between transmitter and receiver.
• Noise added by the channel adversely affects the signal-to-noise ratio.

In summary, amplitude modulation provides an effective means for sending a bandlimited signal from one place to another. For wireline channels, using baseband or amplitude modulation makes little difference in terms of signal-to-noise ratio. For wireless channels, amplitude modulation is the only alternative. The one AM parameter that does not affect signal-to-noise ratio is the carrier frequency fc: We can choose any value we want so long as the transmitter and receiver use the same value. However, suppose someone else wants to use AM and chooses the same carrier frequency. The two resulting transmissions will add, and both receivers will produce the sum of the two signals. What we clearly need to do is talk to the other party and agree to use separate carrier frequencies. As more and more users wish to use radio, we need a forum for agreeing on carrier frequencies and on signal bandwidth. On earth, this forum is the government. In the United States, the Federal Communications Commission (FCC) strictly controls the use of the electromagnetic spectrum for communications. Separate frequency bands are allocated for commercial AM, FM, cellular telephone (the analog version of which is AM), short wave (also AM), and satellite communications.

Exercise 6.12.2 (Solution on p. 295.)
Suppose all users agree to use the same signal bandwidth. How closely can the carrier frequencies be spaced while avoiding communications crosstalk? What is the signal bandwidth for commercial AM? How does this bandwidth compare to the speech bandwidth?

6.13 Digital Communication

Effective, error-free transmission of a sequence of bits (a bit stream {b(0), b(1), ...}) is the goal here. We found that analog schemes, as represented by amplitude modulation, always yield a received signal containing noise as well as the message signal when the channel adds noise. Digital communication schemes are very different. Once we decide how to represent bits by analog signals that can be transmitted over wireline (like a computer network) or wireless (like digital cellular telephone) channels, we will then develop a way of tacking on communication bits to the message bits that will reduce channel-induced errors greatly. In theory, digital communication errors can be zero, even though the channel adds noise!

We represent a bit by associating one of two specific analog signals with the bit's value. Thus, if b(n) = 0, we transmit the signal s0(t); if b(n) = 1, we send s1(t). These two signals comprise the signal set for digital communication and are designed with the channel and bit stream in mind. In virtually every case, these signals have a finite duration T common to both signals; this duration is known as the bit interval. Exactly what signals we use ultimately affects how well the bits can be received. Interestingly, baseband and modulated signal sets can yield the same performance. Other considerations determine how signal set choice affects digital communication performance.

Exercise 6.13.1 (Solution on p. 295.)
What is the expression for the signal arising from a digital transmitter sending the bit stream b(n), n = {..., −1, 0, 1, ...}, using the signal set s0(t), s1(t), each signal of which has duration T?

6.14 Binary Phase Shift Keying

A commonly used example of a signal set consists of pulses that are negatives of each other (Figure 6.7).

s0(t) = A pT(t)
s1(t) = −(A pT(t))    (6.38)

Figure 6.7: The baseband BPSK signal set: s0(t) is a pulse of amplitude A lasting T seconds; s1(t) is its negative.

Here, we have a baseband signal set suitable for wireline transmission. The entire bit stream b(n) is represented by a sequence of these signals. Mathematically, the transmitted signal has the form

x(t) = Σn (−1)^b(n) A pT(t − nT)    (6.39)

and graphically, Figure 6.8 shows what a typical transmitted signal might be.

Figure 6.8: The upper plot shows how a baseband signal set represents the transmitted bit sequence 0110. The lower one shows an amplitude-modulated variant suitable for wireless channels.

This way of representing a bit stream (changing the bit changes the sign of the transmitted signal) is known as binary phase shift keying and abbreviated BPSK. The name comes from concisely expressing this popular way of communicating digital information. The word "binary" is clear enough (one binary-valued quantity is transmitted during a bit interval). Changing the sign of a sinusoid amounts to shifting the phase by π (although we don't have a sinusoid yet). The word "keying" reflects back to the first electrical communication system, which happened to be digital as well: the telegraph.

The datarate R of a digital communication system is how frequently an information bit is transmitted. In this example it equals the reciprocal of the bit interval: R = 1/T. Thus, for a 1 Mbps (megabit per second) transmission, we must have T = 1 µs.

The choice of signals to represent bit values is arbitrary to some degree. Clearly, we do not want to choose signal set members to be the same; we couldn't distinguish bits if we did so. We could also have made the negative-amplitude pulse represent a 0 and the positive one a 1. This choice is indeed arbitrary and will have no effect on performance, assuming the receiver knows which signal represents which bit. As in all communication systems, we design transmitter and receiver together.

A simple signal set for both wireless and wireline channels amounts to amplitude modulating a baseband signal set (more appropriate for a wireline channel) by a carrier having a frequency harmonic with the bit interval.

s0(t) = A pT(t) sin(2π k t / T)
s1(t) = −(A pT(t) sin(2π k t / T))    (6.40)
Exercise 6.14.1 (Solution on p. 295.)
What is the value of k in this example?

This signal set is also known as a BPSK signal set. We'll show later that indeed both signal sets provide identical performance levels when the signal-to-noise ratios are equal.

Exercise 6.14.2 (Solution on p. 295.)
Write a formula, in the style of the baseband signal set (6.39), for the transmitted signal that emerges when we use this modulated signal set.

Figure 6.9: The modulated BPSK signal set: a sinusoidal pulse of duration T and its negative.

What is the transmission bandwidth of these signal sets? We need only consider the baseband version, as the second is an amplitude-modulated version of the first. The bandwidth is determined by the bit sequence. If the bit sequence is constant (always 0 or always 1) the transmitted signal is a constant, which has zero bandwidth. The worst-case (bandwidth-consuming) bit sequence is the alternating one shown in Figure 6.10. In this case, the transmitted signal is a square wave having a period of 2T.

Figure 6.10: Here we show the transmitted waveform corresponding to an alternating bit sequence.

From our work in Fourier series, we know that this signal's spectrum contains odd harmonics of the fundamental, which here equals 1/(2T). Thus, strictly speaking, the signal's bandwidth is infinite. In practical terms, we use the 90%-power bandwidth to assess the effective range of frequencies consumed by the signal. The first and third harmonics contain that fraction of the total power, meaning that the effective bandwidth of our baseband signal is 3/(2T) or, expressing this quantity in terms of the datarate, 3R/2. Thus, a digital communications signal requires more bandwidth than the datarate: a 1 Mbps baseband system requires a bandwidth of at least 1.5 MHz. Listen carefully when someone describes the transmission bandwidth of digital communication systems: Did they say "megabits" or "megahertz"?

Exercise 6.14.3 (Solution on p. 295.)
Show that indeed the first and third harmonics contain 90% of the transmitted power. If the receiver uses a front-end filter of bandwidth 3/(2T), what is the total harmonic distortion of the received signal?

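The 90% figure can be verified numerically, as Exercise 6.14.3 asks. For a square wave of amplitude A, only odd harmonics appear, the k-th having amplitude 4A/(kπ) and therefore power (1/2)(4A/(kπ))²; the total power is A². The check below is our sketch of that computation.

```python
# Fraction of a square wave's power carried by its first and third harmonics.
import math

A = 1.0
total_power = A ** 2

def harmonic_power(k):
    """Power in the k-th (odd) harmonic of a square wave of amplitude A."""
    return 0.5 * (4 * A / (k * math.pi)) ** 2

fraction = (harmonic_power(1) + harmonic_power(3)) / total_power
print(fraction)   # about 0.90, i.e. 90% of the power
```

Adding the fifth harmonic raises the fraction to roughly 0.93, so the 90% threshold is indeed reached at the third harmonic.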
Exercise 6.14.4 (Solution on p. 295.)
What is the 90% transmission bandwidth of the modulated signal set?

6.15 Frequency Shift Keying

In frequency-shift keying (FSK), the bit affects the frequency of a carrier sinusoid.

s0(t) = A pT(t) sin(2π f0 t)
s1(t) = A pT(t) sin(2π f1 t)    (6.41)

Figure 6.11: The FSK signal set: sinusoidal pulses at frequencies f0 and f1.

The frequencies f0, f1 are usually harmonically related to the bit interval. In the depicted example, f0 = 4/T and f1 = 3/T. As can be seen from the transmitted signal for our example bit stream (Figure 6.12), the transitions at bit interval boundaries are smoother than those of BPSK.

Figure 6.12: This plot shows the FSK waveform for the same bitstream used in the BPSK example (Figure 6.8).

To determine the bandwidth required by this signal set, we again consider the alternating bit stream. Think of it as two signals added together: the first comprised of the signal s0(t), then the zero signal, then s0(t), then zero, and so on; the second having the same structure but interleaved with the first and containing s1(t) (Figure 6.13).

Figure 6.13: The depicted decomposition of the FSK-modulated alternating bit stream into its frequency components simplifies the calculation of its bandwidth.

Each component can be thought of as a fixed-frequency sinusoid multiplied by a square wave of period 2T that alternates between one and zero. This baseband square wave has the same Fourier spectrum as our BPSK example, but with the addition of the constant term c0. This quantity's presence changes the number of Fourier series terms required for the 90% bandwidth: Now we need only include the zero and first harmonics to achieve it. The bandwidth thus equals, with f0 < f1, (f1 + 1/(2T)) − (f0 − 1/(2T)) = f1 − f0 + 1/T. If the two frequencies are harmonics of the bit-interval duration, f0 = k0/T and f1 = k1/T with k1 > k0, the bandwidth equals (k1 − k0 + 1)/T. If the difference between harmonic numbers is 1, then the FSK bandwidth is smaller than the BPSK bandwidth. If the difference is 2, the bandwidths are equal; larger differences produce a transmission bandwidth larger than that resulting from using a BPSK signal set.

6.16 Digital Communication Receivers

The receiver interested in the transmitted bit stream must perform two tasks when the received waveform r(t) begins.

• It must determine when bit boundaries occur: The receiver needs to synchronize with the transmitted signal. Because transmitter and receiver are designed in concert, both use the same value for the bit interval T. Synchronization can occur because the transmitter begins sending with a reference bit sequence, known as the preamble. This reference bit sequence is usually the alternating sequence, as shown in the square wave example and in the FSK example (Figure 6.12). The receiver knows what the preamble bit sequence is and uses it to determine when bit boundaries occur. This procedure amounts to what is known in digital hardware as self-clocking signaling: The receiver of a bit stream must derive the clock (when bit boundaries occur) from its input signal. Because the receiver usually does not determine which bit was sent until synchronization occurs, it does not know when during the preamble it obtained synchronization. The transmitter signals the end of the preamble by switching to a second bit sequence. This second preamble phase informs the receiver that data bits are about to come and that the preamble is almost over.
• Once synchronized and data bits are transmitted, the receiver must then determine, every T seconds, what bit was transmitted during the previous bit interval. We focus on this aspect of the digital receiver because this strategy is also used in synchronization.

The receiver for digital communication is known as a matched filter. This receiver, shown in Figure 6.14 (Optimal receiver structure), multiplies the received signal by each of the possible members of the transmitter signal set, integrates the product over the bit interval, and compares the results. Whichever path through the receiver yields the largest value corresponds to the receiver's decision as to what bit was sent during the previous bit interval. For the next bit interval, the multiplication and integration begin again, with the next bit decision made at the end of the bit interval. Mathematically, the received value of b̂(n) is given by

b̂(n) = argmax_i ∫_nT^(n+1)T r(t) si(t) dt    (6.42)

You may not have seen the argmax notation before. max_i{·} yields the maximum value of its argument with respect to the index i; argmax equals the value of the index that yields the maximum. Note that the precise numerical value of the integrator's output does not matter; what does matter is its value relative to the other integrator's output.

Figure 6.14: The optimal receiver structure for digital communication faced with additive white noise channels is the depicted matched filter.

Let's assume a perfect channel for the moment: The received signal equals the transmitted one. If bit 0 were sent using the baseband BPSK signal set, the integrator outputs would be

∫_nT^(n+1)T r(t) s0(t) dt = A² T
∫_nT^(n+1)T r(t) s1(t) dt = −(A² T)    (6.43)

If bit 1 were sent,

∫_nT^(n+1)T r(t) s0(t) dt = −(A² T)
∫_nT^(n+1)T r(t) s1(t) dt = A² T    (6.44)

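The decision rule (6.42) has a direct discrete-time analogue in which the integrals become dot products over one bit interval of samples. The sketch below is ours (the function and the sampled signal set are illustrative assumptions).

```python
# Matched-filter receiver: correlate each bit interval with every member
# of the signal set and choose the index with the largest correlation.
import numpy as np

def matched_filter_receiver(r, s0, s1):
    """Decide each bit from received samples r, given reference signals."""
    L = len(s0)
    bits = []
    for n in range(len(r) // L):
        seg = r[n * L:(n + 1) * L]
        corr = [np.dot(seg, s0), np.dot(seg, s1)]  # stand-ins for the integrals
        bits.append(int(np.argmax(corr)))          # the argmax of (6.42)
    return bits

# Noiseless check with the baseband BPSK signal set (s1 = -s0):
L, A = 8, 1.0
s0 = A * np.ones(L)
s1 = -s0
tx_bits = [0, 1, 1, 0]
r = np.concatenate([s0 if b == 0 else s1 for b in tx_bits])
print(matched_filter_receiver(r, s0, s1))   # recovers [0, 1, 1, 0]
```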
Clearly, this receiver would always choose the bit correctly. Channel attenuation would not affect this correctness; it would only make the values smaller.

Exercise 6.16.1 (Solution on p. 296.)
Can you develop a receiver for BPSK signal sets that requires only one multiplier-integrator combination? What is the corresponding result when the amplitude-modulated BPSK signal set is used?

6.17 Digital Communication in the Presence of Noise

When we incorporate additive noise into our channel model, so that r(t) = α si(t) + n(t), errors can creep in. If the transmitter sent bit 0 using a BPSK signal set (Section 6.14), the integrators' outputs in the matched-filter receiver (Figure 6.14) would be:

∫_nT^(n+1)T r(t) s0(t) dt = α A² T + ∫_nT^(n+1)T n(t) s0(t) dt
∫_nT^(n+1)T r(t) s1(t) dt = −(α A² T) + ∫_nT^(n+1)T n(t) s1(t) dt    (6.45)

It is the quantities containing the noise terms that cause errors in the receiver's decision-making process. Because they involve noise, the values of these integrals are random quantities drawn from some probability distribution that vary erratically from bit interval to bit interval. Because the noise has zero average value and has an equal amount of power in all frequency bands, the values of the integrals will hover about zero. What is important is how much they vary. If the noise is such that its integral term is more negative than α A² T, then the receiver will make an error, deciding that the transmitted zero-valued bit was indeed a one. The probability that this situation occurs depends on three factors:

• Signal Set Choice: The difference between the signal-dependent terms in the integrators' outputs (equations (6.45)) defines how large the noise term must be for an incorrect receiver decision to result. What affects the probability of such errors occurring is the energy in the difference of the received signals in comparison to the noise term's variability. The signal-difference energy equals ∫₀^T (s1(t) − s0(t))² dt. For our BPSK baseband signal set, the difference-signal-energy term is 4 α² A⁴ T².
• Variability of the Noise Term: We quantify variability by the spectral height of the white noise N0/2 added by the channel.
• Probability Distribution of the Noise Term: The value of the noise terms relative to the signal terms and the probability of their occurrence directly affect the likelihood that a receiver error will occur. For the white noise we have been considering, the underlying distributions are Gaussian.

Deriving the following expression for the probability that the receiver makes an error on any bit transmission is complicated but can be found in the modules "Detection of Signals in Noise" and "Continuous-Time Detection Theory".

pe = Q( √( α² ∫₀^T (s1(t) − s0(t))² dt / (2 N0) ) ) = Q( √( 2 α² A² T / N0 ) ) for the BPSK case    (6.46)

Here Q(·) is the integral

Q(x) = (1/√(2π)) ∫_x^∞ e^(−α²/2) dα

This integral has no closed form expression, but it can be accurately computed. As Figure 6.15 illustrates, Q(·) is a decreasing, very nonlinear function. Note that it decreases very rapidly for small increases in its argument. For example, when x increases from 4 to 5, Q(x) decreases by a factor of 100.

Figure 6.15: The function Q(x) is plotted in semilogarithmic coordinates (vertical axis from 10⁰ down to 10⁻⁸, horizontal axis from 0 to 6).

The term A² T equals the energy expended by the transmitter in sending the bit; we label this term Eb. We arrive at a concise expression for the probability that the matched-filter receiver makes a bit-reception error.

pe = Q( √( 2 α² Eb / N0 ) )    (6.47)

Figure 6.16 shows how the receiver's error rate varies with the signal-to-noise ratio α² Eb / N0.
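Although Q(·) has no closed form, standard libraries provide the closely related complementary error function, with Q(x) = (1/2) erfc(x / √2). The sketch below (ours, not the text's) evaluates (6.47) and reproduces the book's claim that a 12 dB signal-to-noise ratio yields an error probability near 10⁻⁸.

```python
# Bit-error probability of BPSK with a matched-filter receiver, eq. (6.47).
import math

def Q(x):
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_pe(snr):
    """pe = Q(sqrt(2 * snr)) with snr = alpha^2 Eb / N0 (linear units)."""
    return Q(math.sqrt(2 * snr))

pe = bpsk_pe(10 ** (12.0 / 10))   # signal-to-noise ratio of 12 dB
print(pe)                          # on the order of 1e-8
```

At very low signal-to-noise ratios the expression approaches 1/2, the performance of a receiver that merely guesses.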
Figure 6.16: The probability that the matched-filter receiver makes an error on any bit transmission is plotted against the signal-to-noise ratio of the received signal. The upper curve shows the performance of the FSK signal set, the lower (and therefore better) one the BPSK signal set.

Exercise 6.17.1 (Solution on p. 296.)
Derive the expression for the probability of error that would result if the FSK signal set were used.

6.18 Digital Communication System Properties

Results from the Receiver Error module (Section 6.17) reveal several properties about digital communication systems.

• As the received signal becomes increasingly noisy, whether due to increased distance from the transmitter (smaller α) or to increased noise in the channel (larger N0), the probability the receiver makes an error approaches 1/2. In such situations, the receiver performs only slightly better than the "receiver" that ignores what was transmitted and merely guesses what bit was transmitted. Consequently, it becomes almost impossible to communicate information when digital channels become noisy.
• As the signal-to-noise ratio increases, performance gains (smaller probability of error pe) can be easily obtained. At a signal-to-noise ratio of 12 dB, the probability the receiver makes an error equals 10⁻⁸. In words, one out of one hundred million bits will, on the average, be in error.
• Once the signal-to-noise ratio exceeds about 5 dB, the error probability decreases dramatically. Adding a 1 dB improvement in signal-to-noise ratio can result in a factor-of-10 smaller pe.
• Signal set choice can make a significant difference in performance. All BPSK signal sets, baseband or modulated, yield the same performance for the same bit energy. The BPSK signal set does perform much better than the FSK signal set once the signal-to-noise ratio exceeds about 5 dB.

Exercise 6.18.1 (Solution on p. 296.)
Derive the probability of error expression for the modulated BPSK signal set, and show that its performance identically equals that of the baseband BPSK signal set.
The matched-filter receiver provides impressive performance once adequate signal-to-noise ratios occur. You might wonder whether another receiver might be better. The answer is that the matched-filter receiver is optimal: No other receiver can provide a smaller probability of error than the matched filter, regardless of the SNR. Furthermore, no signal set can provide better performance than the BPSK signal set, where the signal representing a bit is the negative of the signal representing the other bit. The reason for this result rests in the dependence of the probability of error pe on the difference between the noise-free integrator outputs: For a given Eb, no other signal set provides a greater difference.

How small should the error probability be? Out of N transmitted bits, on the average N pe bits will be received in error. Do note the phrase "on the average" here: Errors occur randomly because of the noise introduced by the channel, and we can only predict the probability of occurrence. Since bits are transmitted at a rate R, errors occur at an average frequency of R pe. Suppose the error probability is an impressively small number like 10⁻⁶. Data on a computer network like Ethernet is transmitted at a rate R = 100 Mbps, which means that errors would occur roughly 100 per second. This error rate is very high, requiring a much smaller pe to achieve a more acceptable average occurrence rate for errors. Because Ethernet is a wireline channel, which means the channel noise is small and the attenuation low, obtaining very small error probabilities is not difficult. We do have some tricks up our sleeves, however, that can essentially reduce the error rate to zero without resorting to expending a large amount of energy at the transmitter. We need to understand digital channels (Section 6.19) and Shannon's Noisy Channel Coding Theorem (Section 6.30).

6.19 Digital Channels

Let's review how digital communication systems work within the Fundamental Model of Communication (Figure 1.3: Fundamental model of communication). As shown in Figure 6.17, the message is a single bit. The entire analog transmission/reception system, which is discussed in Digital Communication (Section 6.13), Signal Sets, BPSK Signal Set (Section 6.14), Transmission Bandwidth, Frequency Shift Keying (Section 6.15), Digital Communication Receivers (Section 6.16), Factors in Receiver Error (Section 6.17), Digital Communication System Properties (Section 6.18), and Error Probability, can be lumped into a single system known as the digital channel.

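The digital channel's behavior (each transmitted bit received incorrectly with probability pe, independently of the others) is easy to simulate. This Monte Carlo sketch is ours; the value of pe, the seed, and the trial count are arbitrary.

```python
# Binary symmetric channel: flip each bit with probability pe.
import random

def bsc(bits, pe, rng):
    """Pass bits through a binary symmetric channel with error probability pe."""
    return [b ^ (rng.random() < pe) for b in bits]

rng = random.Random(0)             # seeded for repeatability
pe = 0.1
tx = [rng.randint(0, 1) for _ in range(100_000)]
rx = bsc(tx, pe, rng)
error_rate = sum(t != r for t, r in zip(tx, rx)) / len(tx)
print(error_rate)   # hovers near pe = 0.1
```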
Digital channels are described by transition diagrams, which indicate the output alphabet symbols that result for each possible transmitted symbol and the probabilities of the various reception possibilities. The probabilities on transitions coming from the same symbol must sum to one. For the matched-filter receiver and the signal sets we have seen, the depicted transition diagram, known as a binary symmetric channel, captures how transmitted bits are received. For example, transmitting a 0 results in the reception of a 1 with probability pe (an error) or a 0 with probability 1 − pe (no error). The probability of error pe is the sole parameter of the digital channel, and it encapsulates signal set choice, channel properties, and the matched-filter receiver. With this simple but entirely accurate model, we can concentrate on how bits are received.

Figure 6.17 (DigMC): The steps in transmitting digital information are shown in the upper system, the Fundamental Model of Communication. The symbolic-valued signal s(m) forms the message, and it is encoded into a bit sequence b(n). The indices differ because more than one bit/symbol is usually required to represent the message by a bitstream. Each bit is represented by an analog signal, transmitted through the (unfriendly) channel, and received by a matched-filter receiver. From the received bitstream b̂(n) the received symbolic-valued signal ŝ(m) is derived. The lower block diagram shows an equivalent system wherein the analog portions are combined and modeled by a transition diagram, which shows how each transmitted bit could be received.

6.20 Entropy

Communication theory has been formulated best for symbolic-valued signals. Claude Shannon (http://www.lucent.com/minds/infotheory/) published in 1948 The Mathematical Theory of Communication, which became the cornerstone of digital communication. He showed the power of probabilistic models for symbolic-valued signals, which allowed him to quantify the information present in a signal. In the simplest signal model, each symbol can occur at index n with a probability Pr[ak], k = {1, ..., K}. What this model says is that for each signal value a K-sided coin is flipped (note that the coin need not be fair). For this model to make sense, the probabilities must be numbers between zero and one and must sum to one.

0 ≤ Pr[ak] ≤ 1    (6.48)

Σ (k=1 to K) Pr[ak] = 1    (6.49)

This coin-flipping model assumes that symbols occur without regard to what preceding or succeeding symbols were, a false assumption for typed text. Despite this probabilistic model's over-simplicity, the ideas we develop here also work when more accurate, but still probabilistic, models are used. The key quantity that characterizes a symbolic-valued signal is the entropy of its alphabet:

H(A) = − Σk Pr[ak] log2 Pr[ak]    (6.50)

Because we use the base-2 logarithm, entropy has units of bits. For this definition to make sense, we must take special note of symbols having probability zero of occurring. A zero-probability symbol never occurs; thus, we define 0 log2 0 = 0 so that such symbols do not affect the entropy. The minimum value attainable by an alphabet's entropy occurs when only one symbol occurs: it has probability one of occurring and the rest have probability zero. The maximum value occurs when the symbols are equally likely (Pr[ak] = Pr[al]); in this case, the entropy equals log2 K.

Exercise 6.20.1 (Solution on p. 296.)
Derive the value of the minimum entropy alphabet. Derive the maximum-entropy results, both the numeric aspect (entropy equals log2 K) and the theoretical one (equally likely symbols maximize entropy).

Example 6.1
A four-symbol alphabet has the following probabilities.

Pr[a0] = 1/2   Pr[a1] = 1/4   Pr[a2] = 1/8   Pr[a3] = 1/8

Note that these probabilities sum to one as they should. As 1/2 = 2^(−1), log2(1/2) = −1. The entropy of this alphabet equals

H(A) = −(1/2 log2(1/2) + 1/4 log2(1/4) + 1/8 log2(1/8) + 1/8 log2(1/8))
     = −(1/2 · (−1) + 1/4 · (−2) + 1/8 · (−3) + 1/8 · (−3))
     = 1.75 bits    (6.51)
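The entropy computation of Example 6.1 is easy to cross-check numerically. The following Python sketch (the function name is mine, not the text's) applies definition (6.50), handling the 0 log2 0 = 0 convention explicitly:

```python
import math

def entropy(probs):
    """Entropy in bits of an alphabet with the given symbol probabilities."""
    assert abs(sum(probs) - 1.0) < 1e-12, "probabilities must sum to one"
    # By the text's convention 0*log2(0) = 0, zero-probability symbols
    # contribute nothing, so they are simply skipped.
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1/2, 1/4, 1/8, 1/8]))  # Example 6.1: 1.75 bits
print(entropy([1/4] * 4))             # equally likely: log2(4) = 2.0 bits
```

The second call illustrates the maximum-entropy case of Exercise 6.20.1: four equally likely symbols give log2 K = 2 bits.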

6.21 Source Coding Theorem

The significance of an alphabet's entropy rests in how we can represent it with a sequence of bits. Bit sequences form the "coin of the realm" in digital communications: they are the universal way of representing symbolic-valued signals. We convert back and forth between symbols and bit-sequences with what is known as a codebook: a table that associates symbols to bit sequences.

Point of Interest: You may be conjuring the notion of hiding information from others when we use the name codebook for the symbol-to-bit-sequence table. The codebook terminology was developed during the beginnings of information theory just after World War II. There is no relation to cryptology, which comprises mathematically provable methods of securing information.

In creating this table, we must be able to assign a unique bit sequence to each symbol so that we can go between symbol and bit sequences without error.

As we shall explore in some detail elsewhere, digital communication (Section 6.13) is the transmission of symbolic-valued signals from one place to another. When faced with the problem, for example, of sending a file across the Internet, we must first represent each character by a bit sequence. Because we want to send the file quickly, we want to use as few bits as possible. However, we don't want to use so few bits that the receiver cannot determine what each character was from the bit sequence. For example, we could use one bit for every character: file transmission would be fast but useless because the codebook creates errors.

Shannon proved in his monumental work what we call today the Source Coding Theorem. Let B(ak) denote the number of bits used to represent the symbol ak. The average number of bits B̄(A) required to represent the entire alphabet equals Σ (k=1 to K) B(ak) Pr[ak]. The Source Coding Theorem states that the average number of bits needed to accurately represent the alphabet need only satisfy

H(A) ≤ B̄(A) < H(A) + 1    (6.52)

Thus, the alphabet's entropy specifies to within one bit how many bits on the average need to be used to send the alphabet. The smaller an alphabet's entropy, the fewer bits required for digital transmission of files expressed in that alphabet.

Example 6.2
A four-symbol alphabet has the following probabilities

Pr[a0] = 1/2   Pr[a1] = 1/4   Pr[a2] = 1/8   Pr[a3] = 1/8

and an entropy of 1.75 bits (Example 6.1). Let's see if we can find a codebook for this four-letter alphabet that satisfies the Source Coding Theorem. The simplest code to try is known as the simple binary code: convert the symbol's index into a binary number and use the same number of bits for each symbol by including leading zeros where necessary.

a0 ↔ 00   a1 ↔ 01   a2 ↔ 10   a3 ↔ 11

Whenever the number of symbols in the alphabet is a power of two (as in this case), the average number of bits B̄(A) equals log2 K, which equals 2 in this case. Because the entropy equals 1.75 bits, the simple binary code indeed satisfies the Source Coding Theorem: we are within one bit of the

entropy limit, but you might wonder if you can do better. If we chose a codebook with differing numbers of bits for the symbols, a smaller average number of bits can indeed be obtained. The idea is to use shorter bit sequences for the symbols that occur more often. One codebook like this is

a0 ↔ 0   a1 ↔ 10   a2 ↔ 110   a3 ↔ 111

Now

B̄(A) = 1 · 1/2 + 2 · 1/4 + 3 · 1/8 + 3 · 1/8 = 1.75    (6.53)

We can reach the entropy limit! The simple binary code is, in this case, less efficient than the unequal-length code. Using the efficient code, we can transmit the symbolic-valued signal having this alphabet 12.5% faster. Furthermore, we know that no more efficient codebook can be found because of Shannon's Theorem.

6.22 Compression and the Huffman Code

Shannon's Source Coding Theorem (6.52) has additional applications in data compression. Here, we have a symbolic-valued signal source, like a computer file or an image, that we want to represent with as few bits as possible. Compression schemes that assign symbols to bit sequences are known as lossless if they obey the Source Coding Theorem; they are lossy if they use fewer bits than the alphabet's entropy. Using a lossy compression scheme means that you cannot recover a symbolic-valued signal from its compressed version without incurring some error. You might be wondering why anyone would want to intentionally create errors, but lossy compression schemes are frequently used where the efficiency gained in representing the signal outweighs the significance of the errors.

Shannon's Source Coding Theorem states that symbolic-valued signals require on the average at least H(A) bits to represent each of its values, which are symbols drawn from the alphabet A. In the module on the Source Coding Theorem (Section 6.21) we found that using a so-called fixed rate source coder, one that produces a fixed number of bits/symbol, may not be the most efficient way of encoding symbols into bits. What is not discussed there is a procedure for designing an efficient source coder: one guaranteed to produce the fewest bits/symbol on the average. In the early years of information theory, the race was on to be the first to find a provably maximally efficient source coding algorithm. The race was won by then MIT graduate student David Huffman in 1954, who worked on the problem as a project in his information theory course. We're pretty sure he received an A.

Point of Interest: One approach that does achieve the entropy limit is the Huffman source coding algorithm.

• Create a vertical table for the symbols, the best ordering being in decreasing order of probability.
• Form a binary tree to the right of the table. A binary tree always has two branches at each node. Build the tree by merging the two lowest probability symbols at each level, making the probability of the node equal to the sum of the merged nodes' probabilities. If more than two nodes/symbols share the lowest probability at a given level, pick any two; your choice won't affect B̄(A).
• At each node, label each of the emanating branches with a binary number. The bit sequence obtained from passing from the tree's root to the symbol is its Huffman code.

Example 6.3
The simple four-symbol alphabet used in the Entropy (Example 6.1) and Source Coding (Example 6.2) modules has the following probabilities.

Pr[a0] = 1/2   Pr[a1] = 1/4   Pr[a2] = 1/8   Pr[a3] = 1/8

with the root node (the one at which the tree begins) dening the codewords. a1 . If our symbols 1 1 1 1 2 . a3 .75 bits (Example 6. a2 .1 (Solution on p. P r [a2 ] = 4 . b (n) = 101100111010 . the average number of bits/symbol resulting from the Human coding algorithm would equal 1.68 bits. . a4 . . The average number of bits required to represent this alphabet equals If we had the symbolicour Human code would produce the bitstream 1. Exercise 6.) Derive the Human code for this second set of probabilities.1). valued signal If the alphabet probabilities were dierent.75 bits.255 P r [a1 ] = P r [a2 ] = P r [a3 ] = Figure 6. This alphabet has the Human coding tree shown in Human Coding Tree Symbol Probability 1 a1 a2 a3 a4 Figure 6.22.18 (Human Coding Tree). The bit sequence obtained by traversing the tree from the root to the symbol denes that symbol's binary code. could well result.75 bits. the had the probabilities P r [a1 ] = entropy limit is 1. we may not be able to achieve the entropy limit.18: 2 1 4 1 8 1 8 Source Code 0 0 0 0 1 1 4 1 2 10 1 110 111 1 We form a Human code for a four-letter alphabet having the indicated probabilities of occurrence. 296. The code thus obtained is not unique as we could have labeled the branches coming out of each node dierently. The Human code does satisfy the Source Coding Theoremits average length is within one bit of the alphabet's entropybut you might wonder if a better code existed. P r [a3 ] = 5 . However.. and therefore dierent David Human showed mathematically that no other code could achieve a shorter average code than his. a1 . }. . The binary tree created by the algorithm extends to the right. and P r [a4 ] = 20 .9> . . clearly a dierent tree. 1 4 1 8 1 8 and an entropy of 1. Available for free at Connexions <http://cnx. We can't do better. . Furthermore. and verify the claimed average code length and alphabet entropy. which is the Shannon entropy limit for this source alphabet. s (m) = {a2 .

6.23 Subtleties of Coding

In the Huffman code, the bit sequences that represent individual symbols can have differing lengths, so the bitstream index m does not increase in lock step with the symbol-valued signal's index n. To capture how often bits must be transmitted to keep up with the source's production of symbols, we can only compute averages. If our source code averages B̄(A) bits/symbol and symbols are produced at a rate R, the average bit rate equals B̄(A)R, and this quantity determines the bit interval duration T.

Exercise 6.23.1 (Solution on p. 296.)
Calculate what the relation between T and the average bit rate B̄(A)R is.

A subtlety of source coding is whether we need "commas" in the bitstream. When we use an unequal number of bits to represent symbols, how does the receiver determine when symbols begin and end? If you required a separation marker in the bitstream between symbols, it would be very inefficient, since you are essentially requiring an extra symbol in the transmission stream.

note: A good example of this need is the Morse Code: Between each letter, the telegrapher needs to insert a pause to inform the receiver when letter boundaries occur.

As shown in this example (Example 6.3), no commas are placed in the bitstream, but you can unambiguously decode the sequence of symbols from the bitstream. Huffman showed that his (maximally efficient) code had the prefix property: no code for a symbol begins another symbol's code. Once you have the prefix property, the bitstream is partially self-synchronizing: once the receiver knows where the bitstream starts, we can assign a unique and correct symbol sequence to the bitstream.

Exercise 6.23.2 (Solution on p. 296.)
Sketch an argument that prefix coding, whether derived from a Huffman code or not, will provide unique decoding when an unequal number of bits/symbol are used in the code.

However, having a prefix code does not guarantee total synchronization: after hopping into the middle of a bitstream, can we always find the correct symbol boundaries? The self-synchronization issue does mitigate the use of efficient source coding algorithms.

Exercise 6.23.3 (Solution on p. 296.)
Show by example that a bitstream produced by a Huffman code is not necessarily self-synchronizing. Are fixed-length codes self-synchronizing?

Another issue is bit errors induced by the digital channel. If they occur (and they will), an infrequent error can devastate the ability to translate a bitstream into a symbolic-valued signal. Despite the small probabilities of error offered by good signal set design and the matched filter, we need ways of reducing reception errors without demanding that pe be smaller.

Example 6.4
The first electrical communications system, the telegraph, was digital. When first deployed in 1844, it communicated text over wireline connections using a binary code, the Morse code, to represent individual letters. To send a message from one place to another, telegraph operators would tap the message using a telegraph key to another operator, who would relay the message on to the next operator, presumably getting the message closer to its destination. In short, the telegraph relied on a network not unlike the basics of modern computer networks. To say it presaged modern communications would be an understatement. It was also far ahead of some needed technologies, namely the Source Coding Theorem. The Morse code, shown in Figure 6.19, was not a prefix code. To separate codes for each letter, Morse code required that a space (a pause) be inserted between each letter. In information theory, that space counts as another code

letter, which means that the Morse code encoded text with a three-letter source code: dots, dashes, and space. The resulting source code is not within a bit of entropy, and is grossly inefficient (about 25%). Figure 6.19 shows a Huffman code for English, which as we know is efficient.
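The unique decodability that the prefix property buys can be demonstrated in a few lines. This sketch (names are mine) greedily matches the bitstream of Example 6.3 against the codebook; the greedy match is correct precisely because no codeword is a prefix of another:

```python
def prefix_decode(bits, codebook):
    """Decode a comma-free bitstream produced by a prefix code."""
    inverse = {code: sym for sym, code in codebook.items()}
    symbols, current = [], ""
    for b in bits:
        current += b
        if current in inverse:      # prefix property: first match is the symbol
            symbols.append(inverse[current])
            current = ""
    if current:
        raise ValueError("bitstream ended mid-codeword")
    return symbols

codebook = {"a1": "0", "a2": "10", "a3": "110", "a4": "111"}
print(prefix_decode("101100111010", codebook))
# recovers ['a2', 'a3', 'a1', 'a4', 'a1', 'a2'] with no commas needed
```

Starting the same decoder one bit into the stream still produces some symbol sequence, just the wrong one: a concrete illustration that prefix codes are only partially self-synchronizing (Exercise 6.23.3).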

[Figure 6.19 is a table listing, for each letter of the American-Roman alphabet, its average probability of occurrence in English text (the % column), its Morse code (dots and dashes), and a Huffman code (bit sequences such as 1011, 010100, 10101, ...).]

Figure 6.19: Morse and Huffman Codes for the American-Roman Alphabet. The % column indicates the average probability (expressed in percent) of the letter occurring in English. The entropy H(A) of this source is 4.14 bits. The average Morse codeword length is 2.5 symbols. Adding one more symbol for the letter separator and converting to bits yields an average codeword length of 5.56 bits. The average Huffman codeword length is 4.35 bits.
6.24 Channel Coding

We can, to some extent, correct errors made by the receiver with only the error-filled bit stream emerging from the digital channel available to us. The idea is for the transmitter to send not only the symbol-derived bits emerging from the source coder but also additional bits derived from the coder's bit stream. These additional bits, the error correcting bits, help the receiver determine if an error has occurred in the data bits (the important bits) or in the error-correction bits. Instead of the communication model shown previously (Figure 6.17: DigMC), the transmitter inserts a channel coder before analog modulation, and the receiver the corresponding channel decoder (Figure 6.20). This augmented block diagram forms the Fundamental Model of Digital Communication.

Figure 6.20: [s(m) → Source Coder → b(n) → Channel Coder → c(l) → Digital Channel → ĉ(l) → Channel Decoder → b̂(n) → Source Decoder → ŝ(m) → Sink.] To correct errors that occur in the digital channel, a channel coder and decoder are added to the communication system.

Shannon's Noisy Channel Coding Theorem (Section 6.30) says that if the data aren't transmitted too quickly, error correction codes exist that can correct all the bit errors introduced by the channel: they can essentially reduce the error rate to zero without resorting to expending a large amount of energy at the transmitter. Unfortunately, Shannon did not demonstrate an error correcting code that would achieve this remarkable feat; in fact, to this day no one has found such a code. Shannon's result only proves that such codes exist, so it seems like there is always more work to do. In any case, that should not prevent us from studying commonly used error correcting codes that not only find their way into all digital communication systems, but also into CDs and bar codes used on merchandise.

Properly designed channel coding can greatly reduce the probability (from the uncoded value pe) that a data bit b(n) is received incorrectly, even when the probability that a transmitted bit c(l) is received in error remains pe or becomes larger.

6.25 Repetition Codes

Perhaps the simplest error correcting code is the repetition code.

Here, the transmitter sends the data bit several times, an odd number of times in fact. Because the error probability pe is always less than 1/2, we know that more of the bits should be correct rather than in error. Simple majority voting of the received bits (hence the reason for the odd number) determines the transmitted bit more accurately than sending it alone. For example, let's consider the three-fold repetition code: for every bit b(n) emerging from the source coder, the channel coder produces three. Thus, the bit stream c(l) emerging from the channel coder has a data rate three times higher than that of the original bit stream b(n), as illustrated in Figure 6.21.

Figure 6.21 (Repetition Code): [The upper portion depicts the result of directly modulating the bit stream b(n) into a transmitted signal x(t) using a baseband BPSK signal set; R' is the datarate produced by the source coder. The lower portion shows the channel-coded bit stream c(l) and its transmitted signal.] If the bit stream passes through a (3,1) channel coder to yield c(l), the resulting transmitted signal requires a bit interval T three times smaller than the uncoded version. This reduction in the bit interval means that the transmitted energy/bit decreases by a factor of three, which results in an increased error probability in the receiver.

The coding table illustrates when errors can be corrected and when they can't by the majority-vote decoder.

Coding Table

Code   Probability        Bit
000    (1 − pe)^3         0
001    pe (1 − pe)^2      0
010    pe (1 − pe)^2      0
011    pe^2 (1 − pe)      1
100    pe (1 − pe)^2      0
101    pe^2 (1 − pe)      1
110    pe^2 (1 − pe)      1
111    pe^3               1

Table 6.1: In this example, the transmitter encodes a 0 as 000. The channel creates an error (changing a 0 to a 1) with probability pe. The first column lists all possible received datawords and the second the probability of each dataword being received. The last column shows the results of the majority-vote

decoder. When the decoder produces a 0, it successfully corrected any errors introduced by the channel (the top row corresponds to the case in which no errors occurred). Thus, if one bit of the three is received in error, the receiver can correct the error. The error probability of the decoder is the sum of the probabilities of the received words for which the decoder produces a 1 when a 0 was transmitted: 3pe^2 (1 − pe) + pe^3. Because pe is always less than 1/2, this probability of a decoding error is always less than pe, the uncoded value.

Exercise 6.25.1 (Solution on p. 296.)
Using MATLAB, calculate the probability a bit is received incorrectly with a three-fold repetition code. Demonstrate mathematically that this claim is indeed true: is 3pe^2 (1 − pe) + pe^3 ≤ pe?

So does channel coding really help? Because of the higher datarate imposed by the channel coder, the bit interval duration must be reduced by K/N = 1/3 in comparison to the no-channel-coding situation, which means the energy per bit Eb goes down by the same amount. This reduction increases the error probability pe of the digital channel relative to the value obtained when no channel coding is used. It is unlikely that the transmitter's power could be increased to compensate; such is the sometimes-unfriendly nature of the real world. The question thus becomes: is the effective error probability lower with channel coding even though the error probability for each transmitted bit is larger?
The answer is no: using a repetition code for channel coding cannot ultimately reduce the probability that a data bit is received in error. The ultimate reason is the repetition code's inefficiency: transmitting one data bit for every three transmitted is too inefficient for the amount of error correction provided, and the increase in pe caused by the reduced energy per bit overwhelms the correction the code provides. Does any error-correcting code reduce communication errors when real-world constraints are taken into account? The answer now is yes. To understand channel coding, we need to develop first a general framework for channel coding, and discover what it takes for a code to be maximally efficient: correct as many errors as possible using the fewest error correction bits as possible.

6.26 Block Channel Coding

The repetition code represents a special case of what is known as block channel coding. For every K bits that enter the block channel coder, it inserts an additional N − K error-correction bits to produce a block of N bits for transmission. We use the notation (N, K) to represent a given block code's parameters; the three-fold repetition code has K = 1 and N = 3. A block code's coding efficiency E equals the ratio K/N, and quantifies the overhead introduced by channel coding. The rate at which bits must be transmitted again changes: data bits b(n) emerge from the source coder at an average rate B̄(A) and exit the channel at a rate 1/E higher. We represent the fact that the bits sent through the digital channel operate at a different rate by using the index l for the channel-coded bit stream c(l). Note that the blocking (framing) imposed by the channel coder does not correspond to symbol boundaries in the bit stream b(n), especially when we employ variable-length source codes.

Exercise 6.26.1 (Solution on p. 296.)
In the three-fold repetition code, the energy per bit Eb is reduced by 1/3. Show that the resulting probability of a decoding error is larger than pe, the no-coding probability of error.
Recall that the majority-vote decoder errs if more than one error occurs within a received block.
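The comparison at the heart of Exercise 6.25.1 is easy to check numerically; the text suggests MATLAB, but the same computation in Python (function name mine) reads:

```python
def repetition_error(pe):
    """Majority-vote decoding error for the 3-fold repetition code:
    two or three of the three received bits must be flipped."""
    return 3 * pe**2 * (1 - pe) + pe**3

for pe in (0.25, 0.1, 0.01):
    print(pe, repetition_error(pe), repetition_error(pe) < pe)
```

For any fixed pe < 1/2 the coded error probability is indeed smaller. The text's point, however, is that the fair comparison must use the larger pe that results from cutting the energy per bit to one third, and then the repetition code loses.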

6.27 Error-Correcting Codes: Hamming Distance

So-called linear codes create error-correction bits by combining the data bits linearly. The phrase "linear combination" means here single-bit binary arithmetic:

0 ⊕ 0 = 0   1 ⊕ 1 = 0   0 ⊕ 1 = 1   1 ⊕ 0 = 1
0 · 0 = 0   1 · 1 = 1   0 · 1 = 0   1 · 0 = 0

Table 6.2

For example, let's consider the specific (3,1) error correction code described by the following coding table and, more concisely, by the succeeding matrix expression:

c(1) = b(1)
c(2) = b(1)
c(3) = b(1)

or c = Gb, where

G = col[1, 1, 1],  c = col[c(1), c(2), c(3)],  b = [b(1)]

The generator matrix G defines all block-oriented linear channel coders. The length-K (in this simple example K = 1) block of data bits is represented by the vector b, and the length-N output block of the channel coder, known as a codeword, by c.

As we consider other block codes, the simple idea of the decoder taking a majority vote of the received bits won't generalize easily. We need a broader view that takes into account the distance between codewords. A length-N codeword means that the receiver must decide among the 2^N possible datawords to select which of the 2^K codewords was actually transmitted. We define the Hamming distance between binary datawords c1 and c2, denoted by d(c1, c2), to be the minimum number of bits that must be "flipped" to go from one word to the other. For example, the distance between the codewords of our repetition code is 3 bits. We can express the Hamming distance as

d(c1, c2) = sum(c1 ⊕ c2)    (6.55)

Exercise 6.27.1 (Solution on p. 297.)
Show that adding the error vector col[1, 0, ..., 0] to a codeword flips the codeword's leading bit and leaves the rest unaffected.
In our table of binary arithmetic, we see that adding a 1 corresponds to flipping a bit. Furthermore, subtraction and addition are equivalent.
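Definition (6.55) translates directly into code: XOR marks exactly the positions where two words differ, and summing counts them. A minimal sketch (function name mine):

```python
def hamming_distance(c1, c2):
    """Number of bit positions in which two equal-length binary words differ,
    i.e. sum(c1 XOR c2)."""
    assert len(c1) == len(c2)
    return sum(b1 ^ b2 for b1, b2 in zip(c1, c2))

# The two codewords of the (3,1) repetition code are 3 bits apart:
print(hamming_distance([0, 0, 0], [1, 1, 1]))  # → 3
```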

The probability of one bit being flipped anywhere in a codeword is Npe (1 − pe)^(N−1). The number of errors the channel introduces equals the number of ones in the error vector e; the probability of any particular error vector decreases with the number of errors.

To perform decoding when errors occur, we want to find the codeword (one of the filled circles in Figure 6.22) that has the highest probability of occurring: the one closest to the one received. Note that if a dataword lies a distance of 1 from two codewords, it is impossible to determine which codeword was actually sent. This criterion means that if any two codewords are two bits apart, then the code cannot correct the channel-induced error. Thus, to have a code that can correct all single-bit errors, codewords must have a minimum separation of three. Our repetition code has this property.

Figure 6.22: [Three plots of the three-bit cube, the axes being bit positions in the data block.] In a (3,1) repetition code, only 2 of the possible 8 three-bit data blocks are codewords. We can represent these bit patterns geometrically. In the left plot, the filled circles represent the codewords [0 0 0] and [1 1 1], the only possible codewords; the unfilled circles are the remaining possible received words. The center plot shows that the distance between codewords is 3; because distance corresponds to flipping a bit, calculating the Hamming distance geometrically means following the axes rather than going "as the crow flies". The right plot shows the datawords that result when one error occurs as the codeword goes through the channel. The three datawords are unit distance from the original codeword. Note that the received dataword groups do not overlap, which means the code can correct all single-bit errors.

Introducing code bits increases the probability that any bit arrives in error (because bit interval durations decrease), but using a well-designed error-correcting code corrects bit reception errors. Do we win or lose by using an error-correcting code? The answer is that we can win if the code is well-designed. The (3,1) repetition code demonstrates that we can lose (Exercise 6.26.1). To develop good channel coding, we need to develop first a general framework for channel codes and discover what it takes for a code to be maximally efficient: correct as many errors as possible using the fewest error correction bits as possible (making the efficiency K/N as large as possible). We also need a systematic way of finding the codeword closest to any received dataword.

A much better code than our (3,1) repetition code is the following (7,4) code:

c(1) = b(1)
c(2) = b(2)
c(3) = b(3)
c(4) = b(4)
c(5) = b(1) ⊕ b(2) ⊕ b(3)
c(6) = b(2) ⊕ b(3) ⊕ b(4)
c(7) = b(1) ⊕ b(2) ⊕ b(4)

where the generator matrix is

G = [ 1 0 0 0
      0 1 0 0
      0 0 1 0
      0 0 0 1
      1 1 1 0
      0 1 1 1
      1 1 0 1 ]

In this (7,4) code, 2^4 = 16 of the 2^7 = 128 possible blocks arriving at the channel decoder correspond to error-free transmission and reception.

The error correction capability of a channel code is limited by how close together any two error-free blocks are. Bad codes would produce blocks close together, which would result in ambiguity when assigning a block of data bits to a received block. The quantity to examine in designing error correction codes is the minimum distance between codewords:

dmin = min d(ci, cj),  ci ≠ cj    (6.56)

To have a channel code that can correct all single-bit errors, dmin ≥ 3.

Exercise 6.27.2 (Solution on p. 297.)
Suppose we want a channel code to have an error-correction capability of n bits. What must the minimum Hamming distance between codewords dmin be?

How do we calculate the minimum distance between codewords? Because we have 2^K codewords, the number of possible unique pairs equals 2^(K−1) (2^K − 1), which can be a large number. Recall that our channel coding procedure is linear, with c = Gb; therefore ci ⊕ cj = G(bi ⊕ bj). Because bi ⊕ bj always yields another block of data bits, we find that the difference between any two codewords is another codeword! Thus, to find dmin we need only compute the number of ones that comprise all non-zero codewords. Finding these codewords is easy once we examine the coder's generator matrix: the columns of G are codewords (why is this?), and all codewords can be found from all possible sums of the columns. To find dmin, we need only count the number of bits in each column and in the sums of columns. For our example (7,4) code, G's first column has three ones, the next one four, and the last two three. Considering sums of column pairs next, note that because the upper portion of G is an identity matrix, the corresponding upper portion of all column-pair sums must have exactly two bits. Because the bottom portion of each column differs from the other columns in at least one place, the bottom portion of a sum of columns must have at least one bit. Triple and higher-order sums have at least three bits, because the identity upper portion of the sum of three distinct columns has exactly three ones. Thus, dmin = 3, and we have a channel code that can correct all occurrences of one error within a received 7-bit block.

6.28 Error-Correcting Codes: Channel Decoding

Because the idea of channel coding has merit (so long as the code is efficient), let's develop a systematic procedure for performing channel decoding. One way of checking for errors is to try recreating the error correction bits from the data portion of the received block ĉ. Using matrix notation, we make this calculation by multiplying the received block ĉ by a matrix H known as the parity check matrix.
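The dmin = 3 conclusion can be verified by brute force: thanks to linearity, enumerating all 16 codewords and taking the smallest non-zero weight gives the minimum distance directly. A short sketch (function name mine):

```python
from itertools import product

# Generator matrix of the (7,4) code from the text; rows give c(1)...c(7).
G = [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1],
     [1,1,1,0], [0,1,1,1], [1,1,0,1]]

def encode(b):
    """Codeword c = Gb over GF(2)."""
    return [sum(g * x for g, x in zip(row, b)) % 2 for row in G]

# Linearity: d_min equals the smallest weight among non-zero codewords.
codewords = [encode(list(b)) for b in product([0, 1], repeat=4)]
d_min = min(sum(c) for c in codewords if any(c))
print(len(codewords), d_min)  # → 16 3
```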

For example. values. 0. . 1) T are both codewords in the example (7. pe is Exercise 6. 0. . Because the result of the product is a length.9> . is a codeword. error-correction portion of G and attaching to it an identity matrix. 0. 0. 1. 0. with e a vector of binary values having a 1 in those positions where a bit error occurred. 0. 1. then 4. 3. for all the columns of Hc = 0 G.4) code. but such multiple error patterns are very unlikely to occur. To perform our ^ channel decoding. consult a table of length-(N pattern that could have resulted in the non-zero result. 0. . Our receiver uses the principle of maximum probability: An error-free transmission is much more likely than one with three errors if the bit-error probability small enough. 1. 298. − K) binary vectors to associate them with the minimal ^ error 2. Such an error pattern cannot be detected by our coding strategy.(N − K) vector of binary N −K have 2 − 1 non-zero values that correspond to non-zero error patterns e. (1. and sixth bits).3 How small must (Solution on p. 0. a length-(N − K) zero-valued vector. Consequently.) pe be so that a single-bit error is more likely to occur than a triple-bit error? Available for free at Connexions <http://cnx. The second results when the rst one experiences three bit errors (rst. Does this property guarantee that all codewords also When the received bits ^ ^ HG = 0 an (N − K) × K satisfy Hc = 0? c do not form a codeword. Select the data bits from the corrected word to produce the received bit sequence The phrase b (n). 5. if non-zero. add the error vector thus obtained to the received vector c to correct the error (because ^ c ⊕ e ⊕ e = c). (Solution on p. the rst column of calculations show that multiplying this vector by H Exercise 6.265 the generator matrix G by taking the bottom. 0. compute (conceptually at least) H c. 1) T and (0. indicating the presence of one or more errors induced by the digital channel. T Simple (Solution on p. 1. 297. 0. 1. . 1. 
In other words. second. 1) .2 Show that adding the error vector leaves the rest unaected. If no digital channel errors occurwe receive a codeword c= c  then H c= 0. and the result of multiplying this matrix with a (N − K) binary vector.57) Lower portion of G Identity (N − K) × N . ^ For example. H c ^ does not equal zero.4) code. For our (7.    1   H= 0    1 The parity check matrix thus has size received word is a lengthso that ^  1 1 1 1 1 0 0 1 1 1 0 0 0 1 0  0    0    1  (6. we can ^ (1. if this result is zero. . . (1. 298.28. 0) T to a codeword ips the codeword's leading bit and H c= Hc ⊕ e = He. Because the presence of an error can be mathematically written as c= c ⊕ e . no detectable or correctable error occurred.) Exercise 6.) error minimal in the third item raises the point that a double (or triple or quadruple occurring during the transmission/reception of one codeword can create the same received word as a singlebit error or no error in another codeword.28.28. .1 Show that results in G. show that matrix of zeroes.
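The minimum-distance calculation just described can be checked directly. The following sketch is our own plain Python (the function name `encode` is ours, not the text's); it forms all sixteen codewords c = Gb over GF(2) and confirms that dmin = 3:

```python
# Build all 16 codewords of the (7,4) code and find the minimum distance.
from itertools import product

G = [  # generator matrix from the text: identity on top, parity rows below
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 1],
]

def encode(b):
    """c = Gb, with arithmetic over GF(2)."""
    return tuple(sum(g * bi for g, bi in zip(row, b)) % 2 for row in G)

codewords = [encode(b) for b in product([0, 1], repeat=4)]

# Linearity means d_min is simply the smallest weight among non-zero codewords.
d_min = min(sum(c) for c in codewords if any(c))
print(len(codewords), d_min)  # 16 3
```

Because the code is linear, scanning the weights of the non-zero codewords is equivalent to comparing every pair of codewords, which is what makes this enumeration so short.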

6.29 Error-Correcting Codes: Hamming Codes

For the (7,4) example, we have 2^(N−K) − 1 = 7 error patterns that can be corrected. We start with single-bit error patterns, and multiply them by the parity check matrix. If we obtain unique answers, we are done; if two or more error patterns yield the same result, we can try double-bit error patterns. In our case, single-bit error patterns give a unique result.

Parity Check Matrix

e         He
1000000   101
0100000   111
0010000   110
0001000   011
0000100   100
0000010   010
0000001   001

Table 6.3

This corresponds to our decoding table: We associate the parity check matrix multiplication result with the error pattern and add this to the received word. If more than one error occurs (unlikely though it may be), this "error correction" strategy usually makes the error worse in the sense that more bits are changed from what was transmitted.

As with the repetition code, we must question whether our (7,4) code's error correction capability compensates for the increased error probability due to the necessitated reduction in bit energy. Because the bit stream emerging from the source decoder is segmented into four-bit blocks, the fair way of comparing coded and uncoded transmission is to compute the probability of block error: the probability that any bit in a block remains in error despite error correction and regardless of whether the error occurs in the data or in coding bits. Figure 6.23 (Probability of error occurring) shows that if the signal-to-noise ratio is large enough, channel coding yields a smaller error probability. Clearly, our (7,4) channel code does yield smaller error rates, and is worth the additional systems required to make it work.

[Figure 6.23 (Probability of error occurring): probability of block error plotted against signal-to-noise ratio for uncoded (K = 4) transmission and for the (7,4) code.] The probability of an error occurring in transmitted K = 4 data bits equals 1 − (1 − pe)^4, as (1 − pe)^4 equals the probability that the four bits are received without error. The upper curve displays how this probability of an error anywhere in the four-bit block varies with the signal-to-noise ratio. When a (7,4) single-bit error correcting code is used, the transmitter reduced the energy it expends during a single-bit transmission by 4/7. The channel decoder corrects this type of error, and all data bits in the block are received correctly.

Now the probability of any bit in the seven-bit block being in error after error correction equals 1 − (1 − pe)^7 − 7pe(1 − pe)^6, where pe is the probability of a bit error occurring in the channel when channel coding occurs. Here 7pe(1 − pe)^6 equals the probability of exactly one of the seven bits emerging from the channel in error.

Note that our (7,4) code has the length and number of data bits that perfectly fits correcting single-bit errors. This pleasant property arises because the number of error patterns that can be corrected, 2^(N−K) − 1, equals the codeword length N. Codes that have 2^(N−K) − 1 = N are known as Hamming codes, and the following table (Table 6.4: Hamming Codes) provides the parameters of these codes. Hamming codes are the simplest single-bit error correction codes, and the generator/parity check matrix formalism for channel coding and decoding works for them.

Hamming Codes

N              3     7     15    31    63    127
K              1     4     11    26    57    120
E (efficiency) 0.33  0.57  0.73  0.84  0.90  0.94

Table 6.4

As the block length becomes larger, more error correction will be needed. Unfortunately, for such large blocks, the probability of a multiple-bit error can exceed that of a single-bit error unless the channel single-bit error probability pe is very small. Consequently, we need to enhance the code's error correcting capability by adding double as well as single-bit error correction.

Exercise 6.29.1   (Solution on p. 298.)
What must the relation between N and K be for a code to correct all single- and double-bit errors with a "perfect fit"?

6.30 Noisy Channel Coding Theorem

Do codes exist that can correct all errors? Perhaps the crowning achievement of Claude Shannon's creation of information theory answers this question. His result comes in two complementary forms: the Noisy Channel Coding Theorem and its converse.

6.30.1 Noisy Channel Coding Theorem

Let E denote the efficiency of an error-correcting code: the ratio of the number of data bits to the total number of bits used to represent them. The capacity measures the overall error characteristics of a channel (the smaller the capacity, the more frequently errors occur), and generally a channel's capacity changes with the signal-to-noise ratio: as one increases or decreases, so does the other. For a binary symmetric channel, the capacity is given by

C = 1 + pe log2 (pe) + (1 − pe) log2 (1 − pe) bits/transmission   (6.58)

If the efficiency is less than the capacity of the digital channel, an error-correcting code exists that has the property that as the length of the code increases, the probability of an error occurring in the decoded block approaches zero.

lim (N → ∞) Pr[block error] = 0 , E < C   (6.59)

6.30.2 Converse to the Noisy Channel Coding Theorem

If E > C, the probability of an error in a decoded block must approach one regardless of the code that might be chosen.

lim (N → ∞) Pr[block error] = 1   (6.60)

These results mean that it is possible to transmit digital information over a noisy channel (one that introduces errors) and receive the information without error if the code is sufficiently inefficient compared to the channel's characteristics. An overly efficient error-correcting code will not build in enough error correction capability to counteract channel errors. Figure 6.24 (capacity of a channel) shows how capacity varies with error probability. For example, our (7,4) Hamming code has an efficiency of 0.57, and codes having the same efficiency but longer block sizes can be used on additive noise channels where the signal-to-noise ratio exceeds 0 dB.

This result astounded communication engineers when Shannon published it in 1948. Analog communication always yields a noisy version of the transmitted signal; in digital communication, error correction can be powerful enough to correct all errors as the block length increases. The key for this capability to exist is that the code's efficiency be less than the channel's capacity.
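The decoding procedure of Section 6.28 and the table of Section 6.29 are easy to exercise in code. This sketch is our own Python (helper names are ours); it computes the syndrome Hĉ mod 2 using the H of equation (6.57), rebuilds the decoding table, and corrects a single-bit error:

```python
# Syndrome decoding for the (7,4) code: look up the error pattern implied by
# the non-zero syndrome and flip the corresponding received bit.
H = [  # parity check matrix: [lower portion of G | identity]
    [1, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

def syndrome(c):
    """H c (mod 2), a length-(N-K) binary vector."""
    return tuple(sum(h * ci for h, ci in zip(row, c)) % 2 for row in H)

# Decoding table (Table 6.3): the syndrome of a single-bit error in position i
# is simply the i-th column of H.
table = {}
for i in range(7):
    e = [0] * 7
    e[i] = 1
    table[syndrome(e)] = i

def correct(c):
    s = syndrome(c)
    if any(s):              # non-zero syndrome: flip the implied bit
        c = list(c)
        c[table[s]] ^= 1
    return tuple(c)

codeword = (1, 0, 0, 0, 1, 0, 1)   # first column of G, hence a codeword
received = (1, 0, 0, 1, 1, 0, 1)   # same word with its fourth bit flipped
print(syndrome(codeword), correct(received) == codeword)  # (0, 0, 0) True
```

Since each unit error pattern maps to a distinct column of H, the seven table entries are unique, which is exactly the "perfect fit" property of a Hamming code.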

[Figure 6.24 (capacity of a channel): The capacity per transmission through a binary symmetric channel is plotted as a function of the digital channel's error probability (upper) and as a function of the signal-to-noise ratio for a BPSK signal set (lower).]

6.31 Capacity of a Channel

In addition to the Noisy Channel Coding Theorem and its converse (Section 6.30), Shannon also derived the capacity for a bandlimited (to W Hz) additive white noise channel. For this case, the signal set is unrestricted, even to the point that more than one bit can be transmitted each "bit interval." Instead of constraining channel code efficiency, the revised Noisy Channel Coding Theorem states that some error-correcting code exists such that as the block length increases, error-free transmission is possible if the source coder's datarate, B(A) R, is less than capacity.

C = W log2 (1 + SNR) bits/s   (6.61)

This result sets the maximum datarate of the source coder's output that can be transmitted through the bandlimited channel with no error. Shannon's proof of his theorem was very clever, and did not indicate what this code might be; it has never been found. Codes such as the Hamming code work quite well in practice to keep error rates low, but they remain greater than zero. Until the "magic" code is found, more important in communication system design is the converse. It states that if your data rate exceeds capacity, errors will overwhelm you no matter what channel coding you use. For this reason, capacity calculations are made to understand the fundamental limits on transmission rates. The bandwidth restriction arises not so much from channel properties, but from spectral regulation, especially for wireless channels.
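Both capacity formulas are simple to evaluate numerically. The sketch below is our own Python, not part of the text; it implements the binary symmetric channel capacity of (6.58) and the bandlimited capacity of (6.61), trying a 3 kHz channel with a 30 dB (ratio 10^3) signal-to-noise ratio:

```python
# Evaluate the two capacity definitions: bits/transmission and bits/second.
from math import log2

def bsc_capacity(pe):
    """C = 1 + pe*log2(pe) + (1 - pe)*log2(1 - pe), for 0 < pe < 1."""
    return 1 + pe * log2(pe) + (1 - pe) * log2(1 - pe)

def bandlimited_capacity(W, snr):
    """C = W * log2(1 + SNR), with SNR given as a ratio, not in dB."""
    return W * log2(1 + snr)

print(bandlimited_capacity(3e3, 1e3))  # about 29901 bits/s
print(bsc_capacity(0.5))               # 0.0: a coin-flip channel carries nothing
```

Note that `bsc_capacity` goes to 1 bit/transmission as pe approaches 0 or 1 and to zero at pe = 1/2, matching the upper panel of Figure 6.24.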

Exercise 6.31.1   (Solution on p. 298.)
The first definition of capacity applies only for binary symmetric channels, having units of bits/transmission. The second result states capacity more generally, having units of bits/second. How would you convert the first definition's result into units of bits/second?

Example 6.5
The telephone channel has a bandwidth of 3 kHz and a signal-to-noise ratio exceeding 30 dB (at least they promise this much). The maximum data rate a modem can produce for this wireline channel and hope that errors will not become rampant is the capacity.

C = 3 × 10^3 log2 (1 + 10^3) = 29.901 kbps   (6.62)

Thus, the so-called 33 kbps modems operate right at the capacity limit.

Note that the data rate allowed by the capacity can exceed the bandwidth when the signal-to-noise ratio exceeds 0 dB. Our results for BPSK and FSK indicated the bandwidth they require exceeds 1/T. What kind of signal sets might be used to achieve capacity? Modem signal sets send more than one bit/transmission; one of the most popular techniques is multi-level signaling. Here, we can transmit several bits during one transmission interval by representing each bit pattern by some signal's amplitude. For example, two bits can be sent with a signal set comprised of a sinusoid with amplitudes of ±A and ±A/2. Although it is not shown here, see this problem (Problem 6.15) for a comparison.

6.32 Comparison of Analog and Digital Communication

Analog communication systems, amplitude modulation (AM) radio being a typifying example, can inexpensively communicate a bandlimited analog signal from one location to another (point-to-point communication) or from one point to many (broadcast). Although analog systems are less expensive in many cases than digital ones for the same application, digital systems offer much more efficiency, better performance, and much greater flexibility.

• Efficiency: The Source Coding Theorem allows quantification of just how complex a given message source is and allows us to exploit that complexity by source coding (compression). In analog communication, the only parameters of interest are message bandwidth and amplitude. We cannot exploit signal structure to achieve a more efficient communication system.

• Performance: Because of the Noisy Channel Coding Theorem, we have a specific criterion by which to formulate error-correcting codes that can bring us as close to error-free transmission as we might want. Even though we may send information by way of a noisy channel, digital schemes are capable of error-free transmission while analog ones cannot overcome channel disturbances. In analog communication, the coherent receiver provides the largest possible signal-to-noise ratio for the demodulated message; an analysis (Section 6.12) of this receiver thus indicates that some residual error will always be present in an analog system's output.

• Flexibility: Digital communication systems can transmit real-valued discrete-time signals, which could be analog ones obtained by analog-to-digital conversion, and symbolic-valued ones (computer data, for example). Any signal that can be transmitted by analog means can be sent by digital means, with the only issue being the number of bits used in A/D conversion (how accurately do we need to represent signal amplitude). Images can be sent by analog means (commercial television), but better communication performance occurs when we use digital systems (HDTV). In addition to digital communication's ability to transmit a wider variety of signals than analog systems, point-to-point digital systems can be organized into global (and beyond as well) systems that provide efficient and flexible information transmission. Even analog-based networks, such as the telephone system, employ modern computer networking ideas rather than the purely analog systems of the past.

Consequently, with the increased speed of digital computers, the development of increasingly efficient algorithms, and the ability to interconnect computers to form a communications infrastructure, digital communication is now the best choice for many situations.

6.33 Communication Networks

Communication networks elaborate the Fundamental Model of Communications (Figure 1.3: Fundamental model of communication). The model shown in Figure 6.25 describes point-to-point communications well, wherein the link between transmitter and receiver is straightforward, and they have the channel to themselves. One modern example of this communications mode is the modem that connects a personal computer with an information server via a telephone line. The key aspect, some would say flaw, of this model is that the channel is dedicated: Only one communications link through the channel is allowed for all time. Regardless whether we have a wireline or wireless channel, communication bandwidth is precious, and if it could be shared without significant degradation in communications performance (measured by signal-to-noise ratio for analog signal transmission and by bit-error probability for digital transmission) so much the better.

[Figure 6.25 (Communication Network): a message source connected through interconnected nodes to a message sink.] The prototypical communications network, whether it be the postal service, cellular telephone, or the Internet, consists of nodes interconnected by links. Messages formed by the source are transmitted within the network by dynamic routing. Two routes are shown. The longer one would be used if the direct link were disabled or congested.

The idea of a network first emerged with perhaps the oldest form of organized communication: the postal service. Most communication networks, even modern ones, share many of its aspects.

• A user writes a letter, serving in the communications context as the message source.
• This message is sent to the network by delivery to one of the network's public entry points. Entry points in the postal case are mailboxes, post offices, or your friendly mailman or mailwoman picking up the letter.
• The communications network delivers the message in the most efficient (timely) way possible, trying not to corrupt the message while doing so.
• The message arrives at one of the network's exit points, and is delivered to the recipient (what we have termed the message sink).

Exercise 6.33.1   (Solution on p. 298.)
Develop the network model for the telephone system, making it as analogous as possible with the postal service-communications network metaphor.

What is most interesting about the network system is the ambivalence of the message source and sink about how the communications link is made. What they do care about is message integrity and communications efficiency.

Furthermore, today's networks use heterogeneous links. Communication paths that form the Internet use wireline, optical fiber, and satellite communication links.

The first electrical communications network was the telegraph. Here the network consisted of telegraph operators who transmitted the message efficiently using Morse code and routed the message so that it took the shortest possible path to its destination while taking into account internal network failures (downed lines, drunken operators). From today's perspective, the fact that this nineteenth century system handled digital communications is astounding. Morse code, which assigned a sequence of dots and dashes to each letter of the alphabet, served as the source coding algorithm. The signal set consisted of a short and a long pulse. Rather than a matched filter, the receiver was the operator's ear, and he wrote the message (translating from received bits to symbols).

Note: Because of the need for a comma between dot-dash sequences to define letter (symbol) boundaries, the average number of bits/symbol, as described in Subtleties of Coding (Example 6.4), exceeded the Source Coding Theorem's upper bound.

Internally, communication networks do have point-to-point communication links between network nodes well described by the Fundamental Model of Communications. However, many messages share the communications channel between nodes using what we call time-domain multiplexing: Rather than the continuous communications mode implied in the Model as presented, message sequences are sent, sharing in time the channel's capacity. At a grander viewpoint, the network must route messages (decide what nodes and links to use) based on destination information (the address) that is usually separate from the message information. Routing in networks is necessarily dynamic: The complete route taken by messages is formed as the network handles the message, with nodes relaying the message having some notion of the best possible path at the time of transmission. Note that no omnipotent router views the network as a whole and pre-determines every message's route. Modern communication networks strive to achieve the most efficient (timely) and most reliable information delivery system possible.

6.34 Message Routing

This content is available online at <http://cnx.org/content/m0076/2.9/>.

Focusing on electrical networks, most analog ones make inefficient use of communication links because truly dynamic routing is difficult, if not impossible, to obtain. In radio networks, such as commercial television, each station has a dedicated portion of the electromagnetic spectrum, and this spectrum cannot be shared with other stations or used in any other than the regulated way. The telephone network is more dynamic: routing takes place when you place the call, but once it establishes a call the path through the network is fixed; the route is fixed once the phone starts ringing. The users of that path control its use, and may not make efficient use of it (long pauses while one person thinks, for example), and telephone network customers would be quite upset if the telephone company momentarily disconnected the path so that someone else could use it. This kind of connection through a network, fixed for the duration of the communication session, is known as a circuit-switched connection.

During the 1960s, it was becoming clear that not only was digital communication technically superior, but also that the wide variety of communication modes (computer login, file transfer, and electronic mail) needed a different approach than point-to-point. The notion of computer networks was born then, and what was then called the ARPANET, now called the Internet, was born. Computer networks elaborate the basic network model by subdividing messages into smaller chunks called packets (Figure 6.26), each of which has its own address and is routed independently of others. The rationale for the network enforcing smaller transmissions was that large file transfers would consume network resources all along the route, and, because of the long transmission time, a communication failure might require retransmission of the entire file. By creating packets, the network can better manage congestion. The analogy is that the postal service, rather than sending a long letter in the envelope you provide, opens the envelope, places each page in a


separate envelope, and using the address on your envelope, addresses each page's envelope accordingly, and mails them separately. The network does need to make sure packet sequence (page numbering) is maintained, and the network exit point must reassemble the original message accordingly.

Receiver Address Transmitter Address Data Length (bytes) Data

Error Check
Figure 6.26:
Long messages, such as les, are broken into separate packets, then transmitted over

computer networks. A packet, like a letter, contains the destination address, the return address (transmitter address), and the data. The data includes the message part and a sequence number identifying its order in the transmitted message.

Communications networks are now categorized according to whether they use packets or not. A system like the telephone network is said to be

circuit switched:

The network establishes a 

xed route that lasts

the entire duration of the message. Circuit switching has the advantage that once the route is determined, the users can use the capacity provided them however they like. Its main disadvantage is that the users may not use their capacity eciently, clogging network links and nodes along the way.


networks continuously monitor network utilization, and route messages accordingly. Thus, messages can, on the average, be delivered eciently, but the network cannot guarantee a specic amount of capacity to the users.

6.35 Network architectures and interconnection

The network structure, its architecture (Figure 6.25), typifies what are known as wide area networks (WANs). The nodes, and users for that matter, are spread geographically over long distances. "Long" has no precise definition, and is intended to suggest that the communication links vary widely. The Internet is certainly the largest WAN, spanning the entire earth and beyond. Local area networks, LANs, employ a single communication link and special routing. Perhaps the best known LAN is Ethernet. LANs connect to other LANs and to wide area networks through special nodes known as gateways (Figure 6.27).

In the Internet, a computer's address consists of a four byte sequence, which is known as its IP address (Internet Protocol address). An example address is written with each byte separated by a period; the first two bytes specify the computer's domain (here Rice University). Computers are also addressed by a more human-readable form: a sequence of alphabetic abbreviations representing institution, type of institution, and computer name. A given computer has both names, alphabetic and numerical. Data transmission on the Internet requires the numerical form. So-called name servers translate between alphabetic and numerical forms, and the transmitting computer requests this translation before the message is sent to the network.
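The four-byte numerical address can be illustrated with a short conversion sketch. This is our own Python, and the dotted address used below is a made-up example, not one from the text:

```python
# Convert between the dotted decimal text form of an IP address and the raw
# 4-byte sequence that packets actually carry.
def ip_to_bytes(dotted):
    parts = [int(p) for p in dotted.split(".")]
    assert len(parts) == 4 and all(0 <= p <= 255 for p in parts)
    return bytes(parts)

def bytes_to_ip(raw):
    return ".".join(str(b) for b in raw)

addr = ip_to_bytes("192.168.0.1")  # hypothetical example address
print(bytes_to_ip(addr))           # 192.168.0.1
```

A name server performs the analogous translation one level up, from the alphabetic name to this numerical form.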




[Figure 6.27 diagram: LANs A-B and C-D connect through gateways to a wide-area network.]

Figure 6.27: The gateway serves as an interface between local area networks and the Internet. The two shown here translate between LAN and WAN protocols; one of these also interfaces between two LANs, presumably because together the two LANs would be geographically too dispersed.


6.36 Ethernet


[Figure 6.28 diagram: a coaxial cable of length L, terminated at each end by its characteristic impedance Z0, with Computer A and Computer B attached through transceivers.]

Figure 6.28: The Ethernet architecture consists of a single coaxial cable terminated at either end by a resistor having a value equal to the cable's characteristic impedance. Computers attach to the Ethernet through an interface known as a transceiver because it sends as well as receives bit streams represented as analog voltages.

Ethernet uses as its communication medium a single length of coaxial cable (Figure 6.28). This cable serves as the "ether", through which all digital data travel. Electrically, computers interface to the coaxial cable through a device known as a transceiver. This device is capable of monitoring the voltage appearing between the core conductor and the shield as well as applying a voltage to it. Conceptually it consists of two op-amps, one applying a voltage corresponding to a bit stream (transmitting data) and another serving as an amplifier of Ethernet voltage signals (receiving data). The signal set for Ethernet resembles that shown in BPSK Signal Sets, with one signal the negative of the other. Computers are attached in parallel, resulting in the circuit model for Ethernet shown in Figure 6.29.


Exercise 6.36.1   (Solution on p. 298.)
From the viewpoint of a transceiver's sending op-amp, what is the load it sees and what is the transfer function between this output voltage and some other transceiver's receiving circuit? Why should the output resistor be large?





[Figure 6.29 diagram: top, a transceiver modeled as a source x(t) in series with an output resistance Rout driving the coaxial cable; bottom, the equivalent circuit with all transceivers' sources in parallel across the cable impedance Z0.]

Figure 6.29: The top circuit expresses a simplified circuit model for a transceiver. The output resistance Rout must be much larger than Z0 so that the sum of the various transmitter voltages add to create the Ethernet conductor-to-shield voltage that serves as the received signal r(t) for all transceivers. In this case, the equivalent circuit shown in the bottom circuit applies.

No one computer has more authority than any other to control when and how messages are sent. Without scheduling authority, you might well wonder how one computer sends to another without the (large) interference that the other computers would produce if they transmitted at the same time. The innovation of Ethernet is that computers schedule themselves by a random-access method. This method relies on the fact that all packets transmitted over the coaxial cable can be received by all transceivers, regardless of which computer might actually be the intended recipient. In communications terminology, Ethernet directly supports broadcast. Each computer goes through the following steps to send a packet.

1. The computer senses the voltage across the cable to determine if some other computer is transmitting.
2. If another computer is transmitting, wait until the transmissions finish and go back to the first step. If the cable has no transmissions, begin transmitting the packet.
3. If the receiver portion of the transceiver determines that no other computer is also sending a packet, continue transmitting the packet until completion.
4. On the other hand, if the receiver senses interference from another computer's transmissions, immediately cease transmission, waiting a random amount of time to attempt the transmission again (go to step 1) until only one computer transmits and the others defer.

The condition wherein two (or more) computers' transmissions interfere with others is known as a collision.
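The four steps above can be sketched as code. This is our own Python sketch: the helpers `channel_busy` and `sense_collision` are hypothetical stand-ins for the transceiver hardware, and the growth of the random wait with each attempt is one common design choice, not something the text specifies:

```python
# A sketch of the Ethernet random-access transmission steps.
import random

def send_packet(channel_busy, sense_collision, max_tries=16,
                rng=random.Random(0)):
    for attempt in range(max_tries):
        while channel_busy():        # steps 1-2: defer while someone transmits
            pass
        if not sense_collision():    # step 3: no interference, packet goes out
            return attempt + 1       # report how many tries were needed
        # step 4: collision -- wait a random time before trying again
        backoff = rng.random() * 2 ** attempt
        # (a real transceiver would idle for `backoff` slot times here)
    raise RuntimeError("too many collisions")

# Two collisions, then a clean transmission:
outcomes = iter([True, True, False])
print(send_packet(lambda: False, lambda: next(outcomes)))  # 3
```

Each call reports how many attempts the packet needed, which is the quantity averaged in the collision-resolution analysis below.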


Available for free at Connexions <>




The reason two computers waiting to transmit may not sense the other's transmission immediately arises because of the finite propagation speed of voltage signals through the coaxial cable. The longest time any computer must wait to determine if its transmissions do not encounter interference is 2L/c, where L is the coaxial cable's length. The maximum-length specification for Ethernet is 1 km. Assuming a propagation speed of 2/3 the speed of light, this time interval is more than 10 µs. As analyzed in Problem 22 (Problem 6.31), the number of these time intervals required to resolve the collision is, on the average, less than two!

Exercise 6.36.2   (Solution on p. 298.)
Why does the factor of two enter into this equation? (Consider the worst-case situation of two transmitting computers located at the Ethernet's ends.)

Thus, despite not having separate communication paths among the computers to coordinate their transmissions, the Ethernet random access protocol allows computers to communicate with only a slight degradation in efficiency, as measured by the time taken to resolve collisions relative to the time the Ethernet is used to transmit information.

Pmin . The time required to transmit such Pmin , where C is the Ethernet's capacity in bps. Ethernet now comes in two dierent types, C each with individual specications, the most distinguishing of which is capacity: 10 Mbps and 100 Mbps. If
A subtle consideration in Ethernet is the minimum packet size Pmin. The time required to transmit such packets equals Pmin/C, where C is the Ethernet's capacity in bps. Ethernet now comes in two different types, each with individual specifications, the most distinguishing of which is capacity: 10 Mbps and 100 Mbps. If the minimum transmission time is such that the beginning of the packet has not propagated the full length of the Ethernet before the end-of-transmission, it is possible that two computers will begin transmission at the same time and, by the time their transmissions cease, the other's packet will not have propagated to the other. In this case, computers in-between the two will sense a collision, which renders both computers' transmissions senseless to them, without the two transmitting computers knowing a collision has occurred at all! For Ethernet to succeed, we must have the minimum packet transmission time exceed twice the voltage propagation time:

Pmin C


2L c or

twice the voltage

Pmin >
200 bits.

2LC c

Thus, for the 10 Mbps Ethernet having a 1 km maximum length specication, the minimum packet size is
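This bound is easy to evaluate numerically. The sketch below (ours, not from the text) computes the worst-case collision window 2L/c and the resulting minimum packet size for both Ethernet capacities, assuming the 2/3-of-light propagation speed used above:

```python
# Evaluate the minimum-packet-size bound Pmin > 2*L*C/c for Ethernet.
c = 2e8    # assumed propagation speed: roughly 2/3 the speed of light (m/s)
L = 1000   # maximum cable length specification (m)

round_trip = 2 * L / c                 # worst-case collision window (s)
for C in (10e6, 100e6):                # Ethernet capacities (bps)
    p_min = C * round_trip             # bits still "on the wire"
    print(f"C = {C / 1e6:.0f} Mbps: window = {round_trip * 1e6:.0f} us, "
          f"Pmin > {p_min:.0f} bits")
# The exact bit count depends on the assumed propagation speed; the text's
# 200-bit figure corresponds to an effective speed of 1e8 m/s.
```

Note that the round-trip window (10 µs for a 1 km cable) is the same quantity discussed at the start of this section.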

Exercise 6.36.3 (Solution on p. 298.)
The 100 Mbps Ethernet was designed more recently than the 10 Mbps alternative. To maintain the same minimum packet size as the earlier, slower version, what should its length specification be? Why should the minimum packet size remain the same?

6.37 Communication Protocols


The complexity of information transmission in a computer network (reliable transmission of bits across a channel, routing, and directing information to the correct destination within the destination computer's operating system) demands an overarching concept of how to organize information delivery. No unique set of rules satisfies the various constraints communication channels and network organization place on information transmission. For example, random access issues in Ethernet are not present in wide-area networks such as the Internet. A protocol is a set of rules that governs how information is delivered. For example, to use the telephone network, the protocol is to pick up the phone, listen for a dial tone, dial a number having a specific number of digits, wait for the phone to ring, and say hello. In radio, the station uses amplitude or frequency modulation with a specific carrier frequency and transmission bandwidth, and you know to turn on the radio and tune in the station. In technical terms, no one protocol or set of protocols can be used for any communication situation. Be that as it may, communication engineers have found that a common thread runs through the organization of the various protocols.

This content is available online at <>.

Available for free at Connexions <>

This grand design of information transmission organization runs through all modern networks today. What has been defined as a networking standard is a layered, hierarchical protocol organization. As shown in Figure 6.30 (Protocol Picture), protocols are organized by function and level of detail.

Protocol Picture
(Figure: the ISO protocol stack, from Application, Presentation, Session, Transport, Network, and Data Link down to Physical, with example protocols http, telnet, tcp, ip, ecc, and signal set; the level of detail increases toward the bottom.)

ISO Network Protocol Standard
Figure 6.30: Protocols are organized according to the level of detail required for information transmission. Protocols at the lower levels (shown toward the bottom) concern reliable bit transmission. Higher level protocols concern how bits are organized to represent information, what kind of information is defined by bit sequences, what software needs the information, and how the information is to be interpreted. Bodies such as the IEEE (Institute of Electrical and Electronics Engineers) and the ISO (International Standards Organization) define standards such as this. Despite being a standard, it does not constrain protocol implementation so much that innovation and competitive individuality are ruled out.

Segregation of information transmission, manipulation, and interpretation into these categories directly affects how communication systems are organized, and what role(s) software systems fulfill. Although not thought about in this way in earlier times, this organizational structure governs the way communication engineers think about all communication systems, from radio to the Internet.
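The layering idea can be sketched in code: each layer wraps the payload handed down from the layer above with its own header, and the receiver strips those headers in reverse order. The layer names below follow the stack in Figure 6.30, but the header contents are invented purely for illustration; real protocols define their own formats.

```python
# A toy sketch of layered protocol encapsulation (headers are invented
# for illustration; real protocols define their own binary formats).
LAYERS = ["application", "transport", "network", "data link"]

def encapsulate(payload: bytes) -> bytes:
    """Wrap the payload with one header per layer, top to bottom."""
    for layer in LAYERS:
        payload = f"[{layer}]".encode() + payload
    return payload

def decapsulate(frame: bytes) -> bytes:
    """Strip headers outermost-first, mirroring what a receiver does."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]".encode()
        assert frame.startswith(header), f"missing {layer} header"
        frame = frame[len(header):]
    return frame

frame = encapsulate(b"hello")
print(frame)               # the data-link header ends up outermost
print(decapsulate(frame))  # the original payload comes back intact
```

The point of the exercise is the design choice itself: each layer only inspects its own header, which is why a high-level protocol such as telnet need not care what data links carried its bytes.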

Exercise 6.37.1 (Solution on p. 298.)
How do the various aspects of establishing and maintaining a telephone conversation fit into this layered protocol organization?

We now explicitly state whether we are working in the physical layer (signal set design, for example), the data link layer (source and channel coding), or any other layer. IP abbreviates Internet protocol, and governs gateways (how information is transmitted between networks having different internal organizations). TCP (transmission control protocol) governs how packets are transmitted through a wide-area network such as the Internet. Telnet is a protocol that concerns how a person at one computer logs on to another computer across a network. A moderately high level protocol such as telnet is not concerned with what data links (wireline or wireless) might have been used by the network or how packets are routed. Rather, it establishes connections between computers and directs each byte (presumed to represent a typed character) to the appropriate operating system component at each end. It is not concerned with what the characters mean or what programs the person is typing to. That aspect of information transmission is left to protocols at higher layers.
Recently, an important set of protocols created the World Wide Web. These protocols exist independently of the Internet. The Internet insures that messages are transmitted efficiently and intact; the Internet is not





concerned (to date) with what messages contain. HTTP (hypertext transfer protocol) frames what messages contain and what should be done with the data. The extremely rapid development of the Web on top of an essentially stagnant Internet is but one example of the power of organizing how information transmission occurs without overly constraining the details.

6.38 Information Communication Problems
Problem 6.1: Signals on Transmission Lines
A modulated signal needs to be sent over a transmission line having a characteristic impedance of Z0 = 50 Ω. So that the signal does not interfere with signals others may be transmitting, it must be bandpass filtered so that its bandwidth is 1 MHz and centered at 3.5 MHz. The filter's gain should be one in magnitude. An op-amp filter (Figure 6.31) is proposed.



Figure 6.31 (proposed op-amp filter circuit; component labels include C1)

a) What is the transfer function between the input voltage and the voltage across the transmission line? b) Find values for the resistors and capacitors so that design goals are met.

Problem 6.2: Noise in AM Systems
The signal s(t) emerging from an AM communication system consists of two parts: the message signal and additive noise. The plot (Figure 6.32) shows the message spectrum S(f) and noise power spectrum PN(f). The noise power spectrum lies completely within the signal's band, and has a constant value there of N0/2.

Figure 6.32 (message spectrum S(f), falling from A at f = 0 to A/2 at f = ±W; noise power spectrum PN(f), constant at N0/2 for |f| < W)

a) What is the message signal's power? What is the signal-to-noise ratio?
b) Because the power in the message decreases with frequency, the signal-to-noise ratio is not constant within subbands. What is the signal-to-noise ratio in the upper half of the frequency band?
c) A clever 241 student suggests filtering the message before the transmitter modulates it so that the signal spectrum is balanced (constant) across frequency. Realizing that this filtering affects the message, the student realizes that the receiver must also compensate for the message to arrive intact. Draw a block diagram of this communication system. How does this system's signal-to-noise ratio compare with that of the usual AM radio?

Problem 6.3: Complementary Filters
Complementary filters usually have opposite filtering characteristics (like a lowpass and a highpass) and have transfer functions that add to one. Mathematically, H1(f) and H2(f) are complementary if H1(f) + H2(f) = 1. We can use complementary filters to separate a signal into two parts by passing it through each filter. Each output can then be transmitted separately and the original signal reconstructed at the receiver. Let's assume the message is bandlimited to W Hz and that H1(f) = a/(a + j2πf).
a) What circuits would be used to produce the complementary filters?
b) Sketch a block diagram for a communication system (transmitter and receiver) that employs complementary signal transmission to send a message m(t).
c) What is the receiver's signal-to-noise ratio? How does it compare to the standard system that sends the signal by simple amplitude modulation?

Problem 6.4: Phase Modulation
A message signal m(t) phase modulates a carrier if the transmitted signal equals

x(t) = A sin(2πfc t + φd m(t))

where φd is known as the phase deviation. In this problem, the phase deviation is small. As with all analog modulation schemes, assume that |m(t)| < 1, the message is bandlimited to W Hz, and the carrier frequency fc is much larger than W.
a) What is the transmission bandwidth?
b) Find a receiver for this modulation scheme.
c) What is the signal-to-noise ratio of the received signal?
m (t).3: Complementary Filters Complementary lters usually have opposite ltering characteristics (like a lowpass and a highpass) and have transfer functions that add to one.32 a) What is the message signal's power? What is the signal-to-noise ratio? b) Because the power in the message decreases with frequency. Let's assume the message is bandlimited to W Hz and that H1 (f ) = a a+j2πf .

s (n) has no special characteristics and the modulation frequency f0 is known. Sammy says that he can recover s (n) from its amplitude-modulated version by the same approach used in analog communications. while the receiver c) The jammer. "Random switching" means that one carrier frequency is used for some period of time. The receiver knows what the switches to the other for some other period of time. unaware of the change. m (t). What is the signal-to-noise ratio of the receiver tuned to the harmonic having the largest power that does not contain the jammer? c(t) 1 1/2fc 0 –1 Figure 6. What does he have in mind? Problem 6. He tells them that if s (n) cos (2πf0 n) and s (n) sin (2πf0 n) were both available.5: Digital Amplitude Modulation where the signal Two ELEC 241 students disagree about a homework Available for free at Connexions <http://cnx. back to the rst. Samantha says that approach won't work. is transmitting with a carrier frequency of tunes a standard AM receiver to a harmonic of the carrier frequency. INFORMATION COMMUNICATION Hint: Use the facts that cos (x) 1 and sin (x) x for small x. Thus.280 CHAPTER 6. fc .6: the Anti-Jamming One way for someone to keep people from receiving an AM transmission is to transmit noise at the same carrier frequency. n (t) has a constant power density spectrum The channel adds white noise of spectral height a) What would be the output of a traditional AM receiver tuned to the carrier frequency transmitted signal has the form fc ? b) RU Electronics proposes to counteract jamming by using a dierent modulation scheme. The issue concerns the discrete-time signal s (n) cos (2πf0 n). fc so that the transmitted signal is The noise AT (1 + m (t)) sin (2πfc t) N0 2 . a) What is the spectrum of the modulated signal? b) Who is correct? Why? c) The teaching assistant does not want to take sides.33). etc. s (n) can be recovered. Problem 6. The scheme's 1 fc ) having the indicated waveform (Figure 6. 
What is the spectrum of the transmitted signal with AT (1 + m (t)) c (t) where c (t) is a periodic carrier signal (period the proposed scheme? Assume the message bandwidth frequency W is much less than the fundamental carrier fc .7: quencies Secret Comunications A system for hiding AM transmissions has the transmitter randomly switching between two carrier fre- f1 and f2 .9> . if the carrier frequency is over the bandwidth of the message jammer would transmit AJ n (t) sin (2πfc t + φ).33 3/4fc 1/fc 1/4fc t Problem 6.

Consequently, the receiver must be designed to receive the transmissions regardless of which carrier frequency is used. Assume the message signal has bandwidth W. The channel adds white noise of spectral height N0/2.
a) How different should the carrier frequencies be so that the message could be received?
b) What receiver would you design?
c) What signal-to-noise ratio for the demodulated signal does your receiver yield?

Problem 6.8: AM Stereo
Stereophonic radio transmits two signals simultaneously that correspond to what comes out of the left and right speakers of the receiving radio. While FM stereo is commonplace, AM stereo is not, but is much simpler to understand and analyze. An amazing aspect of AM stereo is that both signals are transmitted within the same bandwidth as used to transmit just one. Assume the left and right signals are bandlimited to W Hz.

x(t) = A(1 + ml(t)) cos(2πfc t) + A mr(t) sin(2πfc t)

a) Find the Fourier transform of x(t). What is the transmission bandwidth and how does it compare with that of standard AM?
b) Let us use a coherent demodulator as the receiver, shown in Figure 6.34. Show that this receiver indeed works: It produces the left and right signals separately.
c) Assume the channel adds white noise to the transmitted signal. Find the signal-to-noise ratio of each signal.

Figure 6.34 (coherent demodulator: the received x(t) is bandpass filtered, multiplied separately by cos 2πfc t and sin 2πfc t, and each product is lowpass filtered to W Hz)

Problem 6.9: A Novel Communication System
A clever system designer claims that the depicted transmitter (Figure 6.35) has, despite its complexity, advantages over the usual amplitude modulation system. The message signal m(t) is bandlimited to W Hz, and the carrier frequency fc is much larger than W. The channel attenuates the transmitted signal x(t) and adds white noise of spectral height N0/2.

Figure 6.35 (transmitter: m(t) is multiplied by A sin 2πfc t on one path; on the other it passes through H(f) and is multiplied by A cos 2πfc t; the two products are summed to form x(t))

The transfer function H(f) is given by

H(f) = j if f < 0, and H(f) = −j if f > 0.

a) Find an expression for the spectrum of x(t). Sketch your answer.
b) Show that the usual coherent receiver demodulates this signal.
c) Find the signal-to-noise ratio that results when this receiver is used.
d) Find a superior receiver (one that yields a better signal-to-noise ratio), and analyze its performance.

Problem 6.10: Multi-Tone Digital Communication
In a so-called multi-tone system, several bits are gathered together and transmitted simultaneously on different carrier frequencies during a T second interval. For example, B bits would be transmitted according to

x(t) = A Σ_{k=0}^{B−1} bk sin(2π(k + 1) f0 t), 0 ≤ t < T.   (6.64)

Here, f0 is the frequency offset for each bit and it is harmonically related to the bit interval T. The value of bk is either −1 or 1.
a) Find a receiver for this transmission scheme.
b) An ELEC 241 alumnus likes digital systems so much that he decides to produce a discrete-time version. He samples the received signal (sampling interval Ts = T/N). How should N be related to B, the number of simultaneously transmitted bits?
c) The alumnus wants to find a simple form for the receiver so that his software implementation runs as efficiently as possible. How would you recommend he implement the receiver?

Problem 6.11: City Radio Channels
In addition to additive white noise, metropolitan cellular radio channels also contain multipath: the attenuated signal and a delayed, further attenuated signal are received superimposed. As shown in Figure 6.36, multipath occurs because the buildings reflect the signal and the reflected path length between transmitter and receiver is longer than the direct path.

Figure 6.36 (multipath geometry: a direct path and a longer reflected path between transmitter and receiver)

a) Assume that the length of the direct path is d meters and the reflected path is 1.5 times as long. What is the model for the channel, including the multipath and the additive noise?
b) Assume d is 1 km. Find and sketch the magnitude of the transfer function for the multipath component of the channel. How would you characterize this transfer function?
c) Would the multipath affect AM radio? If not, why not; if so, how so? Would analog cellular telephone, which operates at much higher carrier frequencies (800 MHz vs. 1 MHz for radio), be affected or not?
d) How would the usual AM receiver be modified to minimize multipath effects? Express your modified receiver as a block diagram.

Problem 6.12: Downlink Signal Sets
In digital cellular telephone systems, the base station (transmitter) needs to relay different voice signals to several telephones at the same time. Rather than send signals at different frequencies, a clever Rice engineer suggests using a different signal set for each data stream. For example, for two simultaneous data streams, she suggests BPSK signal sets that have the depicted basic signals (Figure 6.37).

Figure 6.37 (the two basic signals s1(t) and s2(t), each defined on the interval [0, T] with amplitudes ±A)

Thus, bits are represented in data stream 1 by s1(t) and −s1(t) and in data stream 2 by s2(t) and −s2(t). Each receiver uses a matched filter for its receiver. The requirement is that each receiver not receive the other's bit stream.
a) What is the block diagram describing the proposed system?
b) What is the transmission bandwidth required by the proposed system?
c) Will the proposal work? Does the fact that the two data streams are transmitted in the same bandwidth at the same time mean that each receiver's performance is affected? Can each bit stream be received without interference from the other?

Problem 6.13: Mixed Analog and Digital Transmission
A signal m(t) is transmitted using amplitude modulation in the usual way. The signal has bandwidth W Hz, and the carrier frequency is fc. In addition to sending this analog signal, the transmitter also wants to send ASCII text in an auxiliary band that lies slightly above the analog transmission band. Using an 8-bit representation of the characters and a simple baseband BPSK signal set (the constant signal +1 corresponds to a 0, the constant −1 to a 1), the data signal d(t) representing the text is transmitted at the same time as the analog signal m(t). The transmission signal spectrum is as shown (Figure 6.38), and has a total bandwidth B.

Figure 6.38 (transmission spectrum X(f): the analog band of width 2W centered at fc with the digital band alongside; total bandwidth B)

a) Write an expression for the time-domain version of the transmitted signal in terms of m(t) and the digital signal d(t).
b) What is the maximum datarate the scheme can provide in terms of the available bandwidth?
c) Find a receiver that yields both the analog signal and the bit stream.

Problem 6.14: Digital Stereo
Just as with analog communication, it should be possible to send two signals simultaneously over a digital channel. Assume you have two CD-quality signals (each sampled at 44.1 kHz with 16 bits/sample). One suggested transmission scheme is to use a quadrature BPSK scheme. If b(1)(n) and b(2)(n) each represent a bit stream, the transmitted signal has the form

x(t) = A Σ_n [ b(1)(n) sin(2πfc(t − nT)) p(t − nT) + b(2)(n) cos(2πfc(t − nT)) p(t − nT) ]

where p(t) is a unit-amplitude pulse having duration T and b(1)(n), b(2)(n) equal either +1 or −1 according to the bit being transmitted for each signal. The transmitter sends the two data streams so that their bit intervals align. The channel adds white noise and attenuates the transmitted signal.
25 0.0625 0. and the channels Suppose we transmit speech signals over comparable digital and analog channels. each bit in quantized speech sample is received in error with probability depends on signal-to-noise ratio pe that Eb . f ) Compare and evaluate these systems. c) Find an unequal-length codebook for this sequence that satises the Source Coding Theorem. Because these are separate.17: Source Compression Consider the following 5-letter source. the recovered speech signal can be considered to have two noise sources added to each sample's true value: One is the A/D amplitude quantization noise and the second is due to channel errors.16: Source Compression Consider the following 5-letter source.125 0. d) In the digital case.15: Digital and Analog Speech Communication Assume the transmitters use the same power. Does your code achieve the entropy limit? d) How much more ecient is this code than the simple binary code? Problem 6. b) Show that the simple binary coding is inecient. the total noise power equals the sum of these two. What is the signal-to-noise ratio of the received speech signal as a function of function of channel signal-to-noise ratio.5 0. pe ? e) Compute and plot the received signal's signal-to-noise ratio for the two transmission schemes as a Problem Assume the speech signal has a 4 kHz bandwidth and. introduce the same attenuation and additive white noise.285 a) What value would you choose for the carrier frequency b) What is the transmission bandwidth? fc ? c) What receiver would you design that would yield both bit streams? Problem 6. We want to compare the resulting quality of the received signals. a) What is the transmission bandwidth of the analog (AM) and digital schemes? b) Assume the speech signal's amplitude has a magnitude less than one. errors in each bit have a dierent impact on the error in N0 the reconstructed speech sample. is sampled at an 8 kHz rate with eight-bit A/D conversion.0625 Table 6. 
Available for free at Connexions <http://cnx. Letter Probability a b c d e 0. What is maximum amplitude quantization error introduced by the A/D converter? c) In the digital case. However. Assume simple binary source coding and a modulated BPSK transmission scheme. in the digital case.5 a) Find this source's entropy. Find the mean-squared error between the transmitted and received amplitude.9> .

Problem 6.17: Source Compression
Consider the following 5-letter source.

Letter   Probability
a        0.4
b        0.2
c        0.15
d        0.15
e        0.1

Table 6.6

a) Find this source's entropy.
b) Show that the simple binary coding is inefficient.
c) Find the Huffman code for this source. What is its average code length?

Problem 6.18: Speech Compression
When we sample a signal, such as speech, we quantize the signal's amplitude to a set of integers. For a b-bit converter, signal amplitudes are represented by 2^b integers. Although these integers could be represented by a binary code for digital transmission, we should consider whether a Huffman coding would be more efficient.
a) Load into Matlab the segment of speech contained in y. Its sampled values lie in the interval (-1, 1). To simulate a 3-bit converter, we use Matlab's round function to create quantized amplitudes corresponding to the integers [0 1 2 3 4 5 6 7]:
• y_quant = round(3.5*y + 3.5);
Find the relative frequency of occurrence of quantized amplitude values. The following Matlab program computes the number of times each quantized value occurs:
• for n=0:7, count(n+1) = sum(y_quant == n); end;
Find the entropy of this source.
b) Find the Huffman code for this source. How would you characterize this source code in words?
c) How many fewer bits would be used in transmitting this speech segment with your Huffman code in comparison to simple binary coding?

Problem 6.19: Digital Communication
In a digital cellular system, a signal bandlimited to 5 kHz is sampled with a two-bit A/D converter at its Nyquist frequency. The sample values are found to have the shown relative frequencies.

Sample Value   Probability
0              0.15
1              0.35
2              0.3
3              0.2

Table 6.7

We send the bit stream consisting of Huffman-coded samples using one of the two depicted signal sets (Figure 6.39).

Figure 6.39 (two candidate signal sets, each defining s0(t) and s1(t) on 0 ≤ t < T; Signal Set 1 uses amplitudes A and A/2, Signal Set 2 uses amplitudes A and −A/2)

a) What is the datarate of the compressed source?
b) Which choice of signal set maximizes the communication system's performance?
c) With no error-correcting coding, what signal-to-noise ratio would be needed for your chosen signal set to guarantee that the bit error probability will not exceed 10−3? If the receiver moves twice as far from the transmitter (relative to the distance at which the 10−3 error rate was obtained), how does the performance change?

Problem 6.20: Signal Compression
Letters drawn from a four-symbol alphabet have the indicated probabilities.

Letter   Probability
a        1/3
b        1/3
c        1/4
d        1/12

Table 6.8

a) What is the average number of bits necessary to represent this alphabet?
b) Using a simple binary code for this alphabet, a two-bit block of data bits naturally emerges. Find an error correcting code for two-bit data blocks that corrects all single-bit errors.
c) How would you modify your code so that the probability of the letter a being confused with the letter d is minimized? If so, what is your new code; if not, demonstrate that this goal cannot be achieved.

Problem 6.21: Universal Product Code
The Universal Product Code (UPC), often known as a bar code, labels virtually every sold good. An example (Figure 6.40) of a portion of the code is shown.

Here a sequence of black and white bars, each having width d, presents an 11-digit number (consisting of decimal digits) that uniquely identifies the product. In retail stores, laser scanners read this code, and after accessing a database of prices, enter the price into the cash register.

Figure 6.40 (a portion of a bar code: a sequence of black and white bars, each of width d)

a) How many bars must be used to represent a single digit?
b) A complication of the laser scanning system is that the bar code must be read either forwards or backwards. Now how many bars are needed to represent each digit?
c) What is the probability that the 11-digit code is read correctly if the probability of reading a single bit incorrectly is pe?
d) How many error correcting bars would need to be present so that any single bar error occurring in the 11-digit code can be corrected?

Problem 6.22: Error Correcting Codes
A code maps pairs of information bits into codewords of length 5 as follows.

Data   Codeword
00     00000
01     01101
10     10111
11     11010

Table 6.9

a) What is this code's efficiency?
b) Find the generator matrix G and parity-check matrix H for this code.
c) Give the decoding table for this code. How many patterns of 1, 2, and 3 errors are correctly decoded?
d) What is the block error probability (the probability of any number of errors occurring in the decoded codeword)?

Problem 6.23: Digital Communication
A digital source produces sequences of nine letters with the following probabilities.

letter        a     b     c     d     e     f      g      h      i
probability   1/4   1/8   1/8   1/8   1/8   1/16   1/16   1/16   1/16

Table 6.10

a) Find a Huffman code that compresses this source. How does the resulting code compare with the best possible code?
b) A clever engineer proposes the following (6,3) code to correct errors after transmission through a digital channel.

c1 = d1      c4 = d1 ⊕ d2 ⊕ d3
c2 = d2      c5 = d2 ⊕ d3
c3 = d3      c6 = d1

Table 6.11

What is the error correction capability of this code?
c) The channel's bit error probability is 1/ . What kind of code should be used to transmit data over this channel?

Problem 6.24: Overly Designed Error Correction Codes
An Aggie engineer wants not only to have codewords for his data, but also to hide the information from Rice engineers (no fear of the UT engineers). He decides to represent 3-bit data with 6-bit codewords in which none of the data bits appear explicitly.

c1 = d1 ⊕ d2      c4 = d1 ⊕ d2 ⊕ d3
c2 = d2 ⊕ d3      c5 = d1 ⊕ d2
c3 = d1 ⊕ d3      c6 = d1 ⊕ d2 ⊕ d3

a) Find the generator matrix G and parity-check matrix H for this code.
b) Find a 3×6 matrix that recovers the data bits from the codeword.
c) What is the error correcting capability of the code?

Problem 6.25: Error Correction?
It is important to realize that when more transmission errors occur than can be corrected, error correction algorithms believe that a smaller number of errors have occurred and correct accordingly. For example, consider a (7,4) Hamming code having the generator matrix

G =
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
1 1 1 0
0 1 1 1
1 0 1 1

This code corrects all single-bit errors, but if a double-bit error occurs, it corrects using a single-bit error correction approach.
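To see this failure mode concretely, here is a small sketch (ours, not the text's) of syndrome decoding for a generator of this form; the parity-check matrix used below is the one implied by the last three rows of G. A double-bit error produces the syndrome of some single-bit error, so the decoder confidently "corrects" the wrong bit:

```python
# Sketch: a (7,4) Hamming code mis-correcting a double-bit error.
# G follows the systematic form above; all arithmetic is mod 2.
G = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1],
     [1,1,1,0],[0,1,1,1],[1,0,1,1]]           # codeword = G @ data (mod 2)
H = [[1,1,1,0,1,0,0],[0,1,1,1,0,1,0],[1,0,1,1,0,0,1]]  # parity checks

def encode(d):
    return [sum(g[i] * d[i] for i in range(4)) % 2 for g in G]

def decode(c):
    syndrome = tuple(sum(h[i] * c[i] for i in range(7)) % 2 for h in H)
    if any(syndrome):
        # flip the single bit whose H-column matches the syndrome
        col = [tuple(h[i] for h in H) for i in range(7)].index(syndrome)
        c = c[:]
        c[col] ^= 1
    return c[:4]                               # data bits are the first four

d = [1, 0, 1, 1]
c = encode(d)
c[0] ^= 1
c[5] ^= 1                                      # a double-bit error
print(decode(c))   # -> [0, 0, 0, 1], not the transmitted [1, 0, 1, 1]
```

Any single-bit error is repaired exactly; the double-bit error above is "repaired" into a third, wrong codeword, which is the behavior parts a) and b) below ask you to enumerate.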

a) How many double-bit errors can occur in a codeword?
b) For each double-bit error pattern, what is the result of channel decoding? Express your result as a binary error sequence for the data bits.

Problem 6.26: Selective Error Correction
We have found that digital transmission errors occur with a probability that remains constant no matter how "important" the bit may be. For example, in transmitting digitized signals, errors occur as frequently for the most significant bit as they do for the least significant bit. Yet, the former errors have a much larger impact on the overall signal-to-noise ratio than the latter. Rather than applying error correction to each sample value, why not concentrate the error correction on the most important bits? Assume that we sample an 8 kHz signal with an 8-bit A/D converter. We use single-bit error correction on the most significant four bits and none on the least significant four. Bits are transmitted using a modulated BPSK signal set over an additive white noise channel.
a) How many error correction bits must be added to provide single-bit error correction on the most significant bits?
b) How large must the signal-to-noise ratio of the received signal be to insure reliable communication?
c) Assume that once error correction is applied, only the least significant 4 bits can be received in error. How much would the output signal-to-noise ratio improve using this error correction scheme?

Problem 6.27: Compact Disk
Errors occur in reading audio compact disks. Very few errors are due to noise in the compact disk player; most occur because of dust and scratches on the disk surface. Because scratches span several bits, a single-bit error is rare; several consecutive bits in error are much more common. Assume that scratch- and dust-induced errors are four or fewer consecutive bits long. The audio CD standard requires 16-bit, 44.1 kHz analog-to-digital conversion of each channel of the stereo analog signal.
a) How many error-correction bits are required to correct scratch-induced errors for each 16-bit sample?
b) Rather than use a code that can correct several errors in a codeword, a clever 241 engineer proposes interleaving consecutive coded samples. As the cartoon (Figure 6.41) shows, the bits representing coded samples are interspersed before they are written on the CD. The CD player de-interleaves the coded data, then performs error-correction. Now, evaluate this proposed scheme with respect to the non-interleaved one.

Figure 6.41 (4-way interleaver cartoon: samples n through n+3, with bits 1111, 1010, 0000, 0101, interleaved into the stream 1100100111001001)

Problem 6.28: Communication System Design
RU Communication Systems has been asked to design a communication system that meets the following requirements.

• The baseband message signal has a bandwidth of 10 kHz.
• The RUCS engineers find that the entropy H of the sampled message signal depends on how many bits b are used in the A/D converter (see table below).
• The signal is to be sent through a noisy channel having a bandwidth of 25 kHz centered at 2 MHz and a signal-to-noise ratio within that band of 10 dB.
• Once received, the message signal must have a signal-to-noise ratio of at least 20 dB.

b   3      4      5      6
H   2.19   3.25   4.28   5.35

Table 6.13

Can these specifications be met? Justify your answer.

Problem 6.29: HDTV
As HDTV (high-definition television) was being developed, the FCC restricted this digital system to use in the same bandwidth (6 MHz) as its analog (AM) counterpart. HDTV video is sampled on a 1035 × 1840 raster at 30 images per second for each of the three colors. The least-acceptable picture received by television sets located at an analog station's broadcast perimeter has a signal-to-noise ratio of about 10 dB.
a) Using signal-to-noise ratio as the criterion, how many bits per sample must be used to guarantee that a high-quality picture, which achieves a signal-to-noise ratio of 20 dB, can be received by any HDTV set within the same broadcast region?
b) Assuming the digital television channel has the same characteristics as an analog one, how much compression must HDTV systems employ?

Problem 6.30: Digital Cellular Telephones
In designing a digital version of a wireless telephone, you must first consider certain fundamentals. First of all, the quality of the received signal, as measured by the signal-to-noise ratio, must be at least as good as that provided by wireline telephones (30 dB) and the message bandwidth must be the same as wireline telephone. The signal-to-noise ratio of the allocated wireless channel, which has a 5 kHz bandwidth, measured 100 meters from the tower is 70 dB. The desired range for a cell is 1 km. Can a digital cellphone system be designed according to these criteria?

Problem 6.31: Optimal Ethernet Random Access Protocols
Assume a population of N computers want to transmit information on a random access channel. The access algorithm works as follows.
• Before transmitting, flip a coin that has probability p of coming up heads.
• If only one of the N computers' coins comes up heads, its transmission occurs successfully, and the others must wait until that transmission is complete and then resume the algorithm.
• If none or more than one head comes up, the N computers will either remain silent (no heads) or a collision will occur (more than one head). This unsuccessful transmission situation will be detected by all computers once the signals have propagated the length of the cable, and the algorithm resumes (return to the beginning).

but the noise power added by the channel increases with bandwidth with a proportionality constant of 0. transmitter repeater receiver D/2 D Figure 6. a) Design an analog system for sending speech under this scenario. what should p be to b) What is the probability of one computer transmitting when this optimal value of number of computers grows to innity? p is used as the c) Using this optimal probability. The wireless link between transmitter and receiver is such that 200 watts of power can be received at a pre-assigned carrier frequency. What the repater does is amplify its received signal to exactly cancel the attenuation encountered along the rst leg and to re-transmit the signal to the ultimate if not. let's assume that the transmitter and receiver are apart.9> . what is the signal-to-noise ratio of the demodulated signal at the receiver? Is this better or worse than the signal-to-noise ratio when no repeater is present? c) For digital communication.292 CHAPTER 6.42 D/2 a) What is the block diagram for this system? b) For an amplitude-modulation communication system. repeaters are frequently employed for both D m For example.1 watt/kHz. why not? Is the capacity larger with the Problem 6. and a repeater is positioned halfway between them (Figure 6. We have some latitude in choosing the transmission bandwidth. analog and digital communication.32: Repeaters Because signals attenuate with distance from the transmitter. Is it realistic? Is it ecient? Problem 6. What is the received signal-to-noise ratio under these design constraints? b) How many bits must be used in the A/D converter to achieve the same signal-to-noise ratio? c) Is the bandwidth required by the digital channel to send the samples without error greater or smaller than the analog bandwidth? Available for free at Connexions <http://cnx. when. 
However.33: Designing a Speech Communication System We want to examine both analog and digital communication alternatives for a dedicated speech transmission system. what is the average number of coin ips that will be necessary to resolve the access so that one computer successfully transmits? d) Evaluate this algorithm. the signal the repeater receives contains white noise as well as the transmitted signal. we must consider the system's capacity. INFORMATION COMMUNICATION a) What is the optimal probability to use for ipping the coin? maximize the probability that exactly one computer transmits? In other words. repeater system than without it? If so. Assume the speech signal has a 5 kHz bandwidth.42). The receiver experiences the same amount of white noise as the repeater.

5 MHz has been allocated for a new high-quality AM band. a) How many stations can be allocated to this band and with what carrier frequencies? b) Looking ahead.9> . c) Without employing compression. conversion to digital transmission is not far in the future.34: Digital vs. twice the message bandwidth of what current stations can send.293 Problem 6. how many digital radio stations could be allocated to the band if each station used BPSK modulation? Evaluate this design approach. Analog Each station licensed for this band will transmit signals having a You are the Chairman/Chairwoman of the FCC. Available for free at Connexions <http://cnx. bandwidth of 10 kHz. The characteristics of the new digital radio system need to be established and you are the boss! Detail the characteristics of the analog-to-digital converter that must be used to prevent aliasing and ensure a signal-to-noise ratio of 25 dB. The frequency band 3 MHz to
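The coin-flipping protocol of Problem 6.31 is easy to explore numerically before doing any algebra. A minimal sketch: the probability that exactly one of N computers flips heads is N·p·(1−p)^(N−1), which follows from the binomial distribution; the grid search below is only a numerical check of where that expression peaks, not the derivation the problem asks for.

```python
# Numerical exploration of the random-access protocol in Problem 6.31.
# P(exactly one of N computers flips heads) = N * p * (1 - p)**(N - 1).
import math

def success_prob(N, p):
    """Probability that exactly one of N computers flips heads."""
    return N * p * (1 - p) ** (N - 1)

for N in (2, 10, 100):
    # Crude sweep over p in (0, 1); the peak sits at p = 1/N.
    grid = [i / 1000 for i in range(1, 1000)]
    best_p = max(grid, key=lambda p: success_prob(N, p))
    print(N, best_p, round(success_prob(N, 1 / N), 4))

print(round(1 / math.e, 4))  # the success probability approaches this as N grows
```

Sweeping p for several N suggests the answers to parts (a) and (b) that the analysis then confirms.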

Solutions to Exercises in Chapter 6

Solution to Exercise 6.3.1 (p. 230)
In both cases, the answer depends less on geometry than on material properties. For coaxial cable, c = 1/√(µε). For twisted pair, c = (1/√(µε)) · √( arccosh(d/2r) / ( δ/(2r) + arccosh(d/2r) ) ).

Solution to Exercise 6.3.2 (p. 231)
As frequency increases, 2πfC ≫ G and 2πfL ≫ R. In this high-frequency region,

γ = j2πf √(LC) √( (1 + G/(j2πfC)) (1 + R/(j2πfL)) )   (6.65)
  ≈ j2πf √(LC) ( 1 + (1/2)(1/(j2πf)) (G/C + R/L) )
  = j2πf √(LC) + (1/2) ( G √(L/C) + R √(C/L) ).

Thus, the attenuation (space) constant equals the real part of this expression, and equals a(f) = (G Z0 + R/Z0)/2.

Solution to Exercise 6.3.3 (p. 231)
As shown previously (6.11), voltages and currents in a wireline channel, which is modeled as a transmission line having resistance, capacitance and inductance, decay exponentially with distance. The inverse-square law governs free-space propagation because such propagation is lossless, with the inverse-square law a consequence of the conservation of power. The exponential decay of wireline channels occurs because they have losses and some filtering.

Solution to Exercise 6.4.1 (p. 232)
Use the Pythagorean Theorem: (h + R)² = R² + d², where h is the antenna height, d is the distance from the top of the earth to a tangency point with the earth's surface, and R the earth's radius. The line-of-sight distance between two earth-based antennae equals

dLOS = √(2h1R + h1²) + √(2h2R + h2²).   (6.66)

[Figure 6.43: two antennae of heights h1 and h2 on the earth's surface, with tangent sight-lines d1 and d2 and earth radius R.]

Solution to Exercise 6.5.1 (p. 233)
Light in the middle of the visible band has a wavelength of about 600 nm, which corresponds to a frequency of 5 × 10^14 Hz. Cable television transmits within the same frequency band as broadcast television (about 200 MHz, or 2 × 10^8 Hz). Thus, the visible electromagnetic frequencies are over six orders of magnitude higher! You can find these frequencies from the spectrum allocation chart (Section 7.3).
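The line-of-sight expression (6.66) is straightforward to evaluate numerically. A small sketch, assuming a mean earth radius of 6.37 × 10⁶ m and an illustrative 100 m antenna height:

```python
# Line-of-sight distance between two antennae of heights h1, h2 (meters),
# from (6.66): d_LOS = sqrt(2*h1*R + h1**2) + sqrt(2*h2*R + h2**2).
import math

R_EARTH = 6.37e6  # mean earth radius in meters (approximate)

def d_los(h1, h2, R=R_EARTH):
    return math.sqrt(2 * h1 * R + h1 ** 2) + math.sqrt(2 * h2 * R + h2 ** 2)

# Because R is much larger than h, the approximation sqrt(2*h1*R) + sqrt(2*h2*R)
# is nearly indistinguishable from the exact expression.
exact = d_los(100.0, 0.0)              # 100 m tower to a ground-level receiver
approx = math.sqrt(2 * 100.0 * R_EARTH)
print(exact / 1000, approx / 1000)     # both about 35.7 km
```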

As the earth's radius is much larger than the antenna height, we have to a good approximation that dLOS = √(2h1R) + √(2h2R). If one antenna is at ground elevation, say h2 = 0, the other antenna's range is √(2h1R).

Solution to Exercise 6.7.1 (p. 234)
Transmission to the satellite, known as the uplink, encounters inverse-square law power losses; the geosynchronous orbit lies at an altitude of 35700 km. Reflecting off the ionosphere not only encounters the same loss, but twice: reflection is the same as transmitting exactly what arrives, which means that the total loss is the product of the uplink and downlink losses. The ionosphere begins at an altitude of about 50 km. The amplitude loss in the satellite case is proportional to 2.8 × 10^−8; for Marconi, it was proportional to 4.4 × 10^−10. Marconi was very lucky.

Solution to Exercise 6.8.1 (p. 234)
As frequency decreases, wavelength increases and can approach the distance between the earth's surface and the ionosphere. Assuming a distance between the two of 80 km, the relation λf = c gives a corresponding frequency of 3.75 kHz. Such low carrier frequencies would be limited to low bandwidth analog communication and to low datarate digital communications. The US Navy did use such a communication scheme to reach all of its submarines at once.

Solution to Exercise 6.9.1 (p. 236)
The additive-noise channel is not linear because it does not have the zero-input-zero-output property (even though we might transmit nothing, the receiver's input consists of noise).

Solution to Exercise 6.11.1 (p. 237)
If the interferer's spectrum does not overlap that of our communications channel (the interferer is out-of-band), we need only use a bandpass filter that selects our transmission band and removes other portions of the spectrum.

Solution to Exercise 6.12.1 (p. 238)
X(f) = (1/2) M(f − fc) + (1/2) M(f + fc). The spectra M(f − fc) and M(f + fc) do not overlap because we have assumed that the carrier frequency fc is much greater than the signal's highest frequency.

Solution to Exercise 6.12.2 (p. 239)
The key here is that multiplying at the receiver by the carrier shifts the spectrum to fc and to −fc, and scales the result by half. The signal-related portion of the transmitted spectrum is given by

(1/2) X(f − fc) + (1/2) X(f + fc)
  = (1/4) (M(f − 2fc) + M(f)) + (1/4) (M(f + 2fc) + M(f))
  = (1/4) M(f − 2fc) + (1/2) M(f) + (1/4) M(f + 2fc).   (6.67)

The signal components centered at twice the carrier frequency are removed by the lowpass filter, while the baseband signal M(f) emerges.

Solution to Exercise 6.13.1 (p. 240)
The separation is 2W. Commercial AM signal bandwidth is 5 kHz. Speech is well contained in this bandwidth, much better than in the telephone!

Solution to Exercise 6.14.1 (p. 241)
x(t) = Σ_{n=−∞}^{∞} s_{b(n)}(t − nT).

Solution to Exercise 6.14.2 (p. 243)
k = 4.

Solution to Exercise 6.14.3 (p. 243)
x(t) = (−1)^{b(n)} A pT(t − nT) sin(2πkt/T).

Solution to Exercise 6.14.4 (p. 243)
The harmonic distortion is 10%.
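The λf = c arithmetic quoted in the solutions above (an 80 km wavelength corresponding to a 3.75 kHz carrier) takes one line to check:

```python
# lambda * f = c: an 80 km wavelength (earth-to-ionosphere scale)
# corresponds to a 3.75 kHz carrier frequency.
c = 3e8            # speed of light, m/s
wavelength = 80e3  # meters
f = c / wavelength
print(f)           # 3750.0 Hz, i.e., 3.75 kHz
```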

Solution to Exercise 6.15.1 (p. 244)
Twice the baseband bandwidth, because both positive and negative frequencies are shifted to the carrier by the modulation.

Solution to Exercise 6.16.1 (p. 246)
In BPSK, the signals are negatives of each other: s1(t) = −s0(t). Consequently, the output of each multiplier-integrator combination is the negative of the other. Choosing the largest therefore amounts to choosing which one is positive. We only need to calculate one of these: if it is positive, we are done; if it is negative, we choose the other signal.

Solution to Exercise 6.17.1 (p. 247)
The matched filter outputs are ± A²T/2 because the sinusoid has less power than a pulse having the same amplitude.

Solution to Exercise 6.18.1 (p. 249)
The noise-free integrator outputs differ by αA²T, the factor-of-two smaller value than in the baseband case arising because the sinusoidal signals have less energy for the same amplitude. Stated in terms of Eb, the difference equals 2αEb, just as in the baseband case.

Solution to Exercise 6.18.2 (p. 249)
The noise-free integrator output difference now equals αA²T/2 = αEb/2. The noise power remains the same as in the BPSK case, which from the probability of error equation (6.46) yields pe = Q(√(α²Eb/(2N0))).

Solution to Exercise 6.19.1 (p. 250)
3R.

Solution to Exercise 6.20.1 (p. 251)
B(A)R.

Solution to Exercise 6.22.1 (p. 252)
Equally likely symbols each have a probability of 1/K. Thus, H(A) = −Σ_k (1/K) log2(1/K) = log2 K. To prove that this is the maximum-entropy probability assignment, we must explicitly take into account that probabilities sum to one. Focus on a particular symbol, say the first: Pr[a0] appears twice in the entropy formula, in the terms Pr[a0] log2 Pr[a0] and (1 − Pr[a0] − ⋯ − Pr[aK−2]) log2 (1 − Pr[a0] − ⋯ − Pr[aK−2]). The derivative with respect to this probability (and all the others) must be zero. The derivative equals log2 Pr[a0] − log2 (1 − Pr[a0] − ⋯ − Pr[aK−2]), and all other derivatives have the same form (just substitute your letter's index). Consequently, each probability must equal the others, and we are done. For the minimum entropy answer, one term is 1·log2 1 = 0, and the others are 0·log2 0, which we define to be zero also. The minimum value of entropy is zero.

Solution to Exercise 6.23.1 (p. 255)
The Huffman coding tree for the second set of probabilities is identical to that for the first (Figure: Huffman Coding Tree). The average code length is (1/2)·1 + (1/4)·2 + (1/5)·3 + (1/20)·3 = 1.75 bits. The entropy calculation is straightforward: H(A) = −( (1/2) log2(1/2) + (1/4) log2(1/4) + (1/5) log2(1/5) + (1/20) log2(1/20) ), which equals 1.68 bits.

Solution to Exercise 6.23.2 (p. 256)
Because no codeword begins with another's codeword, the first codeword encountered in a bit stream must be the right one. Note that we must start at the beginning of the bit stream; jumping into the middle does not guarantee perfect decoding. The end of one codeword and the beginning of another could be a codeword, and we would get lost.

Solution to Exercise 6.23.3 (p. 256)
Consider the bitstream …0110111… taken from the bitstream 0|10|110|110|111|…. We would decode the initial part incorrectly, then would synchronize. If we had a fixed-length code (say 00, 01, 10, 11), the situation is much worse: jumping into the middle leads to no synchronization at all!

Solution to Exercise 6.25.1 (p. 261)
This question is equivalent to 3pe(1 − pe) + pe² ≤ 1, or 2pe² − 3pe + 1 ≥ 0. Because this is an upward-going parabola, we need only check where its roots are. Using the quadratic formula, we find that they are located at 1/2 and 1. Consequently, in the range 0 ≤ pe ≤ 1/2, the error rate produced by coding is smaller.

Solution to Exercise 6.26.1 (p. 261)
With no coding, the average bit-error probability pe is given by the probability of error equation (6.47): pe = Q(√(2α²Eb/N0)).
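The entropy and average-code-length figures quoted in the Huffman solution (1.68 bits and 1.75 bits for the probabilities 1/2, 1/4, 1/5, 1/20) can be verified directly; the code lengths 1, 2, 3, 3 below correspond to the codewords 0, 10, 110, 111 used in the decoding discussion:

```python
# Entropy H(A) = -sum(p * log2(p)) and average Huffman code length
# for the probabilities {1/2, 1/4, 1/5, 1/20} with code lengths {1, 2, 3, 3}.
import math

probs = [1 / 2, 1 / 4, 1 / 5, 1 / 20]
code_lengths = [1, 2, 3, 3]  # lengths of the codewords 0, 10, 110, 111

entropy = -sum(p * math.log2(p) for p in probs)
avg_length = sum(p * l for p, l in zip(probs, code_lengths))
print(round(entropy, 2), round(avg_length, 2))  # 1.68 1.75
```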

[Figure 6.44: Error probability with and without (3,1) repetition coding. The coded and uncoded bit-error probabilities are plotted against signal-to-noise ratio on logarithmic axes.]

With a threefold repetition code, pe = Q(√(2α²Eb/(3N0))), the factor of three arising because each bit's energy is spread over three transmissions, and the bit-error probability after decoding is given by 3pe²(1 − pe) + pe³. Plotting this reveals that the increase in bit-error probability out of the channel because of the energy reduction is not compensated by the repetition coding (Figure 6.44).

Solution to Exercise 6.27.1 (p. 262)
In binary arithmetic (see Table 6.2), adding 0 to a binary value results in that binary value, while adding 1 results in the opposite binary value.

Solution to Exercise 6.27.2 (p. 264)
dmin = 2n + 1.

Solution to Exercise 6.28.1 (p. 265)
When we multiply the parity-check matrix times any codeword, the result consists of the sum of an entry from the lower portion of G and itself that, by the laws of binary arithmetic, is always zero. Because the code is linear (the sum of any two codewords is a codeword), we can generate all codewords as sums of columns of G. Since multiplying by H is also linear, Hc = 0.

Solution to Exercise 6.28.2 (p. 265)
In binary arithmetic (see this table), adding 0 to a binary value results in that binary value, while adding 1 results in the opposite binary value.

Solution to Exercise 6.29.1 (p. 265)
The probability of a single-bit error in a length-N block is N pe (1 − pe)^(N−1), and a triple-bit error has probability C(N,3) pe³ (1 − pe)^(N−3). For the first to be greater than the second, we must have pe < 1 / ( √((N−1)(N−2)/6) + 1 ). For N = 7, pe < 0.31.

Solution to Exercise 6.30.1 (p. 268)
In a length-N block, N single-bit and N(N−1)/2 double-bit errors can occur. The number of non-zero vectors resulting from H ĉ must equal or exceed the sum of these two numbers:

2^(N−K) − 1 ≥ N + N(N−1)/2, or 2^(N−K) ≥ (N² + N + 2)/2.   (6.68)

The first two solutions that attain equality are the (5,1) and (90,78) codes. Other than the single-bit error correcting Hamming code, no perfect code exists (perfect codes satisfy relations like (6.68) with equality).

Solution to Exercise 6.31.1 (p. 269)
To convert to bits/second, we divide the capacity stated in bits/transmission by the bit interval duration T.

Solution to Exercise 6.33.1 (p. 271)
The network entry point is the telephone handset, which connects you to the nearest station. Dialing the telephone number informs the network of who will be the message recipient. The telephone system forms an electrical circuit between your handset and your friend's handset. The network looks up where the destination corresponding to that number is located and routes the call accordingly. The route remains fixed as long as the call persists. What you say amounts to high-level protocol, while establishing the connection and maintaining it corresponds to low-level protocol.

Solution to Exercise 6.35.1 (p. 275)
The transmitting op-amp sees a load of Rout + Z0/N, where N is the number of transceivers other than this one attached to the coaxial cable. The transfer function to some other transceiver's receiver circuit is Rout divided by this load.

Solution to Exercise 6.36.1 (p. 276)
The worst-case situation occurs when one computer begins to transmit just before the other's packet arrives. Transmitters must sense a collision before packet transmission ends; the time taken for one computer's packet to travel the Ethernet's length and for the other computer's transmission to arrive equals the round-trip, not one-way, propagation time.

Solution to Exercise 6.36.2 (p. 276)
The cable must be a factor of ten shorter: it cannot exceed 100 m. Different minimum packet sizes mean different packet formats, making connecting old and new systems together more complex than need be.

Solution to Exercise 6.37.1 (p. 277)
When you pick up the telephone, you initiate a dialog with your network interface by dialing the number. Your friend receives the message via the same device, the handset, that served as the network entry point.
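The bound (6.68) and the claim that the (5,1) and (90,78) codes attain it with equality can be checked with a few lines of arithmetic:

```python
# Check the double-error-correcting bound 2**(N-K) >= (N**2 + N + 2) / 2
# from (6.68), and confirm which (N, K) pairs attain it with equality.
def syndromes(N, K):
    """Number of distinct syndromes a length-N, K-data-bit code provides."""
    return 2 ** (N - K)

def needed(N):
    """1 (no error) + N single-bit + N*(N-1)/2 double-bit patterns."""
    return (N * N + N + 2) // 2

for N, K in [(5, 1), (90, 78)]:
    print(N, K, syndromes(N, K), needed(N), syndromes(N, K) == needed(N))
# (5,1): 16 == 16; (90,78): 4096 == 4096 -> both attain equality
```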

7.1 Decibels

The decibel scale expresses amplitudes and power values logarithmically. The definitions for these differ, but are consistent with each other.

power (s, in decibels) = 10 log ( power(s) / power(s0) )   (7.1)
amplitude (s, in decibels) = 20 log ( amplitude(s) / amplitude(s0) )   (7.2)

Here power(s0) and amplitude(s0) represent a reference power and amplitude, respectively. Quantifying power or amplitude in decibels essentially means that we are comparing quantities to a standard, or that we want to express how they changed. You will hear statements like "The signal went down by 3 dB" and "The filter's gain in the stopband is −60." (Decibels is abbreviated dB.)

Exercise 7.1.1 (Solution on p. 304.)
The prefix "deci" implies a tenth; a decibel is a tenth of a Bel. Who is this measure named for?

The consistency of these two definitions arises because power is proportional to the square of amplitude: power(s) ∝ amplitude²(s). Plugging this expression into the definition for decibels, we find that

10 log ( power(s) / power(s0) ) = 10 log ( amplitude²(s) / amplitude²(s0) ) = 20 log ( amplitude(s) / amplitude(s0) ).   (7.3)

Because of this consistency, stating relative change in terms of decibels is unambiguous. A factor of 10 increase in amplitude corresponds to a 20 dB increase in both amplitude and power!

¹This content is available online at <http://cnx.1.16/>.

Decibel table

Power Ratio: 1    √2   2   √10  4   5   8   10   0.1
dB:          0    1.5  3   5    6   7   9   10   −10

Figure 7.1: Common values for the decibel. The decibel values for all but the powers of ten are approximate, but are accurate to a decimal place.

Decibel quantities add; ratio values multiply. One reason decibels are used so much is the frequency-domain input-output relation for linear systems: Y(f) = X(f) H(f). Because the transfer function multiplies the input signal's spectrum, to find the output amplitude at a given frequency we simply add the filter's gain in decibels (relative to a reference of one) to the input amplitude at that frequency. This calculation is one reason that we plot transfer function magnitude on a logarithmic vertical scale expressed in decibels.

Converting decibel values back and forth is fun, and tests your ability to think of decibel values as sums and/or differences of the well-known values, and of ratios as products and/or quotients. This conversion rests on the logarithmic nature of the decibel scale. For example, to find the decibel value for √2, we halve the decibel value for 2; 26 dB equals 10 + 10 + 6 dB, which corresponds to a ratio of 10 × 10 × 4 = 400. The accompanying table provides "nice" decibel values.
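The definitions (7.1) and (7.2) and the add-in-dB rule are easy to reproduce numerically; a short sketch:

```python
# Decibel conversions: power ratios use 10*log10, amplitude ratios 20*log10.
# Multiplying ratios corresponds to adding their decibel values.
import math

def power_db(ratio):
    return 10 * math.log10(ratio)

def amplitude_db(ratio):
    return 20 * math.log10(ratio)

print(round(power_db(2), 2))       # about 3.01 dB (the table's "3 dB")
print(round(power_db(400), 1))     # 26.0 dB = 10 + 10 + 6 dB (ratio 10 * 10 * 4)
print(round(amplitude_db(10), 1))  # 20.0 dB: a factor-of-10 amplitude increase
```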

7.2 Permutations and Combinations

7.2.1 Permutations and Combinations

The lottery "game" consists of picking k numbers from a pool of n. For example, you select 6 numbers out of 60. To win, the order in which you pick the numbers doesn't matter; you only have to choose the right set of 6 numbers. The chances of winning equal the number of different length-k sequences that can be chosen. A related, but different, problem is selecting the batting lineup for a baseball team. Now the order matters, and many more choices are possible than when order does not matter. Solving these kinds of problems amounts to understanding permutations (the number of ways of choosing things when order matters, as in baseball lineups) and combinations (the number of ways of choosing things when order does not matter, as in lotteries and bit errors).

Calculating permutations is the easiest. If we are to pick k numbers from a pool of n, we have n choices for the first one; for the second choice, we have n − 1. Continuing to choose until we make k choices means the number of permutations is n (n − 1) (n − 2) ⋯ (n − k + 1). This result can be written in terms of factorials as n!/(n − k)!, with n! = n (n − 1) (n − 2) ⋯ 1. For mathematical convenience, we define 0! = 1. The number of length-two ordered sequences, for example, is n (n − 1). Thus, the number of ways we can order the nine starters we choose for our baseball team equals 9! = 362,880 different lineups!

When order does not matter, the number of combinations equals the number of permutations divided by the number of orderings. The number of ways a pool of k things can be ordered equals k!. Thus, once we define the symbol for the combination of k things drawn from a pool of n, we have C(n,k) = n!/((n − k)! k!). In digital communications, you might ask how many possible double-bit errors can occur in a codeword; numbering the bit positions from 1 to N, such questions are answered by the same combination count. Answering such questions occurs in many applications beyond games. For the lottery, the answer is the same problem with k = 6.

Exercise 7.2.1 (Solution on p. 304.)
What are the chances of winning the lottery? Assume you pick 6 numbers from the numbers 1-60.

Combinatorials occur in interesting places. For example, Newton derived that the n-th power of a sum obeyed the formula

(x + y)ⁿ = C(n,0) xⁿ + C(n,1) xⁿ⁻¹ y + C(n,2) xⁿ⁻² y² + ⋯ + C(n,n) yⁿ.

Exercise 7.2.2 (Solution on p. 304.)
What does the sum of binomial coefficients equal? In other words, what is Σ_{k=0}^{n} C(n,k)?

A related problem is calculating the probability that any two bits are in error in a length-n codeword when p is the probability of any bit being in error. The probability of any particular two-bit error sequence is p²(1 − p)ⁿ⁻². The probability of a two-bit error occurring anywhere equals this probability times the number of combinations: C(n,2) p²(1 − p)ⁿ⁻². Note that the probability that zero or one or two, etc., errors occur must be one; in other words, something must happen to the codeword! That means that we must have

C(n,0) (1 − p)ⁿ + C(n,1) p(1 − p)ⁿ⁻¹ + C(n,2) p²(1 − p)ⁿ⁻² + ⋯ + C(n,n) pⁿ = 1.

Can you prove this?
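The lottery, lineup, and binomial-sum results above can be confirmed with the Python standard library's combinatoric helpers:

```python
# Combinations C(n, k) = n! / ((n-k)! k!) and permutations n! / (n-k)!.
import math

print(math.comb(60, 6))      # 50063860 possible lottery picks
print(math.factorial(9))     # 362880 orderings of nine starters
print(math.perm(60, 6))      # ordered draws: much larger than comb(60, 6)

# The binomial probabilities of 0..n bit errors must sum to one.
n, p = 7, 0.1
total = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
print(round(total, 12))      # 1.0
```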

7.3 Frequency Allocations

To prevent radio stations from transmitting signals on top of each other, the United States and other national governments in the 1930s began regulating the carrier frequencies and power outputs stations could use. With increased use of the radio spectrum for both public and private use, this regulation has become increasingly important. This is the so-called Frequency Allocation Chart, which shows what kinds of broadcasting can occur in which frequency bands. Detailed radio carrier frequency assignments are much too detailed to present here.

³This content is available online at <http://cnx.org/content/m0083/2.12/>.


Solutions to Exercises in Chapter 7

Solution to Exercise 7.1.1 (p. 299)
Alexander Graham Bell. He developed it because we seem to perceive physical quantities like loudness and brightness logarithmically. In other words, percentage, not absolute, differences matter to us. We use decibels today because common values are small integers. If we used Bels, they would be decimal fractions, which aren't as elegant.

Solution to Exercise 7.2.1 (p. 301)
C(60,6) = 60!/(54! 6!) = 50,063,860.

Solution to Exercise 7.2.2 (p. 301)
Because of Newton's binomial theorem, the sum equals (1 + 1)ⁿ = 2ⁿ.

189. 259. Ÿ 6. 120 binary phase shift keying. Ÿ 1.14(241) Available for free at Connexions <http://cnx. 36 analog problem. 8.17(247) bits.2(17). 92 broadcast.16(207) analog-to-digital (A/D) conversion. Ÿ 6. Ÿ 1. 78 address. 272 circuits.4(7).20(86) circuit model.12(239). Ÿ 6.25(259). Ÿ 3. Ÿ 6.1(299) amplitude modulate. Ÿ 2. Ÿ 1. Ÿ 6.10(236).4(176). 63 bit stream.21(252). Ÿ 6.1(13) Cartesian form of z. apples. Ÿ 5. 43.26(261). 24.32(270) analog communication. Ÿ 6. Ÿ 1.30(268). Ÿ 5. 230 charge.1(39) amplier.36(274) carrier. 238 carrier frequency.14(241). Ÿ 6. Ÿ 5. Ÿ 6.14(241).32(270) analog computers.16(207).21(88).4(43). Terms are referenced by the page they appear on. 1 A active circuits. Ÿ 6. Ex.8(58). Ÿ 3. Ÿ 6.21(88) attenuation. 273 circuit-switched.26(261).10(236) basis functions. Ÿ 6. Ÿ 2.2(226) buering.25(259) channel coding. 142.5(25) channel.9> .11(237).13(199) BPSK.1 (1) Keywords do not necessarily appear in the text of the page. Ÿ 5. 261 block diagram. Ÿ 6. Ÿ 5. Ÿ 6. 236 baseband signal. 171 C capacitor. Ÿ 6. 1. 181 AM.31(269).2(226) circuit switched. Ÿ 3. Ÿ 4. Ex. Ÿ 3. Ÿ 6. 63. 189 aliasing.31(269) channel coder. Ÿ 6. Ÿ 1.10(236) analog signals. 242 binary symmetric channel.28(264) characteristic impedance.12(239) attenuation constant. Ÿ 3. 151 bandpass signal. 230 auxiliary band. 176 angle. Ÿ 3. Ÿ 6. Ÿ 6.12(239) analog. Ÿ 2. 251 bit. apples. Ÿ 2. 175 alphabet. 266 block channel coding. 39 clock speed.5(25) boolean arithmetic.9(189). Ÿ 2.14(241). Ÿ 3.6(27) amplitude. They are merely associated with that section.19(250). Ÿ 6. Ÿ 6. Ÿ 6. 142. 22.1(225). Ÿ 6. Ÿ 6. 6. 240 bit-reception error. 199. Ÿ 5. Ÿ 5.4(7). Ÿ 3. 268.2(40) B bandlimited. 252 block. 284 average power.9(235).9(59). 237 carrier amplitude. Ÿ 6. Ÿ 3. Ÿ 5.4(22).3(5). Ÿ 2. Ÿ 3. Ÿ 6. Ÿ 7.14(200). Ÿ 6. 171 bit interval. Ÿ 6. Ÿ 6. Ÿ 6. 140 amplitude modulation. Ÿ 6. Ÿ 3.1(39). Ÿ 6.11(237) Ampere. 14 cascade. Ÿ 2.3(5).2(2). Ÿ 6.30(268).21(88) analog signal. 240.11(237).27(262) channel decoding. 133. 
Ÿ 6. Ÿ 6. 225 broadcast mode.36(274) broadcast communication. 206 buttery. 226.1(13) ARPANET.2(40). Ÿ 5.9(235).14(200).15(244) baseband communication. Ÿ 6. Ÿ 5. 7.6(27). 15 angle of complex number. 6. Ÿ 6.31(269) bandpass lter. Ÿ 3.2(169) boxcar lter. Ÿ 6. Ÿ 2. Ÿ 6. 142 bandwidth.6(133).9(235).2(40). Ÿ 1. 238 Cartesian form. Ÿ 3. Ÿ 5.12(239).org/content/col10040/1. 191 bytes.1(39) circuit.13(240). 272 algorithm. Ÿ 6. Ÿ 3.10(236).15(204). Ÿ 6. Ÿ 6.19(250) bridge circuits.2(169) closed circuit.8(58) capacity.INDEX 305 Index of Keywords and Terms Keywords are listed by the section with that keyword (page numbers are in parentheses).11(237).2(17). Ÿ 2.6(181). Ÿ 6. Ÿ 6.34(272). Ÿ 6.34(272) ASP.

6(181) discrete-time sinc function. 242. 13 complex power. Ÿ 6. Ÿ 6. 179 complex Fourier series. Ÿ 6.23(256) Ÿ 6.26(261).3(22). Ÿ 6. Ÿ 6. 262 codeword error. Ÿ 5. 50 cuto frequency.1(225) communication theory.8(188). Ÿ 5.18(249).20(251).1(299) decode. Ÿ 6.26(261). Ÿ 2.16(207) computer network.14(200). Ÿ 4. Ÿ 2.23(256) coding eciency. Ÿ 6. Ÿ 5. 261 coherent. Ÿ 5. Ÿ 6. Ÿ 1. Ÿ 6.32(270) digital communication receiver.17(247).15(244).1(13).2(17) complex exponential sequence.13(240).11(195).22(254).5(232). Ÿ 6.29(266).9(235) communication network. Ÿ 5. 13 complex numbers.306 INDEX coaxial cable. Ÿ 5.27(262).25(259). Ÿ 5. Ÿ 6. 253 codeword.37(276) communication systems. 270 computer organization.15(204). Ÿ 6. Ÿ 5.9> . 58 complex plane. Ÿ 5. Ÿ 5.21(252).19(250) digital communication receivers.13(199) discrete-valued.12(196).34(272).19(250) digital lter.8(188).31(269) De-emphasis circuits.2(301) combinations. 196. Ÿ 6.36(274). 173 domain.16(207) digital sources.4(7) countably innite. Ÿ 6.22(254).33(271).2(226) communication channels. Ÿ 6.20(251). Ÿ 5. Ÿ 6. 271 dependent source. 20 complex number.15(204). Ÿ 5. Ÿ 6. 188 component. 273 Available for free at Connexions <http://cnx. Ÿ 6. Ÿ 5. Ÿ 5. Ÿ 5. Ÿ 6. Ÿ 6.9(189). Ÿ 2. Ÿ 6.1(39). Ÿ 5. 23. Ÿ 6. 275 combination. Ÿ 5.37(276) communication networks. Ÿ 6.3(22).6(181). 185 Discrete-Time Systems.6(181).4(22) complex amplitude. Ÿ 6. Ÿ 6.3(226). Ÿ 5. 279 complex. Ÿ 6.3(22) dedicated.21(252). Ÿ 3. Ÿ 6. Ÿ 2. Ÿ 6.2(169) conductance.14(241). Ÿ 6.20(251) Complementary lters. 14 complex exponential.6(48).2(301) coding.9(189) cosine. Ÿ 5.16(207) digital signal. Ÿ 6.2(2) digital signal processing. 187.19(250).35(273). Ÿ 3.28(264) decompose.6(181).12(196).14(200) digital.32(270). 41 conductor. Ÿ 2. Ÿ 5.31(269). Ÿ 6. Ÿ 6.3(5).2(119). Ÿ 5. Ÿ 3.9(189). Ÿ 6.33(271) digital communication.14(200). 68 D data compression. 202 current. Ÿ 6.21(252).1(225). Ÿ 5.1(39) DFT. 13. Ÿ 7. Ÿ 4. 64 complex conjugate. Ÿ 1.4(22) complexity. Ÿ 6.10(191). 
Ÿ 3.14(200).9(189).32(270) communication channel. Ÿ 3. Ÿ 6.14(200).20(251). Ÿ 6. Ÿ 6. Ÿ 7.4(22). 238 coherent receiver. Ÿ 5. 63 complex-valued. Ÿ 6. Ÿ 3.2(17) compression.15(204). 301 combinatorial. Ÿ 5. 227 codebook. Ÿ 6.8(234).11(237) collision.16(207) discrete-time ltering. Ÿ 2. Ÿ 5. Ÿ 6.11(195). Ÿ 5. 121 Cooley-Tukey algorithm. Ÿ 5.2(40).30(268). Ÿ 7. 189 computational complexity.17(247) digital communication systems. Ÿ 6. Ÿ 6.12(196). Ÿ 6.20(86) Discrete Fourier Transform.2(40) current divider. Ÿ 6.21(252). Ÿ 6. Ÿ 6. Ÿ 6. Ÿ 5. Ÿ 7.7(186). Ÿ 5.16(245). Ÿ 2. Ÿ 5. Ÿ 6.23(256). Ÿ 3. Ÿ 5.2(119) complex frequency. 77 device electronic. Ÿ 5. Ÿ 6.7(234). Ÿ 6. 39.6(233).22(254).16(207) discrete-time Fourier transform. Ÿ 5.14(241). 17 complex amplitudes. Ÿ 5.23(256) computational advantage.4(22) decomposition. 1. 112 decibel. Ÿ 6. Ÿ 6. Ÿ 5.10(191).2(301) communication.22(254). Ÿ 5. Ÿ 6.36(274) communication protocol.14(200) discrete-time. Ÿ 5.37(276) Computer networks. Ÿ 1.14(200) dierence equation. Ÿ 2.1(39) conjugate symmetry. Ÿ 6. Ÿ 6. Ÿ 6. Ÿ 6. Ÿ 2. 254 datarate. Ÿ 6.27(262). Ÿ 6.10(191).7(186).29(266) decoding. Ÿ 2.28(264).

INDEX (pp. 307-311)

[Alphabetical index of the collection, covering terms from "Doppler" through "zero-pad" with their section references (e.g., § 4.3, p. 124) and page locators; the multi-column index layout is not reproduced here.]

Available for free at Connexions <http://cnx.org/content/col10040/1.9>
Attributions (pp. 312-325)

Collection: Fundamentals of Electrical Engineering I
Edited by: Don Johnson
URL: http://cnx.org/content/col10040/1.9
License: http://creativecommons.org/licenses/by/1.0/

Every module in this collection, from "Themes" through "Frequency Allocations", was authored by Don Johnson, is hosted at cnx.org, and is licensed under the Creative Commons Attribution 1.0 license (http://creativecommons.org/licenses/by/1.0/).

Fundamentals of Electrical Engineering I

The course focuses on the creation, manipulation, transmission, and reception of information by electronic means. Elementary signal theory; time- and frequency-domain analysis; Sampling Theorem. Digital information theory; digital transmission of analog signals; error-correcting codes.

About Connexions

Since 1999, Connexions has been pioneering a global system where anyone can create course materials and make them fully accessible and easily reusable free of charge. We are a Web-based authoring, teaching and learning environment open to anyone interested in education, including students, teachers, professors and lifelong learners. We connect ideas and facilitate educational communities.

Connexions's modular, interactive courses are in use worldwide by universities, community colleges, K-12 schools, distance learners, and lifelong learners. Connexions materials are in many languages, including English, Spanish, Chinese, Japanese, Italian, Vietnamese, French, Portuguese, and Thai. Connexions is part of an exciting new information distribution system that allows for Print on Demand Books. Connexions has partnered with innovative on-demand publisher QOOP to accelerate the delivery of printed course materials and textbooks into classrooms worldwide at lower prices than traditional academic publishers.