
Principles of Digital Communications

Collection Editor: Tuan Do-Hong

Principles of Digital Communications
Collection Editor: Tuan Do-Hong
Authors: Behnaam Aazhang, Richard Baraniuk, C. Sidney Burrus, Thanh Do-Ngoc, Tuan Do-Hong, Michael Haag, Nick Kingsbury, Stephen Kruzick, Sinh Nguyen-Le, Melissa Selik, Ha Ta-Hong
Translated By: Ha Ta-Hong, Tuan Do-Hong, Thanh Do-Ngoc, Sinh Nguyen-Le

Online: < http://cnx.org/content/col10474/1.7/ >

CONNEXIONS
Rice University, Houston, Texas

This selection and arrangement of content as a collection is copyrighted by Tuan Do-Hong. It is licensed under the Creative Commons Attribution 2.0 license (http://creativecommons.org/licenses/by/2.0/). Collection structure revised: November 16, 2007. PDF generated: February 4, 2011. For copyright and attribution information for the modules contained in this collection, see p. 137.

Table of Contents

1 Syllabus
1.1 Letter to Student
1.2 Contact Information
1.3 Resources
1.4 Purpose of the Course
1.5 Course Description
1.6 Calendar
1.7 Grading Procedures

2 Chapter 1: Signals and Systems
2.1 Signal Classifications and Properties
2.2 System Classifications and Properties
2.3 The Fourier Series
2.4 The Fourier Transform
2.5 Review of Probability and Random Variables
2.6 Introduction to Stochastic Processes
2.7 Second-order Description of Stochastic Processes
2.8 Gaussian Processes
2.9 White and Coloured Processes
2.10 Transmission of Stationary Process Through a Linear Filter

3 Chapter 2: Source Coding
3.1 Information Theory and Coding
3.2 Entropy
3.3 Source Coding
3.4 Huffman Coding

4 Chapter 3: Communication over AWGN Channels
4.1 Data Transmission and Reception
4.2 Signalling
4.3 Geometric Representation of Modulation Signals
4.4 Demodulation and Detection
4.5 Demodulation
4.6 Detection by Correlation
4.7 Examples of Correlation Detection
4.8 Matched Filters
4.9 Examples with Matched Filters
4.10 Performance Analysis of Binary Orthogonal Signals with Correlation
4.11 Performance Analysis of Orthogonal Binary Signals with Matched Filters
4.12 Carrier Phase Modulation
4.13 Carrier Frequency Modulation
4.14 Differential Phase Shift Keying

5 Chapter 4: Communication over Band-limited AWGN Channel
5.1 Digital Transmission over Baseband Channels
5.2 Introduction to ISI
5.3 Pulse Amplitude Modulation Through Bandlimited Channel
5.4 Precoding and Bandlimited Signals
5.5 Pulse Shaping to Reduce ISI
5.6 Two Types of Error-Performance Degradation
5.7 Eye Pattern
5.8 Transversal Equalizer
5.9 Decision Feedback Equalizer
5.10 Adaptive Equalization

6 Chapter 5: Channel Coding
6.1 Channel Capacity
6.2 Mutual Information
6.3 Typical Sequences
6.4 Shannon's Noisy Channel Coding Theorem
6.5 Channel Coding
6.6 Convolutional Codes

7 Chapter 6: Communication over Fading Channels
7.1 Fading Channel
7.2 Characterizing Mobile-Radio Propagation
7.3 Large-Scale Fading
7.4 Small-Scale Fading
7.5 Signal Time-Spreading
7.6 Mitigating the Degradation Effects of Fading
7.7 Mitigation to Combat Frequency-Selective Distortion
7.8 Mitigation to Combat Fast-Fading Distortion
7.9 Mitigation to Combat Loss in SNR
7.10 Diversity Techniques
7.11 Diversity-Combining Techniques
7.12 Modulation Types for Fading Channels
7.13 The Role of an Interleaver
7.14 Application of Viterbi Equalizer in GSM System
7.15 Application of Rake Receiver in CDMA System

Glossary
Bibliography
Index
Attributions

Chapter 1
Syllabus

1.1 Letter to Student

To the Student:

This course and this Student Manual reflect a collective effort by your instructor, the Vietnam Education Foundation, the Vietnam Open Courseware (VOCW) Project, and faculty colleagues within Vietnam and the United States who served as reviewers of drafts of this Student Manual. This course is an important component of our academic program. Although it has been offered for many years, this latest version represents an attempt to expand the range of sources of information and instruction so that the course continues to be up-to-date and the methods well suited to what is to be learned.

This Student Manual is designed to assist you through the course by providing specific information about student responsibilities, including requirements, timelines, and evaluations. Our goal is to help you learn what is important about this particular field and to eventually succeed as a professional applying what you learn in this course. You will be asked from time to time to offer feedback on how the Student Manual is working and how the course is progressing. Your comments will inform the development team about what is working and what requires attention. Thank you for your cooperation.

1.2 Contact Information

Faculty Information: Department of Telecommunications Engineering, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology

Instructor: Dr.-Ing. Tuan Do-Hong
Office Location: Ground floor, B3 Building
Phone: +84 (0) 8 8654184
Email: do-hong@hcmut.edu.vn
Office Hours: 9:00 am - 5:00 pm

Assistants:
Office Location: Ground floor, B3 Building
Phone: +84 (0) 8 8654184
Email:
Office Hours: 9:00 am - 5:00 pm

Lab sections/support:

(This content is available online at <http://cnx.org/content/m15429/1.3/> and <http://cnx.org/content/m15431/1.1/>.)

1.3 Resources

Connexions: http://cnx.org/
MIT's OpenCourseWare: http://ocw.mit.edu/index.html
Computer resource: Matlab and Simulink

Textbook(s):
Required: [1] Bernard Sklar, Digital Communications: Fundamentals and Applications, 2nd edition, Prentice Hall, 2001.
Recommended: [2] John Proakis, Digital Communications, 4th edition, McGraw-Hill, 2001. [3] Bruce Carlson et al., Communication Systems: An Introduction to Signals and Noise in Electrical Communication, 4th edition, McGraw-Hill, 2001. [4] Rodger E. Ziemer, Roger W. Peterson, Introduction to Digital Communication, 2nd edition, Prentice Hall, 2000.

1.4 Purpose of the Course

Title: Principles of Digital Communications
Credits: 3 (4 hours/week, 15 weeks/semester)

Course Rationale: Wireless communication is fundamentally the art of communicating information without wires. In principle, wireless communication encompasses any number of techniques, including underwater acoustic communication, radio communication, and satellite communication, among others. The term was coined in the early days of radio, fell out of fashion for about fifty years, and was rediscovered during the cellular telephony revolution. Wireless now implies communication using electromagnetic waves, placing it within the domain of electrical engineering. Wireless communication techniques can be classified as either analog or digital. The fundamental difference between the two is that in digital communication, the source is assumed to be digital. The first commercial systems were analog, including AM radio, FM radio, television, and first-generation cellular systems. Analog communication is gradually being replaced with digital communication. Every major wireless system being developed and deployed is built around digital communication, including cellular communication, wireless local area networking, personal area networking, and high-definition television. Thus this course will focus on digital wireless communication.

1.5 Course Description

This course explores elements of the theory and practice of digital communications. It is a required core course in communications engineering which introduces principles of digital communications while reinforcing concepts learned in analog communications systems. It is intended to provide comprehensive coverage of digital communication systems for final-year undergraduate students, first-year graduate students, and practicing engineers. The course will 1) model and study the effects of channel impairments, such as distortion, noise, interference, and fading, on the performance of communication systems, and 2) introduce signal processing, modulation, and coding techniques that are used in digital communication systems.

Pre-requisites: Communication Systems. Thorough knowledge of Signals and Systems, Digital Signal Processing, Linear Algebra, and Probability Theory and Stochastic Processes is essential.

(This content is available online at <http://cnx.org/content/m15433/>, <http://cnx.org/content/m15435/>, and <http://cnx.org/content/m15438/>.)

The following concepts/tools are acquired in this course:

Signals and Systems: classification of signals and systems; orthogonal functions, Fourier series, Fourier transform; spectra and filtering; sampling theory, Nyquist theorem; random processes, autocorrelation, power spectrum; systems with random input/output.

Source Coding: elements of compression, Huffman coding; elements of quantization theory; pulse code modulation (PCM) and variations; rate/bandwidth calculations in communication systems.

Communication over AWGN Channels: signals and noise, Eb/N0; receiver structure, demodulation and detection; correlation receiver and matched filter; detection of binary signals in AWGN; optimal detection for general modulation; coherent and non-coherent detection.

Communication over Band-limited AWGN Channel: ISI in band-limited channels; zero-ISI condition: the Nyquist criterion; raised cosine filters; partial response signals; equalization using zero-forcing criterion.

Channel Coding: block codes; types of error control; error detection and correction; convolutional codes and the Viterbi algorithm.

Communication over Fading Channel: fading channels; characterizing mobile-radio propagation; signal time-spreading; mitigating the effects of fading; application of Viterbi equalizer in GSM system; application of Rake receiver in CDMA system.

1.6 Calendar

Week 1: Overview of signals and spectra
Week 2: Source coding
Week 3: Receiver structure, demodulation and detection
Week 4: Correlation receiver and matched filter. Detection of binary signals in AWGN
Week 5: Optimal detection for general modulation. Coherent and non-coherent detection (I)
Week 6: Coherent and non-coherent detection (II)
Week 7: ISI in band-limited channels. Zero-ISI condition: the Nyquist criterion
Week 8: Mid-term exam
Week 9: Raised cosine filters. Partial response signals

(This content is available online at <http://cnx.org/content/m15448/1.3/>.)

Week 10: Channel equalization
Week 11: Channel coding. Block codes
Week 12: Convolutional codes
Week 13: Viterbi algorithm
Week 14: Fading channel. Characterizing mobile-radio propagation
Week 15: Mitigating the effects of fading
Week 16: Applications of Viterbi equalizer and Rake receiver in GSM and CDMA systems
Week 17: Final exam

1.7 Grading Procedures

Grading for this course is based on homework and programming assignments, in-class participation, a mid-term exam, and a final exam.

Homework and Programming Assignments: Homework and programming assignments will be given to test students' knowledge and understanding of the covered topics. They will be assigned frequently throughout the course and will be due at the time and place indicated on the assignment. No late homework will be allowed. Homework and programming assignments must be done individually by each student, without collaboration with others. It is recommended that students practice working problems from the book, example problems, and homework.

Participation: Questions and discussion in class are encouraged. Participation will be noted.

Exams: There will be in-class mid-term and final exams. They will be closed book and closed notes. The mid-term exam and the final exam will be time-limited to 60 minutes and 120 minutes, respectively.

Grades for this course will be based on the following weighting:

• Homework and In-class Participation: 20%
• Programming Assignments: 20%
• Mid-term Exam: 20%
• Final Exam: 40%

(This content is available online at <http://cnx.org/content/m15437/1.2/>.)

Chapter 2
Chapter 1: Signals and Systems

2.1 Signal Classifications and Properties

2.1.1 Introduction

This module will begin our study of signals and systems by laying out some of the fundamentals of signal classification, with a brief discussion of each. It is essentially an introduction to the important definitions and properties that are fundamental to the discussion of signals and systems.

2.1.2 Classifications of Signals

2.1.2.1 Continuous-Time vs. Discrete-Time

As the names suggest, this classification is determined by whether the time axis is discrete (countable) or continuous (Figure 2.1). A continuous-time signal will contain a value for all real numbers along the time axis. In contrast to this, a discrete-time signal, often created by sampling a continuous signal, will only have values at equally spaced intervals along the time axis.

Figure 2.1

(This content is available online at <http://cnx.org/content/m10057/2.21/>. See also "Discrete-Time Signals" <http://cnx.org/content/m0009/latest/>.)

2.1.2.2 Analog vs. Digital

The difference between analog and digital is similar to the difference between continuous-time and discrete-time. However, in this case the difference involves the values of the function. Analog corresponds to a continuous set of possible function values, while digital corresponds to a discrete set of possible function values. A common example of a digital signal is a binary sequence, where the values of the function can only be one or zero.

Figure 2.2

2.1.2.3 Periodic vs. Aperiodic

Periodic signals repeat with some period T, while aperiodic, or nonperiodic, signals do not (Figure 2.3). We can define a periodic function through the following mathematical expression, where t can be any number and T is a positive constant:

f(t) = f(T + t)    (2.1)

The fundamental period of our function, f(t), is the smallest value of T that still allows (2.1) to be true.

(a) (b)
Figure 2.3: (a) A periodic signal with period T0 (b) An aperiodic signal

(See also "Continuous Time Periodic Signals" <http://cnx.org/content/m10744/latest/>.)

2.1.2.4 Finite vs. Infinite Length

As the name implies, signals can be characterized as to whether they have a finite or infinite length set of values. Most finite-length signals are used when dealing with discrete-time signals or a given sequence of values. Mathematically speaking, f(t) is a finite-length signal if it is nonzero over a finite interval t1 < t < t2, where t1 > −∞ and t2 < ∞. An example can be seen in Figure 2.4. Similarly, an infinite-length signal, f(t), is defined as nonzero over all real numbers:

−∞ < t < ∞

Figure 2.4: Finite-Length Signal. Note that it only has nonzero values on a finite interval.

2.1.2.5 Causal vs. Anticausal vs. Noncausal

Causal signals are signals that are zero for all negative time, while anticausal signals are signals that are zero for all positive time. Noncausal signals are signals that have nonzero values in both positive and negative time (Figure 2.5).

(a) (b) (c)
Figure 2.5: (a) A causal signal (b) An anticausal signal (c) A noncausal signal

2.1.2.6 Even vs. Odd

An even signal is any signal f such that f(t) = f(−t). Even signals can be easily spotted as they are symmetric around the vertical axis. An odd signal, on the other hand, is a signal f such that f(t) = −f(−t) (Figure 2.6).

(a) (b)
Figure 2.6: (a) An even signal (b) An odd signal

Using the definitions of even and odd signals, we can show that any signal can be written as a combination of an even and odd signal. That is, every signal has an odd-even decomposition. To demonstrate this, we have to look no further than a single equation:

f(t) = (1/2) (f(t) + f(−t)) + (1/2) (f(t) − f(−t))    (2.2)

By multiplying and adding this expression out, it can be shown to be true. Also, it can be shown that f(t) + f(−t) fulfills the requirement of an even function, while f(t) − f(−t) fulfills the requirement of an odd function.

Example 2.1

(a) (b) (c) (d)
Figure 2.7: (a) The signal we will decompose using odd-even decomposition (b) Even part: e(t) = (1/2)(f(t) + f(−t)) (c) Odd part: o(t) = (1/2)(f(t) − f(−t)) (d) Check: e(t) + o(t) = f(t)
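The decomposition above is easy to check numerically. The following short Python sketch (an added illustration, not part of the original module; the choice of test signal is arbitrary) splits a sampled signal into its even and odd parts and verifies that they sum back to the original.

```python
import numpy as np

# Sample a test signal on a symmetric time axis so that f(-t) is available.
t = np.linspace(-5, 5, 1001)
f = np.exp(-t) * (t >= 0)          # a causal signal, neither even nor odd

f_rev = f[::-1]                    # samples of f(-t) on the symmetric grid
e = 0.5 * (f + f_rev)              # even part: e(t) = (f(t) + f(-t)) / 2
o = 0.5 * (f - f_rev)              # odd part:  o(t) = (f(t) - f(-t)) / 2

assert np.allclose(e, e[::-1])     # even part is symmetric: e(t) = e(-t)
assert np.allclose(o, -o[::-1])    # odd part is antisymmetric: o(t) = -o(-t)
assert np.allclose(e + o, f)       # the decomposition reconstructs f exactly
```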

2. causal.7 Deterministic vs.11 2. neither even nor odd.1. and. On the other hand. and deterministic or random. deterministic. periodic or aperiodic. analog or digital.1. nite or innite. We can also divide them based on their causality and symmetry properties. 4 "Introduction to Random Signals and Processes" <http://cnx. that are not discussed here but will be described in subsequent modules. The future values of a random signal cannot be accurately predicted and can usually of sets of signals (Figure 2. innite length. rule.org/content/m10656/latest/> .8).8: (a) Deterministic Signal (b) Random Signal Example 2. There are other ways to classify signals. analog. They can be continuous time or discrete time. Random A deterministic signal is a signal in which each value of the signal is xed and can be determined by a mathematical expression. (a) (b) Figure 2. Because of this the future values of the signal can be calculated from past values with complete condence.2 Consider the signal dened for all real t described by f (t) = { sin (2πt) /t 0 t≥1 t<1 (2. aperiodic. such as boundedness. handedness.3) This signal is continuous time. 2. by denition. a only be guessed based on the averages 5 random signal 4 has a lot of uncertainty about its behavior. and continuity.org/content/m10649/latest/> 5 "Random Processes: Mean and Variance" <http://cnx. or table.3 Signal Classications Summary This module describes just some of the many ways in which signals can be classied.

2.2 System Classifications and Properties

2.2.1 Introduction

In this module some of the basic classifications of systems will be briefly introduced and the most important properties of these systems are explained. As can be seen, the properties of a system provide an easy way to distinguish one system from another. Understanding these basic differences between systems, and their properties, will be a fundamental concept used in all signal and system courses. Once a set of systems can be identified as sharing particular properties, one no longer has to reprove a certain characteristic of a system each time; it can simply be known due to the system classification.

2.2.2 Classification of Systems

2.2.2.1 Continuous vs. Discrete

One of the most important distinctions to understand is the difference between discrete-time and continuous-time systems. A system in which the input signal and output signal both have continuous domains is said to be a continuous system. One in which the input signal and output signal both have discrete domains is said to be a discrete system. Of course, it is possible to conceive of signals that belong to neither category, such as systems in which sampling of a continuous-time signal or reconstruction from a discrete-time signal take place.

2.2.2.2 Linear vs. Nonlinear

A linear system is any system that obeys the properties of scaling (first-order homogeneity) and superposition (additivity), further described below. A nonlinear system is any system that does not have at least one of these properties.

To show that a system H obeys the scaling property is to show that

H(k f(t)) = k H(f(t))    (2.4)

Figure 2.9: A block diagram demonstrating the scaling property of linearity

To demonstrate that a system H obeys the superposition property of linearity is to show that

H(f1(t) + f2(t)) = H(f1(t)) + H(f2(t))    (2.5)

(This content is available online at <http://cnx.org/content/m10084/2.21/>.)
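Both properties can be sanity-checked numerically. The sketch below (an added illustration; the test signals and the two example systems are arbitrary choices, not from the original module) applies the scaling and superposition tests to a linear gain and to a squarer. Note that passing such a test on a few signals does not prove linearity in general, but failing it does disprove it.

```python
import numpy as np

t = np.linspace(0, 1, 200)
f1 = np.sin(2 * np.pi * 3 * t)
f2 = np.cos(2 * np.pi * 5 * t)
k = 2.5

systems = {
    "gain (linear)": lambda f: 3 * f,        # memoryless gain
    "squarer (nonlinear)": lambda f: f**2,   # violates both properties
}

for name, H in systems.items():
    scaling = np.allclose(H(k * f1), k * H(f1))
    superposition = np.allclose(H(f1 + f2), H(f1) + H(f2))
    print(name, "scaling:", scaling, "superposition:", superposition)
```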

Figure 2.10: A block diagram demonstrating the superposition property of linearity

It is possible to check a system for linearity in a single (though larger) step. To do this, simply combine the first two steps to get

H(k1 f1(t) + k2 f2(t)) = k1 H(f1(t)) + k2 H(f2(t))    (2.6)

2.2.2.3 Time Invariant vs. Time Varying

A system is said to be time invariant if it commutes with the parameter shift operator defined by ST(f(t)) = f(t − T) for all T, which is to say

H ST = ST H    (2.7)

for all real T. Intuitively, that means that for any input function that produces some output function, any time shift of that input function will produce an output function identical in every way except that it is shifted by the same amount. Any system that does not have this property is said to be time varying.

Figure 2.11: This block diagram shows the condition for time invariance. The output is the same whether the delay is put on the input or the output.

2.2.2.4 Causal vs. Noncausal

A causal system is one in which the output depends only on current or past inputs, but not future inputs.

Similarly, an anticausal system is one in which the output depends only on current or future inputs, but not past inputs. Finally, a noncausal system is one in which the output depends on both past and future inputs. All "realtime" systems must be causal, since they can not have future inputs available to them.

One may think the idea of future inputs does not seem to make much physical sense; however, we have only been dealing with time as our dependent variable so far, which is not always the case. Imagine rather that we wanted to do image processing. Then the dependent variable might represent pixel positions to the left and right (the "future") of the current position on the image, and we would not necessarily have a causal system.

(a) (b)
Figure 2.12: (a) For a typical system to be causal, the output at time t0, y(t0), can only depend on the portion of the input signal before t0.

2.2.2.5 Stable vs. Unstable

There are several definitions of stability, but the one that will be used most frequently in this course will be bounded input, bounded output (BIBO) stability. In this context, a stable system is one in which the output is bounded if the input is also bounded. Similarly, an unstable system is one in which at least one bounded input produces an unbounded output. Representing this mathematically, a stable system must have the following property, where x(t) is the input and y(t) is the output. The output must satisfy the condition

|y(t)| ≤ My < ∞    (2.8)

whenever we have an input to the system that satisfies

|x(t)| ≤ Mx < ∞    (2.9)

Mx and My both represent a set of finite positive numbers, and these relationships hold for all of t. Otherwise, the system is unstable.
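The difference between a stable and an unstable system is easy to see numerically. The sketch below (an added illustration; the two example systems are standard textbook choices, not from the original module) drives the same bounded input through a moving-average filter, which is BIBO stable, and through a running accumulator, which is not.

```python
import numpy as np

n = np.arange(2000)
x = np.ones(n.size)                  # bounded input: |x[n]| = 1 for all n

# Moving-average filter: sum(|h|) is finite, so any bounded input
# produces a bounded output (BIBO stable).
h = np.ones(5) / 5
y_stable = np.convolve(x, h)[: n.size]

# Accumulator y[n] = x[0] + ... + x[n]: this bounded input drives the
# output to grow without bound, so the system is unstable.
y_unstable = np.cumsum(x)

print(np.abs(y_stable).max())        # ~1, bounded for all n
print(np.abs(y_unstable).max())      # 2000, grows with signal length
```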

2.2.3 System Classifications Summary

This module describes just some of the many ways in which systems can be classified. Systems can be continuous time, discrete time, or neither. They can be linear or nonlinear, time invariant or time varying, and stable or unstable. We can also divide them based on their causality properties. There are other ways to classify systems, such as use of memory, that are not discussed here but will be described in subsequent modules.

2.3 The Fourier Series

2.3.1 Theorems on the Fourier Series

Four of the most important theorems in the theory of Fourier analysis are the inversion theorem, the convolution theorem, the differentiation theorem, and Parseval's theorem [4]. All of these are based on the orthogonality of the basis function of the Fourier series and integral, and all require knowledge of the convergence of the sums and integrals. The practical and theoretical use of Fourier analysis is greatly expanded if use is made of distributions or generalized functions [8][1]. Because energy is an important measure of a function in signal processing applications, the Hilbert space of L2 functions is a proper setting for the basic theory, and a geometric view can be especially useful [5][4].

The following theorems and results concern the existence and convergence of the Fourier series and the discrete-time Fourier transform [7]. Details, discussions, and proofs can be found in the cited references.

• If f(x) has bounded variation in the interval (−π, π), the Fourier series corresponding to f(x) converges to the value f(x) at any point within the interval at which the function is continuous; it converges to the value (1/2)[f(x + 0) + f(x − 0)] at any such point at which the function is discontinuous. At the points π, −π it converges to the value (1/2)[f(−π + 0) + f(π − 0)]. [6]
• If f(x) is of bounded variation in (−π, π), the Fourier series converges to f(x), uniformly in any interval (a, b) in which f(x) is continuous, the continuity at a and b being on both sides. [6]
• If f(x) is of bounded variation in (−π, π), the Fourier series converges to (1/2)[f(x + 0) + f(x − 0)], bounded throughout the interval (−π, π). [6]
• If f(x) is bounded and if it is continuous in its domain at every point, with the exception of a finite number of points at which it may have ordinary discontinuities, and if the domain may be divided into a finite number of parts, such that in any one of them the function is monotone (in other words, the function has only a finite number of maxima and minima in its domain), the Fourier series of f(x) converges to f(x) at points of continuity and to (1/2)[f(x + 0) + f(x − 0)] at points of discontinuity. [6][3]
• If f(x) is such that, when the arbitrarily small neighborhoods of a finite number of points in whose neighborhood |f(x)| has no upper bound have been excluded, f(x) becomes a function with bounded variation, then the Fourier series converges to the value (1/2)[f(x + 0) + f(x − 0)] at every point in (−π, π), except the points of infinite discontinuity of the function, provided the improper integral of f(x) over (−π, π) exists and is absolutely convergent. [6]
• If f is of bounded variation, the Fourier series of f converges at every point x to the value [f(x + 0) + f(x − 0)]/2. If f is, in addition, continuous at every point of an interval I = (a, b), its Fourier series is uniformly convergent in I. [7]
• If a(k) and b(k) are absolutely summable, the Fourier series converges uniformly to f(x), which is continuous. [10]

(This content is available online at <http://cnx.org/content/m13873/1.1/>.)
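The convergence behavior described in these theorems can be observed directly. The sketch below (an added illustration, not from the original module) builds partial sums of the Fourier series of a square wave and checks that, at a jump, the partial sums sit at the midpoint (1/2)[f(x + 0) + f(x − 0)] = 0, while at a continuity point they approach the function value.

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier sum of the odd square wave f(x)=1 on (0,pi), -1 on (-pi,0).
    Its Fourier series is (4/pi) * sum over odd k of sin(kx)/k."""
    s = np.zeros_like(np.asarray(x, dtype=float))
    for k in range(1, 2 * n_terms, 2):                 # odd harmonics only
        s = s + (4 / np.pi) * np.sin(k * x) / k
    return s

for n in (10, 100, 1000):
    at_jump = square_wave_partial_sum(0.0, n)          # discontinuity at x = 0
    at_smooth = square_wave_partial_sum(np.pi / 2, n)  # continuity point
    print(n, at_jump, at_smooth)
# at_jump stays at 0 = (f(0+) + f(0-))/2; at_smooth converges to f(pi/2) = 1
```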

• If a(k) and b(k) are square summable, the partial sums Sn = Sn(f) = Σ_{|k|≤n} f̂(k) e_k converge to f in the L2 norm. [5]
• For any 1 ≤ p < ∞ and any f ∈ C^p(S^1), the partial sums Sn converge to f uniformly as n → ∞; in fact, ||Sn − f||∞ is bounded by a constant multiple of n^(−p+1/2). [4]
• Suppose that f is periodic, of period X, is defined and bounded on [0, X], and that at least one of the following four conditions is satisfied: (i) f is piecewise monotonic on [0, X]; (ii) f has a finite number of maxima and minima on [0, X] and a finite number of discontinuities on [0, X]; (iii) f is of bounded variation on [0, X]; (iv) f is piecewise smooth on [0, X]. Then it will follow that the Fourier series coefficients may be defined through the defining integral, using proper Riemann integrals, and that the Fourier series converges to f(x) at a.a. x, to f(x) at each point of continuity of f, and to the value (1/2)[f(x−) + f(x+)] at all x. [7]

2.4 The Fourier Transform

2.4.1 The Fourier Transform

Many practical problems in signal analysis involve either infinitely long or very long signals where the Fourier series is not appropriate. For these cases, the Fourier transform (FT) and its inverse (IFT) have been developed. This transform has been used with great success in virtually all quantitative areas of science and technology where the concept of frequency is important. While the Fourier series was used before Fourier worked on it, the Fourier transform seems to be his original idea. It can be derived as an extension of the Fourier series, by letting the length increase to infinity, or the Fourier transform can be independently defined and then the Fourier series shown to be a special case of it. The latter approach is the more general of the two, but the former is more intuitive [9][2]. The Fourier series expansion results in transforming a periodic, continuous-time function to two discrete indexed frequency functions, a(k) and b(k), that are not periodic.

2.4.1.1 Definition of the Fourier Transform

The Fourier transform (FT) of a real-valued (or complex) function of the real variable t is defined by

X(ω) = ∫_{−∞}^{∞} x(t) e^(−jωt) dt    (2.11)

giving a complex-valued function of the real variable ω representing frequency. The inverse Fourier transform (IFT) is given by

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^(jωt) dω    (2.12)

Because of the infinite limits on both integrals, the question of convergence is important. There are useful practical signals that do not have Fourier transforms if only classical functions are allowed, because of problems with convergence. The use of delta functions (distributions) in both the time and frequency domains allows a much larger class of signals to be represented [9]. Note the Fourier transform takes a function of continuous time into a function of continuous frequency, neither function being periodic.

(This content is available online at <http://cnx.org/content/m13874/1.2/>.)
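The transform pair above can be explored numerically. The following sketch (an added illustration, not part of the original module) approximates the defining integral for a rectangular pulse and compares the result with the known analytic transform, X(ω) = 2 sin(ωT/2)/ω for a unit-height pulse of width T.

```python
import numpy as np

T = 2.0                                    # pulse width (arbitrary choice)
t = np.linspace(-10, 10, 20001)
dt = t[1] - t[0]
x = (np.abs(t) <= T / 2).astype(float)     # rectangular pulse of width T

omega = np.linspace(-20, 20, 401)
# Riemann-sum approximation of X(w) = integral of x(t) exp(-j w t) dt
X = np.array([np.sum(x * np.exp(-1j * w * t)) * dt for w in omega])

# Analytic transform of the pulse: 2 sin(wT/2) / w, with value T at w = 0
with np.errstate(divide="ignore", invalid="ignore"):
    X_exact = np.where(omega == 0, T, 2 * np.sin(omega * T / 2) / omega)

print(np.max(np.abs(X - X_exact)))         # small discretization error
```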

2. Interestingly. In this chapter and in the next one we will review the theory of probability. neither function being periodic. Figure 2. X (ω) = ∞ 2π Note the Fourier transform takes a function of continuous time into a function of continuous frequency. and axioms for modeling within the context of the experiment.5. Other interesting and illustrative examples can be found in [9][2]. A scientic experiment should indeed be repeatable where each outcome could naturally have an associated probability of occurrence.1. Examples of modulation.4. • • • If If If x (t) = δ (t) then X (ω) = 1 x (t) = 1 then X (ω) = 2πδ (ω) x (t) is an innite sequence of delta functions spaced T apart. We then introduce denitions. The sample space Ω is the set of all possible outcomes of a random experiment. and characterize their behavior as they traverse through deterministic systems disturbed by noise and interference. This is dened formally as the ratio of the number of times the outcome occurs to the total number of times the experiment is repeated. modeling both digital or analog information and many physical media requires a probabilistic setting.17 2.1 Random Variables A random variable is the assignment of a real number to each outcome of a random experiment. In order to develop practical models for random phenomena we start with carrying out a random experiment.org/content/m10224/2. which involves transmission of information. model random signals. the Fourier transform of a periodic function will be a innitely long string of delta functions with weights that are the Fourier series coecients.2 Examples of the Fourier Transform Deriving a few basic transforms and using the properties allows a large class of signals to be easily studied. Engineering such a system requires modeling both the information and the transmission media. sampling. its trans2π/T spaced 2π/T apart. • ∞ k=−∞ δ (ω − 2πk/T ). .16/>. If distribution" or delta functions" are allowed. denoted by The outcome of a random experiment is Such outcomes ω.5 Review of Probability and Random Variables 9 The focus of this course is on digital communication. and others will be given.13 9 This content is available online at <http://cnx. from source to destination using digital technology. 2. rules. could be an abstract description in words. form is also an innite sequence of delta functions of weight x (t) = n=−∞ δ (t − nT ). in its most general sense.

.16) .3: Discrete Random Variable A random variable d dx f X ( x ) is the probability density function (pdf ) (e. F X ( x ) is dierentiable and f X ( x ) = (F X ( x ))) X is discrete if it only takes at most countably many points (i.Y ( a.14) Denition 2.18 CHAPTER 2.. ω4 .g.2: Continuous Random Variable A random variable integral form. 2.1: Cumulative distribution The cumulative distribution function of a random variable F X ( R → R ) such that (2. Outcomes {ω1 .e. F X (·) is piecewise constant). b ) = = P r [X ≤ a.13) F X (b) = P r [X ≤ b] = P r [{ ω ∈ Ω | X (ω) ≤ b }] Figure 2. ω5 .15) Two random variables dened on an experiment have joint distribution F X. ω3 . The probability mass function (pmf ) is dened as p X ( xk ) = P r [X = xk ] = F X ( xk ) − x(x→xk ) ∧ (x<xk ) limit F X (x) (2. ω6 } ωi = i dots X (ωi ) = i on the face of the dice..5. or X is continuous if the cumulative distribution function can be written in an b F X (b) = −∞ and f X ( x ) dx (2. ω2 .2 Distributions Probability assignments on intervals a<X≤b X is a function Denition 2. CHAPTER 1: SIGNALS AND SYSTEMS Example 2.3 Roll a dice.14 Denition 2.. Y ≤ b] P r [{ ω ∈ Ω | (X (ω) ≤ a) ∧ (Y (ω) ≤ b) }] (2.

21) k and l.17) ( x.Y ( xk . y ) dxdy −∞ (2.19) x with f X ( x ) > 0 otherwise conditional density is not dened for those values of x with f X ( x ) = 0 independent if f X.. yl ) = P r [X = xk .Y ( x. y ) = ∂2F X.Y ( a.20) x∈R and y ∈ R. yl ) = p X ( xk ) p Y ( yl ) for all (2.5. For discrete random variables. Y = yl ] Conditional density function (2.Y ( x.Y p X..18) fY |X (y|x) = for all Two random variables are f X.22)  g (xk ) p X ( xk ) . f X.. y ) = f X ( x ) f Y ( y ) for all (2.Y ( x. y ) f X (x) (2. p X.15 Joint pdf can be obtained if they are jointly continuous b a F X..g..y ) ) ∂x∂y Joint pmf if they are jointly discrete f X.3 Moments Statistical quantities to represent some of the characteristics of a random variable. 2. b ) = −∞ (e.Y ( x.19 Figure 2.Y ( xk . − g (X) = = E [g (X)]   ∞ g (x) f −∞ k X ( x ) dx if continuous if discrete (2.

5: Stochastic Process 2. Y ) σX σY ρXY = 0.4 Received signal at an antenna as in Figure 2. 10 (2. distributions. a stochastic process is an indexed collection of random variables dened for each ω ∈ Ω.org/content/m10235/2.16. y ) dxdy ( xk .30) Example 2.26) • Correlation between two random variables RXY = XY ∗   ∞ −∞ =  k ∞ −∞ xy ∗ f X.Y ( x. where i= − √ ΦX (u) =eiuX −1 (2.25) = X −µX 2 • Characteristic function − for u ∈ R. Y ) − ∗ (2. CHAPTER 1: SIGNALS AND SYSTEMS • • Mean µX =X Second moment − (2. 10 This content is available online at <http://cnx.28) = (X − µX ) (Y − µY ) = RXY − µX µ∗ Y • Correlation coecient ρXY = Cov (X. yl ) if X and Y are jointly continuous (2.27) ∗ l xk yl p X.15/>.1 Denitions.24) Var (X) = = σ (X) 2 2 − (X − µX ) − 2 (2.29) Denition 2.20 CHAPTER 2. ∀t. and stationarity Given a sample space.4: Uncorrelated random variables Two random variables X and Y are uncorrelated if 2.23) − E X 2 =X 2 • Variance (2. .6.6 Introduction to Stochastic Processes Denition 2. t ∈ R : (Xt (ω)) (2.Y if X and Y are jointly discrete • Covariance CXY = Cov (X.

XtN (b1 . . b2 ) = P r [Xt1 ≤ b1 .e. . b2 .16 For a given First-order distribution t. . . Second-order distribution FXt1 .32) Nth-order distribution t1 ∈ R. . . . . Xt (ω) is a random variable with a distribution FXt (b) = P r [Xt ≤ b] = P r [{ ω ∈ Ω | Xt (ω) ≤ b }] (2......Xt2 . t2 ∈ R.34) FXt1 .Xt2 (b1 .. . bN ) Strictly stationary : A process is strictly stationary if it is N th order stationary for all N. Xt2 ≤ b2 ] for all (2..5 Xt = cos (2πf0 t + Θ (ω)) where is a random variable dened over f0 is the deterministic carrier frequency and Θ (ω) : Ω → R [−π. b2 . bN ) = P r [Xt1 ≤ b1 . . .. b2 .XtN +T (b1 ..33) N if (2. .6: First-order stationary process If FXt (b) is not a function of time then Xt is called a rst-order stationary process. b1 ∈ R. i. . . ...XtN (b1 . bN ) = FXt1 +T .Xt2 . Example 2..31) Denition 2.21 Figure 2.. π] and is assumed to be a uniform random variable.Xt2 +T . . . XtN ≤ bN ] N th-order stationary : A random process is stationary of order (2. b2 ∈ R FXt1 .

36) + π−2πf0 t 1 dθ arccos(b)−2πf0 t 2π (2.17 The second order stationarity can be determined by rst considering conditional densities and the joint density.37) fXt (x) = = d dx   1 1 − π arccos (x) √1 π 1−x2 if |x| ≤ 1 (2.40) . π]  0 otherwise FXt (b) = P r [Xt ≤ b] = P r [cos (2πf0 t + Θ) ≤ b] (2.38)  0 This process is stationary of order 1.22 CHAPTER 2.39) P r [ Xt2 ≤ b2 | Xt1 = x1 ] (2. otherwise Figure 2.35) FXt (b) = P r [−π ≤ 2πf0 t + Θ ≤ −arccos (b)] + P r [arccos (b) ≤ 2πf0 t + Θ ≤ π] FXt (b) = = (−arccos(b))−2πf0 t 1 2π dθ (−π)−2πf0 t 1 (2π − 2arccos (b)) 2π (2. CHAPTER 1: SIGNALS AND SYSTEMS fΘ (θ) =   1 2π if θ ∈ [−π. Recall that Xt = cos (2πf0 t + Θ) Then the relevant step is to nd (2.

a fair coin is tossed. Xt is stationary of order 1. then nT ≤ t < (n + 1) T . x2 ) = pXt2 |Xt1 (x2 |x1 ) pXt1 (x1 ) (2.43) t2 − t1 .44) t ∈ R.45) .Xt1 (b2 . then for nT ≤ t < (n + 1) T . Second order probability mass function pXt1 Xt2 (x1 .6 Every T Xt = −1 seconds. Xt = 1 for Example 2. If heads.23 Note that (Xt1 = x1 = cos (2πf0 t + Θ)) ⇒ (Θ = arccos (x1 ) − 2πf0 t) Xt2 = = cos (2πf0 t2 + arccos (x1 ) − 2πf0 t1 ) cos (2πf0 (t2 − t1 ) + arccos (x1 )) (2.18 b1 FXt2 . If tails.19 pXt (x) = for all    1 2 1 2 if if x=1 x = −1 (2. Figure 2.41) (2. b1 ) = Note that this is only a function of −∞ fXt1 (x1 ) P r [ Xt2 ≤ b2 | Xt1 = x1 ] dx 1 (2.42) Figure 2.

7: Mean The mean function of a random process Xt is dened as the expected value of Xt for all t's. (2. t1 ) = = E X X  t2 t1  ∞ ∞ x x f ( x2 .Xt1 ∞ ∞  xl xk p X .1 Second-order description Practical and incomplete statistics 11 Denition 2.50) Rule 2.8: Autocorrelation The autocorrelation function of the random process Xt is dened as RX (t2 . t1 ) 11 This = = E Xt2 Xt1 ∞ ∞ −∞ −∞ x2 x1 f Xt2 . xk ) if discrete k=−∞ l=−∞ t2 t1 (2. t2 < (n + 1) T (2. µXt = = E [X ]  t  ∞ xf Xt ( x ) dx if continuous −∞ ∞  k=−∞ xk p Xt ( xk ) if discrete (2.24 CHAPTER 2.1: Proof: If Xt is second-order stationary.47) with x1 and for all x2 when nT ≤ t1 < (n + 1) T and mT ≤ t2 < (m + 1) T n=m pXt2 Xt1 (x2 .49) Denition 2. x1 ) dx 1dx 2 if continuous −∞ −∞ 2 1 Xt2 . t1 ) only depends on t2 − t1 . t2 < (n + 1) T  Xt1 1   p (x ) p (x ) if n = mfor (nT ≤ t < (n + 1) T ) ∧ (mT ≤ t < (m + 1) T Xt1 1 Xt2 2 1 2 2.X ( xl . RX (t2 .51) content is available online at <http://cnx.7 Second-order Description of Stochastic Processes 2.48) p (x ) if x2 = x1 for nT ≤ t1 . x1 ) dx 2dx 1 (2.13/>.7.org/content/m10236/2.46) nT ≤ t1 < (n + 1) T and nT ≤ t2 < (n + 1) T for some pXt2 |Xt1 (x2 |x1 ) = pXt2 (x2 ) for all (2. x1 ) =   0   if x2 = x1 for nT ≤ t1 . CHAPTER 1: SIGNALS AND SYSTEMS The conditional pmf pXt2 |Xt1 when   0 (x2 |x1 ) =  1 if if x2 = x1 x2 = x1 n. then RX (t2 . .Xt1 ( x2 .

0) (2.25 RX (t2 .56) RX (t2 .8 Toss a fair coin every T seconds. the statistical characteristics can be captured by the pmf and the mean function is written as µX (t) = E [Xt ] = = 1/2 × −1 + 1/2 × 1 0 xk xl p Xt2 .54) 0 The autocorrelation function RX (t + τ. Since Xt is a discrete valued random process. then we will represent the autocorrelation with only one variable RX (τ ) = RX (t2 − t1 ) = RX (t2 . 2. xl ) (2.X0 ( x2 .53) Properties 1. t1 ) = = = 1 kk ll 1 × 1 × 1/2 − 1 × −1 × 1/2 . 3.7 Θ is uniformly distributed between 0 and 2π . Example 2.57) (2.55) cos (2πf0 (2t + τ ) + 2θ) 1 2π dθ Not a function of t since the second term in the right hand side of the equality in (2. t) = E Xt+τ Xt = E [cos (2πf0 (t + τ ) + Θ) cos (2πf0 t + Θ)] = = = 1/2E [cos (2πf0 τ )] + 1/2E [cos (2πf0 (2t + τ ) + 2Θ)] 1/2cos (2πf0 τ ) + 1/2 1/2cos (2πf0 τ ) 2π 0 (2. x1 ) dx 2dx 1 RX (t2 − t1 . t1 ) (2. The mean function µX (t) = = = = E [Xt ] E [cos (2πf0 t + Θ)] 2π 0 1 cos (2πf0 t + θ) 2π dθ (2.52) If RX (t2 . RX (0) ≥ 0 RX (τ ) = RX (−τ ) |RX (τ ) | ≤ RX (0) Xt = cos (2πf0 t + Θ (ω)) and Example 2.55) is zero.Xt1 ( xk . t1 ) τ = t2 − t1 depends on t2 − t1 only. t1 ) = = ∞ ∞ −∞ −∞ x2 x1 f Xt2 −t1 .

t1 ) − µX (t2 ) µY (t1 ) (2. Figure 2. . t1 ) = RXY (t2 .20). y ) dxdy (2. µX is constant and RX (t2 . t1 ) is only a function of t2 − t1 .Yt1 ( x.12: Jointly Wide Sense Stationary The random processes function of Xt and t2 − t1 only and Yt µX (t) are said to be jointly wide sense stationary if and RXY (t2 . The converse is not necessarily true.59) Denition 2. t1 ) =  0 otherwise t1 and (2. t1 ) = E Xt2 Yt1 = ∞ ∞ −∞ −∞ xyf Xt2 . t1 ) is a µY (t) are constant. t1 ) = = and nT ≤ t2 < (n + 1) T (2.26 CHAPTER 2.11: Crosscorrelation The crosscorrelation function of a pair of random processes is dened as RXY (t2 .58) 1 × 1 × 1/4 − 1 × −1 × 1/4 − 1 × 1 × 1/4 + 1 × −1 × 1/4 0 when nT ≤ t1 < (n + 1) T and mT ≤ t2 < (m + 1) T with n = m   1 if (nT ≤ t < (n + 1) T ) ∧ (nT ≤ t < (n + 1) T ) 1 2 RX (t2 . t1 ) − µX (t2 ) µX (t1 ) (2. t1 ) = E (Xt2 − µX (t2 )) Xt1 − µX (t1 ) = RX (t2 .10: Autocovariance CX (t2 .20 Denition 2.62) Denition 2.61) CXY (t2 .2: Xt is strictly stationary.9: Wide Sense Stationary A process is said to be wide sense stationary if A function of t2 . then it is wide sense stationary. Autocovariance of a random process is dened as Denition 2. CHAPTER 1: SIGNALS AND SYSTEMS when nT ≤ t1 < (n + 1) T RX (t2 . t) Two processes dened on one experiment (Figure 2. If Rule 2.60) The variance of Xt is Var (Xt ) = CX (t.

tN )      CX (tN . that is. . t1 ) . If a Gaussian process is WSS.9 X and Y are Gaussian and zero mean and independent. . Properties 1. . 12 This content is available online at <http://cnx.. and    ΣX =   CX (t1 . − φX (u) = eiuX = e − “ u2 2 2 σX ” (2. . . 1 (2π) (detΣX ) N 2 e−( 2 (x−µX ) ΣX −1 (x−µX )) (2. Z =X +Y is also Gaussian. . . .      µX (tN ) . Any linear processing of a Gaussian process results in a Gaussian process. .27 2. t1 ) ..64) for all u∈R φZ (u) − − “ “ − = eiu(X+Y ) = e = e u2 2 u2 2 2 σX ” e − “ u2 2 2 σY ” (2. . Xt2 . t1 ) is said to be a Gaussian process sampling of the process is a Gaussian random 1 T vector. .8.. Example 2.org/content/m10238/2.63) x ∈ Rn where    µX =   µX (t1 ) . 2. . then they are also statistically independent. then it is strictly stationary. . tN ) . Xt CX (tN .8 Gaussian Processes A process with mean if 12 2.65) ( 2 2 σX +σY ) ” therefore Z is also Gaussian. CX (t1 .1 Gaussian Random Processes Denition 2. 3. If two Gaussian processes are uncorrelated.13: Gaussian process any µX (t) and T X = (Xt1 .7/>. XtN ) fX (x) = for all covariance function formed by any 1 2 CX (t2 . The complete statistical properties of can be obtained from the second-order statistics.

. in addition. = fX(tN ) = fX (2. x2 . X(tN ) (x1 . and SX (ω) is at from zero up to some relatively high cuto frequency and then decays to zero above that. for Pick a set of times {t1 .X(t2 ).68) However.d. i.9.d. . . ..67) = PX e−(iω0) = PX Hence X is white. CHAPTER 1: SIGNALS AND SYSTEMS 2.66) τ = 0. it is a White Noise Process if its ACF is a (2. X (tN ) are jointly independent. t2 . X (ti ) and X (tj ) ti and tj .1 White Noise delta function at 13 If we have a zero-mean Wide Sense Stationary process X. . it is very useful as a conceptual entity and as an approximation to 'nearly white' processes which have nite bandwidth.i.9. but which are 'white' over all frequencies of practical interest. Strictly White Noise Process. xN ) = i=1 fX(ti ) (xi ) (2.2 Strict Whiteness and i. their joint pdf is given by If. . i. are jointly independent. .. fX(t1 ) = fX(t2 ) = .69) and the marginal pdfs are identical. . . . (2.. . PX is the PSD of But: Power of X = 1 2π ∞ −∞ SX (ω) dω = ∞ so the White Noise Process is unrealizable in practice. because of its innite bandwidth.. i. we have a Independent and Identically Distributed (i. tN } with N nite. fX is a pdf with zero mean. For 'nearly rXX (τ ) is a narrow pulse of non-zero width. it is of the form: rXX (τ ) = PX δ (τ ) where PX is a constant.9 White and Coloured Processes 2. . t2 .e. X at all frequencies.70) then the process is termed If. Processes Usually the above concept of whiteness is sucient.e.i. . white' processes. but a much stronger denition is as follows: any choice of {t1 . . since it contains equal power at all frequencies. tN } to sample X (t). N fX(t1 ).d). the random variables X (t1 ). .i. process is 'white' because the variables separated by an innitesimally small interval between 13 This content is available online at <http://cnx. X (t2 ).e.org/content/m11105/2. .28 CHAPTER 2.4/>. The PSD of X is then given by SX (ω) = PX δ (τ ) e−(iωτ ) dτ (2. 2. . as in white light. even when An i.

3 Additive White Gaussian Noise (AWGN) In many systems the concept of Additive White Gaussian Noise (AWGN) is used. (9) <http://cnx. Using this equation from our discussion of Spectral Properties of Random Signals and (2.it is a pair of delta functions at not necessary (especially for 'nearly + (V ) −V . and hence is a nearly white process. the coloured waveforms were produced by passing uncorrelated random noise samples (white up to half the sampling frequency) through half-sine lters (as in this equation 18 from our discussion of Random Signals) of length Tb = 10 and 50 samples respectively. (1) <http://cnx. SY (ω) need only be constant (white) over the passband of the lter. as can be seen in part c of this gure from previous discussion in which the upper PSD is atter than the lower PSD. (11) <http://cnx. 14 Y (t) with PSD SY (ω) simply by passing white (or nearly white) noise X (t) with PSD PX through a lter with frequency response H (ω). Figure 1(c) <http://cnx. This simply means a process which has a Gaussian pdf.9.71) Hence if we design the lter such that |H (ω) | = then SY (ω) PX (2. In these cases. and a very high speed random bit stream has an ACF which is approximately a delta function.73) = PX δ (τ ) ∗ h (−τ ) ∗ h (τ ) PX h (−τ ) ∗ h (τ ) where h (τ ) is the impulse response of the lter.29 2.72) Y (t) will have the required coloured PSD. For this to work. is often known as a We may obtain coloured noise coloured noise process. Note that although 'white' and Gaussian' often go together. such that from this equation from our discussion of Spectral Properties of Random Signals.9.4 Coloured Processes A random process whose PSD is not white or nearly white.66). E. this is white' processes). SY (ω) = SX (ω) (|H (ω) |) = PX (|H (ω) |) 2 2 (2.org/content/m10989/latest/#eq9> .org/content/m11104/latest/#gure1> 17 "Spectral Properties of Random Signals". although the upper wave17 form is more 'nearly white' than the lower one. 2. a white PSD. the ACF of the coloured noise is given by rY Y (τ ) = = rXX (τ ) ∗ h (−τ ) ∗ h (τ ) (2. and is linearly added to whatever signal we are analysing.org/content/m11104/latest/#eq27> 15 "Spectral Properties of Random Signals". the two voltage levels of the bit stream.g. Conversely a nearly white Gaussian process which has been passed through a lowpass lter (see next section) will still have a Gaussian pdf (as it is a summation of Gaussians) but will no longer be white. 16 This Figure from previous discussion shows two examples of coloured noise. but its pdf is clearly not Gaussian .org/content/m11104/latest/#gure1c> 18 "Random Signals".org/content/m11104/latest/#eq25> 16 "Spectral Properties of Random Signals". Figure 1 <http://cnx. so a nearly 15 white process which satises this criterion is quite satisfactory and realizable. 14 "Spectral Properties of Random Signals".

2.10 Transmission of Stationary Process Through a Linear Filter

Integration:

    Z = ∫ₐᵇ X_t dt   (2.74)

Linear processing:

    Y_t = ∫ h(t, τ) X_τ dτ   (2.75)

Differentiation:

    X'_t = d(X_t)/dt   (2.76)

Properties

1. For the integral Z, the mean and mean square are

    E[Z] = ∫ₐᵇ μ_X(t) dt,  E[Z²] = ∫ₐᵇ ∫ₐᵇ R_X(t₂, t₁) dt₁ dt₂

2. For linear processing,

    μ_Y(t) = ∫ h(t, τ) μ_X(τ) dτ   (2.77)

If X_t is wide sense stationary and the linear system is time invariant,

    μ_Y(t) = ∫ h(t − τ) μ_X dτ = μ_X ∫ h(t') dt'   (2.78)

which is constant, and

    R_YX(t₂, t₁) = E[Y_{t₂} X_{t₁}] = ∫ h(t₂ − τ) R_X(τ − t₁) dτ   (2.79)

With the change of variable τ' = τ − t₁,

    R_YX(t₂, t₁) = ∫ h(t₂ − t₁ − τ') R_X(τ') dτ' = h * R_X (t₂ − t₁)   (2.80)

so R_YX depends only on t₂ − t₁. Similarly,

    R_Y(t₂, t₁) = E[Y_{t₂} Y_{t₁}] = ∫ h(t₁ − τ) R_YX(t₂ − τ) dτ = h̃ * R_YX (t₂ − t₁) = R_Y(t₂ − t₁)   (2.81)-(2.83)

where h̃(τ) = h(−τ). Therefore Y_t is WSS if X_t is WSS and the linear system is time-invariant.

Example 2.10
X_t is a wide sense stationary process with μ_X = 0 and R_X(τ) = (N₀/2) δ(τ); X_t is called a white process. Consider this random process going through a filter with impulse response h(t) = e^{−at} u(t). The output process Y_t has

    R_Y(τ) = (N₀/2) ∫ h(α) h(α − τ) dα = (N₀/(4a)) e^{−a|τ|}

for τ ∈ R. Y_t is a Markov process.

Definition 2.14: Power Spectral Density
The power spectral density function of a wide sense stationary (WSS) process X_t is defined to be the Fourier transform of the autocorrelation function of X_t,

    S_X(f) = ∫ R_X(τ) e^{−i2πfτ} dτ   (2.84)

if X_t is WSS with autocorrelation function R_X(τ). X_t is called a white process if S_X(f) is constant for all f.

Properties

1. S_X(f) = S_X(−f), since R_X is even and real.
2. Var(X_t) = R_X(0) = ∫ S_X(f) df.
3. S_X(f) is real and nonnegative: S_X(f) ≥ 0 for all f.

If Y_t = ∫ h(t − τ) X_τ dτ, then

    S_Y(f) = F(R_Y(τ)) = F(h̃ * h * R_X(τ)) = H̃(f) H(f) S_X(f) = |H(f)|² S_X(f)   (2.85)

since H̃(f) = ∫ h(−t) e^{−i2πft} dt is the complex conjugate of H(f).

Example 2.11
X_t is a white process with R_X(τ) = (N₀/2) δ(τ) and h(t) = e^{−at} u(t). Then

    H(f) = 1 / (a + i2πf)   (2.86)

and

    S_Y(f) = (N₀/2) / (a² + 4π²f²)   (2.87)
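A quick numerical sketch of Example 2.11 follows; it is not from the original module, and it assumes N₀/2 = 1, a = 50 rad/s, and a sampling interval of 1 ms. The periodogram of the filtered noise should match the Lorentzian PSD (2.87) at frequencies well below Nyquist:

    import numpy as np

    rng = np.random.default_rng(1)
    dt, a, N = 1e-3, 50.0, 2 ** 18
    x = rng.standard_normal(N) / np.sqrt(dt)       # white noise with flat PSD = 1
    t = np.arange(0, 0.2, dt)
    h = np.exp(-a * t)                             # impulse response e^{-at}u(t)
    y = np.convolve(x, h, mode="same") * dt        # continuous-time convolution approx

    f = np.fft.rfftfreq(N, dt)
    Sy_est = np.abs(np.fft.rfft(y)) ** 2 * dt / N  # periodogram estimate of S_Y(f)
    Sy_th = 1.0 / (a ** 2 + 4 * np.pi ** 2 * f ** 2)
    print(np.mean(Sy_est[1:10_000] / Sy_th[1:10_000]))  # close to 1 at low f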

Chapter 3
Chapter 2: Source Coding

3.1 Information Theory and Coding

In the previous chapters, we considered the problem of digital transmission over different channels. Information sources are not often digital, and in fact, many sources are analog. Although many channels are also analog, it is still more efficient to convert analog sources into digital data and transmit over analog channels using digital transmission techniques.

There are two reasons why digital transmission could be more efficient and more reliable than analog transmission:

1. Analog sources could be compressed to digital form efficiently.
2. Digital data can be transmitted over noisy channels reliably.

There are several key questions that need to be addressed:

1. How can one model information?
2. How can one quantify information?
3. If information can be measured, does its information quantity relate to how much it can be compressed?
4. Is it possible to determine if a particular channel can handle transmission of a source with a particular information quantity?

Example 3.1
The information content of the following sentences is not the same: "Hello, hello, hello." and "There is an exam today." Clearly the second one carries more information. The first one can be compressed to "Hello" without much loss of information.

In other modules, we will quantify information and find efficient representations of information (Entropy (Section 3.2)). We will also quantify how much information can be transmitted through channels (Section 6.1). Channel coding (Section 6.5) can be used to reduce information rate and increase reliability.

3.2 Entropy

Information sources take very different forms. Since the information is not known to the destination, it is best modeled as a random process, discrete-time or continuous-time. Here are a few examples:

• A digital data source (e.g. a text) can be modeled as a discrete-time and discrete-valued random process X₁, X₂, ..., where Xᵢ ∈ {A, B, C, D, E, ...}, with a particular pX₁(x), pX₂(x), ..., and a specific pX₁X₂, pX₂X₃, ..., etc.
• Video signals can be modeled as a continuous-time random process. The power spectral density is bandlimited to around 5 MHz (the value depends on the standards used to raster the frames of image).
• Audio signals can be modeled as a continuous-time random process. It has been demonstrated that the power spectral density of speech signals is bandlimited between 300 Hz and 3400 Hz. For example, the speech signal can be modeled as a Gaussian process with the power spectral density shown in Figure 3.2 over a small observation period.

These analog information signals are bandlimited. Therefore, if sampled faster than the Nyquist rate, they can be reconstructed from their sample values.

Example 3.2
A speech signal with bandwidth of 3100 Hz can be sampled at the rate of 6.2 kHz. If the samples are quantized with an 8-level quantizer, then the speech signal can be represented with a binary sequence with the rate of

    6.2 × 10³ × log₂8 = 18600 bits/sec = 18.6 kbits/sec   (3.1)

Figure 3.3: The sampled real values can be quantized to create a discrete-time discrete-valued random process.

Entropy is a measure of uncertainty in a random variable and a measure of the information it can reveal. Since any bandlimited analog information signal can be converted to a sequence of discrete random variables, we will continue the discussion only for discrete random variables.

Example 3.3
The random variable x takes the value of 0 with probability 0.9 and the value of 1 with probability 0.1. The statement that x = 1 carries more information than the statement that x = 0. The reason is that x is expected to be 0; therefore, knowing that x = 1 is more surprising news!! An intuitive definition of an information measure should be larger when the probability is small.

Example 3.4
The information content in the statement about the temperature and pollution level on July 15th in Chicago should be the sum of the information that July 15th in Chicago was hot and highly polluted, since pollution and temperature could be independent:

    I(hot, high) = I(hot) + I(high)   (3.2)

An intuitive and meaningful measure of information should have the following properties:

1. Self information should decrease with increasing probability.
2. Self information of two independent events should be their sum.
3. Self information should be a continuous function of the probability.

The only function satisfying the above conditions is the -log of the probability.

Definition 3.1: Entropy
The entropy (average self information) of a discrete random variable X is a function of its probability mass function and is defined as

    H(X) = − Σ_{i=1}^{N} p_X(xᵢ) log p_X(xᵢ)   (3.3)

where N is the number of possible values of X and p_X(xᵢ) = Pr[X = xᵢ]. If log is base 2, then the unit of entropy is bits. A more basic explanation of entropy is provided in another module.

Example 3.5
If a source produces binary information {0, 1} with probabilities p and 1 − p, the entropy of the source is

    H(X) = −p log₂p − (1 − p) log₂(1 − p)   (3.4)

If p = 0 then H(X) = 0, if p = 1 then H(X) = 0, and if p = 1/2 then H(X) = 1 bit. The source has its largest entropy if p = 1/2, and the source provides no new information if p = 0 or p = 1.
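A small sketch of Definition 3.1, not part of the original module, computes the entropy of a probability mass function in bits:

    import numpy as np

    def entropy(pmf):
        """H(X) = -sum_i p_i log2 p_i, ignoring zero-probability outcomes."""
        p = np.asarray(pmf, dtype=float)
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    print(entropy([0.5, 0.5]))        # 1.0 bit, the binary maximum
    print(entropy([0.9, 0.1]))        # about 0.469 bits, Example 3.3's source
    print(entropy([0.5, 0.25, 0.125, 0.0625, 0.0625]))  # 15/8 bits, Example 3.6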

Definition 3.2: Joint Entropy
The joint entropy of two discrete random variables (X, Y) is defined by

    H(X, Y) = − Σᵢ Σⱼ p_{X,Y}(xᵢ, yⱼ) log p_{X,Y}(xᵢ, yⱼ)   (3.5)

The joint entropy for a random vector X = (X₁, X₂, ..., X_n)ᵀ is defined as

    H(X) = − Σ_{x₁} Σ_{x₂} ··· Σ_{x_n} p_X(x₁, x₂, ..., x_n) log p_X(x₁, x₂, ..., x_n)   (3.6)

Definition 3.3: Conditional Entropy
The conditional entropy of the random variable X given the random variable Y is defined by

    H(X|Y) = − Σᵢ Σⱼ p_{X,Y}(xᵢ, yⱼ) log p_{X|Y}(xᵢ|yⱼ)   (3.7)

Example 3.6
An analog source is modeled as a continuous-time random process with power spectral density bandlimited to the band between 0 and 4000 Hz. The signal is sampled at the Nyquist rate, so there are 8000 samples per second. The samples are quantized to 5 levels {−2, −1, 0, 1, 2}. The probabilities of the samples taking the quantized values are {1/2, 1/4, 1/8, 1/16, 1/16}, respectively. The sequence of random variables, as a result of sampling, is assumed to be independent. The entropy of each sample is

    H(X) = −(1/2)log₂(1/2) − (1/4)log₂(1/4) − (1/8)log₂(1/8) − (1/16)log₂(1/16) − (1/16)log₂(1/16)
         = 1/2 + 2/4 + 3/8 + 4/16 + 4/16
         = 15/8 bits/sample   (3.8)

Therefore, the source produces 8000 × 15/8 = 15000 bits/sec of information.

It is easy to show that

    H(X) = H(X₁) + H(X₂|X₁) + ··· + H(X_n|X₁X₂...X_{n−1})   (3.9)

and

    H(X, Y) = H(Y) + H(X|Y) = H(X) + H(Y|X)   (3.10)

If X₁, X₂, ..., X_n are mutually independent, it is easy to show that

    H(X) = Σ_{i=1}^{n} H(Xᵢ)   (3.11)

Definition 3.4: Entropy Rate
The entropy rate of a stationary discrete-time random process is defined by

    H = lim_{n→∞} H(X_n|X₁X₂...X_{n−1})   (3.12)

The limit exists and is equal to

    H = lim_{n→∞} (1/n) H(X₁, X₂, ..., X_n)   (3.13)

The entropy rate is a measure of the uncertainty of the information content per output symbol of the source.

Entropy is closely tied to source coding (Section 3.3). The extent to which a source can be compressed is related to its entropy. In 1948, Claude E. Shannon introduced three theorems and developed very rigorous mathematics for digital communications. In one of the three theorems, Shannon relates entropy to the minimum number of bits per second required to represent a source without much loss (or distortion).

3.3 Source Coding

As mentioned earlier, how much a source can be compressed should be related to its entropy (Section 3.2). In 1948, Claude E. Shannon introduced a theorem which related the entropy to the number of bits per second required to represent a source without much loss.

Consider a source that is modeled by a discrete-time and discrete-valued random process X₁, X₂, ..., X_n, ..., where xᵢ ∈ {a₁, a₂, ..., a_N}, and define p_{Xᵢ}(xᵢ = aⱼ) = pⱼ for j = 1, 2, ..., N, where it is assumed that X₁, X₂, ..., X_n are mutually independent and identically distributed.

Consider a sequence of length n,

    X = (X₁, X₂, ..., X_n)ᵀ   (3.14)

    P(X = x) = pX₁(x₁) pX₂(x₂) ··· pX_n(x_n)   (3.15)

Therefore, on the average, in a sequence of length n with n very large, the symbol a₁ will appear np₁ times with high probability if p₁ = P(Xⱼ = a₁), and similarly for the other symbols.

A typical sequence X may look like

    X = (a₂, a₁, a_N, a₂, a₅, ..., a₁, a_N, a₆)ᵀ   (3.16)

where aᵢ appears npᵢ times with large probability, with pᵢ = P(Xⱼ = aᵢ) for all j and for all i. This is referred to as a typical sequence. The probability of X being a typical sequence is

    P(X = x) ≈ Π_{i=1}^{N} pᵢ^{npᵢ} = Π_{i=1}^{N} 2^{log₂ pᵢ^{npᵢ}} = Π_{i=1}^{N} 2^{npᵢ log₂ pᵢ} = 2^{n Σ_{i=1}^{N} pᵢ log₂ pᵢ} = 2^{−nH(X)}   (3.17)

where H(X) is the entropy of the random variables X₁, X₂, ..., X_n.

For large n, almost all the output sequences of length n of the source are equally probable, with probability approximately 2^{−nH(X)}. These are typical sequences. The probability of nontypical sequences is negligible. There are Nⁿ different sequences of length n with an alphabet of size N, and the probability of the typical sequences is almost 1:

    Σ_{k=1}^{(number of typical sequences)} 2^{−nH(X)} = 1   (3.18)

so the number of typical sequences is approximately 2^{nH(X)}   (3.19).

The essence of source coding or data compression is that, as n → ∞, one only needs to be able to represent typical sequences as binary codes and ignore nontypical sequences. Since there are only 2^{nH(X)} typical sequences of length n, it takes nH(X) bits to represent them on the average. On the average it takes H(X) bits per source output to represent a simple source that produces independent and identically distributed outputs.

Theorem 3.1: Shannon's Source-Coding
A source that produces independent and identically distributed random variables with entropy H can be encoded with arbitrarily small error probability at any rate R in bits per source output if R ≥ H. Conversely, if R < H, the error probability will be bounded away from zero, independent of the complexity of the coder and decoder.

The probability of a source output being in the set of typical sequences approaches 1 as n → ∞. The probability of a source output being in the set of nontypical sequences approaches 0 as n → ∞. Indeed, these definitions and arguments are valid when n is very large; as n → ∞, nontypical sequences essentially never appear as the output of the source.

Example 3.7
Consider a source with alphabet {A, B, C, D} with probabilities {1/2, 1/4, 1/8, 1/8}. Assume X₁, X₂, ..., X₈ is an independent and identically distributed sequence with Xᵢ ∈ {A, B, C, D} with the above probabilities.

    H(X) = −(1/2)log₂(1/2) − (1/4)log₂(1/4) − (1/8)log₂(1/8) − (1/8)log₂(1/8)
         = 1/2 + 2/4 + 3/8 + 3/8
         = 14/8   (3.20)

The number of typical sequences of length 8 is

    2^{8 × 14/8} = 2^{14}   (3.21)

The number of nontypical sequences is 4⁸ − 2^{14} = 2^{16} − 2^{14} = 2^{14}(4 − 1) = 3 × 2^{14}.

Examples of typical sequences include those with A appearing 8 × 1/2 = 4 times, B appearing 8 × 1/4 = 2 times, and C and D appearing once each, and much more. Examples of nontypical sequences are those dominated by the low-probability symbols, e.g. sequences consisting mostly of C's and D's, and much more.

The source coding theorem proves the existence of source coding techniques that achieve rates close to the entropy, but it does not provide any algorithms or ways to construct such codes.

If the source is not i.i.d. (independent and identically distributed), but is stationary with memory, then a similar theorem applies with the entropy H(X) replaced with the entropy rate H = lim_{n→∞} H(X_n|X₁X₂...X_{n−1}). In the case of a source with memory, the more the source produces outputs, the more one knows about the source and the more one can compress.

Example 3.8
The English language has 26 letters; with space it becomes an alphabet of size 27. If modeled as a memoryless source (no dependency between letters in a word), then the entropy is H(X) = 4.03 bits/letter. If the dependency between letters in a text is captured in a model, the entropy rate can be derived to be H = 1.3 bits/letter. Note that a non-information-theoretic representation of a text may require 5 bits/letter, since 2⁵ is the closest power of 2 to 27. Shannon's results indicate that there may be a compression algorithm with the rate of 1.3 bits/letter.

Although Shannon's results are not constructive, there are a number of source coding algorithms for discrete-time discrete-valued sources that come close to Shannon's bound. One such algorithm is the Huffman source coding algorithm (Section 3.4). Another is the Lempel and Ziv algorithm. Huffman codes and Lempel-Ziv apply to compression problems where the source produces discrete-time and discrete-valued outputs. For cases where the source is analog, there are powerful compression algorithms that specify all the steps from sampling, quantization, and binary representation. These are referred to as waveform coders. JPEG, MPEG, and vocoders are a few examples for image, video, and voice.

3.4 Huffman Coding

One particular source coding (Section 3.3) algorithm is the Huffman encoding algorithm. It is a source coding algorithm which approaches, and sometimes achieves, Shannon's bound for source compression. A brief discussion of the algorithm is also given in another module.

3.4.1 Huffman encoding algorithm

1. Sort source outputs in decreasing order of their probabilities.
2. Merge the two least-probable outputs into a single output whose probability is the sum of the corresponding probabilities.
3. If the number of remaining outputs is more than 2, then go to step 1.
4. Arbitrarily assign 0 and 1 as codewords for the two remaining outputs.
5. If an output is the result of the merger of two outputs in a preceding step, append the current codeword with a 0 and a 1 to obtain the codewords for the preceding outputs and repeat step 5. If no output is preceded by another output in a preceding step, then stop. (A small implementation sketch is given below.)
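The following compact sketch of the procedure above is not from the original module; it uses a heap of (probability, subtree) pairs, which realizes the repeated merging of the two least-probable outputs:

    import heapq
    import itertools

    def huffman(pmf):
        """pmf: dict symbol -> probability. Returns dict symbol -> codeword."""
        tie = itertools.count()               # breaks ties between equal probabilities
        heap = [(p, next(tie), {s: ""}) for s, p in pmf.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p0, _, c0 = heapq.heappop(heap)   # two least-probable outputs
            p1, _, c1 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c0.items()}
            merged.update({s: "1" + w for s, w in c1.items()})
            heapq.heappush(heap, (p0 + p1, next(tie), merged))
        return heap[0][2]

    pmf = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}
    code = huffman(pmf)
    print(code)                               # e.g. {'A': '0', 'B': '10', ...}
    print(sum(pmf[s] * len(w) for s, w in code.items()))  # average length 14/8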

Example 3.9
X ∈ {A, B, C, D} with probabilities {1/2, 1/4, 1/8, 1/8}. Figure 3.6 shows the successive mergers and the resulting codewords.

    Average length = (1/2)·1 + (1/4)·2 + (1/8)·3 + (1/8)·3 = 14/8

As you may recall, the entropy of the source was also H(X) = 14/8 bits. In this case, the Huffman code achieves the lower bound of 14/8 bits per output.

In general, we can define the average code length as

    ℓ̄ = Σ_{x∈X} p_X(x) ℓ(x)   (3.22)

where X is the set of possible values of x. It is not very hard to show that

    H(X) ≤ ℓ̄ < H(X) + 1   (3.23)

For compressing single source outputs at a time, Huffman codes provide nearly optimum code lengths.

The drawbacks of Huffman coding:

1. Codes are variable length.
2. The algorithm requires the knowledge of the probabilities, p_X(x) for all x ∈ X.

Another powerful source coder that does not have the above shortcomings is Lempel and Ziv.

Chapter 4
Chapter 3: Communication over AWGN Channels

4.1 Data Transmission and Reception

We will develop the idea of data transmission by first considering simple channels. In additional modules, we will consider more practical channels: baseband channels with bandwidth constraints, and passband channels.

Figure 4.1: Simple additive white Gaussian channel. X_t carries the data; N_t is a white Gaussian random process.

The concept of using different types of modulation for transmission of data is introduced in the module Signalling (Section 4.2). The problem of demodulation and detection of signals is discussed in Demodulation and Detection (Section 4.4).

4.2 Signalling

Example 4.1
Pulse amplitude modulation (PAM): data symbols are "1" or "0" and the data rate is 1/T Hertz (Figures 4.2-4.3).

Example 4.2
Pulse position modulation: data symbols are "1" or "0" and the data rate is 2/T Hertz (Figure 4.4). This strategy is an alternative to PAM with half the period, T/2.

Relevant measures are the energy of the modulated signals,

    E_m = ∫₀ᵀ s_m²(t) dt,  for all m ∈ {1, 2, ..., M}   (4.1)

and how different the signals are in terms of their inner products,

    <s_m, s_n> = ∫₀ᵀ s_m(t) s_n*(t) dt,  for m, n ∈ {1, 2, ..., M}   (4.2)

Definition 4.1: antipodal
Signals s₁(t) and s₂(t) are antipodal if s₂(t) = −s₁(t) for all t ∈ [0, T].

Definition 4.2: orthogonal
Signals s₁(t), s₂(t), ..., s_M(t) are orthogonal if <s_m, s_n> = 0 for m ≠ n.

Definition 4.3: biorthogonal
Signals s₁(t), s₂(t), ..., s_M(t) are biorthogonal if s₁(t), ..., s_{M/2}(t) are orthogonal and s_m(t) = −s_{M/2+m}(t) for some m ∈ {1, 2, ..., M/2}.

It is quite intuitive to expect that the smaller (the more negative) the inner products, the better the signal set.

Definition 4.4: Simplex signals
Let {s₁(t), s₂(t), ..., s_M(t)} be a set of orthogonal signals with equal energy. The signals s̃₁(t), ..., s̃_M(t) are simplex signals if

    s̃_m(t) = s_m(t) − (1/M) Σ_{k=1}^{M} s_k(t)   (4.3)

If the energy of the orthogonal signals is denoted by E_s = ∫₀ᵀ s_m²(t) dt for all m, then the energy of the simplex signals is

    Ẽ_s = (1 − 1/M) E_s   (4.4)

and, for all m ≠ n,

    <s̃_m, s̃_n> = −(1/(M−1)) Ẽ_s   (4.5)

It is conjectured that among all possible M-ary signal sets with equal energy, the simplex signal set results in the smallest probability of error when used to transmit information through an additive white Gaussian noise channel.

4.3 Geometric Representation of Modulation Signals

Geometric representation of signals can provide a compact characterization of signals and can simplify analysis of their performance as modulation signals. Once signals have been modulated, the receiver must detect and demodulate (Section 4.4) the signals despite interference and noise, and decide which of the set of possible transmitted signals was sent.

Orthonormal bases are essential in geometry. Let {s₁(t), s₂(t), ..., s_M(t)} be a set of signals.

Define ψ₁(t) = s₁(t)/√E₁, where E₁ = ∫₀ᵀ s₁²(t) dt.

Define s₂₁ = <s₂, ψ₁> = ∫₀ᵀ s₂(t) ψ₁(t) dt and ψ₂(t) = (1/√Ê₂)(s₂(t) − s₂₁ψ₁(t)), where Ê₂ = ∫₀ᵀ (s₂(t) − s₂₁ψ₁(t))² dt.

In general,

    ψ_k(t) = (1/√Ê_k) ( s_k(t) − Σ_{j=1}^{k−1} s_kj ψ_j(t) )   (4.7)

where Ê_k = ∫₀ᵀ ( s_k(t) − Σ_{j=1}^{k−1} s_kj ψ_j(t) )² dt. The process continues until all of the M signals are exhausted. The results are N orthogonal signals with unit energy, {ψ₁(t), ψ₂(t), ..., ψ_N(t)}, where N ≤ M. If the signals {s₁(t), ..., s_M(t)} are linearly independent, then N = M. (A small numerical sketch of this orthonormalization appears at the end of this section.)

The M signals can be represented as

    s_m(t) = Σ_{n=1}^{N} s_mn ψ_n(t)   (4.8)

with E_m = Σ_{n=1}^{N} s_mn². The signals can therefore be represented by the vectors

    s_m = (s_m1, s_m2, ..., s_mN)ᵀ,  where s_mn = <s_m, ψ_n>, m ∈ {1, 2, ..., M}

Example 4.3
(Figure 4.6) For the antipodal pair with amplitude A on [0, T]: ψ₁(t) = s₁(t)/√(A²T), s₁₁ = A√T, s₂₁ = −A√T, and

    ψ₂(t) = (1/√Ê₂)(s₂(t) − s₂₁ψ₁(t)) = (1/√Ê₂)(−A + A) = 0   (4.10)-(4.12)

so the signal set is one-dimensional.

Example 4.4
(Figure 4.7) The dimension of the signal set is 1, with E₁ = s₁₁² and E₂ = s₂₁².

Example 4.5
(Figure 4.8) Set of 4 equal-energy biorthogonal signals: s₁(t) = s(t), s₂(t) = s⊥(t), s₃(t) = −s(t), s₄(t) = −s⊥(t). The orthonormal basis is ψ₁(t) = s(t)/√E_s, ψ₂(t) = s⊥(t)/√E_s, where E_s = ∫₀ᵀ s_m²(t) dt. The four signals can be geometrically represented using the vectors of projection coefficients

    s₁ = (√E_s, 0)ᵀ, s₂ = (0, √E_s)ᵀ, s₃ = (−√E_s, 0)ᵀ, s₄ = (0, −√E_s)ᵀ

as a set of constellation points (Figure 4.9: signal constellation). The Euclidean distance between signals is

    d_mn = |s_m − s_n| = √( Σ_{j=1}^{N} (s_mj − s_nj)² )   (4.13)

    d₂₁ = |s₂ − s₁| = √(2E_s) = d₂₃ = d₃₄ = d₁₄   (4.14)

    d₁₃ = |s₁ − s₃| = 2√E_s = d₂₄   (4.15)

and the minimum distance is d_min = √(2E_s)   (4.16)-(4.17).

4.4 Demodulation and Detection

Consider the problem where a signal set {s₁, s₂, ..., s_M} is used during the interval 0 ≤ t ≤ T to transmit log₂M bits. The modulated signal X_t could be any of {s₁, s₂, ..., s_M} for t ∈ [0, T].
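The Gram-Schmidt orthonormalization of (4.7) can be sketched numerically; this is a minimal illustration, not from the original module, with signals sampled on a uniform grid of spacing dt and a hypothetical example signal set:

    import numpy as np

    def gram_schmidt(S, dt=1.0, tol=1e-12):
        basis = []
        for s in S:
            r = s.astype(float).copy()
            for psi in basis:
                r -= np.sum(r * psi) * dt * psi   # subtract projection s_kj psi_j
            e = np.sum(r ** 2) * dt               # residual energy E_k
            if e > tol:                           # skip linearly dependent signals
                basis.append(r / np.sqrt(e))
        return np.array(basis)

    S = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0, 0.0],
                  [1.0, 0.0, -1.0, 0.0]])         # third signal = first - second
    Psi = gram_schmidt(S)
    print(Psi @ Psi.T)                            # approximately the identity: N = 2 < M = 3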

Figure 4.10: r_t = X_t + N_t = s_m(t) + N_t for 0 ≤ t ≤ T, for some m ∈ {1, 2, ..., M}.

The problem of demodulation and detection is to observe r_t for 0 ≤ t ≤ T and decide which one of the M signals was transmitted. Demodulation is covered here (Section 4.5). A discussion about detection can be found here (Section 4.6).

4.5 Demodulation

4.5.1 Demodulation

Convert the continuous-time received signal into a vector without loss of information (or performance):

    r_t = s_m(t) + N_t = Σ_{n=1}^{N} s_mn ψ_n(t) + N_t   (4.18)

Recall that s_m(t) = Σ_{n=1}^{N} s_mn ψ_n(t) for m ∈ {1, 2, ..., M}; the signals are decomposed into a set of orthonormal signals. The noise process can also be decomposed,

    N_t = Σ_{n=1}^{N} η_n ψ_n(t) + Ñ_t   (4.19)

where η_n = ∫₀ᵀ N_t ψ_n(t) dt is the projection onto the nth basis signal and Ñ_t is the leftover noise. Then

    r_t = Σ_{n=1}^{N} (s_mn + η_n) ψ_n(t) + Ñ_t = Σ_{n=1}^{N} r_n ψ_n(t) + Ñ_t   (4.20)-(4.22)

Proposition 4.1
The noise projection coefficients η_n are zero-mean, Gaussian random variables and are mutually independent if N_t is a white Gaussian process.

Proof:

    μ_η(n) = E[η_n] = E[ ∫₀ᵀ N_t ψ_n(t) dt ] = ∫₀ᵀ E[N_t] ψ_n(t) dt = 0   (4.23)-(4.24)

    E[η_k η_n] = E[ ∫₀ᵀ N_t ψ_k(t) dt ∫₀ᵀ N_{t'} ψ_n(t') dt' ] = ∫₀ᵀ ∫₀ᵀ R_N(t − t') ψ_k(t) ψ_n(t') dt dt'   (4.25)-(4.26)

    = ∫₀ᵀ ∫₀ᵀ (N₀/2) δ(t − t') ψ_k(t) ψ_n(t') dt dt' = (N₀/2) ∫₀ᵀ ψ_k(t) ψ_n(t) dt = (N₀/2) δ_kn   (4.27)-(4.28)

That is, R_η(k, n) = (N₀/2) δ_kn: the value is N₀/2 if k = n and 0 if k ≠ n. Therefore the η_k's are uncorrelated, and since they are Gaussian, they are also independent. Given s_m(t) was transmitted, r_n = s_mn + η_n with

    μ_r(n) = E[s_mn + η_n] = s_mn,  Var(r_n) = Var(η_n) = N₀/2   (4.29)-(4.30)

Proposition 4.2
The r_n's, the projections of the received signal r_t onto the orthonormal bases ψ_n(t), are independent of the residual noise process Ñ_t. The residual noise Ñ_t is irrelevant to the decision process on r_t.

Proof: Recall Ñ_t = N_t − Σ_{k=1}^{N} η_k ψ_k(t). The correlation between Ñ_t and r_n is

    E[Ñ_t r_n] = E[ (N_t − Σ_{k=1}^{N} η_k ψ_k(t)) (s_mn + η_n) ]   (4.31)

    = E[N_t] s_mn + E[N_t η_n] − Σ_{k=1}^{N} E[η_k η_n] ψ_k(t) − Σ_{k=1}^{N} E[η_k] ψ_k(t) s_mn   (4.32)

    = E[ N_t ∫₀ᵀ N_{t'} ψ_n(t') dt' ] − Σ_{k=1}^{N} (N₀/2) δ_kn ψ_k(t)   (4.33)

    = ∫₀ᵀ (N₀/2) δ(t − t') ψ_n(t') dt' − (N₀/2) ψ_n(t) = (N₀/2) ψ_n(t) − (N₀/2) ψ_n(t) = 0   (4.34)-(4.36)

Since both Ñ_t and r_n are Gaussian, Ñ_t and r_n are also independent.

Figure 4.11

The conclusion is to ignore Ñ_t and extract information from r = (r₁, r₂, ..., r_N)ᵀ. Knowing the vector r, we can reconstruct the relevant part of the random process

    r_t = s_m(t) + N_t = Σ_{n=1}^{N} r_n ψ_n(t) + Ñ_t,  0 ≤ t ≤ T

Figure 4.12: Demodulation and Detection.

Once the received signal has been converted to a vector, the correct transmitted signal must be detected based upon observations of the input vector. Detection is covered elsewhere (Section 4.6).

4.6 Detection by Correlation

Figure 4.13

4.6.1 Detection

Decide which s_m(t) from the set of {s₁(t), ..., s_M(t)} signals was transmitted based on observing r = (r₁, r₂, ..., r_N)ᵀ, the vector composed of the demodulated (Section 4.5) received signal, that is, the vector of projections of the received signal onto the N bases:

    m̂ = arg max_{1≤m≤M} Pr[s_m(t) was transmitted | r was observed]   (4.37)

    Pr[s_m | r] = Pr[s_m(t) was transmitted | r was observed] = f_{r|s_m} Pr[s_m] / f_r   (4.38)

If Pr[s_m was transmitted] = 1/M, that is, information symbols are equally likely to be transmitted, then

    arg max_{1≤m≤M} Pr[s_m | r] = arg max_{1≤m≤M} f_{r|s_m}   (4.39)

Since r(t) = s_m(t) + N_t for 0 ≤ t ≤ T and for some m ∈ {1, 2, ..., M}, then r = s_m + η where η = (η₁, η₂, ..., η_N)ᵀ and the η_n's are Gaussian and independent, so for r_n ∈ R,

    f_{r|s_m} = (πN₀)^{−N/2} e^{ −Σ_{n=1}^{N}(r_n − s_mn)²/N₀ }   (4.40)

    m̂ = arg max_{1≤m≤M} f_{r|s_m} = arg max_{1≤m≤M} ln f_{r|s_m}
      = arg max_{1≤m≤M} [ −(N/2) ln(πN₀) − (1/N₀) Σ_{n=1}^{N} (r_n − s_mn)² ]
      = arg min_{1≤m≤M} Σ_{n=1}^{N} (r_n − s_mn)²   (4.41)

    m̂ = arg min_{1≤m≤M} D(r, s_m)   (4.42)

where D(r, s_m) = √( Σ_{n=1}^{N} (r_n − s_mn)² ) is the ℓ₂ distance between the vectors r and s_m. Expanding the square and discarding the term ||r||², which does not depend on m,

    m̂ = arg max_{1≤m≤M} ( 2<r, s_m> − ||s_m||² )   (4.43)

where ||r|| is the ℓ₂ norm of the vector r. This type of receiver system is known as a correlation (or correlator-type) receiver. Examples of the use of such a system are found here (Section 4.7). Another type of receiver involves linear, time-invariant filters and is known as a matched filter (Section 4.8) receiver. An analysis of the performance of a correlator-type receiver using antipodal and orthogonal binary signals can be found in Performance Analysis.

4.7 Examples of Correlation Detection

The implementation and theory of correlator-type receivers can be found in Detection (Section 4.6).

Example 4.6
(Figures 4.14-4.15) m̂ = 2, since D(r, s₁) > D(r, s₂); equivalently, since ||s₁|| = ||s₂|| and <r, s₂> > <r, s₁>.


Example 4.7
Data symbols are "0" or "1" with equal probability. The modulator maps them to antipodal signals: s₁(t) = s(t) for 0 ≤ t ≤ T and s₂(t) = −s(t) for 0 ≤ t ≤ T (Figure 4.16).

    ψ₁(t) = s(t)/√(A²T),  s₁₁ = A√T,  and  s₂₁ = −A√T   (4.44)

    r_t = s_m(t) + N_t,  m ∈ {1, 2}   (Figure 4.17)

The demodulator output is either

    r₁ = A√T + η₁   (4.45)

or

    r₁ = −A√T + η₁   (4.46)

where η₁ is Gaussian with zero mean and variance N₀/2 (Figure 4.18).

The decision rule is m̂ = arg max{ A√T r₁, −A√T r₁ }. Since A√T > 0 and Pr[s₁] = Pr[s₂], the MAP decision rule decides:

    s₁(t) was transmitted if r₁ ≥ 0
    s₂(t) was transmitted if r₁ < 0   (4.47)

An alternate demodulator works directly with the vector representation: (r_t = s_m(t) + N_t) ⇒ (r = s_m + η).
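A minimal Monte Carlo sketch of the decision rule (4.47), not from the original module: the correlator output is r₁ = ±A√T + η₁ with η₁ ~ N(0, N₀/2), and the simulated error rate should be close to Q(A√(2T/N₀)):

    import numpy as np
    from math import erfc, sqrt

    rng = np.random.default_rng(2)
    A, T, N0, n = 1.0, 1.0, 0.5, 200_000
    bits = rng.integers(0, 2, n)                  # 0 -> s1, 1 -> s2
    s = np.where(bits == 0, A * np.sqrt(T), -A * np.sqrt(T))
    r1 = s + rng.normal(0.0, np.sqrt(N0 / 2), n)  # correlator output
    decided = (r1 < 0).astype(int)                # decide s2 when r1 < 0

    def Q(x):                                     # Gaussian tail function
        return 0.5 * erfc(x / sqrt(2))

    print(np.mean(decided != bits))               # simulated error rate
    print(Q(A * sqrt(2 * T / N0)))                # theoretical Q(sqrt(2 Es/N0))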

4.8 Matched Filters

The Signal-to-Noise Ratio (SNR) at the output of the demodulator is a measure of the quality of the demodulator:

    SNR = signal energy / noise energy   (4.48)

In the correlator described earlier, E_s = ||s_m||² and σ_{η_n}² = N₀/2. Is it possible to design a demodulator based on linear time-invariant filters with maximum signal-to-noise ratio?

Figure 4.19

If s_m(t) is the transmitted signal, then the output of the kth filter is given as

    y_k(t) = ∫ r_τ h_k(t − τ) dτ = ∫ (s_m(τ) + N_τ) h_k(t − τ) dτ = ∫ s_m(τ) h_k(t − τ) dτ + ∫ N_τ h_k(t − τ) dτ   (4.49)

Sampling the output at time T yields

    y_k(T) = ∫ s_m(τ) h_k(T − τ) dτ + ∫ N_τ h_k(T − τ) dτ   (4.50)

The noise contribution is

    ν_k = ∫ N_τ h_k(T − τ) dτ   (4.51)

The expected value of the noise component is

    E[ν_k] = E[ ∫ N_τ h_k(T − τ) dτ ] = 0   (4.52)

The variance of the noise component is the second moment, since the mean is zero, and is given as

    σ²(ν_k) = E[ν_k²] = E[ ∫ N_τ h_k(T − τ) dτ ∫ N_{τ'} h_k(T − τ') dτ' ]   (4.53)

    E[ν_k²] = ∫ ∫ (N₀/2) δ(τ − τ') h_k(T − τ) h_k(T − τ') dτ dτ' = (N₀/2) ∫ |h_k(T − τ)|² dτ   (4.54)

The signal energy can be written as

    ( ∫ s_m(τ) h_k(T − τ) dτ )²   (4.55)

and the signal-to-noise ratio (SNR) as

    SNR = ( ∫ s_m(τ) h_k(T − τ) dτ )² / ( (N₀/2) ∫ |h_k(T − τ)|² dτ )   (4.56)

The signal-to-noise ratio can be maximized using the well-known Cauchy-Schwarz inequality

    ( ∫ g₁(x) g₂(x) dx )² ≤ ∫ |g₁(x)|² dx ∫ |g₂(x)|² dx   (4.57)

with equality when g₁(x) = αg₂(x). Applying the inequality directly yields an upper bound on SNR,

    ( ∫ s_m(τ) h_k(T − τ) dτ )² / ( (N₀/2) ∫ |h_k(T − τ)|² dτ ) ≤ (2/N₀) ∫ |s_m(τ)|² dτ   (4.58)

with equality when h_k^opt(T − τ) = αs_m(τ) for all τ. Therefore, the filter to examine signal m should be the

Matched Filter:

    h_m^opt(τ) = s_m(T − τ) for all τ   (4.59)

The constant factor α is not relevant when one considers the signal-to-noise ratio, which is unchanged when both the numerator and denominator are scaled. The maximum SNR is

    (2/N₀) ∫ |s_m(τ)|² dτ = 2E_s/N₀   (4.60)
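A short numerical sketch of (4.59)-(4.60), not part of the original module: sampling the matched-filter output at t = T yields the signal energy E_s, and the noise at the same instant has variance (N₀/2)E_s, so the output SNR is E_s²/((N₀/2)E_s) = 2E_s/N₀:

    import numpy as np

    rng = np.random.default_rng(3)
    dt = 1e-3
    t = np.arange(0, 1.0, dt)                 # symbol interval [0, T), T = 1
    s = t.copy()                              # example signal s(t) = t
    h = s[::-1]                               # matched filter h(t) = s(T - t)
    Es = np.sum(s ** 2) * dt                  # signal energy, here T^3/3

    y_signal = np.sum(s * h[::-1]) * dt       # filter output sampled at t = T
    print(y_signal, Es)                       # both equal Es = 1/3

    N0 = 0.1
    noise = rng.normal(0, np.sqrt(N0 / (2 * dt)), (5000, t.size))
    nu = (noise * h[::-1]).sum(axis=1) * dt   # noise through the same filter
    print(nu.var(), N0 / 2 * Es)              # variance (N0/2)Es, so SNR = 2Es/N0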

Examples involving matched filter receivers can be found here (Section 4.9). An analysis in the frequency domain is contained in Matched Filters in the Frequency Domain.

Another type of receiver system is the correlation (Section 4.6) receiver. A performance analysis of both matched filters and correlator-type receivers can be found in Performance Analysis.

4.9 Examples with Matched Filters

The theory and rationale behind matched filter receivers can be found in Matched Filters (Section 4.8).

Example 4.8
(Figure 4.20) s₁(t) = t for 0 ≤ t ≤ T and s₂(t) = −t for 0 ≤ t ≤ T. The matched filters are h₁(t) = T − t for 0 ≤ t ≤ T and h₂(t) = −T + t for 0 ≤ t ≤ T. The output of the first matched filter, for 0 ≤ t ≤ 2T, is

    s̃₁(t) = ∫ s₁(τ) h₁(t − τ) dτ   (4.61)

For 0 ≤ t ≤ T,

    s̃₁(t) = ∫₀ᵗ τ (T − t + τ) dτ = (T − t) τ²/2 |₀ᵗ + τ³/3 |₀ᵗ = (t²/2)(T − t/3)   (4.62)

so that

    s̃₁(T) = T³/3   (4.63)

Compared to the correlator-type demodulation (Figures 4.21-4.22), with

    ψ₁(t) = s₁(t)/√E_s   (4.64)

the running correlator output is

    ∫₀ᵗ s₁(τ) ψ₁(τ) dτ = (1/√E_s) ∫₀ᵗ τ² dτ = (1/√E_s) t³/3   (4.65)-(4.66)

which at t = T equals s₁₁ = √E_s, since E_s = T³/3; this is the matched-filter output at T scaled by 1/√E_s.

Example 4.9
Assume binary data is transmitted at the rate of 1/T Hertz: bit "0" ⇒ (b = 1) ⇒ (s₁(t) = s(t)) for 0 ≤ t ≤ T, and bit "1" ⇒ (b = −1) ⇒ (s₂(t) = −s(t)) for 0 ≤ t ≤ T. The transmitted signal is

    X_t = Σ_{i=−P}^{P} bᵢ s(t − iT)   (4.67)

(Figure 4.23)

4.10 Performance Analysis of Binary Orthogonal Signals with Correlation

Orthogonal signals with equally likely bits: r_t = s_m(t) + N_t for 0 ≤ t ≤ T, m ∈ {1, 2}, and <s₁, s₂> = 0.

4.10.1 Correlation (correlator-type) receiver

    r_t ⇒ r = (r₁, r₂)ᵀ = s_m + η   (see Figure 4.24)

Decide s₁(t) was transmitted if r₁ ≥ r₂. The average bit-error probability is

    P_e = Pr[m̂ ≠ m] = Pr[b̂ ≠ b]   (4.68)

    P_e = (1/2) Pr[r ∈ R₂ | s₁(t) transmitted] + (1/2) Pr[r ∈ R₁ | s₂(t) transmitted]
        = (1/2) ∫∫_{R₂} (1/√(πN₀)) e^{−(r₁−√E_s)²/N₀} (1/√(πN₀)) e^{−r₂²/N₀} dr₁ dr₂
        + (1/2) ∫∫_{R₁} (1/√(πN₀)) e^{−r₁²/N₀} (1/√(πN₀)) e^{−(r₂−√E_s)²/N₀} dr₁ dr₂   (4.69)

Alternatively, if s₁(t) is transmitted we decide on the wrong signal if r₂ > r₁, i.e. η₂ > η₁ + √E_s, or when η₂ − η₁ > √E_s. Since η₂ − η₁ is zero-mean Gaussian with variance N₀,

    P_e = (1/2) ∫_{√E_s}^{∞} (1/√(2πN₀)) e^{−η²/(2N₀)} dη + (1/2) Pr[r₁ ≥ r₂ | s₂(t) transmitted] = Q(√(E_s/N₀))   (4.70)

Note that the distance between s₁ and s₂ is d₁₂ = √(2E_s), so the average bit-error probability is P_e = Q(d₁₂/√(2N₀)), as we had for the antipodal case. Note also that the bit-error probability is the same as for the matched filter (Section 4.11) receiver.
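As a hedged numeric comparison, not in the original text: antipodal signalling has P_e = Q(√(2E_s/N₀)) while orthogonal signalling has P_e = Q(√(E_s/N₀)), reflecting the distance gap between d₁₂ = 2√E_s and d₁₂ = √(2E_s), i.e. a 3 dB advantage for antipodal signals:

    from math import erfc, sqrt

    def Q(x):
        return 0.5 * erfc(x / sqrt(2))

    for snr_db in (0, 4, 8, 12):
        es_n0 = 10 ** (snr_db / 10)
        print(snr_db, Q(sqrt(2 * es_n0)), Q(sqrt(es_n0)))  # antipodal vs orthogonal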

4.11 Performance Analysis of Orthogonal Binary Signals with Matched Filters

    r_t ⇒ Y = (Y₁(T), Y₂(T))ᵀ   (4.71)

If s₁(t) is transmitted,

    Y₁(T) = ∫ s₁(τ) h₁^opt(T − τ) dτ + ν₁(T) = ∫ s₁(τ) s₁*(τ) dτ + ν₁(T) = E_s + ν₁(T)   (4.72)

    Y₂(T) = ∫ s₁(τ) s₂*(τ) dτ + ν₂(T) = ν₂(T)   (4.73)

If s₂(t) is transmitted, Y₁(T) = ν₁(T) and Y₂(T) = E_s + ν₂(T). (Figure 4.25)

    H₀: Y = (E_s, 0)ᵀ + (ν₁, ν₂)ᵀ   (4.74)

    H₁: Y = (0, E_s)ᵀ + (ν₁, ν₂)ᵀ   (4.75)

where ν₁ and ν₂ are independent Gaussian random variables with zero mean and variance (N₀/2)E_s. Note that the maximum likelihood detector decides based on comparing Y₁ and Y₂: if Y₁ ≥ Y₂ then s₁ was transmitted, otherwise s₂ was sent (Figures 4.26-4.27). The average bit-error probability is

    P_e = Q(√(E_s/N₀))   (4.76)

For a similar analysis for binary antipodal signals, refer to Performance Analysis of Binary Antipodal Signals with Matched Filters.

4.12 Carrier Phase Modulation

4.12.1 Phase Shift Keying (PSK)

Information is impressed on the phase of the carrier. As data changes from symbol period to symbol period, the phase shifts:

    s_m(t) = A P_T(t) cos( 2πf_c t + 2π(m − 1)/M ),  for all m ∈ {1, 2, ..., M}   (4.77)

Example 4.10
Binary PSK: M = 2, so s₁(t) or s₂(t) is sent with phases 0 and π.

4.12.2 Representing the Signals

An orthonormal basis to represent the signals is

    ψ₁(t) = (1/√E_s) A P_T(t) cos(2πf_c t)   (4.78)

    ψ₂(t) = −(1/√E_s) A P_T(t) sin(2πf_c t)   (4.79)

The signal can be expanded as

    S_m(t) = A P_T(t) cos(2π(m − 1)/M) cos(2πf_c t) − A P_T(t) sin(2π(m − 1)/M) sin(2πf_c t)   (4.80)-(4.81)

The signal energy is

    E_s = ∫ A² P_T²(t) cos²( 2πf_c t + 2π(m − 1)/M ) dt
        = ∫₀ᵀ A² [ 1/2 + (1/2) cos( 4πf_c t + 4π(m − 1)/M ) ] dt   (4.82)

    E_s = A²T/2 + (A²/2) ∫₀ᵀ cos( 4πf_c t + 4π(m − 1)/M ) dt ≈ A²T/2   (4.83)

(Note that in the above equation, the integral in the last step before the approximation is very small.) Therefore,

    ψ₁(t) = √(2/T) P_T(t) cos(2πf_c t)   (4.87)

    ψ₂(t) = −√(2/T) P_T(t) sin(2πf_c t)   (4.88)

and, in general,

    s_m = ( √E_s cos(2π(m − 1)/M), √E_s sin(2π(m − 1)/M) )ᵀ   (4.89)
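A small sketch of (4.89), not from the original module: the M-PSK constellation points and their nearest-neighbour distance 2√E_s sin(π/M), which shrinks as M grows:

    import numpy as np

    def psk_constellation(M, Es=1.0):
        m = np.arange(M)
        phase = 2 * np.pi * m / M
        return np.sqrt(Es) * np.column_stack((np.cos(phase), np.sin(phase)))

    pts = psk_constellation(8)
    print(np.round(pts, 3))
    d = np.linalg.norm(pts[0] - pts[1])       # nearest-neighbour distance
    print(d, 2 * np.sin(np.pi / 8))           # equals 2 sqrt(Es) sin(pi/M)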


4.12.3 Demodulation and Detection

    r_t = s_m(t) + N_t,  for some m ∈ {1, 2, ..., M}   (4.90)

We must note that due to the phase offset of the oscillator at the transmitter, phase jitter, or phase changes that occur because of propagation delay,

    r_t = A P_T(t) cos( 2πf_c t + 2π(m − 1)/M + φ ) + N_t   (4.91)

For binary PSK, the modulation is antipodal, and the optimum receiver in AWGN has average bit-error probability

    P_e = Q(√(2E_s/N₀)) = Q( A √(T/N₀) )   (4.92)

Consider the receiver where

    r_t = ±A P_T(t) cos(2πf_c t + φ) + N_t   (4.93)

The statistic is

    r₁ = ∫₀ᵀ r_t α cos(2πf_c t + φ̂) dt
       = ± ∫₀ᵀ αA cos(2πf_c t + φ) cos(2πf_c t + φ̂) dt + ∫₀ᵀ α cos(2πf_c t + φ̂) N_t dt   (4.94)

    r₁ = ± (αA/2) ∫₀ᵀ [ cos(4πf_c t + φ + φ̂) + cos(φ − φ̂) ] dt + η₁   (4.95)

    r₁ = ± (αAT/2) cos(φ − φ̂) + 0 + η₁   (4.96)

where η₁ = α ∫₀ᵀ N_t cos(ω_c t + φ̂) dt is zero-mean Gaussian with variance α²N₀T/4. Therefore,

    P_e = Q( (αAT/2) cos(φ − φ̂) / √(α²N₀T/4) ) = Q( A cos(φ − φ̂) √(T/N₀) )   (4.97)

which is not a function of α and depends strongly on phase accuracy,

    P_e = Q( cos(φ − φ̂) √(2E_s/N₀) )   (4.98)

The above result implies that the amplitude of the local oscillator in the correlator structure does not play a role in the performance of the correlation receiver. However, the accuracy of the phase does indeed play a major role. This point can be seen in the following example:

Example 4.11
A transmitted signal x_t = (−1)^i A cos(2πf_c t) arrives with a propagation delay τ, so the received carrier is

    x_t = (−1)^i A cos( 2πf_c (t − τ) ) = (−1)^i A cos( 2πf_c t − θ )   (4.99)-(4.100)

with θ = 2πf_c τ. The local oscillator should match to the phase θ.
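A brief sketch of (4.98), not from the original module, evaluates the BPSK error probability versus the phase error between the incoming carrier and the local oscillator, assuming a hypothetical E_s/N₀ of 8 dB:

    from math import erfc, sqrt, cos, pi

    def Q(x):
        return 0.5 * erfc(x / sqrt(2))

    es_n0 = 10 ** (8 / 10)                    # Es/N0 of 8 dB
    for dphi_deg in (0, 15, 30, 45, 60):
        dphi = dphi_deg * pi / 180
        print(dphi_deg, Q(cos(dphi) * sqrt(2 * es_n0)))

At 60 degrees of phase error, cos(φ − φ̂) = 1/2 and the effective SNR drops by 6 dB, which illustrates the strong dependence on phase accuracy noted above.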

4.13 Carrier Frequency Modulation

4.13.1 Frequency Shift Keying (FSK)

The data is impressed upon the carrier frequency. Therefore, the M different signals are

    s_m(t) = A P_T(t) cos( 2πf_c t + 2π(m − 1)Δf t + θ_m )   (4.101)

for m ∈ {1, 2, ..., M}. The M different signals have M different carrier frequencies with possibly different phase angles, since the generators of these carrier signals may be different. The carriers are

    f₁ = f_c,  f₂ = f_c + Δf,  ...,  f_M = f_c + (M − 1)Δf   (4.102)

Thus, the M signals may be designed to be orthogonal to each other:

    <s_m, s_n> = ∫₀ᵀ A² cos( 2πf_c t + 2π(m − 1)Δf t + θ_m ) cos( 2πf_c t + 2π(n − 1)Δf t + θ_n ) dt
      = (A²/2) ∫₀ᵀ cos( 4πf_c t + 2π(n + m − 2)Δf t + θ_m + θ_n ) dt + (A²/2) ∫₀ᵀ cos( 2π(m − n)Δf t + θ_m − θ_n ) dt
      = (A²/2) [ sin(4πf_c T + 2π(n + m − 2)Δf T + θ_m + θ_n) − sin(θ_m + θ_n) ] / ( 4πf_c + 2π(n + m − 2)Δf )
      + (A²/2) [ sin(2π(m − n)Δf T + θ_m − θ_n) − sin(θ_m − θ_n) ] / ( 2π(m − n)Δf )   (4.103)

If 2f_cT + (n + m − 2)Δf T is an integer, and if (m − n)Δf T is also an integer, then <s_m, s_n> = 0. If Δf T is an integer, then <s_m, s_n> ≈ 0 when f_c is much larger than 1/T. In the case θ_m = 0 for all m,

    <s_m, s_n> = (A²T/2) sinc( 2(m − n)Δf T )   (4.104)

Therefore, the frequency spacing could be as small as Δf = 1/(2T), since sinc(x) = 0 if x = ±1 or ±2. If the signals are designed to be orthogonal, then the average probability of error for binary FSK with the optimum receiver is

    P̄_e = Q(√(E_s/N₀))   (4.105)

in AWGN. Note that sinc(x) takes its minimum value, −0.216, not at x = ±1 but at x = ±1.4. Therefore, if Δf = 0.7/T, then

    P̄_e = Q(√(1.216 E_s/N₀))   (4.106)

which is a gain of 10 log 1.216 ≈ 0.85 dB over orthogonal FSK.
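A quick numerical sketch of (4.104), not from the original module, evaluates the normalised correlation of two FSK tones versus the spacing Δf, with hypothetical values T = 1 and f_c = 50 so that f_cT is an integer; the zero at Δf = 1/(2T) and the minimum near Δf = 0.7/T are visible:

    import numpy as np

    T, fc = 1.0, 50.0
    t = np.linspace(0.0, T, 20_000, endpoint=False)
    dt = t[1] - t[0]

    def corr(df):
        s1 = np.cos(2 * np.pi * fc * t)
        s2 = np.cos(2 * np.pi * (fc + df) * t)
        return 2 / T * np.sum(s1 * s2) * dt   # normalised <s1, s2> with A = 1

    for df in (0.5 / T, 0.7 / T, 1.0 / T):
        print(df, round(corr(df), 3), round(float(np.sinc(2 * df * T)), 3))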

4.14 Differential Phase Shift Keying

The phase lock loop provides estimates of the phase of the incoming modulated signal. A phase ambiguity of exactly π is a common occurrence in many phase lock loop (PLL) implementations. Therefore it is possible that θ̂ = θ + π without the knowledge of the receiver. Even if there is no noise, if b = 1 then b̂ = 0 and if b = 0 then b̂ = 1. In the presence of noise, an incorrect decision due to noise may result in a correct final decision (in the binary case, when there is phase ambiguity of π), with the probability:

    P_e = 1 − Q(√(2E_s/N₀))   (4.107)

Consider a stream of bits a_n ∈ {0, 1} and the BPSK modulated signal

    Σ_n (−1)^{a_n} A P_T(t − nT) cos(2πf_c t + θ)   (4.108)

In differential PSK, the transmitted bits are first encoded as b_n = a_n ⊕ b_{n−1}, with initial symbol (e.g. b₀) chosen without loss of generality to be either 0 or 1. The transmitted DPSK signal is

    Σ_n (−1)^{b_n} A P_T(t − nT) cos(2πf_c t + θ)   (4.109)

The decoder can be constructed as

    b_{n−1} ⊕ b_n = b_{n−1} ⊕ a_n ⊕ b_{n−1} = 0 ⊕ a_n = a_n   (4.110)

If two consecutive bits are detected correctly, i.e. b̂_n = b_n and b̂_{n−1} = b_{n−1}, then

    â_n = b̂_n ⊕ b̂_{n−1} = b_n ⊕ b_{n−1} = a_n ⊕ b_{n−1} ⊕ b_{n−1} = a_n   (4.111)

If there is a phase ambiguity, i.e. b̂_n = b_n ⊕ 1 and b̂_{n−1} = b_{n−1} ⊕ 1, then two consecutive bits are detected incorrectly, and

    â_n = b̂_n ⊕ b̂_{n−1} = b_n ⊕ 1 ⊕ b_{n−1} ⊕ 1 = b_n ⊕ b_{n−1} ⊕ 1 ⊕ 1 = b_n ⊕ b_{n−1} = a_n   (4.112)

so the decoded bit is still correct. If, however, exactly one of two consecutive bits is detected in error, that is b̂_n = b_n ⊕ 1 and b̂_{n−1} = b_{n−1}, or vice versa, there will be an error, and the probability of that error for DPSK is

    P_e = Pr[â_n ≠ a_n] = Pr[b̂_n ≠ b_n, b̂_{n−1} = b_{n−1}] + Pr[b̂_n = b_n, b̂_{n−1} ≠ b_{n−1}]
        = 2 Q(√(2E_s/N₀)) ( 1 − Q(√(2E_s/N₀)) )
        ≈ 2 Q(√(2E_s/N₀))   (4.113)

This approximation holds if Q(√(2E_s/N₀)) is small.
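A minimal sketch of the differential encoding rule b_n = a_n ⊕ b_{n−1} in (4.108)-(4.110), not from the original module, shows the immunity to a π phase ambiguity (every detected symbol inverted):

    import numpy as np

    rng = np.random.default_rng(4)
    a = rng.integers(0, 2, 12)                # information bits a_n
    b = np.zeros(a.size + 1, dtype=int)       # initial symbol b_0 chosen as 0
    for n in range(a.size):
        b[n + 1] = a[n] ^ b[n]                # differential encoding b_n = a_n XOR b_{n-1}

    b_hat = b ^ 1                             # pi ambiguity: all bits flipped
    a_hat = b_hat[1:] ^ b_hat[:-1]            # decoder: b_n XOR b_{n-1}
    print(np.array_equal(a_hat, a))           # True: the data is still recovered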


Chapter 5
Chapter 4: Communication over Band-limited AWGN Channel

5.1 Digital Transmission over Baseband Channels

Until this point, we have considered data transmissions over simple additive Gaussian channels that are not time or band limited. In this module we will consider channels that do have bandwidth constraints and are limited to the frequency range around zero (DC). The channel is best modeled by its impulse response g(t). Consider the modulated signal x_t = s_m(t) for 0 ≤ t ≤ T, for some m ∈ {1, 2, ..., M}. The channel output is then

    r_t = ∫ x_τ g(t − τ) dτ + N_t = ∫ s_m(τ) g(t − τ) dτ + N_t   (5.1)

The signal contribution in the frequency domain is

    S̃_m(f) = S_m(f) G(f)  for all f   (5.2)

The optimum matched filter should match to the filtered signal,

    H_m^opt(f) = S_m*(f) G*(f) e^{−i2πfT}  for all f   (5.3)

This filter is indeed optimum (i.e., it maximizes the signal-to-noise ratio); however, it requires knowledge of the channel impulse response. The signal energy is changed to

    Ẽ_s = ∫ |S̃_m(f)|² df   (5.4)

The band-limited nature of the channel and the stream of time-limited modulated signals create overlap which is referred to as intersymbol interference.

5.2 Introduction to ISI

A typical baseband digital system is described in Figure 1(a). At the transmitter, the modulated pulses are filtered to comply with some bandwidth constraint. These pulses are distorted by the reactances of the cable or by fading in the wireless systems.

Figure 5.1: (a) Typical baseband digital system. (b) Equivalent model.

Due to the effects of system filtering, the received pulses can overlap one another as shown in Figure 1(b). Such interference is termed InterSymbol Interference (ISI). Even in the absence of noise, the effects of filtering and channel-induced distortion lead to ISI. Figure 1(b) illustrates a convenient model, lumping all the filtering into one overall equivalent system transfer function

    H(f) = H_t(f) · H_c(f) · H_r(f)

where H_t, H_c, and H_r are the transmitting, channel, and receiving filters, and ISI appears in the detection process.

Nyquist investigated this problem and showed that the theoretical minimum system bandwidth needed in order to detect R_s symbols/s without ISI is R_s/2 or 1/(2T) hertz. For baseband systems, when H(f) is such a filter with single-sided bandwidth 1/(2T) (the ideal Nyquist filter), as shown in Figure 2(a), its impulse response is of the form h(t) = sinc(t/T), shown in Figure 2(b). This sinc(t/T)-shaped pulse is called the ideal Nyquist pulse. Even though two successive pulses h(t) and h(t − T) have long tails, the figure shows all tails of h(t) passing through zero amplitude at the instant when h(t − T) is to be sampled. Therefore, assuming that the synchronization is perfect, there will be no ISI.

Figure 5.2: Nyquist channels for zero ISI. (a) Rectangular system transfer function H(f). (b) Received pulse shape h(t) = sinc(t/T).

The names "Nyquist filter" and "Nyquist pulse" are often used to describe the general class of filtering and pulse-shaping that satisfy zero ISI at the sampling points. Among the class of Nyquist filters, the most popular ones are the raised cosine and root-raised cosine.

A fundamental parameter for communication systems is bandwidth efficiency, R/W bits/s/Hz. For ideal Nyquist filtering, the theoretical maximum symbol-rate packing without ISI is 2 symbols/s/Hz. For example, with 64-ary PAM, M = 64 = 2⁶ amplitudes, the theoretical maximum bandwidth efficiency is possible with 6 bits/symbol × 2 symbols/s/Hz = 12 bits/s/Hz.

5.3 Pulse Amplitude Modulation Through Bandlimited Channel

We will investigate ISI for a general PAM signalling. Consider a PAM system with symbols ..., b₋₁, b₀, b₁, ..., where each a_n takes one of M levels of amplitude. The transmitted signal is

    x_t = Σ_{n=−∞}^{∞} a_n s(t − nT)   (5.5)

The received signal is

    r_t = Σ_{n=−∞}^{∞} a_n ∫ s(t − nT − τ) g(τ) dτ + N_t = Σ_{n=−∞}^{∞} a_n s̃(t − nT) + N_t   (5.6)

where s̃(t) = s * g (t). Since the signals span a one-dimensional space, one filter matched to s̃(t) = s * g (t) is sufficient.

The matched filter's impulse response is

    h^opt(t) = s̃(T − t)  for all t   (5.7)

The matched filter output is

    y(t) = Σ_{n=−∞}^{∞} a_n ∫ s̃(t − nT − τ) h^opt(τ) dτ + ν(t) = Σ_{n=−∞}^{∞} a_n u(t − nT) + ν(t)   (5.8)

where u(t) = s̃ * h^opt (t) and ν(t) is the filtered noise. The decision on the kth symbol is obtained by sampling the MF output at kT:

    y(kT) = Σ_{n=−∞}^{∞} a_n u(kT − nT) + ν(kT)   (5.9)

The kth symbol is of interest:

    y(kT) = a_k u(0) + Σ_{n≠k} a_n u(kT − nT) + ν(kT)   (5.10)

where the sum term is the ISI. Since the channel is bandlimited, it provides memory for the transmission system: the effect of old symbols (possibly even future symbols) lingers and affects the performance of the receiver. The effect of ISI can be eliminated or controlled by proper design of modulation signals or precoding at the transmitter, or by filters, equalizers, or sequence detectors at the receiver.

5.4 Precoding and Bandlimited Signals

5.4.1 Precoding

The data symbols are manipulated so that the ISI contribution at the sampling instants is compensated, i.e. so that in

    y_k(kT) = a_k u(0) + ISI + ν(kT)   (5.11)

the ISI term is removed or controlled.

5.4.2 Design of Bandlimited Modulation Signals

Recall that the modulation signals are

    X_t = Σ_{n=−∞}^{∞} a_n s(t − nT)   (5.12)

We can design s(t) such that

    u(nT) = { large if n = 0; zero or small if n ≠ 0 }   (5.13)

where u(nT) = s̃ * h^opt (nT). That is, the signal s(t) can be designed to have reduced ISI.

5.4.3 Design Equalizers at the Receiver

Linear equalizers or decision feedback equalizers reduce ISI in the statistic y_t.

5.4.4 Maximum Likelihood Sequence Detection

    y(kT) = Σ_{n=−∞}^{∞} a_n u(kT − nT) + ν(kT)   (5.14)

By observing y(T), y(2T), ..., the data symbols are observed frequently. Therefore, ISI can be viewed as diversity to increase performance.

5.5 Pulse Shaping to Reduce ISI

The Raised-Cosine Filter
A transfer function belonging to the Nyquist class (zero ISI at the sampling times) is called the raised-cosine filter. It can be expressed as

    H(f) = 1                                               for |f| < 2W₀ − W
    H(f) = cos²( (π/4) (|f| + W − 2W₀)/(W − W₀) )          for 2W₀ − W < |f| < W
    H(f) = 0                                               for |f| > W   (1a)

    h(t) = 2W₀ sinc(2W₀t) cos[2π(W − W₀)t] / ( 1 − [4(W − W₀)t]² )   (1b)

where W is the absolute bandwidth and W₀ = 1/(2T) represents the minimum bandwidth for the rectangular spectrum and the −6 dB bandwidth (or half-amplitude point) for the raised-cosine spectrum. The quantity W − W₀ is termed the "excess bandwidth". The roll-off factor is defined to be

    r = (W − W₀)/W₀   (2)

where 0 ≤ r ≤ 1. With the Nyquist constraint W₀ = R_s/2, equation (2) can be rewritten as

    W = (1/2)(1 + r) R_s
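A small sketch of the raised-cosine pulse (1b), not from the original text, normalized so that h(0) = 1; it equals 1 at t = 0 and 0 at every other multiple of T, which is the zero-ISI property:

    import numpy as np

    def raised_cosine(t, T=1.0, r=0.5):
        W0 = 1.0 / (2 * T)
        W = (1 + r) * W0
        x = 2 * W0 * t
        num = np.sinc(x) * np.cos(2 * np.pi * (W - W0) * t)
        den = 1.0 - (4 * (W - W0) * t) ** 2
        # L'Hopital limit where den = 0; np.where avoids the 0/0 division
        safe = np.where(np.abs(den) < 1e-10, 1.0, den)
        return np.where(np.abs(den) < 1e-10, np.pi / 4 * np.sinc(x), num / safe)

    t = np.arange(-4, 5, 1.0)                 # sampling instants nT
    print(np.round(raised_cosine(t), 6))      # 1 at t = 0, zero elsewhere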

Figure 5.3: Raised-cosine filter characteristics. (a) System transfer function. (b) System impulse response.

The raised-cosine characteristic is illustrated in Figure 1 for r = 0, r = 0.5, and r = 1. When r = 1, the required excess bandwidth is 100%, and the system can provide a symbol rate of R_s symbols/s using a bandwidth of R_s hertz (twice the Nyquist minimum bandwidth), thus yielding a symbol-rate packing of 1 symbol/s/Hz. The larger the filter roll-off, the shorter the pulse tail; small tails exhibit less sensitivity to timing errors and thus make for small degradation due to ISI. The smaller the filter roll-off, the smaller the excess bandwidth; the cost is longer pulse tails, larger pulse amplitudes, and thus greater sensitivity to timing errors.

The Root Raised-Cosine Filter
Recall that the raised-cosine frequency transfer function describes the composite H(f), including the transmitting filter, channel filter, and receiving filter. The filtering at the receiver is chosen so that the overall transfer function is a form of raised cosine. Often this is accomplished by choosing both the receiving filter and the transmitting filter so that each has a transfer function known as a root raised cosine. Neglecting any channel-induced ISI, the product of these root-raised-cosine functions yields the composite raised-cosine system transfer function.

5.6 Two Types of Error-Performance Degradation

Error-performance degradation can be classified into two groups. The first is due to a decrease in received signal power or an increase in noise or interference power, giving rise to a loss in signal-to-noise ratio E_b/N₀. The second is due to signal distortion such as ISI.

Figure 5.4: Bit error probability.

Suppose that we need a communication system with a bit-error probability P_B versus E_b/N₀ characteristic corresponding to the solid-line curve plotted in Figure 1. Suppose that after the system is configured, the performance does not follow the theoretical curve, but in fact follows the dashed-line plot (1). This is a loss in E_b/N₀ due to some signal losses or an increased level of noise or interference. This loss is not so terrible when compared with the possible effects of degradation caused by a distortion mechanism, corresponding to the dashed-line plot (2). Instead of suffering a simple loss in signal-to-noise ratio, there is a degradation effect brought about by ISI: more E_b/N₀ cannot help the ISI problem, because an increase in E_b/N₀ does not make any change in the overlapped pulses. If there is no solution to the distortion problem, then there is no amount of E_b/N₀ that will improve this problem.

5.7 Eye Pattern

An eye pattern is the display that results from measuring a system's response to baseband signals in a prescribed way.

Figure 5.5: Eye pattern.

Figure 1 describes the eye pattern that results for binary pulse signalling. The width of the opening indicates the time over which sampling for detection might be performed. The optimum sampling time corresponds to the maximum eye opening, yielding the greatest protection against noise. If there were no filtering in the system, then the system would look like a box rather than an eye. In Figure 1, D_A, the range of amplitude differences of the zero crossings, is a measure of distortion caused by ISI; J_T, the range of time differences of the zero crossings, is a measure of the timing jitter; M_N is a measure of noise margin; and S_T is a measure of sensitivity to timing error. As the eye closes, ISI is increasing; as the eye opens, ISI is decreasing. In general, the most frequent use of the eye pattern is for qualitatively assessing the extent of the ISI.

5.8 Transversal Equalizer

A training sequence used for equalization is often chosen to be a noise-like sequence, which is needed to estimate the channel frequency response. In the simplest sense, the training sequence might be a single narrow pulse, but a pseudonoise (PN) signal is preferred in practice because the PN signal has larger average power and hence larger SNR for the same peak transmitted power.

Figure 5.6: Received pulse exhibiting distortion.

Consider that a single pulse was transmitted over a system designed to have a raised-cosine transfer function H_RC(f) = H_t(f) · H_r(f); also consider that the channel induces ISI, so that the received demodulated pulse exhibits distortion, as shown in Figure 1, such that the pulse sidelobes do not go through zero at the sample times. To achieve the desired raised-cosine transfer function, the equalizing filter should have a frequency response

    H_e(f) = 1/H_c(f) = (1/|H_c(f)|) e^{−jθ_c(f)}   (1)

In other words, we would like the equalizing filter to generate a set of canceling echoes, thus making H_e(f) correspond exactly to the inverse of the channel transfer function H_c(f). The transversal filter, illustrated in Figure 2, is the most popular form of an easily adjustable equalizing filter, consisting of a delay line with T-second taps (where T is the symbol duration).

Figure 5.7: Transversal filter.

Consider that there are 2N + 1 taps with weights c₋N, c₋N₊₁, ..., c_N. The output samples z(k) are the convolution of the input samples x(k) and the tap weights c_n:

    z(k) = Σ_{n=−N}^{N} x(k − n) c_n,  k = −2N, ..., 2N   (2)

By defining the vectors z and c and the matrix x as, respectively,

    z = ( z(−2N), ..., z(0), ..., z(2N) )ᵀ,  c = ( c₋N, ..., c₀, ..., c_N )ᵀ

and x the (4N+1) × (2N+1) matrix whose rows contain the correspondingly shifted input samples x(k), with zeros elsewhere, we can describe the relationship among z(k), x(k), and c_n more compactly as

    z = x c   (3a)

Whenever the matrix x is square, we can find c by solving

    c = x⁻¹ z   (3b)

Notice that the index k was arbitrarily chosen to allow for 4N + 1 sample points. The vectors z and c have dimensions 4N + 1 and 2N + 1, respectively. Such equations are referred to as an overdetermined set. This problem can be solved in a deterministic way, known as the zero-forcing solution, or in a statistical way, known as the minimum mean-square error (MSE) solution.

Zero-Forcing Solution
At first, by disposing of the top N rows and the bottom N rows, the matrix x is transformed into a square matrix of dimension 2N + 1 by 2N + 1. Then equation c = x⁻¹z is used to solve for the 2N + 1 tap weights c_n. This solution minimizes the peak ISI distortion by selecting the c_n weights so that the equalizer output is forced to zero at N sample points on either side of the desired pulse:

    z(k) = { 1 for k = 0; 0 for k = ±1, ±2, ..., ±N }   (4)

For such an equalizer with finite length, the peak distortion is guaranteed to be minimized only if the eye pattern is initially open. However, for high-speed transmission and channels introducing much ISI, the eye is often closed before equalization. Since the zero-forcing equalizer neglects the effect of noise, it is not always the best system solution.

Minimum MSE Solution
A more robust equalizer is obtained if the c_n tap weights are chosen to minimize the mean-square error (MSE) of all the ISI terms plus the noise power at the output of the equalizer. MSE is defined as the expected value of the squared difference between the desired data symbol and the estimated data symbol. By multiplying both sides of equation (3a) by xᵀ, we have

    xᵀ z = xᵀ x c   (5)

and

    R_xz = R_xx c   (6)

where R_xz = xᵀz is called the cross-correlation vector and R_xx = xᵀx is called the autocorrelation matrix of the input noisy signal. In practice, R_xz and R_xx are unknown, but they can be approximated by transmitting a test signal and using time-average estimates to solve for the tap weights from equation (6) as

    c = R_xx⁻¹ R_xz   (7)

Most high-speed telephone-line modems use an MSE weight criterion because it is superior to a zero-forcing criterion: it is more robust in the presence of noise and large ISI.
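A compact sketch of the zero-forcing solution, not from the original text: it builds the (2N+1) × (2N+1) convolution matrix from the T-spaced samples of the distorted pulse and solves for the taps that force z(k) = δ(k). The pulse samples used here are hypothetical:

    import numpy as np

    def zero_forcing_taps(x_samples, N):
        """x_samples: dict k -> x(k), the distorted pulse at the T-spaced taps."""
        size = 2 * N + 1
        X = np.zeros((size, size))
        for row, k in enumerate(range(-N, N + 1)):
            for col, n in enumerate(range(-N, N + 1)):
                X[row, col] = x_samples.get(k - n, 0.0)
        z = np.zeros(size)
        z[N] = 1.0                            # desired output: 1 at k = 0, 0 elsewhere
        return np.linalg.solve(X, z)

    x = {-1: 0.1, 0: 1.0, 1: -0.2, 2: 0.05}   # hypothetical ISI on both sides
    c = zero_forcing_taps(x, N=2)
    print(np.round(c, 4))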

Figure 5.8: Decision feedback equalizer

Figure 1 shows a simplified block diagram of a DFE, where the forward filter and the feedback filter can each be a linear filter, such as a transversal filter. The nonlinearity of the DFE stems from the nonlinear characteristic of the detector that provides the input to the feedback filter. The basic idea of a DFE is that if the values of the symbols previously detected are known, then the ISI contributed by these symbols can be canceled out exactly at the output of the forward filter by subtracting past symbol values with appropriate weighting. The advantage of a DFE implementation is that the feedback filter, which is additionally working to remove ISI, operates on noiseless quantized levels, and thus its output is free of channel noise. The forward and feedback tap weights can be adjusted simultaneously to fulfill a criterion such as minimizing the MSE.

5.10 Adaptive Equalization(10)

Another type of equalization, capable of tracking a slowly time-varying channel response, is known as adaptive equalization. It can be implemented to perform tap-weight adjustments periodically or continually. Periodic adjustments are accomplished by periodically transmitting a preamble or short training sequence of digital data known by the receiver. Continual adjustments are accomplished by replacing the known training sequence with a sequence of data symbols estimated from the equalizer output and treated as known data. When performed continually and automatically in this way, the adaptive procedure is referred to as decision directed. If the probability of error exceeds one percent, the decision-directed equalizer might not converge. A common solution to this problem is to initialize the equalizer with an alternate process, such as a preamble, to provide good channel-error performance, and then switch to decision-directed mode.

10: This content is available online at <http://cnx.org/content/m15523/1.2/>.

The simultaneous equations described in equation (3) of the module Transversal Equalizer(11) do not include the effects of channel noise. To obtain a stable solution to the filter weights, it is necessary that the data be averaged to obtain stable signal statistics, or the noisy solutions obtained from the noisy data must be averaged. The most robust algorithm that averages noisy solutions is the least-mean-square (LMS) algorithm. Each iteration of this algorithm uses a noisy estimate of the error gradient to adjust the weights in the direction that reduces the average mean-square error. The error is

e(k) = z(k) - ẑ(k)   (1)

where z(k) and ẑ(k) are the desired output signal (a sample free of ISI) and its estimate at time k. The estimate is formed as

ẑ(k) = c^T r_x = Σ_{n=-N}^{N} x(k - n) c_n   (2)

where c^T is the transpose of the weight vector at time k. The noisy gradient is simply the product e(k) r_x of the error scalar e(k) and the data vector r_x. The iterative process that updates the set of weights is obtained as follows:

c(k + 1) = c(k) + Δ e(k) r_x   (3)

where c(k) is the vector of filter weights at time k, and Δ is a small term that limits the coefficient step size and thus controls the rate of convergence of the algorithm as well as the variance of the steady-state solution. Stability is assured if the parameter Δ is smaller than the reciprocal of the energy of the data in the filter. Thus, while we want the convergence parameter Δ to be large for fast convergence (but not so large as to be unstable), we also want it to be small enough for low variance.

11: http://cnx.org/content/m15522/latest/
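A minimal LMS training loop following equations (1)-(3). The three-tap channel, noise level, and step size are assumptions chosen for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 4                                    # 2N + 1 = 9 taps
    n_taps = 2 * N + 1
    delta = 0.01                             # convergence parameter

    channel = np.array([0.2, 0.9, 0.3])      # assumed channel impulse response
    train = rng.choice([-1.0, 1.0], size=5000)        # known training symbols
    rx = np.convolve(train, channel, mode="same")     # ISI-corrupted samples
    rx += 0.05 * rng.standard_normal(rx.size)         # channel noise

    c = np.zeros(n_taps)
    mse = []
    for k in range(n_taps, len(train)):
        r_x = rx[k - n_taps:k][::-1]         # data vector currently in the filter
        z_hat = c @ r_x                      # equalizer estimate, equation (2)
        e = train[k - N - 1] - z_hat         # error vs the center symbol, equation (1)
        c += delta * e * r_x                 # LMS weight update, equation (3)
        mse.append(e * e)

    print("mean-square error, first/last 500 updates:",
          np.mean(mse[:500]), np.mean(mse[-500:]))

The printed values should show the error shrinking as the taps converge.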


Chapter 6. Chapter 5: Channel Coding

6.1 Channel Capacity(1)

In the previous section, we discussed information sources and quantified information. We also discussed how to represent (and compress) information sources in binary symbols in an efficient manner. In this section, we consider channels and will find out how much information can be sent through the channel reliably.

We will first consider simple channels where the input is a discrete random variable and the output is also a discrete random variable. These discrete channels could represent analog channels with modulation and demodulation and detection.

Figure 6.1

Let us denote the input sequence to the channel as

X = (X1, X2, ..., Xn)^T   (6.1)

where Xi ∈ X̄, a discrete symbol set or input alphabet.

1: This content is available online at <http://cnx.org/content/m10173/2.8/>.

The channel output is

Y = (Y1, Y2, Y3, ..., Yn)^T   (6.2)

where Yi ∈ Ȳ, a discrete symbol set or output alphabet.

A discrete channel is called a discrete memoryless channel if

p_{Y|X}(y|x) = Π_{i=1}^{n} p_{Yi|Xi}(yi|xi)   (6.3)

for all y ∈ Ȳ^n and for all x ∈ X̄^n. The statistical properties of a channel are determined if one finds p_{Y|X}(y|x) for all y ∈ Ȳ^n and for all x ∈ X̄^n.

Example 6.1
A binary symmetric channel (BSC) is a discrete memoryless channel with binary input and binary output and p_{Y|X}(y = 0|x = 1) = p_{Y|X}(y = 1|x = 0). As an example, a white Gaussian channel with antipodal signaling and matched filter receiver has probability of error of Q(√(2Es/N0)). Since the error is symmetric with respect to the transmitted bit,

p_{Y|X}(0|1) = p_{Y|X}(1|0) = Q(√(2Es/N0)) = ε   (6.4)

Figure 6.2

It is interesting to note that every time a BSC is used, one bit is sent across the channel with probability of error ε. The question is how much information, or how many bits, can be sent per channel use, reliably. Before we consider this question, a few definitions are essential. These are discussed in Mutual Information (Section 6.2).
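A short simulation of the BSC model of equation (6.4); the Es/N0 value is an illustrative assumption:

    import numpy as np
    from math import erfc, sqrt

    def q_func(x):
        # Gaussian tail function Q(x)
        return 0.5 * erfc(x / sqrt(2.0))

    rng = np.random.default_rng(1)
    es_n0 = 2.0                          # assumed Es/N0 (linear)
    eps = q_func(sqrt(2 * es_n0))        # crossover probability, equation (6.4)

    bits = rng.integers(0, 2, size=100_000)
    flips = rng.random(bits.size) < eps  # each bit flips independently with prob. eps
    received = bits ^ flips

    print("theoretical crossover:", eps)
    print("measured error rate:  ", np.mean(received != bits))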

6.2 Mutual Information(2)

Recall that

H(X, Y) = -Σ_x Σ_y p_{X,Y}(x, y) log p_{X,Y}(x, y)   (6.5)

H(Y) + H(X|Y) = H(X) + H(Y|X)   (6.6)

Definition 6.1: Mutual Information
The mutual information between two discrete random variables is denoted by I(X; Y) and defined as

I(X; Y) = H(X) - H(X|Y)   (6.7)

Mutual information is a useful concept to measure the amount of information shared between the input and the output of noisy channels.

In our previous discussions it became clear that when the channel is noisy there may not be reliable communications. Therefore, the limiting factor could very well be reliability when one considers noisy channels; the rate of transmission was not thought to be the limiting factor. Claude E. Shannon in 1948 changed this paradigm and stated a theorem that presents the rate (speed of communication) as the limiting factor, as opposed to reliability.

Example 6.2
Consider a discrete memoryless channel with four possible inputs and outputs. Every time the channel is used, one of the four symbols will be transmitted. Therefore, 2 bits are sent per channel use. The system, however, is very unreliable. For example, if "a" is received, the receiver cannot determine, reliably, if "a" was transmitted or "d". However, if the transmitter and receiver agree to use only symbols "a" and "c" and never use "b" and "d", then the transmission will always be reliable, but only 1 bit is sent per channel use.

Figure 6.3

The concept is appealing when one uses only those inputs whose corresponding outputs are disjoint (i.e., far apart). It may work if one considers a vector of binary inputs, referred to as the extension channel, but it does not seem possible with binary channels, since the input is either zero or one. This is the essence of Shannon's noisy channel coding theorem.

2: This content is available online at <http://cnx.org/content/m10178/2.9/>.
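A sketch computing I(X; Y) from a joint pmf, using the identity I(X; Y) = H(X) + H(Y) - H(X, Y), which is equivalent to equation (6.7):

    import numpy as np

    def mutual_information(p_xy):
        p_x = p_xy.sum(axis=1)
        p_y = p_xy.sum(axis=0)
        def H(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))
        return H(p_x) + H(p_y) - H(p_xy.ravel())

    # Joint pmf of a BSC with crossover 0.1 and uniform input.
    eps = 0.1
    p_xy = 0.5 * np.array([[1 - eps, eps],
                           [eps, 1 - eps]])
    print("I(X;Y) =", mutual_information(p_xy), "bits")  # 1 - Hb(0.1) ≈ 0.531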

6.3 Typical Sequences(3)

This module provides a description of the basic information necessary to understand Shannon's Noisy Channel Coding Theorem (Section 6.4).

X input vector = (X1, ..., Xn)^T ∈ X̄^n = {0, 1}^n
Y output vector = (Y1, ..., Yn)^T ∈ Ȳ^n = {0, 1}^n

Figure 6.4

If the binary symmetric channel has crossover probability ε, then if x is transmitted, by the Law of Large Numbers the output y is different from x in nε places if n is very large:

d_H(x, y) ≈ nε   (6.8)

The number of sequences of length n that are different from x of length n at nε places is

C(n, nε) = n! / ((nε)! (n - nε)!)   (6.9)

Example 6.3
x = (0, 0, 0)^T and ε = 1/3. The number of output sequences different from x by one element is 3!/(1! 2!) = (3 × 2 × 1)/(1 × 2) = 3, given by (1, 0, 0)^T, (0, 1, 0)^T, and (0, 0, 1)^T.

Using Stirling's approximation

n! ≈ n^n e^{-n} √(2πn)   (6.10)

3: This content is available online at <http://cnx.org/content/m10179/2.10/>.

we can approximate

C(n, nε) ≈ 2^{n(-ε log2 ε - (1-ε) log2(1-ε))} = 2^{n Hb(ε)}   (6.11)

where Hb(ε) ≡ -ε log2 ε - (1-ε) log2(1-ε) is the entropy of a binary memoryless source. Therefore, for any x there are 2^{n Hb(ε)} highly probable outputs that correspond to this input.

Consider the output vector Y as a very long random vector with entropy nH(Y). As discussed earlier (Example 3.1), the number of typical sequences (or highly probable sequences) is roughly 2^{nH(Y)}. Therefore, 2^{nH(Y)} is the total number of likely binary output sequences, 2^{n Hb(ε)} is the number of elements in a group of possible outputs for one input vector, and the number of distinguishable input sequences of length n is

2^{n(H(Y) - Hb(ε))}   (6.12)

Figure 6.5

The maximum number of input sequences that produce nonoverlapping output sequences is

M = 2^{nH(Y)} / 2^{n Hb(ε)} = 2^{n(H(Y) - Hb(ε))}   (6.13)

The number of information bits that can be sent across the channel reliably per n channel uses is n(H(Y) - Hb(ε)), so the maximum reliable transmission rate per channel use is

R = (log2 M) / n = n(H(Y) - Hb(ε)) / n = H(Y) - Hb(ε)   (6.14)

The maximum rate can be increased by increasing H(Y). Note that Hb(ε) is only a function of the crossover probability and cannot be minimized any further.

If the input is chosen to be uniformly distributed, with pX(0) = pX(1) = 1/2, then

pY(0) = (1 - ε) pX(0) + ε pX(1) = 1/2   (6.15)

pY(1) = (1 - ε) pX(1) + ε pX(0) = 1/2   (6.16)

The entropy of the channel output is then the entropy of a uniform binary random variable, so H(Y) takes its maximum value of 1. Recall that for binary symmetric channels (BSC),

H(Y|X) = pX(0) H(Y|X = 0) + pX(1) H(Y|X = 1)
       = pX(0)(-(1-ε) log2(1-ε) - ε log2 ε) + pX(1)(-(1-ε) log2(1-ε) - ε log2 ε)
       = -(1-ε) log2(1-ε) - ε log2 ε
       = Hb(ε)   (6.17)

This results in a maximum rate R = 1 - Hb(ε) when pX(0) = pX(1) = 1/2. Therefore, the maximum rate indeed was

R = H(Y) - H(Y|X) = I(X; Y)   (6.18)

This result says that ordinarily one bit is transmitted across a BSC with reliability 1 - ε. If one needs the probability of error to reach zero, then one should reduce the transmission of information to 1 - Hb(ε) bits per channel use and add redundancy.

Example 6.4
The maximum reliable rate for a BSC is 1 - Hb(ε). The rate is 1 when ε = 0 or ε = 1. The rate is 0 when ε = 1/2.

Figure 6.6
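A small check of Example 6.4, evaluating C = 1 - Hb(ε) at a few crossover probabilities:

    import numpy as np

    def hb(eps):
        # binary entropy function Hb(eps); Hb(0) = Hb(1) = 0 by convention
        if eps <= 0.0 or eps >= 1.0:
            return 0.0
        return -eps * np.log2(eps) - (1 - eps) * np.log2(1 - eps)

    for eps in (0.0, 0.1, 0.5, 0.9, 1.0):
        print(f"eps = {eps:.1f}: C = {1 - hb(eps):.3f} bits per channel use")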

6.4 Shannon's Noisy Channel Coding Theorem(4)

It is highly recommended that the information presented in Mutual Information (Section 6.2) and in Typical Sequences (Section 6.3) be reviewed before proceeding with this document. An introductory module on the theorem is available at Noisy Channel Theorems(5).

Theorem 6.1: Shannon's Noisy Channel Coding
The capacity of a discrete-memoryless channel is given by

C = max { I(X; Y) | pX(x) }   (6.19)

where I(X; Y) is the mutual information between the channel input X and the output Y. If the transmission rate R is less than C, then for any ε > 0 there exists a code with block length n large enough whose error probability is less than ε. If R > C, the error probability of any code with any block length is bounded away from zero.

Example 6.5
If we have a binary symmetric channel with crossover probability 0.1, then the capacity is C ≈ 0.5 bits per transmission. Therefore, it is possible to send 0.4 bits per channel use through the channel reliably. This means that we can take 400 information bits and map them into a code of length 1000 bits; then the whole code can be transmitted over the channel. One hundred of those bits may be detected incorrectly, but the 400 information bits may be decoded correctly.

Before we consider continuous-time additive white Gaussian channels, let us concentrate on discrete-time Gaussian channels

Yi = Xi + ηi   (6.20)

where the Xi's are information-bearing random variables and ηi is a Gaussian random variable with variance ση². The input Xi's are constrained to have power less than P:

(1/n) Σ_{i=1}^{n} Xi² ≤ P   (6.21)

Consider an output block of size n,

Y = X + η   (6.22)

For large n, by the Law of Large Numbers,

(1/n) Σ_{i=1}^{n} ηi² = (1/n) Σ_{i=1}^{n} |yi - xi|² ≤ ση²   (6.23)

This indicates that, with large probability as n approaches infinity, Y will be located in an n-dimensional sphere of radius √(n ση²) centered about X, since |y - x|² ≤ n ση².

On the other hand, since the Xi's are power constrained and ηi and Xi are independent,

(1/n) Σ_{i=1}^{n} yi² ≤ P + ση²   (6.24)

|Y| ≤ √(n(P + ση²))   (6.25)

This means Y is in a sphere of radius √(n(P + ση²)) centered around the origin.

4: This content is available online at <http://cnx.org/content/m10180/2.10/>.
5: "Noisy Channel Coding Theorem" <http://cnx.org/content/m0073/latest/>

How many X's can we transmit so as to have nonoverlapping Y spheres in the output domain? The question is how many spheres of radius √(n ση²) fit in a sphere of radius √(n(P + ση²)):

M = (√(n(ση² + P)))^n / (√(n ση²))^n = (1 + P/ση²)^{n/2}   (6.26)

Figure 6.7

Exercise 6.1   (Solution on p. 99.)
How many bits of information can one send in n uses of the channel?

The capacity of a discrete-time Gaussian channel is therefore

C = (1/2) log2(1 + P/ση²) bits per channel use   (6.27)

When the channel is a continuous-time, bandlimited, additive white Gaussian channel with noise power spectral density N0/2, input power constraint P, and bandwidth W, the system can be sampled at the Nyquist rate 2W to provide power per sample P and noise power

ση² = ∫_{-W}^{W} (N0/2) df = W N0   (6.28)

The channel capacity is then

C = (1/2) log2(1 + P/(N0 W)) bits per transmission   (6.29)

Since the sampling rate is 2W transmissions per second,

C = W log2(1 + P/(N0 W)) bits/sec   (6.30)

Example 6.6
The capacity of the voice band of a telephone channel can be determined using the Gaussian model. The bandwidth is 3000 Hz and the signal-to-noise ratio is often 30 dB. Therefore,

C = 3000 log2(1 + 1000) ≈ 30000 bits/sec   (6.31)

One should not expect to design modems faster than 30 kb/s using this model of telephone channels. It is also interesting to note that, since the signal-to-noise ratio is large, we are expecting to transmit about 10 bits/second/Hertz across telephone channels.
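A two-line check of Example 6.6 using equation (6.30):

    from math import log2

    def awgn_capacity(bandwidth_hz, snr_linear):
        # C = W log2(1 + P / (N0 W)), equation (6.30)
        return bandwidth_hz * log2(1 + snr_linear)

    # Telephone voice band: W = 3000 Hz, SNR = 30 dB = 1000.
    print(awgn_capacity(3000, 1000))          # ~29,900 bits/sec
    print(awgn_capacity(3000, 1000) / 3000)   # ~10 bits/sec/Hz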

6.5 Channel Coding(6)

Channel coding is a viable method to reduce the information rate through the channel and increase reliability. This goal is achieved by adding redundancy to the information symbol vector, resulting in a longer coded vector of symbols that are distinguishable at the output of the channel. Another brief explanation of channel coding is offered in Channel Coding and the Repetition Code(7). We consider only two classes of codes, block codes (Section 6.5.1) and convolutional codes (Section 6.6).

6.5.1 Block codes

The information sequence is divided into blocks of length k. Each block is mapped into channel inputs of length n. The mapping is independent from previous blocks; that is, there is no memory from one block to another:

information sequence ⇒ codeword (channel input)   (6.32)

Example 6.7
k = 2 and n = 5:

00 → 00000
01 → 10100
10 → 01111
11 → 11011   (6.33)

A binary block code is completely defined by 2^k binary sequences of length n called codewords:

C = {c1, c2, ..., c_{2^k}},  ci ∈ {0, 1}^n   (6.34)

There are three key questions:
1. How can one find "good" codewords?
2. How can one systematically map information sequences into codewords?
3. How can one systematically find the corresponding information sequence from a codeword, i.e., how can we decode?

These can be done if we concentrate on linear codes and utilize finite field algebra. A block code is linear if ci ∈ C and cj ∈ C implies ci ⊕ cj ∈ C, where ⊕ is elementwise modulo-2 addition.   (6.35)

Hamming distance is a useful measure of codeword properties:

d_H(ci, cj) = number of places in which they differ   (6.36)

6: This content is available online at <http://cnx.org/content/m10174/2.11/>.
7: "Repetition Codes" <http://cnx.org/content/m0071/latest/>

Denote the codeword for the information sequence e1 = (1, 0, ..., 0)^T by g1, the codeword for e2 = (0, 1, 0, ..., 0)^T by g2, ..., and the codeword for ek = (0, ..., 0, 1)^T by gk. Then any information sequence can be expressed as

u = (u1, ..., uk)^T = Σ_{i=1}^{k} ui ei   (6.38)

and the corresponding codeword is

c = Σ_{i=1}^{k} ui gi   (6.39)

Therefore, with c ∈ {0, 1}^n and u ∈ {0, 1}^k,

c = u G   (6.40)

where

G = (g1; g2; ...; gk)   (6.41)

is a k × n matrix whose rows are the gi, and all operations are modulo 2.

Example 6.8
In Example 6.7, with

00 → 00000, 01 → 10100, 10 → 01111, 11 → 11011   (6.42)

the generator rows can be read off from the codewords of the unit information sequences:

g1 = (0, 1, 1, 1, 1)^T and g2 = (1, 0, 1, 0, 0)^T   (6.43)

and

G = ( 0 1 1 1 1
      1 0 1 0 0 )   (6.44)

Additional information about coding efficiency and error performance is provided in Block Channel Coding(8).

Examples of good linear codes include Hamming codes, BCH codes, Reed-Solomon codes, and many more. The rate of these codes is defined as k/n, and different codes have different error correction and error detection properties.

6.6 Convolutional Codes(9)

6.6.1 Convolutional codes

In convolutional codes, each block of k bits is mapped into a block of n bits, but these n bits are determined not only by the present k information bits but also by the previous information bits. This dependence can be captured by a finite state machine.

Example 6.9
A rate 1/2 convolutional coder with k = 1, n = 2, memory length 2 and constraint length 3. Since the length of the shift register is 2, the behavior of the convolutional coder can be captured by a 4-state machine, with states 00, 01, 10, 11. For example, arrival of information bit 0 transitions the coder from state 10 to state 01.

Figure 6.8

The encoding and the decoding process can be realized in a trellis structure.

8: "Block Channel Coding" <http://cnx.org/content/m0094/latest/>
9: This content is available online at <http://cnx.org/content/m10181/2.7/>.
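A quick numerical check of the block-code encoding rule c = uG of equation (6.40), using the generator matrix of Example 6.8:

    import numpy as np

    G = np.array([[0, 1, 1, 1, 1],
                  [1, 0, 1, 0, 0]])   # generator matrix from Example 6.8

    def encode(u):
        # c = uG with arithmetic modulo 2, equation (6.40)
        return np.mod(np.array(u) @ G, 2)

    for u in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(u, "->", encode(u))

The four printed codewords reproduce the mapping of Example 6.7.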

Figure 6.9

If the input sequence is 1 1 0 0, the output sequence would be 11 10 10 11. The transmitted codeword is then 11 10 10 11. If there is one error on the channel, the received sequence might be 11 00 10 11.

Figure 6.10

Starting from state 00, the Hamming distance between each possible path through the trellis and the received sequence is measured. At the end, the path with minimum distance to the received sequence is chosen as the correct trellis path, and the information sequence is then determined. Convolutional coding lends itself to very efficient trellis-based encoding and decoding. These are very practical and powerful codes.
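A sketch of the encoder and of minimum-distance decoding. The generators g1 = (101) and g2 = (111) are an assumption, chosen because they reproduce the example output 11 10 10 11 for input 1100; the brute-force search below computes the same result the Viterbi algorithm obtains efficiently on the trellis:

    from itertools import product

    def conv_encode(bits):
        # Rate-1/2, memory-2 encoder with assumed generators g1=(101), g2=(111).
        s1 = s2 = 0
        out = []
        for u in bits:
            out += [u ^ s2, u ^ s1 ^ s2]
            s1, s2 = u, s1
        return out

    def decode_min_distance(received):
        # Choose the input sequence whose codeword is closest in Hamming distance.
        n = len(received) // 2
        best = min(product([0, 1], repeat=n),
                   key=lambda u: sum(r != c
                                     for r, c in zip(received, conv_encode(list(u)))))
        return list(best)

    sent = conv_encode([1, 1, 0, 0])
    print(sent)                           # 11 10 10 11
    received = sent.copy()
    received[2] ^= 1                      # one channel error: 11 00 10 11
    print(decode_min_distance(received))  # recovers [1, 1, 0, 0]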

Solutions to Exercises in Chapter 6

Solution to Exercise 6.1 (p. 94)

log2( (1 + P/ση²)^{n/2} ) = (n/2) log2(1 + P/ση²) bits   (6.45)


Chapter 7. Chapter 6: Communication over Fading Channels

7.1 Fading Channel(1)

For most channels, where signals propagate in the atmosphere and near the ground, the free-space propagation model is inadequate to describe the channel behavior and predict system performance. In a wireless system, a signal can travel from transmitter to receiver over multiple reflective paths. This phenomenon, called multipath fading, can cause fluctuations in the received signal's amplitude, phase, and angle of arrival. Another name, scintillation, is used to describe the fading caused by physical changes in the propagating medium, such as variations in the electron density of the ionospheric layers that reflect high-frequency radio signals. Both fading and scintillation refer to a signal's random fluctuations.

7.2 Characterizing Mobile-Radio Propagation(2)

1: This content is available online at <http://cnx.org/content/m15525/1.2/>.
2: This content is available online at <http://cnx.org/content/m15528/1.2/>.

Figure 7.1: Fading channel manifestations

Figure 1 introduces an overview of fading channels. Large-scale fading represents the average power attenuation, or path loss, due to motion over large areas. This phenomenon is affected by prominent terrain contours (e.g., hills, forests, billboards, clumps of buildings, etc.) between the transmitter and receiver. Large-scale fading can be considered as a spatial average over the small-scale fluctuations of the signal. Small-scale fading refers to the dramatic changes in signal amplitude and phase that result from small changes (as small as half a wavelength) in the spatial positioning between a receiver and transmitter. Small-scale fading is called Rayleigh fading if there are multiple reflective paths and no line-of-sight signal component; otherwise it is called Rician. When a mobile radio roams over a large area, it must process signals that experience both types of fading: small-scale fading superimposed on large-scale fading.

There are three basic mechanisms that impact signal propagation in a mobile communication system:

1. Reflection occurs when a propagating electromagnetic wave impinges upon a smooth surface with very large dimensions relative to the RF signal wavelength.
2. Diffraction occurs when the propagation path between the transmitter and receiver is obstructed by a dense body with dimensions that are large relative to the RF signal wavelength. Diffraction accounts for RF energy traveling from transmitter to receiver without a line-of-sight path; it is often termed shadowing because the diffracted field can reach the receiver even when shadowed by an impenetrable obstruction.
3. Scattering occurs when a radio wave impinges on either a large, rough surface or any surface whose dimensions are on the order of the RF signal wavelength or less, causing the energy to be spread out or reflected in all directions.

Figure 7.2: Link budget considerations for a fading channel

Figure 2 is a convenient pictorial showing the various contributions that must be considered when estimating path loss for link-budget analysis in a mobile radio application: (1) mean path loss as a function of distance, due to large-scale fading; (2) near-worst-case variations about the mean path loss, or large-scale fading margin (typically 6-10 dB); and (3) near-worst-case Rayleigh or small-scale fading margin (typically 20-30 dB).

Using complex notation,

s(t) = Re{ g(t) e^{j2π fc t} }   (1)

where Re{·} denotes the real part of {·} and fc is the carrier frequency. The baseband waveform g(t) is called the complex envelope of s(t) and can be expressed as

g(t) = |g(t)| e^{jφ(t)} = R(t) e^{jφ(t)}   (2)

where R(t) = |g(t)| is the envelope magnitude and φ(t) is its phase.

In a fading environment, g(t) will be modified by a complex dimensionless multiplicative factor α(t) e^{-jθ(t)}. The modified baseband waveform can be written as α(t) e^{-jθ(t)} g(t). The magnitude of the envelope can be expressed as follows:

α(t) R(t) = m(t) r0(t) R(t)   (3)

where m(t) and r0(t) are called the large-scale-fading component and the small-scale-fading component of the envelope, respectively. Sometimes m(t) is referred to as the local mean or log-normal fading, and r0(t) is referred to as multipath or Rayleigh fading.

For the case of mobile radio, Figure 3 illustrates the relationship between the received signal power and the multiplicative factor α(t). Small-scale fading superimposed on large-scale fading can be readily identified. The typical antenna displacement between adjacent signal-strength nulls due to small-scale fading is approximately half a wavelength. In Figure 3b, the large-scale fading or local mean m(t) has been removed in order to view the small-scale fading r0(t). The log-normal fading is a relatively slow varying function of position, while the Rayleigh fading is a relatively fast varying function of position.

Figure 7.3: Large-scale fading and small-scale fading

7.3 Large-Scale Fading(3)

In general, propagation models for both indoor and outdoor radio channels indicate that the mean path loss as a function of distance follows

Lp(d) ∝ (d/d0)^n   (1)

Lp(d) [dB] = Ls(d0) [dB] + 10 n log10(d/d0)   (2)

where d is the distance between transmitter and receiver, and the reference distance d0 corresponds to a point located in the far field of the transmit antenna. Typically, d0 is taken to be 1 km for large cells, 100 m for microcells, and 1 m for indoor channels. Ls(d0) is evaluated using the free-space equation

Ls(d0) = (4π d0 / λ)²   (3)

or by conducting measurements. The value of the path-loss exponent n depends on the frequency, antenna height, and propagation environment. In free space, n is equal to 2. When obstructions are present, n is larger. In the presence of a very strong guided-wave phenomenon (like urban streets), n can be lower than 2.

Measurements have shown that the path loss Lp(d) is a random variable having a log-normal distribution about the mean distance-dependent value:

Lp(d) [dB] = Ls(d0) [dB] + 10 n log10(d/d0) + Xσ [dB]   (4)

where Xσ denotes a zero-mean Gaussian random variable (in dB) with standard deviation σ (in dB). Xσ is site and distance dependent. As can be seen from the equation, the parameters needed to statistically describe path loss due to large-scale fading, for an arbitrary location with a specific transmitter-receiver separation, are (1) the reference distance, (2) the path-loss exponent, and (3) the standard deviation of Xσ.

3: This content is available online at <http://cnx.org/content/m15526/1.2/>.
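A minimal sketch of the log-distance model of equation (4). The microcell reference distance, exponent, wavelength, and shadowing deviation are illustrative assumptions:

    import numpy as np

    def path_loss_db(d, d0=100.0, n=3.0, wavelength=0.33, sigma_db=0.0, rng=None):
        # Log-distance path loss with optional log-normal shadowing, equation (4).
        ls_d0 = 20 * np.log10(4 * np.pi * d0 / wavelength)   # free space, equation (3)
        loss = ls_d0 + 10 * n * np.log10(d / d0)
        if sigma_db > 0:
            rng = rng or np.random.default_rng()
            loss += rng.normal(0.0, sigma_db)                # shadowing term X_sigma
        return loss

    print(path_loss_db(1000.0))                   # mean loss at 1 km
    print(path_loss_db(1000.0, sigma_db=8.0))     # one shadowed realization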

7.4 Small-Scale Fading(4)

Small-scale fading refers to the dramatic changes in signal amplitude and phase that can be experienced as a result of small changes (as small as half a wavelength) in the spatial position between transmitter and receiver. In this section, we will develop the small-scale fading component r0(t). The analysis proceeds on the assumption that the antenna remains within a limited trajectory, so that the effect of large-scale fading m(t) is constant. Assume that the antenna is traveling and that there are multiple scatterer paths, each associated with a time-variant propagation delay τn(t) and a time-variant multiplicative factor αn(t). Neglecting noise, the received bandpass signal can be written as

r(t) = Σ_n αn(t) s(t - τn(t))   (1)

Substituting equation (1) of the module Characterizing Mobile-Radio Propagation into equation (1), we can write the received bandpass signal as

r(t) = Re{ Σ_n αn(t) g(t - τn(t)) e^{j2π fc (t - τn(t))} }
     = Re{ [ Σ_n αn(t) e^{-j2π fc τn(t)} g(t - τn(t)) ] e^{j2π fc t} }   (2)

So the equivalent received baseband signal is

Σ_n αn(t) e^{-j2π fc τn(t)} g(t - τn(t))   (3)

Consider the transmission of an unmodulated carrier at frequency fc, or in other words, g(t) = 1 for all time. Then the received baseband signal becomes

Σ_n αn(t) e^{-j2π fc τn(t)} = Σ_n αn(t) e^{-jθn(t)}   (4)

The baseband signal consists of a sum of time-variant components having amplitudes αn(t) and phases θn(t). Notice that θn(t) changes by 2π radians whenever τn(t) changes by 1/fc (a very small delay). These multipath components combine either constructively or destructively, resulting in amplitude variations of the received signal. Equation (3) is very important because it tells us that a bandpass signal s(t) is a signal that experiences the fading effects and gives rise to the received signal r(t); for mobile radio, these effects can be described by analyzing r(t) at the baseband level.

4: This content is available online at <http://cnx.org/content/m15531/1.1/>.
Figure 7.4

When the received signal is made up of multiple reflective rays plus a significant line-of-sight (non-faded) component, the received envelope amplitude has a Rician pdf, as below, and the fading is referred to as Rician fading:

p(r0) = (r0/σ²) exp( -(r0² + A²)/(2σ²) ) I0(r0 A/σ²)  for r0 ≥ 0, A ≥ 0;  0 otherwise   (5)

The parameter σ² is the pre-detection mean power of the multipath signal, A denotes the peak magnitude of the non-faded signal component, and I0(·) is the modified Bessel function. The Rician distribution is often described in terms of a parameter K, which is defined as the ratio of the power in the specular component to the power in the multipath signal: K = A²/(2σ²).

When the magnitude of the specular component A approaches zero, the Rician pdf approaches a Rayleigh pdf:

p(r0) = (r0/σ²) exp( -r0²/(2σ²) )  for r0 ≥ 0;  0 otherwise   (6)

The Rayleigh pdf results from having no specular signal component; it represents the pdf associated with the worst case of fading per mean received signal power.

Small-scale fading manifests itself in two mechanisms: time-spreading of the signal (or signal dispersion) and time-variant behavior of the channel (Figure 2). It is important to distinguish between two different time references: delay time τ and transmission time t. Delay time refers to the time-spreading effect resulting from the fading channel's non-optimum impulse response. The transmission time, however, is related to the motion of the antenna or spatial changes, accounting for propagation path changes that are perceived as the channel's time-variant behavior.
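A short simulation of the two envelope models above, using the standard in-phase/quadrature Gaussian representation of the multipath sum in equation (4); the values of σ and A are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(2)
    sigma = 1.0      # sqrt of the pre-detection mean multipath power
    A = 2.0          # specular amplitude; K = A^2 / (2 sigma^2) = 2
    n = 200_000

    x = rng.normal(A, sigma, n)              # specular component on the I axis
    y = rng.normal(0.0, sigma, n)
    env_rician = np.hypot(x, y)              # Rician envelope, pdf (5)
    env_rayleigh = np.hypot(rng.normal(0, sigma, n),
                            rng.normal(0, sigma, n))   # A -> 0, pdf (6)

    print("K factor:", A**2 / (2 * sigma**2))
    print("Rayleigh mean power (expect 2*sigma^2):", np.mean(env_rayleigh**2))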
7.5 Signal Time-Spreading(5)

Signal Time-Spreading Viewed in the Time-Delay Domain

A simple way to model the fading phenomenon is to adopt the notion of wide-sense stationary uncorrelated scattering. The model treats signals arriving at a receive antenna with different delays as uncorrelated. In Figure 1(a), a multipath-intensity profile S(τ) is plotted; S(τ) helps us understand how the average received power varies as a function of time delay τ. The term "time delay" is used to refer to the excess delay; it represents a signal's propagation delay that exceeds the delay of the first signal arrival at the receiver. In a wireless channel, the received signal usually consists of several discrete multipath components, giving rise to S(τ). For a single transmitted impulse, the time Tm between the first and last received components represents the maximum excess delay.

Figure 7.5

Degradation Categories due to Signal Time-Spreading Viewed in the Time-Delay Domain

In a fading channel, the relationship between the maximum excess delay time Tm and the symbol time Ts can be viewed in terms of two different degradation categories: frequency-selective fading and frequency-nonselective or flat fading.

A channel is said to exhibit frequency-selective fading if Tm > Ts. This condition occurs whenever the received multipath components of a symbol extend beyond the symbol's time duration; hence, another name for this category of fading degradation is channel-induced ISI. In the case of frequency-selective fading, mitigating the distortion is possible because many of the multipath components are resolved by the receiver.

A channel is said to exhibit frequency-nonselective or flat fading if Tm < Ts. In this case, all of the received multipath components of a symbol arrive within the symbol time duration; the components are not resolvable. There is no channel-induced ISI distortion, because the signal time-spreading does not result in significant overlap among neighboring received symbols.

Figure 7.6

Signal Time-Spreading Viewed in the Frequency Domain

A completely analogous characterization of signal dispersion can be specified in the frequency domain. In Figure 1(b), the spaced-frequency correlation function |R(Δf)| can be seen; it is the Fourier transform of S(τ). |R(Δf)| represents the correlation between the channel's response to two signals as a function of the frequency difference between the two signals. The function |R(Δf)| helps answer the question: what is the correlation between received signals that are spaced in frequency Δf = f1 - f2? |R(Δf)| can be measured by transmitting a pair of sinusoids separated in frequency by Δf, cross-correlating the complex spectra of the two separately received signals, and repeating the process many times with ever-larger separation Δf.

Note that the coherence bandwidth f0 and the maximum excess delay time Tm are related by the approximation

f0 ≈ 1/Tm   (1)

A more useful parameter is the delay spread, most often characterized in terms of its root-mean-square (rms) value, which can be calculated as

στ = ( ⟨τ²⟩ - ⟨τ⟩² )^{1/2}   (2)

where ⟨τ⟩ is the mean excess delay, ⟨τ⟩² is the mean squared, ⟨τ²⟩ is the second moment, and στ is the square root of the second central moment of S(τ).

An exact relationship between coherence bandwidth and delay spread does not exist; however, several approximate relationships have been developed using Fourier transform techniques from actual signal-dispersion measurements in various channels. If the coherence bandwidth is defined as the frequency interval over which the channel's complex frequency transfer function has a correlation of at least 0.9, then approximately

f0 ≈ 1/(50 στ)   (3)

With the dense-scatterer channel model, the coherence bandwidth is defined as the frequency interval over which the channel's complex frequency transfer function has a correlation of at least 0.5, giving

f0 ≈ 1/(2π στ)   (4)

Studies involving ionospheric effects often employ the following definition:

f0 ≈ 1/(5 στ)   (5)

The delay spread and coherence bandwidth are related to a channel's multipath characteristics, differing for different propagation paths. It is important to note that all of these parameters are independent of signaling speed; a system's signaling speed only influences its transmission bandwidth W.

Degradation Categories due to Signal Time-Spreading Viewed in the Frequency Domain

A channel is referred to as frequency-selective if f0 < 1/Ts ≈ W (the symbol rate is taken to be equal to the signaling rate or signal bandwidth W). Frequency-selective fading distortion occurs whenever a signal's spectral components are not all affected equally by the channel: components falling outside the coherence bandwidth are affected differently, compared with those components contained within the coherence bandwidth (Figure 2(a)).

Frequency-nonselective or flat-fading degradation occurs whenever f0 > W. In this case, all of the signal's spectral components are affected by the channel in a similar manner (fading or non-fading) (Figure 2(b)). Spectral components within that range are affected by the channel similarly. Flat fading does not introduce channel-induced ISI distortion, but performance degradation can still be expected due to the loss in SNR whenever the signal is fading. In order to avoid channel-induced ISI distortion, the channel is required to exhibit flat fading, provided that

f0 > W ≈ 1/Ts   (6)

Hence, the channel coherence bandwidth f0 sets an upper limit on the transmission rate that can be used without incorporating an equalizer in the receiver.

However, as a mobile radio changes its position, there will be times when the received signal experiences frequency-selective distortion even though f0 > W (Figure 2(c)). This occurs whenever the null of the channel's frequency-transfer function falls at the center of the signal band; the baseband pulse can then be especially mutilated by deprivation of its low-frequency components. Thus, even though a channel is categorized as flat fading, it still occasionally manifests frequency-selective fading.

5: This content is available online at <http://cnx.org/content/m15533/1.3/>.
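A sketch computing the rms delay spread of equation (2) and the coherence bandwidth of equation (3) from a discrete multipath-intensity profile; the delay and power values are illustrative assumptions, not from the text:

    import numpy as np

    tau = np.array([0.0, 0.5, 1.0, 2.0, 3.5])   # excess delay (microseconds)
    p = np.array([1.0, 0.6, 0.4, 0.2, 0.1])     # average power at each delay

    p = p / p.sum()
    mean_tau = np.sum(p * tau)                  # mean excess delay
    second_moment = np.sum(p * tau**2)
    sigma_tau = np.sqrt(second_moment - mean_tau**2)   # rms delay spread, eq. (2)

    f0 = 1 / (50 * sigma_tau)                   # 0.9-correlation bandwidth, eq. (3), MHz
    print(f"rms delay spread: {sigma_tau:.3f} us")
    print(f"coherence bandwidth (0.9 correlation): {f0 * 1e3:.1f} kHz")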
Examples of Flat Fading and Frequency-Selective Fading

Figure 7.7

The signal-dispersion manifestation of the fading channel is analogous to the signal spreading that characterizes an electronic filter. Figure 3(a) depicts a wideband filter (narrow impulse response) and its effect on a signal in both the time domain and the frequency domain. This filter resembles a flat-fading channel, yielding an output that is relatively free of dispersion. Figure 3(b) shows a narrowband filter (wide impulse response). Here the process resembles a frequency-selective channel; the output signal suffers much distortion, as shown in both the time domain and the frequency domain.

Figure 7.8

7.6 Mitigating the Degradation Effects of Fading(6)

Figure 1 highlights three major performance categories in terms of bit-error probability PB versus Eb/N0.

Figure 7.9

6: This content is available online at <http://cnx.org/content/m15535/1.1/>.
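The three curves of Figure 1 can be sketched numerically. The closed forms below are standard results for binary DPSK, stated as an assumption since the figure is described only qualitatively:

    import numpy as np

    eb_n0_db = np.arange(0, 41, 5)
    gamma = 10 ** (eb_n0_db / 10)

    pb_awgn = 0.5 * np.exp(-gamma)         # AWGN, the leftmost curve
    pb_rayleigh = 1 / (2 * (1 + gamma))    # Rayleigh average, the "Rayleigh limit"

    for db, a, r in zip(eb_n0_db, pb_awgn, pb_rayleigh):
        print(f"{db:>2} dB   AWGN: {a:.2e}   Rayleigh: {r:.2e}")

The Rayleigh curve falls only inversely with Eb/N0, while the AWGN curve falls exponentially; an error floor, the third category, would appear as a curve that stops falling altogether.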
in SNR. shows the performance degradation resulting Eb /N0 that is characteristic of at fading or slow fading when there is no line-of-sight signal component present. Once the signal distortion has been mitigated. The curve is a function of the reciprocal of performance will generally be bad. In such cases. If the channel introduces signal distortion as a result of fading. where the bit-error probability can level o at values nearly equal to 0. it is possible to further ameliorate the eects of fading and strive to approach AWGN system performance by using some form of diversity to provide the receiver with a collection of uncorrelated replicas of the signal. the system performance can exhibit an irreducible error rate at a level higher than the desired error rate. 2) choose a diversity type that can best approach AWGN system performance. so for practical values of Eb /N0 . This shows the severe performance degrading eects that are possible with frequency-selective fading or fast fading.5. The mitigation method depends on whether the distortion is caused by frequency-selective fading or fast fading. and by using a powerful error-correction code. sometimes called an error oor. referred to as the from a loss in Rayleigh limit. Next. The mitigation approaches to be used when designing a system should be considered in two basic steps: . the PB versus Eb /N0 performance can transition from the awful category to the merely bad Rayleigh-limit curve. Figure 2 lists several mitigation techniques for combating the eects of both signal distortion and loss 1) choose the type of mitigation to reduce or remove any distortion degradation. The middle curve. The curve that reaches an irreducible error-rate level. represents awful performance. the only approach available for improving performance is to use some forms of mitigation to remove or reduce the signal distortion. Eb /N0 .115 performance can be expected.

7 Mitigation to Combat Frequency-Selective Distortion back into its original time interval. 7 Equalization can mitigate the eects of channel-induced ISI brought on by frequency-selective fading. If the channel is frequency selective. The goal is for the combination of channel and equalizer lter to provide a at composite-received frequency response and linear phase. It can help modify system performance described by the curve that is awful to the one that is merely bad. the equalizer lters must be The decision feedback equalizer (DFE) involves: adaptive equalizers. 2) a feedback section that removes energy remaining from previously detected symbols. The process of equalizing for mitigating ISI eects involves using methods to gather the dispersed symbol energy An equalizer is an inverse lter of the channel. the ISI that it induces on future symbols can be estimated and subtracted before the detection of subsequent symbols. The basic idea behind the DFE is that once an information symbol has been detected. the equalizer enhances the frequency components with small amplitudes and attenuates those with large amplitudes.116 CHAPTER 7.1/>. 1) a feedforward section that is a linear transversal lter whose stage length and tap weights are selected to coherently combine virtually all of the current symbol's energy. 7 This content is available online at <http://cnx.10 7. CHAPTER 6: COMMUNICATION OVER FADING CHANNELS Figure 7. . Because the channel response varies with time.org/content/m15537/1.

and ISI is a type of interference. Pilot f0 . provided that the hopping rate is at least equal to the symbol rate. it is often referred to as the Viterbi equalizer. The MLSE is optimal in the sense that it minimizes the probability of a sequence error. or in the time domain as digital sequences that can also provide information about the channel state and thus improve performance in fading conditions. A channel that is classied as at fading can occasionally exhibit frequency-selective distortion when the null of the channel's frequency-transfer function occurs at the center of the signal band. r (t) = Ax (t) g (t) cos (2πf c t) + αAx (t − τ ) g (t − τ ) cos (2πf c t + θ) where x (t) is the data signal. and each one is modulated by a dierent symbol group. thus avoiding the interference by changing the receiver band position before the arrival of the multipath signal. This requires the spread-spectrum bandwidth chip rate Rch ). goal is to reduce the symbol rate (signaling rate).117 A maximum-likelihood sequence estimation (MLSE) equalizer: tests all possible data sequences and chooses the data sequence that is the most probable of all the candidates. Pilot signals can be implemented in the frequency domain as in-band tones. assumed to be uniformly distributed in the range (0. The use of DS/SS is a practical way of mitigating such distortion because the wideband SS signal can span many lobes of the selectively faded channel frequency response. can be used for signal transmission in frequency-selective fading channels to avoid the use of an equalizer by lengthening the symbol duration. Consider a DS/SS binary and one reected path. If τ is greater than the chip duration. signal is the name given to a signal intended to facilitate the coherent detection of waveforms. and τ is the dierential time delay between the two paths. the more eective will be the mitigation. If the receiver is synchronized to the direct path signal. wave that is delayed by expressed as follows: Assume that the propagation from transmitter to receiver results in a multipath τ compared to the direct wave. The receiver multiplies the incoming r (t) by the code g (t). Even though channel-induced ISI is typically transparent to DS/SS systems. can be ISI distortion because the hallmark of spread-spectrum systems is their capability of rejecting interference. Direct-sequence spread-spectrum (DS/SS) techniques can be used to mitigate frequency-selective phase-shift keying (PSK) communication channel comprising one direct path r (t). to be greater than the coherence bandwidth f0 . multiplication by the code signal yields the following: r (t) g (t) = Ax (t) g 2 (t) cos (2πf c t) + αAx (t − τ ) g (t) g (t − τ ) cos (2πf c t + θ) 2 where g (t) = 1. The need to gather this lost energy belonging to a received chip was the motivation for developing the Rake receiver. The approach is to partition (demultiplex) a high symbol-rate sequence into contains a sequence of a lower symbol rate (by the factor is made up of N symbol groups. coherence bandwidth W ≈ 1/Ts . The received signal. The signal band on each carrier to be less than the channel's N orthogonal carrier waves. and α is the attenuation of the multipath signal relative to the direct path signal. Frequency-hopping spread-spectrum (FH/SS): can be used to mitigate the distortion caused by Orthogonal frequency-division multiplexing (OFDM): 1/N ) frequency-selective fading. . 
FH receivers avoid the degradation eects due to multipath by rapidly changing in the transmitter carrier-frequency band. 2π). neglecting noise. Thus. then | g (t) g (t − τ ) dt |≤| g 2 (t) dt | over some appropriate interval of integration (correlation). Since the MLSE equalizer is implemented by using Viterbi decoding algorithm. The angle θ is a random phase. the spread spectrum system eectively eliminates the multipath interference by virtue of its code-correlation receiver. such systems suer from the loss in energy contained in the multipath components rejected by the receiver. so that each group The than the original sequence. g (t) is the pseudonoise (PN) spreading code. The larger the ratio of Wss to Wss (or the f0 .

and canceling lter. the error oor will be lowered compared to the uncoded case. The term diversity is used to denote the various methods available for providing the receiver with uncorrelated renditions of the signal of interest. fd ≈ 1/T0 . multipath components are rejected if they are time-delayed by more than the duration of one chip. separated by a distance of at least 10 wavelengths when located at a base station (and less when located at a mobile unit).org/content/m15536/1. • Spread spectrum: In spread-spectrum systems. to be greater than the fading rate. redundancy. an expanded bandwidth can improve system performance (via diversity) only if the frequencyselective distortion that the diversity may have introduced is mitigated. interleaving is a form of time diversity. because instead of providing more signal energy. and thus help preserve its orthogonality. the delayed signals do not contribute to the fading. This achieves is made larger than frequency diversity of the order Whenever W L = W/f0 . Thus. Systems have also been implemented with multiple transmitters. .org/content/m15538/1. The process introduces known ISI and adjacent channel interference (ACI) which are then removed by a post-processing equalizer 7. However. however. diversity: transmit the signal on f0 .9 Mitigation to Combat Loss in SNR 9 Until this point. f0 . there is the potential for frequency-selective distortion unless mitigation in the form of equalization is provided. In the case of Direct-Sequence Spread-Spectrum (DS/SS). a code reduces the required Eb /N0 . it is necessary to compensate for the loss in energy contained in those rejected components. Fast fading. The signal bandwidth L dierent carriers with frequency separation of at least W is expanded so as to be greater than thus providing the receiver with several independently-fading signal replicas. will typically degrade conventional OFDM because the Doppler spreading corrupts the orthogonality of the OFDM subcarriers. and reduces the detector integration time. • Frequency f0 . The Rake receiver makes it possible to coherently combine the energy from several of the multipath The components arriving along dierent paths (with sucient dierential delay). content is available online at <http://cnx. Bandwidth expansion is a form of frequency diversity. CHAPTER 6: COMMUNICATION OVER FADING CHANNELS 7. in order to approach AWGN performance. use a robust modulation (non-coherent or dierentially coherent) that does W ≈ 1/Ts .118 CHAPTER 7. Increase the symbol rate. we have considered the mitigation to combat frequency-selective and fast-fading distortions. GSM system uses slow FH (217 hops/s) to compensate for cases in which the mobile unit is moving very slowly (or not at all) and experiences deep fading due to a spectral null. When used along with error-correction coding. but to interchip interference.1/>. For a given Eb /N0 with coding present.8 Mitigation to Combat Fast-Fading Distortion • • • not require phase tracking. Signal-processing techniques must be employed to choose the best antenna output or to coherently combine all the outputs. each at a dierent location. When fast-fading distortion and frequency-selective distortion occur simultaneously. • Frequency-hopping spread-spectrum (FH/SS) is sometimes used as a diversity mechanism. • Spatial diversity is usually accomplished through the use of multiple receive antennas. 8 For fast-fading distortion. 8 This 9 This content is available online at <http://cnx. 
the frequency-selective distortion can be mitigated by the use of an OFDM signal set. The next step is to use diversity methods to move the system operating point from the error-performance curve labeled as bad to a curve that approaches AWGN performance. by adding signal Error-correction coding and interleaving can provide mitigation.1/>. Some of the ways in which diversity methods can be implemented are: • Time diversity: transmit the signal on L dierent time slots with time separation of at least T0 . Spread spectrum is a bandwidth-expansion technique that excels at rejecting interfering signals. A polyphase ltering technique is used to provide time-domain shaping and partial-response coding to reduce the spectral sidelobes of the signal set.

. the probability that this threshold will be exceeded). The bit-error-probability.905 10 This 1 content is available online at <http://cnx.9999 Without diversity. averaged through all the ups and downs of the fading experience in a slow-fading channel is as follows: PB = where where PB (x) p (x) dx PB (x) is the bit-error probability for a given modulation scheme x = α2 Eb /N0 . i = 1.2 × 10−5 = 0. we can say that P (γi > 10 dB) = 1 − 8. chi- 1 x p (x) = Γ exp − Γ x ≥ 0 where Γ = α2 Eb /N0 is the SNR averaged through the ups and downs of fading. 2.1)] = 0.10 Diversity Techniques techniques.1. and we assume that each branch has the same average SNR given by Γ.1)] = 8. and consequently x. γ4 ≤ 10 dB) = [1 − exp (−0.. as follows: P (γ1 .095 = 0. Rayleigh distribution so that α2 . have a = x. 10 This section shows the error-performance improvements that can be obtained with the use of diversity PB .119 • Polarization diversity is yet another way to achieve additional uncorrelated samples of the signal.1/>.. 7. γM ≤ γ) = 1 − exp − Γ The probability that any single branch achieves SNR > γ is: M γ P (γi > γ) = 1 − 1 − exp − Γ This is the probability of exceeding a threshold when selection diversity is used. Compare the results to the case when no diversity is used. M . determine the probability that all four branches are received simultaneously with an SNR less than 10 dB (and also. and p (x) is the pdf of x due to the fading conditions. α is Rayleigh fading. . Error-correction coding coupled with interleaving is probably the most prevalent of the mitigation schemes used to provide improved system performance in a fading environment... α has a at a specic value of SNR With Eb and N0 constant. P (γ1 ≤ 10 dB) = [1 − exp (−0. Error-correction coding represents a unique mitigation technique. because instead of providing more signal energy it reduces the required Eb /N0 needed to achieve a desired performance level.095 P (γ1 > 10 dB) = 1 − 0. If the average SNR is Assume that four-branch diversity is used. Example: Benets of Diversity fading signal. and that each branch receives an independently Rayleigh- Γ = 20 dB.org/content/m15540/1. Solution With γ = 10 dB.2 × 10−5 or.. has an instantaneous SNR = γi . then 1 p (γi ) = Γ exp − γi γi ≥ 0 Γ The probability that a single branch has SNR less than some threshold γ is: γ γ 1 P (γi ≤ γ) = 0 p (γi ) dγ i = 0 Γ exp − γi dγ i Γ γ = 1 − exp − Γ The probability that all M independent signal diversity branches are received simultaneously with an SNR less than some threshold value γ is: M γ P (γ1 . squared distribution: For used to represent the amplitude variations due to fading. and γ/Γ = 10 dB − 20 dB = −10 dB = 0. γ3 . using selection diversity. γ2 . . . If each diversity (signal) branch. 4 we solve for the probability that the SNR will drop below 10 dB. • Some techniques for improving the loss in SNR in a fading channel are more ecient and more powerful than repetition coding.

M M i=1 γi = i=1 Γ = MΓ where we assume that each branch has the same average SNR given by γM = γi = Γ. the M signals are scanned in a xed sequence until one is found that sending the largest one to the demodulator.org/content/m15539/1. maximal-ratio combining can produce an acceptable average SNR. Table 1 shows the Doppler spread versus vehicle speed at carrier frequencies of 900 MHz and 1800 MHz. Equal-gain combining is similar to maximal-ratio combining except that the weights are all set to unity. With exceeds a given threshold.org/content/m15541/1. MPSK with M = 8 or larger should be avoided. feedback. In slow Assume that the carrier frequency is 1800 MHz and that the velocity of the vehicle is 50 miles/hr (80 km/hr).1/>. the preferred choice for a signaling scheme is a frequency or phase-based modulation orthogonal FSK modulation for fading channels.12 Modulation Types for Fading Channels type.3 kilosymbols/s. equal gain. for fading channels. maximal ratio. The individual signals must be cophased before being summed. Calculate the phase variation per symbol for the case of signaling with QPSK modulation at the rate of 24. Selection-diversity combining is relatively easy to implement but not optimal because it does not make use of all the received signals simultaneously.1 dB of each other. higher-order modulation alphabets perform poorly.120 CHAPTER 7. In maximal-ratio combining. 7. The possibility of achieving an acceptable output SNR from a number of unacceptable inputs is still retained.1/>. In considering PSK modulation for fading channels. Thus. 12 An amplitude-based signaling scheme such as amplitude shift keying (ASK) or quadrature amplitude modulation (QAM) is inherently vulnerable to performance degradation in a fading environment. the signals from all of the M branches are weighted according to their individual SNRs and then summed. . This one becomes the chosen signal until it falls below the established threshold. content is available online at <http://cnx. Repeat for a vehicle speed of 100 miles/hr. binary DPSK and 8-FSK perform within 0. even when none of the individual i γ is acceptable. The error performance of this technique is somewhat inferior to the other methods. the use of MFSK with M = 8 or larger Rayleigh fading channels. but feedback is quite simple to implement. In considering is useful because its error performance is better than binary signaling. and feedback or scanning diversity. Thus. Maximal-ratio combining produces an average SNR as shown below: γM equal to the sum of the individual average SNRs. It uses each of the M branches in a cophased and weighted manner such that the largest possible SNR is available at the receiver. and the scanning process starts again.11 Diversity-Combining Techniques and 11 The most common techniques for combining diversity signals are selection. CHAPTER 6: COMMUNICATION OVER FADING CHANNELS 7. Table 1 11 This 12 This content is available online at <http://cnx. Example: Phase Variations in a Mobile Communication System The Doppler spread fd = V /λ shows that the fading rate is a direct function of velocity. Selection combining used in spatial diversity systems involves the sampling of M antenna signals. The performance is marginally inferior to maximal ratio combining.

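A numerical check of the selection-diversity example in Section 7.10 (Γ = 20 dB, threshold γ = 10 dB), plus a Monte-Carlo confirmation with exponentially distributed branch SNRs:

    import numpy as np

    gamma_over_Gamma = 0.1                    # -10 dB
    p_single = 1 - np.exp(-gamma_over_Gamma)  # one branch below threshold
    for M in (1, 4):
        p_all = p_single ** M
        print(f"M = {M}: P(all below 10 dB) = {p_all:.2e}, "
              f"P(threshold exceeded) = {1 - p_all:.5f}")

    rng = np.random.default_rng(3)
    snr = rng.exponential(scale=100.0, size=(1_000_000, 4))   # Gamma = 100 (20 dB)
    print("simulated:", np.mean(np.all(snr < 10.0, axis=1)))  # ~8.2e-5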
Example: Phase Variations in a Mobile Communication System
The Doppler spread fd = V/λ shows that the fading rate is a direct function of velocity. Table 1 shows the Doppler spread versus vehicle speed at carrier frequencies of 900 MHz and 1800 MHz. Assume that the carrier frequency is 1800 MHz and that the velocity of the vehicle is 50 miles/hr (80 km/hr). Calculate the phase variation per symbol for the case of signaling with QPSK modulation at the rate of 24.3 kilosymbols/s. Repeat for a vehicle speed of 100 miles/hr.

Table 7.1: Doppler spread versus vehicle speed

    Velocity              Doppler (Hz)            Doppler (Hz)
    miles/hr | km/hr      900 MHz (λ = 33 cm)     1800 MHz (λ = 16.6 cm)
       3     |   5                4                       8
      20     |  32               27                      54
      50     |  80               66                     132
      80     | 129              106                     212
     120     | 193              160                     320

Solution
At a velocity of 50 miles/hr, the table gives fd = 132 Hz at 1800 MHz, so the phase variation per symbol is

Δθ/symbol = (fd / Rs) × 360° = (132 Hz / 24.3 × 10³ symbols/s) × 360° ≈ 2°/symbol

At a velocity of 100 miles/hr:

Δθ/symbol ≈ 4°/symbol

Thus, it should be clear why MPSK with a value of M > 4 is not generally used to transmit information in a multipath environment.
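A small helper reproducing the example's arithmetic:

    def doppler_phase_per_symbol(v_mph, carrier_hz, symbol_rate):
        # Doppler spread fd = V / lambda, and phase change per symbol
        # delta_theta = (fd / Rs) * 360 degrees, as in the example above.
        c = 3.0e8
        v = v_mph * 0.44704                 # miles/hr -> m/s
        fd = v * carrier_hz / c             # = V / lambda
        return fd, 360.0 * fd / symbol_rate

    for v in (50, 100):
        fd, dtheta = doppler_phase_per_symbol(v, 1800e6, 24.3e3)
        print(f"{v} miles/hr: fd = {fd:.0f} Hz, {dtheta:.1f} deg/symbol")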

It should be apparent that an interleaver having the largest ratio of TIL/T0 is the best-performing (it yields a small decoded BER even from a relatively large demodulated BER). Thus, TIL/T0 should be some large number, say 1,000 or 10,000. However, in a real-time communication system this is not possible, because the inherent time delay associated with such an interleaver would be excessive. The previous section shows that for a cellular telephone system with a carrier frequency of 900 MHz, a TIL/T0 ratio of 10 is about as large as one can implement without suffering excessive delay.

Note that the interleaver provides no benefit against multipath unless there is motion between the transmitter and receiver (or motion of objects within the signal-propagating paths); the benefit of an interleaver is enhanced with increased speed, that is, it becomes more effective at higher speeds. Figure 2 shows that communications degrade with increased speed of the mobile unit (the fading rate increases). These are the results of field testing performed on a CDMA system meeting the Interim Specification 95 (IS-95), over a link comprising a moving vehicle and a base station. The system error-performance over a fading channel typically degrades with increased speed because of the increase in Doppler spread or fading rapidity; however, the action of an interleaver in the system provides mitigation, which becomes more effective at higher speeds.

[Figure 7.12: Typical Eb/N0 performance versus vehicle speed for 850 MHz links, to achieve a frame-error rate of 1 percent over a Rayleigh channel with two independent paths; figure not reproduced.]
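A block interleaver makes the time-diversity idea concrete: coded symbols are written into a matrix by rows and read out by columns, so a deep fade that wipes out a contiguous run of transmitted symbols is broken into isolated errors that a convolutional decoder can handle. The sketch below is a toy illustration under assumed dimensions, not the actual IS-95 interleaver:

    import numpy as np

    rows, cols = 4, 8                          # assumed interleaver dimensions
    n = rows * cols
    data = np.arange(n)                        # one block of coded channel symbols

    # Interleave: write into a rows-x-cols array by rows, read out by columns.
    perm = np.arange(n).reshape(rows, cols).T.ravel()
    tx = data[perm]

    fade = np.zeros(n, dtype=bool)             # a deep fade corrupts 4 consecutive
    fade[10:14] = True                         # symbols of the *transmitted* stream

    inv = np.argsort(perm)                     # deinterleaver = inverse permutation
    assert np.array_equal(tx[inv], data)       # symbols come back in order
    print("burst lands at:", np.flatnonzero(fade[inv]))   # prints [ 3 11 18 26 ]

The transmitted burst covering positions 10 through 13 reappears after deinterleaving at the well-separated positions 3, 11, 18, and 26, which is exactly the dispersal of errors that lets the decoder succeed.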

7.14 Application of Viterbi Equalizer in GSM System
(This content is available online at <http://cnx.org/content/m15544/1.1/>.)

The GSM time-division multiple access (TDMA) frame in Figure 1 has a duration of 4.615 ms and comprises 8 slots, one assigned to each active mobile user. A normal transmission burst occupying one time slot contains 57 message bits on each side of a 26-bit midamble, called a training or sounding sequence. The slot-time duration is 0.577 ms (or the slot rate is 1733 slots/s). The purpose of the midamble is to assist the receiver in estimating the impulse response of the channel adaptively (during the time duration of each 0.577 ms slot). For the technique to be effective, the fading characteristics of the channel must not change appreciably during the time interval of one slot.

[Figure 7.13: The GSM TDMA frame and the normal transmission burst occupying one time slot; figure not reproduced.]

Consider a GSM receiver used aboard a high-speed train, traveling at a constant velocity of 200 km/hr (55.56 m/s). Assume the carrier frequency to be 900 MHz (the wavelength is λ = 0.33 m). The time needed to traverse the distance λ/2 ≈ 0.165 m, which corresponds approximately to the coherence time, is

$$T_0 \approx \frac{\lambda/2}{V} \approx 3\ \text{ms}$$

Therefore, the channel coherence time is more than five times greater than the slot time of 0.577 ms; the time needed for a significant change in channel fading characteristics is relatively long compared to the time duration of one slot.

The GSM symbol rate (or bit rate, since the modulation is binary) is 271 kilosymbols/s, and the bandwidth, W, is 200 kHz. Since the typical rms delay spread στ in an urban environment is on the order of 2 µs, the resulting coherence bandwidth is

$$f_0 \approx \frac{1}{5\sigma_\tau} \approx 100\ \text{kHz}$$

Since f0 < W, the GSM receiver must utilize some form of mitigation to combat frequency-selective distortion. To accomplish this goal, the Viterbi equalizer is typically implemented.
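The slot-level numbers quoted above follow from two rule-of-thumb formulas, T0 ≈ (λ/2)/V for coherence time and f0 ≈ 1/(5στ) for coherence bandwidth. The sketch below (illustrative only) reproduces them:

    C = 3e8                          # speed of light, m/s

    fc = 900e6                       # carrier frequency, Hz
    lam = C / fc                     # wavelength ~ 0.33 m
    v = 200 / 3.6                    # 200 km/hr in m/s (~55.6 m/s)

    T0 = (lam / 2) / v               # coherence time ~ 3 ms (half-wavelength rule)
    slot = 0.577e-3                  # GSM slot duration, s
    print(f"T0 = {T0 * 1e3:.1f} ms  ({T0 / slot:.1f} slot times)")

    sigma_tau = 2e-6                 # urban rms delay spread, s
    f0 = 1 / (5 * sigma_tau)         # coherence bandwidth ~ 100 kHz
    W = 200e3                        # GSM signal bandwidth, Hz
    print(f"f0 = {f0 / 1e3:.0f} kHz vs W = {W / 1e3:.0f} kHz")
    if f0 < W:
        print("f0 < W: frequency-selective fading -> equalization required")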

Figure 2 shows the basic functional blocks used in a GSM receiver for estimating the channel impulse response. This estimate is used to provide the detector with channel-corrected reference waveforms (the Viterbi algorithm is used in the final step to compute the MLSE of the message bits).

[Figure 7.14: Functional blocks of the GSM receiver used to estimate the channel impulse response from the midamble; figure not reproduced.]

Let str(t) be the transmitted midamble training sequence, and rtr(t) the corresponding received midamble training sequence. We have

$$r_{tr}(t) = s_{tr}(t) * h_c(t)$$

At the receiver, since rtr(t) is part of the received normal burst, it is extracted and sent to a filter having impulse response hmf(t) that is matched to str(t). This matched filter yields at its output an estimate of hc(t), denoted he(t):

$$h_e(t) = r_{tr}(t) * h_{mf}(t) = s_{tr}(t) * h_c(t) * h_{mf}(t) = h_c(t) * R_s(t)$$

where Rs(t) = str(t) * hmf(t) is the autocorrelation function of str(t). If str(t) is designed to have a highly-peaked (impulse-like) autocorrelation function Rs(t), then he(t) ≈ hc(t).

Next, we use a windowing function, w(t), to truncate he(t) to form a computationally affordable function, hw(t). The time duration of w(t), denoted L0, must be large enough to compensate for the effect of typical channel-induced ISI. The term L0 consists of the sum of two contributions: LCISI, corresponding to the controlled ISI caused by Gaussian filtering of the baseband waveform (which then modulates the carrier using MSK), and LC, corresponding to the channel-induced ISI caused by multipath propagation; thus, L0 = LCISI + LC. The GSM system is required to provide distortion mitigation caused by signal dispersion having delay spreads of approximately 15-20 µs. Since in GSM the bit duration is 3.69 µs, we can express L0 in units of bit intervals; thus, the Viterbi equalizer used in GSM has a memory of 4-6 bit intervals. For each L0-bit interval in the message, the function of the Viterbi equalizer is to find the most likely L0-bit sequence out of the 2^L0 possible sequences that might have been transmitted.

Determining the most likely transmitted L0-bit sequence requires that 2^L0 meaningful reference waveforms be created by disturbing the 2^L0 ideal waveforms (generated at the receiver) in the same way that the channel has disturbed the transmitted slot. Therefore, the 2^L0 reference waveforms are convolved with hw(t), the windowed estimate of the channel impulse response, in order to generate the disturbed or so-called channel-corrected reference waveforms. Next, before the comparison takes place, the received data waveforms are convolved with the known windowed autocorrelation function w(t)Rs(t), transforming them in a manner comparable to the transformation applied to the reference waveforms. This filtered message signal is compared to all possible 2^L0 channel-corrected reference signals, and metrics are computed in a manner similar to that used in the Viterbi decoding algorithm. It yields the maximum likelihood estimate of the transmitted data sequence.
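In discrete time, the midamble-based channel estimation described above is a single correlation followed by windowing. The sketch below is a simplified illustration with an invented ±1 training sequence and an assumed channel, not the actual GSM midamble:

    import numpy as np

    rng = np.random.default_rng(1)
    s_tr = rng.choice([-1.0, 1.0], size=26)    # stand-in +/-1 training sequence (26 bits)
    h_c = np.array([1.0, 0.6, 0.0, 0.3, 0.1])  # assumed multipath channel impulse response

    r_tr = np.convolve(s_tr, h_c)              # received midamble: s_tr * h_c
    h_mf = s_tr[::-1]                          # filter matched to s_tr

    # h_e = r_tr * h_mf = h_c * Rs, with Rs the autocorrelation of s_tr;
    # normalize by Rs(0) = sum(s_tr^2) so the main peak has unit scale.
    h_e = np.convolve(r_tr, h_mf) / np.dot(s_tr, s_tr)

    peak = np.argmax(np.abs(h_e))              # location of the Rs(0) peak
    L0 = 6                                     # window length, in bit intervals (4-6 typical)
    h_w = h_e[peak : peak + L0]                # windowed estimate h_w(t)
    print(np.round(h_w, 2))                    # ~ h_c, plus autocorrelation sidelobes

Because a length-26 random sequence has an imperfect (merely "nearly impulse-like") autocorrelation, the printed estimate matches h_c only approximately; the actual GSM training sequences are chosen for especially favorable autocorrelation properties.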

7.15 Application of Rake Receiver in CDMA System
(This content is available online at <http://cnx.org/content/m15534/1.2/>.)

Interim Specification 95 (IS-95) describes a Direct-Sequence Spread-Spectrum (DS/SS) cellular system that uses a Rake receiver to provide path diversity for mitigating the effects of frequency-selective fading. The Rake receiver searches through the different multipath delays for code correlation and thus recovers delayed signals, which are then optimally combined with the outputs of the other independent correlators.

Figure 1 shows the power profiles associated with the five chip transmissions of the code sequence 1 0 1 1 1. Each abscissa shows three components arriving with delays τ1, τ2, and τ3. Assume that the intervals between the transmission times ti and the intervals between the delay times τi are each one chip in duration. The component arriving at the receiver at time t−4, with delay τ1, is time-coincident with two others, namely the components arriving at times t−3 and t−2 with delays τ2 and τ3, respectively. Since in this example the delayed components are separated by at least one chip time, they can be resolved.

At the receiver, there must be a sounding device dedicated to estimating the τi delay times. Note that the fading rate in a mobile radio system is relatively slow (on the order of milliseconds), or equivalently, the channel coherence time is large compared to the chip time duration (T0 > Tch). Hence, the changes in τi occur slowly enough that the receiver can readily adapt to them.

Once the τi delays are estimated, a separate correlator is dedicated to recovering each resolvable multipath component. In this example, there would be three such dedicated correlators, each one processing a delayed version of the same chip sequence 1 0 1 1 1. Each correlator receives chips with power profiles represented by the sequence of components shown along a diagonal line in Figure 7.15, and each correlator attempts to correlate these arriving chips with the same appropriately synchronized PN code. At the end of a symbol interval (typically there may be hundreds or thousands of chips per symbol), the outputs of the correlators are coherently combined, and a symbol detection is made.

[Figure 7.15: Delayed versions of the chip sequence 1 0 1 1 1 as seen by the three correlators, one diagonal per resolvable path; figure not reproduced.]

For simplicity, the chips are all shown as positive signaling elements; in reality, these chips form a pseudonoise (PN) sequence, which of course contains both positive and negative pulses. The interference-suppression capability of DS/SS systems stems from the fact that a code sequence arriving at the receiver time-shifted by merely one chip will have very low correlation to the particular PN code with which the sequence is correlated. Therefore, any code chips that are delayed by one or more chip times will be suppressed by the correlator; the delayed chips only contribute to raising the interference level (correlation sidelobes). Without the Rake receiver, this multipath energy would be transparent and therefore lost to the DS/SS receiver. The mitigation provided by the Rake receiver can be termed path diversity, since it allows the energy of a chip that arrives via multiple paths to be combined coherently.
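The two properties the Rake receiver relies on, a sharply peaked PN autocorrelation (so a one-chip offset decorrelates) and coherent combining of the per-path correlator outputs, can be seen in a few lines. The sketch below is a toy illustration with an assumed short random code and chip-spaced paths, not the IS-95 waveform:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 64                                     # chips per symbol (assumed)
    pn = rng.choice([-1.0, 1.0], size=N)       # stand-in PN spreading code

    delays = [0, 1, 2]                         # chip-spaced path delays tau_i
    gains = np.array([1.0, 0.7, 0.4])          # path gains (real-valued for clarity)

    rx = np.zeros(N + 8)                       # received chips for one "+1" symbol
    for d, a in zip(delays, gains):
        rx[d : d + N] += a * pn                # sum of delayed, scaled replicas

    # One correlator ("finger") per resolvable path, synchronized to its delay.
    fingers = np.array([np.dot(rx[d : d + N], pn) / N for d in delays])
    print("finger outputs    :", np.round(fingers, 2))   # ~ the path gains

    # A code offset by one extra chip correlates weakly: interference suppression.
    print("offset correlation:", round(float(np.dot(rx[3 : 3 + N], pn) / N), 2))

    # Combine the finger outputs (maximal-ratio style) before the symbol decision.
    print("combined statistic:", round(float(np.dot(fingers, gains)), 2))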

Glossary

A

antipodal
Signals $s_1(t)$ and $s_2(t)$ are antipodal if $\forall t \in [0, T]: s_2(t) = -s_1(t)$.

Autocorrelation
The autocorrelation function of the random process $X_t$ is defined as
$$R_X(t_2, t_1) = E[X_{t_2} X_{t_1}] = \begin{cases} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_2 x_1\, f_{X_{t_2},X_{t_1}}(x_2, x_1)\,dx_1\,dx_2 & \text{if continuous} \\ \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} x_l x_k\, p_{X_{t_2},X_{t_1}}(x_l, x_k) & \text{if discrete} \end{cases} \quad (2.50)$$

Autocovariance
The autocovariance of a random process is defined as
$$C_X(t_2, t_1) = E[(X_{t_2} - \mu_X(t_2))(X_{t_1} - \mu_X(t_1))] = R_X(t_2, t_1) - \mu_X(t_2)\,\mu_X(t_1) \quad (2.60)$$

B

biorthogonal
Signals $s_1(t), s_2(t), \ldots, s_M(t)$ are biorthogonal if $s_1(t), \ldots, s_{M/2}(t)$ are orthogonal and $s_m(t) = -s_{M/2+m}(t)$ for some $m \in \{1, 2, \ldots, M/2\}$.

C

Conditional Entropy
The conditional entropy of the random variable $X$ given the random variable $Y$ is defined by
$$H(X|Y) = -\sum_i \sum_j p_{X,Y}(x_i, y_j)\, \log p_{X|Y}(x_i|y_j) \quad (3.8)$$

Continuous Random Variable
A random variable $X$ is continuous if the cumulative distribution function can be written in an integral form, $F_X(b) = \int_{-\infty}^{b} f_X(x)\,dx$, where $f_X(x)$ is the probability density function (pdf) (e.g., $F_X(x)$ is differentiable and $f_X(x) = \frac{d}{dx}F_X(x)$). (2.14)

Crosscorrelation
The crosscorrelation function of a pair of random processes is defined as
$$R_{XY}(t_2, t_1) = E[X_{t_2} Y_{t_1}] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f_{X_{t_2},Y_{t_1}}(x, y)\,dx\,dy \quad (2.61)$$
with crosscovariance
$$C_{XY}(t_2, t_1) = R_{XY}(t_2, t_1) - \mu_X(t_2)\,\mu_Y(t_1) \quad (2.62)$$

Cumulative distribution
The cumulative distribution function of a random variable $X$ is a function $F_X: \mathbb{R} \to \mathbb{R}$ such that
$$F_X(b) = Pr[X \le b] = Pr[\{\omega \in \Omega \mid X(\omega) \le b\}] \quad (2.13)$$

D

Discrete Random Variable
A random variable $X$ is discrete if it only takes at most countably many values (i.e., $F_X(\cdot)$ is piecewise constant). The probability mass function (pmf) is defined as
$$p_X(x_k) = Pr[X = x_k] = F_X(x_k) - \lim_{x \to x_k,\ x < x_k} F_X(x) \quad (2.15)$$

E

Entropy
1. The entropy (average self-information) of a discrete random variable $X$ is a function of its probability mass function and is defined as
$$H(X) = -\sum_{i=1}^{N} p_X(x_i)\, \log p_X(x_i) \quad (3.3)$$
where $N$ is the number of possible values of $X$ and $p_X(x_i) = Pr[X = x_i]$. If the log is base 2, then the unit of entropy is bits.
2. Entropy is a measure of uncertainty in a random variable and a measure of the information it can reveal. A more basic explanation of entropy is provided in the module "Entropy" <http://cnx.org/content/m0070/latest/>.

Entropy Rate
The entropy rate of a stationary discrete-time random process is defined by
$$H = \lim_{n \to \infty} H(X_n | X_1 X_2 \ldots X_{n-1}) \quad (3.12)$$
The limit exists and is equal to
$$H = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \ldots, X_n) \quad (3.13)$$
The entropy rate is a measure of the uncertainty of information content per output symbol of the source.

F

First-order stationary process
If $F_{X_t}(b)$ is not a function of time, then $X_t$ is called a first-order stationary process.

G

Gaussian process
A process with mean $\mu_X(t)$ and covariance function $C_X(t_2, t_1)$ is said to be a Gaussian process if any vector $X = (X_{t_1}, X_{t_2}, \ldots, X_{t_N})^T$ formed by sampling the process is a Gaussian random vector, that is,
$$f_X(x) = \frac{1}{(2\pi)^{N/2}\,(\det \Sigma_X)^{1/2}}\; e^{-\frac{1}{2}(x - \mu_X)^T \Sigma_X^{-1} (x - \mu_X)} \quad (2.63)$$
for all $x \in \mathbb{R}^N$, where $\mu_X = (\mu_X(t_1), \ldots, \mu_X(t_N))^T$ and $\Sigma_X$ is the covariance matrix with entries $[\Sigma_X]_{ij} = C_X(t_i, t_j)$. The complete statistical properties of $X_t$ can be obtained from the second-order statistics.
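As a quick numerical check of the entropy definition in (3.3) (an illustration added here, not part of the original glossary), the function below evaluates H(X) in bits for a few pmfs; a fair bit gives exactly 1 bit, and a biased one gives less:

    import numpy as np

    def entropy(p):
        """H(X) = -sum p_i log2 p_i, in bits; zero-probability terms contribute 0."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    print(entropy([0.5, 0.5]))        # 1.0 bit   (fair coin)
    print(entropy([0.9, 0.1]))        # ~0.469 bits (a biased coin reveals less)
    print(entropy([0.25] * 4))        # 2.0 bits  (uniform over 4 values)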

J

Joint Entropy
The joint entropy of two discrete random variables $(X, Y)$ is defined by
$$H(X, Y) = -\sum_i \sum_j p_{X,Y}(x_i, y_j)\, \log p_{X,Y}(x_i, y_j)$$

Jointly Wide Sense Stationary
The random processes $X_t$ and $Y_t$ are said to be jointly wide sense stationary if $R_{XY}(t_2, t_1)$ is a function of $t_2 - t_1$ only and $\mu_X(t)$ and $\mu_Y(t)$ are constant.

M

Mean
The mean function of a random process $X_t$ is defined as the expected value of $X_t$ for all $t$'s:
$$\mu_{X_t} = E[X_t] = \begin{cases} \int_{-\infty}^{\infty} x\, f_{X_t}(x)\,dx & \text{if continuous} \\ \sum_{k=-\infty}^{\infty} x_k\, p_{X_t}(x_k) & \text{if discrete} \end{cases}$$

Mutual Information
The mutual information between two discrete random variables is denoted by $I(X, Y)$ and defined as
$$I(X, Y) = H(X) - H(X|Y)$$
Mutual information is a useful concept to measure the amount of information shared between the input and output of noisy channels.

O

orthogonal
Signals $s_1(t), s_2(t), \ldots, s_M(t)$ are orthogonal if $\langle s_m, s_n \rangle = 0$ for $m \neq n$.

P

Power Spectral Density
The power spectral density function of a wide sense stationary (WSS) process $X_t$ is defined to be the Fourier transform of the autocorrelation function of $X_t$:
$$S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\, e^{-i 2\pi f \tau}\, d\tau$$
if $X_t$ is WSS with autocorrelation function $R_X(\tau)$.
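The relations among H(X,Y), H(X|Y), and I(X;Y) = H(X) − H(X|Y) can be verified on a small joint pmf. The sketch below (an illustration added here, using an arbitrary 2x2 joint distribution) computes all three via the chain rule H(X|Y) = H(X,Y) − H(Y):

    import numpy as np

    def H(p):
        p = np.asarray(p, dtype=float).ravel()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    pxy = np.array([[0.4, 0.1],        # arbitrary joint pmf p(x, y)
                    [0.1, 0.4]])
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    H_xy = H(pxy)                      # joint entropy H(X, Y)
    H_x_given_y = H_xy - H(py)         # chain rule: H(X|Y) = H(X,Y) - H(Y)
    I = H(px) - H_x_given_y            # mutual information I(X;Y)
    print(f"H(X,Y) = {H_xy:.3f}, H(X|Y) = {H_x_given_y:.3f}, I(X;Y) = {I:.3f} bits")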

S

Simplex signals
Let $\{s_1(t), s_2(t), \ldots, s_M(t)\}$ be a set of orthogonal signals with equal energy. The signals $\tilde{s}_1(t), \ldots, \tilde{s}_M(t)$ are simplex signals if
$$\tilde{s}_m(t) = s_m(t) - \frac{1}{M} \sum_{k=1}^{M} s_k(t) \quad (4.30)$$

Stochastic Process
Given a sample space, a stochastic process is an indexed collection of random variables $X_t(\omega)$ defined for each $\omega \in \Omega$, $\forall t \in \mathbb{R}$.

U

Uncorrelated random variables
Two random variables $X$ and $Y$ are uncorrelated if $\rho_{XY} = 0$.

W

Wide Sense Stationary
A process is said to be wide sense stationary if $\mu_X$ is constant and $R_X(t_2, t_1)$ is only a function of $t_2 - t_1$.
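Equation (4.30) is easy to sanity-check numerically: starting from M orthogonal equal-energy signals, subtracting their ensemble average yields simplex signals with energy (1 − 1/M)E and pairwise normalized correlation −1/(M − 1). The sketch below (an illustration added here, using simple rectangular basis signals) verifies this for M = 4:

    import numpy as np

    M, n = 4, 16
    S = np.zeros((M, n))
    for m in range(M):                       # M orthogonal, equal-energy signals
        S[m, m * (n // M):(m + 1) * (n // M)] = 1.0

    S_tilde = S - S.mean(axis=0)             # simplex: subtract the average signal

    E = np.sum(S_tilde[0] ** 2)
    rho = np.sum(S_tilde[0] * S_tilde[1]) / E
    print(f"energy ratio = {E / np.sum(S[0] ** 2):.3f}")    # (1 - 1/M) = 0.750
    print(f"normalized correlation = {rho:.3f}")            # -1/(M-1) = -0.333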




Attributions
Collection: Principles of Digital Communications Edited by: Tuan Do-Hong URL: http://cnx.org/content/col10474/1.7/ License: http://creativecommons.org/licenses/by/2.0/ Module: "Letter to Student" By: Tuan Do-Hong URL: http://cnx.org/content/m15429/1.1/ Page: 1 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/ Module: "Contact Information" By: Tuan Do-Hong URL: http://cnx.org/content/m15431/1.3/ Page: 1 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/ Module: "Resources" By: Tuan Do-Hong URL: http://cnx.org/content/m15438/1.3/ Page: 2 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/ Module: "Purpose of the Course" By: Tuan Do-Hong URL: http://cnx.org/content/m15433/1.2/ Page: 2 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/ Module: "Course Description" By: Tuan Do-Hong URL: http://cnx.org/content/m15435/1.4/ Pages: 2-3 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/ Module: "Calendar" By: Tuan Do-Hong URL: http://cnx.org/content/m15448/1.3/ Pages: 3-4 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/

138 Module: "Grading Procedures" By: Tuan Do-Hong URL: http://cnx.org/content/m15437/1.2/ Page: 4 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/ Module: "Signal Classications and Properties" By: Melissa Selik, Richard Baraniuk, Michael Haag URL: http://cnx.org/content/m10057/2.21/ Pages: 5-11 Copyright: Melissa Selik, Richard Baraniuk, Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "System Classications and Properties" By: Melissa Selik, Richard Baraniuk, Stephen Kruzick URL: http://cnx.org/content/m10084/2.21/ Pages: 12-15 Copyright: Melissa Selik, Richard Baraniuk, Stephen Kruzick License: http://creativecommons.org/licenses/by/1.0 Module: "m04 - Theorems on the Fourier Series" Used here as: "The Fourier Series" By: C. Sidney Burrus URL: http://cnx.org/content/m13873/1.1/ Pages: 15-16 Copyright: C. Sidney Burrus License: http://creativecommons.org/licenses/by/2.0/ Module: "m05 - The Fourier Transform" Used here as: "The Fourier Transform" By: C. Sidney Burrus URL: http://cnx.org/content/m13874/1.2/ Pages: 16-17 Copyright: C. Sidney Burrus License: http://creativecommons.org/licenses/by/2.0/ Module: "Review of Probability Theory" Used here as: "Review of Probability and Random Variables " By: Behnaam Aazhang URL: http://cnx.org/content/m10224/2.16/ Pages: 17-20 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0 Module: "Introduction to Stochastic Processes" By: Behnaam Aazhang URL: http://cnx.org/content/m10235/2.15/ Pages: 20-24 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0


ATTRIBUTIONS Module: "Second-order Description" Used here as: "Second-order Description of Stochastic Processes" By: Behnaam Aazhang URL: http://cnx.org/content/m10236/2.13/ Pages: 24-26 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0 Module: "Gaussian Processes" By: Behnaam Aazhang URL: http://cnx.org/content/m10238/2.7/ Page: 27 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0 Module: "White and Coloured Processes" By: Nick Kingsbury URL: http://cnx.org/content/m11105/2.4/ Pages: 28-29 Copyright: Nick Kingsbury License: http://creativecommons.org/licenses/by/1.0 Module: "Linear Filtering" Used here as: "Transmission of Stationary Process Through a Linear Filter" By: Behnaam Aazhang URL: http://cnx.org/content/m10237/2.10/ Pages: 30-32 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0 Module: "Information Theory and Coding" By: Behnaam Aazhang URL: http://cnx.org/content/m10162/2.10/ Page: 33 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0 Module: "Entropy" By: Behnaam Aazhang URL: http://cnx.org/content/m10164/2.16/ Pages: 34-37 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0 Module: "Source Coding" By: Behnaam Aazhang URL: http://cnx.org/content/m10175/2.10/ Pages: 37-40 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0


0 Module: "Demodulation and Detection" By: Behnaam Aazhang URL: http://cnx.11/ Pages: 43-47 Copyright: Behnaam Aazhang License: http://creativecommons.org/content/m10091/2.org/content/m10116/2.org/licenses/by/1.org/content/m10176/2.13/ Pages: 47-50 Copyright: Behnaam Aazhang License: http://creativecommons.org/content/m10054/2.org/licenses/by/1.0 Module: "Examples of Correlation Detection" By: Behnaam Aazhang URL: http://cnx.0 Module: "Data Transmission and Reception" By: Behnaam Aazhang URL: http://cnx.10/ Pages: 40-41 Copyright: Behnaam Aazhang License: http://creativecommons.org/content/m10149/2.org/licenses/by/1.org/licenses/by/1.org/licenses/by/1.0 Module: "Geometric Representation of Modulation Signals" By: Behnaam Aazhang URL: http://cnx.0 Module: "Demodulation" By: Behnaam Aazhang URL: http://cnx.0 ATTRIBUTIONS .14/ Pages: 50-51 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0 Module: "Detection by Correlation" By: Behnaam Aazhang URL: http://cnx.15/ Pages: 54-56 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.9/ Page: 43 Copyright: Behnaam Aazhang License: http://creativecommons.10/ Pages: 56-58 Copyright: Behnaam Aazhang License: http://creativecommons.org/content/m10115/2.org/content/m10141/2.13/ Pages: 51-54 Copyright: Behnaam Aazhang License: http://creativecommons.org/content/m10035/2.0 Module: "Signalling" By: Behnaam Aazhang URL: http://cnx.org/licenses/by/1.140 Module: "Human Coding" By: Behnaam Aazhang URL: http://cnx.

7/ Pages: 70-71 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0 141 .org/licenses/by/1.12/ Page: 73 Copyright: Behnaam Aazhang License: http://creativecommons.org/content/m10150/2.org/licenses/by/1.0 Module: "Examples with Matched Filters" By: Behnaam Aazhang URL: http://cnx.0 Module: "Performance Analysis of Orthogonal Binary Signals with Matched Filters" By: Behnaam Aazhang URL: http://cnx.ATTRIBUTIONS Module: "Matched Filters" By: Behnaam Aazhang URL: http://cnx.9/ Pages: 65-66 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.10/ Pages: 60-63 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0 Module: "Dierential Phase Shift Keying" By: Behnaam Aazhang URL: http://cnx.org/content/m10154/2.10/ Pages: 66-69 Copyright: Behnaam Aazhang License: http://creativecommons.14/ Pages: 58-60 Copyright: Behnaam Aazhang License: http://creativecommons.10/ Pages: 69-70 Copyright: Behnaam Aazhang License: http://creativecommons.0 Module: "Carrier Frequency Modulation" By: Behnaam Aazhang URL: http://cnx.0 Module: "Carrier Phase Modulation" By: Behnaam Aazhang URL: http://cnx.org/licenses/by/1.org/licenses/by/1.org/content/m10155/2.org/content/m10128/2.org/content/m10056/2.org/content/m10156/2.org/content/m10163/2.0 Module: "Performance Analysis of Binary Orthogonal Signals with Correlation" By: Behnaam Aazhang URL: http://cnx.11/ Pages: 63-64 Copyright: Behnaam Aazhang License: http://creativecommons.0 Module: "Digital Transmission over Baseband Channels" By: Behnaam Aazhang URL: http://cnx.org/content/m10101/2.org/licenses/by/1.

Tuan Do-Hong License: http://creativecommons.org/content/m15519/1. Tuan Do-Hong License: http://creativecommons.2/ Pages: 78-79 Copyright: Ha Ta-Hong. Tuan Do-Hong URL: http://cnx.7/ Pages: 75-76 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/2.org/content/m15522/1.org/licenses/by/1.4/ Pages: 80-83 Copyright: Ha Ta-Hong.5/ Pages: 73-75 Copyright: Ha Ta-Hong.142 Module: "Introduction to ISI" By: Ha Ta-Hong.0 Module: "Precoding and Bandlimited Signals" By: Behnaam Aazhang URL: http://cnx. Tuan Do-Hong URL: http://cnx.6/ Pages: 76-77 Copyright: Behnaam Aazhang License: http://creativecommons. Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.org/content/m15524/1.0/ Module: "Transversal Equalizer" By: Ha Ta-Hong.org/content/m15520/1. Tuan Do-Hong URL: http://cnx.0/ Module: "Decision Feedback Equalizer" By: Ha Ta-Hong.0/ ATTRIBUTIONS .0/ Module: "Eye Pattern" By: Ha Ta-Hong.0 Module: "Pulse Shaping to Reduce ISI" By: Ha Ta-Hong.org/licenses/by/1.org/content/m10094/2. Tuan Do-Hong URL: http://cnx.2/ Pages: 79-80 Copyright: Ha Ta-Hong.org/licenses/by/2.4/ Pages: 83-84 Copyright: Ha Ta-Hong. Tuan Do-Hong License: http://creativecommons. Tuan Do-Hong URL: http://cnx.org/licenses/by/2.org/licenses/by/2.org/licenses/by/2. Tuan Do-Hong URL: http://cnx.2/ Pages: 77-78 Copyright: Ha Ta-Hong. Tuan Do-Hong License: http://creativecommons.org/content/m10118/2. Tuan Do-Hong License: http://creativecommons.0/ Module: "Pulse Amplitude Modulation Through Bandlimited Channel" By: Behnaam Aazhang URL: http://cnx.0/ Module: "Two Types of Error-Performance Degradation" By: Ha Ta-Hong.org/content/m15521/1.org/content/m15527/1.

org/licenses/by/1.org/licenses/by/1.org/licenses/by/1. Tuan Do-Hong License: http://creativecommons.9/ Pages: 89-90 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.2/ Pages: 84-85 Copyright: Ha Ta-Hong.10/ Pages: 93-94 Copyright: Behnaam Aazhang License: http://creativecommons. Tuan Do-Hong URL: http://cnx.0/ Module: "Channel Capacity" By: Behnaam Aazhang URL: http://cnx.0 Module: "Mutual Information" By: Behnaam Aazhang URL: http://cnx.org/content/m10178/2.0 Module: "Shannon's Noisy Channel Coding Theorem" By: Behnaam Aazhang URL: http://cnx.org/content/m10173/2.org/licenses/by/2.11/ Pages: 95-97 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/2.8/ Pages: 87-88 Copyright: Behnaam Aazhang License: http://creativecommons.org/content/m10180/2.10/ Pages: 90-92 Copyright: Behnaam Aazhang License: http://creativecommons. Tuan Do-Hong License: http://creativecommons.2/ Page: 101 Copyright: Ha Ta-Hong.org/content/m15525/1.0 Module: "Convolutional Codes" By: Behnaam Aazhang URL: http://cnx.ATTRIBUTIONS Module: "Adaptive Equalization" By: Ha Ta-Hong.org/licenses/by/1.0 Module: "Fading Channel" By: Ha Ta-Hong.org/licenses/by/1.org/content/m10179/2.7/ Pages: 97-98 Copyright: Behnaam Aazhang License: http://creativecommons.0 Module: "Typical Sequences" By: Behnaam Aazhang URL: http://cnx.0/ 143 .org/content/m15523/1.org/content/m10181/2.org/content/m10174/2.0 Module: "Channel Coding" By: Behnaam Aazhang URL: http://cnx. Tuan Do-Hong URL: http://cnx.

Tuan Do-Hong License: http://creativecommons.1/ Pages: 106-108 Copyright: Thanh Do-Ngoc. Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.1/ Pages: 113-116 Copyright: Sinh Nguyen-Le.org/content/m15536/1.144 Module: "Characterizing Mobile-Radio Propagation" By: Ha Ta-Hong. Tuan Do-Hong License: http://creativecommons.2/ Pages: 101-105 Copyright: Ha Ta-Hong.0/ Module: "Mitigation to Combat Frequency-Selective Distortion" By: Sinh Nguyen-Le. Tuan Do-Hong URL: http://cnx.org/licenses/by/2. Tuan Do-Hong License: http://creativecommons.0/ Module: "Signal Time-Spreading" By: Thanh Do-Ngoc.3/ Pages: 108-112 Copyright: Thanh Do-Ngoc. Tuan Do-Hong URL: http://cnx.org/content/m15535/1.0/ Module: "Small-Scale Fading" By: Thanh Do-Ngoc.org/licenses/by/2. Tuan Do-Hong URL: http://cnx. Tuan Do-Hong URL: http://cnx.org/licenses/by/2.0/ Module: "Mitigation to Combat Fast-Fading Distortion" By: Sinh Nguyen-Le. Tuan Do-Hong License: http://creativecommons.1/ Pages: 116-117 Copyright: Sinh Nguyen-Le.1/ Page: 118 Copyright: Sinh Nguyen-Le.org/licenses/by/2.0/ Module: "Mitigating the Degradation Eects of Fading" By: Sinh Nguyen-Le.org/content/m15537/1.org/content/m15538/1.org/content/m15528/1. Tuan Do-Hong License: http://creativecommons. Tuan Do-Hong URL: http://cnx.0/ ATTRIBUTIONS .1/ Pages: 118-119 Copyright: Sinh Nguyen-Le.org/licenses/by/2. Tuan Do-Hong URL: http://cnx. Tuan Do-Hong URL: http://cnx.org/content/m15531/1. Tuan Do-Hong License: http://creativecommons.0/ Module: "Large-Scale Fading" By: Ha Ta-Hong.org/content/m15526/1.org/content/m15533/1.0/ Module: "Mitigation to Combat Loss in SNR" By: Sinh Nguyen-Le.org/licenses/by/2. Tuan Do-Hong License: http://creativecommons. Tuan Do-Hong URL: http://cnx.2/ Pages: 105-106 Copyright: Ha Ta-Hong.org/licenses/by/2.

org/licenses/by/2.org/licenses/by/2.org/licenses/by/2.0/ 145 . Tuan Do-Hong License: http://creativecommons. Tuan Do-Hong License: http://creativecommons.0/ Module: "Diversity-Combining Techniques" By: Sinh Nguyen-Le. Tuan Do-Hong URL: http://cnx.0/ Module: "The Role of an Interleaver" By: Sinh Nguyen-Le. Tuan Do-Hong License: http://creativecommons.0/ Module: "The Rake Receiver Applied to Direct-Sequence Spread-Spectrum (DS/SS) Systems" Used here as: "Application of Rake Receiver in CDMA System" By: Sinh Nguyen-Le.0/ Module: "The Viterbi Equalizer as Applied to GSM" Used here as: "Application of Viterbi Equalizer in GSM System" By: Sinh Nguyen-Le.ATTRIBUTIONS Module: "Diversity Techniques" By: Sinh Nguyen-Le. Tuan Do-Hong URL: http://cnx.org/content/m15541/1.org/content/m15540/1. Tuan Do-Hong License: http://creativecommons.0/ Module: "Modulation Types for Fading Channels" By: Sinh Nguyen-Le.1/ Pages: 124-126 Copyright: Sinh Nguyen-Le.1/ Pages: 120-121 Copyright: Sinh Nguyen-Le.org/content/m15539/1.org/licenses/by/2.2/ Pages: 126-127 Copyright: Sinh Nguyen-Le.org/licenses/by/2.org/content/m15544/1.org/content/m15534/1.org/licenses/by/2.1/ Page: 119 Copyright: Sinh Nguyen-Le. Tuan Do-Hong URL: http://cnx. Tuan Do-Hong URL: http://cnx. Tuan Do-Hong License: http://creativecommons. Tuan Do-Hong URL: http://cnx. Tuan Do-Hong License: http://creativecommons.1/ Pages: 121-123 Copyright: Sinh Nguyen-Le.1/ Page: 120 Copyright: Sinh Nguyen-Le. Tuan Do-Hong URL: http://cnx.org/content/m15542/1.

About Connexions

Since 1999, Connexions has been pioneering a global system where anyone can create course materials and make them fully accessible and easily reusable free of charge. We are a Web-based authoring, teaching and learning environment open to anyone interested in education, including students, teachers, professors and lifelong learners. We connect ideas and facilitate educational communities.

Connexions's modular, interactive courses are in use worldwide by universities, community colleges, K-12 schools, distance learners, and lifelong learners. Connexions materials are in many languages, including English, Chinese, French, Italian, Japanese, Portuguese, Spanish, Thai, and Vietnamese. Connexions is part of an exciting new information distribution system that allows for Print on Demand Books. Connexions has partnered with innovative on-demand publisher QOOP to accelerate the delivery of printed course materials and textbooks into classrooms worldwide at lower prices than traditional academic publishers.