
ECE 5520: Digital Communications

Lecture Notes
Fall 2009
Dr. Neal Patwari
University of Utah
Department of Electrical and Computer Engineering
© 2006
Contents
1 Class Organization 8
2 Introduction 8
2.1 ”Executive Summary” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Why not Analog? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Networking Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Channels and Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.5 Encoding / Decoding Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.6 Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.7 Topic: Random Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.8 Topic: Frequency Domain Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.9 Topic: Orthogonality and Signal spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.10 Related classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3 Power and Energy 13
3.1 Discrete-Time Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Decibel Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4 Time-Domain Concept Review 15
4.1 Periodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.2 Impulse Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5 Bandwidth 16
5.1 Continuous-time Frequency Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.1.1 Fourier Transform Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.2 Linear Time Invariant (LTI) Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6 Bandpass Signals 21
6.1 Upconversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6.2 Downconversion of Bandpass Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
7 Sampling 23
7.1 Aliasing Due To Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
7.2 Connection to DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
7.3 Bandpass sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
8 Orthogonality 28
8.1 Inner Product of Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
8.2 Inner Product of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
8.3 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
8.4 Orthogonal Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
9 Orthonormal Signal Representations 32
9.1 Orthonormal Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
9.2 Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
9.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
10 Multi-Variate Distributions 37
10.1 Random Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
10.2 Conditional Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
10.3 Simulation of Digital Communication Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
10.4 Mixed Discrete and Continuous Joint Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
10.5 Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
10.6 Gaussian Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
10.6.1 Complementary CDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
10.6.2 Error Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
10.7 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
10.8 Gaussian Random Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
10.8.1 Envelope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
11 Random Processes 46
11.1 Autocorrelation and Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
11.1.1 White Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
12 Correlation and Matched-Filter Receivers 47
12.1 Correlation Receiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
12.2 Matched Filter Receiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
12.3 Amplitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
12.4 Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
12.5 Correlation Receiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
13 Optimal Detection 51
13.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
13.2 Bayesian Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
14 Binary Detection 52
14.1 Decision Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
14.2 Formula for Probability of Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
14.3 Selecting R_0 to Minimize Probability of Error . . . . . . . . . . . . . . . . . . . . . . . . . . 53
14.4 Log-Likelihood Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
14.5 Case of a_0 = 0, a_1 = 1 in Gaussian noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
14.6 General Case for Arbitrary Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
14.7 Equi-probable Special Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
14.8 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
14.9 Review of Binary Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
15 Pulse Amplitude Modulation (PAM) 58
15.1 Baseband Signal Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
15.2 Average Bit Energy in M-ary PAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
16 Topics for Exam 1 60
17 Additional Problems 61
17.1 Spectrum of Communication Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
17.2 Sampling and Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
17.3 Orthogonality and Signal Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
17.4 Random Processes, PSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
17.5 Correlation / Matched Filter Receivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
18 Probability of Error in Binary PAM 63
18.1 Signal Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
18.2 BER Function of Distance, Noise PSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
18.3 Binary PAM Error Probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
19 Detection with Multiple Symbols 66
20 M-ary PAM Probability of Error 67
20.1 Symbol Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
20.1.1 Symbol Error Rate and Average Bit Energy . . . . . . . . . . . . . . . . . . . . . . . . . . 67
20.2 Bit Errors and Gray Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
20.2.1 Bit Error Probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
21 Inter-symbol Interference 69
21.1 Multipath Radio Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
21.2 Transmitter and Receiver Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
22 Nyquist Filtering 70
22.1 Raised Cosine Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
22.2 Square-Root Raised Cosine Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
23 M-ary Detection Theory in N-dimensional signal space 72
24 Quadrature Amplitude Modulation (QAM) 74
24.1 Showing Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
24.2 Constellation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
24.3 Signal Constellations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
24.4 Angle and Magnitude Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
24.5 Average Energy in M-QAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
24.6 Phase-Shift Keying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
24.7 Systems which use QAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
25 QAM Probability of Error 80
25.1 Overview of Future Discussions on QAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
25.2 Options for Probability of Error Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
25.3 Exact Error Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
25.4 Probability of Error in QPSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
25.5 Union Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
25.6 Application of Union Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
25.6.1 General Formula for Union Bound-based Probability of Error . . . . . . . . . . . . . . . . 84
26 QAM Probability of Error 85
26.1 Nearest-Neighbor Approximate Probability of Error . . . . . . . . . . . . . . . . . . . . . . . . . 85
26.2 Summary and Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
27 Frequency Shift Keying 88
27.1 Orthogonal Frequencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
27.2 Transmission of FSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
27.3 Reception of FSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
27.4 Coherent Reception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
27.5 Non-coherent Reception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
27.6 Receiver Block Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
27.7 Probability of Error for Coherent Binary FSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
27.8 Probability of Error for Noncoherent Binary FSK . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
27.9 FSK Error Probabilities, Part 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
27.9.1 M-ary Non-Coherent FSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
27.9.2 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
27.10 Bandwidth of FSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
28 Frequency Multiplexing 95
28.1 Frequency Selective Fading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
28.2 Benefits of Frequency Multiplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
28.3 OFDM as an extension of FSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
29 Comparison of Modulation Methods 98
29.1 Differential Encoding for BPSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
29.1.1 DPSK Transmitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
29.1.2 DPSK Receiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
29.1.3 Probability of Bit Error for DPSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
29.2 Points for Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
29.3 Bandwidth Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
29.3.1 PSK, PAM and QAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
29.3.2 FSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
29.4 Bandwidth Efficiency vs. E_b/N_0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
29.5 Fidelity (P[error]) vs. E_b/N_0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
29.6 Transmitter Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
29.6.1 Linear / Non-linear Amplifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
29.7 Offset QPSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
29.8 Receiver Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
30 Link Budgets and System Design 105
30.1 Link Budgets Given C/N_0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
30.2 Power and Energy Limited Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
30.3 Computing Received Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
30.3.1 Free Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
30.3.2 Non-free-space Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
30.3.3 Wired Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
30.4 Computing Noise Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
30.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
31 Timing Synchronization 113
32 Interpolation 114
32.1 Sampling Time Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
32.2 Seeing Interpolation as Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
32.3 Approximate Interpolation Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
32.4 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
32.4.1 Higher order polynomial interpolation filters . . . . . . . . . . . . . . . . . . . . . . . . . . 117
33 Final Project Overview 118
33.1 Review of Interpolation Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
33.2 Timing Error Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
33.3 Early-late timing error detector (ELTED) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
33.4 Zero-crossing timing error detector (ZCTED) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
33.4.1 QPSK Timing Error Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
33.5 Voltage Controlled Clock (VCC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
33.6 Phase Locked Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
33.6.1 Phase Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
33.6.2 Loop Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
33.6.3 VCO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
33.6.4 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
33.6.5 Discrete-Time Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
34 Exam 2 Topics 126
35 Source Coding 127
35.1 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
35.2 Joint Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
35.3 Conditional Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
35.4 Entropy Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
35.5 Source Coding Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
36 Review 133
37 Channel Coding 133
37.1 R. V. L. Hartley . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
37.2 C. E. Shannon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
37.2.1 Noisy Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
37.2.2 Introduction of Latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
37.2.3 Introduction of Power Limitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
37.3 Combining Two Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
37.3.1 Returning to Hartley . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
37.3.2 Final Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
37.4 Efficiency Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
38 Review 138
39 Channel Coding 138
39.1 R. V. L. Hartley . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
39.2 C. E. Shannon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
39.2.1 Noisy Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
39.2.2 Introduction of Latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
39.2.3 Introduction of Power Limitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
39.3 Combining Two Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
39.3.1 Returning to Hartley . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
39.3.2 Final Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
39.4 Efficiency Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Lecture 1
Today: (1) Syllabus (2) Intro to Digital Communications
1 Class Organization
Textbook Few textbooks cover solely digital communications (without analog) in an introductory communi-
cations course. But graduates today will almost always encounter / be developing solely digital communication
systems. So half of most textbooks are useless, and the other half is sparse and needs supplemental material.
For example, the past text was the ‘standard’ text in the area for an undergraduate course, Proakis & Salehi,
J.G. Proakis and M. Salehi, Communication Systems Engineering, 2nd edition, Prentice Hall, 2001. Students
didn’t like that I had so many supplemental readings. This year’s text covers primarily digital communications,
and does it in depth. Finally, I find it to be very well-written. And, there are few options in this area. I will
provide additional readings solely to provide another presentation style or fit another learning style. Unless
specified, these are optional.
Lecture Notes I type my lecture notes. I have taught ECE 5520 previously and have my lecture notes from
past years. I’m constantly updating my notes, even up to the lecture time. These can be available to you at
lecture, and/or after lecture online. However, you must accept these conditions:
1. Taking notes is important: I find most learning requires some writing on your part, not just watching.
Please take your own notes.
2. My written notes do not and cannot reflect everything said during lecture: I answer questions
and understand your perspective better after hearing your questions, and I try to tailor my approach
during the lecture. If I didn’t, you could just watch a recording!
2 Introduction
A digital communication system conveys discrete-time, discrete-valued information across a physical channel.
Information sources might include audio, video, text, or data. They might be continuous-time (analog) signals
(audio, images) and even 1-D or 2-D. Or, they may already be digital (discrete-time, discrete-valued). Our
object is to convey the signals or data to another place (or time) with as faithful representation as possible.
In this section we talk about what we’ll cover in this class, and more importantly, what we won’t cover.
2.1 "Executive Summary"
Here is the one sentence version: We will study how to efficiently encode digital data on a noisy, bandwidth-
limited analog medium, so that decoding the data (i.e., reception) at a receiver is simple, efficient, and high-
fidelity.
The keys points stuffed into that one sentence are:
1. Digital information on an analog medium: We can send waveforms, i.e., real-valued, continuous-time
functions, on the channel (medium). These waveforms are from a discrete set of possible waveforms.
What set of waveforms should we use? Why?
2. Decoding the data: When receiving a signal (a function) in noise, none of the original waveforms will
match exactly. How do you make a decision about which waveform was sent?
3. What makes a receiver difficult to realize? What choices of waveforms make a receiver simpler to imple-
ment? What techniques are used in a receiver to compensate?
4. Efficiency, Bandwidth, and Fidelity: Fidelity is the correctness of the received data (i.e., the opposite of
error rate). What is the tradeoff between energy, bandwidth, and fidelity? We all want high fidelity, and
low energy consumption and bandwidth usage (the costs of our communication system).
You can look at this like an impedance matching problem from circuits. You want, for power efficiency, to
have the source impedance match the destination impedance. In digital comm, this means that we want our
waveform choices to match the channel and receiver to maximize the efficiency of the communication system.
2.2 Why not Analog?
The previous text used for this course, by Proakis & Salehi, has an extensive analysis and study of analog
communication systems, such as radio and television broadcasting (Chapter 3). In the recent past, this course
would study both analog and digital communication systems. Analog systems still exist and will continue to
exist; however, development of new systems will almost certainly be of digital communication systems. Why?
• Fidelity
• Energy: transmit power, and device power consumption
• Bandwidth efficiency: due to coding gains
• Moore’s Law is decreasing device costs for digital hardware
• Increasing need for digital information
• More powerful information security
2.3 Networking Stack
In this course, we study digital communications from bits to bits. That is, we study how to take ones and zeros
from a transmitter, send them through a medium, and then (hopefully) correctly identify the same ones and
zeros at the receiver. There’s a lot more than this to the digital communication systems which you use on a
daily basis (e.g., iPhone, WiFi, Bluetooth, wireless keyboard, wireless car key).
To manage complexity, we (engineers) don’t try to build a system to do everything all at once. We typically
start with an application, and we build a layered network to handle the application. The 7-layer OSI stack,
which you would study in a CS computer networking class, is as follows:
• Application
• Presentation (*)
• Session (*)
• Transport
• Network
• Link Layer
• Physical (PHY) Layer
(Note that there is also a 5-layer model in which * layers are considered as part of the application layer.) ECE
5520 is part of the bottom layer, the physical layer. In fact, the physical layer has much more detail. It is
primarily divided into:
• Multiple Access Control (MAC)
• Encoding
• Channel / Medium
We can control the MAC and the encoding chosen for a digital communication.
2.4 Channels and Media
We can choose from a few media, but we largely can't change the properties of the medium (although there are
exceptions). Here are some media:
• EM Spectra: (anything above 0 Hz) Radio, Microwave, mm-wave bands, light
• Acoustic: ultrasound
• Transmission lines, waveguides, optical fiber, coaxial cable, wire pairs, ...
• Disk (data storage applications)
2.5 Encoding / Decoding Block Diagram
Information
Source
Source
Encoder
Channel
Encoder
Modulator
Channel
Information
Output
Source
Decoder
Channel
Decoder
Demodulator
Up-conversion
Down-
conversion
OtherComponents:
Digitalto Analog
Converter
AnalogtoDigital
Converter
Synchron-
ization
Figure 1: Block diagram of a single-user digital communication system, including (top) transmitter, (middle)
channel, and (bottom) receiver.
Notes:
• Information source comes from higher networking layers. It may be continuous or packetized.
• Source Encoding: Finding a compact digital representation for the data source. Includes sampling of
continuous-time signals, and quantization of continuous-valued signals. Also includes compression of
those sources (lossy, or lossless). What are some compression methods that you’re familiar with? We
present an introduction to source encoding at the end of this course.
• Channel encoding refers to redundancy added to the signal such that any bit errors can be corrected.
A channel decoder, because of the redundancy, can correct some bit errors. We will not study channel
encoding, but it is a topic in (ECE 6520) Coding Theory.
• Modulation refers to the digital-to-analog conversion which produces a continuous-time signal that can be
sent on the physical channel. It is analogous to impedance matching - proper matching of a modulation to
a channel allows optimal information transfer, like impedance matching ensured optimal power transfer.
Modulation and demodulation will be the main focus of this course.
• Channels: See above for examples. Typical models are additive noise, or linear filtering channel.
Why do we do both source encoding (which compresses the signal as much as possible) and also channel
encoding (which adds redundancy to the signal)? Because of Shannon’s source-channel coding separation
theorem. He showed that (given enough time) we can consider them separately without additional loss. And
separation, like layering, reduces complexity to the designer.
2.6 Channels
A channel can typically be modeled as a linear filter with the addition of noise. The noise comes from a variety
of sources, but predominantly:
1. Thermal background noise: Due to the physics of living above 0 Kelvin. Well modeled as Gaussian, and
white; thus it is referred to as additive white Gaussian noise (AWGN).
2. Interference from other transmitted signals. These other transmitters whose signals we cannot completely
cancel, we lump into the ‘interference’ category. These may result in non-Gaussian noise distribution, or
non-white noise spectral density.
The linear filtering of the channel results from the physics and EM of the medium. For example, attenuation in
telephone wires varies by frequency. Narrowband wireless channels experience fading that varies quickly as a
function of frequency. Wideband wireless channels display multipath, due to multiple time-delayed reflections,
diffractions, and scattering of the signal off of the objects in the environment. All of these can be modeled as
linear filters.
The filter may be constant, or time-invariant, if the medium, the TX and RX do not move or change.
However, for mobile radio, the channel may change very quickly over time. Even for stationary TX and RX, in
real wireless channels, movement of cars, people, trees, etc. in the environment may change the channel slowly
over time.
[Figure 2: Transmitted Signal → LTI Filter h(t) → (+ Noise) → Received Signal.]
Figure 2: Linear filter and additive noise channel model.
In this course, we will focus primarily on the AWGN channel, but we will mention what variations exist for
particular channels, and how they are addressed.
2.7 Topic: Random Processes
Random things in a communication system:
• Noise in the channel
• Signal (bits)
• Channel filtering, attenuation, and fading
• Device frequency, phase, and timing offsets
These random signals often pass through LTI filters, and are sampled. We want to build the best receiver
possible despite the impediments. Optimal receiver design is something that we study using probability theory.
We have to tolerate errors. Noise and attenuation of the channel will cause bit errors to be made by the
demodulator and even the channel decoder. This may be tolerated, or a higher layer networking protocol (e.g.,
TCP/IP) can determine that an error occurred and then re-request the data.
2.8 Topic: Frequency Domain Representations
To fit as many signals as possible onto a channel, we often split the signals by frequency. The concept of
sharing a channel is called multiple access (MA). Separating signals by frequency band is called frequency-
division multiple access (FDMA). For the wireless channel, this is controlled by the FCC (in the US) and called
spectrum allocation. There is a tradeoff between frequency requirements and time requirements, which will be
a major part of this course. The Fourier transform of our modulated, transmitted signal is used to show that
it meets the spectrum allocation limits of the FCC.
2.9 Topic: Orthogonality and Signal spaces
To show that signals sharing the same channel don’t interfere with each other, we need to show that they are
orthogonal. This means, in short, that a receiver can uniquely separate them. Signals in different frequency
bands are orthogonal.
We can also employ multiple orthogonal signals in a single transmitter and receiver, in order to provide
multiple independent means (dimensions) on which to modulate information. We will study orthogonal signals,
and learn an algorithm to take an arbitrary set of signals and output a set of orthogonal signals with which to
represent them. We'll use signal spaces to show graphically the results, as in the example in Figure 3.
Figure 3: Example signal space diagram for M-ary Phase Shift Keying, for (a) M = 8 and (b) M = 16. Each point is a vector which can be used to send a 3 or 4 bit sequence.
2.10 Related classes
1. Pre-requisites: (ECE 5510) Random Processes; (ECE 3500) Signals and Systems.
2. Signal Processing: (ECE 5530): Digital Signal Processing
3. Electromagnetics: EM Waves, (ECE 5320-5321) Microwave Engineering, (ECE 5324) Antenna Theory,
(ECE 5411) Fiberoptic Systems
4. Breadth: (ECE 5325) Wireless Communications
5. Devices and Circuits: (ECE 3700) Fundamentals of Digital System Design, (ECE 5720) Analog IC Design
6. Networking: (ECE 5780) Embedded System Design, (CS 5480) Computer Networks
7. Advanced Classes: (ECE 6590) Software Radio, (ECE 6520) Information Theory and Coding, (ECE 6540):
Estimation Theory
Lecture 2
Today: (1) Power, Energy, dB (2) Time-domain concepts (3) Bandwidth, Fourier Transform
Two of the biggest limitations in communications systems are (1) energy / power; and (2) bandwidth. Today’s
lecture provides some tools to deal with power and energy, and starts the review of tools to analyze frequency
content and bandwidth.
3 Power and Energy
Recall that energy is power times time. Use the units: energy is measured in Joules (J); power is measured in Watts (W), which is the same as Joules/second (J/sec). Also, recall that our standard in signals and systems is to define our signals, such as x(t), as voltage signals (V). When we want to know the power of a signal we assume it is being dissipated in a 1 Ohm resistor, so |x(t)|^2 is the power dissipated at time t (since power is equal to the voltage squared divided by the resistance).
A signal x(t) has energy defined as

E = ∫_{-∞}^{∞} |x(t)|^2 dt

For some signals, E will be infinite because the signal is non-zero for an infinite duration of time (it is always on). These signals we call power signals, and we compute their power as

P = lim_{T→∞} (1/(2T)) ∫_{-T}^{T} |x(t)|^2 dt

A signal with finite energy is called an energy signal.
3.1 Discrete-Time Signals
In this book, we refer to discrete samples of the sampled signal x as x(n). You may be more familiar with the
x[n] notation. But, Matlab uses parentheses also; so we’ll follow the Rice text notation. Essentially, whenever
you see a function of n (or k, l, m), it is a discrete-time function; whenever you see a function of t (or perhaps
τ) it is a continuous-time function. I’m sorry this is not more obvious in the notation.
For discrete-time signals, energy and power are defined as:

E = Σ_{n=-∞}^{∞} |x(n)|^2        (1)

P = lim_{N→∞} (1/(2N+1)) Σ_{n=-N}^{N} |x(n)|^2        (2)
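As a quick numerical illustration of these definitions, here is a minimal Python sketch (not part of the course materials; the decaying-exponential signal is just an assumed example):

import numpy as np

# Example discrete-time signal: a decaying exponential (an energy signal).
n = np.arange(0, 100)
x = 0.9 ** n

# Energy: E = sum of |x(n)|^2, which is finite for an energy signal.
E = np.sum(np.abs(x) ** 2)

# Power estimate: (1/(2N+1)) * sum of |x(n)|^2 over the window.
# For this decaying signal the average power tends to zero as N grows.
N = len(x) - 1
P = E / (2 * N + 1)

print(f"Energy E = {E:.4f}, average power P = {P:.6f}")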
3.2 Decibel Notation
We often use a decibel (dB) scale for power. If P_lin is the power in Watts, then

[P]_dBW = 10 log_10 P_lin

Decibels are more general – they can apply to other unitless quantities as well, such as a gain (loss) L(f) through a filter H(f),

[L(f)]_dB = 10 log_10 |H(f)|^2        (3)

Note: Why is the capital B used? Either the lowercase 'b' in the SI system is reserved for bits, so when the 'bel' was first proposed as log_10(·), it was capitalized; or it referred to the name 'Bell', so it was capitalized. In either case, we use the unit decibel, 10 log_10(·), which is then abbreviated as dB in the SI system.
Note that (3) could also be written as:

[L(f)]_dB = 20 log_10 |H(f)|        (4)

Be careful with your use of 10 vs. 20 in the dB formula.
• Only use 20 as the multiplier if you are converting from voltage to power, i.e., taking the log_10 of a voltage and expecting the result to be a dB power value.
Our standard is to consider power gains and losses, not voltage gains and losses. So if we say, for example,
the channel has a loss of 20 dB, this refers to a loss in power. In particular, the output of the channel has 100
times less power than the input to the channel.
Remember these two dB numbers:
• 3 dB: This means the number is double in linear terms.
• 10 dB: This means the number is ten times in linear terms.
And maybe this one:
• 1 dB: This means the number is a little over 25% more (multiply by 5/4) in linear terms.
With these three numbers, you can quickly convert losses or gains between linear and dB units without a
calculator. Just convert any dB number into a sum of multiples of 10, 3, and 1.
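As a sanity check on these rules of thumb, here is a minimal Python sketch (the helper names lin2db and db2lin are my own, not from any standard toolbox):

import numpy as np

def lin2db(p_lin):
    """Convert a linear power value (or ratio) to dB."""
    return 10 * np.log10(p_lin)

def db2lin(p_db):
    """Convert a dB value back to linear."""
    return 10 ** (p_db / 10)

# Rule-of-thumb check: 3 dB ~ x2, 10 dB = x10, 1 dB ~ x1.25
print(db2lin(3))    # ~1.995
print(db2lin(10))   # 10.0
print(db2lin(1))    # ~1.259
# Example: 33 dBm = 30 dBm + 3 dB -> about 1000 mW * 2 = 2000 mW
print(db2lin(33))   # ~1995 (in mW, since the reference was 1 mW)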
Example: Convert dB to linear values:
1. 30 dBW
2. 33 dBm
3. -20 dB
4. 4 dB
Example: Convert linear values to dB:
1. 0.2 W
2. 40 mW
Example: Convert power relationships to dB:
Convert the expression to one which involves only dB terms.
1. P_{y,lin} = 100 P_{x,lin}

2. P_{o,lin} = G_{connector,lin} L_{cable,lin}^{-d}, where P_{o,lin} is the received power in a fiber-optic link, d is the cable length (typically in units of km), G_{connector,lin} is the gain in any connectors, and L_{cable,lin} is the loss in a 1 km cable.

3. P_{r,lin} = P_{t,lin} G_{t,lin} G_{r,lin} λ^2 / (4πd)^2, where λ is the wavelength (m), d is the path length (m), G_{t,lin} and G_{r,lin} are the linear gains in the antennas, P_{t,lin} is the transmit power (W), and P_{r,lin} is the received power (W). This is the Friis free space path loss formula.
These last two are what we will need in Section 6.4, when we discuss link budgets. The main idea is that
we have a limited amount of power which will be available at the receiver.
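For reference, converting the third expression (the Friis formula) term by term gives its dB form, which is how it will be used in a link budget:

[P_r]_dBW = [P_t]_dBW + [G_t]_dB + [G_r]_dB + 20 log_10(λ) − 20 log_10(4πd)

(Each multiplicative factor becomes an additive dB term; the first two expressions convert in the same way.)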
4 Time-Domain Concept Review
4.1 Periodicity
Def ’n: Periodic (continuous-time)
A signal x(t) is periodic if x(t) = x(t + T_0) for some constant T_0 ≠ 0, for all t ∈ R. The smallest such constant T_0 > 0 is the period.
If a signal is not periodic it is aperiodic.
Periodic signals have Fourier series representations, as defined in Rice Ch. 2.
Def ’n: Periodic (discrete-time)
A DT signal x(n) is periodic if x(n) = x(n + N_0) for some integer N_0 ≠ 0, for all integers n. The smallest positive integer N_0 is the period.
4.2 Impulse Functions
Def ’n: Impulse Function
The (Dirac) impulse function δ(t) is the function which makes

∫_{-∞}^{∞} x(t) δ(t) dt = x(0)        (5)

true for any function x(t) which is continuous at t = 0.
We are defining a function by its most important property, the 'sifting property'. Is there another definition which is more familiar?
Solution:

δ(t) = lim_{T→0} { 1/(2T), −T ≤ t ≤ T;  0, o.w. }
You can visualize δ(t) here as an infinitely high, infinitesimally wide pulse at the origin, with area one. This is
why it ‘pulls out’ the value of x(t) in the integral in (5).
Other properties of the impulse function:
• Time scaling,
• Symmetry,
• Sifting at arbitrary time t_0.
The continuous-time unit step function is

u(t) = { 1, t ≥ 0;  0, o.w. }
Example: Sifting Property
What is ∫_{-∞}^{∞} (sin(πt)/(πt)) δ(1 − t) dt?
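(A quick check, using the symmetry and sifting properties: δ(1 − t) = δ(t − 1), so the integral picks out the value of the integrand at t = 1:

∫_{-∞}^{∞} (sin(πt)/(πt)) δ(1 − t) dt = sin(π·1)/(π·1) = 0.)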
The discrete-time impulse function (also called the Kronecker delta or δ_K) is defined as:

δ(n) = { 1, n = 0;  0, o.w. }

(There is no need to get complicated with the math; this is well defined.) Also,

u(n) = { 1, n ≥ 0;  0, o.w. }
5 Bandwidth
Bandwidth is another critical resource for a digital communications system; we have various definitions to
quantify it. In short, it isn’t easy to describe a signal in the frequency domain with a single number. And, in
the end, a system will be designed to meet a spectral mask required by the FCC or system standard.
Continuous-time, any x(t): Laplace Transform, x(t) ↔ X(s)
Continuous-time, periodic: Fourier Series, x(t) ↔ a_k
Continuous-time, aperiodic: Fourier Transform, x(t) ↔ X(jω), where
    X(jω) = ∫_{t=-∞}^{∞} x(t) e^{-jωt} dt
    x(t) = (1/(2π)) ∫_{ω=-∞}^{∞} X(jω) e^{jωt} dω

Discrete-time, any x(n): z-Transform, x(n) ↔ X(z)
Discrete-time, periodic: Discrete Fourier Transform (DFT), x(n) ↔ X[k], where
    X[k] = Σ_{n=0}^{N-1} x(n) e^{-j(2π/N)kn}
    x(n) = (1/N) Σ_{k=0}^{N-1} X[k] e^{j(2π/N)nk}
Discrete-time, aperiodic: Discrete-Time Fourier Transform (DTFT), x(n) ↔ X(e^{jΩ}), where
    X(e^{jΩ}) = Σ_{n=-∞}^{∞} x(n) e^{-jΩn}
    x(n) = (1/(2π)) ∫_{Ω=-π}^{π} X(e^{jΩ}) e^{jΩn} dΩ

Table 1: Frequency Transforms
Intuitively, bandwidth is the maximum extent of our signal's frequency domain characterization, call it X(f). A baseband signal's absolute bandwidth is often defined as the W such that X(f) = 0 for all f except for the range −W ≤ f ≤ W. Other definitions for bandwidth are
• 3-dB bandwidth: B_{3dB} is the value of f such that |X(f)|^2 = |X(0)|^2 / 2.
• 90% bandwidth: B_{90%} is the value which captures 90% of the energy in the signal:

∫_{-B_{90%}}^{B_{90%}} |X(f)|^2 df = 0.90 ∫_{-∞}^{∞} |X(f)|^2 df
As a motivating example, I mention the square-root raised cosine (SRRC) pulse, which has the following
desirable Fourier transform:
H_RRC(f) =
    √(T_s),                                                        0 ≤ |f| ≤ (1−α)/(2T_s)
    √( (T_s/2) [ 1 + cos( (πT_s/α) ( |f| − (1−α)/(2T_s) ) ) ] ),   (1−α)/(2T_s) ≤ |f| ≤ (1+α)/(2T_s)
    0,                                                             o.w.        (6)
where α is a parameter called the “rolloff factor”. We can actually analyze this using the properties of the
Fourier transform and many of the standard transforms you’ll find in a Fourier transform table.
The SRRC and other pulse shapes are discussed in Appendix A, and we will go into more detail later on.
The purpose so far is to motivate practicing up on frequency transforms.
5.1 Continuous-time Frequency Transforms
Notes about continuous-time frequency transforms:
1. You are probably most familiar with the Laplace Transform. To convert it to the Fourier transform, we replace s with jω, where ω is the radial frequency, with units radians per second (rad/s).
2. You may prefer the radial frequency representation, but also feel free to use the rotational frequency f (which has units of cycles per second, or Hz). Frequency in Hz is more standard for communications; you should use it for intuition. In this case, just substitute ω = 2πf. You could write X(j2πf) as the notation for this, but typically you'll see it abbreviated as X(f). Note that the definition of the Fourier transform in the f domain loses the 1/(2π) in the inverse Fourier transform definition:

X(j2πf) = ∫_{t=-∞}^{∞} x(t) e^{-j2πft} dt
x(t) = ∫_{f=-∞}^{∞} X(j2πf) e^{j2πft} df        (7)
3. The Fourier series is limited to purely periodic signals. Both Laplace and Fourier transforms are not
limited to periodic signals.
4. Note that e^{jα} = cos(α) + j sin(α).
See Table 2.4.4 in the Rice book.
Example: Square Wave
Given a rectangular pulse x(t) = rect(t/T_s),

x(t) = { 1, −T_s/2 < t ≤ T_s/2;  0, o.w. }

What is the Fourier transform X(f)? Calculate both from the definition and from a table.

Solution: Method 1: From the definition:

X(jω) = ∫_{t=-T_s/2}^{T_s/2} e^{-jωt} dt
      = (1/(−jω)) e^{-jωt} |_{t=-T_s/2}^{T_s/2}
      = (1/(−jω)) ( e^{-jωT_s/2} − e^{jωT_s/2} )
      = 2 sin(ωT_s/2) / ω
      = T_s sin(ωT_s/2) / (ωT_s/2)

This uses the fact that (1/(−2j)) ( e^{-jα} − e^{jα} ) = sin(α). While it is sometimes convenient to replace sin(πx)/(πx) with sinc x, it is confusing because sinc(x) is sometimes defined as sin(πx)/(πx) and sometimes defined as (sin x)/x. No standard definition for 'sinc' exists! Rather than make a mistake because of this, the Rice book always writes out the expression fully. I will try to follow suit.

Method 2: From the tables and properties:

x(t) = g(2t) where g(t) = { 1, |t| < T_s;  0, o.w. }        (8)

From the table, G(jω) = 2T_s sin(ωT_s)/(ωT_s). From the properties, X(jω) = (1/2) G(jω/2). So

X(jω) = T_s sin(ωT_s/2) / (ωT_s/2)
See Figure 4(a).
Figure 4: (a) Fourier transform X(j2πf) of rect pulse with period T_s, and (b) power vs. frequency, 20 log_10(X(j2πf)/T_s).
Question: What if Y (jω) was a rect function? What would the inverse Fourier transform y(t) be?
5.1.1 Fourier Transform Properties
See Table 2.4.3 in the Rice book.
Assume that F{x(t)} = X(jω). Important properties of the Fourier transform:

1. Duality property:
   x(jω) = F{X(−t)}
   x(−jω) = F{X(t)}
   (Confusing. It says that you can go backwards on the Fourier transform, just remember to flip the result around the origin.)

2. Time shift property:
   F{x(t − t_0)} = e^{-jωt_0} X(jω)

3. Scaling property: for any real a ≠ 0,
   F{x(at)} = (1/|a|) X(jω/a)

4. Convolution property: if, additionally, y(t) has Fourier transform Y(jω),
   F{x(t) ⋆ y(t)} = X(jω) · Y(jω)

5. Modulation property:
   F{x(t) cos(ω_0 t)} = (1/2) X(j(ω − ω_0)) + (1/2) X(j(ω + ω_0))

6. Parseval's theorem: The energy calculated in the frequency domain is equal to the energy calculated in the time domain,
   ∫_{t=-∞}^{∞} |x(t)|^2 dt = ∫_{f=-∞}^{∞} |X(f)|^2 df = (1/(2π)) ∫_{ω=-∞}^{∞} |X(jω)|^2 dω
So do whichever one is easiest! Or, check your answer by doing both.
5.2 Linear Time Invariant (LTI) Filters
If a (deterministic) signal x(t) is input to an LTI filter with impulse response h(t), the output signal is
y(t) = h(t) ⋆ x(t)
Using the above convolution property,
Y (jω) = X(jω) H(jω)
5.3 Examples
Example: Applying FT Properties
If w(t) has the Fourier transform
W(jω) = jω / (1 + jω)

find X(jω) for the following waveforms:
1. x(t) = w(2t + 2)
2. x(t) = e^{-jt} w(t − 1)
3. x(t) = 2 ∂w(t)/∂t
4. x(t) = w(1 − t)
Solution: To be worked out in class.
Lecture 3
Today: (1) Bandpass Signals (2) Sampling
Solution: From the Examples at the end of Lecture 2:
1. Let x(t) = z(2t) where z(t) = w(t + 2). Then

   Z(jω) = e^{j2ω} jω/(1 + jω),  and  X(jω) = (1/2) Z(jω/2).

   So X(jω) = (1/2) e^{jω} (jω/2)/(1 + jω/2). Alternatively, let x(t) = r(t + 1) where r(t) = w(2t). Then

   R(jω) = (1/2) W(jω/2),  and  X(jω) = e^{jω} R(jω).

   Again, X(jω) = (1/2) e^{jω} (jω/2)/(1 + jω/2).

2. Let x(t) = e^{-jt} z(t) where z(t) = w(t − 1). Then Z(jω) = e^{-jω} W(jω), and X(jω) = Z(j(ω + 1)). So
   X(jω) = e^{-j(ω+1)} W(j(ω + 1)) = e^{-j(ω+1)} j(ω + 1)/(1 + j(ω + 1)).

3. X(jω) = 2jω · jω/(1 + jω) = −2ω^2/(1 + jω).

4. Let x(t) = y(−t), where y(t) = w(1 + t). Then

   X(jω) = Y(−jω),  where  Y(jω) = e^{jω} W(jω) = e^{jω} jω/(1 + jω).

   So X(jω) = e^{-jω} (−jω)/(1 − jω).
6 Bandpass Signals
Def ’n: Bandpass Signal
A bandpass signal x(t) has Fourier transform X(jω) = 0 for all ω such that |ω ± ω_c| > W/2, where W/2 < ω_c. Equivalently, a bandpass signal x(t) has Fourier transform X(f) = 0 for all f such that |f ± f_c| > B/2, where B/2 < f_c.
Figure 5: Bandpass signal with center frequency f_c and bandwidth B.
Realistically, X(jω) will not be exactly zero outside of the bandwidth, there will be some ‘sidelobes’ out of
the main band.
6.1 Upconversion
We can take a baseband signal x_C(t) and 'upconvert' it to be a bandpass signal by multiplying with a cosine at the desired center frequency, as shown in Fig. 6.
Figure 6: Modulator block diagram: the baseband signal x_C(t) is multiplied by the carrier cos(2πf_c t).
Example: Modulated Square Wave
We dealt last time with the square wave x(t) = rect(t/T_s). This time, we will look at:

x(t) = rect(t/T_s) cos(ω_c t)

where ω_c is the center frequency in radians/sec (f_c = ω_c/(2π) is the center frequency in Hz). Note that I use "rect" as follows:

rect(t) = { 1, −1/2 < t ≤ 1/2;  0, o.w. }

What is X(jω)?
Solution:
X(jω) = (1/(2π)) F{rect(t/T_s)} ⋆ F{cos(ω_c t)}
      = (1/(2π)) [ T_s sin(ωT_s/2)/(ωT_s/2) ] ⋆ π [ δ(ω − ω_c) + δ(ω + ω_c) ]
      = (T_s/2) sin[(ω − ω_c)T_s/2] / [(ω − ω_c)T_s/2] + (T_s/2) sin[(ω + ω_c)T_s/2] / [(ω + ω_c)T_s/2]
The plot of X(j2πf) is shown in Fig. 7.
Figure 7: Plot of X(j2πf) for modulated square wave example.
6.2 Downconversion of Bandpass Signals
How will we get back the desired baseband signal x_C(t) at the receiver?
1. Multiply (again) by the carrier 2 cos(ω_c t).
2. Input to a low-pass filter (LPF) with cutoff frequency ω_c.
This is shown in Fig. 8.
Figure 8: Demodulator block diagram: x(t) is multiplied by 2 cos(2πf_c t) and then low-pass filtered.
Assume the general modulated signal was,

x(t) = x_C(t) cos(ω_c t)
X(jω) = (1/2) X_C(j(ω − ω_c)) + (1/2) X_C(j(ω + ω_c))

What happens after the multiplication with the carrier in the receiver?

Solution:

X_1(jω) = (1/(2π)) X(jω) ⋆ 2π [ δ(ω − ω_c) + δ(ω + ω_c) ]
        = (1/2) X_C(j(ω − 2ω_c)) + (1/2) X_C(jω) + (1/2) X_C(jω) + (1/2) X_C(j(ω + 2ω_c))

There are components at 2ω_c and −2ω_c which will be canceled by the LPF. (Does the LPF need to be ideal?)

X_2(jω) = X_C(jω)

So x_2(t) = x_C(t).
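The whole up/down-conversion chain is easy to check numerically. Below is a minimal Python sketch under assumed parameters (a 50 Hz baseband tone, a 1 kHz carrier, and an ideal FFT-based brick-wall LPF); it is an illustration, not the course's Matlab code:

import numpy as np

fs = 10000.0                      # simulation sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s of signal
fc = 1000.0                       # carrier frequency (Hz)

x_c = np.cos(2 * np.pi * 50 * t)          # baseband signal (50 Hz tone)
x = x_c * np.cos(2 * np.pi * fc * t)      # upconversion
x1 = x * 2 * np.cos(2 * np.pi * fc * t)   # downconversion: multiply by 2 cos

# Ideal brick-wall LPF in the frequency domain, cutoff at fc
X1 = np.fft.fft(x1)
f = np.fft.fftfreq(len(x1), 1 / fs)
X2 = np.where(np.abs(f) < fc, X1, 0)
x2 = np.real(np.fft.ifft(X2))

print("max |x2 - x_c| =", np.max(np.abs(x2 - x_c)))  # tiny -> x_C(t) recovered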
7 Sampling
A common statement of the Nyquist sampling theorem is that a signal can be sampled at twice its bandwidth.
But the theorem really has to do with signal reconstruction from a sampled signal.
Theorem: (Nyquist Sampling Theorem.) Let x_c(t) be a baseband, continuous signal with bandwidth B (in Hz), i.e., X_c(jω) = 0 for all |ω| ≥ 2πB. Let x_c(t) be sampled at multiples of T, where 1/T ≥ 2B, to yield the sequence {x_c(nT)}_{n=-∞}^{∞}. Then

x_c(t) = 2BT Σ_{n=-∞}^{∞} x_c(nT) sin(2πB(t − nT)) / (2πB(t − nT)).        (9)

Proof: Not covered.
Notes:
• This is an interpolation procedure.
• Given {x_n}_{n=-∞}^{∞}, how would you find the maximum of x(t)?
• This is only precise when X(jω) = 0 for all |ω| ≥ 2πB.
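A direct implementation of (9) makes a handy sanity check. Here is a minimal Python sketch (my own stand-in, under assumed parameters, for the kind of experiment done in the EgNyquistInterpolation.m script mentioned later):

import numpy as np

def nyquist_interp(x_samples, T, B, t):
    """Reconstruct x(t) from samples x(nT) via Eq. (9); requires 1/T >= 2B."""
    n = np.arange(len(x_samples))
    # np.sinc(u) = sin(pi u)/(pi u), so sin(2piB d)/(2piB d) = np.sinc(2 B d)
    kern = np.sinc(2 * B * (t[:, None] - n[None, :] * T))
    return 2 * B * T * kern @ x_samples

# Bandlimited test signal: a 5 Hz cosine, B = 10 Hz, sampled at 1/T = 25 Hz
B, T = 10.0, 1 / 25.0
n = np.arange(0, 50)
x_n = np.cos(2 * np.pi * 5 * n * T)

t = np.linspace(0.5, 1.5, 200)            # evaluate away from the window edges
x_hat = nyquist_interp(x_n, T, B, t)
print(np.max(np.abs(x_hat - np.cos(2 * np.pi * 5 * t))))  # small (finite-window truncation error)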
7.1 Aliasing Due To Sampling
Essentially, sampling is the multiplication of an impulse train (at period T) with the desired signal x(t):

x_sa(t) = x(t) Σ_{n=-∞}^{∞} δ(t − nT)
x_sa(t) = Σ_{n=-∞}^{∞} x(nT) δ(t − nT)

What is the Fourier transform of x_sa(t)?

Solution: In the frequency domain, this is a convolution:

X_sa(jω) = (1/(2π)) X(jω) ⋆ (2π/T) Σ_{n=-∞}^{∞} δ(ω − 2πn/T)
         = (1/T) Σ_{n=-∞}^{∞} X( j(ω − 2πn/T) )   for all ω        (10)
         = (1/T) X(jω)   for |ω| < 2πB
This is shown graphically in the Rice book in Figure 2.12.
The Fourier transform of the sampled signal is many copies of X(jω) strung at integer multiples of 2π/T,
as shown in Fig. 9.
Figure 9: The effect of sampling on the frequency spectrum in terms of frequency f in Hz.
Example: Sinusoid sampled above and below Nyquist rate
Consider two sinusoidal signals sampled at 1/T = 25 Hz:

x_1(nT) = sin(2π 5 nT)
x_2(nT) = sin(2π 20 nT)

What are the two frequencies of the sinusoids, and what is the Nyquist rate? Which of them does the Nyquist theorem apply to? Draw the spectrums of the continuous signals x_1(t) and x_2(t), and indicate what the spectrum is of the sampled signals.
Figure 10 shows what happens when the Nyquist theorem is applied to each signal (whether or not it is valid). What observations would you make about Figure 10(b), compared to Figure 10(a)?
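To see the aliasing numerically before looking at the figures, here is a minimal Python sketch (my own illustration, not course code): the 20 Hz sinusoid sampled at 25 Hz produces exactly the same samples as a −5 Hz sinusoid, which is what the interpolation formula reconstructs.

import numpy as np

T = 1 / 25.0                 # sampling period (fs = 25 Hz)
n = np.arange(0, 25)         # one second of samples

x1 = np.sin(2 * np.pi * 5 * n * T)    # 5 Hz: below Nyquist (12.5 Hz)
x2 = np.sin(2 * np.pi * 20 * n * T)   # 20 Hz: above Nyquist -> aliases

# The 20 Hz samples are indistinguishable from a (20 - 25) = -5 Hz sinusoid:
alias = np.sin(2 * np.pi * (20 - 25) * n * T)
print(np.max(np.abs(x2 - alias)))     # ~1e-15: identical samples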
Figure 10: Sampled (a) x_1(nT) and (b) x_2(nT) are interpolated (—) using the Nyquist interpolation formula.

Example: Square vs. round pulse shape
Consider the square pulse considered before, x_1(t) = rect(t/T_s). Also consider a parabola pulse (this doesn't really exist in the wild – I'm making it up for an example.)
x_2(t) = { 1 − (2t/T_s)^2,  −T_s/2 ≤ t ≤ T_s/2;  0, o.w. }

What happens to x_1(t) and x_2(t) when they are sampled with period T?
In the Matlab code EgNyquistInterpolation.m we set T_s = 1/5 and T = 1/25. Then, the sampled pulses are interpolated using (9). Even though we've sampled at a pretty high rate, the reconstructed signals will not be perfect representations of the original, in particular for x_1(t). See Figure 11.
7.2 Connection to DTFT
Recall (10). We calculated the Fourier transform of the product by doing the convolution in the frequency
domain. Instead, we can calculate it directly from the formula.
Figure 11: Sampled (a) x_1(nT) and (b) x_2(nT) are interpolated (—) using the Nyquist interpolation formula.
Solution:

X_sa(jω) = ∫_{t=-∞}^{∞} Σ_{n=-∞}^{∞} x(nT) δ(t − nT) e^{-jωt} dt
         = Σ_{n=-∞}^{∞} x(nT) ∫_{t=-∞}^{∞} δ(t − nT) e^{-jωt} dt        (11)
         = Σ_{n=-∞}^{∞} x(nT) e^{-jωnT}

The discrete-time Fourier transform of x_sa(t), which I denote DTFT{x_sa(t)}, is:

DTFT{x_sa(t)} = X_d(e^{jΩ}) = Σ_{n=-∞}^{∞} x(nT) e^{-jΩn}

Essentially, the difference between the DTFT and the Fourier transform of the sampled signal is the relationship

Ω = ωT = 2πfT
But this defines the relationship only between the Fourier transform of the sampled signal. How can we relate this to the FT of the continuous-time signal? A: Using (10). We have that X_d(e^{jΩ}) = X_sa(jΩ/T). Then plugging into (10),

Solution:

X_d(e^{jΩ}) = X_sa(jΩ/T)
            = (1/T) Σ_{n=-∞}^{∞} X( j(Ω/T − 2πn/T) )
            = (1/T) Σ_{n=-∞}^{∞} X( j(Ω − 2πn)/T )        (12)

If x(t) is sufficiently bandlimited, X_d(e^{jΩ}) ∝ X(jΩ/T) in the interval −π < Ω < π. This relationship holds for the Fourier transform of the original, continuous-time signal between −π < Ω < π if and only if the original signal x(t) is sufficiently bandlimited so that the Nyquist sampling theorem applies.
Notes:
• Most things in the DTFT table in Table 2.8 (page 68) are analogous to continuous functions which are
not bandlimited. So - don’t expect the last form to apply to any continuous function.
• The DTFT is periodic in Ω with period 2π.
• Don’t be confused: The DTFT is a continuous function of Ω.
Lecture 4
Today: (1) Example: Bandpass Sampling (2) Orthogonality / Signal Spaces I
7.3 Bandpass sampling
Bandpass sampling is the use of sampling to perform down-conversion via frequency aliasing. We assume that
the signal is truly a bandpass signal (has no DC component). Then, we sample with rate 1/T and we rely on
aliasing to produce an image of the spectrum at a relatively low frequency (less than one-half the sampling rate).
Example: Bandpass Sampling
This is Example 2.6.2 (pp. 60-65) in the Rice book. In short, we have a bandpass signal in the frequency band,
450 Hz to 460 Hz. The center frequency is 455 Hz. We want to sample the signal. We have two choices:
1. Sample at more than twice the maximum frequency.
2. Sample at more than twice the bandwidth.
The first gives a sampling frequency of more than 920 Hz. The latter gives a sampling frequency of more than
20 Hz. There is clearly a benefit to the second part. However, traditionally, we would need to downconvert the
signal to baseband in order to sample it at the lower rate. (See Lecture 3 notes).
What is the sampled spectrum, and what is the bandwidth of the sampled signal in radians/sample if the
sampling rate is:
1. 1000 samples/sec?
2. 140 samples/sec?
Solution:
1. Sample at 1000 samples/sec: Ω_c = 2π (455/1000) = (455/500)π. The signal is very sparse – it occupies only 2π · (10 Hz)/(1000 samples/sec) = π/50 radians/sample.
2. Sample at 140 samples/sec: Copies of the entire frequency spectrum will appear in the frequency domain at multiples of 140 Hz, or 2π·140 radians/sec. The discrete-time frequency is modulo 2π, so the center frequency of the sampled signal is:

Ω_c = 2π (455/140) mod 2π = (13π/2) mod 2π = π/2

The value π/2 radians/sample is equivalent to 35 cycles/second. The bandwidth is 2π (10/140) = π/7 radians/sample of sampled spectrum.
8 Orthogonality
From the first lecture, we talked about how digital communications is largely about using (and choosing some
combination of) a discrete set of waveforms to convey information on the (continuous-time, real-valued) channel.
At the receiver, the idea is to determine which waveforms were sent and in what quantities.
This choice of waveforms is partially determined by bandwidth, which we have covered. It is also determined
by orthogonality. Loosely, orthogonality of waveforms means that some combination of the waveforms can be
uniquely separated at the receiver. This is the critical concept needed to understand digital receivers. To build
a better framework for it, we’ll start with something you’re probably more familiar with: vector multiplication.
8.1 Inner Product of Vectors
We often use an idea called inner product, which you’re familiar with from vectors (as the dot product). If we
have two vectors x and y,
x = [x_1, x_2, ..., x_n]^T,    y = [y_1, y_2, ..., y_n]^T

Then we can take their inner product as:

x^T y = Σ_i x_i y_i

Note the transpose ^T 'in-between' an inner product. The inner product is also denoted ⟨x, y⟩. Finally, the norm of a vector is the square root of the inner product of a vector with itself,

‖x‖ = √⟨x, x⟩

A norm is always indicated with the ‖ ‖ symbols outside of the vector.
Example: Example Inner Products
Let x = [1, 1, 1]^T, y = [0, 1, 1]^T, z = [1, 0, 0]^T. What are:
• ⟨x, y⟩?
• ⟨z, y⟩?
• ‖x‖?
• ‖z‖?
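These can be checked in one line each with numpy (a minimal sketch; nothing here beyond the definitions above):

import numpy as np

x = np.array([1, 1, 1])
y = np.array([0, 1, 1])
z = np.array([1, 0, 0])

print(np.dot(x, y))          # <x, y> = 2
print(np.dot(z, y))          # <z, y> = 0  (z and y are orthogonal)
print(np.linalg.norm(x))     # ||x|| = sqrt(3) ~ 1.732
print(np.linalg.norm(z))     # ||z|| = 1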
Recall the inner product between two vectors can also be written as
x^T y = ‖x‖ ‖y‖ cos θ
where θ is the angle between vectors x and y. In words, the inner product is a measure of ‘same-direction-ness’:
when positive, the two vectors are pointing in a similar direction, when negative, they are pointing in somewhat
opposite directions; when zero, they’re at cross-directions.
8.2 Inner Product of Functions
The inner product is not limited to vectors. It also is applied to the ‘space’ of square-integrable real functions,
i.e., those functions x(t) for which
_

−∞
x
2
(t)dt < ∞.
The ‘space’ of square-integrable complex functions, i.e., those functions x(t) for which
∫_{−∞}^{∞} |x(t)|² dt < ∞

also has an inner product. The inner products are:

⟨x(t), y(t)⟩ = ∫_{−∞}^{∞} x(t) y(t) dt       (real functions)
⟨x(t), y(t)⟩ = ∫_{−∞}^{∞} x(t) y*(t) dt      (complex functions)
As a result,
‖x(t)‖ = √( ∫_{−∞}^{∞} x²(t) dt )        (real functions)
‖x(t)‖ = √( ∫_{−∞}^{∞} |x(t)|² dt )       (complex functions)
Example: Sine and Cosine
Let
x(t) = { cos(2πt), 0 < t ≤ 1;  0, o.w. }
y(t) = { sin(2πt), 0 < t ≤ 1;  0, o.w. }
What are ‖x(t)‖ and ‖y(t)‖? What is ⟨x(t), y(t)⟩?
Solution: Using cos²x = (1/2)(1 + cos 2x),

‖x(t)‖ = √( ∫_0^1 cos²(2πt) dt ) = √( (1/2) [ t + sin(4πt)/(4π) ]_0^1 ) = √(1/2)

The solution for ‖y(t)‖ also turns out to be √(1/2). Using sin 2x = 2 cos x sin x, the inner product is,

⟨x(t), y(t)⟩ = ∫_0^1 cos(2πt) sin(2πt) dt
             = ∫_0^1 (1/2) sin(4πt) dt
             = [ −cos(4πt)/(8π) ]_0^1
             = −(1/(8π)) (1 − 1) = 0
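A numerical check of this example (a sketch using simple rectangular-rule integration over (0, 1]):

dt = 1e-4; t = dt/2:dt:1;             % fine grid on (0,1]
x = cos(2*pi*t); y = sin(2*pi*t);
norm_x = sqrt(sum(x.^2)*dt);          % approx sqrt(1/2) = 0.7071
ip_xy  = sum(x.*y)*dt;                % approx 0: x and y are orthogonal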
8.3 Definitions
Def ’n: Orthogonal
Two signals x(t) and y(t) are orthogonal if
⟨x(t), y(t)⟩ = 0
Def ’n: Orthonormal
Two signals x(t) and y(t) are orthonormal if they are orthogonal and they both have norm 1, i.e.,
‖x(t)‖ = 1 and ‖y(t)‖ = 1
(a) Are x(t) and y(t) from the sine/cosine example above orthogonal? (b) Are they orthonormal?
Solution: (a) Yes. (b) No. They could be orthonormal if they'd been scaled by √2.
Figure 12: Two Walsh-Hadamard functions.
Example: Walsh-Hadamard 2 Functions
Let
x_1(t) = { A, 0 < t ≤ 0.5;  −A, 0.5 < t ≤ 1;  0, o.w. }
x_2(t) = { A, 0 < t ≤ 1;  0, o.w. }
These are shown in Fig. 12. (a) Are x_1(t) and x_2(t) orthogonal? (b) Are they orthonormal?
Solution: (a) Yes. (b) Only if A = 1.
8.4 Orthogonal Sets
Def ’n: Orthogonal Set
M signals x_1(t), ..., x_M(t) are mutually orthogonal, or form an orthogonal set, if ⟨x_i(t), x_j(t)⟩ = 0 for all i ≠ j.

Def 'n: Orthonormal Set
M mutually orthogonal signals x_1(t), ..., x_M(t) form an orthonormal set if ‖x_i‖ = 1 for all i.
Figure 13: Walsh-Hadamard 2-length and 4-length functions in image form. Each function is a row of the image,
where crimson red is 1 and black is -1.

Figure 14: Walsh-Hadamard 64-length functions in image form.
Do the 64 WH functions form an orthonormal set? Why or why not?
Solution: Yes. For every pair i, j with i ≠ j, the two functions agree half of the time and disagree half of the time. So their product is +1 half the time and −1 half the time; these cancel and the inner product is zero. The norm of the function in row i is 1 since the amplitude is always +1 or −1.
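A quick way to see this numerically is with Matlab's hadamard function (a sketch; the 64×64 Walsh-Hadamard matrix has ±1 entries, and each row is scaled here so that it has unit norm):

H = hadamard(64)/sqrt(64);     % rows are length-64 WH sequences, unit norm
G = H*H';                      % matrix of all pairwise inner products
max(max(abs(G - eye(64))))     % returns 0: the rows form an orthonormal set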
Example: CDMA and Walsh-Hadamard
This example is just for your information (not for any test). The set of 64-length Walsh-Hadamard sequences
is used in CDMA (IS-95) in two ways:
1. Reverse Link: Modulation: For each group of 6 bits to be sent, the modulator chooses one of the 64
possible 64-length functions.
2. Forward Link: Channelization or Multiple Access: The base station assigns each user a single WH function
(channel). It uses 64 functions (channels). The data on each channel is multiplied by a Walsh function so
that it is orthogonal to the data received by all other users.
Lecture 5
Today: (1) Orthogonal Signal Representations
9 Orthonormal Signal Representations
Last time we defined the inner product for vectors and for functions. We talked about orthogonality and
orthonormality of functions and sets of functions. Now, we consider how to take a set of arbitrary signals and
represent them as vectors in an orthonormal basis.
What are some common orthonormal signal representations?
• Nyquist sampling: sinc functions centered at sampling times.
• Fourier series: complex sinusoids at different frequencies
• Sine and cosine at the same frequency
• Wavelets
And, we will come up with others. Each one has a limitation – only a certain set of functions can be exactly
represented in a particular signal representation. Essentially, we must limit ourselves to some subset of all possible functions.
We refer to the set of arbitrary signals as:
S = {s_0(t), s_1(t), ..., s_{M−1}(t)}
For example, a transmitter may be allowed to send any one of these M signals in order to convey information
to a receiver.
We’re going to introduce another set called an orthonormal basis:
B = {φ_0(t), φ_1(t), ..., φ_{N−1}(t)}
Each function in B is called a basis function. You can think of B as an unambiguous and useful language to
represent the signals from our set S. Or, in analogy to color printers, a color printer can produce any of millions
of possible colors (signal set), only using black, red, blue, and green (basis set) (or black, cyan, magenta, yellow).
9.1 Orthonormal Bases
Def ’n: Span
The span of the set B is the set of all functions which are linear combinations of the functions in B. The span
is referred to as Span{B} and is

Span{B} = { Σ_{k=0}^{N−1} a_k φ_k(t) :  a_0, ..., a_{N−1} ∈ ℝ }
Def ’n: Orthogonal Basis
For any arbitrary set of signals S = {s_i(t)}_{i=0}^{M−1}, the orthogonal basis is an orthogonal set of the smallest size N,
B = {φ_i(t)}_{i=0}^{N−1}, for which s_i(t) ∈ Span{B} for every i = 0, ..., M − 1. The value of N is called the dimension
of the signal set.
Def ’n: Orthonormal Basis
The orthonormal basis is an orthogonal basis in which each basis function has norm 1.
Notes:
• The orthonormal basis is not unique. If you can find one basis, you can use, for example, {−φ_k(t)}_{k=0}^{N−1} as
another orthonormal basis.
• All orthogonal bases will have size N: no smaller basis can include all signals s_i(t) in its span, and a larger
set would not be a basis by definition.
The signal set might be a set of signals we can use to transmit particular bits (or sets of bits). (As the
Walsh-Hadamard functions are used in IS-95 reverse link). An orthonormal basis for an arbitrary signal set
tells us:
• how to build the receiver,
• how to represent the signals in ‘signal-space’, and
• how to quickly analyze the error rate of the scheme.
9.2 Synthesis
Consider one of the signals in our signal set, s_i(t). Given that it is in the span of our basis B, it can be
represented as a linear combination of the basis functions,

s_i(t) = Σ_{k=0}^{N−1} a_{i,k} φ_k(t)

The particular constants a_{i,k} are calculated using the inner product:

a_{i,k} = ⟨s_i(t), φ_k(t)⟩

The projection of one signal i onto basis function k is defined as a_{i,k} φ_k(t) = ⟨s_i(t), φ_k(t)⟩ φ_k(t). Then the signal
is equal to the sum of its projections onto the basis functions.
Why is this?
Solution: Since s_i(t) ∈ Span{B}, we know there are some constants {a_{i,k}}_{k=0}^{N−1} such that

s_i(t) = Σ_{k=0}^{N−1} a_{i,k} φ_k(t)

Taking the inner product of both sides with φ_j(t),

⟨s_i(t), φ_j(t)⟩ = ⟨ Σ_{k=0}^{N−1} a_{i,k} φ_k(t), φ_j(t) ⟩
⟨s_i(t), φ_j(t)⟩ = Σ_{k=0}^{N−1} a_{i,k} ⟨φ_k(t), φ_j(t)⟩
⟨s_i(t), φ_j(t)⟩ = a_{i,j}
So, now we can represent a signal by a vector,

s_i = [a_{i,0}, a_{i,1}, ..., a_{i,N−1}]^T

This and the basis functions completely represent each signal. Plotting {s_i}_i in an N-dimensional grid is
termed a constellation diagram. Generally, this space that we're plotting in is called signal space.

We can also synthesize any of our M signals in the signal set by adding the proper linear combination of
the N bases. By choosing one of the M signals, we convey information, specifically, log_2 M bits of information.
(Generally, we choose M to be a power of 2 for simplicity.)
See Figure 5.5 in Rice (page 254), which shows a block diagram of how a transmitter would synthesize one
of the M signals to send, based on an input bitstream.
Example: Position-shifted pulses
Plot the signal space diagram for the signals,
s_0(t) = u(t) − u(t−1)
s_1(t) = u(t−1) − u(t−2)
s_2(t) = u(t) − u(t−2)

given the orthonormal basis,

φ_0(t) = u(t) − u(t−1)
φ_1(t) = u(t−1) − u(t−2)

What are the signal space vectors, s_0, s_1, and s_2?

Solution: They are s_0 = [1, 0]^T, s_1 = [0, 1]^T, and s_2 = [1, 1]^T. They are plotted in the signal space diagram in
Figure 15.
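These coefficients can also be computed numerically by taking inner products with the basis functions, as in the following sketch (rectangular-rule integration over a fine time grid):

dt = 1e-3; t = dt/2:dt:2;                    % time grid covering (0,2]
u = @(tau) (tau > 0);                        % unit step
phi0 = u(t) - u(t-1);  phi1 = u(t-1) - u(t-2);
s2 = u(t) - u(t-2);                          % third signal in the example
a2 = [sum(s2.*phi0)*dt, sum(s2.*phi1)*dt]    % returns approximately [1 1]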
Energy:
• Energy can be calculated in signal space as
Energy{s_i(t)} = ∫_{−∞}^{∞} s_i²(t) dt = Σ_{k=0}^{N−1} a²_{i,k}

Proof?
Figure 15: Signal space diagram for position-shifted signals example.
• We will find out later that it is the distances between the points in signal space which determine the bit
error rate performance of the receiver.
d_{i,j} = √( Σ_{k=0}^{N−1} (a_{i,k} − a_{j,k})² )

for i, j in {0, ..., M−1}.
• Although different ON bases can be used, the energy and distance between points will not change.
Example: Amplitude-shifted signals
Now consider
s_0(t) = 1.5[u(t) − u(t−1)]
s_1(t) = 0.5[u(t) − u(t−1)]
s_2(t) = −0.5[u(t) − u(t−1)]
s_3(t) = −1.5[u(t) − u(t−1)]

and the orthonormal basis,

φ_1(t) = u(t) − u(t−1)

What are the signal space vectors for the signals {s_i(t)}? What are their energies?

Solution: s_0 = [1.5], s_1 = [0.5], s_2 = [−0.5], s_3 = [−1.5]. See Figure 16.
Figure 16: Signal space diagram for amplitude-shifted signals example.
Energies are just the squared magnitude of the vector: 2.25, 0.25, 0.25, and 2.25, respectively.
9.3 Analysis
At a receiver, our job will be to analyze the received signal (a function) and to decide which of the M possible
signals was sent. This is the task of analysis. It turns out that an orthonormal basis makes our analysis very
straightforward and elegant.
We won’t receive exactly what we sent - there will be additional functions added to the signal function we
sent.
• Thermal noise
• Interference from other users
• Self-interference
We won’t get into these problems today. But, we might say that if we send signal m, i.e., s
m
(t) from our signal
set, then we would receive
r(t) = s
m
(t) +w(t)
where the w(t) is the sum of all of the additive signals that we did not intend to receive. But w(t) might
not (probably not) be in the span of our basis B, so r(t) would not be in Span¦B¦ either. What is the best
approximation to r(t) in the signal space? Specifically, what is ˆ r(t) ∈ Span¦B¦ such that the energy of the
difference between ˆ r(t) and r(t) is minimized, i.e.,
argmin
ˆ r(t)∈Span{B}
_

−∞
[ˆ r(t) −r(t)[
2
dt (13)
Solution: Since r̂(t) ∈ Span{B}, it can be represented as a vector in signal space,

x = [x_0, x_1, ..., x_{N−1}]^T

and the synthesis equation is

r̂(t) = Σ_{k=0}^{N−1} x_k φ_k(t)

If you plug in the above expression for r̂(t) into (13), and then find the minimum with respect to each x_k, you'd
see that the minimum error is at

x_k = ∫_{−∞}^{∞} r(t) φ_k(t) dt

that is, x_k = ⟨r(t), φ_k(t)⟩, for k = 0, ..., N−1.
Example: Analysis using a Walsh-Hadamard 2 Basis
See Figure 17. Let s_0(t) = φ_0(t) and s_1(t) = φ_1(t). What is r̂(t)?

Solution:

r̂(t) = { 1, 0 ≤ t < 1;  −1/2, 1 ≤ t < 2;  0, o.w. }
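A sketch of this kind of projection in Matlab. The received waveform r below is a made-up example (not the exact r(t) of Figure 17), and the two basis functions are a unit-norm Walsh-Hadamard 2 pair on [0, 2); the component of r outside Span{B} is dropped by the projection:

dt = 1e-3; t = dt/2:dt:2;                      % time grid on (0,2]
phi0 = ones(size(t))/sqrt(2);                  % WH-2 basis, unit norm on [0,2)
phi1 = ((t<1) - (t>=1))/sqrt(2);
r = 2*(t<1) - (t>=1) + 0.3*sin(2*pi*t);        % hypothetical received waveform
x0 = sum(r.*phi0)*dt;  x1 = sum(r.*phi1)*dt;   % x_k = <r, phi_k>
rhat = x0*phi0 + x1*phi1;                      % best approximation in Span{B}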
Lecture 6
Today: (1) Multivariate Distributions
Figure 17: Signal and basis functions for Analysis example.
10 Multi-Variate Distributions
For two random variables X_1 and X_2,

• Joint CDF: F_{X_1,X_2}(x_1, x_2) = P[{X_1 ≤ x_1} ∩ {X_2 ≤ x_2}]. It is the probability that both events happen
simultaneously.
• Joint pmf: P_{X_1,X_2}(x_1, x_2) = P[{X_1 = x_1} ∩ {X_2 = x_2}]. It is the probability that both events happen
simultaneously.
• Joint pdf: f_{X_1,X_2}(x_1, x_2) = ∂²/(∂x_1 ∂x_2) F_{X_1,X_2}(x_1, x_2)
The pdf and pmf integrate / sum to one, and are non-negative. The CDF is non-negative and non-decreasing,
with lim_{x_i→−∞} F_{X_1,X_2}(x_1, x_2) = 0 and lim_{x_1,x_2→+∞} F_{X_1,X_2}(x_1, x_2) = 1.
Note: Two errors in Rice book dealing with the definition of the CDF. Eqn 4.9 should be:
F_X(x) = P[X ≤ x] = ∫_{−∞}^{x} f_X(t) dt

and Eqn 4.15 should be:

F_X(x) = P[X ≤ x]
To find the probability of an event, you integrate. For example, for event B ∈ S,
• Discrete case: P[B] = Σ_{(x_1,x_2)∈B} P_{X_1,X_2}(x_1, x_2)
• Continuous case: P[B] = ∬_{(x_1,x_2)∈B} f_{X_1,X_2}(x_1, x_2) dx_1 dx_2
The marginal distributions are:
• Marginal pmf: P_{X_2}(x_2) = Σ_{x_1∈S_{X_1}} P_{X_1,X_2}(x_1, x_2)
• Marginal pdf: f_{X_2}(x_2) = ∫_{x_1∈S_{X_1}} f_{X_1,X_2}(x_1, x_2) dx_1
Two random variables X_1 and X_2 are independent iff for all x_1 and x_2,

• P_{X_1,X_2}(x_1, x_2) = P_{X_1}(x_1) P_{X_2}(x_2)
• f_{X_1,X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2)
10.1 Random Vectors
Def ’n: Random Vector
A random vector (R.V.) is a list of multiple random variables X_1, X_2, ..., X_n,

X = [X_1, X_2, ..., X_n]^T

Here are the models of R.V.s:

1. The CDF of R.V. X is F_X(x) = F_{X_1,...,X_n}(x_1, ..., x_n) = P[X_1 ≤ x_1, ..., X_n ≤ x_n].
2. The pmf of a discrete R.V. X is P_X(x) = P_{X_1,...,X_n}(x_1, ..., x_n) = P[X_1 = x_1, ..., X_n = x_n].
3. The pdf of a continuous R.V. X is f_X(x) = f_{X_1,...,X_n}(x_1, ..., x_n) = ∂^n/(∂x_1 ⋯ ∂x_n) F_X(x).
10.2 Conditional Distributions
Given event B ∈ S which has P [B] > 0, the joint probability conditioned on event B is
• Discrete case:
P_{X_1,X_2|B}(x_1, x_2) = { P_{X_1,X_2}(x_1, x_2) / P[B],  (x_1, x_2) ∈ B;  0, o.w. }

• Continuous case:
f_{X_1,X_2|B}(x_1, x_2) = { f_{X_1,X_2}(x_1, x_2) / P[B],  (x_1, x_2) ∈ B;  0, o.w. }

Given r.v.s X_1 and X_2,

• Discrete case: The conditional pmf of X_1 given X_2 = x_2, where P_{X_2}(x_2) > 0, is
P_{X_1|X_2}(x_1|x_2) = P_{X_1,X_2}(x_1, x_2) / P_{X_2}(x_2)

• Continuous case: The conditional pdf of X_1 given X_2 = x_2, where f_{X_2}(x_2) > 0, is
f_{X_1|X_2}(x_1|x_2) = f_{X_1,X_2}(x_1, x_2) / f_{X_2}(x_2)
Def ’n: Bayes’ Law
Bayes’ Law is a reformulation of he definition of the marginal pdf. It is written either as:
f_{X_1,X_2}(x_1, x_2) = f_{X_2|X_1}(x_2|x_1) f_{X_1}(x_1)

or

f_{X_1|X_2}(x_1|x_2) = f_{X_2|X_1}(x_2|x_1) f_{X_1}(x_1) / f_{X_2}(x_2)
10.3 Simulation of Digital Communication Systems
A simulation of a digital communication system is often used to estimate a bit error rate. Each bit can either
be demodulated without error, or with error. Thus the simulation of one bit is a Bernoulli trial. This trial E_i
is in error (E_i = 1) with probability p_e (the true bit error rate) and correct (E_i = 0) with probability 1 − p_e.
What type of random variable is E_i?
Simulations run many bits, say N bits, through a model of the communication system, and count the number
of bits that are in error. Let S = Σ_{i=1}^{N} E_i, and assume that {E_i} are independent and identically distributed
(i.i.d.).

1. What type of random variable is S?
2. What is the pmf of S?
3. What is the mean and variance of S?
Solution: E_i is called a Bernoulli r.v. and S is called a Binomial r.v., with pmf

P_S(s) = (N choose s) p_e^s (1 − p_e)^{N−s}

The mean of S is the same as the mean of the sum of {E_i},

E_S[S] = E_{{E_i}}[ Σ_{i=1}^{N} E_i ] = Σ_{i=1}^{N} E_{E_i}[E_i] = Σ_{i=1}^{N} [(1 − p_e)·0 + p_e·1] = N p_e

We can find the variance of S the same way:

Var_S[S] = Var_{{E_i}}[ Σ_{i=1}^{N} E_i ] = Σ_{i=1}^{N} Var_{E_i}[E_i]
         = Σ_{i=1}^{N} [ (1 − p_e)(0 − p_e)² + p_e(1 − p_e)² ]
         = Σ_{i=1}^{N} [ (1 − p_e)p_e² + (1 − p_e)(p_e − p_e²) ]
         = N p_e(1 − p_e)
We may also be interested in knowing how many bits to run in order to get an estimate of the bit error rate.
For example, if we run a simulation and get zero bit errors, we won’t have a very good idea of the bit error rate.
Let T_1 be the time (number of bits) up to and including the first error.

1. What type of random variable is T_1?
2. What is the pmf of T_1?
3. What is the mean of T_1?
Solution: T_1 is a Geometric r.v. with pmf

P_{T_1}(t) = (1 − p_e)^{t−1} p_e

The mean of T_1 is

E[T_1] = 1/p_e

Note the variance of T_1 is Var[T_1] = (1 − p_e)/p_e², so the standard deviation for very low p_e is almost the same
as the expected value.

So, even if we run an experiment until the first bit error, our estimate of p_e will have relatively high variance.
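As a concrete illustration, here is a minimal Matlab sketch of this kind of Monte-Carlo bit error rate estimate. The "system model" here is the simplest possible one: each bit is flipped independently with the true probability p_e, which in a real simulation would come from modeling the modulator, channel, and detector:

pe = 1e-3;  N = 1e6;           % true BER and number of simulated bits
E  = rand(1,N) < pe;           % Bernoulli error indicators E_i
S  = sum(E);                   % Binomial: mean N*pe, variance N*pe*(1-pe)
pe_hat = S/N                   % estimated bit error rate
T1 = find(E,1)                 % Geometric: index of the first error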
10.4 Mixed Discrete and Continuous Joint Variables
This was not covered in ECE 5510, although it was in the Yates & Goodman textbook. We'll often have X_1
discrete and X_2 continuous.
Example: Digital communication system in noise
Let X_1 be the transmitted signal voltage, and X_2 be the received signal voltage, which is modeled as

X_2 = aX_1 + N

where N is a continuous-valued random variable representing the additive noise of the channel, and a is a
constant which represents the attenuation of the channel. In a digital system, X_1 may take a discrete set of
values, e.g., {1.5, 0.5, −0.5, −1.5}. But the noise N may be continuous, e.g., Gaussian with mean μ and variance
σ². As long as σ² > 0, then X_2 will be continuous-valued.
Work-around: We will sometimes use a pdf for a discrete random variable. For instance, if

P_{X_1}(x_1) = { 0.6, x_1 = 0;  0.4, x_1 = 1;  0, o.w. }

then we would write a pdf with the probabilities as amplitudes of a Dirac delta function centered at the value
that has that probability,

f_{X_1}(x_1) = 0.6 δ(x_1) + 0.4 δ(x_1 − 1)

This pdf has integral 1, and non-negative values for all x_1. Any probability integral will return the proper value.
For example, the probability that −0.5 < x_1 < 0.5 would be an integral that would return 0.6.
Example: Joint distribution of X_1, X_2
Consider the channel model X_2 = X_1 + N, and

f_N(n) = (1/√(2πσ²)) e^{−n²/(2σ²)}

where σ² is some known variance, and the r.v. X_1 is independent of N with

P_{X_1}(x_1) = { 0.5, x_1 = 0;  0.5, x_1 = 1;  0, o.w. }
1. What is the pdf of X_2?
2. What is the joint p.d.f. of (X_1, X_2)?
Solution:

1. When two independent r.v.s are added, the pdf of the sum is the convolution of the pdfs of the inputs.
Writing

f_{X_1}(x_1) = 0.5 δ(x_1) + 0.5 δ(x_1 − 1)

we convolve this with f_N(n) above, to get

f_{X_2}(x_2) = 0.5 f_N(x_2) + 0.5 f_N(x_2 − 1)
f_{X_2}(x_2) = (1/(2√(2πσ²))) [ e^{−x_2²/(2σ²)} + e^{−(x_2−1)²/(2σ²)} ]
2. What is the joint p.d.f. of (X_1, X_2)? Since X_1 and X_2 are NOT independent, we cannot simply multiply
the marginal pdfs together. It is necessary to use Bayes' Law.

f_{X_1,X_2}(x_1, x_2) = f_{X_2|X_1}(x_2|x_1) f_{X_1}(x_1)

Given a value of X_1 (either 0 or 1) we can write down the pdf of X_2. So break this into two cases:

f_{X_1,X_2}(x_1, x_2) = { f_{X_2|X_1}(x_2|0) f_{X_1}(0), x_1 = 0;  f_{X_2|X_1}(x_2|1) f_{X_1}(1), x_1 = 1;  0, o.w. }
f_{X_1,X_2}(x_1, x_2) = { 0.5 f_N(x_2), x_1 = 0;  0.5 f_N(x_2 − 1), x_1 = 1;  0, o.w. }
f_{X_1,X_2}(x_1, x_2) = 0.5 f_N(x_2) δ(x_1) + 0.5 f_N(x_2 − 1) δ(x_1 − 1)

These last two lines are completely equivalent. Use whichever seems more convenient for you. See Figure 18.
Figure 18: Joint pdf of X_1 and X_2, the input and output (respectively) of the example additive noise communication system, when σ = 1.
10.5 Expectation
Def ’n: Expected Value (Joint)
The expected value of a function g(X_1, X_2) of random variables X_1 and X_2 is given by,

1. Discrete: E[g(X_1, X_2)] = Σ_{x_1∈S_{X_1}} Σ_{x_2∈S_{X_2}} g(x_1, x_2) P_{X_1,X_2}(x_1, x_2)
2. Continuous: E[g(X_1, X_2)] = ∫_{x_1∈S_{X_1}} ∫_{x_2∈S_{X_2}} g(x_1, x_2) f_{X_1,X_2}(x_1, x_2) dx_1 dx_2

Typical functions g(X_1, X_2) are:

• Mean of X_1 or X_2: g(X_1, X_2) = X_1 or g(X_1, X_2) = X_2 will result in the means μ_{X_1} and μ_{X_2}.
• Variance (or 2nd central moment) of X_1 or X_2: g(X_1, X_2) = (X_1 − μ_{X_1})² or g(X_1, X_2) = (X_2 − μ_{X_2})².
Often denoted σ²_{X_1} and σ²_{X_2}.
• Covariance of X_1 and X_2: g(X_1, X_2) = (X_1 − μ_{X_1})(X_2 − μ_{X_2}).
• Expected value of the product of X_1 and X_2, also called the 'correlation' of X_1 and X_2: g(X_1, X_2) = X_1 X_2.
10.6 Gaussian Random Variables
For a single Gaussian r.v. X with mean μ_X and variance σ²_X, we have the pdf,

f_X(x) = (1/√(2πσ²_X)) e^{−(x−μ_X)²/(2σ²_X)}

Consider Y to be Gaussian with mean 0 and variance 1. Then, the CDF of Y is denoted as F_Y(y) = P[Y ≤ y] =
Φ(y). So, for X, which has non-zero mean and non-unit variance, we can write its CDF as

F_X(x) = P[X ≤ x] = Φ( (x − μ_X)/σ_X )
You can prove this by showing that the event X ≤ x is the same as the event

(X − μ_X)/σ_X ≤ (x − μ_X)/σ_X

Since the left-hand side is a unit-variance, zero mean Gaussian random variable, we can write the probability
of this event using the unit-variance, zero mean Gaussian CDF.
10.6.1 Complementary CDF
The probability that a unit-variance, zero mean Gaussian r.v. X exceeds some value x is one minus the CDF,
that is, 1 −Φ(x). This is so common in digital communications, it is given its own name, Q(x),
Q(x) = P[X > x] = 1 − Φ(x)

What is Q(x) in integral form?

Q(x) = ∫_x^∞ (1/√(2π)) e^{−w²/2} dw

For a Gaussian r.v. X with mean μ_X and variance σ²_X,

P[X > x] = Q( (x − μ_X)/σ_X ) = 1 − Φ( (x − μ_X)/σ_X )
10.6.2 Error Function
In math, in some texts, and in Matlab, the Q(x) function is not used. Instead, there is a function called erf(x)
erf(x) ≜ (2/√π) ∫_0^x e^{−t²} dt
Example: Relationship between Q() and erf()
What is the functional relationship between Q() and erf()?
Solution: Substituting t = u/√2 (and thus dt = du/√2),

erf(x) = (2/√(2π)) ∫_0^{√2 x} e^{−u²/2} du
       = 2 ∫_0^{√2 x} (1/√(2π)) e^{−u²/2} du
       = 2 [ Φ(√2 x) − 1/2 ]

Equivalently, we can write Φ() in terms of the erf() function,

Φ(√2 x) = (1/2) erf(x) + 1/2

Finally let y = √2 x, so that

Φ(y) = (1/2) erf(y/√2) + 1/2
Or in terms of Q(),

Q(y) = 1 − Φ(y) = 1/2 − (1/2) erf(y/√2)     (14)
You should go to Matlab and create a function Q(y) which implements:
function rval = Q(y)
rval = 1/2 - 1/2 .* erf(y./sqrt(2));
10.7 Examples
Example: Probability of Error in Binary Example
As in the previous example, we have a model system in which the receiver sees X_2 = X_1 + N. Here, X_1 ∈ {0, 1}
with equal probabilities and N is independent of X_1 and zero-mean Gaussian with variance 1/4. The receiver
decides as follows:

• If X_2 ≤ 1/3, then decide that the transmitter sent a '0'.
• If X_2 > 1/3, then decide that the transmitter sent a '1'.

1. Given that X_1 = 1, what is the probability that the receiver decides that a '0' was sent?
2. Given that X_1 = 0, what is the probability that the receiver decides that a '1' was sent?
Solution:
1. Given that X_1 = 1, since X_2 = X_1 + N, it is clear that X_2 is also a Gaussian r.v. with mean 1 and
variance 1/4. Then the probability that the receiver decides '0' is the probability that X_2 ≤ 1/3,

P[error | X_1 = 1] = P[X_2 ≤ 1/3]
                   = P[ (X_2 − 1)/√(1/4) ≤ (1/3 − 1)/√(1/4) ]
                   = 1 − Q((−2/3)/(1/2))
                   = 1 − Q(−4/3)

2. Given that X_1 = 0, the probability that the receiver decides '1' is the probability that X_2 > 1/3,

P[error | X_1 = 0] = P[X_2 > 1/3]
                   = P[ X_2/√(1/4) > (1/3)/√(1/4) ]
                   = Q((1/3)/(1/2))
                   = Q(2/3)
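These two conditional error probabilities can be evaluated with the Q() function from Section 10.6.2 and checked against a quick simulation (a sketch; the threshold 1/3 and variance 1/4 are the values from the example):

Q = @(y) 1/2 - 1/2*erf(y/sqrt(2));   % Q function, eqn (14)
p0given1 = 1 - Q(-4/3)               % P[decide '0' | X1=1], approx 0.091
p1given0 = Q(2/3)                    % P[decide '1' | X1=0], approx 0.252
N = 1e6; sigma = 1/2;                % quick check of the first probability
X2 = 1 + sigma*randn(1,N);           % X2 given X1 = 1
mean(X2 <= 1/3)                      % should be close to p0given1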
10.8 Gaussian Random Vectors
Def ’n: Multivariate Gaussian R.V.
An n-length R.V. X is multivariate Gaussian with mean μ_X and covariance matrix C_X if it has the pdf,

f_X(x) = (1/√((2π)^n det(C_X))) exp[ −(1/2)(x − μ_X)^T C_X^{−1} (x − μ_X) ]

where det() is the determinant of the covariance matrix, and C_X^{−1} is the inverse of the covariance matrix.
Any linear combination of jointly Gaussian random variables is another jointly Gaussian random variable.
For example, if we have a matrix A and we let a new random vector Y = AX, then Y is also a Gaussian random
vector with mean Aμ_X and covariance matrix A C_X A^T.
If the elements of X were independent random variables, the pdf would be the product of the individual
pdfs (as with any random vector) and in this case the pdf would be:

f_X(x) = (1/√((2π)^n ∏_i σ_i²)) exp[ −Σ_{i=1}^{n} (x_i − μ_{X_i})²/(2σ²_{X_i}) ]
Section 4.3 spends some time with 2-D Gaussian random vectors, which is the dimension with which we
spend most of our time in this class.
10.8.1 Envelope
The envelope of a 2-D random vector is its distance from the origin (0, 0). Specifically, if the vector is (X_1, X_2),
the envelope R is

R = √(X_1² + X_2²)     (15)

If both X_1 and X_2 are i.i.d. Gaussian with mean 0 and variance σ², the pdf of R is

f_R(r) = { (r/σ²) exp(−r²/(2σ²)), r ≥ 0;  0, o.w. }

This is the Rayleigh distribution. If instead X_1 and X_2 are independent Gaussian with means μ_1 and μ_2,
respectively, and identical variances σ², the envelope R would now have a Rice (a.k.a. Rician) distribution,

f_R(r) = { (r/σ²) exp(−(r² + s²)/(2σ²)) I_0(rs/σ²), r ≥ 0;  0, o.w. }

where s² = μ_1² + μ_2², and I_0() is the zeroth order modified Bessel function of the first kind.

Note that both the Rayleigh and Rician pdfs can be derived from the Gaussian distribution and (15) using
the transformation of random variables methods studied in ECE 5510. Both pdfs are sometimes needed to
derive probability of error formulas.
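A quick empirical check of the Rayleigh case is easy in Matlab (a sketch: generate the two zero-mean Gaussian components, compute the envelope via (15), and compare sample statistics against the Rayleigh distribution):

sigma = 1; N = 1e5;
X1 = sigma*randn(1,N);  X2 = sigma*randn(1,N);
R  = sqrt(X1.^2 + X2.^2);       % envelope samples, eqn (15)
mean(R)                         % approx sigma*sqrt(pi/2) = 1.2533
mean(R <= 1)                    % approx Rayleigh CDF 1 - exp(-1/2) = 0.3935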
Lecture 7
Today: (1) Matlab Simulation (2) Random Processes (3) Correlation and Matched Filter Receivers
11 Random Processes
A random process X(t, s) is a function of time t and outcome (realization) s. Outcome s lies in sample space
S. Outcome or realization is included because there could be many different ways to record X even at the same time. For
example, multiple receivers would record different noisy signals even of the same transmission.
A random sequence X(n, s) is a sequence of random variables indexed by time index n and realization s ∈ S.
Typically, we omit the “s” when writing the name of the random process or random sequence, and abbreviate
it to X(t) or X(n), respectively.
11.1 Autocorrelation and Power
Def ’n: Mean Function
The mean function of the random process X(t, s) is

μ_X(t) = E[X(t, s)]

Note the mean is taken over all possible realizations s. If you record one signal over all time t, you don't have
anything to average to get the mean function μ_X(t).
Def ’n: Autocorrelation Function
The autocorrelation function of a random process X(t) is

R_X(t, τ) = E[X(t) X(t − τ)]

The autocorrelation of a random sequence X(n) is

R_X(n, k) = E[X(n) X(n − k)]
Def ’n: Wide-sense stationary (WSS)
A random process is wide-sense stationary (WSS) if

1. μ_X = μ_X(t) = E[X(t)] is independent of t.
2. R_X(t, τ) depends only on the time difference τ and not on t. We then denote the autocorrelation function
as R_X(τ).

A random sequence is wide-sense stationary (WSS) if

1. μ_X = μ_X(n) = E[X(n)] is independent of n.
2. R_X(n, k) depends only on k and not on n. We then denote the autocorrelation function as R_X(k).
The power of a signal is given by R_X(0).

We can estimate the autocorrelation function of an ergodic, wide-sense stationary random process from one
realization of a power-type signal x(t),

R_X(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t − τ) dt
This is not a critical topic for this class, but we now must define “ergodic”. For an ergodic random process,
its time averages are equivalent to its ensemble averages. An example that helps demonstrate the difference
between time averages and ensemble averages is the non-ergodic process of opinion polling. Consider what
happens when a pollster takes either a time-average or an ensemble average:
• Time Average: The pollster asks the same person, every day for N days, whether or not she will vote for
person X. The time average is taken by dividing the total number of “Yes”s by N.
• Ensemble Average: The pollster asks N different people, on the same day, whether or not they will vote
for person X. The ensemble average is taken by dividing the total number of “Yes”s by N.
Will they come up with the same answer? Perhaps; but probably not. This makes the process non-ergodic.
Power Spectral Density: We have seen that for a WSS random process X(t) (and for a random sequence
X(n)) we compute the power spectral density as,

S_X(f) = F{R_X(τ)}
S_X(e^{jΩ}) = DTFT{R_X(k)}
11.1.1 White Noise
Let X(n) be a WSS random sequence with autocorrelation function

R_X(k) = E[X(n) X(n − k)] = σ² δ(k)

This says that each element of the sequence X(n) has zero covariance with every other sample of the sequence,
i.e., it is uncorrelated with X(m), m ≠ n.

• Does this mean it is independent of every other sample?
• What is the PSD of the sequence?

This sequence is commonly called 'white noise' because it has equal parts of every frequency (analogy to
light).

For continuous time signals, white noise is a commonly used approximation for thermal noise. Let X(t) be
a WSS random process with autocorrelation function

R_X(τ) = E[X(t) X(t − τ)] = σ² δ(τ)

• What is the PSD of the sequence?
• What is the power of the signal?

Realistically, thermal noise is not constant in frequency, because as frequency goes very high (e.g., 10^15 Hz), the
power spectral density goes to zero. The Rice book (4.5.2) has a good analysis of the physics of thermal noise.
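A sketch of how the flat PSD of a white sequence shows up numerically (this assumes the Signal Processing Toolbox xcorr function is available; the estimated autocorrelation is an impulse at lag zero, and its DTFT is approximately flat):

sigma2 = 2;  N = 1e5;
x = sqrt(sigma2)*randn(1,N);               % white Gaussian sequence
[Rhat, lags] = xcorr(x, 20, 'unbiased');   % autocorrelation estimate, |k| <= 20
Rhat(lags == 0)                            % approx sigma2; other lags approx 0
S = abs(fft(Rhat));                        % roughly flat across frequency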
12 Correlation and Matched-Filter Receivers
We’ve been talking about how our digital transmitter can transmit at each time one of a set of signals ¦s
i
(t)¦
M
i=1
.
This transmission conveys to us which of M messages we want to send, that is, log
2
M bits of information.
At the receiver, we receive signal s
i
(t) scaled and corrupted by noise:
r(t) = bs
i
(t) +n(t)
How do we decide which signal i was transmitted?
12.1 Correlation Receiver
What we’ve been talking about it the inner product. In other terms, the inner product is a correlation:
¸r(t), s
i
(t)) =
_

−∞
r(t)s
i
(t)dt
But we want to build a receiver that does the minimum amount of calculation. If s
i
(t) are non-orthogonal,
then we can reduce the amount of correlation (and thus multiplication and addition) done in the receiver by
instead, correlating with the basis functions, ¦φ
k
(t)¦
N
k=1
. Correlation with the basis functions gives
x
k
= ¸r(t), φ
k
(t)) =
_

−∞
r(t)φ
k
(t)dt
for k = 1 . . . N. As notation,
x = [x
1
, x
2
, . . . , x
N
]
T
Now that we’ve done these N correlations (inner products), we can compute the estimate of the received signal
as
ˆ r(t) =
K−1

k=0
x
k
φ
k
(t)
This is the ‘correlation receiver’, shown in Figure 5.1.6 in Rice (page 226).
Noise-free channel: Since r(t) = b s_i(t) + n(t), if n(t) = 0 and b = 1 then

x = a_i

where a_i is the signal space vector for s_i(t).
Noisy channel: Now, let b = 1 but consider n(t) to be a white Gaussian random process with zero mean and
PSD S_N(f) = N_0/2. Define

n_k = ⟨n(t), φ_k(t)⟩ = ∫_{−∞}^{∞} n(t) φ_k(t) dt.

What is x in terms of a_i and the noise {n_k}? What are the mean and covariance of {n_k}?
Solution:
x_k = ∫_{−∞}^{∞} r(t) φ_k(t) dt
    = ∫_{−∞}^{∞} [s_i(t) + n(t)] φ_k(t) dt
    = a_{i,k} + ∫_{−∞}^{∞} n(t) φ_k(t) dt
    = a_{i,k} + n_k

First, n_k is zero mean:

E[n_k] = ∫_{−∞}^{∞} E[n(t)] φ_k(t) dt = 0
Next we can show that n_1, ..., n_N are i.i.d. by calculating the autocorrelation R_n(m, k).

R_n(m, k) = E[n_k n_m]
          = ∫_{t=−∞}^{∞} ∫_{τ=−∞}^{∞} E[n(t) n(τ)] φ_k(t) φ_m(τ) dτ dt
          = ∫_{t=−∞}^{∞} ∫_{τ=−∞}^{∞} (N_0/2) δ(t − τ) φ_k(t) φ_m(τ) dτ dt
          = (N_0/2) ∫_{t=−∞}^{∞} φ_k(t) φ_m(t) dt
          = (N_0/2) δ_{k,m} = { N_0/2, m = k;  0, o.w. }
Is {n_k} WSS? Is it a Gaussian random sequence?
Since the noise components are independent, the x_k are also independent. (Why?) What is the pdf of x_k?
What is the pdf of x?
Solution: The x_k are independent because x_k = a_{i,k} + n_k and a_{i,k} is a deterministic constant. Then since n_k is
Gaussian,

f_{X_k}(x_k) = (1/√(2π(N_0/2))) e^{−(x_k − a_{i,k})²/(2(N_0/2))}

And, since the {X_k} are independent,

f_x(x) = ∏_{k=1}^{N} f_{X_k}(x_k)
       = ∏_{k=1}^{N} (1/√(2π(N_0/2))) e^{−(x_k − a_{i,k})²/(2(N_0/2))}
       = (1/[2π(N_0/2)]^{N/2}) e^{−Σ_{k=1}^{N} (x_k − a_{i,k})²/(2(N_0/2))}
An example is in ece5520 lec07.m.
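That Matlab file is not reproduced here, but a minimal sketch of the same kind of simulation is below. The two-dimensional orthonormal basis of half-interval rectangles and the transmitted point a_i = [1, 1]^T are illustrative assumptions:

dt = 1e-3; Ts = 1; t = dt/2:dt:Ts;
phi1 = sqrt(2/Ts)*(t <  Ts/2);          % assumed orthonormal basis functions
phi2 = sqrt(2/Ts)*(t >= Ts/2);
ai = [1 1];                             % transmitted signal space vector
s  = ai(1)*phi1 + ai(2)*phi2;           % transmitted waveform
N0 = 0.2;
n  = sqrt(N0/2/dt)*randn(size(t));      % white Gaussian noise, PSD N0/2
r  = s + n;                             % received waveform
x  = [sum(r.*phi1)*dt, sum(r.*phi2)*dt] % correlator outputs: ai + noise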
12.2 Matched Filter Receiver
Above we said that

x_k = ∫_{t=−∞}^{∞} r(t) φ_k(t) dt

But there really are finite limits – let's say that the signal has a duration T, and then rewrite the integral as

x_k = ∫_{t=0}^{T} r(t) φ_k(t) dt

This can be written as

x_k = ∫_{t=0}^{T} r(t) h_k(T − t) dt
where h_k(t) = φ_k(T − t). (Plug in (T − t) in this formula and the Ts cancel out and only positive t is left.) This
is the output of a convolution, taken at time T,

x_k = r(t) ⋆ h_k(t) |_{t=T}

Or equivalently

x_k = r(t) ⋆ φ_k(T − t) |_{t=T}

This is Rice Section 5.1.4.
This is Rice Section 5.1.4.
Notes:
• The x
k
can be seen as the output of a ‘matched’ filter at time T.
• This works at time T. The output for other times will be different in the correlation and matched filter.
• These are just two different physical implementations. We might, for example, have a physical filter with
the impulse response φ
k
(T −t) and thus it is easy to do a matched filter implementation.
• It may be easier to ‘see’ why the correlation receiver works.
Try out the Matlab code, correlation and matched filter rx.m, which is posted on WebCT.
12.3 Amplitude
Before we said b = 1. A real channel attenuates! The job of our receiver will be to amplify the signal by
approximately 1/b. This effectively multiplies the noise. In general, textbooks will assume that the automatic
gain control (AGC) works properly, and then will assume the noise signal is multiplied accordingly.
Lecture 8
Today: (1) Optimal Binary Detection
12.4 Review
At a digital transmitter, we can transmit at each time one of a set of signals {s_i(t)}_{i=0}^{M−1}. This transmission
conveys to us which of M messages we want to send, that is, log_2 M bits of information.

At the receiver, we assume we receive signal s_i(t) corrupted by noise:

r(t) = s_i(t) + n(t)

How do we decide which signal i ∈ {0, ..., M−1} was transmitted? We split the task into down-conversion,
gain control, correlation or matched filter reception, and detection, as shown in Figure 19.
Figure 19: A block diagram of the receiver blocks discussed in lectures 7 & 8: down-converter, automatic gain control (AGC), correlation or matched filter, and optimal detector.
12.5 Correlation Receiver
Our signal set can be represented by the orthonormal basis functions, {φ_k(t)}_{k=0}^{K−1}. Correlation with the basis
functions gives

x_k = ⟨r(t), φ_k(t)⟩ = a_{i,k} + n_k

for k = 0 ... K−1. We denote vectors:

x = [x_0, x_1, ..., x_{K−1}]^T
a_i = [a_{i,0}, a_{i,1}, ..., a_{i,K−1}]^T
n = [n_0, n_1, ..., n_{K−1}]^T

Hopefully, x and a_i should be close since i was actually sent. In an example Matlab simulation, ece5520 lec07.m,
we simulated sending a_i = [1, 1]^T and receiving r(t) (and thus x) in noise.

In general, the pdf of x is multivariate Gaussian with each component x_k independent, because:

• x_k = a_{i,k} + n_k
• a_{i,k} is a deterministic constant
• The {n_k} are i.i.d. Gaussian.

The joint pdf of x is

f_X(x) = ∏_{k=0}^{K−1} f_{X_k}(x_k) = (1/[2π(N_0/2)]^{K/2}) e^{−Σ_{k=0}^{K−1} (x_k − a_{i,k})²/(2(N_0/2))}     (16)
We showed that there are two ways to implement this: the correlation receiver, and the matched filter re-
ceiver. Both result in the same output x. We simulated this in the Matlab code correlation and matched filter rx.m.
13 Optimal Detection
We receive r(t) and use the matched filter or correlation receiver to calculate x. Now, how exactly do we decide
which s_i(t) was sent?
Consider this in signal space, as shown in Figure 20.
Figure 20: Signal space diagram of a PAM system with an X to mark the receiver output x.
Given the signal space diagram of the transmitted signals and the received signal space vector x, what rules
should we have to decide on i? This is the topic of detection. Optimal detection uses rules designed to minimize
the probability that a symbol error will occur.
13.1 Overview
We’ll show two things:
• If each symbol i is equally probable, then the decision will be, pick the i with a
i
closest to x.
• If symbols aren’t equally probable, then we’ll need to shift the decision boundaries.
Detection theory is a major learning objective of this course. So even though it is somewhat
theoretical, it is very applicable in digital receivers. Further, detection is applicable in a wide variety of problems,
for example,
• Medical applications: Does an image show a tumor? Does a blood sample indicate a disease? Does a
breathing sound indicate an obstructed airway?
• Communications applications: Receiver design, signal detection.
• Radar applications: Obstruction detection, motion detection.
13.2 Bayesian Detection
When we say ‘optimal detection’ in the Bayesian detection framework, we mean that we want the smallest
probability of error. The probability of error is denoted
P [error]
By error, we mean that a different signal was detected than the one that was sent. At the start of every detection
problem, we list the events that could have occurred, i.e., the symbols that could have been sent. We follow all
detection and statistics textbooks and label these classes H
i
, and write:
H
0
: r(t) = s
0
(t) +n(t)
H
1
: r(t) = s
1
(t) +n(t)

H
M−1
: r(t) = s
M−1
(t) +n(t)
This must be a complete listing of events. That is, the events H
0
∪ H
1
∪ ∪ H
M−1
= S, where the ∪ means
union, and S is the complete event space.
14 Binary Detection
Let’s just say for now that there are only two signals s
0
(t) and s
1
(t), and only one basis function φ
0
(t). Then,
instead of vectors x, a
0
and a
1
, and n, we’ll have only scalars: x, a
0
and a
1
, and n. When we talk about them
as random variables, we’ll substitute N for n and X for x. We need to decide from X whether s
0
(t) or s
1
(t)
was sent.
The event listing is,
H
0
: r(t) = s
0
(t) +n(t)
H
1
: r(t) = s
1
(t) +n(t)
Equivalently,
H
0
: X = a
0
+N
H
1
: X = a
1
+N
We use the law of total probability to say that
P [error] = P [error ∩ H
0
] +P [error ∩ H
1
] (17)
Where the cap means ‘and’. Then using Bayes’ Law,
P [error] = P [error[H
0
] P [H
0
] +P [error[H
1
] P [H
1
]
14.1 Decision Region
We’re making a decision based only on X. Over some set R
0
of values of X, we’ll decide that H
0
happened
(s
0
(t) was sent). Over a different set R
1
of values, we’ll decide H
1
occurred (that s
1
(t) was sent). We can’t be
indecisive, so
• There is no overlap: R
0
∩ R
1
= ∅.
• There are no values of x disregarded: R
0
∪ R
1
= S.
14.2 Formula for Probability of Error
So the probability of error is

P[error] = P[X ∈ R_1|H_0] P[H_0] + P[X ∈ R_0|H_1] P[H_1]     (18)

The probability that X is in R_1 is one minus the probability that it is in R_0, since the two are complementary
sets.

P[error] = (1 − P[X ∈ R_0|H_0]) P[H_0] + P[X ∈ R_0|H_1] P[H_1]
P[error] = P[H_0] − P[X ∈ R_0|H_0] P[H_0] + P[X ∈ R_0|H_1] P[H_1]

Now note that probabilities that X ∈ R_0 are integrals over the event (region) R_0.

P[error] = P[H_0] − ∫_{x∈R_0} f_{X|H_0}(x|H_0) P[H_0] dx + ∫_{x∈R_0} f_{X|H_1}(x|H_1) P[H_1] dx
         = P[H_0] + ∫_{x∈R_0} [ f_{X|H_1}(x|H_1) P[H_1] − f_{X|H_0}(x|H_0) P[H_0] ] dx     (19)
We’ve got a lot of things in the expression in (19), but the only thing we can change is the region R
0
. Everything
else is determined by the time we get to this point. So the question is, how do you pick R
0
to minimize (19)?
14.3 Selecting R_0 to Minimize Probability of Error
We can see what the integrand looks like. Figure 21(a) shows the conditional probability density functions.
Figure 21(b) shows the joint densities (the conditional pdfs multiplied by the bit probabilities P[H_0] and P[H_1]).
Finally, Figure 21(c) shows the full integrand of (19), the difference between the joint densities.
We can pick R_0 however we want - we just say what region of x, and the integral in (19) will integrate over
it. The objective is to minimize the probability of error. Which x's should we include in the region? Should
we include x which has a positive value of the integrand? Or should we include the parts of x which have a
negative value of the integrand?

Solution: Select R_0 to be all x such that the integrand is negative.

Then R_0 is the area in which
f_{X|H_0}(x|H_0) P[H_0] > f_{X|H_1}(x|H_1) P[H_1]

If P[H_0] = P[H_1], then this is the region in which X is more probable given H_0 than given H_1.
Figure 21: The (a) conditional p.d.f.s (likelihood functions) f_{X|H_0}(x|H_0) and f_{X|H_1}(x|H_1), (b) joint p.d.f.s
f_{X|H_0}(x|H_0)P[H_0] and f_{X|H_1}(x|H_1)P[H_1], and (c) difference between the joint p.d.f.s, f_{X|H_1}(x|H_1)P[H_1] −
f_{X|H_0}(x|H_0)P[H_0], which is the integrand in (19).
Rearranging the terms,

f_{X|H_1}(x|H_1) / f_{X|H_0}(x|H_0) < P[H_0] / P[H_1]     (20)

The left hand side is called the likelihood ratio. The right hand side is a threshold. Whenever x indicates that
the likelihood ratio is less than the threshold, then we'll decide H_0, i.e., that s_0(t) was sent. Otherwise, we'll
decide H_1, i.e., that s_1(t) was sent.

Equation (20) is a very general result, applicable no matter what conditional distributions x has.
14.4 Log-Likelihood Ratio
For the Gaussian distribution, the math gets much easier if we take the log of both sides. Why can we do this?

Solution: 1. Both sides are positive. 2. The log() function is strictly increasing.

Now, the log-likelihood ratio is

log[ f_{X|H_1}(x|H_1) / f_{X|H_0}(x|H_0) ] < log( P[H_0] / P[H_1] )
14.5 Case of a_0 = 0, a_1 = 1 in Gaussian noise
In this example, n ~ N(0, σ²_N). In addition, assume for a minute that a_0 = 0 and a_1 = 1. What is:

1. The log of a Gaussian pdf?
2. The log-likelihood ratio?
3. The decision regions for x?
Solution: What is the log of a Gaussian pdf?

log f_{X|H_0}(x|H_0) = log[ (1/√(2πσ²_N)) e^{−x²/(2σ²_N)} ] = −(1/2) log(2πσ²_N) − x²/(2σ²_N)     (21)

The log f_{X|H_1}(x|H_1) term will be the same but with (x − 1)² instead of x². Continuing with the log-likelihood
ratio,

log f_{X|H_1}(x|H_1) − log f_{X|H_0}(x|H_0) < log(P[H_0]/P[H_1])
x²/(2σ²_N) − (x − 1)²/(2σ²_N) < log(P[H_0]/P[H_1])
x² − (x − 1)² < 2σ²_N log(P[H_0]/P[H_1])
2x − 1 < 2σ²_N log(P[H_0]/P[H_1])
x < 1/2 + σ²_N log(P[H_0]/P[H_1])
In the end result, there is a simple test for x - if it is below the decision threshold, decide H_0. If it is above
the decision threshold,

x > 1/2 + σ²_N log(P[H_0]/P[H_1])

decide H_1. Rather than writing both inequalities each time, we use the following notation:

x ≷^{H_1}_{H_0} 1/2 + σ²_N log(P[H_0]/P[H_1])

This completely describes the detector receiver.

For simplicity, we also write x ≷^{H_1}_{H_0} γ where

γ = 1/2 + σ²_N log(P[H_0]/P[H_1])     (22)
14.6 General Case for Arbitrary Signals
If, instead of a_0 = 0 and a_1 = 1, we had arbitrary values for them (the signal space representations of s_0(t) and
s_1(t)), we could have derived the result in the last section the same way. As long as a_0 < a_1, we'd still have
x ≷^{H_1}_{H_0} γ, but now,

γ = (a_0 + a_1)/2 + (σ²_N/(a_1 − a_0)) log(P[H_0]/P[H_1])     (23)
14.7 Equi-probable Special Case
If symbols are equally likely, P[H_1] = P[H_0], then P[H_1]/P[H_0] = 1 and the logarithm of the fraction is zero. So then

x ≷^{H_1}_{H_0} (a_0 + a_1)/2

The decision above says that if x is closer to a_0, decide that s_0(t) was sent. And if x is closer to a_1, decide that
s_1(t) was sent. The boundary is exactly half-way in between the two signal space vectors.

This receiver is also called a maximum likelihood detector, because we only decide which likelihood function
is higher (neither is scaled by the prior probabilities P[H_0] or P[H_1]).
14.8 Examples
Example: When H_1 becomes less likely, which direction will the optimal threshold move, towards
a_0 or towards a_1?

Solution: Towards a_1.

Example: Let a_0 = −1, a_1 = 1, σ²_N = 0.1, P[H_1] = 0.4, and P[H_0] = 0.6. What is the decision
threshold for x?
Solution: From (23),

γ = 0 + (0.1/2) log(0.6/0.4) = 0.05 log 1.5 ≈ 0.0203
Example: Can the decision threshold be higher than both a_0 and a_1 in this binary, one-dimensional
signalling, receiver?

Given a_0, a_1, σ²_N, P[H_1], and P[H_0], you should be able to calculate the optimal decision threshold γ.
Example: In this example, given all of the above constants and the optimal threshold γ, calculate
the probability of error from (18).

Starting from

P[error] = P[x ∈ R_1|H_0] P[H_0] + P[x ∈ R_0|H_1] P[H_1]

we can use the decision regions in (22) to write

P[error] = P[x > γ|H_0] P[H_0] + P[x < γ|H_1] P[H_1]

What is the first probability, given that x|H_0 is Gaussian with mean a_0 and variance σ²_N? What is the second
probability, given that x|H_1 is Gaussian with mean a_1 and variance σ²_N? What is then the overall probability
of error?
Solution:

P[x > γ|H_0] = Q( (γ − a_0)/σ_N )
P[x < γ|H_1] = 1 − Q( (γ − a_1)/σ_N ) = Q( (a_1 − γ)/σ_N )
P[error] = P[H_0] Q( (γ − a_0)/σ_N ) + P[H_1] Q( (a_1 − γ)/σ_N )
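Putting the last two examples together in Matlab (a sketch using the constants a_0 = −1, a_1 = 1, σ²_N = 0.1, P[H_0] = 0.6, P[H_1] = 0.4 from above):

Q = @(y) 1/2 - 1/2*erf(y/sqrt(2));
a0 = -1; a1 = 1; s2 = 0.1; P0 = 0.6; P1 = 0.4;
gamma = (a0+a1)/2 + s2/(a1-a0)*log(P0/P1)                     % eqn (23), approx 0.0203
Pe = P0*Q((gamma-a0)/sqrt(s2)) + P1*Q((a1-gamma)/sqrt(s2))    % overall probability of error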
14.9 Review of Binary Detection
We did the following things to prove some results about the optimal detector:
• We wrote the formula for the probability of error.
• We found the decision regions which minimized the probability of error.
• We used the log operator to show that for the Gaussian error case the decision regions are separated by
a single threshold.
• We showed the formula for that threshold, both in the equi-probable symbol case, and in the general case.
Lecture 9
Today: (1) Finish Lecture 8 (2) Intro to M-ary PAM
Sample exams (2007-8) and solutions posted on WebCT. Of course, by putting these sample exams up, you
know the actual exam won’t contain the same exact problems. There is no guarantee this exam will be the
same level of difficulty as either past exam.
Notes on 2008 sample exam:
• Problem 1(b) was not material covered this semester.
• Problem 3 is not on the topic of detection theory, it is on the probability topic. There is a typo: f_N(n)
should be f_N(t).
• Problem 6 typo: x_1(t) = { t, −1 ≤ t ≤ 1;  0, o.w. } and x_2(t) = { (1/2)(3t² − 1), −1 ≤ t ≤ 1;  0, o.w. }. That is, the "x"s
on the RHS of the given expressions should be "t"s.
Notes on 2007 sample exam:
• Problem 2 should have said "with period T_sa" rather than "at rate T_sa".
• The book that year used ϕ_i(t) instead of φ_i(t) as the notation for a basis function, and α_i instead of s_i as
the signal space vector for signal s_i(t).
• Problem 6 is on the topic of detection theory.
15 Pulse Amplitude Modulation (PAM)
Def ’n: Pulse Amplitude Modulation (PAM)
M-ary Pulse Amplitude Modulation is the use of M scaled versions of a single basis function p(t) = φ_0(t) as a
signal set,

s_i(t) = a_i p(t),  for i = 0, ..., M − 1

where a_i is the amplitude of waveform i.

Each sent waveform (a.k.a. "symbol") conveys k = log_2 M bits of information. Note we're calling it p(t)
instead of φ_0(t), and a_i instead of a_{i,0}, both for simplicity, since there's only one basis function.
15.1 Baseband Signal Examples
When you compose a signal of a number of symbols, you'll just add each time-delayed signal to the transmitted
waveform. The resulting signal we'll call s(t) (without any subscript) and is

s(t) = Σ_n a(n) p(t − nT_s)

where a(n) ∈ {a_0, ..., a_{M−1}} is the nth symbol transmitted, and T_s is the symbol period (not the sample
period!). Instead of just sending one symbol, we are sending one symbol every T_s units of time. That makes
our symbol rate 1/T_s symbols per second.

Note: Keep in mind that when s(t) is to be transmitted on a bandpass channel, it is modulated with a
carrier cos(2πf_c t),

x(t) = ℜ{ s(t) e^{j2πf_c t} }
Rice Figures 5.2.3 and 5.2.6 show continuous-time and discrete-time realizations, respectively, of PAM
transmitters and receivers.
Example: Binary Bipolar PAM
Figure 22 shows a binary bipolar PAM signal set using amplitudes a_0 = −A, a_1 = A. It shows a signal set using square
pulses,

φ_0(t) = p(t) = { 1/√T_s, 0 ≤ t < T_s;  0, o.w. }

Figure 22: Example signal for binary bipolar PAM example for A = √T_s.
Example: Binary Unipolar PAM
Unipolar binary PAM uses the amplitudes a_0 = 0, a_1 = A. The average energy per symbol has decreased. This
is also called On-Off Keying (OOK). If the y-axis in Figure 22 was scaled such that the minimum was 0 instead
of −1, it would represent unipolar PAM.
Example: 4-ary PAM
A 4-ary PAM signal set using amplitudes a_0 = −3A, a_1 = −A, a_2 = A, a_3 = 3A is shown in Figure 23. It shows
a signal set using square pulses,

p(t) = { 1/√T_s, 0 ≤ t < T_s;  0, o.w. }

Typical M-ary PAM (for all even M) uses the following amplitudes:

−(M−1)A, −(M−3)A, ..., −A, A, ..., +(M−3)A, +(M−1)A

Rice Figure 5.2.1 shows the signal space diagram of M-ary PAM for different values of M.
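A sketch of generating a baseband 4-ary PAM waveform like the one in Figure 23 (the particular symbol sequence is made up for illustration):

Ts = 1; dt = 1e-3; t = 0:dt:Ts-dt;       % one symbol period
p  = ones(size(t))/sqrt(Ts);             % square pulse with unit energy
A  = 1; amps = [-3 -1 1 3]*A;            % 4-ary amplitude set
idx = [4 1 3 2 4 2];                     % hypothetical symbol sequence
s = [];
for n = 1:length(idx)
    s = [s, amps(idx(n))*p];             % concatenate a(n) p(t - n Ts)
end
plot((0:length(s)-1)*dt, s);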
15.2 Average Bit Energy in M-ary PAM
Assuming that each symbol s_i(t) is equally likely to be sent, we want to calculate the average bit energy E_b,
which is 1/log_2 M times the symbol energy E_s,

E_s = E[ ∫_{−∞}^{∞} a_i² p²(t) dt ] = ∫_{−∞}^{∞} E[a_i²] φ_0²(t) dt = E[a_i²]     (24)
Figure 23: Example signal for 4-ary PAM example.
Let {a_0, ..., a_{M−1}} = {−(M−1)A, −(M−3)A, ..., (M−1)A}, as described above to be the typical M-ary PAM
case. Then

E[a_i²] = (A²/M) [ (1−M)² + (3−M)² + ⋯ + (M−1)² ]
        = (A²/M) Σ_{m=1}^{M} (2m − 1 − M)²
        = (A²/M) · M(M² − 1)/3
        = A²(M² − 1)/3

So E_s = A²(M² − 1)/3, and

E_b = (1/log_2 M) · A²(M² − 1)/3
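A quick numerical check of the E[a_i²] = A²(M² − 1)/3 result (a sketch, for A = 1 and a few values of M):

A = 1;
for M = [2 4 8 16]
    amps = A*(-(M-1):2:(M-1));        % typical M-ary PAM amplitude set
    [mean(amps.^2), A^2*(M^2-1)/3]    % the two columns should match
end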
Lecture 10
Today: Review for Exam 1
16 Topics for Exam 1
1. Fourier transform of signals, properties (12)
2. Bandpass signals
3. Sampling and aliasing
4. Orthogonality / Orthonormality
5. Correlation and Covariance
6. Signal space representation
7. Correlation / matched filter receivers
Make sure you know how to do each HW problem in HWs 1-3. It will be good to know the ‘tricks’ of how
each problem is done, since they will reappear.
Read over the lecture notes 1-7. If anything is confusing, ask me.
You may have one side of an 8.5×11 sheet of paper for notes. You will be provided with Table 2.4.4 and
Table 2.4.8.
17 Additional Problems
17.1 Spectrum of Communication Signals
1. Bateman Problem 1.9: An otherwise ideal mixer (multiplier) generates an internal DC offset which sums
with the baseband signal prior to multiplication with the cos(2πf_c t) carrier. How does this affect the
spectrum of the output signal?
2. Haykin & Moyer Problem 2.25. A signal x(t) of finite energy is applied to a square law device whose
output y(t) is defined by y(t) = x
2
(t). The spectrum of x(t) is bandlimited to −W ≤ ω ≤ W. Prove that
the spectrum of y(t) is bandlimited to −2W ≤ ω ≤ 2W.
3. Consider the pulse shape

x(t) = { cos(πt/T_s), −T_s/2 < t < T_s/2;  0, o.w. }

(a) Draw a plot of the pulse shape x(t).
(b) Find the Fourier transform of x(t).

4. Rice 2.39, 2.52
17.2 Sampling and Aliasing
1. Assume that x(t) is a bandlimited signal with X(f) = 0 for |f| > W. When x(t) is sampled at a rate
T = 1/(2W), its samples X_n are,

X_n = { 1, n = ±1;  2, n = 0;  0, o.w. }

What was the original continuous-time signal x(t)? Or, if x(t) cannot be determined, why not?

2. Rice 2.99, 2.100
17.3 Orthogonality and Signal Space
1. (from L.W. Couch 2007, Problem 2-49). Three functions are shown in Figure P2-49.
(a) Show that these functions are orthogonal over the interval (−4, 4).
(b) Find the scale factors needed to scale the functions to make them into an orthonormal set over the
interval (−4, 4).
(c) Express the waveform

w(t) = { 1, 0 ≤ t ≤ 4;  0, o.w. }

in signal space using the orthonormal set found in part (b).
2. Rice Example 5.1.1 on Page 219.
3. Rice 5.2, 5.5, 5.13, 5.14, 5.16, 5.23, 5.26
17.4 Random Processes, PSD
1. (From Couch 2007 Problem 2-80) An RC low-pass filter is the circuit shown in Figure 24. Its transfer
function is

H(f) = Y(f)/X(f) = 1/(1 + j2πRCf)

Figure 24: An RC low-pass filter.

Given that the PSD of the input signal is flat and constant at 1, i.e., S_X(f) = 1, design an RC low-pass
filter that will attenuate this signal by 20 dB at 15 kHz. That is, find the value of RC to satisfy the design
specifications.
2. Let a binary 1-D communication system be described by two possible signals, and noise with different
variance depending on which signal is sent, i.e.,

H_0: r = a_0 + n_0
H_1: r = a_1 + n_1

where a_0 = −1 and a_1 = 1, and n_0 is zero-mean Gaussian with variance 0.1, and n_1 is zero-mean Gaussian
with variance 0.3.

(a) What is the conditional pdf of r given H_0?
(b) What is the conditional pdf of r given H_1?
17.5 Correlation / Matched Filter Receivers
1. A binary communication system uses the following equally-likely signals,

s_1(t) = { cos(πt/T), −T/2 ≤ t ≤ T/2;  0, o.w. }
s_2(t) = { −cos(πt/T), −T/2 ≤ t ≤ T/2;  0, o.w. }

At the receiver, a signal x(t) = s_i(t) + n(t) is received.

(a) Is s_1(t) an energy-type or power-type signal?
(b) What is the energy or power (depending on answer to (a)) in s_1(t)?
(c) Describe in words and a block diagram the operation of a correlation receiver.
2. Proakis & Salehi 7.8. Suppose that two signal waveforms s
1
(t) and s
2
(t) are orthogonal over the interval
(0, T). A sample function n(t) of a zero-mean, white noise process is correlated with s
1
(t) and
2
(t) to
yield,
n
1
=
_
T
0
s
1
(t)n(t)dt
n
2
=
_
T
0
s
2
(t)n(t)dt
Prove that E[n
1
n
2
] = 0.
3. Proakis & Salehi 7.16. Prove that when a sinc pulse g
T
(t) = sin(πt/T)/(πt/T) is passed through its
matched filter, the output is the same sinc pulse.
Lecture 11
Today: (1) M-ary PAM Intro from Lecture 9 notes (2) Probability of Error in PAM
18 Probability of Error in Binary PAM
This is in Section 6.1.2 of Rice.
How are N_0/2 and σ_N related? Recall from lecture 7 that the variance of n_k came out to be N_0/2. We had
also called the variance of noise component n_k as σ²_N. So:

σ²_N = N_0/2.     (25)
18.1 Signal Distance
We mentioned, when talking about signal space diagrams, a distance between vectors,

d_{i,j} = ‖a_i − a_j‖ = [ Σ_k (a_{i,k} − a_{j,k})² ]^{1/2}

For the 1-D, two signal case, there is only d_{1,0},

d_{1,0} = |a_1 − a_0|     (26)
18.2 BER Function of Distance, Noise PSD
We have consistently used a_1 > a_0. In the Gaussian noise case (with equal variance of noise in H_0 and H_1),
and with P[H_0] = P[H_1], we came up with threshold γ = (a_0 + a_1)/2, and

P[error] = P[x > γ|H_0] P[H_0] + P[x < γ|H_1] P[H_1]
Which simplified using the Q function,

P[x > γ|H_0] = Q( (γ − a_0)/σ_N )
P[x < γ|H_1] = 1 − Q( (γ − a_1)/σ_N ) = Q( (a_1 − γ)/σ_N )     (27)

Thus

P[x > γ|H_0] = Q( ((a_1 − a_0)/2) / σ_N )
P[x < γ|H_1] = Q( ((a_1 − a_0)/2) / σ_N )     (28)

So both are equal, and again with P[H_0] = P[H_1] = 1/2,

P[error] = Q( ((a_1 − a_0)/2) / σ_N )     (29)

We'd assumed a_1 > a_0, so if we had also calculated for the case a_1 < a_0, we'd have seen that in general, the
following formula works:

P[error] = Q( |a_1 − a_0| / (2σ_N) )

Then, using (25) and (26), we have

P[error] = Q( d_{0,1} / (2√(N_0/2)) ) = Q( √( d²_{0,1} / (2N_0) ) )     (30)

This will be important as we talk about binary PAM. This expression,

Q( √( d²_{0,1} / (2N_0) ) )

is one that we will see over and over again in this class.
18.3 Binary PAM Error Probabilities
For binary PAM (M = 2) it takes one symbol to encode one bit. Thus 'bit' and 'symbol' are interchangeable.
In signal space, our signals s_0(t) = a_0 p(t) and s_1(t) = a_1 p(t) are just s_0 = a_0 and s_1 = a_1. Now we can use
(30) to compute the probability of error.

For bipolar signaling, when a_0 = −A and a_1 = A,

P[error] = Q( √(4A²/(2N_0)) ) = Q( √(2A²/N_0) )

For unipolar signaling, when a_0 = 0 and a_1 = A,

P[error] = Q( √(A²/(2N_0)) )
However, this doesn’t take into consideration that the unipolar signaling method uses only half of the energy
to transmit. Since half of the bits are exactly zero, they take no signal energy. Using (24), what is the average
energy of bipolar and unipolar signaling?
Solution: For the bipolar signalling, A
2
m
= A
2
always, so c
b
= c
s
= A
2
. For the unipolar signalling, A
2
m
equals A
2
with probability 1/2 and zero with probability 1/2. Thus c
b
= c
s
=
1
2
A
2
.
Now, we re-write the probability of error expressions in terms of c
b
. For bipolar signaling,
P [error] = Q
_
_
¸
4A
2
2N
0
_
_
= Q
_
_
2c
b
N
0
_
For unipolar signaling,
P [error] = Q
_
_
2c
b
2N
0
_
= Q
_
_
c
b
N
0
_
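A sketch that reproduces the two curves of Figure 25 from these formulas:

Q = @(y) 1/2 - 1/2*erf(y/sqrt(2));
EbN0dB = 0:0.5:14;  EbN0 = 10.^(EbN0dB/10);
Pe_bipolar  = Q(sqrt(2*EbN0));        % bipolar (antipodal) binary PAM
Pe_unipolar = Q(sqrt(EbN0));          % unipolar binary PAM (OOK)
semilogy(EbN0dB, Pe_bipolar, EbN0dB, Pe_unipolar);
xlabel('E_b/N_0 (dB)'); ylabel('Probability of error');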
Discussion: These probabilities of error are shown in Figure 25.

Figure 25: Probability of Error in binary PAM signalling.
• Even for the same bit energy E_b, the bit error rate of bipolar PAM beats unipolar PAM.
• What is the difference in E_b/N_0 for a constant probability of error?
• What is the difference in terms of probability of error for a given E_b/N_0?
Lecture 12
Today: (0) Exam 1 Return, (1) Multi-Hypothesis Detection Theory, (2) Probability of Error in M-ary
PAM
19 Detection with Multiple Symbols
When we only had s_0(t) and s_1(t), we saw that our decision was based on which of the following was higher:

f_{X|H_0}(x|H_0) P[H_0]  and  f_{X|H_1}(x|H_1) P[H_1]

For M-ary signals, we'll see that we need to consider the highest of all M possible events H_m,

H_0: r(t) = s_0(t) + n(t)
H_1: r(t) = s_1(t) + n(t)
⋮
H_{M−1}: r(t) = s_{M−1}(t) + n(t)

which have joint probabilities,

H_0: f_{X|H_0}(x|H_0) P[H_0]
H_1: f_{X|H_1}(x|H_1) P[H_1]
⋮
H_{M−1}: f_{X|H_{M−1}}(x|H_{M−1}) P[H_{M−1}]

We will find which of these joint probabilities is highest. For this class, we'll only consider the case of equally
probable signals. (While equi-probable signals are sometimes not the case for M = 2 binary detection, unequal
probabilities are very rare in higher-M communication systems.) If P[H_0] = ⋯ = P[H_{M−1}] then we only need to find the i that
makes the likelihood f_{X|H_i}(x|H_i) maximum, that is, maximum likelihood detection.

Symbol Decision = argmax_i f_{X|H_i}(x|H_i)
For Gaussian (conditional) r.v.s with equal variances $\sigma_N^2$, it is better to maximize the log of the likelihood rather than the likelihood directly, so
$$\log f_{X|H_i}(x|H_i) = -\frac{1}{2}\log(2\pi\sigma_N^2) - \frac{(x - a_i)^2}{2\sigma_N^2}$$
This is maximized when $(x - a_i)^2$ is minimized. Essentially, this is a (squared) distance between $x$ and $a_i$. So the decision is: find the $a_i$ which is closest to $x$.
The decision between two neighboring signal vectors $a_i$ and $a_{i+1}$ will be
$$r \; \mathop{\gtrless}_{H_i}^{H_{i+1}} \; \gamma_{i,i+1} = \frac{a_i + a_{i+1}}{2}$$
As an example: 4-ary PAM. See Figure 26.
Figure 26: Signal space representation and decision thresholds for 4-ary PAM.
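As a concrete illustration of the minimum-distance rule and the thresholds $\gamma_{i,i+1}$, here is a small sketch (mine, not from the notes) for 4-ary PAM; the amplitude values and variable names are only examples.

```python
import numpy as np

a = np.array([-3.0, -1.0, 1.0, 3.0])     # 4-ary PAM signal space values a_i (spacing 2A, A = 1)

def detect(x):
    # ML decision: index of the a_i closest to the received value x
    return int(np.argmin(np.abs(x - a)))

# equivalent threshold view: gamma_{i,i+1} = (a_i + a_{i+1}) / 2
thresholds = (a[:-1] + a[1:]) / 2        # [-2., 0., 2.]

for x in [-2.7, -0.4, 0.9, 2.2]:
    print(f"x = {x:5.2f}  ->  decide symbol {detect(x)}   (thresholds {thresholds})")
```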
20 M-ary PAM Probability of Error
20.1 Symbol Error
The probability that we don't get the symbol correct is the probability that $x$ does not fall within the range between the thresholds in which it belongs. Here, each $a_i = A_i$. Also the noise variance is $\sigma_N^2 = N_0/2$, as discussed above.
Assuming neighboring symbols $a_i$ are spaced by $2A$, the decision threshold is always $A$ away from the symbol values. For the symbols $i$ in the 'middle',
$$P(\text{symbol error}|H_i) = 2Q\!\left(\frac{A}{\sqrt{N_0/2}}\right)$$
For the symbols $i$ on the 'sides',
$$P(\text{symbol error}|H_i) = Q\!\left(\frac{A}{\sqrt{N_0/2}}\right)$$
So overall,
$$P(\text{symbol error}) = \frac{2(M-1)}{M}\, Q\!\left(\frac{A}{\sqrt{N_0/2}}\right)$$
20.1.1 Symbol Error Rate and Average Bit Energy
How does this relate to the average bit energy $\mathcal{E}_b$? From last lecture, $\mathcal{E}_b = \frac{1}{\log_2 M}\,\frac{M^2 - 1}{3} A^2$, which means that
$$A = \sqrt{\frac{3 \log_2 M}{M^2 - 1}\,\mathcal{E}_b}$$
So
$$P(\text{symbol error}) = \frac{2(M-1)}{M}\, Q\!\left(\sqrt{\frac{6 \log_2 M}{M^2 - 1}\,\frac{\mathcal{E}_b}{N_0}}\right) \qquad (31)$$
Equation (31) is plotted in Figure 27.
Figure 27: Probability of symbol error in M-ary PAM, for M = 2, 4, 8, 16, versus $\mathcal{E}_b/N_0$ in dB.
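Equation (31) is also easy to sanity-check by simulation. The following is a rough Monte Carlo sketch (mine, not from the notes); the symbol count, random seed, and variable names are arbitrary choices, and $N_0$ is fixed to 1.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

M, EbN0_dB, N0 = 4, 8.0, 1.0
EbN0 = 10 ** (EbN0_dB / 10)
Eb = EbN0 * N0

# choose A so that the average bit energy matches Eb (see the relation above (31))
A = np.sqrt(3 * np.log2(M) / (M**2 - 1) * Eb)
levels = A * (2 * np.arange(M) - (M - 1))          # -3A, -A, A, 3A for M = 4

rng = np.random.default_rng(0)
n_sym = 200_000
tx = rng.integers(0, M, n_sym)
x = levels[tx] + rng.normal(scale=np.sqrt(N0 / 2), size=n_sym)   # noise variance N0/2
rx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)     # minimum-distance decision

ser_sim = np.mean(rx != tx)
ser_eq31 = 2 * (M - 1) / M * Q(np.sqrt(6 * np.log2(M) / (M**2 - 1) * EbN0))
print(f"simulated SER {ser_sim:.4f}   equation (31) {ser_eq31:.4f}")
```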
20.2 Bit Errors and Gray Encoding
For binary PAM, there are only two symbols; one will be assigned binary 0 and the other binary 1. When you make one symbol error (decide $H_0$ or $H_1$ in error), it will cause one bit error.
For M-ary PAM, bits and symbols are not synonymous. Instead, we have to carefully assign bit codes to
symbols 1 . . . M.
Example: Bit coding of M = 4 symbols
While the two options shown in Figure 28 both assign two bits to each symbol in unique ways, one will lead to a higher bit error rate than the other. Which one is better, and why?
Figure 28: Two options for assigning bits to symbols.
The key is to recall the model for noise. It will not shift a signal uniformly across symbols. It will tend to leave the received signal $x$ close to the original signal $a_i$. The neighbors of $a_i$ will be more likely than distant signal space vectors.
Thus Gray encoding will only change one bit across boundaries, as in Option 2 in Figure 28.
Example: Bit coding of M = 8 symbols
Assign three bits to each symbol such that any two nearest neighbors are different in only one bit (Gray
encoding).
Solution: Here is one solution.
Figure 29: Gray encoding for 8-ary PAM.
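One standard way to generate such a labeling is the binary-reflected Gray code. The sketch below (mine, not from the notes) builds one possible Gray mapping and checks the one-bit-difference property; it is not necessarily the exact labeling drawn in Figure 29.

```python
def gray_labels(m_bits):
    # binary-reflected Gray code: the label for index i is i XOR (i >> 1)
    M = 1 << m_bits
    return [format(i ^ (i >> 1), f"0{m_bits}b") for i in range(M)]

labels = gray_labels(3)          # one 3-bit label per 8-ary PAM amplitude, left to right
print(labels)                    # ['000', '001', '011', '010', '110', '111', '101', '100']

# check: adjacent labels differ in exactly one bit (the Gray property)
for left, right in zip(labels, labels[1:]):
    assert sum(a != b for a, b in zip(left, right)) == 1
```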
20.2.1 Bit Error Probabilities
How many bit errors are caused by a symbol error in M-ary PAM?
• One. If Gray encoding is used, the errors will tend to be just one bit flipped, rather than multiple bits flipped. At least at high $E_b/N_0$,
$$P(\text{error}) \approx \frac{1}{\log_2 M}\, P(\text{symbol error}) \qquad (32)$$
• Maybe more, up to $\log_2 M$ in the worst case. Then, we need to study further the probability that $x$ will jump more than one decision region.
As of now - if you have to estimate a bit error rate for M-ary PAM - use (32). We will show that this
approximation is very good, in almost all cases. We will also show examples in multi-dimensional signalling
(e.g., QAM) when this is not a good approximation.
Lecture 13
Today: (1) Pulse Shaping / ISI, (2) N-Dim Detection Theory
21 Inter-symbol Interference
1. Consider the spectrum of the ideal 1-D PAM system with a square pulse.
2. Consider the time-domain of the signal which is close to a rect function in the frequency domain.
We don’t want either: (1) occupies too much spectrum, and (2) occupies too much time.
1. If we try (1) above and use FDMA (frequency division multiple access) then the interference is out-of-band
interference.
2. If we try (2) above and put symbols right next to each other in time, our own symbols experience
interference called inter-symbol interference.
In reality, we want to compromise between (1) and (2) and experience only a small amount of both.
Now, consider the effect of filtering in the path between transmitter and receiver. Filters will be seen in
1. Transmitter: Baseband filters come from the limited bandwidth, speed of digital circuitry. RF filters are
used to limit the spectral output of the signal (to meet out-of-band interference requirements).
2. Channel: The wideband radio channel is a sum of time-delayed impulses. Wired channels have (non-
ideal) bandwidth – the attenuation of a cable or twisted pair is a function of frequency. Waveguides have
bandwidth limits (and modes).
3. Receiver: Bandpass filters are used to reduce interference, noise power entering receiver. Note that the
‘matched filter’ receiver is also a filter!
21.1 Multipath Radio Channel
(Background knowledge). The measured radio channel is a filter, call its impulse response h(τ, t). If s(t) is
transmitted, then
r(t) = s(t) ⋆ h(τ, t)
will be received. Typically, the radio channel is modeled as
$$h(\tau, t) = \sum_{l=1}^{L} \alpha_l\, e^{j\phi_l}\, \delta(\tau - \tau_l), \qquad (33)$$
where $\alpha_l$ and $\phi_l$ are the amplitude and phase of the $l$th multipath component, and $\tau_l$ is its time delay. Essentially, each reflection, diffraction, or scattering adds an additional impulse to (33) with a particular time delay
tially, each reflection, diffraction, or scattering adds an additional impulse to (33) with a particular time delay
(depending on the extra path length compared to the line-of-sight path) and complex amplitude (depending
on the losses and phase shifts experienced along the path). The amplitudes of the impulses tend to decay over
time, but multipath with delays of hundreds of nanoseconds often exist in indoor links and delays of several
microseconds often exist in outdoor links.
The channel in (33) has infinite bandwidth (why?) so isn’t really what we’d see if we looked just at a narrow
band of the channel.
Example: 2400-2480 MHz Measured Radio Channel
The 'Delay Profile' is shown in Figure 30 for an example radio channel. Actually, what we observe is
$$\text{PDP}(t) = r(t) \star x(t) = (x(t) \star h(t)) \star x(t) = R_X(t) \star h(t)$$
$$\text{PDP}(f) = H(f)\, S_X(f) \qquad (34)$$
Note that 200 ns = 0.2 µs = 1/(5 MHz). At this symbol rate, each symbol would fully bleed into the next symbol.
Figure 30: Multiple delay profiles measured on two example links: (a) 13 to 43, and (b) 14 to 43.
How about for a 5 kHz symbol rate? Why or why not?
Other solutions:
• OFDM (Orthogonal Frequency Division Multiplexing)
• Direct Sequence Spread Spectrum / CDMA
• Adaptive Equalization
21.2 Transmitter and Receiver Filters
Example: Bipolar PAM Signal Input to a Filter
How does this affect the correlation receiver and detector? Symbols sent over time are no longer orthogonal – one symbol 'leaks' into the next (or beyond).
22 Nyquist Filtering
Key insight:
• We don't need the leakage to be zero at all times.
• The value out of the matched filter (correlator) is taken only at times $nT_s$, for integer $n$.
Nyquist filtering has the objective of making, at the matched filter output, the sum of 'leakage' or ISI from one symbol be exactly 0 at all other sampling times $nT_s$.
Where have you seen this condition before? This condition, that there is an inner product of zero, is the condition for orthogonality of two waveforms. What we need are pulse shapes (waveforms) that, when shifted in time by $T_s$, are orthogonal to each other. In more mathematical language, we need $p(t)$ such that
$$\{\phi_n(t) = p(t - nT_s)\}_{n = \ldots, -1, 0, 1, \ldots}$$
forms an orthonormal basis. The square-root raised cosine, shown in Figure 32, accomplishes this. But lots of other functions also accomplish this. The Nyquist filtering theorem provides an easy means to come up with others.
Theorem: Nyquist Filtering
A necessary and sufficient condition for $x(t)$ to satisfy
$$x(nT_s) = \begin{cases} 1, & n = 0\\ 0, & \text{o.w.} \end{cases}$$
is that its Fourier transform $X(f)$ satisfy
$$\sum_{m=-\infty}^{\infty} X\!\left(f + \frac{m}{T_s}\right) = T_s$$
Proof: on page 833, Appendix A, of the Rice book.
Basically, $X(f)$ could be:
• $X(f) = \text{rect}(f T_s)$, i.e., exactly constant within the band $-\frac{1}{2T_s} < f < \frac{1}{2T_s}$ and zero outside.
• It may bleed into the next 'channel', but the sum
$$\cdots + X\!\left(f - \frac{1}{T_s}\right) + X(f) + X\!\left(f + \frac{1}{T_s}\right) + \cdots$$
must be constant across all $f$.
Thus X(f) is allowed to bleed into other frequency bands – but the neighboring frequency-shifted copy of X(f)
must be lower s.t. the sum is constant. See Figure 31.
If $X(f)$ only bleeds into one neighboring channel (that is, $X(f) = 0$ for all $|f| > \frac{1}{T_s}$), we denote the difference between the ideal rect function and our $X(f)$ as $\Delta(f)$,
$$\Delta(f) = \left| X(f) - \text{rect}(f T_s) \right|$$
Then we can rewrite the Nyquist filtering condition as
$$\Delta\!\left(\frac{1}{2T_s} - f\right) = \Delta\!\left(\frac{1}{2T_s} + f\right)$$
for $-1/T_s \le f < 1/T_s$. Essentially, $X(f)$ is symmetric about $f = 1/(2T_s)$. See Figure 31.
Figure 31: The Nyquist filtering criterion – $1/T_s$-frequency-shifted copies of $X(f)$ must add up to a constant.
Bateman presents this whole condition as: "A Nyquist channel response is characterized by the transfer function having a transition band that is symmetrical about a frequency equal to $0.5 \times 1/T_s$."
Activity: Do-it-yourself Nyquist filter. Take a sheet of paper and fold it in half, and then in half the other direction. Cut a function into the thickest side (the edge that you just folded). Leave a tiny bit so that it is not completely disconnected into two halves. Unfold. Drawing a horizontal line for the frequency $f$ axis, the middle is $0.5/T_s$, and the vertical axis is $X(f)$.
22.1 Raised Cosine Filtering
$$H_{RC}(f) = \begin{cases}
T_s, & 0 \le |f| \le \frac{1-\alpha}{2T_s}\\[4pt]
\frac{T_s}{2}\left[1 + \cos\!\left(\frac{\pi T_s}{\alpha}\left(|f| - \frac{1-\alpha}{2T_s}\right)\right)\right], & \frac{1-\alpha}{2T_s} \le |f| \le \frac{1+\alpha}{2T_s}\\[4pt]
0, & \text{o.w.}
\end{cases} \qquad (35)$$
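A quick numerical check of the Nyquist criterion for (35) can be instructive. The sketch below (mine, not from the notes) evaluates $H_{RC}(f)$ on a grid and verifies that the $1/T_s$-shifted copies sum to $T_s$; the grid spacing and the excess-bandwidth value $\alpha = 0.5$ are arbitrary.

```python
import numpy as np

def H_RC(f, Ts, alpha):
    # raised-cosine spectrum, equation (35)
    f = np.abs(f)
    f1 = (1 - alpha) / (2 * Ts)
    f2 = (1 + alpha) / (2 * Ts)
    H = np.zeros_like(f)
    H[f <= f1] = Ts
    mid = (f > f1) & (f <= f2)
    H[mid] = Ts / 2 * (1 + np.cos(np.pi * Ts / alpha * (f[mid] - f1)))
    return H

Ts, alpha = 1.0, 0.5
f = np.linspace(-0.5 / Ts, 0.5 / Ts, 1001)

# Nyquist criterion: the sum over m of H(f + m/Ts) should equal Ts for every f
total = sum(H_RC(f + m / Ts, Ts, alpha) for m in range(-3, 4))
print(np.allclose(total, Ts))     # True
```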
22.2 Square-Root Raised Cosine Filtering
But we usually need to design a system with two identical filters, one at the transmitter and one at the receiver, which in series produce a zero-ISI filter. In other words, we need $|H(f)|^2$ to meet the Nyquist filtering condition.
$$H_{RRC}(f) = \begin{cases}
\sqrt{T_s}, & 0 \le |f| \le \frac{1-\alpha}{2T_s}\\[4pt]
\sqrt{\frac{T_s}{2}\left[1 + \cos\!\left(\frac{\pi T_s}{\alpha}\left(|f| - \frac{1-\alpha}{2T_s}\right)\right)\right]}, & \frac{1-\alpha}{2T_s} \le |f| \le \frac{1+\alpha}{2T_s}\\[4pt]
0, & \text{o.w.}
\end{cases} \qquad (36)$$
23 M-ary Detection Theory in N-dimensional signal space
We are going to start to talk about QAM, a modulation with two dimensional signal vectors, and later even
higher dimensional signal vectors. We have developed M-ary detection theory for 1-D signal vectors, and now
we will extend that to N-dimensional signal vectors.
Setup:
• Transmit: one of M possible symbols, $\mathbf{a}_0, \ldots, \mathbf{a}_{M-1}$.
Figure 32: Two SRRC pulses, delayed in time by $nT_s$ for any integer $n$, are orthogonal to each other.
• Receive: the symbol plus noise:
$$\begin{aligned}
H_0 &: \; \mathbf{X} = \mathbf{a}_0 + \mathbf{n}\\
&\;\;\vdots\\
H_{M-1} &: \; \mathbf{X} = \mathbf{a}_{M-1} + \mathbf{n}
\end{aligned}$$
• Assume: $\mathbf{n}$ is multivariate Gaussian, each component $n_i$ is independent with zero mean and variance $\sigma_N^2 = N_0/2$.
• Assume: Symbols are equally likely.
• Question: What are the optimal decision regions?
When symbols are equally likely, the optimal decision turns out to be given by the maximum likelihood receiver,
$$\hat{i} = \arg\max_i \log f_{X|H_i}(\mathbf{x}|H_i) \qquad (37)$$
Here,
$$f_{X|H_i}(\mathbf{x}|H_i) = \frac{1}{(2\pi\sigma_N^2)^{N/2}} \exp\!\left(-\sum_j \frac{(x_j - a_{i,j})^2}{2\sigma_N^2}\right)$$
which can also be written as
$$f_{X|H_i}(\mathbf{x}|H_i) = \frac{1}{(2\pi\sigma_N^2)^{N/2}} \exp\!\left(-\frac{\|\mathbf{x} - \mathbf{a}_i\|^2}{2\sigma_N^2}\right)$$
So when we want to solve (37) we can simplify quickly to:
$$\begin{aligned}
\hat{i} &= \arg\max_i \left[ \log\frac{1}{(2\pi\sigma_N^2)^{N/2}} - \frac{\|\mathbf{x} - \mathbf{a}_i\|^2}{2\sigma_N^2} \right]\\
\hat{i} &= \arg\max_i \left( -\frac{\|\mathbf{x} - \mathbf{a}_i\|^2}{2\sigma_N^2} \right)\\
\hat{i} &= \arg\min_i \|\mathbf{x} - \mathbf{a}_i\|^2\\
\hat{i} &= \arg\min_i \|\mathbf{x} - \mathbf{a}_i\| \qquad (38)
\end{aligned}$$
Again: just find the $\mathbf{a}_i$ in the signal space diagram which is closest to $\mathbf{x}$.

Pairwise Comparisons When is $\mathbf{x}$ closer to $\mathbf{a}_i$ than to $\mathbf{a}_j$ for some other signal space point $j$? Solution: The two decision regions are separated by a straight line (Note: replace "line" with plane in 3-D, or subspace in N-D). To find this line:
1. Draw a line segment connecting $\mathbf{a}_i$ and $\mathbf{a}_j$.
2. Draw a point in the middle of that line segment.
3. Draw the perpendicular bisector of the line segment through that point.
Why?
Solution: Try to find the locus of points $\mathbf{x}$ which satisfy
$$\|\mathbf{x} - \mathbf{a}_i\|^2 = \|\mathbf{x} - \mathbf{a}_j\|^2$$
You can do this by using the inner product to represent the magnitude squared operator:
$$(\mathbf{x} - \mathbf{a}_i)^T(\mathbf{x} - \mathbf{a}_i) = (\mathbf{x} - \mathbf{a}_j)^T(\mathbf{x} - \mathbf{a}_j)$$
Then use FOIL (multiply out), cancel, and reorganize to find a linear equation in terms of $\mathbf{x}$. This is left as an exercise.

Decision Regions Each pairwise comparison results in a linear division of space. The combined decision region $R_i$ is the space which is the intersection of all pairwise decision regions. (All conditions must be satisfied.)
Example: Optimal Decision Regions
See Figure 33.
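The decision rule (38) is one line of code once the constellation is stored as an array. Below is a minimal sketch (mine, not from the notes) for a 2-D, QPSK-like signal set; the points and the received vector are made-up values.

```python
import numpy as np

# a 2-D signal set, one row per signal space vector a_i (QPSK-like, for illustration)
A = np.array([[ 1.0,  1.0],
              [-1.0,  1.0],
              [-1.0, -1.0],
              [ 1.0, -1.0]])

def ml_decide(x):
    # equation (38): i_hat = argmin_i ||x - a_i||
    return int(np.argmin(np.linalg.norm(A - x, axis=1)))

x = np.array([0.8, -1.3])        # received vector = symbol plus noise
print(ml_decide(x))              # -> 3, the closest constellation point
```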
Lecture 14
Today: (1) QAM & PSK, (2) QAM Probability of Error
24 Quadrature Amplitude Modulation (QAM)
Quadrature Amplitude Modulation (QAM) is a two-dimensional signalling method which uses the in-phase and
quadrature (cosine and sine waves, respectively) as the two dimensions. Thus QAM uses two basis functions.
These are:
$$\phi_0(t) = \sqrt{2}\, p(t) \cos(\omega_0 t)$$
$$\phi_1(t) = \sqrt{2}\, p(t) \sin(\omega_0 t)$$
where $p(t)$ is a pulse shape (like the ones we've looked at previously) with support on $T_1 \le t \le T_2$. That is, $p(t)$ is only non-zero within that window.
Previously, we've separated frequency up-conversion and down-conversion from pulse shaping. This definition of the orthonormal basis specifically considers a pulse shape at a frequency $\omega_0$. We include it here because
Figure 33: Example signal space diagrams, panels (a)-(d). Draw the optimal decision regions.
it is critical to see how, with the same pulse shape p(t), we can have two orthogonal basis functions. (This is
not intuitive!)
We have two restrictions on $p(t)$ that make these two basis functions orthonormal:
• $p(t)$ is unit-energy.
• $p(t)$ is 'low pass'; that is, it has low frequency content compared to the carrier $\omega_0$.
24.1 Showing Orthogonality
Let's show that the basis functions are orthogonal. We'll need these cosine / sine function relationships:
$$\sin(2A) = 2 \cos A \sin A$$
$$\cos A - \cos B = -2 \sin\!\left(\frac{A+B}{2}\right) \sin\!\left(\frac{A-B}{2}\right)$$
As a first example, let $p(t) = c$ for a constant $c$, for $T_1 \le t \le T_2$. Are the two bases orthogonal? First, show that
$$\langle \phi_0(t), \phi_1(t) \rangle = c^2\, \frac{\sin(\omega_0(T_2 + T_1)) \sin(\omega_0(T_2 - T_1))}{\omega_0}$$
Solution: First, see how far we can get before plugging in for $p(t)$:
$$\begin{aligned}
\langle \phi_0(t), \phi_1(t) \rangle &= \int_{T_1}^{T_2} \sqrt{2}\, p(t) \cos(\omega_0 t)\, \sqrt{2}\, p(t) \sin(\omega_0 t)\, dt\\
&= 2 \int_{T_1}^{T_2} p^2(t) \cos(\omega_0 t) \sin(\omega_0 t)\, dt\\
&= \int_{T_1}^{T_2} p^2(t) \sin(2\omega_0 t)\, dt
\end{aligned}$$
Next, consider the constant $p(t) = c$ for a constant $c$, for $T_1 \le t \le T_2$:
$$\langle \phi_0(t), \phi_1(t) \rangle = c^2 \int_{T_1}^{T_2} \sin(2\omega_0 t)\, dt
= c^2 \left[ \frac{-\cos(2\omega_0 t)}{2\omega_0} \right]_{T_1}^{T_2}
= -c^2\, \frac{\cos(2\omega_0 T_2) - \cos(2\omega_0 T_1)}{2\omega_0}$$
I want the answer in terms of $T_2 - T_1$ (for reasons explained below), so
$$\langle \phi_0(t), \phi_1(t) \rangle = c^2\, \frac{\sin(\omega_0(T_2 + T_1)) \sin(\omega_0(T_2 - T_1))}{\omega_0}$$
We can see that there are two cases:
1. The pulse duration $T_2 - T_1$ is an integer number of periods. That is, $\omega_0(T_2 - T_1) = \pi k$ for some integer $k$. In this case, the right-hand sine is zero, and so the correlation is exactly zero.
2. Otherwise, the numerator is bounded above and below by $+1$ and $-1$, because it is a sinusoid. That is,
$$-\frac{c^2}{\omega_0} \le \langle \phi_0(t), \phi_1(t) \rangle \le \frac{c^2}{\omega_0}$$
Typically, $\omega_0$ is a large number. For example, frequencies $2\pi\omega_0$ could be in the MHz or GHz ranges. Certainly, when we divide by numbers on the order of $10^6$ or $10^9$, we're going to get a very small inner product. For engineering purposes, $\phi_0(t)$ and $\phi_1(t)$ are orthogonal.
Finally, we can attempt the proof for the case of arbitrary pulse shape p(t). In this case, we use the ‘low-pass’
assumption that the maximum frequency content of p(t) is much lower than 2πω
0
. This assumption allows us
to assume that p(t) is nearly constant over the course of one cycle of the carrier sinusoid.
This is well-illustrated in Figure 5.15 in the Rice book (page 298). In this figure, we see a pulse modulated by a sine wave at frequency $2\pi\omega_0$. Zooming in on any few cycles, you can see that the pulse $p^2(t)$ is largely constant across each cycle. Thus, when we integrate $p^2(t)\sin(2\omega_0 t)$ across one cycle, we're going to end up with approximately zero. Proof?
Solution: How many cycles are there? Consider the period $[T_1, T_2]$. How many times can it be divided by $\pi/\omega_0$? Let the integer number be
$$L = \left\lfloor \frac{(T_2 - T_1)\,\omega_0}{\pi} \right\rfloor.$$
Then each cycle is in the period $[T_1 + (i-1)\pi/\omega_0,\; T_1 + i\pi/\omega_0]$, for $i = 1, \ldots, L$. Within each of these cycles, assume $p^2(t)$ is nearly constant. Then, in the same way that the earlier integral was zero, this part of the integral is zero here. The only remainder is the remaining (partial cycle) period, $[T_1 + L\pi/\omega_0,\; T_2]$. Thus
$$\begin{aligned}
\langle \phi_0(t), \phi_1(t) \rangle &= p^2(T_1 + L\pi/\omega_0) \int_{T_1 + L\pi/\omega_0}^{T_2} \sin(2\omega_0 t)\, dt\\
&= -p^2(T_1 + L\pi/\omega_0)\, \frac{\cos(2\omega_0 T_2) - \cos(2\omega_0 T_1 + 2\pi L)}{2\omega_0}\\
&= p^2(T_1 + L\pi/\omega_0)\, \frac{\sin(\omega_0(T_2 + T_1)) \sin(\omega_0(T_2 - T_1))}{\omega_0}
\end{aligned}$$
Again, the inner product is bounded on either side:
$$-\frac{p^2(T_1 + L\pi/\omega_0)}{\omega_0} \le \langle \phi_0(t), \phi_1(t) \rangle \le \frac{p^2(T_1 + L\pi/\omega_0)}{\omega_0}$$
which, for very large $\omega_0$, is nearly zero.
24.2 Constellation
With these two basis functions, M-ary QAM is defined as an arbitrary signal set $\mathbf{a}_0, \ldots, \mathbf{a}_{M-1}$, where each signal space vector $\mathbf{a}_k$ is two-dimensional:
$$\mathbf{a}_k = [a_{k,0},\; a_{k,1}]^T$$
The signal corresponding to symbol $k$ in (M-ary) QAM is thus
$$\begin{aligned}
s(t) &= a_{k,0}\, \phi_0(t) + a_{k,1}\, \phi_1(t)\\
&= a_{k,0}\, \sqrt{2}\, p(t) \cos(\omega_0 t) + a_{k,1}\, \sqrt{2}\, p(t) \sin(\omega_0 t)\\
&= \sqrt{2}\, p(t) \left[ a_{k,0} \cos(\omega_0 t) + a_{k,1} \sin(\omega_0 t) \right]
\end{aligned}$$
Note that we could also write the signal $s(t)$ as
$$\begin{aligned}
s(t) &= \sqrt{2}\, p(t)\, \operatorname{Re}\!\left\{ a_{k,0} e^{-j\omega_0 t} + j a_{k,1} e^{-j\omega_0 t} \right\}\\
&= \sqrt{2}\, p(t)\, \operatorname{Re}\!\left\{ e^{-j\omega_0 t} (a_{k,0} + j a_{k,1}) \right\} \qquad (39)
\end{aligned}$$
In many textbooks, you will see them write a QAM signal in shorthand as
$$s_{CB}(t) = p(t)(a_{k,0} + j a_{k,1})$$
This is called 'complex baseband'. If you do the following operation you can recover the real signal $s(t)$ as
$$s(t) = \sqrt{2}\,\operatorname{Re}\!\left\{ e^{-j\omega_0 t}\, s_{CB}(t) \right\}$$
You will not be responsible for Complex Baseband notation, but you should be able to read other books and
know what they’re talking about.
Then, we can see that the signal space representation $\mathbf{a}_k$ is given by
$$\mathbf{a}_k = [a_{k,0},\; a_{k,1}]^T \qquad \text{for } k = 0, \ldots, M-1$$
24.3 Signal Constellations
Figure 34: Square signal constellation for 64-QAM.
• See Figure 5.17 for examples of square QAM. These constellations use $M = 2^a$ for some even integer $a$, and arrange the points in a grid. One such diagram for $M = 64$ square QAM is also given here in Figure 34.
• Figure 5.18 shows examples of constellations which use $M = 2^a$ for some odd integer $a$, and arrange the points in a grid. These are either rectangular grids, or use squares with the corners cut out.
24.4 Angle and Magnitude Representation
You can plot $\mathbf{a}_k$ in signal space and see that it has a magnitude (distance from the origin) of $|\mathbf{a}_k| = \sqrt{a_{k,0}^2 + a_{k,1}^2}$ and angle of $\angle \mathbf{a}_k = \tan^{-1} \frac{a_{k,1}}{a_{k,0}}$. In the continuous-time signal $s(t)$ this is
$$s(t) = \sqrt{2}\, p(t)\, |\mathbf{a}_k| \cos(\omega_0 t + \angle \mathbf{a}_k)$$
24.5 Average Energy in M-QAM
Recall that the average energy is calculated as:
$$\mathcal{E}_s = \frac{1}{M} \sum_{i=0}^{M-1} |\mathbf{a}_i|^2, \qquad
\mathcal{E}_b = \frac{1}{M \log_2 M} \sum_{i=0}^{M-1} |\mathbf{a}_i|^2$$
where $\mathcal{E}_s$ is the average energy per symbol and $\mathcal{E}_b$ is the average energy per bit. We'll work in class some examples of finding $\mathcal{E}_b$ in different constellation diagrams.
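As a quick check of these definitions, here is a small sketch (mine, not from the notes) computing $\mathcal{E}_s$ and $\mathcal{E}_b$ for a square 16-QAM constellation; the amplitude scale A is arbitrary.

```python
import numpy as np

A = 1.0
lv = A * np.array([-3, -1, 1, 3])                   # per-axis amplitudes of square 16-QAM
pts = np.array([(i, q) for i in lv for q in lv])    # the 16 signal space vectors a_i

M = len(pts)
Es = np.mean(np.sum(pts**2, axis=1))    # average energy per symbol: mean of |a_i|^2
Eb = Es / np.log2(M)                    # average energy per bit
print(M, Es, Eb)                        # 16, 10*A^2, 2.5*A^2
```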
24.6 Phase-Shift Keying
Some implementations of QAM limit the constellation to include only signal space vectors with equal magnitude, i.e.,
$$|\mathbf{a}_0| = |\mathbf{a}_1| = \cdots = |\mathbf{a}_{M-1}|$$
The points $\mathbf{a}_i$ for $i = 0, \ldots, M-1$ are uniformly spaced on a circle. Some examples are shown in Figure 35.

BPSK For example, binary phase-shift keying (BPSK) is the case of $M = 2$. Thus BPSK is the same as bipolar (2-ary) PAM. What is the probability of error in BPSK? The same as in bipolar PAM, i.e., for equally probable symbols,
$$P[\text{error}] = Q\!\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right).$$
QPSK M = 4 PSK is also called quadrature phase shift keying (QPSK), and is shown in Figure 35(a). Note
that the rotation of the signal space diagram doesn’t matter, so both ‘versions’ are identical in concept (although
would be a slightly different implementation). Note how QPSK is the same as M = 4 square QAM.
24.7 Systems which use QAM
See [Couch 2007], wikipedia, and the Rice book:
• Digital Microwave Relay, various manufacturer-specific protocols. 6 GHz, and 11 GHz.
• Dial-up modems: use an M = 16 or M = 8 QAM constellation.
• DSL. G.DMT uses multicarrier (up to 256 carriers) methods (OFDM), and on each narrowband (4.3 kHz) carrier, it can send up to $2^{15}$-point QAM (32,768 QAM). G.Lite uses up to 128 carriers, each with up to $2^8 = 256$ QAM.
Figure 35: Signal constellations for (a) M = 4 PSK and (b) M = 8 and M = 16 PSK.
• Cable modems. Upstream: 6 MHz bandwidth channel, with 64 QAM or 256 QAM. Downstream: QPSK
or 16 QAM.
• 802.11a and 802.11g: Adaptive modulation method, uses up to 64 QAM.
• Digital Video Broadcast (DVB): APSK used in ETSI standard.
25 QAM Probability of Error
25.1 Overview of Future Discussions on QAM
Before we get into details about the probabilities of error, you should realize that there are two main consider-
ations when a constellation is designed for a particular M:
1. Spacing the points so that each point is equally distant from its neighboring points tends to reduce the probability of error.
2. The highest magnitude points (furthest from the origin) strongly (negatively) impact average bit energy.
There are also some practical considerations that we will discuss
1. Some constellations have more complicated decision regions which require more computation to implement
in a receiver.
2. Some constellations are more difficult (energy-consuming) to amplify at the transmitter.
25.2 Options for Probability of Error Expressions
In order of preference:
1. Exact formula. In a few cases, there is an exact expression for P [error] in an AWGN environment.
2. Union bound. This is a provable upper bound on the probability of error. It is not an approximation. It
can be used for “worst case” analysis which is often very useful for engineering design of systems.
3. Nearest Neighbor Approximation. This is a way to get a solution that is analytically easier to handle. Typically these approximations are good at high $\frac{\mathcal{E}_b}{N_0}$.
25.3 Exact Error Analysis
Exact probability of error formulas in N-dimensional modulations can be very difficult to find. This is because our decision regions are more complex than one threshold test. They require an integration of an N-D Gaussian pdf across an area. We needed a Q function to get a tail probability for a 1-D Gaussian pdf. Now we need not just a tail probability, but more like a section probability, which is the volume under some part of the N-dimensional Gaussian pdf.

For example, consider M-ary PSK (see Figure 36). Essentially, we must calculate the probability of symbol error as 1 minus the probability in the sector within $\pm\frac{\pi}{M}$ of the correct angle $\phi_i$. This is,
$$P(\text{symbol error}) = 1 - \int_{\mathbf{r} \in R_i} \frac{1}{2\pi\sigma^2}\, e^{-\frac{\|\mathbf{r} - \mathbf{a}_i\|^2}{2\sigma^2}}\, d\mathbf{r} \qquad (40)$$

Figure 36: Signal space diagram for M-ary PSK for M = 8 and M = 16.

This integral is a double integral, and we generally do not have an exact closed-form expression for the result.
25.4 Probability of Error in QPSK
In QPSK, the probability of error is analytically tractable. Consider the QPSK constellation diagram, when
Gray encoding is used. You have already calculated the decision regions for each symbol; now consider the
decision region for the first bit.
The decision is made using only one dimension of the received signal vector $\mathbf{x}$, specifically $x_1$. Similarly, the second bit decision is made using only $x_2$. Also, the noise contribution to each element is independent. The decisions are decoupled – $x_2$ has no impact on the decision about bit one, and $x_1$ has no impact on the decision
on bit two. Since we know the bit error probability for each bit decision (it is the same as bipolar PAM) we can
see that the bit error probability is also
$$P[\text{error}] = Q\!\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right) \qquad (41)$$
This is an extraordinary result – the bit rate will double in QPSK, but in theory, the bit error rate does not
increase. As we will show later, the bandwidth of QPSK is identical to that of BPSK.
25.5 Union Bound
From 5510 or an equivalent class (or a Venn diagram) you may recall the probability formula, that for two
events E and F that
P [E ∪ F] = P [E] +P [F] −P [E ∩ F]
You can prove this from the three axioms of probability. (This holds for any events E and F!) Then, using the
above formula, and the first axiom of probability, we have that
P [E ∪ F] ≤ P [E] +P [F] . (42)
Furthermore, from (42) it is straightforward to show that for any list of sets $E_1, E_2, \ldots, E_n$ we have that
$$P\!\left[\bigcup_{i=1}^{n} E_i\right] \le \sum_{i=1}^{n} P[E_i] \qquad (43)$$
This is called the union bound, and it is very useful across communications. If you know one inequality, know this one. It is useful when the overlaps $E_i \cap E_j$ are small but difficult to calculate.
25.6 Application of Union Bound
In this class, our events are typically decision events. The event $E_i$ may represent the event that we decide $H_i$, when some other symbol not equal to $i$ was actually sent. In this case,
$$P[\text{symbol error}] \le \sum_{i=1}^{n} P[E_i] \qquad (44)$$
The union bound gives a conservative estimate. This can be useful for quick initial study of a modulation
type.
Example: QPSK
First, let's study the union bound for QPSK, as shown in Figure 37(a). Assume $s_1(t)$ is sent. We know that (from previous analysis):
$$P[\text{error}] = Q\!\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)$$
So the probability of symbol error is one minus the probability that there were no errors in either bit,
$$P[\text{symbol error}] = 1 - \left(1 - Q\!\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)\right)^2 \qquad (45)$$
In contrast, let’s calculate the union bound on the probability of error. We define the two error events as:
Figure 37: Union bound examples. (a) is QPSK with symbols of equal energy $\sqrt{\mathcal{E}_s}$. In (b), $\mathbf{a}_1 = -\mathbf{a}_3 = [0, \sqrt{3\mathcal{E}_s/2}]^T$ and $\mathbf{a}_2 = -\mathbf{a}_0 = [\sqrt{\mathcal{E}_s/2}, 0]^T$.
• $E_2$, the event that $\mathbf{r}$ falls on the wrong side of the decision boundary with $\mathbf{a}_2$.
• $E_0$, the event that $\mathbf{r}$ falls on the wrong side of the decision boundary with $\mathbf{a}_0$.
Then we write
$$P[\text{symbol error}|H_1] = P[E_2 \cup E_0 \cup E_3]$$
Questions:
1. Is this equation exact?
2. Is $E_2 \cup E_0 \cup E_3 = E_2 \cup E_0$? Why or why not?
Because of the answer to question (2.),
$$P[\text{symbol error}|H_1] = P[E_2 \cup E_0]$$
But don’t be confused. The Rice book will not tell you to do this! His strategy is to ignore redundancies,
because we are only looking for an upper bound anyway. Either way produces an upper bound, and in fact,
both produce nearly identical numerical results. However, it is perfectly acceptable (and in fact a better bound)
to remove redundancies and then apply the bound.
Then, we use the union bound:
$$P[\text{symbol error}|H_1] \le P[E_2] + P[E_0]$$
These two probabilities are just the probability of error for a binary modulation, and both are identical, so
$$P[\text{symbol error}|H_1] \le 2Q\!\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)$$
What is missing / What is the difference in this expression compared to (45)?
See Figure 38 for the union bound probability of error plot, compared to the exact expression. Only at very low $\mathcal{E}_b/N_0$ is there any noticeable difference!
Example: 4-QAM with two amplitude levels
This is shown (poorly) in Figure 37(b). The amplitudes of the top and bottom symbols are $\sqrt{3}$ times the
Figure 38: For QPSK, the exact probability of symbol error expression vs. the union bound.
amplitude of the symbols on the right and left. (They are positioned to keep the distance between points in the signal space equal to $\sqrt{2\mathcal{E}_s}$.) I am calling this "2-amplitude 4-QAM" (I made it up).

What is the union bound on the probability of symbol error, given $H_1$? Solution: Given symbol 1, the probability is the same as above. Defining $E_2$ and $E_0$ as above, the two distances between symbol $\mathbf{a}_1$ and $\mathbf{a}_2$ or $\mathbf{a}_0$ are the same: $\sqrt{2\mathcal{E}_s}$. Thus the formula for the union bound is the same.
What is the union bound on the probability of symbol error, given $H_2$? Solution: Now, it is
$$P[\text{symbol error}|H_2] \le 3Q\!\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)$$
So, overall, the union bound on the probability of symbol error is
$$P[\text{symbol error}] \le \frac{3 \cdot 2 + 2 \cdot 2}{4}\, Q\!\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)$$
How about average energy? For QPSK, the symbol energies are all equal, so $\mathcal{E}_{av} = \mathcal{E}_s$. For the two-amplitude 4-QAM modulation,
$$\mathcal{E}_{av} = \mathcal{E}_s\, \frac{2(0.5) + 2(1.5)}{4} = \mathcal{E}_s$$
Thus there is no advantage to the two-amplitude QAM modulation in terms of average energy.
25.6.1 General Formula for Union Bound-based Probability of Error
This is given in Rice 6.76:
$$P[\text{symbol error}] \le \frac{1}{M} \sum_{m=0}^{M-1} \sum_{\substack{n=0\\ n \ne m}}^{M-1} P[\text{decide } H_n | H_m]$$
Note the ≤ sign. This means the actual P [symbol error] will be at most this value. It is probably less than this
value. This is a general formula and is not necessarily the best upper bound. What do we mean? If I draw any
function that is always above the actual P [symbol error] formula, I have drawn an upper bound. But I could
draw lots of functions that are upper bounds, some higher than others.
In particular, some of the error events "decide $H_n$ given $H_m$" may be redundant, and need not be included. We will talk about the concept of "neighboring" symbols to symbol $i$. These neighbors are the ones that are necessary to include in the union bound.
Note that the Rice book uses $E_{avg}$ where I use $\mathcal{E}_s$ to denote average symbol energy. The Rice book uses $E_b$ where I use $\mathcal{E}_b$ to denote average bit energy. I prefer $\mathcal{E}_s$ to make clear we are talking about Joules per symbol.
Given this, the pairwise probability of error, $P[\text{decide } H_n | H_m]$, is given by:
$$P[\text{decide } H_n | H_m] = Q\!\left(\sqrt{\frac{d_{m,n}^2}{2N_0}}\right) = Q\!\left(\sqrt{\frac{d_{m,n}^2}{2\mathcal{E}_s} \cdot \frac{\mathcal{E}_s}{N_0}}\right) \qquad (46)$$
where $d_{m,n}$ is the Euclidean distance between the symbols $m$ and $n$ in the signal space diagram. If we use a constant $A$ when describing the signal space vectors (as we usually do), then, since $\mathcal{E}_s$ will be proportional to $A^2$ and $d_{m,n}^2$ will be proportional to $A^2$, the factors of $A$ will cancel out of the expression.
Proof of (46)?
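The general union bound is mechanical to evaluate once the constellation is written down. Below is a rough sketch (mine, not from the notes) that applies (46) to every ordered pair of symbols and averages over the transmitted symbol; the QPSK points and the chosen $\mathcal{E}_b/N_0$ are just an example.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

def union_bound_ser(pts, N0):
    # (1/M) * sum over m, and over n != m, of Q( sqrt(d_mn^2 / (2 N0)) )
    M = len(pts)
    total = 0.0
    for m in range(M):
        for n in range(M):
            if n != m:
                d2 = np.sum((pts[m] - pts[n]) ** 2)
                total += Q(np.sqrt(d2 / (2 * N0)))
    return total / M

# QPSK with symbol energy Es = 1, so Eb = 0.5
pts = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]]) / np.sqrt(2)
EbN0_dB, Eb = 6.0, 0.5
N0 = Eb / 10 ** (EbN0_dB / 10)
print(union_bound_ser(pts, N0))
```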
Lecture 14
Today: (1) QAM Probability of Error, (2) Union Bound and Approximations
Lecture 15
Today: (1) M-ary PSK and QAM Analysis
26 QAM Probability of Error
26.1 Nearest-Neighbor Approximate Probability of Error
As it turns out, the probability of error is often well approximated by the terms in the union bound (Lecture 14, Section 25.6.1) with the smallest $d_{m,n}$. This is because a higher $d_{m,n}$ means a higher argument in the Q-function, which in turn means a lower value of the Q-function. A little extra distance means a much lower value of the Q-function. So approximately,
$$P[\text{symbol error}] \approx \frac{N_{\min}}{M}\, Q\!\left(\sqrt{\frac{d_{\min}^2}{2N_0}}\right)
= \frac{N_{\min}}{M}\, Q\!\left(\sqrt{\frac{d_{\min}^2}{2\mathcal{E}_s} \cdot \frac{\mathcal{E}_s}{N_0}}\right) \qquad (47)$$
where
$$d_{\min} = \min_{m \ne n} d_{m,n}, \qquad m = 0, \ldots, M-1, \quad n = 0, \ldots, M-1,$$
and $N_{\min}$ is the number of pairs of symbols which are separated by distance $d_{\min}$.
26.2 Summary and Examples
We’ve been formulating in class a pattern or step-by-step process for finding the probability of bit or symbol
error in arbitrary M-ary PSK and QAM modulations. These steps include:
1. Calculate the average energy per symbol, $\mathcal{E}_s$, as a function of any amplitude parameters (typically we've been using $A$).
2. Calculate the average energy per bit, $\mathcal{E}_b$, using $\mathcal{E}_b = \mathcal{E}_s / \log_2 M$.
3. Draw decision boundaries. It is not necessary to include redundant boundary information, that is, an
error region which is completely contained in a larger error region.
4. Calculate pair-wise distances between pairs of symbols which contribute to the decision boundaries.
5. Convert distances to be in terms of $\mathcal{E}_b/N_0$, if they are not already, using the answer to step (1.).
6. Calculate the union bound on the probability of symbol error using the formula derived in the previous lecture,
$$P[\text{symbol error}] \le \frac{1}{M} \sum_{m=0}^{M-1} \sum_{\substack{n=0\\ n \ne m}}^{M-1} P[\text{decide } H_n | H_m], \qquad
P[\text{decide } H_n | H_m] = Q\!\left(\sqrt{\frac{d_{m,n}^2}{2N_0}}\right)$$
7. Approximate using the nearest neighbor approximation if needed:
$$P[\text{symbol error}] \approx \frac{N_{\min}}{M}\, Q\!\left(\sqrt{\frac{d_{\min}^2}{2N_0}}\right)$$
where $d_{\min}$ is the shortest pairwise distance in the constellation diagram, and $N_{\min}$ is the number of ordered pairs $(i, j)$ which are that distance apart, $d_{i,j} = d_{\min}$ (don't forget to count both $(i, j)$ and $(j, i)$!).
8. Convert the probability of symbol error into the probability of bit error, if Gray encoding can be assumed:
$$P[\text{error}] \approx \frac{1}{\log_2 M}\, P[\text{symbol error}]$$
Example: Additional QAM / PSK Constellations
Solve for the probability of symbol error in the signal space diagrams in Figure 39. You should calculate:
• An exact expression if one should happen to be available,
• The Union bound,
• The nearest-neighbor approximation.
Figure 39: Signal space diagram for some example (made up) constellation diagrams.
Do this, first, for the probability of symbol error, and then also for the probability of bit error. The plots are in terms of amplitude $A$, but all probability of error expressions should be written in terms of $\mathcal{E}_b/N_0$.
Lecture 16
Today: (1) QAM Examples (Lecture 15) , (2) FSK
27 Frequency Shift Keying
Frequency modulation in general changes the center frequency over time:
x(t) = cos(2πf(t)t).
Now, in frequency shift keying, symbols are selected to be sinusoids with frequency chosen from a set of M different frequencies $\{f_0, f_1, \ldots, f_{M-1}\}$.
27.1 Orthogonal Frequencies
We will want our cosine waves at these M different frequencies to be orthonormal. Consider $f_k = f_c + k\Delta f$, and thus
$$\phi_k(t) = \sqrt{2}\, p(t) \cos(2\pi f_c t + 2\pi k \Delta f\, t) \qquad (48)$$
where $p(t)$ is a pulse shape, which, for example, could be a rectangular pulse:
$$p(t) = \begin{cases} 1/\sqrt{T_s}, & 0 \le t \le T_s\\ 0, & \text{o.w.} \end{cases}$$
• Does this function have unit energy?
• Are these functions orthogonal?
Solution:
$$\begin{aligned}
\langle \phi_k(t), \phi_m(t) \rangle &= \int_{t=-\infty}^{\infty} (2/T_s) \cos(2\pi f_c t + 2\pi k \Delta f\, t) \cos(2\pi f_c t + 2\pi m \Delta f\, t)\, dt\\
&= \frac{1}{T_s} \int_{t=0}^{T_s} \cos(2\pi (k-m) \Delta f\, t)\, dt
+ \frac{1}{T_s} \int_{t=0}^{T_s} \cos(4\pi f_c t + 2\pi (k+m) \Delta f\, t)\, dt\\
&\approx \frac{1}{T_s} \left. \frac{\sin(2\pi (k-m) \Delta f\, t)}{2\pi (k-m) \Delta f} \right|_{t=0}^{T_s}
= \frac{\sin(2\pi (k-m) \Delta f\, T_s)}{2\pi (k-m) \Delta f\, T_s}
\end{aligned}$$
(The second, double-frequency integral is negligible because $f_c$ is large.)
Yes, they are orthogonal if $2\pi(k-m)\Delta f\, T_s$ is a multiple of $\pi$. (They are also approximately orthogonal if $\Delta f$ is really big, but we don't want to waste spectrum.) For general $k \ne m$, this requires that $\Delta f\, T_s = n/2$, i.e.,
$$\Delta f = n\, \frac{1}{2T_s} = n\, \frac{f_{sy}}{2}$$
for integer $n$. (Otherwise, no, they're not.) Thus we need to plug into (48) a spacing $\Delta f = n\frac{1}{2T_s}$ for some integer $n$ in order to have an orthonormal basis. What $n$? We will see that we use either $n = 1$ or $n = 2$.
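A numerical check of this orthogonality condition is below; it is a sketch of mine (not from the notes), with an arbitrary carrier frequency and $\Delta f = 1/(2T_s)$, the $n = 1$ case. The diagonal entries are near 1 and the off-diagonal entries are near 0 (not exactly 0, because of the small double-frequency term).

```python
import numpy as np

Ts = 1e-3
fc = 10e3                       # carrier frequency in Hz (illustrative)
df = 1 / (2 * Ts)               # Delta f = 1/(2 Ts), i.e. n = 1
t = np.linspace(0, Ts, 100_001)
dt = t[1] - t[0]

def phi(k):
    # FSK basis function (48) with a rectangular unit-energy pulse
    return np.sqrt(2 / Ts) * np.cos(2 * np.pi * (fc + k * df) * t)

for k in range(3):
    row = [np.sum(phi(k) * phi(m)) * dt for m in range(3)]
    print(["%6.3f" % v for v in row])
```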
Signal space vectors $\mathbf{a}_i$ are given by
$$\mathbf{a}_0 = [A, 0, \ldots, 0], \quad \mathbf{a}_1 = [0, A, \ldots, 0], \quad \ldots, \quad \mathbf{a}_{M-1} = [0, 0, \ldots, A]$$
What is the energy per symbol? Show that this means that $A = \sqrt{\mathcal{E}_s}$.
For M = 2 and M = 3 these vectors are plotted in Figure 40.
Figure 40: Signal space diagram for M = 2 and M = 3 FSK modulation.
27.2 Transmission of FSK
At the transmitter, FSK can be seen as a switch between M different carrier signals, as shown in Figure 41.
Figure 41: Block diagram of a binary FSK transmitter.
But usually it is generated from a single VCO, as seen in Figure 42.
Def ’n: Voltage Controlled Oscillator (VCO)
A sinusoidal generator with frequency that is linearly proportional to an input voltage.
Note that we don't need to send a square wave input into the VCO. In fact, bandwidth will be lower when we
send less steep transitions, for example, SRRC pulses.
Def'n: Continuous Phase Frequency Shift Keying
FSK with no phase discontinuity between symbols. In other words, the phase of the output signal $\phi_k(t)$ does not change instantaneously at symbol boundaries $iT_s$ for integer $i$, and thus $\phi(t^- + iT_s) = \phi(t^+ + iT_s)$, where $t^-$ and $t^+$ are the limiting times just to the left and to the right of 0, respectively.
There are a variety of flavors of CPFSK, which are beyond the scope of this course.
Figure 42: Block diagram of a binary FSK transmitter.
27.3 Reception of FSK
FSK reception is either phase coherent or phase non-coherent. Here, there are M possible carrier frequencies, so we'd need to know and be synchronized to M different phases $\theta_i$, one for each symbol frequency:
$$\begin{aligned}
&\cos(2\pi f_c t + \theta_0)\\
&\cos(2\pi f_c t + 2\pi \Delta f\, t + \theta_1)\\
&\;\;\vdots\\
&\cos(2\pi f_c t + 2\pi (M-1)\Delta f\, t + \theta_{M-1})
\end{aligned}$$
27.4 Coherent Reception
FSK reception can be done via a correlation receiver, just as we’ve seen for previous modulation types.
Each phase $\theta_k$ is estimated to be $\hat{\theta}_k$ by a separate phase-locked loop (PLL). In one flavor of CPFSK (called Sunde's FSK), the carrier $\cos(2\pi f_k t + \theta_k)$ is sent with the transmitted signal, to aid in demodulation (at the expense of the additional energy). This is the only case where I've heard of using coherent FSK reception.
expense of the additional energy). This is the only case where I’ve heard of using coherent FSK reception.
As M gets high, coherent detection becomes difficult. These M PLLs must operate even though they can
only synchronize when their symbol is sent, 1/M of the time (assuming equally-probable symbols). Also, having
M PLLs is a drawback.
27.5 Non-coherent Reception
Notice that in Figure 40, the sign or phase of the sinusoid is not very important – only one symbol exists in each dimension. In non-coherent reception, we just measure the energy in each frequency.
This is more difficult than it sounds, though – we have a fundamental problem. As we know, for every frequency, there are two orthogonal functions, cosine and sine (see QAM and PSK). Since we will not know the phase of the received signal, we don't know whether or not the energy at frequency $f_k$ correlates highly with the cosine wave or with the sine wave. If we only correlate it with one (the sine wave, for example), and the phase makes the signal the other (a cosine wave), we would get an inner product of zero!
The solution is that we need to correlate the received signal with both a sine and a cosine wave at the frequency $f_k$. This will give us two inner products; let's call them $x_k^I$, using the capital $I$ to denote in-phase, and $x_k^Q$, with $Q$ denoting quadrature.
Figure 43: The energy in a non-coherent FSK receiver at one frequency $f_k$ is calculated by finding its correlation with the cosine wave ($x_k^I$) and sine wave ($x_k^Q$) at the frequency of interest, $f_k$, and calculating the squared length of the vector $[x_k^I, x_k^Q]^T$.
The energy at frequency $f_k$, that is,
$$\mathcal{E}_{f_k} = (x_k^I)^2 + (x_k^Q)^2,$$
is calculated for each frequency $f_k$, $k = 0, 1, \ldots, M-1$. You could prove this, but the optimal detector (maximum likelihood detector) is then to decide upon the frequency with the maximum energy $\mathcal{E}_{f_k}$. Is this a threshold test?
We would need new analysis of the FSK non-coherent detector to find its analytical probability of error.
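To make the energy-detector idea concrete, here is a small simulation sketch (mine, not from the notes): it correlates a noisy FSK tone of unknown phase against cosine and sine at each candidate frequency and picks the largest energy. The tone spacing $\Delta f = 1/T_s$, carrier value, and noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
Ts, fc, M = 1e-3, 20e3, 4
df = 1 / Ts                                       # tone spacing
t = np.arange(0, Ts, 1e-7)
dt = t[1] - t[0]

k_tx = 2
theta = rng.uniform(0, 2 * np.pi)                 # unknown carrier phase
r = np.cos(2 * np.pi * (fc + k_tx * df) * t + theta)
r += rng.normal(scale=0.5, size=t.size)           # additive noise samples

energies = []
for k in range(M):
    c = np.cos(2 * np.pi * (fc + k * df) * t)
    s = np.sin(2 * np.pi * (fc + k * df) * t)
    xI = np.sum(r * c) * dt                       # in-phase correlation  x_k^I
    xQ = np.sum(r * s) * dt                       # quadrature correlation x_k^Q
    energies.append(xI**2 + xQ**2)

print("decided tone:", int(np.argmax(energies)))  # should print k_tx = 2
```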
27.6 Receiver Block Diagrams
The block diagram of the non-coherent FSK receiver is shown in Figures 45 and 46 (copied from the Proakis
& Salehi book). Compare this to the coherent FSK receiver in Figure 44.
Figure 44: (Proakis & Salehi) Phase-coherent demodulation of M-ary FSK signals.
Figure 45: (Proakis & Salehi) Demodulation of M-ary FSK signals for non-coherent detection.
Figure 46: (Proakis & Salehi) Demodulation and square-law detection of binary FSK signals.
27.7 Probability of Error for Coherent Binary FSK
First, let’s look at coherent detection of binary FSK.
1. What is the detection threshold line separating the two decision regions?
2. What is the distance between points in the Binary FSK signal space?
What is the probability of error for coherent binary FSK? It is the same as bipolar PAM, but the symbols are spaced differently (more closely) as a function of $\mathcal{E}_b$. We had that
$$P[\text{error}]_{2\text{-ary}} = Q\!\left(\sqrt{\frac{d_{0,1}^2}{2N_0}}\right)$$
Now, the spacing between symbols has been reduced by a factor of $\sqrt{2}/2$ compared to bipolar PAM, to $d_{0,1} = \sqrt{2\mathcal{E}_b}$. So
$$P[\text{error}]_{2\text{-Co-FSK}} = Q\!\left(\sqrt{\frac{\mathcal{E}_b}{N_0}}\right)$$
For the same probability of bit error, binary FSK is about 1.5 dB better than OOK (requires 1.5 dB less energy
per bit), but 1.5 dB worse than bipolar PAM (requires 1.5 dB more energy per bit).
27.8 Probability of Error for Noncoherent Binary FSK
The energy detector shown in Figure 46 uses the energy in each frequency and selects the frequency with
maximum energy.
This energy is denoted $r_m^2$ in Figure 46 for frequency $m$ and is
$$r_m^2 = r_{mc}^2 + r_{ms}^2$$
This energy measure is a statistic which measures how much energy was in the signal at frequency $f_m$. The 'envelope' is a term used for the square root of the energy, so $r_m$ is termed the envelope.
Question: What will $r_m^2$ equal when the noise is very small?
As it turns out, given the non-coherent receiver and $r_{mc}$ and $r_{ms}$, the envelope $r_m$ is an optimum (sufficient) statistic to use to decide between $s_1 \ldots s_M$.
What do they do to prove this in Proakis & Salehi? They prove it for binary non-coherent FSK. It takes
quite a bit of 5510 to do this proof.
1. Define the received vector $\mathbf{r}$ as a length-4 vector of the correlations of $r(t)$ with the sine and cosine at each frequency $f_1, f_2$.
2. They formulate the conditional densities $f_{\mathbf{r}|H_i}(\mathbf{r}|H_i)$. Note that these depend on $\theta_m$, which is assumed to be uniform between 0 and $2\pi$, and independent of the noise:
$$f_{\mathbf{r}|H_i}(\mathbf{r}|H_i) = \int_0^{2\pi} f_{\mathbf{r},\theta_m|H_i}(\mathbf{r}, \phi|H_i)\, d\phi
= \int_0^{2\pi} f_{\mathbf{r}|\theta_m,H_i}(\mathbf{r}|\phi, H_i)\, f_{\theta_m|H_i}(\phi|H_i)\, d\phi \qquad (49)$$
Note that $f_{\mathbf{r}|\theta_m,H_0}(\mathbf{r}|\phi, H_0)$ is a 2-D Gaussian random vector with i.i.d. components.
3. They formulate the joint probabilities $f_{\mathbf{r} \cap H_0}(\mathbf{r} \cap H_0)$ and $f_{\mathbf{r} \cap H_1}(\mathbf{r} \cap H_1)$.
4. Where the joint probability $f_{\mathbf{r} \cap H_0}(\mathbf{r} \cap H_0)$ is greater than $f_{\mathbf{r} \cap H_1}(\mathbf{r} \cap H_1)$, the receiver decides $H_0$. Otherwise, it decides $H_1$.
5. The decisions in this last step, after manipulation of the pdfs, are shown to reduce to this decision (given that $P[H_0] = P[H_1]$):
$$\sqrt{r_{1c}^2 + r_{1s}^2} \; \mathop{\gtrless}_{H_1}^{H_0} \; \sqrt{r_{2c}^2 + r_{2s}^2}$$
The “envelope detector” can equally well be called the “energy detector”, and it often is.
The above proof is simply FYI, and is presented since it does not appear in the Rice book.
An exact expression for the probability of error can be derived as well. The proof is in Proakis & Salehi, Section 7.6.9, page 430, which is posted on WebCT. The expression for the probability of error in binary non-coherent FSK is given by
$$P[\text{error}]_{2\text{-NC-FSK}} = \frac{1}{2} \exp\!\left(-\frac{\mathcal{E}_b}{2N_0}\right) \qquad (50)$$
The expressions for probability of error in binary FSK (both coherent and non-coherent) are important, and
you should make note of them. You will use them to be able to design communication systems that use FSK.
Lecture 17
Today: (1) FSK Error Probabilities (2) OFDM
27.9 FSK Error Probabilities, Part 2
27.9.1 M-ary Non-Coherent FSK
For M-ary non-coherent FSK, the derivation in the Proakis & Salehi book, Section 7.6.9, shows that
$$P[\text{symbol error}] = \sum_{n=1}^{M-1} (-1)^{n+1} \binom{M-1}{n} \frac{1}{n+1}\, e^{-\log_2 M\, \frac{n}{n+1}\, \frac{E_b}{N_0}}$$
and
$$P[\text{error}]_{M\text{-nc-FSK}} = \frac{M/2}{M-1}\, P[\text{symbol error}]$$
See Figure 47.
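These two expressions are straightforward to evaluate numerically; the sketch below (mine, not from the notes) prints the bit error probability at one example $E_b/N_0$ for several M.

```python
from math import comb, exp, log2

def Pe_symbol_noncoh_fsk(M, EbN0_dB):
    # alternating-sum expression above (Proakis & Salehi, Sec. 7.6.9)
    g = 10 ** (EbN0_dB / 10)
    return sum((-1) ** (n + 1) * comb(M - 1, n) / (n + 1)
               * exp(-log2(M) * n / (n + 1) * g) for n in range(1, M))

def Pe_bit_noncoh_fsk(M, EbN0_dB):
    return (M / 2) / (M - 1) * Pe_symbol_noncoh_fsk(M, EbN0_dB)

for M in (2, 4, 8, 16, 32):
    print(M, Pe_bit_noncoh_fsk(M, 8.0))
```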
Proof Summary: Our non-coherent receiver finds the energy in each frequency. These energy values no longer
have a Gaussian distribution (due to the squaring of the amplitudes in the energy calculation). They instead are
either Ricean (for the transmitted frequency) or Rayleigh distributed (for the “other” M −1 frequencies). The
probability that the correct frequency is selected is the probability that the Ricean random variable is larger
than all of the other random variables measured at the other frequencies.
Example: Probability of Error for Non-coherent M = 2 case
Use the above expressions to find the P [symbol error] and P [error] for binary non-coherent FSK.
Solution:
$$P[\text{symbol error}] = P[\text{bit error}] = \frac{1}{2}\, e^{-\frac{1}{2}\frac{E_b}{N_0}}$$
Figure 47: Probability of bit error for non-coherent reception of M-ary FSK, for M = 2, 4, 8, 16, 32.
27.9.2 Summary
The key insight is that the probability of error decreases for increasing M. As $M \to \infty$, the probability of error goes to zero, for the case when $\frac{E_b}{N_0} > -1.6$ dB. Remember this for later – it is related to the Shannon–Hartley theorem on the limit of reliable communication.
27.10 Bandwidth of FSK
Carson's rule is used to calculate the bandwidth of FM signals. For M-ary FSK, it tells us that the approximate bandwidth is
$$B_T = (M-1)\Delta f + 2B$$
where $B$ is the one-sided bandwidth of the digital baseband signal. For the null-to-null bandwidth of raised-cosine pulse shaping, $B = (1+\alpha)/T_s$. Note for square-wave pulses, $B = 1/T_s$.
28 Frequency Multiplexing
Frequency multiplexing is the division of the total bandwidth (call it $B_T$) into many smaller frequency bands, and sending a lower bit rate on each one.
Specifically, if you divide your $B_T$ bandwidth into K subchannels, you now have $B_T/K$ bandwidth on each subchannel. Note that in the multiplexed system, each subchannel has a symbol period of $T_s K$, longer by a factor of K, so that each subchannel has a narrower bandwidth. In HW 7, you show that the total bit rate is the same in the multiplexed system, so you don't lose anything by this division. Furthermore, your transmission energy is the same. The energy would be divided by K in each subchannel, so the sum of the energy is constant.
For each band, we might send a PSK or PAM or QAM signal on that band. Our choice is arbitrary.
28.1 Frequency Selective Fading
Frequency selectivity is primarily a problem in wireless channels. But frequency-dependent gains are also experienced in DSL, because phone lines were not designed for higher-frequency signals. For example, consider Figure 48, which shows an example frequency selective fading pattern, $10 \log_{10} |H(f)|^2$, where $H(f)$ is an
Figure 48: An example frequency selective fading pattern. The whole bandwidth is divided into K subchannels,
and each subchannel’s channel filter is mostly constant.
example wireless channel. This $H(f)$ might be experienced in an average outdoor channel. (The $f$ is relative, i.e., take the actual frequency $\tilde{f} = f + f_c$ for some center frequency $f_c$.)
• You can see that some channels experience severe attenuation, while most experience little attenuation. Some channels experience gain.
• For a stationary transmitter and receiver and stationary $f$, the channel stays constant over time. If you put down the transmitter and receiver and have them operate at a particular $f^*$, then the loss $10 \log_{10} |H(f^*)|^2$ will not change throughout the communication. If the channel loss is too great, then the error rate will be too high.
Problems:
1. Wideband: The bandwidth is used for one channel. The H(f) acts as a filter, which introduces ISI.
2. Narrowband: Part of the bandwidth is used for one narrow channel, at a lower bit rate. The power is increased by a fade margin which makes the $\frac{E_b}{N_0}$ high enough for low-BER demodulation except in the most extreme fading situations.
When designing for frequency selective fading, designers may use a wideband modulation method which is
robust to frequency selective fading. For example,
• Frequency multiplexing methods (including OFDM),
• Direct-sequence spread spectrum (DS-SS), or
• Frequency-hopping spread spectrum (FH-SS).
We’re going to mention frequency multiplexing.
28.2 Benefits of Frequency Multiplexing
Now, bit errors are not an ‘all or nothing’ game. In frequency multiplexing, there are K parallel bitstreams,
each of rate R/K, where R is the total bit rate. As a first order approximation, a subchannel either experiences
a high SNR and makes no errors; or is in a severe fade, has a very low SNR, and experiences a BER of 0.5 (the
worst bit error rate!). If β is the probability that a subchannel experiences a severe fade, the overall probability
of error will be 0.5β.
Compare this to a single narrowband channel, which has probability β that it will have a 0.5 probability of
error. Similarly, a wideband system with very high ISI might be completely unable to demodulate the received
signal.
Frequency multiplexing is typically combined with channel coding designed to correct a small percentage of
bit errors.
28.3 OFDM as an extension of FSK
This is section 5.5 in the Rice book.
In FSK, we use a single basis function at each of several different frequencies. In QAM, we use two basis functions at the same frequency. OFDM is the combination:
$$\phi_{k,I}(t) = \begin{cases} \sqrt{2/T_s}\, \cos(2\pi f_c t + 2\pi k \Delta f\, t), & 0 \le t \le T_s\\ 0, & \text{o.w.} \end{cases}$$
$$\phi_{k,Q}(t) = \begin{cases} \sqrt{2/T_s}\, \sin(2\pi f_c t + 2\pi k \Delta f\, t), & 0 \le t \le T_s\\ 0, & \text{o.w.} \end{cases}$$
for $k = 0, 1, \ldots, M-1$, where $\Delta f = \frac{1}{2T_s}$. These are all orthogonal functions! We can transmit much more information than is possible in M-ary FSK. (Note we have 2M basis functions here!)
The signal on subchannel k might be represented as:
$$x_k(t) = \sqrt{2/T_s} \left[ a_{k,I}(t) \cos(2\pi f_k t) + a_{k,Q}(t) \sin(2\pi f_k t) \right]$$
The complex baseband signal of the sum of all K signals might then be represented as
$$x_l(t) = \sqrt{2/T_s}\, \operatorname{Re}\!\left\{ \sum_{k=1}^{K} \left( a_{k,I}(t) + j a_{k,Q}(t) \right) e^{j 2\pi k \Delta f t} \right\}
= \sqrt{2/T_s}\, \operatorname{Re}\!\left\{ \sum_{k=1}^{K} A_k(t)\, e^{j 2\pi k \Delta f t} \right\} \qquad (51)$$
where $A_k(t) = a_{k,I}(t) + j a_{k,Q}(t)$. Does this look like an inverse discrete Fourier transform? If yes, then you can see why it is possible to use an IFFT and FFT to generate the complex baseband signal.
1. FFT implementation: There is a particular implementation of the transmitter and receiver that uses FFT/IFFT operations. This avoids having K independent transmitter chains and receiver chains. The FFT implementation (and the speed and ease of implementation of the FFT in hardware) is why OFDM is popular. A minimal sketch of this idea follows below.
Since the K carriers are orthogonal, the signal is like K-ary FSK. But, rather than transmitting on one of
the K carriers at a given time (like FSK) we transmit information in parallel on all K channels simultaneously.
An example state space diagram for K = 3 and PAM on each channel is shown in Figure 49.
Figure 49: Signal space diagram for K = 3 subchannel OFDM with 4-PAM on each channel.
Example: 802.11a
IEEE 802.11a uses OFDM with 52 subcarriers. Four of the subcarriers are reserved for pilot tones, so effectively
48 subcarriers are used for data. Each data subcarrier can be modulated in different ways. One example is to
use 16 square QAM on each subcarrier (which is 4 bits per symbol per subcarrier). The symbol rate in 802.11a
is 250k/sec. Thus the bit rate is
$$250 \times 10^3\ \frac{\text{OFDM symbols}}{\text{sec}} \times 48\ \frac{\text{subcarriers}}{\text{OFDM symbol}} \times 4\ \frac{\text{coded bits}}{\text{subcarrier}} = 48\ \frac{\text{Mbits}}{\text{sec}}$$
Lecture 18
Today: Comparison of Modulation Methods
29 Comparison of Modulation Methods
This section first presents some miscellaneous information which is important to real digital communications
systems but doesn’t fit nicely into other lectures.
29.1 Differential Encoding for BPSK
Def ’n: Coherent Reception
The reception of a signal when its carrier phase is explicitly determined and used for demodulation.
For coherent reception of PSK, we will always need some kind of phase synchronization in BPSK. Typically,
this means transmitting a training sequence.
For non-coherent reception of PSK, we use differential encoding (at the transmitter) and decoding (at the
receiver).
29.1.1 DPSK Transmitter
Now, consider the bit sequence $\{b_n\}$, where $b_n$ is the $n$th bit that we want to send. The sequence $b_n$ is a sequence of 0's and 1's. How do we decide which phase to send? Prior to this, we've said: send $\mathbf{a}_0$ if $b_n = 0$, and send $\mathbf{a}_1$ if $b_n = 1$.
Instead of setting $k$ for $\mathbf{a}_k$ only as a function of $b_n$, in differential encoding we also include $k_{n-1}$. Now,
$$k_n = \begin{cases} k_{n-1}, & b_n = 0\\ 1 - k_{n-1}, & b_n = 1 \end{cases}$$
Note that $1 - k_{n-1}$ is the complement or negation of $k_{n-1}$: if $k_{n-1} = 1$ then $1 - k_{n-1} = 0$; if $k_{n-1} = 0$ then $1 - k_{n-1} = 1$. Basically, for differential BPSK, a switch in the angle of the signal space vector from $0^\circ$ to $180^\circ$ or vice versa indicates a bit 1, while staying at the same angle indicates a bit 0.
Note that we just have to agree on the "zero" phase. Typically $k_0 = 0$.
Example: Differential encoding
Let $b = [1, 0, 1, 0, 1, 1, 1, 0, 0]$. Assume $b_0 = 0$. What symbols $k = [k_0, \ldots, k_8]^T$ will be sent? Solution:
$$k = [1, 1, 0, 0, 1, 0, 1, 1, 1]^T$$
These values of $k_n$ correspond to a symbol stream with phases:
$$\angle \mathbf{a}_k = [\pi, \pi, 0, 0, \pi, 0, \pi, \pi, \pi]^T$$
29.1.2 DPSK Receiver
Now, at the receiver, we find $b_n$ by comparing the phase of $x_n$ to the phase of $x_{n-1}$. What our receiver does is measure the statistic
$$\operatorname{Re}\{x_n x_{n-1}^*\} = |x_n||x_{n-1}| \cos(\angle x_n - \angle x_{n-1})$$
If it is less than zero, decide $b_n = 1$; if it is greater than zero, decide $b_n = 0$.
Example: Differential decoding
1. Assuming no phase shift in the above encoding example, show that the receiver will decode the original bitstream with differential decoding. Solution: Assuming the reference phase is 0,
$$\hat{b}_n = [1, 0, 1, 0, 1, 1, 1, 0, 0]^T.$$
2. Now, assume that all symbols are shifted by $\pi$ radians and we receive
$$\angle x' = [0, 0, \pi, \pi, 0, \pi, 0, 0, 0].$$
What will be decoded at the receiver? Solution:
$$\hat{b}_n = [0, 0, 1, 0, 1, 1, 1, 0, 0].$$
Rotating all symbols by π radians only causes one bit error.
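The encode and decode rules above are only a few lines of code. The sketch below (mine, not from the notes) reproduces this example, including the π-rotation case; the function names are illustrative.

```python
import numpy as np

def dpsk_encode(bits, k0=0):
    # k_n = k_{n-1} if b_n = 0, else 1 - k_{n-1}; k0 is the agreed reference state
    k, out = k0, []
    for b in bits:
        k = k if b == 0 else 1 - k
        out.append(k)
    return out

def dpsk_decode(phases, ref_phase=0.0):
    # decide b_n = 1 when Re{x_n x*_{n-1}} < 0, i.e. the phase changed by pi
    prev, bits = ref_phase, []
    for ph in phases:
        bits.append(1 if np.cos(ph - prev) < 0 else 0)
        prev = ph
    return bits

b = [1, 0, 1, 0, 1, 1, 1, 0, 0]
k = dpsk_encode(b)                                 # [1, 1, 0, 0, 1, 0, 1, 1, 1]
phases = [np.pi * ki for ki in k]                  # angle of a_k: 0 or pi
print(dpsk_decode(phases))                         # recovers b exactly
print(dpsk_decode([p + np.pi for p in phases]))    # pi rotation: only the first bit is wrong
```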
29.1.3 Probability of Bit Error for DPSK
The probability of bit error in DPSK is slightly worse than that for BPSK:
$$P[\text{error}] = \frac{1}{2} \exp\!\left(-\frac{\mathcal{E}_b}{N_0}\right)$$
For a constant probability of error, DPSK requires about 1 dB more $\frac{E_b}{N_0}$ than BPSK, which has probability of bit error $Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right)$. Both are plotted in Figure 50.
Figure 50: Comparison of probability of bit error for BPSK and Differential BPSK.
29.2 Points for Comparison
• Linear amplifiers (transmitter complexity)
• Receiver complexity
• Fidelity ($P[\text{error}]$) vs. $\frac{E_b}{N_0}$
• Bandwidth efficiency
η          α = 0     α = 0.5    α = 1
BPSK       1.0       0.67       0.5
QPSK       2.0       1.33       1.0
16-QAM     4.0       2.67       2.0
64-QAM     6.0       4.0        3.0
Table 2: Bandwidth efficiency of PSK and QAM modulation methods using raised cosine filtering, as a function of α.
29.3 Bandwidth Efficiency
We’ve talked about measuring data rate in bits per second. We’ve also talked about Hertz, i.e., the quantity of
spectrum our signal will use. Typically, we can scale a system, to increase the bit rate by decreasing the symbol
period, and correspondingly increase the bandwidth. This relationship is typically linearly proportional.
Def'n: Bandwidth efficiency
The bandwidth efficiency, typically denoted $\eta$, is the ratio of bits per second to bandwidth:
$$\eta = R_b / B_T$$
Bandwidth efficiency depends on the definition of “bandwidth”. Since it is usually used for comparative pur-
poses, we just make sure we use the same definition of bandwidth throughout a comparison.
The key figure of merit: bits per second / Hertz, i.e., bps/Hz.
29.3.1 PSK, PAM and QAM
In these three modulation methods, the bandwidth is largely determined by the pulse shape. For root raised
cosine filtering, the null-null bandwidth is 1 +α times the bandwidth of the case when we use pure sinc pulses.
The transmission bandwidth (for a bandpass signal) is
$$B_T = \frac{1+\alpha}{T_s}$$
Since $T_s$ is seconds per symbol, we divide by $\log_2 M$ bits per symbol to get $T_b = T_s / \log_2 M$ seconds per bit, or
$$B_T = \frac{(1+\alpha) R_b}{\log_2 M}$$
where $R_b = 1/T_b$ is the bit rate. Bandwidth efficiency is then
$$\eta = R_b / B_T = \frac{\log_2 M}{1 + \alpha}$$
See Table 2 for some numerical examples.
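The entries of Table 2 come straight from this formula; the short sketch below (mine, not from the notes) reproduces them.

```python
from math import log2

def eta_psk_qam(M, alpha):
    # bandwidth efficiency eta = log2(M) / (1 + alpha) for PSK/PAM/QAM with raised-cosine pulses
    return log2(M) / (1 + alpha)

for name, M in [("BPSK", 2), ("QPSK", 4), ("16-QAM", 16), ("64-QAM", 64)]:
    print(name, [round(eta_psk_qam(M, a), 2) for a in (0.0, 0.5, 1.0)])
```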
29.3.2 FSK
We've said that the bandwidth of FSK is
$$B_T = (M-1)\Delta f + 2B$$
where $B$ is the one-sided bandwidth of the digital baseband signal. For the null-to-null bandwidth of raised-cosine pulse shaping, $2B = (1+\alpha)/T_s$. So,
$$B_T = (M-1)\Delta f + (1+\alpha)/T_s = \frac{R_b}{\log_2 M}\left\{ (M-1)\Delta f\, T_s + (1+\alpha) \right\}$$
since $1/T_s = R_b/\log_2 M$. Then
$$\eta = R_b / B_T = \frac{\log_2 M}{(M-1)\Delta f\, T_s + (1+\alpha)}$$
If $\Delta f = 1/T_s$ (required for non-coherent reception),
$$\eta = \frac{\log_2 M}{M + \alpha}$$
29.4 Bandwidth Efficiency vs. E_b/N_0
For each modulation format, we have quantities of interest:
• Bandwidth efficiency, and
• Energy per bit (E_b/N_0) requirements to achieve a given probability of error.
Example: Bandwidth efficiency vs. E_b/N_0 for M = 8 PSK
What is the required E_b/N_0 for 8-PSK to achieve a probability of bit error of 10^-6? What is the bandwidth
efficiency of 8-PSK when using 50% excess bandwidth?
Solution: Given in Rice Figure 6.3.5 to be about 14 dB; and η = log_2 8 / (1 + 0.5) = 2 bps/Hz.
We can plot these (required E_b/N_0, bandwidth efficiency) pairs. See Rice Figure 6.3.6.
29.5 Fidelity (P[error]) vs. E_b/N_0
Main modulations for which we have evaluated probability of error vs. E_b/N_0:
1. M-ary PAM, including Binary PAM or BPSK, OOK, DPSK.
2. M-ary PSK, including QPSK.
3. Square QAM
4. Non-square QAM constellations
5. FSK, M-ary FSK
In this part of the lecture we will break up into groups and derive: (1) the probability of error and (2)
probability of symbol error formulas for these types of modulations.
See also Rice pages 325-331.
Name            P[symbol error]                                                               P[error]
BPSK            = Q(√(2 E_b/N_0))                                                             same
OOK             = Q(√(E_b/N_0))                                                               same
DPSK            = (1/2) exp(−E_b/N_0)                                                         same
M-PAM           = [2(M−1)/M] Q(√((6 log_2 M)/(M²−1) · E_b/N_0))                               ≈ (1/log_2 M) P[symbol error]
QPSK            = Q(√(2 E_b/N_0))
M-PSK           ≤ 2 Q(√(2 (log_2 M) sin²(π/M) · E_b/N_0))                                     ≈ (1/log_2 M) P[symbol error]
Square M-QAM    ≈ (4/log_2 M) [(√M − 1)/√M] Q(√((3 log_2 M)/(M−1) · E_b/N_0))
2-non-co-FSK    = (1/2) exp(−E_b/(2N_0))                                                      same
M-non-co-FSK    = Σ_{n=1}^{M−1} (M−1 choose n) [(−1)^{n+1}/(n+1)] exp(−(n log_2 M)/(n+1) · E_b/N_0)    = [M/2]/(M−1) P[symbol error]
2-co-FSK        = Q(√(E_b/N_0))                                                               same
M-co-FSK        ≤ (M − 1) Q(√(log_2 M · E_b/N_0))                                             = [M/2]/(M−1) P[symbol error]
Table 3: Summary of probability of bit and symbol error formulas for several modulations.
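A short Matlab sketch of how a few of the Table 3 bit error probability formulas can be evaluated and plotted; Q is implemented with base-Matlab erfc, and the choice of which formulas to plot is mine.

% Evaluate a few bit error probability formulas from Table 3 vs. Eb/N0 in dB.
Q       = @(x) 0.5*erfc(x/sqrt(2));          % Q function via erfc
EbN0dB  = 0:0.5:14;
EbN0    = 10.^(EbN0dB/10);
Pb_bpsk = Q(sqrt(2*EbN0));                   % BPSK
Pb_dpsk = 0.5*exp(-EbN0);                    % DPSK
M       = 16;                                % square 16-QAM
Pb_qam  = (4/log2(M))*(sqrt(M)-1)/sqrt(M) .* Q(sqrt(3*log2(M)/(M-1).*EbN0));
semilogy(EbN0dB, Pb_bpsk, EbN0dB, Pb_dpsk, EbN0dB, Pb_qam); grid on;
xlabel('E_b/N_0, dB'); ylabel('Probability of bit error');
legend('BPSK','DBPSK','16-QAM');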
29.6 Transmitter Complexity
What makes a transmitter more complicated?
• Need for a linear power amplifier
• Higher clock speed
• Higher transmit powers
• Directional antennas
29.6.1 Linear / Non-linear Amplifiers
An amplifier uses DC power to take an input signal and increase its amplitude at the output. If P_DC is the
input power (e.g., from a battery) to the amplifier, and P_out is the output signal power, the power efficiency is
rated as η_P = P_out / P_DC.
Amplifiers are separated into ‘classes’ which describe their configuration (circuits), and as a result of their
configuration, their linearity and power efficiency.
• Class A: linear amplifiers with maximum power efficiency of 50%. Output signal is a scaled up version of
the input signal. Power is dissipated at all times.
• Class B: linear amplifiers turn on for half of a cycle (conduction angle of 180°) with maximum power
efficiency of 78.5%. Two in parallel are used to amplify a sinusoidal signal, one for the positive part and
one for the negative part.
• Class AB: Like class B, but each amplifier stays on slightly longer to reduce the “dead zone” at zero.
That is, the conduction angle is slightly higher than 180°.
• Class C: A class C amplifier is closer to a switch than an amplifier. This generates high distortion, but
then is band-pass filtered or tuned to the center frequency to force out spurious frequencies. Class C
non-linear amplifiers have power efficiencies around 90%. Can only amplify signals with a nearly constant
envelope.
In order to double the power efficiency, battery-powered transmitters are often willing to use Class C
amplifiers. They can do this if their output signal has constant envelope. This means that, if you look at
the constellation diagram, you’ll see a circle. The signal must never go through the origin (envelope of zero) or
even near the origin.
29.7 Offset QPSK
Figure 51: Constellation Diagram for (a) QPSK (without offset) and (b) O-QPSK.
For QPSK, we wrote the modulated signal as
s(t) = √2 p(t) cos(ω_0 t + ∠a(t))
where ∠a(t) is the angle of the symbol chosen to be sent at time t. It is in a discrete set,
∠a(t) ∈ {π/4, 3π/4, 5π/4, 7π/4}
and p(t) is the pulse shape. We could have also written s(t) as
s(t) = √2 p(t) [a_0(t) cos(ω_0 t) + a_1(t) sin(ω_0 t)]
The problem is that when the phase changes 180 degrees, the signal s(t) will go through zero, which precludes
use of a class C amplifier. See Figure 52(b) and Figure 51 (a) to see this graphically.
For offset QPSK (OQPSK), we delay the quadrature component T_s/2 with respect to the in-phase component. Rewriting s(t),
s(t) = √2 p(t) a_0(t) cos(ω_0 t) + √2 p(t − T_s/2) a_1(t − T_s/2) sin(ω_0 (t − T_s/2))
At the receiver, we just need to delay the sampling on the quadrature branch by half a symbol period with respect to
the in-phase signal. The new transmitted signal takes the same bandwidth and average power, and has the same
E_b/N_0 vs. probability of bit error performance. However, the envelope |s(t)| is largely constant. See Figure 52
for a comparison of QPSK and OQPSK.
Figure 52: Matlab simulation of (a-b) QPSK and (c-d) O-QPSK, showing the (d) largely constant envelope of
OQPSK, compared to (b) that for QPSK. Panels (a) and (c) show the real and imaginary parts vs. time; (b) and
(d) show the complex baseband trajectory (imaginary vs. real part).
29.8 Receiver Complexity
What makes a receiver more complicated?
• Synchronization (carrier, phase, timing)
• Multiple parallel receiver chains
Lecture 19
Today: (1) Link Budgets and System Design
30 Link Budgets and System Design
As a digital communication system designer, your mission (if you choose to accept it) is to achieve:
1. High data rate
2. High fidelity (low bit error rate)
3. Low transmit power
4. Low bandwidth
5. Low transmitter/receiver complexity
But this is truly a mission impossible, because you can’t have everything at the same time. So the system design
depends on what the desired system really needs and what are the acceptable tradeoffs. Typically some subset
of requirements are given; for example, given the bandwidth limits, the received signal power and noise power,
and bit error rate limit, what data rate can be achieved? Using which modulation?
To answer this question, it is just a matter of setting some system variables (at the given limits) and
determining what that means about the values of other system variables. See Figure 53. For example, if I was
given the bandwidth limits and C/N_0 ratio, I'd be able to determine the probability of error for several different
modulation types.
In this lecture, we discuss this procedure, and define each of the variables we see in Figure 53, and to what
they are related. This lecture is about system design, and it cuts across ECE classes; in particular circuits,
radio propagation, antennas, optics, and the material from this class. You are expected to apply what you have
learned in other classes (or learn the functional relationships that we describe here).
Figure 53: Relationships between important system design variables (rectangular boxes). Functional relation-
ships between variables are given as circles – if the relationship is a single operator (e.g., division), the operator
is drawn in, otherwise it is left as an unnamed function f(). For example, C/N_0 is shown to have a divide-by
relationship with C and N_0. The effect of the choice of modulation impacts several functional relationships,
e.g., the relationship between probability of bit error and E_b/N_0, which is drawn as a dotted line.
30.1 Link Budgets Given C/N_0
The received power is denoted C; it has units of Watts. What is C/N_0? It is received power divided by noise
energy. It is an odd quantity, but it summarizes what we need to know about the signal and the noise for the
purposes of system design.
• We often denote the received power at the receiver as P_R, but in the Rice book it is typically denoted C.
• We know the probability of bit error is typically written as a function of E_b/N_0. The noise energy is N_0. The
bit energy is E_b. We can write E_b = C T_b, since energy is power times time. To separate the effect of T_b,
we often denote:
E_b/N_0 = (C/N_0) T_b = (C/N_0) / R_b
where R_b = 1/T_b is the bit rate. In other words, C/N_0 = (E_b/N_0) R_b. What are the units of C/N_0? Answer:
Hz, i.e., 1/s.
• Note that people often report C/N_0 in dB Hz, which is 10 log_10 (C/N_0).
• Be careful of Bytes per sec vs. bits per sec. Commonly, CS people use Bps (kBps or MBps) when describing
data rate. For example, if it takes 5 seconds to transfer a 1 MB file, then software often reports that the
data rate is 1/5 = 0.2 MBps or 200 kBps. But the bit rate is 8/5 Mbps or 1.6 × 10^6 bps.
Given C/N_0, we can now relate bit error rate, modulation, bit rate, and bandwidth.
Note: We typically use Q() and Q^{-1}() to relate BER and E_b/N_0 in each direction. While you have Matlab, this
is easy to calculate. If you can program it into your calculator, great. Otherwise, it's really not a big deal to
pull it off of a chart or table. For your convenience, the following table/plot of Q^{-1}(x) will appear on Exam
2. I am not picky about getting lots of correct decimal places.
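If you do have Matlab, the Q function and its inverse need no toolbox; a sketch defining each in terms of base-Matlab erfc/erfcinv reproduces the table below.

% Q function and its inverse via base-Matlab erfc and erfcinv.
Q    = @(x) 0.5*erfc(x/sqrt(2));
Qinv = @(y) sqrt(2)*erfcinv(2*y);
Qinv(1e-6)      % approximately 4.7534
Qinv(2e-2)      % approximately 2.0537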
TABLE OF THE Q^{-1}() FUNCTION:
Q^{-1}(1 × 10^-6) = 4.7534      Q^{-1}(1 × 10^-4) = 3.719       Q^{-1}(1 × 10^-2) = 2.3263
Q^{-1}(1.5 × 10^-6) = 4.6708    Q^{-1}(1.5 × 10^-4) = 3.6153    Q^{-1}(1.5 × 10^-2) = 2.1701
Q^{-1}(2 × 10^-6) = 4.6114      Q^{-1}(2 × 10^-4) = 3.5401      Q^{-1}(2 × 10^-2) = 2.0537
Q^{-1}(3 × 10^-6) = 4.5264      Q^{-1}(3 × 10^-4) = 3.4316      Q^{-1}(3 × 10^-2) = 1.8808
Q^{-1}(4 × 10^-6) = 4.4652      Q^{-1}(4 × 10^-4) = 3.3528      Q^{-1}(4 × 10^-2) = 1.7507
Q^{-1}(5 × 10^-6) = 4.4172      Q^{-1}(5 × 10^-4) = 3.2905      Q^{-1}(5 × 10^-2) = 1.6449
Q^{-1}(6 × 10^-6) = 4.3776      Q^{-1}(6 × 10^-4) = 3.2389      Q^{-1}(6 × 10^-2) = 1.5548
Q^{-1}(7 × 10^-6) = 4.3439      Q^{-1}(7 × 10^-4) = 3.1947      Q^{-1}(7 × 10^-2) = 1.4758
Q^{-1}(8 × 10^-6) = 4.3145      Q^{-1}(8 × 10^-4) = 3.1559      Q^{-1}(8 × 10^-2) = 1.4051
Q^{-1}(9 × 10^-6) = 4.2884      Q^{-1}(9 × 10^-4) = 3.1214      Q^{-1}(9 × 10^-2) = 1.3408
Q^{-1}(1 × 10^-5) = 4.2649      Q^{-1}(1 × 10^-3) = 3.0902      Q^{-1}(1 × 10^-1) = 1.2816
Q^{-1}(1.5 × 10^-5) = 4.1735    Q^{-1}(1.5 × 10^-3) = 2.9677    Q^{-1}(1.5 × 10^-1) = 1.0364
Q^{-1}(2 × 10^-5) = 4.1075      Q^{-1}(2 × 10^-3) = 2.8782      Q^{-1}(2 × 10^-1) = 0.84162
Q^{-1}(3 × 10^-5) = 4.0128      Q^{-1}(3 × 10^-3) = 2.7478      Q^{-1}(3 × 10^-1) = 0.5244
Q^{-1}(4 × 10^-5) = 3.9444      Q^{-1}(4 × 10^-3) = 2.6521      Q^{-1}(4 × 10^-1) = 0.25335
Q^{-1}(5 × 10^-5) = 3.8906      Q^{-1}(5 × 10^-3) = 2.5758      Q^{-1}(5 × 10^-1) = 0
Q^{-1}(6 × 10^-5) = 3.8461      Q^{-1}(6 × 10^-3) = 2.5121
Q^{-1}(7 × 10^-5) = 3.8082      Q^{-1}(7 × 10^-3) = 2.4573
Q^{-1}(8 × 10^-5) = 3.775       Q^{-1}(8 × 10^-3) = 2.4089
Q^{-1}(9 × 10^-5) = 3.7455      Q^{-1}(9 × 10^-3) = 2.3656
[Plot: the inverse Q function, Q^{-1}(x), vs. value x, for x from 10^-7 to 10^0.]
30.2 Power and Energy Limited Channels
Assume that C/N_0, the maximum bandwidth, and the maximum BER are all given. Sometimes power is the
limiting factor in determining the maximum achievable bit rate. Such links (or channels) are called power limited
channels. Sometimes bandwidth is the limiting factor in determining the maximum achievable bit rate. In this
case, the link (or channel) is called a bandwidth limited channel. You just need to try to solve the problem and
see which one limits your system.
Here is a step-by-step version of what you might need to do in this case:
Method A: Start with the power-limited assumption:
1. Use the probability of error constraint to determine the E_b/N_0 constraint, given the appropriate probability
of error formula for the modulation.
2. Given the C/N_0 constraint and the E_b/N_0 constraint, find the maximum bit rate. Note that
R_b = 1/T_b = (C/N_0) / (E_b/N_0),
but be sure to express both in linear units.
3. Given a maximum bit rate, calculate the maximum symbol rate R_s = R_b / log_2 M and then compute the required
bandwidth using the appropriate bandwidth formula.
4. Compare the bandwidth at maximum R_s to the bandwidth constraint: If BW at R_s is too high, then
the system is bandwidth limited; reduce your bit rate to conform to the BW constraint. Otherwise, your
system is power limited, and your R_b is achievable.
Method B: Start with a bandwidth-limited assumption:
1. Use the bandwidth constraint and the appropriate bandwidth formula to find the maximum symbol rate
R_s and then the maximum bit rate R_b.
2. Find the E_b/N_0 at the given bit rate by computing E_b/N_0 = (C/N_0) / R_b. (Again, make sure that everything
is in linear units.)
3. Find the probability of error at that E_b/N_0, using the appropriate probability of error formula.
4. If the computed P[error] is greater than the BER constraint, then your system is power limited. Use the
previous method to find the maximum bit rate. Otherwise, your system is bandwidth-limited, and you
have found the correct maximum bit rate.
Example: Rice 6.33
Consider a bandpass communications link with a bandwidth of 1.5 MHz and with an available C/N_0 = 82
dB Hz. The maximum bit error rate is 10^-6.
1. If the modulation is 16-PSK using the SRRC pulse shape with α = 0.5, what is the maximum achievable
bit rate on the link? Is this a power limited or bandwidth limited channel?
2. If the modulation is square 16-QAM using the SRRC pulse shape with α = 0.5, what is the maximum
achievable bit rate on this link? Is this a power limited or bandwidth limited channel?
Solution:
1. Try Method A. For M = 16 PSK, we can find E_b/N_0 for the maximum BER:
10^-6 = P[error] = (2/log_2 M) Q(√(2 (log_2 M) sin²(π/M) · E_b/N_0))
10^-6 = (2/4) Q(√(2(4) sin²(π/16) · E_b/N_0))
E_b/N_0 = [1/(8 sin²(π/16))] [Q^{-1}(2 × 10^-6)]²
E_b/N_0 = 69.84     (52)
Converting C/N_0 to linear, C/N_0 = 10^{82/10} = 1.585 × 10^8. Solving for R_b,
R_b = (C/N_0) / (E_b/N_0) = 1.585 × 10^8 / 69.84 = 2.27 × 10^6 = 2.27 Mbits/s
and thus R_s = R_b / log_2 M = 2.27 × 10^6 / 4 = 5.67 × 10^5 symbols/s. The required bandwidth for this
system is
B_T = (1 + α) R_b / log_2 M = 1.5 (2.27 × 10^6)/4 = 851 kHz
This is clearly lower than the maximum bandwidth of 1.5 MHz. So, the system is power limited, and can
operate with bit rate 2.27 Mbits/s. (If B_T had come out > 1.5 MHz, we would have needed to reduce R_b
to meet the bandwidth limit.)
2. Try Method A. For M = 16 (square) QAM, we can find E_b/N_0 for the maximum BER:
10^-6 = P[error] = (4/log_2 M) [(√M − 1)/√M] Q(√((3 log_2 M)/(M − 1) · E_b/N_0))
10^-6 = (4/4) [(4 − 1)/4] Q(√(3(4)/15 · E_b/N_0))
E_b/N_0 = (15/12) [Q^{-1}((4/3) × 10^-6)]²
E_b/N_0 = 27.55     (53)
Solving for R_b,
R_b = (C/N_0) / (E_b/N_0) = 1.585 × 10^8 / 27.55 = 5.75 × 10^6 = 5.75 Mbits/s
The required bandwidth for this bit rate is:
B_T = (1 + α) R_b / log_2 M = 1.5 (5.75 × 10^6)/4 = 2.16 MHz
This is greater than the maximum bandwidth of 1.5 MHz, so we must reduce the bit rate to
R_b = B_T log_2 M / (1 + α) = (1.5 MHz)(4/1.5) = 4 Mbits/s
In summary, we have a bandwidth-limited system with a bit rate of 4 Mbits/s.
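Here is a minimal Matlab sketch of Method A applied to the 16-PSK part of this example; the constraint values come from the problem statement, and the variable names are mine.

% Method A (power-limited first) for the 16-PSK part of Rice 6.33.
Qinv    = @(y) sqrt(2)*erfcinv(2*y);
CN0dBHz = 82;  BTmax = 1.5e6;  BERmax = 1e-6;  M = 16;  alpha = 0.5;
CN0     = 10^(CN0dBHz/10);
% Step 1: Eb/N0 from the M-PSK BER formula, BER = (2/log2 M) Q(sqrt(2 log2(M) sin^2(pi/M) Eb/N0))
EbN0    = Qinv(BERmax*log2(M)/2)^2 / (2*log2(M)*sin(pi/M)^2);   % ~69.8
% Step 2: maximum bit rate from the C/N0 constraint
Rb      = CN0 / EbN0;                                           % ~2.27e6 bits/s
% Step 3: bandwidth required at that bit rate
BT      = (1+alpha)*Rb/log2(M);                                 % ~851 kHz
% Step 4: check the bandwidth constraint
if BT > BTmax
    Rb = BTmax*log2(M)/(1+alpha);   % bandwidth limited: back the bit rate off
end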
30.3 Computing Received Power
We need to design our system for the power which we will receive. How can we calculate how much power will
be received? We can do this for both wireless and wired channels. Wireless propagation models are required;
this section of the class provides two examples, but there are others.
30.3.1 Free Space
‘Free space’ is the idealization in which nothing exists except for the transmitter and receiver, and can really only
be used for deep space communications. But, this formula serves as a starting point for other radio propagation
formulas. In free space, the received power is calculated from the Friis formula,
C = P_R = P_T G_T G_R λ² / (4πR)²
where
• G_T and G_R are the antenna gains at the transmitter and receiver, respectively.
• λ is the wavelength at the signal frequency. For narrowband signals, the wavelength is nearly constant across
the bandwidth, so we just use the center frequency f_c. Note that λ = c/f_c where c = 3 × 10^8 meters per
second is the speed of light.
• P_T is the transmit power.
Here, everything is in linear terms. Typically people use decibels to express these numbers, and we will write
[P_R]_dBm or [G_T]_dB to denote that they are given by:
[P_R]_dBm = 10 log_10 P_R          [G_T]_dB = 10 log_10 G_T          (54)
In lecture 2, we showed that the Friis formula, given in dB, is
[C]_dBm = [G_T]_dB + [G_R]_dB + [P_T]_dBm + 20 log_10 (λ/(4π)) − 20 log_10 R
This says the received power, C, decreases linearly with the log of the distance R. The Rice book writes the
Friis formula as
[C]_dBm = [G_T]_dB + [G_R]_dB + [P_T]_dBm + [L_p]_dB
where
[L_p]_dB = 20 log_10 (λ/(4π)) − 20 log_10 R
where [L_p]_dB is called the “channel loss”. (This name that Rice chose is unfortunate because if it was positive,
it would be a “gain”, but it is typically very negative. My opinion is, it should be called “channel gain” and
have a negative gain value.)
There are also typically other losses in a transmitter / receiver system: losses in a cable, other imperfections,
etc. Rice lumps these in as the term L and writes:
[C]_dBm = [G_T]_dB + [G_R]_dB + [P_T]_dBm + [L_p]_dB + [L]_dB
30.3.2 Non-free-space Channels
We don’t study radio propagation path loss formulas in this course. But, a very short summary is that radio
propagation on Earth is different than the Friis formula suggests. Lots of other formulas exist that approximate
the received power as a function of distance, antenna heights, type of environment, etc.
For example, whenever path loss is linear with the log of distance,
[L_p]_dB = L_0 − 10 n log_10 R
for some constants n and L_0. Effectively, because of shadowing caused by buildings, trees, etc., the average
received power may fall off more quickly than 1/R²; instead, it may fall off more like 1/R^n.
30.3.3 Wired Channels
Typically wired channels are lossy as well, but the loss is modeled as linear in the length of the cable. For
example,
[C]_dBm = [P_T]_dBm − R [L_1m]_dB
where P_T is the transmit power, R is the cable length in meters, and L_1m is the loss per meter.
30.4 Computing Noise Energy
The noise energy N_0 can be calculated as:
N_0 = k T_eq
where k = 1.38 × 10^-23 J/K is Boltzmann's constant and T_eq is called the equivalent noise temperature in
Kelvin. This is a topic covered in another course, and so you are not responsible for that material here. But
in short, the equivalent temperature is a function of the receiver design. T_eq is always going to be higher than
the temperature of your receiver. Basically, all receiver circuits add noise to the received signal. With proper
design, T_eq can be kept low.
30.5 Examples
Example: Rice 6.36
Consider a “point-to-point” microwave link. (Such links were the key component in the telephone company's
long distance network before fiber optic cables were installed.) Both antenna gains are 20 dB and the transmit
antenna power is 10 W. The modulation is 51.84 Mbits/sec 256 square QAM with a carrier frequency of 4 GHz.
Atmospheric losses are 2 dB and other incidental losses are 2 dB. A pigeon in the line-of-sight path causes an
additional 2 dB loss. The receiver has an equivalent noise temperature of 400 K and an implementation loss
of 1 dB. How far away can the two towers be if the bit error rate is not to exceed 10^-8? Include the pigeon.
Neal's hint: Use the dB version of the Friis formula and subtract these mentioned dB losses: atmospheric losses,
incidental losses, implementation loss, and the pigeon.
Solution: Starting with the modulation, M = 256 square QAM (log_2 M = 8, √M = 16), to achieve
P[error] = 10^-8,
10^-8 = (4/log_2 M) [(√M − 1)/√M] Q(√((3 log_2 M)/(M − 1) · E_b/N_0))
10^-8 = (4/8) (15/16) Q(√(3(8)/255 · E_b/N_0))
10^-8 = (15/32) Q(√(24/255 · E_b/N_0)).
E_b/N_0 = (255/24) [Q^{-1}((32/15) × 10^-8)]² = 319.0
The noise energy N_0 = k T_eq = 1.38 × 10^-23 (J/K) · 400 (K) = 5.52 × 10^-21 J. So E_b = 319.0 · 5.52 × 10^-21 =
1.76 × 10^-18 J. Since E_b = C/R_b and the bit rate R_b = 51.84 × 10^6 bits/sec, C = (51.84 × 10^6 1/s)(1.76 × 10^-18 J) =
9.13 × 10^-11 W, or −100.4 dBW.
Switching to finding an expression for C, the wavelength is λ = (3 × 10^8 m/s)/(4 × 10^9 1/s) = 0.075 m, so:
[C]_dBW = [G_T]_dB + [G_R]_dB + [P_T]_dBW + 20 log_10 (λ/(4π)) − 20 log_10 R − 2 dB − 2 dB − 2 dB − 1 dB
        = 20 dB + 20 dB + 10 dBW + 20 log_10 (0.075/(4π)) − 20 log_10 R − 7 dB
        = −1.48 dBW − 20 log_10 R     (55)
Plugging in [C]_dBW = −100.4 dBW = −1.48 dBW − 20 log_10 R and solving for R, we find R = 88.3 km. Thus
microwave towers should be placed at most 88.3 km (about 55 miles) apart.
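The same calculation in a short Matlab sketch; the numbers come from the example, while the variable names and ordering of steps are my own.

% Rice 6.36: maximum tower separation for 256-QAM at BER 1e-8.
Qinv  = @(y) sqrt(2)*erfcinv(2*y);
M     = 256;  Rb = 51.84e6;  BER = 1e-8;
EbN0  = (M-1)/(3*log2(M)) * Qinv(BER*log2(M)/4*sqrt(M)/(sqrt(M)-1))^2;  % ~319
N0    = 1.38e-23 * 400;                  % k*Teq, in J
C_dBW = 10*log10(EbN0*N0*Rb);            % required received power, ~ -100.4 dBW
lam   = 3e8/4e9;                         % wavelength, m
% Friis in dB: C_dBW = 20 + 20 + 10 + 20*log10(lam/(4*pi)) - 20*log10(R) - 7
R     = 10^((20 + 20 + 10 + 20*log10(lam/(4*pi)) - 7 - C_dBW)/20)   % meters, ~88 km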
Lecture 20
Today: (1) Timing Synchronization Intro, (2) Interpolation Filters
31 Timing Synchronization
At the receiver, a transmitted signal arrives with an unknown delay τ. The received complex baseband signal
(for QAM, PAM, or QPSK) can be written (assuming carrier synchronization) as
r(t) = Σ_k Σ_m a_m(k) φ_m(t − kT_s − τ)     (56)
where a_m(k) is the symbol value transmitted on the mth basis function during the kth symbol period.
As a start, let’s compare receiver implementations for a (mostly) continuous-time and a (more) discrete-time
receiver. Figure 54 has a timing synchronization loop which controls the sampler (the ADC).
Figure 54: Block diagram for a continuous-time receiver, including analog timing synchronization (from Rice
book, Figure 8.3.1).
The input signal is downconverted and then run through matched filters, which correlate the signal with
φ_n(t − t_k) for each n, and for some delay t_k. For the correlation with n,
x_n(k) = ⟨r(t), φ_n(t − t_k)⟩
x_n(k) = Σ_k Σ_m a_m(k) ⟨φ_n(t − t_k), φ_m(t − kT_s − τ)⟩     (57)
Note that if t_k = kT_s + τ, then the correlation ⟨φ_n(t − t_k), φ_n(t − kT_s − τ)⟩ is highest and closest to 1. This
t_k is the correct timing delay at each correlator for the kth symbol. But, these are generally unknown to the
receiver until timing synchronization is complete.
Figure 55 shows a receiver with an ADC immediately after the downconverter. Here, note the ADC has
nothing controlling it. Instead, after the matched filter, an interpolator corrects the sampling time problems
using discrete-time processing. This interpolation is the subject of this section.
Figure 55: Block diagram for a digital receiver for QAM/PSK, including discrete-time timing synchronization
(downconversion, ADCs, matched filters, digital interpolators, timing synchronization control, and the optimal
detector).
The industry trend is more and more towards digital implementations. A ‘software radio’ follows the idea
that as much of the radio as possible is done digitally, after the signal has been sampled. The idea is to “bring the
ADC to the antenna” for purposes of reconfigurability, and reducing part counts and costs.
Another implementation is like Figure 55 but instead of the interpolator, the timing synch control block is
fed back to the ADC. But again, this requires a DAC and feedback to the analog part of the receiver, which is
not preferred. Also, because of the processing delay, this digital and analog feedback loop can be problematic.
First, we’ll talk about interpolation, and then, we’ll consider the control loop.
32 Interpolation
In this class, we will emphasize digital timing synchronization using an interpolation filter. For example, consider
Figure 56. In this figure, a BPSK receiver samples the matched filter output at a rate of twice per symbol,
unsynchronized with the symbol clock, resulting in samples r(nT).
Figure 56: Samples of the matched filter output (BPSK, RRC with α = 1) taken at twice the correct symbol
rate (vertical lines), but with a timing error. If down-sampling (by 2) results in the symbol samples r_k given by
red squares, then sampling sometimes reduces the magnitude of the desired signal.
Some example sampling clocks, compared to the actual symbol clock, are shown in Figure 57. These are
shown in degrees of severity of correction for the receiver. When we say ‘synchronized in rate’, we mean within
an integer multiple, since the sampling clock must operate at (at least) double the symbol rate.
Figure 57: (1) Sampling clock and (2-4) possible actual symbol clocks. Symbol clock may be (2) synchronized
in rate and phase, (3) synchronized in rate but not in phase, (4) synchronized neither in rate nor phase, with
sample clock.
In general, our receivers always deal with type (4) sampling clock error as drawn in Figure 57. That is, the
sampling clock has neither the same exact rate nor the same phase as the actual symbol clock.
Def'n: Incommensurate
Two clocks with rates T and T_s are incommensurate if the ratio T/T_s is irrational. In contrast, two clocks are
commensurate if the ratio T/T_s can be written as n/m where n, m are integers.
For example, if T/T_s = 1/2, the two clocks are commensurate and we sample exactly twice per symbol period.
As another example, if T/T_s = 2/5, we sample exactly 2.5 times per symbol, and every 5 samples the delay
until the next correct symbol sample will repeat. Since clocks are generally incommensurate, we cannot count
on them ever repeating.
The situation shown in Figure 56 is case (3), where T/T_s = 1/2 (the clocks are commensurate), but the
sampling clock does not have the correct phase (τ is not equal to an integer multiple of T).
32.1 Sampling Time Notation
In general, for both cases (3) and (4) in Figure 57, the correct sampling times should be kT_s + τ, but no samples
were taken at those instants. Instead, kT_s + τ is always µ(k)T after the most recent sampling instant, where
µ(k) is called the fractional interval. We can write that
kT_s + τ = [m(k) + µ(k)]T     (58)
where m(k) is an integer, the highest integer such that m(k)T < kT_s + τ, and 0 ≤ µ(k) < 1. In other words,
m(k) = ⌊(kT_s + τ)/T⌋
where ⌊x⌋ is the greatest integer less than function (the Matlab floor function). This means that µ(k) is given
by
µ(k) = (kT_s + τ)/T − m(k)
Example: Calculation Example
Let T_s/T = 3.1 and τ/T = 1.8. Calculate (m(k), µ(k)) for k = 1, 2, 3.
Solution:
m(1) = ⌊3.1 + 1.8⌋ = 4;     µ(1) = 0.9
m(2) = ⌊2(3.1) + 1.8⌋ = 8;     µ(2) = 0
m(3) = ⌊3(3.1) + 1.8⌋ = 11;     µ(3) = 0.1
Thus your interpolation will be done: in between samples 4 and 5; at sample 8; and in between samples 11 and
12.
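This calculation is a couple of lines of Matlab; a sketch using the floor function (variable names are mine):

% (m(k), mu(k)) for Ts/T = 3.1 and tau/T = 1.8, k = 1, 2, 3.
TsT = 3.1;  tauT = 1.8;  k = 1:3;
m   = floor(k*TsT + tauT);           % m(k) = floor((k*Ts + tau)/T)
mu  = k*TsT + tauT - m;              % mu(k) in [0,1)
[k; m; mu]                           % expect m = [4 8 11], mu = [0.9 0.0 0.1]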
32.2 Seeing Interpolation as Filtering
Consider the output of the matched filter, r(t) as given in (57). The analog output of the matched filter could
be represented as a function of its samples r(nT),
r(t) = Σ_n r(nT) h_I(t − nT)     (59)
where
h_I(t) = sin(πt/T) / (πt/T).
Why is this so? What are the conditions necessary for this representation to be accurate?
If we wanted the signal at the correct sampling times, we could have it – we just need to calculate r(t) at
another set of times (not nT).
Call the correct symbol sampling times kT_s + τ for integer k, where T_s is the actual symbol period used
by the transmitter. Plugging these times in for t in (59), we have that
r(kT_s + τ) = Σ_n r(nT) h_I(kT_s + τ − nT)
Now, using the (m(k), µ(k)) notation, since kT_s + τ = [m(k) + µ(k)]T,
r([m(k) + µ(k)]T) = Σ_n r(nT) h_I([m(k) − n + µ(k)]T).
Re-indexing with i = m(k) − n,
r([m(k) + µ(k)]T) = Σ_i r([m(k) − i]T) h_I([i + µ(k)]T).     (60)
This is a filter on samples of r(·), where the filter coefficients are dependent on µ(k).
Note: Good illustrations are given in M. Rice, Figure 8.4.12, and Figure 8.4.13.
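A minimal Matlab sketch of (59)-(60) on a toy signal, truncating the infinite sum to the available samples; the test signal and all names are assumptions, and sinc() is assumed available (otherwise define it as sin(πx)/(πx)).

% Recover r(k*Ts + tau) from samples r(nT) by (truncated) sinc interpolation.
T   = 1;  Ts = 2*T;  tau = 0.3;         % 2 samples/symbol, fractional delay
n   = 0:63;
r   = cos(2*pi*0.05*n*T);               % any bandlimited stand-in for the MF output
k   = 5;                                % which symbol sample we want
t0  = k*Ts + tau;                       % desired sampling instant
rk  = sum(r .* sinc((t0 - n*T)/T));     % r(kTs+tau) = sum_n r(nT) hI(kTs+tau - nT)
rk_true = cos(2*pi*0.05*t0);            % compare: rk is close to the true value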
32.3 Approximate Interpolation Filters
Clearly, (60) is a filter. The desired sample at [m(k) +µ(k)]T is calculated by adding the weighted contribution
from the signal at each sampling time. The problem is that in general, this requires an infinite sum over i from
−∞ to ∞, because the sinc function has infinite support.
Instead, we use polynomial approximations for h_I(t):
• The easiest one we’re all familiar with is linear interpolation (a first-order polynomial), in which we draw
a straight line between the two nearest sampled values to approximate the values of the continuous signal
between the samples. This isn’t so great of an approximation.
• A second-order polynomial (i.e., a parabola) is actually a very good approximation. Given three points,
one can determine a parabola that fits those three points exactly.
• However, the three point fit does not result in a linear-phase filter. (To see this, note in the time domain
that two samples are on one side of the interpolation point, and one on the other. This is temporal
asymmetry.) Instead, we can use four points to fit a second-order polynomial, and get a linear-phase
filter.
• Finally, we could use a cubic interpolation filter. Four points determine a 3rd order polynomial, and result
in a linear-phase filter.
To see results for different order polynomial filters, see M. Rice Figure 8.24.
32.4 Implementations
Note: These polynomial filters are called Farrow filters and are named after Cecil W. Farrow, of AT&T Bell
Labs, who has the US patent (1989) for the “Continuously variable digital delay circuit”. These Farrow filters
started to be used in the 90’s and are now very common due to the dominance of digital processing in receivers.
From (60), we can see that the filter coefficients are a function of µ(k), the fractional interval. Thus we
could re-write (60) as
r([m(k) + µ(k)]T) = Σ_i r([m(k) − i]T) h_I(i; µ(k)).     (61)
That is, the filter is h_I(i) but its values are a function of µ(k). The filter coefficients are a polynomial function
of µ(k), that is, they are a weighted sum of µ(k)^0, µ(k)^1, µ(k)^2, . . . , µ(k)^p for a pth order polynomial filter.
Example: First order polynomial interpolation
For example, consider the linear interpolator.
r([m(k) + µ(k)]T) = Σ_{i=−1}^{0} r([m(k) − i]T) h_I(i; µ(k))
What are the filter elements h_I for a linear interpolation filter?
Solution:
r([m(k) + µ(k)]T) = µ(k) r([m(k) + 1]T) + [1 − µ(k)] r(m(k)T)
Here we have used h_I(−1; µ(k)) = µ(k) and h_I(0; µ(k)) = 1 − µ(k).
Essentially, given µ(k), we form a weighted average of the two nearest samples. As µ(k) → 1, we should
take the r([m(k) + 1]T) sample exactly. As µ(k) → 0, we should take the r(m(k)T) sample exactly.
32.4.1 Higher order polynomial interpolation filters
In general,
h_I(i; µ(k)) = Σ_{l=0}^{p} b_l(i) µ(k)^l
A full table of b_l(i) is given in Table 8.1 of the M. Rice handout.
Note that the i indices seem backwards.
For the 2nd order Farrow filter, there is an extra degree of freedom – you can select parameter α to be in
the range 0 < α < 1. It has been shown by simulation that α = 0.43 is best, but people tend to use α = 0.5
because it is only slightly worse, and division by two is extremely easy in digital filters.
Example: 2nd order Farrow filter
What is the Farrow filter for α = 0.5 which interpolates exactly half-way between sample points?
Solution: From the problem statement, µ = 0.5. Since µ² = 0.25, µ = 0.5, and µ^0 = 1, we can calculate that
h_I(−2; 0.5) = αµ² − αµ = 0.125 − 0.25 = −0.125
h_I(−1; 0.5) = −αµ² + (1 + α)µ = −0.125 + 0.75 = 0.625
h_I(0; 0.5) = −αµ² + (α − 1)µ + 1 = −0.125 − 0.25 + 1 = 0.625
h_I(1; 0.5) = αµ² − αµ = 0.125 − 0.25 = −0.125     (62)
Does this make sense? Do the weights add up to 1? Does it make sense to subtract a fraction of the two
more distant samples?
Example: Matlab implementation of Farrow Filter
My implementation is called ece5520 lec20.m and is posted on WebCT. Be careful, as my implementation
uses a loop, rather than vector processing.
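For reference, the first-order (linear) interpolator of the earlier example is a one-liner once m(k) and µ(k) are known. This is my own vectorized sketch (not the posted implementation), using Matlab's 1-based indexing for the samples r(nT).

% Vectorized linear interpolation, hI(0;mu) = 1-mu and hI(-1;mu) = mu.
r  = randn(1,100);                      % stand-in for matched filter output samples
m  = [4 8 11];  mu = [0.9 0.0 0.1];     % basepoints and fractional intervals
rk = (1-mu).*r(m) + mu.*r(m+1)          % interpolated values r([m(k)+mu(k)]T)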
Lecture 21
Today: (1) Timing Synchronization (2) PLLs
33 Final Project Overview
Figure 58: Flow chart of final project: timing synchronization for QPSK (matched filters on the I and Q
branches, interpolation filters, timing error detector, loop filter, VCC with symbol strobe, preamble detector,
and symbol decision producing the data bits).
For your final project, you will implement the above receiver. It adds these blocks to your QPSK implemen-
tation:
1. Interpolation Filter
2. Timing Error Detector (TED)
3. Loop Filter
4. Voltage Controlled Clock (VCC)
5. Preamble Detector
These notes address the motivation and overall design of each block.
Compared to last year's class, you have the advantage of having access to the Rice book, Section 8.4.4,
which contains much of the code for a BPSK implementation. When you put these together and implement
them for QPSK, you will need to understand how and why they work, in order to fix any bugs.
33.1 Review of Interpolation Filters
Timing synchronization is necessary to know when to sample the matched filter output. We want to sample at
times [n+µ]T
s
, where n is the integer part and µ is the fractional offset. Often, we leave out the T
s
and simply
talk about the index n or fractional delay µ.
Implementations may be continuous time, discrete time, or a mix. We focus on the discrete time solutions.
• Problem: After the matched filter, the samples may be at incorrect times, and in modern discrete-time
implementations, there may be no analog feedback to the ADC.
• Solution: From samples taken at or above the Nyquist rate, you can interpolate between samples to find
the desired sample.
However this solution leads to new problems:
• Problem: True interpolation requires significant calculation – the sinc filter has infinite impulse response.
• Solution: Approximate the sinc function with a 2nd or 3rd order polynomial interpolation filter; it works
nearly as well.
• Problem: How do you know when the correct symbol sampling time should be?
• Solution: Use a timing locked loop, analogous to a phased locked loop.
33.2 Timing Error Detection
We consider timing error detection blocks that operate after the matched filter and interpolator in the receiver,
whose output is denoted x_I(n). The job of the timing error detector is to produce a voltage error signal e(n)
which is proportional to the timing error between the current sampling offset (µ̂) and the correct
sampling offset.
There are several possible timing error detection (TED) methods, as related in Rice 8.4.1. This is a good
source for detail on many possible methods. We will talk through two common discrete-time implementations:
1. Early-late timing error detector (ELTED)
2. Zero-crossing timing error detector (ZCTED)
We’ll use a continuous-time realization of a BPSK received matched filter output, x(t), to illustrate the
operation of timing error detectors. You can imagine that the interpolator output x_I(n) is a sampled version of
x(t), hopefully with some samples taken at exactly the correct symbol sampling times.
Figure 59(a) shows an example eye diagram of the signal x(t) and Figure 59(b) shows its derivative ẋ(t). In
both, time 0 corresponds to a correct symbol sampling time.
Figure 59: A sample eye-diagram of a RRC-shaped BPSK received signal (post matched filter), x(t). Here (a)
shows the signal x(t) and (b) shows its derivative ẋ(t), both plotted against time from the symbol sample time.
33.3 Early-late timing error detector (ELTED)
The early-late TED is a discrete-time implementation of the continuous-time “early-late gate” timing error
detector. In general, the error e(n) is determined by the slope and the sign of the sample x(n), at time n. In
particular, consider for BPSK the value of
ẋ(t) sgn{x(t)}
Since the derivative ẋ(t) is close to zero at the correct sampling time, and increases away from the correct
sampling time, we can use it to indicate how far from the sampling time we might be.
Figure 59 shows that the derivative is a good indication of timing error when the bit is changing from -1,
to 1, to -1. When the bit is constant (e.g., 1, to 1, to 1) the slope is close to zero. When the bit sequence is
from 1, to 1, to -1, the slope will be somewhat negative even when the sample is taken at the right time, and
when the bit is changing from -1, to -1, to 1, the slope will be somewhat positive even when the sample is taken
at the right time. These are imperfections in the ELTED which (hopefully) average out over many bits. In
particular, we can send alternating bits during the synchronization period in order to help the ELTED more
quickly converge.
The signum function sgn{x(t)} just corrects for the sign:
• A positive slope would mean we’re behind for a +1 symbol, but would mean that we’re ahead for a -1
symbol.
• In the opposite manner, a negative slope would mean we’re ahead for a +1 symbol, but would mean that
we’re behind for a -1 symbol.
For a discrete time system, you get only an approximation of the slope unless the sampling rate SPS (or
N, as it is called in the Rice book) is high. We'd estimate ẋ(t) from the samples out of the interpolator and see
that
e(n − 1) = {x_I(n) − x_I(n − 2)} sgn{x_I(n − 1)}
Here, we don't divide by the time duration 2T_s because it is just a scale factor, and we just need something
proportional to the timing error. This factor contributes to the gain K_p of this part in the loop. This timing
error detector has a theoretical gain K_p which is shown in Figure 8.4.7 of the Rice book.
33.4 Zero-crossing timing error detector (ZCTED)
The zero-crossing timing error detector (ZCTED) is also described in detail in Section 8.4.1 of the Rice book.
In particular see Figure 8.4.8 of the Rice book. This error detector assumes that the sampling rate is SPS = 2.
It is described here for BPSK systems.
The error is,
e(k) = x((k − 1/2)T_s + τ̂) [â(k − 1) − â(k)]     (63)
where the â(k) term is the estimate of the kth symbol (either +1 or −1),
â(k − 1) = sgn{x((k − 1)T_s + τ̂)},     (64)
â(k) = sgn{x(kT_s + τ̂)}.     (65)
The error detector is non-zero at symbol sampling times. That is, if the symbol strobe is not activated at sample
n, then the error e(n) = 0.
In terms of n, because every second sample is a symbol sampling time, (63) can be re-written as
e(n) = x_I(n − 1) [sgn{x_I(n − 2)} − sgn{x_I(n)}]     (66)
This operation is drawn in Figure 60.
Basically, if the sign changes between n − 2 and n, that indicates a symbol transition. If there was a symbol
transition, the intermediate sample n − 1 should be approximately zero, if it was taken at the correct sampling
time.
The theoretical gain in the ZCTED is twice that of the ELTED. The factor of two comes from the difference
of signs in (66), which will be 2 when the sign changes. In the project, you will need to look up K_p in Figure
8.17 of Rice, which is a function of the excess bandwidth of the RRC filter used, and then multiply it by 2 to
find the gain K_p of your ZCTED.
Figure 60: Block diagram of the zero-crossing timing error detector (ZCTED), where sgn indicates the signum
function, 1 when positive and -1 when negative. The input strobe signal activates the sampler when it is the
correct symbol sampling time. When this happens, and the symbol has switched, the output is 2x_I(n − 1),
which would be approximately zero if the sample clock is synchronized.
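A minimal Matlab sketch of (66) on a stand-in signal sampled at 2 samples/symbol; the stand-in signal and all names are assumptions. With perfect timing the midpoint samples at symbol transitions are zero, so e(n) ≈ 0.

% Zero-crossing TED error e(n) = xI(n-1)*(sgn(xI(n-2)) - sgn(xI(n))).
a  = 2*randi([0 1],1,50) - 1;                       % BPSK symbols, +/-1
xI = zeros(1, 2*length(a));
xI(1:2:end) = a;                                    % samples at symbol times
xI(2:2:end) = [(a(1:end-1)+a(2:end))/2, a(end)];    % samples midway between symbols
e = zeros(size(xI));
for n = 3:2:length(xI)                              % every second sample is a symbol time
    e(n) = xI(n-1)*(sign(xI(n-2)) - sign(xI(n)));
end

With a timing offset, the samples between symbols are no longer zero at transitions, and e(n) becomes proportional to that offset, with the sign indicating early vs. late sampling.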
33.4.1 QPSK Timing Error Detection
When using QPSK, both the in-phase and quadrature signals are used to calculate the error term. The error is
simply the sum of the two errors calculated for each signal. See (8.100) in the Rice book for the exact expression.
33.5 Voltage Controlled Clock (VCC)
We’ll use a decrementing VCC in the project. This is shown in Figure 61. (The choice of incrementing or
decrementing is arbitrary.) An example trace of the numerically controlled oscillator (NCO) which is the heart
of the VCC is shown in Figure 62.
Figure 61: Block diagram of decrementing VCC with control voltage v(n). When the modulo-1 NCO underflows,
the symbol strobe goes high and the fractional interval is updated to µ = NCO(n − 1)/W(n).
Figure 62: Numerically-controlled oscillator signal NCO(n).
The NCO starts at 1. At each sample n, the NCO decrements by 1/SPS + v(n). Once per symbol (on
average), the NCO voltage drops below zero. When this happens, the strobe is set to high, meaning that a
sample must be taken.
When the symbol strobe is activated, the fractional interval is also recalculated. It is,
µ(n) = NCO(n − 1) / W(n)     (67)
When the strobe is not activated, µ(n) is kept the same, i.e., µ(n) = µ(n − 1). You will prove that (67) is the
correct form for µ(n) in the HW 9 assignment.
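A minimal Matlab sketch of the decrementing NCO with v(n) = 0 (free-running), showing the underflow strobe and the µ update of (67); all names are mine.

% Decrementing NCO: underflow marks the symbol strobe; mu from eq. (67).
SPS = 2;  N = 20;
v   = zeros(1,N);                    % loop filter output (zero here)
NCO = 1;  mu = 0;  strobe = false(1,N);
for n = 1:N
    W = 1/SPS + v(n);
    if NCO - W < 0                   % underflow: symbol sampling instant
        strobe(n) = true;
        mu  = NCO / W;               % fractional interval, eq. (67)
        NCO = NCO - W + 1;           % modulo-1 wrap
    else
        NCO = NCO - W;
    end
end

With v(n) = 0 the strobe fires every SPS samples; in the real loop the loop filter output nudges W(n) so that the strobe tracks the actual symbol clock.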
33.6 Phase Locked Loops
Carrier synchronization is done using a phase-locked loop (PLL). This section is included for reference. For
those of you familiar with PLLs, the operation of the timing synchronization loop may be best understood by
analogy to the continuous time carrier synchronization loop.
Figure 63: General block diagram of a phase-locked loop for carrier synchronization: a phase detector, loop
filter, and VCO with output phase θ̂(t) = k_0 ∫_0^t v(x) dx [Rice].
Consider a generic PLL block diagram in Figure 63. The incoming signal is a carrier wave,
A cos(2πf_c t + θ(t))
where θ(t) is the phase offset. It is a function of time, t, because it may not be constant. For example, it may
include a frequency offset, and then θ(t) = ∆f t + α. In this case, the phase offset is a ramp.
The job of the PLL is to estimate θ(t). We call this estimate θ̂(t). The error in the phase estimate is
θ_e(t) = θ(t) − θ̂(t)
If the PLL is perfect, θ_e(t) = 0.
33.6.1 Phase Detector
If the estimate of the phase is not perfect, the phase detector produces a voltage to correct the phase estimate.
The function g(θ_e) may look something like the one shown in Figure 64.
Initially, let's ignore the loop filter.
1. If the phase estimate is too low, that is, behind the carrier, then g(θ_e) > 0 and the VCO increases the
phase of the VCO output.
2. If the phase estimate is too high, that is, ahead of the carrier, then g(θ_e) < 0 and the VCO decreases the
phase of the VCO output.
3. If the phase estimate is exactly correct, then g(θ_e) = 0 and the VCO output phase remains constant.
This is the effect of the ‘S’ curve function g(θ_e) in Figure 64.
Figure 64: Typical phase detector input-output relationship, g(θ_e) vs. θ_e. This curve is called an ‘S’ curve
because the shape of the curve looks like the letter ‘S’ [Rice].
33.6.2 Loop Filter
The loop filter is just a low-pass filter to reduce the noise. Note that in addition to the A cos(2πf_c t + θ(t))
signal term we also have channel noise coming into the system. We represent the loop filter in the Laplace or
frequency domain as F(s) or F(f), respectively, for continuous time. We will be interested in discrete time
later, so we could also write the Z-transform response as F(z).
33.6.3 VCO
A voltage-controlled oscillator (VCO) simply integrates the input and uses that integral as the phase of a cosine
wave. The input is essentially the gas pedal for a race car driving around a track - increase the gas (voltage),
and the engine (VCO) will increase the frequency of rotation around the track.
Note that if you just raise the voltage for a short period and then reduce it back (a constant plus rect
function input), you change the phase of the output, but leave the frequency the same.
33.6.4 Analysis
To analyze the loop in Figure 63, we end up making simplifications. In particular, we model the ‘S’ curve
(Figure 64) as a line. This linear approximation is generally fine when the phase error is reasonably small, i.e.,
θ_e ≈ 0. If we do this, we can re-draw Figure 63 as it is drawn in Figure 65.
Figure 65: Phase-equivalent and linear model for the continuous-time PLL, in the Laplace domain [Rice].
In the left half of Figure 65, we write just Θ(s) and Θ̂(s) even though the physical process, as we know, is
the cosine of 2πf_c t plus that phase. This phase-equivalent model of the PLL allows us to do analysis.
We can write that
Θ_e(s) = Θ(s) − Θ̂(s)
       = Θ(s) − Θ_e(s) k_p F(s) (k_0/s)
To get the transfer function of the filter (when we consider the phase estimate to be the output), then some
manipulation shows you that
H_a(s) = Θ̂(s)/Θ(s) = [k_p F(s) (k_0/s)] / [1 + k_p F(s) (k_0/s)] = k_0 k_p F(s) / (s + k_0 k_p F(s))
You can use H_a(s) to design the loop filter F(s) and gains k_p and k_0 to achieve the desired PLL goals. Here
those goals include:
1. Desired bandwidth.
2. Desired response to particular phase error models.
The latter deserves more explanation. For example, phase error models might be: step input; or ramp input.
Designing a filter to respond to these, but not to AWGN noise (as much as possible), is a challenge. The Rice
book recommends a 2nd order proportional-plus-integrator loop filter,
F(s) = k_1 + k_2/s
This filter has a proportional part (k_1), which is just proportional to the input and responds during a phase
offset. The filter also has an integrator part (k_2/s), which integrates the phase input and, in case of a frequency
offset, will ramp up the phase at the proper rate.
Note: An accelerating frequency change wouldn't be handled properly by the given filter. For example, if the
temperature of the transmitter or receiver varied quickly, the frequency of the crystal would also change quickly.
However, this is not typically fast enough to be a problem in most radios.
In this case there are four parameters to set, k_0, k_1, k_2, and k_p. Rice Section C.2 details how to set these
parameters to achieve a desired bandwidth and a desired damping factor (to avoid ringing, but to converge
quickly). You will, in Homework 9, design the particular loop filter to use in your final project.
33.6.5 Discrete-Time Implementations
You can alter F(s) to be a discrete time filter using Tustin's approximation,
1/s ≈ (T/2) (1 + z^{-1}) / (1 − z^{-1})
Because it is only a second-order filter, its implementation is quite efficient. See Figure C.2.1 (p 733, Rice book)
for diagrams of this discrete-time PLL. Equations C.56 and C.57 (p 736) show how to calculate the constants
K_1 and K_2, given the desired natural frequency θ_n, the normalized bandwidth B_n T, damping coefficient ζ, and
other gain constants K_0 and K_p.
Note that the bandwidth B_n T is the most critical factor. It is typically much less than one (e.g., 0.005).
This means that the loop filter effectively averages over many samples when setting the input to the VCO.
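As a sketch of how the proportional-plus-integrator filter looks after Tustin's approximation, here is a Matlab difference-equation implementation; K1, K2, and T are placeholder values, not the constants you will design from C.56/C.57.

% F(s) = K1 + K2/s discretized with 1/s -> (T/2)(1+z^-1)/(1-z^-1).
K1 = 0.05;  K2 = 1e-3;  T = 1;           % placeholder gains and sample period
e  = [zeros(1,10), ones(1,40)];          % example input: a step in phase error
v  = zeros(size(e));                     % loop filter output
integ = 0;                               % integrator state
for n = 2:length(e)
    integ = integ + K2*(T/2)*(e(n) + e(n-1));   % trapezoidal (Tustin) integration
    v(n)  = K1*e(n) + integ;
end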
Lecture 22
Today: (1) Exam 2 Review
• Exam 2 is Tue Apr 14.
• I will be out of town Mon-Wed. The exam will be proctored. Please come to me with questions today or
tomorrow.
• Office hours: Today: 2-3:30. Friday 11-12, 3-4.
34 Exam 2 Topics
Where to study:
• Lectures covered: 8 (detection theory), 9, 11-19. Lectures 20,21 are not covered. No material from 1-7
will be directly tested, but of course you need to know some things from the start of the semester to be
able to perform well on the second part of this course.
• Homeworks covered: 4-8. Please review these homeworks and know the correct solutions.
• Rice book: Chapters 5 and 6.
What topics:
1. Detection theory
2. Signal space
3. Digital modulation methods:
• Binary, bipolar vs. unipolar
• M-ary: PAM, QAM, PSK, FSK
4. Inter-symbol interference, Nyquist filtering, SRRC pulse shaping
5. Differential encoding and decoding for BPSK
6. Energy detection for FSK
7. Gray encoding
8. Probability of Error vs. E_b/N_0.
.
• Standard modulation method. Both know the formula and know how it can be derived.
• A non-standard modulation method, given signal space diagram.
• Exact expression, union bound, nearest-neighbor approximation.
9. Bandwidth assuming SRRC pulse shaping for PAM/QAM/PSK, FSK, concept of frequency multiplexing
10. Comparison of digital modulation methods:
• Need for linear amplifiers (transmitter complexity)
• Receiver complexity
• Bandwidth efficiency
Types of questions (format):
1. Short answer, with a 1-2 sentence limit. There may be multiple correct answers, but there may be many
wrong answers (or right answer / wrong reason combinations).
2. True / false or multiple choice.
3. Work out solution. Example: What is the probability of bit error vs. E_b/N_0 for this signal space diagram?
4. System design: Here are some engineering requirements for this communication system. From this list of
possible modulation, which one would you recommend? What bandwidth and bit rate would it achieve?
You will be provided the table of the Qinv function, and the plot of Qinv. You can use two sides of an 8.5 x 11
sheet of paper.
Lecture 23
Today: (1) Source Coding
• HW 9 is due today at 5pm. OH today 2-3pm, Monday 4-5pm.
• Final homework: HW 10, which will be due April 28 at 5pm.
35 Source Coding
This topic is a semester-long course at the graduate level in itself; but the basic ideas can be presented pretty
quickly.
One good reference:
• C. E. Shannon, “A mathematical theory of communication,” Bell System Technical Journal, vol. 27, pp.
379-423 and 623-656, July and October, 1948. http://www.cs.bell-labs.com/cm/ms/what/shannonday/paper.html
We have done a lot of counting of bits as our primary measure of communication systems. Our information
source is measured in bits, or in bits per second. Modulation schemes’ bandwidth efficiency is measured in bits
per Hertz, and energy efficiency is energy per bit over noise PSD. Everything is measured in bits!
But how do we measure the bits of a source (e.g., audio, video, email, ...)? Information can often be
represented in many different ways. Images and sound can be encoded in different ways. Text files can be
presented in different ways.
Here are two misconceptions:
1. Using the file size tells you how much information is contained within the file.
2. Take the log_2 of the number of different messages you could send.
For example, consider a digital black & white image (not grayscale, but truly black or white).
1. You could store it as a list of pixels. Each pixel has two possibilities (possible messages), thus we could
encode it in log_2 2 = 1 bit per pixel.
2. You could simply send the coordinates of the pixels of one of the colors (e.g., all black pixels).
How many bits would be used in these two representations? What would make you decide which one is more
efficient?
You can see that equivalent representations can use different number of bits. This is the idea behind source
compression. For example, .zip or .tar files represent the exact same information that was contained in the
original files, but with fewer bits.
What if we had a fixed number of bits to send any image, and we used the sparse B&W image coding scheme
(2.) above? Sometimes, the number of bits in the compressed image would exceed what we had allocated. This
would introduce errors into the image.
Two types of compression algorithms:
• Lossless: e.g., Zip or compress.
• Lossy: e.g., JPEG, MP3
Note: Both “zip” and the unix “compress” commands use the Lempel-Ziv algorithm for source compression.
So what is the intrinsic measure of bits of text, an image, audio, or video?
35.1 Entropy
Entropy is a measure of the randomness of a random variable. Randomness and information, in non-technical
language, are just two perspectives on the same thing:
• If you are told the value of a r.v. that doesn’t vary that much, that telling conveys very little information
to you.
• If you are told the value of a very “random” r.v., that telling conveys quite a bit of information to you.
Our technical definition of entropy of a random variable is as follows.
Def ’n: Entropy
Let X be a discrete random variable with pmf p_X(x_i) = P[X = x_i]. Here, there is a finite or countably infinite
set S_X, and x ∈ S_X. We will shorten the notation by using p_i as follows:
p_i = p_X(x_i) = P[X = x_i]
where {x_1, x_2, . . .} is an ordering of the possible values in S_X. Then the entropy of X, in units of bits, is defined
as,
H[X] = −Σ_i p_i log_2 p_i     (68)
Notes:
• H[X] is an operator on a random variable, not a function of a random variable. It returns a (deterministic)
number, not another random variable. Thus it is like E[X], another operator on a random variable.
• Entropy of a discrete random variable X is calculated using the probability values of the pmf of X, p_i.
Nothing else is needed.
• The sum will be from i = 1 . . . N when |S_X| = N < ∞.
• Use that 0 log 0 = 0. This is true in the limit of x log x as x → 0^+.
• All “log” functions are log-base-2 in information theory unless otherwise noted. Keep this in mind when
reading a book on information theory. The “reason” the units are bits is because of the base-2 of the
log. Actually, when theorists use log_e or the natural log, they express information in “nats”, short for
“natural” digits.
Example: Binary r.v.
A binary (Bernoulli) r.v. has pmf,
p_X(x) = s for x = 1;  1 − s for x = 0;  0 otherwise.
What is the entropy H[X] as a function of s?
Solution: Entropy is given by (68) and is:
H[X] = −s log_2 s − (1 − s) log_2(1 − s)
The solution is plotted in Figure 66.
Figure 66: Entropy of a binary r.v. as a function of P[X = 1].
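Entropy is one line of Matlab once the pmf is known; a sketch that plots the binary entropy curve of Figure 66 and checks the five-message source of the next example (the function name Hfun is mine).

% H[X] = -sum(p.*log2(p)), with 0*log2(0) treated as 0.
Hfun = @(p) -sum(p(p>0).*log2(p(p>0)));
s  = 0.001:0.001:0.999;
Hb = -s.*log2(s) - (1-s).*log2(1-s);            % binary entropy function
plot(s, Hb); xlabel('P[X=1]'); ylabel('Entropy H[X]');
Hfun([1/16 1/4 1/2 1/8 1/16])                   % 1.875 = 15/8 bits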
Example: Non-uniform source with five messages
Some signals are more often close to zero (e.g., audio). Model the r.v. X to have pmf
p_X(x) = 1/16 for x = 2;  1/4 for x = 1;  1/2 for x = 0;  1/8 for x = −1;  1/16 for x = −2;  0 otherwise.
What is its entropy H[X]?
Solution:
H[X] = (1/2) log_2 2 + (1/4) log_2 4 + (1/8) log_2 8 + 2 · (1/16) log_2 16 = 15/8 bits     (69)
Other questions:
1. Do you need to know what the symbol set S
X
is?
2. Would multiplying X by 2 change its entropy?
3. Would an arbitrary one-to-one function change the entropy of X?
35.2 Joint Entropy
Def ’n: Joint Entropy
The joint entropy of two random variables X_1, X_2 with event sets S_X1 and S_X2 is defined as
H[X_1, X_2] = −Σ_{x_1 ∈ S_X1} Σ_{x_2 ∈ S_X2} p_{X1,X2}(x_1, x_2) log_2 p_{X1,X2}(x_1, x_2)     (70)
For N joint random variables, X_1, . . . , X_N, the entropy is
H[X_1, . . . , X_N] = −Σ_{x_1 ∈ S_X1} · · · Σ_{x_N ∈ S_XN} p_{X1,...,XN}(x_1, . . . , x_N) log_2 p_{X1,...,XN}(x_1, . . . , x_N)
What is the entropy for N i.i.d. random variables? You can show that
H[X_1, . . . , X_N] = −N Σ_{x_1 ∈ S_X1} p_{X1}(x_1) log_2 p_{X1}(x_1) = N H(X_1)
The entropy of N i.i.d. random variables has N times the entropy of any one of them. In addition, the entropy
of any N independent (but possibly with different distributions) r.v.s is just the sum of the entropy of each
individual r.v.
When r.v.s are not independent, the joint entropy of N r.v.s is less than N times the entropy of one of them.
Intuitively, if you know some of them, because of the dependence or correlation, the rest that you don’t know
become less informative. For example, in the B&W image, since pixels are correlated in space, the joint r.v. of
several neighboring pixels will have less entropy than the sum of the individual pixel entropies.
35.3 Conditional Entropy
How much additional entropy is in the joint random variables X_1, X_2 compared to just one of them? This is
often an important question because it answers the question, “How much additional information do I get from
both, compared to just one of them?”. We call this difference the conditional entropy, H[X_2|X_1]:
H[X_2|X_1] = H[X_2, X_1] − H[X_1].     (71)
What is an equation for H[X_2|X_1] as a function of the joint probabilities p_{X1,X2}(x_1, x_2) and the conditional
probabilities p_{X2|X1}(x_2|x_1)?
Solution: Plugging in (68) for H[X_2, X_1] and H[X_1],
H[X_2|X_1] = −Σ_{x_1 ∈ S_X1} Σ_{x_2 ∈ S_X2} p_{X1,X2}(x_1, x_2) log_2 p_{X1,X2}(x_1, x_2) + Σ_{x_1 ∈ S_X1} p_{X1}(x_1) log_2 p_{X1}(x_1)
          = −Σ_{x_1 ∈ S_X1} Σ_{x_2 ∈ S_X2} p_{X1,X2}(x_1, x_2) log_2 p_{X1,X2}(x_1, x_2) + Σ_{x_1 ∈ S_X1} [Σ_{x_2 ∈ S_X2} p_{X1,X2}(x_1, x_2)] log_2 p_{X1}(x_1)
          = −Σ_{x_1 ∈ S_X1} Σ_{x_2 ∈ S_X2} p_{X1,X2}(x_1, x_2) [log_2 p_{X1,X2}(x_1, x_2) − log_2 p_{X1}(x_1)]
          = −Σ_{x_1 ∈ S_X1} Σ_{x_2 ∈ S_X2} p_{X1,X2}(x_1, x_2) log_2 [p_{X1,X2}(x_1, x_2) / p_{X1}(x_1)]
          = −Σ_{x_1 ∈ S_X1} Σ_{x_2 ∈ S_X2} p_{X1,X2}(x_1, x_2) log_2 p_{X2|X1}(x_2|x_1)     (72)
Note the asymmetry – there is the joint probability multiplied by the log of the conditional probability. This
is not like either the joint or the marginal entropy.
We could also have multi-variate conditional entropy,
H[X_N|X_{N−1}, . . . , X_1] = −Σ_{x_1 ∈ S_X1} · · · Σ_{x_N ∈ S_XN} p_{X1,...,XN}(x_1, . . . , x_N) log_2 p_{XN|X_{N−1},...,X1}(x_N|x_{N−1}, . . . , x_1)
which is the additional entropy (or information) contained in the Nth random variable, given the values of the
N −1 previous random variables.
35.4 Entropy Rate
Typically, we're interested in discrete-time random processes, in which we have random variables X_1, X_2, . . . Since there are infinitely many of them, the joint entropy of all of them may go to infinity as N → ∞. For this case, we are more interested in the rate. How many additional bits, in the limit, are needed for the average r.v. as N → ∞?
Def ’n: Entropy Rate
The entropy rate of a stationary discrete-time random process, in units of bits per random variable (a.k.a.
source output), is defined as
H = lim_{N→∞} H[X_N | X_{N−1}, . . . , X_1].
It can be shown that the entropy rate can equivalently be written as
H = lim_{N→∞} (1/N) H[X_1, X_2, . . . , X_N].
Example: Entropy of English text
Let X_i be the ith letter or space in a common English sentence. What is the sample space S_{X_i}? Is X_i uniform on that space?
What is H[X_i]? Solution: I had Matlab read in the text of Shakespeare's Romeo and Juliet. See Figure 67(a). For this pmf, I calculated an entropy of H = 4.1199. The Proakis & Salehi book mentions that this value for general English text is about 4.3.
What is H[X_i, X_{i+1}]? Solution: Again, using Matlab on Shakespeare's Romeo and Juliet, I calculated the entropy of the joint pmf of each two-letter combination. This gives the two-dimensional pmf shown in Figure 67(b). I calculate an entropy of 7.46, which is 2 × 3.73. For the three-letter combinations, the joint entropy was 10.04 = 3 × 3.35. For four-letter combinations, the joint entropy was 11.98 = 4 × 2.99.
You can see that the average entropy in bits per letter is decreasing quickly.
[Plots: (a) probability of occurrence versus space or letter; (b) joint pmf over first and second space or letter.]
Figure 67: PMF of (a) single letters and (b) two-letter combinations (including spaces) in Shakespeare’s Romeo
and Juliet.
What is the entropy rate, H? Solution: For N = 10, we have H = 1.3 bits/letter [taken from Proakis &
Salehi Section 6.2].
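The single-letter entropy experiment can be repeated on any text. Here is a minimal Python sketch (not the Matlab script used for Figure 67; the file name is a placeholder you would supply):

from collections import Counter
import numpy as np

text = open('romeo_and_juliet.txt').read().lower()              # placeholder path
text = ''.join(ch for ch in text if ch == ' ' or ch.isalpha())  # keep letters and spaces

counts = Counter(text)
p = np.array(list(counts.values()), dtype=float)
p /= p.sum()                                  # empirical single-letter pmf

H = -np.sum(p * np.log2(p))                   # entropy estimate, bits/letter
print(H)                                      # roughly 4.1 for English prose

Extending the same idea to pairs, triples, etc. of letters gives joint entropies like those quoted above.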
35.5 Source Coding Theorem
The key connection between this mathematical definition of entropy and the bit rate that we’ve been talking
about all semester is given by the source coding theorem. It is one of the two fundamental theorems of
information theory, and was introduced by Claude Shannon in 1948.
Theorem: A source with entropy rate H can be encoded with arbitrarily small error probability, at any rate
R (bits / source output) as long as R > H. Conversely, if R < H, the error probability will be bounded away
from zero, independent of the complexity of the encoder and the decoder employed.
Proof: Using typical sequences. See Shannon’s original 1948 paper.
Notes:
• Here, an ‘error’ occurs when your compressed version of the data is not exactly the same as the original.
Example: B&W images.
• R is our 1/T_b.
• Theorem fact: Information measure (entropy) gives us a minimum bit rate.
• What is the minimum possible rate to encode English text (if you remove all punctuation)?
• The theorem does not tell us how to do it – just that it can be done.
• The theorem does not tell us how well it can be done if N is not infinite. That is, for a finite source, the
rate may need to be higher.
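To make the theorem concrete, consider again the five-message source with entropy 15/8 bits. Because its probabilities are powers of 1/2, a simple prefix code achieves the entropy exactly; the sketch below (an added illustration, not from the notes) verifies this:

import numpy as np

code = {0: '0', 1: '10', -1: '110', 2: '1110', -2: '1111'}   # a prefix-free code
pmf  = {0: 1/2, 1: 1/4, -1: 1/8, 2: 1/16, -2: 1/16}

avg_len = sum(pmf[x] * len(code[x]) for x in pmf)     # average bits per source symbol
H = -sum(p * np.log2(p) for p in pmf.values())        # source entropy

print(avg_len, H)    # both are 1.875 = 15/8, so R = H is achievable for this source

For sources whose probabilities are not powers of 1/2, rates arbitrarily close to H require encoding long blocks of symbols, which is where the infinite-length caveat above comes in.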
Lecture 24
Today: (1) Channel Capacity
36 Review
Last time, we defined entropy,
H[X] = − Σ_i p_i log_2 p_i
and entropy rate,
H = lim_{N→∞} (1/N) H[X_1, X_2, . . . , X_N].
We showed that entropy can be used to quantify information. Given our information source X or {X_i}, the value of H[X] or H gives us a measure of how many bits we actually need to use to encode, without loss, the source data.
The major result was Shannon's source coding theorem, which says that a source with entropy rate H can be encoded with arbitrarily small error probability, at any rate R (bits / source output) as long as R > H. Any lower rate than H would guarantee loss of information.
37 Channel Coding
Now, we turn to the noisy channel. This discussion of entropy allows us to consider the maximum data rate which can be carried without error on a bandlimited channel which is affected by additive white Gaussian noise (AWGN).
37.1 R. V. L. Hartley
Ralph V. L. Hartley (born Nov. 30, 1888) received the A.B. degree from the University of Utah in 1909. He
worked as a researcher for the Western Electric Company, involved in radio telephony. Here he developed the
“Hartley oscillator”. Afterwards, at Bell Laboratories, he developed relationships useful for determining the
capacity of bandlimited communication channels. In July 1928, he published in the Bell System Technical
Journal a paper on “Transmission of Information”.
Hartley was particularly influenced by Nyquist's result. When transmitting a sequence of pulses, each of duration T_sy, Nyquist determined that the pulse rate was limited to two times the available channel bandwidth B,
1/T_sy ≤ 2B.
In Hartley’s 1928 paper, he considered digital transmission in pulse-amplitude modulated systems. The
pulse rate was limited to 2B, as described by Nyquist. But, depending on how pulse amplitudes were chosen,
each pulse could represent more or less information.
In particular, Hartley assumed that the maximum amplitude available to the transmitter was A. Then, Hartley made the assumption that the communication system could discern between pulse amplitudes if they were separated by at least a voltage spacing of A_δ. Given that a PAM system operates from 0 to A in increments of A_δ, the number of different pulse amplitudes (symbols) is
M = 1 + A/A_δ
Note that early receivers were modified AM envelope detectors, and did not deal well with negative amplitudes.
Next, Hartley used the ‘bit’ measure to quantify the data which could be encoded using M amplitude levels,
log_2 M = log_2 (1 + A/A_δ)
Finally, Hartley quantified the data rate using Nyquist’s relationship to determine the maximum rate C, in
bits per second, possible from the digital communication system,
C = 2B log_2 (1 + A/A_δ)
(Note that the Hartley transform came later in his career in 1942).
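As a numerical illustration of Hartley's formula (a made-up example, not from the notes): with a B = 3 kHz channel and amplitudes resolvable in steps of A_δ = A/7, there are M = 8 levels, and the short sketch below evaluates the rate limit.

import numpy as np

B = 3000.0                 # channel bandwidth, Hz
M = 8                      # distinguishable amplitude levels, M = 1 + A/A_delta

C = 2 * B * np.log2(M)     # Hartley's maximum rate, bits/second
print(C)                   # 18000 bits/second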
37.2 C. E. Shannon
What was left unanswered by Hartley's capacity formula was the relationship between noise and the minimum amplitude separation between symbols. Engineers would have to be conservative when setting A_δ to ensure a low probability of error.
Furthermore, the capacity formula was for a particular type of PAM system, and did not say anything
fundamental about the relationship between capacity and bandwidth for arbitrary modulation.
37.2.1 Noisy Channel
Shannon did take into account an AWGN channel, and used statistics to develop a universal bound for capacity, regardless of modulation type. In this AWGN channel, the ith symbol sample at the receiver (after the matched filter, assuming perfect synchronization) is y_i,
y_i = x_i + z_i
where X is the transmitted signal and Z is the noise in the channel. The noise term z_i is assumed to be i.i.d. Gaussian with variance P_N.
37.2.2 Introduction of Latency
Shannon's key insight was to exchange latency (time delay) for reduced probability of error. In fact, his capacity bound considers simultaneously demodulating sequences of received symbols, y = [y_1, . . . , y_n], of length n. All n symbols are received before making a decision. This late decision will decide all values of x = [x_1, . . . , x_n] simultaneously. Further, Shannon's proof considers the limiting case as n → ∞.
This asymptotic limit as n → ∞ allows for a proof using the statistical convergence of a sequence of random variables. In particular, we need a law called the law of large numbers (an ECE 6962 topic). This law says that the following event,
(1/n) Σ_{i=1}^{n} (y_i − x_i)^2 ≤ P_N
happens with probability one, as n → ∞. In other words, as n → ∞, the measured value y will be located within an n-dimensional sphere (hypersphere) of radius √(n P_N) with center x.
37.2.3 Introduction of Power Limitation
Shannon also formulated the problem as a power-limited case, in which the average power in the desired signal x_i was limited to P. That is,
(1/n) Σ_{i=1}^{n} x_i^2 ≤ P
This combination of signal power limitation and noise power results in the fact that,
(1/n) Σ_{i=1}^{n} y_i^2 ≈ (1/n) Σ_{i=1}^{n} x_i^2 + (1/n) Σ_{i=1}^{n} (y_i − x_i)^2 ≤ P + P_N
As a result,
|y|^2 ≤ n(P + P_N)
This result says that the vector y, with probability one as n → ∞, is contained within a hypersphere of radius √(n(P + P_N)) centered at the origin.
37.3 Combining Two Results
The two results, together, show how many different symbols we could have uniquely distinguished within a period of n sample times. Hartley asked how many symbol amplitudes could be fit into [0, A] such that they are all separated by A_δ. Shannon's formulation asks how many multidimensional amplitudes x_i can be fit into a hypersphere of radius √(n(P + P_N)) centered at the origin, such that hyperspheres of radius √(n P_N) do not overlap. This is shown in P&S in Figure 9.9 (pg. 584).
This number M is the number of different messages that could have been sent in n pulses. The result of this geometrical problem is that
M = (1 + P/P_N)^{n/2}   (73)
37.3.1 Returning to Hartley
Adjusting Hartley’s formula, if we could send M messages now in n pulses (rather than 1 pulse) we would adjust
capacity to be:
C = (2B/n) log_2 M
Using the M from (73) above,
C = (2B/n) (n/2) log_2 (1 + P/P_N) = B log_2 (1 + P/P_N)
37.3.2 Final Results
Finally, we can replace the noise variance P_N using its relationship to the noise two-sided PSD N_0/2:
P_N = N_0 B
So finally we have the Shannon-Hartley Theorem,
C = B log_2 (1 + P/(N_0 B))   (74)
Typically, people also use W = B so you’ll see also
C = W log_2 (1 + P/(N_0 W))   (75)
This result says that a communication system can operate at bit rate C (in a bandlimited channel with width W, given power limit P and noise value N_0), with arbitrarily low probability of error.
Shannon also proved that any system which operates at a bit rate higher than the capacity C will certainly incur a positive bit error rate. Any practical communication system must operate at R_b < C, where R_b is the operating bit rate.
Note that the ratio P/(N_0 W) is the signal power divided by the noise power, or signal-to-noise ratio (SNR). Thus the capacity bound is also written C = W log_2 (1 + SNR).
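For a concrete number (a hypothetical link, not from the notes), the capacity formula is a one-liner:

import numpy as np

W = 1e6                       # bandwidth, Hz
snr_dB = 20.0                 # SNR = P / (N0 * W), in dB
snr = 10 ** (snr_dB / 10)

C = W * np.log2(1 + snr)      # Shannon-Hartley capacity, bits/second
print(C)                      # about 6.66 Mbps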
37.4 Efficiency Bound
Recall that E_b = P T_b. That is, the energy per bit is the power multiplied by the bit duration. Thus from (74),
C = W log_2 (1 + (E_b/T_b)/(N_0 W))
or since R_b = 1/T_b,
C = W log_2 (1 + (R_b/W)(E_b/N_0))
Here, C is just a capacity limit. We know that our bit rate R_b ≤ C, so
R_b/W ≤ log_2 (1 + (R_b/W)(E_b/N_0))
Defining η = R_b/W,
η ≤ log_2 (1 + η E_b/N_0)
This expression can't be solved analytically for η. However, you can look at it as a bound on the bandwidth efficiency as a function of the E_b/N_0 ratio. This relationship is shown in Figure 68.
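Although the bound cannot be solved in closed form for η, it can be solved for E_b/N_0: for a target spectral efficiency η, the bound requires E_b/N_0 ≥ (2^η − 1)/η. The short sketch below (an added illustration; the η values are arbitrary) tabulates this minimum:

import numpy as np

eta = np.array([0.5, 1, 2, 4, 8])            # target spectral efficiencies, b/s/Hz
ebn0_min = (2.0 ** eta - 1) / eta            # minimum Eb/N0, linear
ebn0_min_dB = 10 * np.log10(ebn0_min)

for e, d in zip(eta, ebn0_min_dB):
    print(f'eta = {e:4.1f} b/s/Hz  ->  Eb/N0 >= {d:5.2f} dB')

As η → 0, the required E_b/N_0 approaches ln 2 ≈ −1.59 dB, the Shannon limit.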
Lecture 25
Today: (1) Channel Capacity
• Lecture Tue April 28 is canceled. Instead, please attend the Robert Graves lecture at 3:05 PM in Room 105 WEB, “Digital Terrestrial Television Broadcasting”.
• The final project is due Tue, May 5, 5pm. No late projects are accepted, because of the late due date.
If you think you might be late, set your deadline to May 4.
• Everyone gets a 100% on the “Discussion Item” which I never implemented.
[Plots: (a) and (b) bits/second per Hertz versus E_b/N_0 ratio (dB).]
Figure 68: From the Shannon-Hartley theorem, bound on bandwidth efficiency, η, on a (a) linear plot and (b)
log plot.
38 Review
Last time, we defined entropy,
H[X] = − Σ_i p_i log_2 p_i
and entropy rate,
H = lim_{N→∞} (1/N) H[X_1, X_2, . . . , X_N].
We showed that entropy can be used to quantify information. Given our information source X or {X_i}, the value of H[X] or H gives us a measure of how many bits we actually need to use to encode, without loss, the source data.
The major result was Shannon's source coding theorem, which says that a source with entropy rate H can be encoded with arbitrarily small error probability, at any rate R (bits / source output) as long as R > H. Any lower rate than H would guarantee loss of information.
39 Channel Coding
Now, we turn to the noisy channel. This discussion of entropy allows us to consider the maximum data rate which can be carried without error on a bandlimited channel which is affected by additive white Gaussian noise (AWGN).
39.1 R. V. L. Hartley
Ralph V. L. Hartley (born Nov. 30, 1888) received the A.B. degree from the University of Utah in 1909. He
worked as a researcher for the Western Electric Company, involved in radio telephony. Afterwards, at Bell Labo-
ratories, he developed relationships useful for determining the capacity of bandlimited communication channels.
In July 1928, he published in the Bell System Technical Journal a paper on “Transmission of Information”.
Hartley was particularly influenced by Nyquist's sampling theorem. When transmitting a sequence of pulses, each of duration T_s, Nyquist determined that the pulse rate was limited to two times the available channel bandwidth B,
1/T_s ≤ 2B.
In Hartley’s 1928 paper, he considered digital transmission in pulse-amplitude modulated systems. The
pulse rate was limited to 2B, as described by Nyquist. But, depending on how pulse amplitudes were chosen,
each pulse could represent more or less information.
In particular, Hartley assumed that the maximum amplitude available to the transmitter was A. Then, Hartley made the assumption that the communication system could discern between pulse amplitudes if they were separated by at least a voltage spacing of A_δ. Given that a PAM system operates from 0 to A in increments of A_δ, the number of different pulse amplitudes (symbols) is
M = 1 + A/A_δ
Note that early receivers were modified AM envelope detectors, and did not deal well with negative amplitudes.
Next, Hartley used the ‘bit’ measure to quantify the data which could be encoded using M amplitude levels,
log_2 M = log_2 (1 + A/A_δ)
Finally, Hartley quantified the data rate using Nyquist’s relationship to determine the maximum rate C, in
bits per second, possible from the digital communication system,
C = 2B log_2 (1 + A/A_δ)
39.2 C. E. Shannon
What was left unanswered by Hartley's capacity formula was the relationship between noise and the minimum amplitude separation between symbols. Engineers would have to be conservative when setting A_δ to ensure a low probability of error.
Furthermore, the capacity formula was for a particular type of PAM system, and did not say anything
fundamental about the relationship between capacity and bandwidth for arbitrary modulation.
39.2.1 Noisy Channel
Shannon did take into account an AWGN channel, and used statistics to develop a universal bound for capacity, regardless of modulation type. In this AWGN channel, the ith symbol sample at the receiver (after the matched filter, assuming perfect synchronization) is y_i,
y_i = x_i + z_i
where X is the transmitted signal and Z is the noise in the channel. The noise term z_i is assumed to be i.i.d. Gaussian with variance E_N = N_0/2.
39.2.2 Introduction of Latency
Shannon's key insight was to exchange latency (time delay) for reduced probability of error. In fact, his capacity bound considers n-dimensional signaling. So the received vector is y = [y_1, . . . , y_n], of length n. These might be truly an n-dimensional signal (e.g., FSK), or they might use multiple symbols over time (recall that symbols at different multiples of T_s are orthogonal). In either case, Shannon uses all n dimensions in the constellation – the detector must use all n samples of y to make a decision. In the multiple-symbols-over-time case, this late decision will decide all values of x = [x_1, . . . , x_n] simultaneously. Further, Shannon's proof considers the limiting case as n → ∞.
This asymptotic limit as n → ∞ allows for a proof using the statistical convergence of a sequence of random variables. In particular, we need a law called the law of large numbers. This law says that the following event,
(1/n) Σ_{i=1}^{n} (y_i − x_i)^2 ≤ E_N
happens with probability one, as n → ∞. In other words, as n → ∞, the measured value y will be located within an n-dimensional sphere (hypersphere) of radius √(n E_N) with center x.
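The law-of-large-numbers step can be visualized with a tiny Monte Carlo experiment (an added sketch, not from the notes; the value of E_N here is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
E_N = 0.5                                        # hypothetical noise variance

for n in [10, 1000, 100000]:
    z = rng.normal(scale=np.sqrt(E_N), size=n)   # z_i = y_i - x_i
    print(n, np.mean(z ** 2))                    # concentrates around E_N as n grows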
39.2.3 Introduction of Power Limitation
Shannon also formulated the problem as an energy-limited case, in which the maximum symbol energy in the desired signal x_i was limited to E. That is,
(1/n) Σ_{i=1}^{n} x_i^2 ≤ E
This combination of signal energy limitation and noise energy results in the fact that,
(1/n) Σ_{i=1}^{n} y_i^2 ≈ (1/n) Σ_{i=1}^{n} x_i^2 + (1/n) Σ_{i=1}^{n} (y_i − x_i)^2 ≤ E + E_N
As a result,
|y|^2 ≤ n(E + E_N)
This result says that the vector y, with probability one as n → ∞, is contained within a hypersphere of radius √(n(E + E_N)) centered at the origin.
39.3 Combining Two Results
The two results, together, show how many different symbols we could have uniquely distinguished within a period of n sample times. Hartley asked how many symbol amplitudes could be fit into [0, A] such that they are all separated by A_δ. Shannon's formulation asks how many multidimensional amplitudes x_i can be fit into a hypersphere of radius √(n(E + E_N)) centered at the origin, such that hyperspheres of radius √(n E_N) do not overlap. This is shown in Figure 69.
[Diagram: a large hypersphere of radius √(n(E + E_N)) packed with smaller hyperspheres of radius √(n E_N).]
Figure 69: Shannon's capacity formulation simplifies to the geometrical question of: how many hyperspheres of a smaller radius √(n E_N) fit into a hypersphere of radius √(n(E + E_N))?
This number M is the number of different messages that could have been sent in n pulses. The result of this geometrical problem is that
M = (1 + E/E_N)^{n/2}   (76)
39.3.1 Returning to Hartley
Adjusting Hartley’s formula, if we could send M messages now in n pulses (rather than 1 pulse) we would adjust
capacity to be:
C = (2B/n) log_2 M
Using the M from (76) above,
C = (2B/n) (n/2) log_2 (1 + E/E_N) = B log_2 (1 + E/E_N)
39.3.2 Final Results
Since energy is power multiplied by time, E = P T_s = P/(2B), where P is the maximum signal power and B is the bandwidth, and E_N = N_0/2, we have the Shannon-Hartley Theorem,
C = B log_2 (1 + P/(N_0 B)).   (77)
This result says that a communication system can operate at bit rate C (in a bandlimited channel with width B, given power limit P and noise value N_0), with arbitrarily low probability of error.
Shannon also proved that any system which operates at a bit rate higher than the capacity C will certainly incur a positive bit error rate. Any practical communication system must operate at R_b < C, where R_b is the operating bit rate.
Note that the ratio P/(N_0 B) is the signal power divided by the noise power, or signal-to-noise ratio (SNR). Thus the capacity bound is also written C = B log_2 (1 + SNR).
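One practical reading of (77) is that, for a fixed P/N_0, adding bandwidth helps less and less. The sketch below (hypothetical numbers, not from the notes) shows capacity saturating at (P/N_0) log_2 e as B grows:

import numpy as np

P_over_N0 = 1e6                                  # P/N0, in Hz (i.e., 60 dB-Hz)

for B in [1e4, 1e5, 1e6, 1e7, 1e8]:              # bandwidth, Hz
    C = B * np.log2(1 + P_over_N0 / B)           # capacity from (77), bits/second
    print(f'B = {B:9.0f} Hz   C = {C/1e6:7.3f} Mbps')

print(P_over_N0 * np.log2(np.e) / 1e6, 'Mbps (wideband limit)')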
39.4 Efficiency Bound
Another way to write the maximum signal power P is to multiply it by the bit period and use it as the maximum energy per bit, i.e., E_b = P T_b. That is, the energy per bit is the maximum power multiplied by the bit duration. Thus from (77),
C = B log_2 (1 + (E_b/T_b)/(N_0 B))
or since R_b = 1/T_b,
C = B log_2 (1 + (R_b/B)(E_b/N_0))
Here, C is just a capacity limit. We know that our bit rate R_b ≤ C, so
R_b/B ≤ log_2 (1 + (R_b/B)(E_b/N_0))
Defining η = R_b/B (the spectral efficiency),
η ≤ log_2 (1 + η E_b/N_0)
This expression can't be solved analytically for η. However, you can look at it as a bound on the bandwidth efficiency as a function of the E_b/N_0 ratio. This relationship is shown in Figure 70. Figure 71 is the plot on a log-y axis with some of the modulation types discussed this semester.
[Plot: bits/second per Hertz versus E_b/N_0 ratio (dB).]
Figure 70: From the Shannon-Hartley theorem, bound on bandwidth efficiency, η.
[Plot: bandwidth efficiency (bps/Hz) versus E_b/N_0 (dB) for M-FSK (M = 4, 16, 64, 256, 1024), M-QAM (M = 4, 16, 64, 256, 1024), and M-PSK (M = 8, 16, 32, 64), compared against the Shannon-Hartley bound.]
Figure 71: From the Shannon-Hartley theorem bound with achieved bandwidth efficiencies of M-QAM, M-PSK,
and M-FSK.
But graduates today will almost always encounter / be developing solely digital communication systems.ECE 5520 Fall 2009 8 Lecture 1 Today: (1) Syllabus (2) Intro to Digital Communications 1 Class Organization Textbook Few textbooks cover solely digital communications (without analog) in an introductory communications course. real-valued. My written notes do not and cannot reflect everything said during lecture: I answer questions and understand your perspective better after hearing your questions. i. J.. Prentice Hall. continuous-time functions. Taking notes is important: I find most learning requires some writing on your part. They might be continuous-time (analog) signals (audio. reception) at a receiver is simple. 2. they may already be digital (discrete-time.G. discrete-valued information across a physical channel. and highfidelity. on the channel (medium). I find it to be very well-written. Our object is to convey the signals or data to another place (or time) with as faithful representation as possible. These waveforms are from a discrete set of possible waveforms. and more importantly. 2001. Or. what we won’t cover. bandwidthlimited analog medium.e. 2. and does it in depth. Lecture Notes I type my lecture notes. In this section we talk about what we’ll cover in this class. Digital information on an analog medium: We can send waveforms. The keys points stuffed into that one sentence are: 1.1 ”Executive Summary” Here is the one sentence version: We will study how to efficiently encode digital data on a noisy. What set of waveforms should we use? Why? . and/or after lecture online. you must accept these conditions: 1. Salehi. These can be available to you at lecture. Unless specified. Please take your own notes. efficient. If I didn’t. I will provide additional readings solely to provide another presentation style or fit another learning style. Communication Systems Engineering. and that the other half is sparse and needs supplemental material. even up to the lecture time.. Students didn’t like that I had so many supplemental readings. I’m constantly updating my notes. For example. text.e. This year’s text covers primarily digital communications. and I try to tailor my approach during the lecture. However. So half of most textbooks are useless. Proakis & Salehi. Information sources might include audio. discrete-valued). Proakis and M. the past text was the ‘standard’ text in the area for an undergraduate course. or data. Finally. And. these are optional. I have taught ECE 5520 previously and have my lecture notes from past years. there are few options in this area. you could just watch a recording! 2 Introduction A digital communication system conveys discrete-time. 2nd edition. so that decoding the data (i. not just watching. images) and even 1-D or 2-D. video.

to have the source impedance match the destination impedance. wireless keyboard.. Why? • Fidelity • Energy: transmit power. 2. this course would study both analog and digital communication systems. Analog systems still exist and will continue to exist.e. by Proakis & Salehi. send them through a medium. and low energy consumption and bandwidth usage (the costs of our communication system). is as follows: • Application • Presentation (*) • Session (*) • Transport • Network . however. we study how to take ones and zeros from a transmitter. In the recent past. and Fidelity: Fidelity is the correctness of the received data (i.ECE 5520 Fall 2009 9 2. There’s a lot more than this to the digital communication systems which you use on a daily basis (e. Decoding the data: When receiving a signal (a function) in noise.g. for power efficiency. To manage complexity.3 Networking Stack In this course. which you would study in a CS computer networking class. and device power consumption • Bandwidth efficiency: due to coding gains • Moore’s Law is decreasing device costs for digital hardware • Increasing need for digital information • More powerful information security 2.. The 7-layer OSI stack. What is the tradeoff between energy. That is. wireless car key). and then (hopefully) correctly identify the same ones and zeros at the receiver. You want. such as radio and television broadcasting (Chapter 3). none of the original waveforms will match exactly. and fidelity? We all want high fidelity. How do you make a decision about which waveform was sent? 3. What makes a receiver difficult to realize? What choices of waveforms make a receiver simpler to implement? What techniques are used in a receiver to compensate? 4. WiFi. You can look at this like an impedance matching problem from circuits. and we build a layered network to handle the application.2 Why not Analog? The previous text used for this course. we (engineers) don’t try to build a system to do everything all at once. Efficiency. we study digital communications from bits to bits. development of new systems will almost certainly be of digital communication systems. bandwidth. the opposite of error rate). We typically start with an application. Bluetooth. has an extensive analysis and study of analog communication systems. In digital comm. this means that we want our waveform choices to match the channel and receiver to maximize the efficiency of the communication system. iPhone. Bandwidth.

optical fiber. It may be continuous or packetized.ECE 5520 Fall 2009 • Link Layer • Physical (PHY) Layer 10 (Note that there is also a 5-layer model in which * layers are considered as part of the application layer. but we largely can’t change the properties of the medium (although there are exceptions). It is primarily divided into: • Multiple Access Control (MAC) • Encoding • Channel / Medium We can control the MAC and the encoding chosen for a digital communication. • Disk (data storage applications) 2. the physical layer has much more detail. 2. . coaxial cable. Here are some media: • EM Spectra: (anything above 0 Hz) Radio. In fact. and quantization of continuous-valued signals.) ECE 5520 is part of the bottom layer. • Source Encoding: Finding a compact digital representation for the data source. the physical layer. (middle) channel.5 Encoding / Decoding Block Diagram Source Encoder Channel Encoder Modulator Up-conversion Information Source Other Components: Digital to Analog Converter Analog to Digital Converter Synchronization Channel Information Output Source Decoder Channel Decoder Demodulator Downconversion Figure 1: Block diagram of a single-user digital communication system. . including (top) transmitter.. Includes sampling of continuous-time signals.4 Channels and Media We can chose from a few media. What are some compression methods that you’re familiar with? We present an introduction to source encoding at the end of this course. mm-wave bands. light • Acoustic: ultrasound • Transmission lines. wire pairs. Microwave. and (bottom) receiver.. Also includes compression of those sources (lossy. Notes: • Information source comes from higher networking layers. or lossless). waveguides.

2. . Well modeled as Gaussian. However. Interference from other transmitted signals. the TX and RX do not move or change. attenuation in telephone wires varies by frequency. The noise comes from a variety of sources. if the medium. can correct some bit errors. Noise Transmitted Signal LTI Filter h(t) Received Signal Figure 2: Linear filter and additive noise channel model. For example. like layering. All of these can be modeled as linear filters.ECE 5520 Fall 2009 11 • Channel encoding refers to redundancy added to the signal such that any bit errors can be corrected. We will not study channel encoding. we will focus primarily on the AWGN channel. and scattering of the signal off of the objects in the environment. but predominantly: 1. Even for stationary TX and RX. The linear filtering of the channel result from the physics and EM of the medium. etc. Wideband wireless channels display multipath. A channel decoder. 2. in real wireless channels. movement of cars. In this course. but we will mention what variations exist for particular channels. we lump into the ‘interference’ category. • Channels: See above for examples. like impedance matching ensured optimal power transfer.6 Channels A channel can typically be modeled as a linear filter with the addition of noise. He showed that (given enough time) we can consider them separately without additional loss. • Modulation refers to the digital-to-analog conversion which produces a continuous-time signal that can be sent on the physical channel. or linear filtering channel. And separation. trees. diffractions. The filter may be constant. for mobile radio. and how they are addressed. Modulation and demodulation will be the main focus of this course. or time-invariant. and white. reduces complexity to the designer. people. thus it is referred to as additive white Gaussian noise (AWGN). These may result in non-Gaussian noise distribution.proper matching of a modulation to a channel allows optimal information transfer. in the environment may change the channel slowly over time. These other transmitters whose signals we cannot completely cancel. Why do we do both source encoding (which compresses the signal as much as possible) and also channel encoding (which adds redundancy to the signal)? Because of Shannon’s source-channel coding separation theorem. Thermal background noise: Due to the physics of living above 0 Kelvin. but it is a topic in the (ECE 6520) Coding Theory. due to multiple time-delayed reflections. Typical models are additive noise. because of the redundancy. or non-white noise spectral density. It is analogous to impedance matching . the channel may change very quickly over time. Narrowband wireless channels experience fading that varies quickly as a function of frequency.

8 Topic: Frequency Domain Representations To fit as many signals as possible onto a channel. we need to show that they are orthogonal. We want to build the best receiver possible despite the impediments. There is a tradeoff between frequency requirements and time requirements. Signals in different frequency bands are orthogonal. We will study orthogonal signals.9 Topic: Orthogonality and Signal spaces To show that signals sharing the same channel don’t interfere with each other. This may be tolerated. as the example in Figure 36. Separating signals by frequency band is called frequencydivision multiple access (FDMA). and timing offsets These random signals often pass through LTI filters. We have to tolerate errors. we often split the signals by frequency. TCP-IP) can determine that an error occurred and then re-request the data. The Fourier transform of our modulated. M=8 M=16 Figure 3: Example signal space diagram for M -ary Phase Shift Keying. for (a) M = 8 and (b) M = 16. transmitted signal is used to show that it meets the spectrum allocation limits of the FCC. and are sampled. in order to provide multiple independent means (dimensions) on which to modulate information. Each point is a vector which can be used to send a 3 or 4 bit sequence. this is controlled by the FCC (in the US) and called spectrum allocation. and learn an algorithm to take an arbitrary set of signals and output a set of orthogonal signals with which to represent them. Optimal receiver design is something that we study using probability theory. This means.. Noise and attenuation of the channel will cause bit errors to be made by the demodulator and even the channel decoder. We’ll use signal spaces to show graphically the results. attenuation. . We can also employ multiple orthogonal signals in a single transmitter and receiver. The concept of sharing a channel is called multiple access (MA). in short. phase. that a receiver can uniquely separate them.7 Topic: Random Processes Random things in a communication system: • Noise in the channel • Signal (bits) • Channel filtering. 2. 2. which will be a major part of this course.ECE 5520 Fall 2009 12 2. or a higher layer networking protocol (eg. and fading • Device frequency. For the wireless channel.

Devices and Circuits: (ECE 3700) Fundamentals of Digital System Design. (ECE 6520) Information Theory and Coding. Use the units: energy is measured in Joules (J). (CS 5480) Computer Networks 7. such as x(t). power is measured in Watts (W) which is the same as Joules/second (J/sec). and starts the review of tools to analyze frequency content and bandwidth. (ECE 5324) Antenna Theory. dB (2) Time-domain concepts (3) Bandwidth. When we want to know the power of a signal we assume it is being dissipated in a 1 Ohm resistor. Fourier Transform Two of the biggest limitations in communications systems are (1) energy / power . . so |x(t)|2 is the power dissipated at time t (since power is equal to the voltage squared divided by the resistance). recall that our standard in signals and systems is define our signals. 3 Power and Energy Recall that energy is power times time. E will be infinite because the signal is non-zero for an infinite duration of time (it is always on). 2. Energy. (ECE 5411) Fiberoptic Systems 4. (ECE 3500) Signals and Systems. (ECE 5720) Analog IC Design 6.10 Related classes 1. Advanced Classes: (ECE 6590) Software Radio. These signals we call power signals and we compute their power as P = lim 1 T →∞ 2T T −T |x(t)|2 dt The signal with finite energy is called an energy signal. Electromagnetics: EM Waves. Pre-requisites: (ECE 5510) Random Processes.ECE 5520 Fall 2009 13 2. and (2) bandwidth. Also. (ECE 5320-5321) Microwave Engineering. Breadth: (ECE 5325) Wireless Communications 5. as voltage signals (V). A signal x(t) has energy defined as E= ∞ −∞ |x(t)|2 dt For some signals. Signal Processing: (ECE 5530): Digital Signal Processing 3. Networking: (ECE 5780) Embedded System Design. (ECE 6540): Estimation Theory Lecture 2 Today: (1) Power. Today’s lecture provides some tools to deal with power and energy.

20 in the dB formula. l. 3. [L(f )]dB = 10 log 10 |H(f )|2 (3) Note: Why is the capital B used? Either the lowercase ‘b’ in the SI system is reserved for bits. the output of the channel has 100 times less power than the input to the channel. you can quickly convert losses or gains between linear and dB units without a calculator. energy and power are defined as: E = ∞ n=−∞ |x(n)|2 N n=−N (1) |x(n)|2 P 1 = lim N →∞ 2N + 1 (2) 3. then [P ]dBW = 10 log 10 Plin Decibels are more general . Remember these two dB numbers: • 3 dB: This means the number is double in linear terms. we use the unit decibel 10 log 10 (·) which is then abbreviated as dB in the SI system. But.ECE 5520 Fall 2009 14 3. this refers to a loss in power. for example. it was capitalized. Example: Convert dB to linear values: (4) . Essentially. • Only use 20 as the multiplier if you are converting from voltage to power.they can apply to other unitless quantities as well. Just convert any dB number into a sum of multiples of 10. Matlab uses parentheses also.e. In either case.1 Discrete-Time Signals In this book. m). so when the ‘bel’ was first proposed as log10 (·). and 1. I’m sorry this is not more obvious in the notation. With these three numbers. taking the log10 of a voltage and expecting the result to be a dB power value. such as a gain (loss) L(f ) through a filter H(f ). For discrete-time signals. so we’ll follow the Rice text notation. Our standard is to consider power gains and losses.. it is a discrete-time function. the channel has a loss of 20 dB. You may be more familiar with the x[n] notation. Note that (3) could also be written as: [L(f )]dB = 20 log10 |H(f )| Be careful with your use of 10 vs. If Plin is the power in Watts. And maybe this one: • 1 dB: This means the number is a little over 25% more (multiply by 5/4) in linear terms. or it referred to a name ‘Bell’ so it was capitalized. not voltage gains and losses. whenever you see a function of n (or k.2 Decibel Notation We often use a decibel (dB) scale for power. In particular. whenever you see a function of t (or perhaps τ ) it is a continuous-time function. So if we say. i. • 10 dB: This means the number is ten times in linear terms. we refer to discrete samples of the sampled signal x as x(n).

Example: Convert dB to linear values:
1. 30 dBW
2. 33 dBm
3. -20 dB
4. 4 dB

Example: Convert linear values to dB:
1. 0.2 W
2. 40 mW

Example: Convert power relationships to dB:
Convert each expression to one which involves only dB terms.

1. P_y,lin = 100 P_x,lin
2. P_r,lin = P_t,lin G_t,lin G_r,lin λ² / (4πd)², where P_t,lin is the transmit power (W), P_r,lin is the received power (W), G_t,lin and G_r,lin are the linear gains in the antennas, λ is the wavelength (m), and d is the path length (m). This is the Friis free-space path loss formula.
3. P_o,lin = G_connector,lin L_cable,lin^{-d} P_t,lin, where P_o,lin is the received power in a fiber-optic link, d is the cable length (typically in units of km), G_connector,lin is the gain in any connectors, and L_cable,lin is a loss in a 1 km cable.

These last two are what we will need in Section 6, when we discuss link budgets. The main idea is that we have a limited amount of power which will be available at the receiver.

4 Time-Domain Concept Review

4.1 Periodicity

Def ’n: Periodic (continuous-time)
A signal x(t) is periodic if x(t) = x(t + T0) for some constant T0 ≠ 0, for all t ∈ R. The smallest such constant T0 > 0 is the period.

Def ’n: Periodic (discrete-time)
A DT signal x(n) is periodic if x(n) = x(n + N0) for some integer N0 ≠ 0, for all integers n. The smallest positive integer N0 is the period.

If a signal is not periodic it is aperiodic.

Periodic signals have Fourier series representations, as defined in Rice Ch. 2.
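To see how a relationship like the Friis formula looks once everything is in dB, here is a small Matlab sketch (my own illustration; the numerical parameter values are assumptions chosen only for the example):

% Friis free-space path loss, entirely in dB terms:
% P_r,dB = P_t,dB + G_t,dB + G_r,dB + 20*log10(lambda) - 20*log10(4*pi*d)
Pt_dBm = 20;              % transmit power, dBm (assumed)
Gt_dB  = 3;  Gr_dB = 3;   % antenna gains, dB (assumed)
f      = 2.4e9;           % carrier frequency, Hz (assumed)
lambda = 3e8/f;           % wavelength, m
d      = 100;             % path length, m (assumed)
Pr_dBm = Pt_dBm + Gt_dB + Gr_dB + 20*log10(lambda) - 20*log10(4*pi*d)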


4.2

Impulse Functions

Def ’n: Impulse Function
The (Dirac) impulse function δ(t) is the function which makes

∫_{-∞}^{∞} x(t) δ(t) dt = x(0)     (5)

true for any function x(t) which is continuous at t = 0. We are defining a function by its most important property, the ‘sifting property’. Is there another definition which is more familiar?

Solution:

δ(t) = lim_{T→0} δ_T(t),   where δ_T(t) = { 1/(2T), −T ≤ t ≤ T;   0, o.w. }

(The height 1/(2T) keeps the area of the pulse equal to one.)
You can visualize δ(t) here as an infinitely high, infinitesimally wide pulse at the origin, with area one. This is why it ‘pulls out’ the value of x(t) in the integral in (5). Other properties of the impulse function:
• Time scaling,
• Symmetry,
• Sifting at arbitrary time t0.

The continuous-time unit step function is

u(t) = { 1, t ≥ 0;   0, o.w. }

Example: Sifting Property
What is ∫_{-∞}^{∞} [sin(πt)/(πt)] δ(1 − t) dt?

The discrete-time impulse function (also called the Kronecker delta or δ_K) is defined as:

δ(n) = { 1, n = 0;   0, o.w. }

(There is no need to get complicated with the math; this is well defined.) Also, the discrete-time unit step function is

u(n) = { 1, n ≥ 0;   0, o.w. }
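As a quick numerical sanity check of the sifting example above, one can approximate δ(1 − t) by a narrow, unit-area pulse centered at t = 1 (my own Matlab sketch, not part of the notes); the integral should approach sin(π·1)/(π·1) = 0:

% Approximate the sifting integral with a narrow unit-area pulse at t = 1
T  = 1e-3;                 % half-width of the approximate impulse
dt = 1e-6;
t  = 1-10*T : dt : 1+10*T; % only need to integrate near t = 1
delta_approx = (abs(t-1) <= T) / (2*T);      % rectangular pulse, area = 1
approx_value = sum( sin(pi*t)./(pi*t) .* delta_approx ) * dt
% approx_value is close to sin(pi*1)/(pi*1) = 0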

5

Bandwidth

Bandwidth is another critical resource for a digital communications system; we have various definitions to quantify it. In short, it isn’t easy to describe a signal in the frequency domain with a single number. And, in the end, a system will be designed to meet a spectral mask required by the FCC or system standard.

Continuous-time, aperiodic signals:
  Laplace Transform: x(t) ↔ X(s)
  Fourier Transform: x(t) ↔ X(jω)
    X(jω) = ∫_{t=−∞}^{∞} x(t) e^{−jωt} dt
    x(t) = (1/(2π)) ∫_{ω=−∞}^{∞} X(jω) e^{jωt} dω

Continuous-time, periodic signals:
  Fourier Series: x(t) ↔ a_k

Discrete-time, aperiodic signals:
  z-Transform: x(n) ↔ X(z)
  Discrete-Time Fourier Transform (DTFT): x(n) ↔ X(e^{jΩ})
    X(e^{jΩ}) = Σ_{n=−∞}^{∞} x(n) e^{−jΩn}
    x(n) = (1/(2π)) ∫_{Ω=−π}^{π} X(e^{jΩ}) e^{jΩn} dΩ

Discrete-time, periodic signals:
  Discrete Fourier Transform (DFT): x(n) ↔ X[k]
    X[k] = Σ_{n=0}^{N−1} x(n) e^{−j(2π/N)kn}
    x(n) = (1/N) Σ_{k=0}^{N−1} X[k] e^{j(2π/N)nk}

Table 1: Frequency Transforms

Intuitively, bandwidth is the maximum extent of our signal's frequency-domain characterization, call it X(f). A baseband signal's absolute bandwidth is often defined as the W such that X(f) = 0 for all f except for the range −W ≤ f ≤ W. Other definitions for bandwidth are:

• 3-dB bandwidth: B_3dB is the value of f such that |X(f)|² = |X(0)|²/2.
• 90% bandwidth: B_90% is the value which captures 90% of the energy in the signal:

∫_{−B_90%}^{B_90%} |X(f)|² df = 0.90 ∫_{−∞}^{∞} |X(f)|² df

As a motivating example, I mention the square-root raised cosine (SRRC) pulse, which has the following desirable Fourier transform:

H_RRC(f) = { √Ts,                                                      0 ≤ |f| ≤ (1−α)/(2Ts)
           { √( (Ts/2) [1 + cos( (πTs/α)(|f| − (1−α)/(2Ts)) )] ),      (1−α)/(2Ts) ≤ |f| ≤ (1+α)/(2Ts)     (6)
           { 0,                                                        o.w.

where α is a parameter called the “rolloff factor”. We can actually analyze this using the properties of the Fourier transform and many of the standard transforms you’ll find in a Fourier transform table. The SRRC and other pulse shapes are discussed in Appendix A, and we will go into more detail later on. The purpose so far is to motivate practicing up on frequency transforms.
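As a small piece of that practice, here is a minimal Matlab sketch (my own, assuming the form of (6) above with rolloff α and symbol period Ts) that evaluates and plots |H_RRC(f)|:

% Evaluate the SRRC spectrum (6) for alpha = 0.5, Ts = 1
Ts = 1;  alpha = 0.5;
f  = linspace(-1/Ts, 1/Ts, 1000);
f1 = (1-alpha)/(2*Ts);   f2 = (1+alpha)/(2*Ts);
H  = zeros(size(f));
H(abs(f) <= f1) = sqrt(Ts);
mid = (abs(f) > f1) & (abs(f) <= f2);
H(mid) = sqrt( (Ts/2) * (1 + cos( (pi*Ts/alpha) * (abs(f(mid)) - f1) )) );
plot(f, H);  xlabel('f');  ylabel('|H_{RRC}(f)|');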

5.1

Continuous-time Frequency Transforms

Notes about continuous-time frequency transforms: 1. You are probably most familiar with the Laplace Transform. To convert it to the Fourier transform, we replace s with jω, where ω is the radial frequency, with units radians per second (rad/s).


2. You may prefer the radial frequency representation, but also feel free to use the rotational frequency f (which has units of cycles per sec, or Hz). Frequency in Hz is more standard for communications; you should use it for intuition. In this case, just substitute ω = 2πf. You could write X(j2πf) as the notation for this, but typically you'll see it abbreviated as X(f). Note that the definition of the Fourier transform in the f domain loses the 1/(2π) factor in the inverse transform definition:

X(j2πf) = ∫_{t=−∞}^{∞} x(t) e^{−j2πft} dt
x(t) = ∫_{f=−∞}^{∞} X(j2πf) e^{j2πft} df     (7)

3. The Fourier series is limited to purely periodic signals. Both Laplace and Fourier transforms are not limited to periodic signals.

4. Note that e^{jα} = cos(α) + j sin(α). See Table 2.4.4 in the Rice book.

Example: Square Wave
Given a rectangular pulse x(t) = rect(t/Ts),

x(t) = { 1, −Ts/2 < t ≤ Ts/2;   0, o.w. }

What is the Fourier transform X(f)? Calculate it both from the definition and from a table.

Solution:
Method 1: From the definition:

X(jω) = ∫_{t=−Ts/2}^{Ts/2} e^{−jωt} dt
      = (1/(−jω)) e^{−jωt} |_{t=−Ts/2}^{Ts/2}
      = (1/(−jω)) (e^{−jωTs/2} − e^{jωTs/2})
      = 2 sin(ωTs/2)/ω = Ts sin(ωTs/2)/(ωTs/2)

This uses the fact that (−1/(2j))(e^{−jα} − e^{jα}) = sin(α). While it is sometimes convenient to replace sin(πx)/(πx) with sinc(x), it is confusing because sinc(x) is sometimes defined as sin(πx)/(πx) and sometimes defined as (sin x)/x. No standard definition for ‘sinc’ exists! Rather than make a mistake because of this, the Rice book always writes out the expression fully; I will try to follow suit.

Method 2: From the tables and properties: write x(t) = g(2t), where

g(t) = { 1, |t| < Ts;   0, o.w. }     (8)

From the table, G(jω) = 2Ts sin(ωTs)/(ωTs). From the scaling property, X(jω) = (1/2) G(jω/2). So

X(jω) = Ts sin(ωTs/2)/(ωTs/2)
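To connect this result back to Matlab, here is a small sketch (my own check, not from the notes) that compares the closed form Ts·sin(ωTs/2)/(ωTs/2) to a numerical approximation of the Fourier transform integral:

% Numerical check of the rect-pulse Fourier transform
Ts = 1;
w  = linspace(-40, 40, 801);           % radial frequencies to evaluate
dt = 1e-3;
t  = -Ts/2 : dt : Ts/2;                % x(t) = 1 on this interval
X_num = zeros(size(w));
for k = 1:length(w)
    X_num(k) = sum( exp(-1j*w(k)*t) ) * dt;   % approximate the integral
end
X_closed = Ts * sin(w*Ts/2) ./ (w*Ts/2);
X_closed(w==0) = Ts;                   % handle the 0/0 point at w = 0
max_error = max(abs(X_num - X_closed)) % small (on the order of dt)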

See Figure 4(a) and (b).

Figure 4: (a) Fourier transform X(j2πf) of rect pulse with period Ts, and (b) power vs. frequency, 20 log10(X(j2πf)/Ts).

5.1.1 Fourier Transform Properties

See Table 2.4.3 in the Rice book. Important properties of the Fourier transform:

1. Duality property:
   x(jω) = F{X(−t)}
   x(−jω) = F{X(t)}
   (Confusing. Assume that F{x(t)} = X(jω).) It says that you can go backwards on the Fourier transform; just remember to flip the result around the origin. Question: What if Y(jω) was a rect function? What would the inverse Fourier transform y(t) be?

2. Time shift property:
   F{x(t − t0)} = e^{−jωt0} X(jω)

3. Scaling property: for any real a ≠ 0,
   F{x(at)} = (1/|a|) X(jω/a)

4. Convolution property: if, additionally, y(t) has Fourier transform Y(jω), then
   F{x(t) ⋆ y(t)} = X(jω) · Y(jω)

5. Modulation property:
   F{x(t) cos(ω0 t)} = (1/2) X(ω − ω0) + (1/2) X(ω + ω0)

6. Parseval's theorem: The energy calculated in the frequency domain is equal to the energy calculated in the time domain:
   ∫_{t=−∞}^{∞} |x(t)|² dt = ∫_{f=−∞}^{∞} |X(f)|² df = (1/(2π)) ∫_{ω=−∞}^{∞} |X(jω)|² dω
   So do whichever one is easiest! Or, check your answer by doing both.

5.2 Linear Time Invariant (LTI) Filters

If a (deterministic) signal x(t) is input to an LTI filter with impulse response h(t), the output signal is

y(t) = h(t) ⋆ x(t)

Using the above convolution property,

Y(jω) = X(jω) · H(jω)

5.3 Examples

Example: Applying FT Properties
If w(t) has the Fourier transform

W(jω) = jω/(1 + jω)

find X(jω) for the following waveforms:
1. x(t) = w(2t + 2)
2. x(t) = e^{−jt} w(t − 1)
3. x(t) = 2 ∂w(t)/∂t
4. x(t) = w(1 − t)

Solution: To be worked out in class.

Lecture 3

Today: (1) Bandpass Signals (2) Sampling

Solution: From the Examples at the end of Lecture 2:

1. Let x(t) = z(2t) where z(t) = w(t + 2). Then Z(jω) = e^{j2ω} W(jω), and X(jω) = (1/2) Z(jω/2). So X(jω) = (1/2) e^{jω} (jω/2)/(1 + jω/2). Alternatively, let x(t) = r(t + 1) where r(t) = w(2t). Then R(jω) = (1/2) W(jω/2), and X(jω) = e^{jω} R(jω). Again, X(jω) = (1/2) e^{jω} (jω/2)/(1 + jω/2).

2. Let x(t) = e^{−jt} z(t) where z(t) = w(t − 1). Then Z(jω) = e^{−jω} W(jω), and X(jω) = Z(j(ω + 1)). So X(jω) = e^{−j(ω+1)} j(ω + 1)/(1 + j(ω + 1)).

3. X(jω) = 2jω · jω/(1 + jω) = −2ω²/(1 + jω).

4. Let x(t) = y(−t), where y(t) = w(1 + t). Then X(jω) = Y(−jω), where Y(jω) = e^{jω} W(jω) = e^{jω} jω/(1 + jω). So X(jω) = e^{−jω} (−jω)/(1 − jω).

6 Bandpass Signals

Def ’n: Bandpass Signal
A bandpass signal x(t) has Fourier transform X(jω) = 0 for all ω such that |ω ± ωc| > W/2, where W/2 < ωc. Equivalently, a bandpass signal x(t) has Fourier transform X(f) = 0 for all f such that |f ± fc| > B/2, where B/2 < fc.

Realistically, X(jω) will not be exactly zero outside of the bandwidth; there will be some ‘sidelobes’ out of the main band, as shown in Fig. 5.

Figure 5: Bandpass signal with center frequency fc and bandwidth B.

6.1 Upconversion

We can take a baseband signal xC(t) and ‘upconvert’ it to be a bandpass signal by multiplying with a cosine at the desired center frequency, as shown in Fig. 6.

Figure 6: Modulator block diagram: xC(t) is multiplied by cos(2πfc t).

Example: Modulated Square Wave
We dealt last time with the square wave x(t) = rect(t/Ts). This time, we will look at:

x(t) = rect(t/Ts) cos(ωc t)

where ωc is the center frequency in radians/sec (fc = ωc/(2π) is the center frequency in Hz). Note that I use “rect” as follows:

rect(t) = { 1, −1/2 < t ≤ 1/2;   0, o.w. }

What is X(jω)?

Solution:
X(jω) = (1/(2π)) F{rect(t/Ts)} ⋆ F{cos(ωc t)}
      = (1/(2π)) [Ts sin(ωTs/2)/(ωTs/2)] ⋆ π [δ(ω − ωc) + δ(ω + ωc)]
      = (Ts/2) sin[(ω − ωc)Ts/2]/[(ω − ωc)Ts/2] + (Ts/2) sin[(ω + ωc)Ts/2]/[(ω + ωc)Ts/2]

The plot of X(j2πf) is shown in Fig. 7.

Figure 7: Plot of X(j2πf) for the modulated square wave example.

6.2 Downconversion of Bandpass Signals

How will we get back the desired baseband signal xC(t) at the receiver?
1. Multiply (again) by the carrier 2 cos(ωc t).
2. Input to a low-pass filter (LPF) with cutoff frequency ωc.
This is shown in Fig. 8.

Figure 8: Demodulator block diagram: multiply x(t) by 2 cos(2πfc t), then low-pass filter.

Assume the general modulated signal was

x(t) = xC(t) cos(ωc t)
X(jω) = (1/2) XC(j(ω − ωc)) + (1/2) XC(j(ω + ωc))

What happens after the multiplication with the carrier in the receiver?

Solution:
X1(jω) = (1/(2π)) X(jω) ⋆ 2π [δ(ω − ωc) + δ(ω + ωc)]
       = (1/2) XC(j(ω − 2ωc)) + (1/2) XC(jω) + (1/2) XC(jω) + (1/2) XC(j(ω + 2ωc))

There are components at 2ωc and −2ωc which will be canceled by the LPF. (Does the LPF need to be ideal?)

X2(jω) = XC(jω)

So x2(t) = xC(t).

7 Sampling

A common statement of the Nyquist sampling theorem is that a signal can be sampled at twice its bandwidth. But the theorem really has to do with signal reconstruction from a sampled signal.

Theorem: (Nyquist Sampling Theorem.) Let xc(t) be a baseband, continuous signal with bandwidth B (in Hz), i.e., Xc(jω) = 0 for all |ω| ≥ 2πB. Let xc(t) be sampled at multiples of T, where 1/T ≥ 2B, to yield the sequence {xc(nT)}_{n=−∞}^{∞}. Then

xc(t) = 2BT Σ_{n=−∞}^{∞} xc(nT) sin(2πB(t − nT)) / (2πB(t − nT))     (9)

Proof: Not covered.

Notes:
• This is an interpolation procedure.
• Given {xn}_{n=−∞}^{∞}, how would you find the maximum of x(t)?
• This is only precise when X(jω) = 0 for all |ω| ≥ 2πB.
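The interpolation formula (9) is easy to try numerically. Here is a minimal Matlab sketch (my own, loosely in the spirit of the EgNyquistInterpolation.m code mentioned later) that reconstructs a bandlimited signal from its samples:

% Nyquist interpolation (9): reconstruct x(t) = cos(2*pi*4*t) from its samples
B  = 5;  T = 1/25;                       % bandwidth bound 5 Hz, sample rate 25 Hz
n  = -50:50;                             % sample indices
xn = cos(2*pi*4*n*T);                    % the samples x_c(nT)
t  = -1:0.001:1;                         % dense time axis for reconstruction
xr = zeros(size(t));
for k = 1:length(n)
    arg = 2*pi*B*(t - n(k)*T);
    s   = sin(arg)./arg;  s(arg==0) = 1; % sin(x)/x, with the 0/0 point fixed
    xr  = xr + 2*B*T * xn(k) * s;
end
plot(t, xr, t, cos(2*pi*4*t), '--');     % reconstruction vs. original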

Figure 9: The effect of sampling on the frequency spectrum, in terms of frequency f in Hz.

7.1 Aliasing Due To Sampling

Essentially, sampling is the multiplication of an impulse train (at period T) with the desired signal x(t):

xsa(t) = x(t) Σ_{n=−∞}^{∞} δ(t − nT)
xsa(t) = Σ_{n=−∞}^{∞} x(nT) δ(t − nT)

What is the Fourier transform of xsa(t)?

Solution: In the frequency domain, this is a convolution:

Xsa(jω) = (1/(2π)) X(jω) ⋆ (2π/T) Σ_{n=−∞}^{∞} δ(ω − 2πn/T)
        = (1/T) Σ_{n=−∞}^{∞} X(j(ω − 2πn/T))     for all ω     (10)
        = (1/T) X(jω)     for |ω| < 2πB

The Fourier transform of the sampled signal is many copies of X(jω), strung at integer multiples of 2π/T, as shown in Fig. 9. This is shown graphically in the Rice book in Figure 2.12.

Example: Sinusoid sampled above and below the Nyquist rate
Consider two sinusoidal signals sampled at 1/T = 25 Hz:

x1(nT) = sin(2π 5 nT)
x2(nT) = sin(2π 20 nT)

What are the two frequencies of the sinusoids, and what is the Nyquist rate? Which of them does the Nyquist theorem apply to? Draw the spectrums of the continuous signals x1(t) and x2(t), and indicate what the spectrum is of the sampled signals. Figure 10 shows what happens when the Nyquist interpolation formula is applied to each signal (whether or not it is valid). What observations would you make about Figure 10(b), compared to Figure 10(a)?

Figure 10: Sampled (a) x1(nT) and (b) x2(nT) are interpolated (—) using the Nyquist interpolation formula.

Example: Square vs. round pulse shape
Consider the square pulse considered before, x1(t) = rect(t/Ts). Also consider a parabola pulse (this doesn't really exist in the wild — I'm making it up for an example):

x2(t) = { 1 − (2t/Ts)²,  −Ts/2 ≤ t ≤ Ts/2;   0, o.w. }

What happens to x1(t) and x2(t) when they are sampled at rate 1/T?

In the Matlab code EgNyquistInterpolation.m we set Ts = 1/5 and T = 1/25. Then, the sampled pulses are interpolated using (9). Even though we've sampled at a pretty high rate, the reconstructed signals will not be perfect representations of the original, in particular for x1(t). See Figure 11.
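To see the aliasing in the sinusoid example numerically, here is a small Matlab sketch (my own illustration): the 20 Hz sinusoid sampled at 25 Hz produces the same samples as a −5 Hz sinusoid, while the 5 Hz sinusoid is below half the sampling rate.

% Aliasing demo: sample 5 Hz and 20 Hz sinusoids at 1/T = 25 Hz
T  = 1/25;
n  = 0:24;                       % one second of samples
x1 = sin(2*pi*5*n*T);            % 5 Hz  < 12.5 Hz, no aliasing
x2 = sin(2*pi*20*n*T);           % 20 Hz > 12.5 Hz, aliases
xa = sin(2*pi*(20-25)*n*T);      % the -5 Hz alias
max(abs(x2 - xa))                % zero to machine precision: identical samples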

7.2

Connection to DTFT

Recall (10). We calculated the Fourier transform of the product by doing the convolution in the frequency domain. Instead, we can calculate it directly from the formula.


Figure 11: Sampled (a) x1(nT) and (b) x2(nT) are interpolated (—) using the Nyquist interpolation formula.

Solution:

Xsa(jω) = ∫_{t=−∞}^{∞} Σ_{n=−∞}^{∞} x(nT) δ(t − nT) e^{−jωt} dt
        = Σ_{n=−∞}^{∞} x(nT) ∫_{t=−∞}^{∞} δ(t − nT) e^{−jωt} dt     (11)
        = Σ_{n=−∞}^{∞} x(nT) e^{−jωnT}

The discrete-time Fourier transform of xsa(t), which I denote DTFT{xsa(t)}, is:

DTFT{xsa(t)} = Xd(e^{jΩ}) = Σ_{n=−∞}^{∞} x(nT) e^{−jΩn}

Essentially, the difference between the DTFT and the Fourier transform of the sampled signal is the relationship

Ω = ωT = 2πfT


But this defines the relationship only between the Fourier transform of the sampled signal and the DTFT. How can we relate this to the FT of the continuous-time signal? A: Using (10). We have that Xd(e^{jΩ}) = Xsa(jΩ/T). Then, plugging into (10),

Solution:
Xd(e^{jΩ}) = Xsa(jΩ/T)
           = (1/T) Σ_{n=−∞}^{∞} X(j(Ω/T − 2πn/T))
           = (1/T) Σ_{n=−∞}^{∞} X(j(Ω − 2πn)/T)     (12)

If x(t) is sufficiently bandlimited, Xd(e^{jΩ}) ∝ X(jΩ/T) in the interval −π < Ω < π. This relationship holds for the Fourier transform of the original, continuous-time signal between −π < Ω < π if and only if the original signal x(t) is sufficiently bandlimited so that the Nyquist sampling theorem applies.

Notes:
• Most things in the DTFT table in Table 2.8 (page 68) are analogous to continuous functions which are not bandlimited. So don't expect the last form to apply to any continuous function.
• The DTFT is periodic in Ω with period 2π.
• Don't be confused: the DTFT is a continuous function of Ω.

Lecture 4 Today: (1) Example: Bandpass Sampling (2) Orthogonality / Signal Spaces I

7.3

Bandpass sampling

Bandpass sampling is the use of sampling to perform down-conversion via frequency aliasing. We assume that the signal is truly a bandpass signal (has no DC component). Then, we sample with rate 1/T and we rely on aliasing to produce an image of the spectrum at a relatively low frequency (less than one-half the sampling rate).

Example: Bandpass Sampling
This is Example 2.6.2 (pp. 60-65) in the Rice book. In short, we have a bandpass signal in the frequency band 450 Hz to 460 Hz. The center frequency is 455 Hz. We want to sample the signal. We have two choices:
1. Sample at more than twice the maximum frequency.
2. Sample at more than twice the bandwidth.
The first gives a sampling frequency of more than 920 Hz. The latter gives a sampling frequency of more than 20 Hz. There is clearly a benefit to the second choice. However, traditionally, we would need to downconvert the signal to baseband in order to sample it at the lower rate. (See Lecture 3 notes.)

What is the sampled spectrum, and what is the bandwidth of the sampled signal in radians/sample, if the sampling rate is:
1. 1000 samples/sec?
2. 140 samples/sec?

Solution:
1. Sample at 1000 samples/sec: the center frequency of the sampled signal is Ωc = 2π (455/1000) = (455/500)π radians/sample, and the bandwidth is 2π (10/1000) = π/50 radians/sample.
2. Sample at 140 samples/sec: Copies of the entire frequency spectrum will appear in the frequency domain at multiples of 140 Hz, or 2π·140 radians/sec. The discrete-time frequency is modulo 2π, so the center frequency of the sampled signal is

Ωc = 2π (455/140) mod 2π = (13π/2) mod 2π = π/2

The value π/2 radians/sample is equivalent to 35 cycles/second. The bandwidth is 2π (10/140) = π/7 radians/sample. The signal is very sparse — it occupies only π/7 radians/sample of sampled spectrum.

This is the critical concept needed to understand digital receivers.

8 Orthogonality

From the first lecture, we talked about how digital communications is largely about using (and choosing some combination of) a discrete set of waveforms {si(t)}_{i=1}^{M} to convey information on the (continuous-time, real-valued) channel. This transmission conveys to us which of M messages we want to send, that is, log2 M bits of information. This choice of waveforms is partially determined by bandwidth, which we have covered. It is also determined by orthogonality. Loosely, orthogonality of waveforms means that some combination of the waveforms can be uniquely separated at the receiver. At the receiver, the idea is to determine which waveforms were sent and in what quantities. To build a better framework for orthogonality, we'll start with something you're probably more familiar with: vector multiplication.

8.1 Inner Product of Vectors

We often use an idea called the inner product, which you're familiar with from vectors (as the dot product). If we have two vectors

x = [x1, x2, ..., xn]^T,   y = [y1, y2, ..., yn]^T

then we can take their inner product as:

x^T y = Σ_i xi yi

Note the transpose T ‘in-between’ an inner product. The inner product is also denoted ⟨x, y⟩. Finally, the norm of a vector is the square root of the inner product of a vector with itself:

‖x‖ = √⟨x, x⟩
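As a quick illustration (my own, with arbitrary example vectors), the inner product and norm are one-liners in Matlab:

% Inner product and norm of two example vectors
x = [1; 2; 3];
y = [4; -1; 2];
ip    = x.' * y          % inner product <x,y> = 1*4 + 2*(-1) + 3*2 = 8
normx = sqrt(x.' * x)    % norm ||x|| = sqrt(14), same as norm(x)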

A norm is always indicated with the ‖·‖ symbols outside of the vector.

Example: Example Inner Products
Let x = [1, 1]^T, y = [0, 1]^T, and z = [1, 0]^T. What are:
• ⟨x, y⟩?
• ⟨z, y⟩?
• ‖x‖?
• ‖z‖?

Recall that the inner product between two vectors can also be written as x^T y = ‖x‖ ‖y‖ cos θ, where θ is the angle between vectors x and y. In words, the inner product is a measure of ‘same-direction-ness’: when positive, the two vectors are pointing in a similar direction; when negative, they are pointing in somewhat opposite directions; when zero, they're at cross-directions.

8.2 Inner Product of Functions

The inner product is not limited to vectors. It also applies to the ‘space’ of square-integrable real functions, i.e., those functions x(t) for which

∫_{−∞}^{∞} x²(t) dt < ∞

The ‘space’ of square-integrable complex functions, i.e., those functions x(t) for which

∫_{−∞}^{∞} |x(t)|² dt < ∞

also has an inner product. These inner products and norms are:

⟨x(t), y(t)⟩ = ∫_{−∞}^{∞} x(t) y(t) dt         (real functions)
⟨x(t), y(t)⟩ = ∫_{−∞}^{∞} x(t) y*(t) dt        (complex functions)
‖x(t)‖² = ∫_{−∞}^{∞} x²(t) dt                  (real functions)
‖x(t)‖² = ∫_{−∞}^{∞} |x(t)|² dt                (complex functions)

Example: Sine and Cosine
Let

x(t) = { cos(2πt), 0 < t ≤ 1;   0, o.w. }
y(t) = { sin(2πt), 0 < t ≤ 1;   0, o.w. }


What are ‖x(t)‖ and ‖y(t)‖? What is ⟨x(t), y(t)⟩?

Solution: Using cos²x = (1/2)(1 + cos 2x),

‖x(t)‖² = ∫_0^1 cos²(2πt) dt = (1/2) ∫_0^1 [1 + cos(4πt)] dt = (1/2) [ t + sin(4πt)/(4π) ]_0^1 = 1/2

so ‖x(t)‖ = √(1/2). The solution for ‖y(t)‖ also turns out to be √(1/2). Using sin 2x = 2 cos x sin x, the inner product is

⟨x(t), y(t)⟩ = ∫_0^1 cos(2πt) sin(2πt) dt = ∫_0^1 (1/2) sin(4πt) dt = [ −cos(4πt)/(8π) ]_0^1 = −(1/(8π))(1 − 1) = 0
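A quick numerical check of this example in Matlab (my own sketch, not part of the notes):

% Numerical norms and inner product of cos(2*pi*t) and sin(2*pi*t) on (0,1]
dt = 1e-5;
t  = dt:dt:1;
x  = cos(2*pi*t);  y = sin(2*pi*t);
norm_x = sqrt(sum(x.^2)*dt)     % approximately sqrt(1/2) = 0.7071
norm_y = sqrt(sum(y.^2)*dt)     % approximately sqrt(1/2)
ip_xy  = sum(x.*y)*dt           % approximately 0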

8.3

Definitions

Def ’n: Orthogonal
Two signals x(t) and y(t) are orthogonal if ⟨x(t), y(t)⟩ = 0.

Def ’n: Orthonormal
Two signals x(t) and y(t) are orthonormal if they are orthogonal and they both have norm 1, i.e., ‖x(t)‖ = 1 and ‖y(t)‖ = 1.

(a) Are x(t) and y(t) from the sine/cosine example above orthogonal? (b) Are they orthonormal?
Solution: (a) Yes. (b) No. They could be orthonormal if they'd been scaled by √2.


Figure 12: Two Walsh-Hadamard functions.

Example: Walsh-Hadamard 2 Functions
Let

x1(t) = { A, 0 < t ≤ 0.5;   −A, 0.5 < t ≤ 1;   0, o.w. }
x2(t) = { A, 0 < t ≤ 1;   0, o.w. }


These are shown in Fig. 12. (a) Are x1 (t) and x2 (t) orthogonal? (b) Are they orthonormal? Solution: (a) Yes. (b) Only if A = 1.
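A quick numerical check of this example (my own Matlab sketch, using A = 1):

% Check the Walsh-Hadamard 2 example numerically, with A = 1
dt = 1e-4;  t = dt:dt:1;
A  = 1;
x1 = A*((t <= 0.5) - (t > 0.5));   % +A on (0,0.5], -A on (0.5,1]
x2 = A*ones(size(t));
inner_prod = sum(x1.*x2)*dt        % approximately 0: orthogonal
norm_x1    = sqrt(sum(x1.^2)*dt)   % 1 when A = 1, so orthonormal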

8.4

Orthogonal Sets

Def ’n: Orthogonal Set
M signals x1(t), ..., xM(t) are mutually orthogonal, or form an orthogonal set, if ⟨xi(t), xj(t)⟩ = 0 for all i ≠ j.

Def ’n: Orthonormal Set
M mutually orthogonal signals x1(t), ..., xM(t) form an orthonormal set if ‖xi‖ = 1 for all i.


Figure 13: Walsh-Hadamard 2-length and 4-length functions in image form. Each function is a row of the image, where crimson red is 1 and black is -1.

Figure 14: Walsh-Hadamard 64-length functions in image form.

Do the 64 WH functions form an orthonormal set? Why or why not?

Solution: Yes. Every pair i, j with i ≠ j agrees half of the time and disagrees half of the time, so the product of the two functions is +1 half the time and −1 half the time; the terms cancel, and the inner product is zero. The norm of the function in row i is 1, since the amplitude is always +1 or −1.

Example: CDMA and Walsh-Hadamard
This example is just for your information (not for any test). The set of 64-length Walsh-Hadamard sequences


is used in CDMA (IS-95) in two ways:
1. Reverse Link, Modulation: For each group of 6 bits to be sent, the modulator chooses one of the 64 possible 64-length functions.
2. Forward Link, Channelization or Multiple Access: The base station assigns each user a single WH function (channel). It uses 64 functions (channels). The data on each channel is multiplied by a Walsh function so that it is orthogonal to the data received by all other users.
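The orthonormality claim for the 64-length functions (Figure 14) is easy to verify numerically. A minimal Matlab sketch (my own; hadamard() is a standard Matlab function returning a ±1 Hadamard matrix whose rows are these sequences):

% Check that the 64 length-64 Walsh-Hadamard sequences are orthonormal
H = hadamard(64);            % rows are +/-1 sequences of length 64
G = (H * H.') / 64;          % all pairwise inner products, normalized by length
max(max(abs(G - eye(64))))   % zero: rows are orthogonal with unit norm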

Lecture 5 Today: (1) Orthogonal Signal Representations

9

Orthonormal Signal Representations

Last time we defined the inner product for vectors and for functions. We talked about orthogonality and orthonormality of functions and sets of functions. Now, we consider how to take a set of arbitrary signals and represent them as vectors in an orthonormal basis. What are some common orthonormal signal representations? • Nyquist sampling: sinc functions centered at sampling times. • Fourier series: complex sinusoids at different frequencies • Sine and cosine at the same frequency • Wavelets And, we will come up with others. Each one has a limitation – only a certain set of functions can be exactly represented in a particular signal representation. Essentially, we must limit the set of possible functions to a set. That is, some subset of all possible functions. We refer to the set of arbitrary signals as: S = {s0 (t), s1 (t), . . . , sM −1 (t)} For example, a transmitter may be allowed to send any one of these M signals in order to convey information to a receiver. We’re going to introduce another set called an orthonormal basis: B = {φ0 (t), φ1 (t), . . . , φN −1 (t)} Each function in B is called a basis function. You can think of B as an unambiguous and useful language to represent the signals from our set S. Or, in analogy to color printers, a color printer can produce any of millions of possible colors (signal set), only using black, red, blue, and green (basis set) (or black, cyan, magenta, yellow).

Then the signal is equal to the sum of its projections onto the basis functions. {−φk (t)}k=0 as another orthonormal basis. • All orthogonal bases will have size N : no smaller basis can include all signals si (t) in its span. If you can find one basis. The value of N is called the dimension of the signal set. The span is referred to as Span{B} and is Span{B} = N −1 k=0 ak φk (t) a0 . for which si (t) ∈ Span{B} for every i = 0.k = si (t).ECE 5520 Fall 2009 33 9. . An orthonormal basis for an arbitrary signal set tells us: • how to build the receiver. you can use. and a larger set would not be a basis by definition. .2 Synthesis Consider one of the signals in our signal set. and • how to quickly analyze the error rate of the scheme. .k φk (t) The particular constants ai. . the orthogonal basis is an orthogonal set of the smallest size N . Why is this? . The signal set might be a set of signals we can use to transmit particular bits (or sets of bits). si (t). for example.k are calculated using the inner product: ai. i=0 N −1 B = {φi (t)}i=0 . φk (t) The projection of one signal i onto basis function k is defined as ai.. si (t) = N −1 k=0 ai. Notes: N −1 • The orthonormal basis is not unique.aN−1 ∈R Def ’n: Orthogonal Basis For any arbitrary set of signals S = {si (t)}M −1 . M − 1. Given that it is in the span of our basis B. • how to represent the signals in ‘signal-space’.1 Orthonormal Bases Def ’n: Span The span of the set B is the set of all functions which are linear combinations of the functions in B. 9. φk (t) φk (t)... Def ’n: Orthonormal Basis The orthonormal basis is an orthogonal basis in which each basis function has norm 1..k φk (t) = si (t). it can be represented as a linear combination of the basis functions. (As the Walsh-Hadamard functions are used in IS-95 reverse link).

ai. and s2 = [1. which shows a block diagram of how a transmitter would synthesize one of the M signals to send. and s2 ? Solution: They are s0 = [1.k }k −1 such that 34 si (t) = Taking the inner product of both sides with φj (t). φj (t) = N −1 k=0 ai. . 0]T .ECE 5520 Fall 2009 N Solution: Since si (t) ∈ Span{B}. s0 (t) = u(t) − u(t − 1) s1 (t) = u(t − 1) − u(t − 2) s2 (t) = u(t) − u(t − 2) given the orthonormal basis. . we choose M to be a power of 2 for simplicity). By choosing one of the M signals.k φk (t). See Figure 5. s1 . Energy: • Energy can be calculated in signal space as Energy{si (t)} = −∞ φ1 (t) = u(t − 1) − u(t − 2) ∞s2 (t)dt = i N −1 k=0 a2 i. s1 = [0. based on an input bitstream. s0 . we convey information. φj (t) si (t). ai. now we can now represent a signal by a vector. φj (t) = ai. We can also synthesize any of our M signals in the signal set by adding the proper linear combination of the N bases. They are plotted in the signal space diagram in Figure 15. log2 M bits of information. φ0 (t) = u(t) − u(t − 1) What are the signal space vectors. specifically. si (t). Plotting {si }i in an N dimensional grid is termed a constellation diagram. we know there are some constants {ai. . φj (t) = ai.k Proof? . (Generally. Generally. si = [ai.k φk (t) N −1 k=0 N −1 k=0 ai.1 .5 in Rice (page 254).k φk (t).j So.0 . this space that we’re plotting in is called signal space. Example: Position-shifted pulses Plot the signal space diagram for the signals. φj (t) si (t). 1]T . 1]T . .N −1 ]T This and the basis functions completely represent each signal.

di.5[u(t) − u(t − 1)] s2 (t) = −0. s3 = [−1. What are the signal space vectors for the signals {si (t)}? What are their energies? Solution: s0 = [1.k )2 s1 (t) = 0.5]. 0.5[u(t) − u(t − 1)] φ1 (t) = u(t) − u(t − 1) and the orthonormal basis. • Although different ON bases can be used. the energy and distance between points will not change.5].ECE 5520 Fall 2009 35 f2 1 f1 1 Figure 15: Signal space diagram for position-shifted signals example. . See Figure 16.25. 9. -1 0 1 f1 Figure 16: Signal space diagram for amplitude-shifted signals example.25.5[u(t) − u(t − 1)] s3 (t) = −1. our job will be to analyze the received signal (a function) and to decide which of the M possible signals was sent. s1 = [0. . respectively. .j = for i.25.k − aj.25. • We will find out later that it is the distances between the points in signal space which determine the bit error rate performance of the receiver.there will be additional functions added to the signal function we sent. We won’t receive exactly what we sent . M − 1}. 0.5[u(t) − u(t − 1)] N −1 k=0 (ai.5]. s2 = [−0. and 2. Energies are just the squared magnitude of the vector: 2. This is the task of analysis.5]. j in {0.3 Analysis At a receiver. . Example: Amplitude-shifted signals Now consider s0 (t) = 1. It turns out that an orthonormal bases makes our analysis very straightforward and elegant. .

What is the best approximation to r(t) in the signal space? Specifically. φk (t) . . you’d ˆ see that the minimum error is at ∞ r(t)φk (t)dt xk = −∞ that is. 0. . for k = 0. . Let s0 (t) = φ0 (t) and s1 (t) = φ1 (t). . x1 . i. Lecture 6 Today: (1) Multivariate Distributions . N . But. Example: Analysis using a Walsh-Hadamard 2 Basis See Figure 17.w. so r(t) would not be in Span{B} either.. What is r(t)? ˆ Solution:  0≤t<1  1. . we might say that if we send signal m. r (t) = ˆ −1/2. i. 1 ≤ t < 2  o. then we would receive r(t) = sm (t) + w(t) where the w(t) is the sum of all of the additive signals that we did not intend to receive.ECE 5520 Fall 2009 • Thermal noise • Interference from other users • Self-interference 36 We won’t get into these problems today. and then find the minimum with respect to each xk . . But w(t) might not (probably not) be in the span of our basis B. . xk = r(t).e. .e. what is r (t) ∈ Span{B} such that the energy of the ˆ difference between r(t) and r(t) is minimized. sm (t) from our signal set. it can be represented as a vector in signal space. ˆ x = [x0 .. and the synthesis equation is r (t) = ˆ N −1 k=0 xk φk (t) If you plug in the above expression for r (t) into (13). ˆ argmin r (t)∈Span{B} ˆ ∞ −∞ r |ˆ(t) − r(t)|2 dt (13) Solution: Since r(t) ∈ Span{B}. xN −1 ]T .

x2 ) The pdf and pmf integrate / sum to one.+∞ FX1 . • Joint pdf: fX1 .1/2 r(t) 2 1 -2 2 2 -1 -2 Figure 17: Signal and basis functions for Analysis example.X2 (x1 .X2 (x1 .x2 )∈B . x2 ) = P [{X1 = x1 } ∩ {X2 = x2 }] It is the probability that both events happen simultaneously. x2 ) = P [{X1 ≤ x1 } ∩ {X2 ≤ x2 }] It is the probability that both events happen simultaneously. x2 ) = 0 and limx1 .x2 →. for event B ∈ S. • Discrete case: P [B] = • Continuous Case: P [B] = (X1 .1/2 r(t) 2 1 2 2 f2(t) 1/2 1 .X2 )∈B PX1 .X2 (x1 . you integrate.X2 (x1 .−∞ FX1 .9 should be: x FX (x) = P [X ≤ x] = and Eqn 4. x2 ) = ∂2 ∂x1 ∂x2 FX1 . x2 ) = 1. x2 ) fX1 . For example.X2 (x1 . Eqn 4. x2 )dx1 dx2 (x1 . Note: Two errors in Rice book dealing with the definition of the CDF. 10 Multi-Variate Distributions For two random variables X1 and X2 . • Joint CDF: FX1 . with limxi →.X2 (x1 .X2 (x1 . The CDF is non-negative and non-decreasing. • Joint pmf: PX1 . and are non-negative.X2 (x1 .15 should be: fX (t)dt −∞ FX (x) = P [X ≤ x] To find the probability of an event.ECE 5520 Fall 2009 37 f1(t) 1/2 1 .

10. . xn ) = P [X1 ≤ x1 .X2 (x1 . 2. .. X is PX (x) = PX1 . . . .v.X2 (x1 ..X2 (x1 . x2 ) = PX1 (x1 )PX2 (x2 ) • fX1 .X2 (x1 . The CDF of R.2 Conditional Distributions Given event B ∈ S which has P [B] > 0. P [B] 0. Xn ≤ xn ]. 0. X2 . • PX1 . x2 ) fX1 .V. X2 ) ∈ B o.V.V. . . . . X is fX (x) = fX1 . .V. . . . the joint probability conditioned on event B is • Discrete case: PX1 .X2 (x1 .. is fX1 |X2 (x1 |x2 ) = fX1 . Xn = xn ]. .X2 |B (x1 .. . Xn .w.. Xn ]T Here are the Models of R. where PX2 (x2 ) > 0.. x2 )/fX2 (x2 ) PX1 .1 Random Vectors Def ’n: Random Vector A random vector (R. x2 )dx1 Two random variables X1 and X2 are independent iff for all x1 and x2 . • Discrete case.X2 (x1 .Xn (x1 .. x2 ) = • Continuous Case: fX1 . where fX2 (x2 ) > 0. . . xn ) = P [X1 = x1 ... 3. is PX1 |X2 (x1 |x2 ) = PX1 . The pdf of a continuous R. The pmf of a discrete R. .V. . xn ) = ∂n ∂x1 ···∂xn FX (x). . x2 ) = fX2 (x2 )fX1 (x1 ) 10. X is FX (x) = FX1 .X2 (x1 .. . . .Xn (x1 .x2 ) . x2 ) = Given r.X2 |B (x1 . .Xn (x1 .ECE 5520 Fall 2009 38 The marginal distributions are: • Marginal pmf: PX2 (x2 ) = • Marginal pdf: fX2 (x2 ) = x1 ∈SX1 x1 ∈SX1 PX1 . . .s: 1..s X1 and X2 . P [B] (X1 .x2 ) . fX1 . X = [X1 . X2 ) ∈ B o. .w. .) is a list of multiple random variables X1 . X2 . x2 )/PX2 (x2 ) • Continuous Case: The conditional pdf of X1 given X2 = x2 . .. The conditional pmf of X1 given X2 = x2 . (X1 .X2 (x1 .

and count the number of bits that are in error. What is the mean and variance of S? Solution: Ei is called a Bernoulli r. Thus the simulation of one bit is a Bernoulli trial. and S is called a Binomial r.v. with pmf PS (s) = N s p (1 − pe )N −s s e The mean of S is the the same as the mean of the sum of {Ei }.3 Simulation of Digital Communication Systems A simulation of a digital communication system is often used to estimate a bit error rate. This trial Ei is in error (E1 = 1) with probability pe (the true bit error rate) and correct (Ei = 0) with probability 1 − pe . It is written either as: fX1 . or with error. What is the pmf of S? 3.v. What type of random variable is Ei ? Simulations run many bits.i. N N ES [S] = E{Ei } N Ei = i=1 i=1 EEi [Ei ] = i=1 [(1 − p) · 0 + p · 1] = N p We can find the variance of S the same way: N N VarS [S] = Var{Ei } N Ei = i=1 i=1 VarEi [Ei ] = i=1 N (1 − p) · (0 − p)2 + p · (1 − p)2 (1 − p)p2 + (1 − p)(p − p2 ) = i=1 = N p(1 − p) .d.). say N bits through a model of the communication system.ECE 5520 Fall 2009 Def ’n: Bayes’ Law Bayes’ Law is a reformulation of he definition of the marginal pdf.X2 (x1 . Let S = N Ei . Each bit can either be demodulated without error. and assume that {Ei } are independent and identically distributed i=1 (i.. What type of random variable is S? 2. x2 ) = fX2 |X1 (x2 |x1 )fX1 (x1 ) or fX1 |X2 (x1 |x2 ) = fX2 |X1 (x2 |x1 )fX1 (x1 ) fX2 (x2 ) 39 10. 1.

Example: Digital communication system in noise Let X1 is the transmitted signal voltage. {1. X1 may take a discrete set of values..4 Mixed Discrete and Continuous Joint Variables This was not covered in ECE 5510. then X2 will be continuous-valued. We’ll often have X1 discrete and X2 continuous. 10. PX1 (x1 ) = 0. random variable. e.g.5. What is the pmf of T1 ? 3.4δ(x1 − 1) This pdf has integral 1.g.. For example.5 would be an integral that would return 0.v. we won’t have a very good idea of the bit error rate. 1. Any probability integral will return the proper value. even if we run an experiment until the first bit error. fX1 (x1 ) = 0.6.ECE 5520 Fall 2009 40 We may also be interested knowing how many bits to run in order to get an estimate of the bit error rate. 0. −0. What is the mean of T1 ? Solution: T1 is a Geometric r. For instance. with pmf PT1 (t) = (1 − pe )t−1 pe The mean of T1 is E [T1 ] = 1 pe Note the variance of T1 is Var [T1 ] = (1 − pe )/p2 . if x1 = 0 x1 = 1 o.4. For example. and non-negative values for all x1 . and X2 is the received signal voltage. Let T1 be the time (number of bits) up to and including the first error.  0. In a digital system. So. Gaussian with mean µ and variance σ 2 . if we run a simulation and get zero bit errors. although it was in the Yates & Goodman textbook.5. But the noise N may be continuous. our estimate of pe will have relatively high variance. Work-around: We will sometimes use a pdf for a discrete   0. so the standard deviation for very low pe is almost the same e as the expected value.5 < x1 < 0. As long as σ 2 > 0.6. . which is modeled as X2 = aX1 + N where N is a continuous-valued random variable representing the additive noise of the channel. What type of random variable is T1 ? 2. and a is a constant which represents the attenuation of the channel. the probability that −0.5}. then we would write a pdf with the probabilities as amplitudes of a dirac delta function centered at the value that has that probability. e.5. −1.6δ(x1 ) + 0.w.

x1 = 1 0. x2 ) =    fX1 .5. fX1 .v. . X1 is independent of N with   0. of (X1 . x1 = 1 . X2 ) ? Solution: 1. 0.s are added. See Figure 18.w.f. the pdf of the sum is the convolution of the pdfs of the inputs.5δ(x1 − 1) we convolve this with fN (n) above. x2 ) =  down the pdf of X2 . x1 = 1 0. Writing fX1 (x1 ) = 0.  0.5fN (x2 − 1) 1 2 2 2 2 √ e−x2 /(2σ ) + e−(x2 −1) /(2σ ) fX2 (x2 ) = 2 2 2πσ 2.X2 (x1 . X2 Consider the channel model X2 = X1 + N .w.X2 (x1 . o.5fN (x2 − 1)δ(x1 − 1) These last two lines are completely equivalent. o.X2 (x1 . x1 = 0 PX1 (x1 ) = 0.ECE 5520 Fall 2009 41 Example: Joint distribution of X1 . It is necessary to use Bayes’ Law.5fN (x2 ).f.5δ(x1 ) + 0. What is the joint p. Use whichever seems more convenient for you. fX1 . to get fX2 (x2 ) = 0. we cannot simply multiply the marginal pdfs together.5fN (x2 − 1).5fN (x2 )δ(x1 ) + 0.v.5fN (x2 ) + 0. x2 ) = 0. and fN (n) = √ 1 2πσ 2 e−n 2 /(2σ 2 ) where σ 2 is some known variance. x1 = 0 fX2 |X1 (x2 |1)fX1 (1). When two independent r.d. of (X1 . o.X2 (x1 . and the r. What is the joint p.d.w. X2 ) ? Since X1 and X2 are NOT independent. x1 = 0 0. x2 ) = fX2 |X1 (x2 |x1 )fX1 (x1 ) Given a value of X1 (either 0 or 1) we can write   fX1 . So break this into two cases: fX2 |X1 (x2 |0)fX1 (0). What is the pdf of X2 ? 2.5. 1.

v. X2 ) = X2 will result in the means µX1 and µX2 . X2 )] = Typical functions g(X1 . • Variance (or 2nd central moment) of X1 or X2 : g(X1 . 10.6 Gaussian Random Variables 2 For a single Gaussian r. X2 )PX1 . x2 ) g(X1 . also called the ‘correlation’ of X1 and X2 : g(X1 . for X.ECE 5520 Fall 2009 42 4 2 0 −1 0 0 X 1 1 −2 2 −4 X 2 Figure 18: Joint pdf of X1 and X2 . • Expected value of the product of X1 and X2 . X2 )fX1 . fX (x) = 1 2 2πσX e− (x−µX )2 2σ 2 Consider Y to be Gaussian with mean 0 and variance 1. X with mean µX and variance σX . which has non-zero mean and non-unit-variance. Then. we can write its CDF as FX (x) = P [X ≤ x] = Φ x − µX σX . when σ = 1. the CDF of Y is denoted as FY (y) = P [Y ≤ y] = Φ(y).X2 (x1 . X2 ) = (X2 − µX2 )2 . we have the pdf. 1. X2 ) = X1 X2 . Continuous: E [g(X1 . • Covariance of X1 and X2 : g(X1 . 10. Discrete: E [g(X1 . 2 2 Often denoted σX1 and σX2 . X2 ) = (X1 − µX1 )2 or g(X1 .5 Expectation Def ’n: Expected Value (Joint) The expected value of a function g(X1 . X2 ) = X1 or g(X1 . X2 ) = (X1 − µX1 )(X2 − µX2 ). x2 ) 2. X2 ) are: • Mean of X1 or X2 : g(X1 . the input and output (respectively) of the example additive noise communication system. So. X2 ) of random variables X1 and X2 is given by.X2 (x1 . X2 )] = X1 ∈SX1 X1 ∈SX1 X2 ∈SX2 X2 ∈SX2 g(X1 .

the Q(x) function is not used.2 Error Function x − µX σX =1−Φ x − µX σX In math. erf(x) 2 √ 2π = 2 √ 2x e−u 2 /2 du 1 2 √ e−u /2 du 2π 0 √ 1 = 2 Φ( 2x) − 2 Equivalently. X with variance σX . zero mean Gaussian random variable. we can write the probability of this event using the unit-variance. Instead. Q(x) = P [X > x] = 1 − Φ (x) What is Q(x) in integral form? Q(x) = x 2 For an Gaussian r. we can write Φ(·) in terms of the erf(·) function. √ 1 1 Φ( 2x) = erf(x) + 2 2 Finally let y = √ 2x. 10.ECE 5520 Fall 2009 You can prove this by showing that the event X ≤ x is the same as the event x − µX X − µX ≤ σX σX 43 Since the left-hand side is a unit-variance. and in Matlab. that is. This is so common in digital communications. zero mean Gaussian CDF.1 Complementary CDF The probability that a unit-variance.6. Q(x). 1 − Φ(x). it is given its own name. X exceeds some value x is one minus the CDF. there is a function called erf(x) erf(x) 2 √ π x 0 e−t dt 2 Example: Relationship between Q(·) and erf(·) What is the functional relationship between Q(·) and erf(·)? √ √ Solution: Substituting t = u/ 2 (and thus dt = du/ 2). so that 1 Φ(y) = erf 2 y √ 2 + 1 2 0 √ 2x .6. ∞ 1 2 √ e−w /2 dw 2π P [X > x] = Q 10. in some texts.v. zero mean Gaussian r.v.

what is the probability that the receiver decides that a ‘0’ was sent? 2.v. The receiver decides as follows: • If X2 ≤ 1/3. 1} with equal probabilities and N is independent of X1 and zero-mean Gaussian with variance 1/4. Here. • If X2 > 1/3. P [error|X1 = 1] = P [X2 ≤ 1/3] = P 1/3 − 1 X2 − 1 ≤ 1/4 1/4 = 1 − Q(−4/3) = 1 − Q ((−2/3)/(1/2)) 2. X1 ∈ {0.7 Examples Example: Probability of Error in Binary Example As in the previous example. Given that X1 = 1. it is clear that X2 is also a Gaussian r. Then the probability that the receiver decides ‘0’ is the probability that X2 ≤ 1/3. the probability that the receiver decides ‘1’ is the probability that X2 > 1/3.* erf(y. since X2 = X1 + N . Given that X1 = 0. what is the probability that the receiver decides that a ‘1’ was sent? Solution: 1.ECE 5520 Fall 2009 44 Or in terms of Q(·). 1.1/2 . Given that X1 = 1. Given that X1 = 0. 10. Q(y) = 1 − Φ(y) = 1 1 − erf 2 2 y √ 2 (14) You should go to Matlab and create a function Q(y) which implements: function rval = Q(y) rval = 1/2 ./sqrt(2)). then decide that the transmitter sent a ‘0’. then decide that the transmitter sent a ‘1’. P [error|X1 = 0] = P [X2 > 1/3] = P X2 1/4 > 1/3 1/4 = Q ((1/3)/(1/2)) = Q(2/3) . with mean 1 and variance 1/4. we have a model system in which the receiver sees X2 = X1 + N .

X is multivariate Gaussian with mean µX . the pdf of R is fR (r) = r σ2 r exp − 2σ2 .V. If the elements of X were independent random variables. Rician) distribution.d. if the vector is (X1 . Both pdfs are sometimes needed to derive probability of error formulas.a.w.1 Envelope The envelope of a 2-D random vector is its distance from the origin (0. Any linear combination of jointly Gaussian random variables is another jointly Gaussian random variable. the envelope R would now have a Rice (a. An n-length R.w. Lecture 7 Today: (1) Matlab Simulation (2) Random Processes (3) Correlation and Matched Filter Receivers . respectively. and I0 (·) is the zeroth order modified Bessel function of the first kind. 2 1 Note that both the Rayleigh and Rician pdfs can be derived from the Gaussian distribution and (15) using the transformation of random variables methods studied in ECE 5510. . 10.8. Gaussian with mean 0 and variance σ 2 . This is the Rayleigh distribution. 0). o. fX (x) = 1 (2π)n det(CX ) 1 −1 exp − (x − µX )T CX (x − µX ) 2 −1 where det() is the determinant of the covariance matrix. and covariance matrix CX if it has the pdf. where s2 = µ2 + µ2 .3 spends some time with 2-D Gaussian random vectors. the envelope R is R= 2 2 X1 + X2 (15) If both X1 and X2 are i. Specifically. For example.ECE 5520 Fall 2009 45 10.i.V. r≥0 o. if we have a matrix A and we let a new random vector Y = AX. and CX is the inverse of the covariance matrix. and identical variances σ 2 . which is the dimension with which we spend most of our time in this class.k. r ≥ 0 2 0. then Y is also a Gaussian random vector with mean AµX and covariance matrix ACX AT . X2 ).8 Gaussian Random Vectors Def ’n: Multivariate Gaussian R. the pdf would be the product of the individual pdfs (as with any random vector) and in this case the pdf would be: fX (x) = 1 (2π)n i 2 σi n exp − i=1 (xi − µXi )2 2 2σXi Section 4. fR (r) = r σ2 +s exp − r 2σ2 2 2 I0 rs σ2 0. If instead X1 and X2 are independent Gaussian with means µ1 and µ2 .

ECE 5520 Fall 2009 46 11 Random Processes A random process X(t. Outcome or realization is included because there could be ways to record X even at the same time. Consider what happens when a pollster takes either a time-average or a ensemble average: . For an ergodic random process. respectively. s) is a sequence of random variables indexed by time index n and realization s ∈ S. s) is a function of time t and outcome (realization) s. k) = E [X(n)X(n − k)] Def ’n: Wide-sense stationary (WSS) A random process is wide-sense stationary (WSS) if 1. s) is µX (t) = E [X(t. An example that helps demonstrate the difference between time averages and ensemble averages is the non-ergodic process of opinion polling. We then denote the autocorrelation function as RX (τ ). µX = µX (t) = E [X(t)] is independent of t. A random process is wide-sense stationary (WSS) if 1. We can estimate the autocorrelation function of an ergodic. We then denote the autocorrelation function as RX (k).1 Autocorrelation and Power Def ’n: Mean Function The mean function of the random process X(t. its time averages are equivalent to its ensemble averages. k) depends only on k and not on n. and abbreviate it to X(t) or X(n). For example. µX = µX (n) = E [X(n)] is independent of n. but we now must define “ergodic”. we omit the “s” when writing the name of the random process or random sequence. Def ’n: Autocorrelation Function The autocorrelation function of a random process X(t) is RX (t. 2. The power of a signal is given by RX (0). multiple receivers would record different noisy signals even of the same transmission. If you record one signal over all time t. RX (t. 2. Typically. s)] Note the mean is taken over all possible realizations s. RX (τ ) = lim 1 T →∞ T T /2 t=−T /2 x(t)x(t − τ )dt This is not a critical topic for this class. τ ) = E [X(t)X(t − τ )] The autocorrelation of a random sequence X(n) is RX (n. RX (n. wide-sense stationary random process from one realization of a power-type signal x(t). you don’t have anything to average to get the mean function µX (t). 11. τ ) depends only on the time difference τ and not on t. Outcome s lies in sample space S. A random sequence X(n.

but probably not. SX (f ) = F {RX (τ )} SX (ejΩ ) = DTFT {RX (k)} 11. The time average is taken by dividing the total number of “Yes”s by N .1 White Noise Let X(n) be a WSS random sequence with autocorrelation function RX (k) = E [X(n)X(n − k)] = σ 2 δ(k) This says that each element of the sequence X(n) has zero covariance with every other sample of the sequence. • Ensemble Average: The pollster asks N different people.5.ECE 5520 Fall 2009 47 • Time Average: The pollster asks the same person. i. i=1 This transmission conveys to us which of M messages we want to send.1.g. whether or not they will vote for person X. thermal noise is not constant in frequency. that is. • Does this mean it is independent of every other sample? • What is the PSD of the sequence? This sequence is commonly called ‘white noise’ because it has equal parts of every frequency (analogy to light). The Rice book (4. For continuous time signals. Let X(t) be a WSS random process with autocorrelation function RX (τ ) = E [X(t)X(t − τ )] = σ 2 δ(τ ) • What is the PSD of the sequence? • What is the power of the signal? Realistically. This makes the process non-ergodic. log2 M bits of information.2) has a good analysis of the physics of thermal noise. the power spectral density goes to zero. whether or not she will vote for person X. we receive signal si (t) scaled and corrupted by noise: r(t) = bsi (t) + n(t) How do we decide which signal i was transmitted? . 12 Correlation and Matched-Filter Receivers We’ve been talking about how our digital transmitter can transmit at each time one of a set of signals {si (t)}M .. on the same day. every day for N days. At the receiver.e. Power Spectral Density We have seen that for a WSS random process X(t) (and for a random sequence X(n) we compute the power spectral density as. 1015 Hz). Will they come up with the same answer? Perhaps.. white noise is a commonly used approximation for thermal noise. because as frequency goes very high (e. m = n. The ensemble average is taken by dividing the total number of “Yes”s by N . it is uncorrelated with X(m).

.1. Correlation with the basis functions gives k=1 xk = r(t). xN ]T Now that we’ve done these N correlations (inner products). −∞ What is x in terms of ai and the noise {nk }? What are the mean and covariance of {nk }? Solution: xk = = ∞ −∞ ∞ −∞ r(t)φk (t)dt [si (t) + n(t)]φk (t)dt ∞ −∞ = ai. . correlating with the basis functions. Noisy Channel Now. N . shown in Figure 5. As notation.k + n(t)φk (t)dt = ai. . . nk is zero mean: E [nk ] = ∞ −∞ E [n(t)] φk (t)dt = 0 . In other terms. φk (t) = for k = 1 . φk = ∞ n(t)φk (t)dt. If si (t) are non-orthogonal. Noise-free channel Since r(t) = bsi (t) + n(t).1 Correlation Receiver ∞ −∞ What we’ve been talking about it the inner product.6 in Rice (page 226). if n(t) = 0 and b = 1 then x = ai where ai is the signal space vector for si (t). then we can reduce the amount of correlation (and thus multiplication and addition) done in the receiver by instead. we can compute the estimate of the received signal as K−1 k=0 ∞ −∞ r(t)φk (t)dt r(t) = ˆ xk φk (t) This is the ‘correlation receiver’. the inner product is a correlation: r(t). .ECE 5520 Fall 2009 48 12. Define nk = n(t). let b = 1 but consider n(t) to be a white Gaussian random process with zero mean and PSD SN (f ) = N0 /2. {φk (t)}N . si (t) = r(t)si (t)dt But we want to build a receiver that does the minimum amount of calculation.k + nk First. x = [x1 . x2 . .

m. . Then since nk is fXk (xk ) = And.k + nk and ai.k ) 2(N0 /2) 12. 2 Is {nk } WSS? Is it a Gaussian random sequence? Since the noise components are independent. m=k δk. k) = E [nk nm ] = = = = ∞ t=−∞ ∞ t=−∞ ∞ τ =−∞ ∞ τ =−∞ E [n(t)n(τ )] φk (t)φm (τ )dτ dt N0 δ(t − τ )φk (t)φm (τ )dτ dt 2 N0 ∞ φk (t)φm (t)dτ dt 2 t=−∞ N0 N0 2 . .w. o. since {Xk } are independent. (Why?) What is the pdf of xk ? What is the pdf of x? Solution: Gaussian.k )2 2(N0 /2) fx (x) = k=1 N fXk (xk ) (x −a ) 1 − k i.k e 2(N0 /2) 2π(N0 /2) 2 = k=1 = An example is in ece5510 lec07.i. by calculating the autocorrelation Rn (m. . nN are i.2 Matched Filter Receiver xk = ∞ t=−∞ Above we said that r(t)φk (t)dt But there really are finite limits – let’s say that the signal has a duration T .m = 0. . N 1 2π(N0 /2) e − (xk −ai. Rn (m.ECE 5520 Fall 2009 49 Next we can show that n1 . then xk are also independent.d. k). xk are independent because xk = ai. 1 − e N/2 [2π(N0 /2)] PN 2 k=1 (xk −ai.k is a deterministic constant. and then rewrite the integral as T xk = t=0 r(t)φk (t)dt T This can be written as xk = t=0 r(t)hk (T − t)dt .

Lecture 8 Today: (1) Optimal Binary Detection 12. textbooks will assume that the automatic gain control (AGC) works properly. We might. 12. and then will assume the noise signal is multiplied accordingly.) This is the output of a convolution. . • These are just two different physical implementations. we assume we receive signal si (t) corrupted by noise: r(t) = si (t) + n(t) How do we decide which signal i ∈ {0. have a physical filter with the impulse response φk (T − t) and thus it is easy to do a matched filter implementation. gain control. r’(t)e -j2pfct r’(t) Downconverter AGC r(t) Correlation or Matched Filter x Optimal Detector b Figure 19: A block diagram of the receiver blocks which discussed in lecture 7 & 8.3 Amplitude Before we said b = 1. At the receiver. which is posted on WebCT. This effectively multiplies the noise. taken at time T . This transmission i=0 conveys to us which of M messages we want to send. and detection. • It may be easier to ‘see’ why the correlation receiver works.m. that is. A real channel attenuates! The job of our receiver will be to amplify the signal by approximately 1/b. as shown in Figure 19. • This works at time T . we can transmit at each time one of a set of signals {si (t)}M −1 . correlation and matched filter rx.4. correlation or matched filter reception.1. Notes: • The xk can be seen as the output of a ‘matched’ filter at time T . for example. . In general. . .ECE 5520 Fall 2009 50 where hk (t) = φk (T − t). The output for other times will be different in the correlation and matched filter. xk = r(t) ⋆ hk (t)|t=T Or equivalently xk = r(t) ⋆ φk (T − t)|t=T This is Rice Section 5. Try out the Matlab code. M − 1} was transmitted? We split the task into down-conversion. .4 Review At a digital transmitter. (Plug in (T − t) in this formula and the T s cancel out and only positive t is left. log2 M bits of information.

ai. The joint pdf of x is K−1 fX (x) = k=0 1 − e fXk (xk ) = N/2 [2π(N0 /2)] PN 2 k=1 (xk −ai. . Gaussian. In general.k is a deterministic constant • The {nk } are i.m. . ece5520 lec07. . ai. K − 1. . nK−1 ]T Hopefully. We denote vectors: x = [x0 . 1]T and receiving r(t) (and thus x) in noise. . 13 Optimal Detection We receive r(t) and use the matched filter or correlation receiver to calculate x. In an example Matlab simulation. because: • xk = ai. how exactly do we decide which si (t) was sent? Consider this in signal space. pick the i with ai closest to x.m. x and ai should be close since i was actually sent. xK−1 ]T ai = [ai. the pdf of x is multivariate Gaussian with each component xk independent.d. . Correlation with the basis k=0 functions gives xk = r(t). . . then the decision will be. n1 . . Given the signal space diagram of the transmitted signals and the received signal space vector x. 13.1 Overview We’ll show two things: • If each symbol i is equally probable. what rules should we have to decide on i? This is the topic of detection.1 . then we’ll need to shift the decision boundaries. -1 0 X 1 f1 Figure 20: Signal space diagram of a PAM system with an X to mark the receiver output x. .k + nk • ai.i.ECE 5520 Fall 2009 51 12. Both result in the same output x. we simulated sending ai = [1. .k ) 2(N0 /2) (16) We showed that there are two ways to implement this: the correlation receiver. {φk (t)}K−1 . Now. φk (t) = ai. as shown in Figure 20. and the matched filter receiver. x1 . .k + nk for k = 0 . . Optimal detection uses rules designed to minimize the probability that a symbol error will occur. We simulated this in the Matlab code correlation and matched filter rx. .5 Correlation Receiver Our signal set can be represented by the orthonormal basis functions.0 .K−1 ]T n = [n0 . • If symbols aren’t equally probable. .

a0 and a1 . P [error] = P [error|H0 ] P [H0 ] + P [error|H1 ] P [H1 ] (17) X = a0 + N X = a1 + N r(t) = s0 (t) + n(t) r(t) = s1 (t) + n(t) .e. we mean that we want the smallest probability of error. Then. and S is the complete event space. it is very applicable in digital receivers. we list the events that could have occurred. the symbols that could have been sent. • Radar applications: Obstruction detection. We follow all detection and statistics textbooks and label these classes Hi . motion detection. The event listing is. for example. 13. When we talk about them as random variables. and n. detection is applicable in a wide variety of problems. Then using Bayes’ Law. and write: H0 : H1 : HM −1 : ··· r(t) = s0 (t) + n(t) r(t) = s1 (t) + n(t) r(t) = sM −1 (t) + n(t) ··· This must be a complete listing of events. That is. we’ll substitute N for n and X for x.2 Bayesian Detection When we say ‘optimal detection’ in the Bayesian detection framework. we mean that a different signal was detected than the one that was sent. 14 Binary Detection Let’s just say for now that there are only two signals s0 (t) and s1 (t). H0 : H1 : Equivalently. We need to decide from X whether s0 (t) or s1 (t) was sent. signal detection. i. So even though it is somewhat theoretical. The probability of error is denoted P [error] By error.ECE 5520 Fall 2009 52 Detection theory is a major learning objective of this course. and only one basis function φ0 (t). At the start of every detection problem. we’ll have only scalars: x.. H0 : H1 : We use the law of total probability to say that P [error] = P [error ∩ H0 ] + P [error ∩ H1 ] Where the cap means ‘and’. and n. instead of vectors x. where the ∪ means union. a0 and a1 . the events H0 ∪ H1 ∪ · · · ∪ HM −1 = S. Further. • Medical applications: Does an image show a tumor? Does a blood sample indicate a disease? Does a breathing sound indicate an obstructed airway? • Communications applications: Receiver design.

Then R0 is the area in which fX|H0 (x|H0 )P [H0 ] > fX|H1 (x|H1 )P [H1 ] If P [H0 ] = P [H1 ].we just say what region of x. the difference between the joint densities. 14. Which x’s should we include in the region? Should we include x which has a positive value of the integrand? Or should we include the parts of x which have a negative value of the integrand? Solution: Select R0 to be all x such that the integrand is negative. We can pick R0 however we want . Finally. how do you pick R0 to minimize (19)? 14. P [error] = (1 − P [X ∈ R0 |H0 ])P [H0 ] + P [X ∈ R0 |H1 ] P [H1 ] P [error] = P [H0 ] − P [X ∈ R0 |H0 ] P [H0 ] + P [X ∈ R0 |H1 ] P [H1 ] Now note that probabilities that X ∈ R0 are integrals over the event (region) R0 .ECE 5520 Fall 2009 53 14. So the question is. Over some set R0 of values of X. but the only thing we can change is the region R0 . we’ll decide that H0 happened (s0 (t) was sent). . • There are no values of x disregarded: R0 ∪ R1 = S. Figure ?? shows the joint densities (the conditional pdfs multiplied by the bit probabilities P [H0 ] and P [H1 ]. we’ll decide H1 occurred (that s1 (t) was sent). since the two are complementary sets. The objective is to minimize the probability of error. P [error] = P [H0 ] − + x∈R0 x∈R0 fX|H0 (x|H0 )P [H0 ] dx fX|H1 (x|H1 )P [H1 ] dx (19) fX|H1 (x|H1 )P [H1 ] − fX|H0 (x|H0 )P [H0 ] dx = P [H0 ] + x∈R0 We’ve got a lot of things in the expression in (19). so • There is no overlap: R0 ∩ R1 = ∅. Figure ?? shows the full integrand of (19). Figure 21 shows the conditional probability density functions. Everything else is determined by the time we get to this point. We can’t be indecisive.1 Decision Region We’re making a decision based only on X.2 Formula for Probability of Error So the probability of error is P [error] = P [X ∈ R1 |H0 ] P [H0 ] + P [X ∈ R0 |H1 ] P [H1 ] (18) The probability that X is in R1 is one minus the probability that it is in R0 . and the integral in (19) will integrate over it. Over a different set R1 of values. then this is the region in which X is more probable given H0 than given H1 .3 Selecting R0 to Minimize Probability of Error We can see what the integrand looks like.

Figure 21: The (a) conditional p.d.f.s (likelihood functions) fX|H0(x|H0) and fX|H1(x|H1), (b) joint p.d.f.s fX|H0(x|H0)P[H0] and fX|H1(x|H1)P[H1], and (c) the difference between the joint p.d.f.s, fX|H1(x|H1)P[H1] - fX|H0(x|H0)P[H0], which is the integrand in (19).

then we’ll decide H0 . i. i. a1 = 1 in Gaussian noise 2 In this example. The log-likelihood ratio? 3. 14. The log of a Gaussian pdf? 2. that s0 (t) was sent.. 2. Why can we do this? Solution: 1. Otherwise. Whenever x indicates that the likelihood ratio is less than the threshold. log fX|H1 (x|H1 ) − log fX|H0 (x|H0 ) < log [H0 ] [H1 ] 2 2 (x − 1) [H0 ] x 2 − 2 [H1 ] 2σN 2σN P [H0 ] 2 x2 − (x − 1)2 < 2σN log P [H1 ] P [H0 ] 2 2x − 1 < 2σN log P [H1 ] P [H0 ] 1 2 + σN log x < 2 P [H1 ] P P P < log P . Continuing with the log-likelihood ratio. Equation (20) is a very general result.5 Case of a0 = 0. n ∼ N (0. The right hand side is a threshold. What is: 1.4 Log-Likelihood Ratio For the Gaussian distribution. we’ll decide H1 . applicable no matter what conditional distributions x has. The log() function is strictly increasing. Both sides are positive.e. the math gets much easier if we take the log of both sides. that s1 (t) was sent. the log-likelihood ratio is fX|H1 (x|H1 ) P [H0 ] < log log fX|H0 (x|H0 ) P [H1 ] 14.e. assume for a minute that a0 = 0 and a1 = 1. fX|H1 (x|H1 ) P [H0 ] < fX|H0 (x|H0 ) P [H1 ] (20) The left hand side is called the likelihood ratio. In addition..ECE 5520 Fall 2009 55 Rearranging the terms. σN ). Now. The decision regions for x? Solution: What is the log of a Gaussian pdf? log fX|H0 (x|H0 ) = log   1 2 2πσN e 2 − x2 2σ N   (21) 1 x2 2 = − log(2πσN ) − 2 2 2σN The log fX|H1 (x|H1 ) term will be the same but with (x − 1)2 instead of x2 .

γ= 2 σN P [H0 ] a0 + a1 + log 2 a1 − a0 P [H1 ] (23) 14. What is the decision threshold for x? . For simplicity. then x = 1 and the logarithm of the fraction is zero. we had arbitrary values for them (the signal space representations of s0 (t) and s1 (t)).4. This receiver is also called a maximum likelihood detector. we’d still have r H1 > < H0 γ. we could have derived the result in the last section the same way. The boundary is exactly half-way in between the two signal space vectors. we use the following notation: x H1 > < H0 1 P [H0 ] 2 + σN log 2 P [H1 ] This completely describes the detector receiver. instead of a0 = 0 and a1 = 1. but now. Rather than writing both inequalities each time. And if x is closer to a1 . 2 Example: Let a0 = −1. there is a simple test for x . decide H0 .1.6 General Case for Arbitrary Signals If. σN = 0. P [H1 ] = P [H0 ]. we also write x H1 > < H0 γ where γ= 1 P [H0 ] 2 + σN log 2 P [H1 ] (22) 14. 14. a1 = 1. P [H0 ] 1 2 x > + σN log 2 P [H1 ] decide H1 . P [H1 ] = 0. So then a0 + a1 2 The decision above says that if x is closer to a0 . and P [H0 ] = 0. which direction will the optimal threshold move. If it is above the decision threshold. decide that s1 (t) was sent.8 Examples Example: When H1 becomes less likely.ECE 5520 Fall 2009 56 In the end result.if it is below the decision threshold. because we only decide which likelihood function is higher (neither is scaled by the prior probabilities P [H0 ] or P [H1 ].7 Equi-probable Special Case P [H1 ] P [H0 ] H1 > < H0 If symbols are equally likely. towards a0 or towards a1 ? Solution: Towards a1 .6. As long as a0 < a1 . decide that s0 (t) was sent.

05 log 1. receiver? 2 Given a0 . • We showed the formula for that threshold. given that r|H0 is Gaussian with mean a0 and variance σN ? What is the second 2 ? What is then the overall probability probability. and P [H0 ]. • We found the decision regions which minimized the probability of error. a1 . γ =0+ 0.6 0.9 Review of Binary Detection We did three things to prove some things about the optimal detector: • We wrote the formula for the probability of error. and in the general case. σN . you should be able to calculate the optimal decision threshold γ. given all of the above constants and the optimal threshold γ.4 Example: Can the decision threshold be higher than both a0 and a1 in this binary. • We used the log operator to show that for the Gaussian error case the decision regions are separated by a single threshold. given that x|H1 is Gaussian with mean a1 and variance σN of error? Solution: P [x > γ|H0 ] = Q γ − a0 σN γ − a1 a1 − γ P [x < γ|H1 ] = 1 − Q =Q σN σN γ − a0 P [error] = P [H0 ] Q + P [H1 ] Q σN a1 − γ σN 14. both in the equi-probable symbol case. P [H1 ]. Starting from P [error] = P [x ∈ R1 |H0 ] P [H0 ] + P [x ∈ R0 |H1 ] P [H1 ] we can use the decision regions in (22) to write P [error] = P [x > γ|H0 ] P [H0 ] + P [x < γ|H1 ] P [H1 ] 2 What is the first probability.1 log = 0. Example: In this example. one-dimensional signalling.0203 2 0. calculate the probability of error from (18).5 ≈ 0. Lecture 9 Today: (1) Finish Lecture 8 (2) Intro to M-ary PAM .ECE 5520 Fall 2009 57 Solution: From (23).
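As a rough numerical companion to the binary detection recipe above, here is a short sketch in Python (my own, using NumPy/SciPy; not part of the original notes). It evaluates the MAP threshold from (23) and the resulting probability of error. The numbers plugged in are my reading of the worked example above (a0 = -1, a1 = 1, sigma_N^2 = 0.1, P[H0] = 0.6) and are only illustrative.

import numpy as np
from scipy.special import erfc

def Q(x):
    # Gaussian tail probability Q(x)
    return 0.5 * erfc(x / np.sqrt(2))

# Illustrative values (my reading of the example above)
a0, a1 = -1.0, 1.0
var_N = 0.1                # noise variance sigma_N^2
P0, P1 = 0.6, 0.4          # prior probabilities P[H0], P[H1]
sigma_N = np.sqrt(var_N)

# MAP threshold, eq. (23)
gamma = (a0 + a1) / 2 + var_N / (a1 - a0) * np.log(P0 / P1)

# P[error] = P[x > gamma | H0] P[H0] + P[x < gamma | H1] P[H1]
Pe = P0 * Q((gamma - a0) / sigma_N) + P1 * Q((a1 - gamma) / sigma_N)

print("gamma    =", round(gamma, 4))     # approx 0.02 for these numbers
print("P[error] =", Pe)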

• Problem 6 typo: x1 (t) = Notes on 2007 sample exam: • Problem 2 should have said “with period Tsa ” rather than “at rate Tsa ”. we are sending one symbol every Ts units of time. aM −1 } is the nth symbol transmitted. M where ai is the amplitude of waveform i. si (t) = ai p(t). t. . 15 Pulse Amplitude Modulation (PAM) Def ’n: Pulse Amplitude Modulation (PAM) M -ary Pulse Amplitude Modulation is the use of M scaled versions of a single basis function p(t) = φ0 (t) as a signal set. . 15. 0. and Ts is the symbol period (not the sample period!).w. for i = 0. it is on the probability topic. . Note we’re calling it p(t) instead of φ0 (t). Note: Keep in mind that when s(t) is to be transmitted on a bandpass channel. That is. −1 ≤ t ≤ 1 . and ai instead of ai. the “x”s o. both for simplicity. .a. Each sent waveform (a.ECE 5520 Fall 2009 58 Sample exams (2007-8) and solutions posted on WebCT. “symbol”) conveys k = log2 M bits of information. . Notes on 2008 sample exam: • Problem 1(b) was not material covered this semester. it is modulated with a carrier cos(2πfc t). The resulting signal we’ll call s(t) (without any subscript) and is s(t) = n a(n)p(t − nTs ) where a(n) ∈ {a0 . Instead of just sending one symbol.w. x(t) = ℜ s(t)ej2πfc t . . • Problem 6 is on the topic of detection theory. • Problem 3 is not on the topic of detection theory. 1 2 2 (3t − 1). by putting these sample exams up. you’ll just add each time-delayed signal to the transmitted waveform. on the RHS of the given expressions should be “t”s. you know the actual exam won’t contain the same exact problems.0 . . −1 ≤ t ≤ 1 and x1 (t) = 0. since there’s only one basis function. . o. There is a typo: fN (n) should be fN (t). That makes our symbol rate as 1/Ts symbols per second.k. • The book that year used ϕi (t) instead of φi (t) as the notation for a basis function. There is no guarantee this exam will be the same level of difficulty of either past exam.1 Baseband Signal Examples When you compose a signal of a number of symbols. and αi instead of si as the signal space vector for signal si (t). Of course.

3 and 5.6 0. a3 = 3A is shown in Figure 23. which is 1/ log 2 M times the symbol energy Es . −A. +(M − 3)A. The average energy per symbol has decreased. of PAM transmitters and receivers.4 Time t 0. it would represent unipolar PAM.w. a1 = A. Example: 4-ary PAM A 4-ary PAM signal set using amplitudes a0 = −3A. A. a2 = A. o. . −(M − 3)A. .2. respectively. .5 Value 0 −0. a1 = A. √ 1/ Ts . It shows a signal set using square pulses.6 show continuous-time and discrete-time realizations. If the y-axis in Figure 22 was scaled such that the minimum was 0 instead of −1.ECE 5520 Fall 2009 59 Rice Figures 5. we want to calculate the average bit energy Eb . . Typical M -ary PAM (for all even M ) uses the following amplitudes: −(M − 1)A.2. . This is also called On-Off Keying (OOK).1 shows the signal space diagram of M-ary PAM for different values of M .2. 1 0.w.5 −1 0 0. √ 1/ Ts . 15.2 Average Bit Energy in M-ary PAM Assuming that each symbol si (t) is equally likely to be sent. +(M − 1)A Rice Figure 5. a1 = −A.8 1 Figure 22: Example signal for binary bipolar PAM example for A = √ Ts . Example: Binary Bipolar PAM Figure 22 shows a 4-ary PAM signal set using amplitudes a0 = −A. . It shows a signal set using square pulses. 0 ≤ t < Ts p(t) = 0. . . Es = E = E ∞ −∞ 2 ai a2 p2 (t)dt = i ∞ −∞ E a2 φ2 (t)dt i 0 (24) . 0 ≤ t < Ts φ0 (t) = p(t) = 0. o.2 0. Example: Binary Unipolar PAM Unipolar binary PAM uses the amplitudes a0 = 0.

Fourier transform of signals. Sampling and aliasing 4. . 3 and Eb = 1 (M 2 − 1) 2 A log2 M 3 Lecture 10 Today: Review for Exam 1 16 Topics for Exam 1 1. Let {a0 . Correlation / matched filter receivers .ECE 5520 Fall 2009 60 3 2 1 Value 0 −1 −2 −3 0 0.2 0. . Then E a2 i = = A2 (1 − M )2 + (3 − M )2 + · · · + (M − 1)2 M A2 M M m=1 (2m − 1 − M )2 = A2 M (M 2 − 1) M 3 (M 2 − 1) = A2 3 So Es = (M 2 −1) 2 A .6 0.4 Time t 0. Orthogonality / Orthonormality 5. . Bandpass signals 3. Correlation and Covariance 6. . Signal space representation 7.8 1 Figure 23: Example signal for 4-ary PAM example. . . (M − 1)A. aM −1 } = −(M − 1)A. . −(M − 3)A. properties (12) 2. as described above to be the typical M-ary PAM case. .
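The average-energy result E[a_i^2] = A^2 (M^2 - 1)/3 derived above for M-ary PAM is easy to sanity-check numerically. A minimal sketch (mine, in Python/NumPy; not from the notes):

import numpy as np

A = 1.0
for M in (2, 4, 8, 16):
    # Standard M-ary PAM amplitudes: -(M-1)A, -(M-3)A, ..., (M-3)A, (M-1)A
    amps = A * np.arange(-(M - 1), M, 2)
    Es_numeric = np.mean(amps**2)              # average symbol energy (unit-energy pulse)
    Es_formula = A**2 * (M**2 - 1) / 3
    Eb = Es_formula / np.log2(M)               # average energy per bit
    print(M, Es_numeric, Es_formula, Eb)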

99. (b) Find the Fourier transform of x(t). Xn = 2. Ts 2 17.4. Read over the lecture notes 1-7. 3. (from L.w. Three functions are shown in Figure P2-49. why not? 17. The spectrum of x(t) is bandlimited to −W ≤ ω ≤ W .w. Haykin & Moyer Problem 2.W. A signal x(t) of finite energy is applied to a square law device whose output y(t) is defined by y(t) = x2 (t).3 Orthogonality and Signal Space 1. You will be provided with Table 2. 4).2 Sampling and Aliasing = 0 for |f | > W . If anything is confusing.25. Prove that the spectrum of y(t) is bandlimited to −2W ≤ ω ≤ 2W .52 cos 0. 2. if x(t) cannot be determined. Rice 2.39. 1. When x(t) is sampled at a rate n = ±1 n=0 o. 4. ask me.   1. o. its samples Xn are. Assume that x(t) is a bandlimited signal with X(f ) 1 T = 2W .4 and Table 2. 4). 2. .w. πt Ts .100 What was the original continuous-time signal x(t)? Or. − Ts < t < 2 o. Rice 2. since they will reappear. in signal space using the orthonormal set found in part (b). 2. 0 ≤ t ≤ 4 0.8.5×11 sheet of paper for notes. 17 17. (a) Show that these functions are orthogonal over the interval (−4.4. Bateman Problem 1. You may have one side of an 8. How does this affect the spectrum of the output signal? 2.9: An otherwise ideal mixer (multiplier) generates an internal DC offset which sums with the baseband signal prior to multiplication with the cos(2πfc t) carrier. Problem 2-49). (c) Express the waveform w(t) = 1.ECE 5520 Fall 2009 61 Make sure you know how to do each HW problem in HWs 1-3.  0. Consider the pulse shape x(t) = (a) Draw a plot of the pulse shape x(t). (b) Find the scale factors needed to scale the functions to make them into an orthonormal set over the interval (−4. Couch 2007.1 Additional Problems Spectrum of Communication Signals 1. It will be good to know the ‘tricks’ of how each problem is done.

14. 5. Rice 5. . a signal x(t) = si (t) + n(t) is received. 5. πt T . −T ≤ t ≤ 2 o. (From Couch 2007 Problem 2-80) An RC low-pass filter is the circuit shown in Figure 24.w.ECE 5520 Fall 2009 62 2. i. PSD 1.w. and n1 is zero-mean Gaussian with variance 0. At the receiver. Its transfer function is + R C + X(t) - Y(t) - Figure 24: An RC low-pass filter..13. Rice Example 5.1 on Page 219. 5. and n0 is zero-mean Gaussian with variance 0. 3. That is.4 Random Processes. 2.1.5 Correlation / Matched Filter Receivers 1. find the value of RC to satisfy the design specifications.1.23. Let a binary 1-D communication system be described by two possible signals. H(f ) = Y (f ) 1 = X(f ) 1 + j2πRCf Given that the PSD of the input signal is flat and constant at 1.e. design an RC low-pass filter that will attenuate this signal by 20 dB at 15 kHz. i. H0 : H1 : r = a0 + n0 r = a1 + n1 where α0 = −1 and α1 = 1.26 17. 5. (a) Is s1 (t) an energy-type or power-type signal? (b) What is the energy or power (depending on answer to (a)) in s1 (t)? (c) Describe in words and a block diagram the operation of an correlation receiver. (a) What is the conditional pdf of r given H0 ? (b) What is the conditional pdf of r given H1 ? 17.e.2.5. SX (f ) = 1. s1 (t) = s2 (t) = cos 0. 5. and noise with different variance depending on which signal is sent.. −T ≤ t ≤ 2 o. 5.3. A binary communication system uses the following equally-likely signals. . πt T T 2 T 2 − cos 0.16.

2 BER Function of Distance. when talking about signal space diagrams.8.16.0 = |a1 − a0 | (26) 18.k ) 2 d1. M 1/2 di. and with P [H0 ] = P [H1 ].k − aj. In the Gaussian noise case (with equal variance of noise in H0 and H1 ). Prove that when a sinc pulse gT (t) = sin(πt/T )/(πt/T ) is passed through its matched filter. we came up with threshold γ = (a0 + a1 )/2. Proakis & Salehi 7.1 Signal Distance We mentioned. T n1 = 0 T s1 (t)n(t)dt s2 (t)n(t)dt 0 n2 = Prove that E [n1 n2 ] = 0. Proakis & Salehi 7. white noise process is correlated with s1 (t) and 2 (t) to yield. k=1 (ai. 3.0 .1. two signal case. Lecture 11 Today: (1) M-ary PAM Intro from Lecture 9 notes (2) Probability of Error in PAM 18 Probability of Error in Binary PAM This is in Section 6. the output is the same sinc pulse.2 of Rice. and P [error] = P [x > γ|H0 ] P [H0 ] + P [x < γ|H1 ] P [H1 ] . there is only d1.ECE 5520 Fall 2009 63 2. So: 2 σN = N0 . A sample function n(t) of a zero-mean.j = ai − aj = For the 1-D. 2 (25) 18. Noise PSD We have consistently used a1 > a0 . Suppose that two signal waveforms s1 (t) and s2 (t) are orthogonal over the interval (0. How are N0 /2 and σN related? Recall from lecture 7 that the variance of nk came out to be N0 /2. T ). a distance between vectors. We had 2 also called the variance of noise component nk as σN .

 A2 2N0   P [error] = Q  . when A0 = −A and A1 = A. This will be important as we talk about binary PAM. using (25) and (26).1 2 N0 /2 = Q  d2 0. Now we can use (30) to compute the probability of error.3 Binary PAM Error Probabilities For binary PAM (M = 2) it takes one symbol to encode one bit. For bipolar signaling. P [x > γ|H0 ] = Q γ − a0 σN γ − a1 P [x < γ|H1 ] = 1 − Q σN =Q a1 − γ σN (27) Thus P [x > γ|H0 ] = Q P [x < γ|H1 ] = Q So both are equal. we’d have seen that in general.ECE 5520 Fall 2009 64 Which simplified using the Q function.1 2N0   (30) Is one that we will see over and over again in this class. so if we had also calculated for the case a1 < a0 . P [error] = Q (a1 − a0 )/2 σN (29) (a1 − a0 )/2 σN (a1 − a0 )/2 σN (28) We’d assumed a1 > a0 .   2 d0. and again with P [H0 ] = P [H1 ] = 1/2. the following formula works: |a1 − a0 | P [error] = Q 2σN Then. our signals s0 (t) = a0 p(t) and s1 (t) = a1 p(t) are just s0 = a0 and s1 = a1 . when A0 = 0 and A1 = A. In signal space. we have P [error] = Q d0. Thus ‘bit’ and ‘symbol’ are interchangeable. This expression.1  Q 2N0 18.     2 2 4A  2A  = Q P [error] = Q  2N0 N0 For unipolar signaling.

ECE 5520 Fall 2009 65 However. they take no signal energy. so Eb = Es = A2 . Since half of the bits are exactly zero. dB b 0 12 14 Figure 25: Probability of Error in binary PAM signalling. • What is the difference in Eb /N0 for a constant probability of error? • What is the difference in terms of probability of error for a given Eb /N0 ? Lecture 12 Today: (0) Exam 1 Return. • Even for the same bit energy Eb . For the unipolar signalling. 10 0 Theoretical Probability of Error 10 10 10 10 10 10 −1 Unipolar Bipolar −2 −3 −4 −5 −6 0 2 4 6 8 10 E / N ratio. what is the average energy of bipolar and unipolar signaling? Solution: For the bipolar signalling. A2 m m equals A2 with probability 1/2 and zero with probability 1/2. we re-write the probability of error expressions in terms of Eb . the bit error rate of bipolar PAM beats unipolar PAM. this doesn’t take into consideration that the unipolar signaling method uses only half of the energy to transmit. Using (24). P [error] = Q Discussion 2Eb 2N0 =Q Eb N0 These probabilities of error are shown in Figure 25.   2 4A  2Eb =Q P [error] = Q  2N0 N0 For unipolar signaling. (2) Probability of Error in M-ary PAM . Thus Eb = Es = 1 A2 . (1) Multi-Hypothesis Detection Theory. 2 Now. A2 = A2 always. For bipolar signaling.
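The bipolar/unipolar comparison above can be reproduced with a few lines of Python (a sketch of mine, assuming NumPy/SciPy; replace the prints with a semilog plot to recreate the curves of Figure 25):

import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

EbN0_dB = np.arange(0, 15)
EbN0 = 10 ** (EbN0_dB / 10)

Pe_bipolar = Q(np.sqrt(2 * EbN0))    # bipolar (antipodal) PAM
Pe_unipolar = Q(np.sqrt(EbN0))       # unipolar (on-off) PAM, same average Eb

for d, pb, pu in zip(EbN0_dB, Pe_bipolar, Pe_unipolar):
    print(d, "dB: bipolar", pb, " unipolar", pu)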

Essentially. The decision between two neighboring signal vectors ai and ai+1 will be r Hi+1 > < Hi γi. Symbol Decision = arg max fX|Hi (x|Hi ) i 2 For Gaussian (conditional) r. See Figure 26.) If P [H0 ] = · · · = P [HM −1 ] then we only need to find the i that makes the likelihood fX|Hi (x|Hi ) maximum. For this class.i+1 = ai + ai+1 2 As an example: 4-ary PAM. it is very rare in higher M communication systems. we saw that our decision was based on which of the following was higher: fX|H0 (x|H0 )P [H0 ] and fX|H1 (x|H1 )P [H1 ] For M -ary signals. we’ll see that we need to consider the highest of all M possible events Hm . it is better to maximize the log of the likelihood rather than the likelihood directly. we’ll only consider the case of equally probable signals.ECE 5520 Fall 2009 66 19 Detection with Multiple Symbols When we only had s0 (t) and s1 (t). so (x − ai )2 1 2 log fX|Hi (x|Hi ) = − log(2πσN ) − 2 2 2σN This is maximized when (x − ai )2 is minimized. H0 : H1 : HM −1 : ··· fX|H0 (x|H0 )P [H0 ] fX|H1 (x|H1 )P [H1 ] fX|HM −1 (x|HM −1 )P [HM −1 ] ··· ··· r(t) = s0 (t) + n(t) r(t) = s1 (t) + n(t) r(t) = sM (t) + n(t) ··· We will find which of these joint probabilities is highest.s with equal variances σN . So. maximum likelihood detection. (While equi-probable signals is sometimes not the case for M = 2 binary detection. H0 : H1 : HM −1 : which have joint probabilities.v. that is. a1 g12 a2 g23 a3 g34 a4 r Figure 26: Signal space representation and decision thresholds for 4-ary PAM. find the ai which is closest to x. the decision is. . this is a (squared) distance between x and ai .
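A minimal sketch (my own, Python/NumPy) of the threshold decision just described for 4-ary PAM. The thresholds gamma_{i,i+1} sit halfway between neighboring amplitudes, so finding which bin x falls into is the same as choosing the closest a_i:

import numpy as np

amps = np.array([-3.0, -1.0, 1.0, 3.0])        # 4-ary PAM amplitudes a_0..a_3 (A = 1, illustrative)
thresholds = (amps[:-1] + amps[1:]) / 2        # gamma_{i,i+1} = (a_i + a_{i+1}) / 2

def detect(x):
    # Return the index of the amplitude closest to the matched-filter output x
    return int(np.searchsorted(thresholds, x))

for x in (-2.7, -0.2, 0.4, 5.0):
    i = detect(x)
    print("x =", x, " -> decide H%d (amplitude %+.0f)" % (i, amps[i]))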

Assuming neighboring symbols ai are spaced by 2A. Also the noise σN = N0 /2. Eb = A= So P (symbol error) = Equation (31) is plotted in Figure 27. For the symbols i in the ‘middle’. as discussed above. P (symbol error|Hi ) = Q So overall.ECE 5520 Fall 2009 67 20 20. .1. the decision threshold is always A away from the symbol values.1 M-ary PAM Probability of Error Symbol Error The probability that we don’t get the symbol correct is the probability that x does not fall within the range 2 between the thresholds in which it belongs. Here. P (symbol error) = 20. P (symbol error|Hi ) = 2Q For the symbols i on the ‘sides’. 10 0 which means that 3 log2 M Eb M2 − 1 6 log2 M Eb M 2 − 1 N0 (31) 2(M − 1) Q M Theoretical Probability of Error 10 10 10 10 10 10 −1 −2 −3 −4 −5 M=2 M=4 M=8 M=16 −6 0 2 4 6 8 10 E / N ratio.1 2(M − 1) Q M A N0 /2 A N0 /2 A N0 /2 Symbol Error Rate and Average Bit Energy (M 2 −1) 2 1 A . each ai = Ai . dB b 0 12 14 Figure 27: Probability of Symbol Error in M -ary PAM. log2 M 3 How does this relate to the average bit energy Eb ? From last lecture.

g. as in Option 2 in Figure 28.1 Bit Error Probabilities How many bit errors are caused by a symbol error in M -ary PAM? • One. The key is to recall the model for noise. 20. more than multiple bits flipped. the errors will tend to be just one bit flipped. QAM) when this is not a good approximation. M . we have to carefully assign bit codes to symbols 1 . . As of now . bits and symbols are not synonymous. up to log2 M in the worst case. one will lead to a higher bit error rate than the other.2 Bit Errors and Gray Encoding For binary PAM. Is one is better or worse? Option 1: “00” “11” “01” “10” Option 2: “00” “01” “11” “10” -1 0 1 Figure 28: Two options for assigning bits to symbols. there are only two symbols. The neighbors of ai will be more likely than distant signal space vectors. Example: Bit coding of M = 4 symbols While the two options shown in Figure 28 both assign 2-bits to each symbol in unique ways. . At least at high Eb /N0 . Instead. Then. Example: Bit coding of M = 8 symbols Assign three bits to each symbol such that any two nearest neighbors are different in only one bit (Gray encoding).ECE 5520 Fall 2009 68 20.use (32). in almost all cases. When you make one symbol error (decide H0 or H1 in error) then it will cause one bit error. It will not shift a signal uniformly across symbols. one will be assigned binary 0 and the other binary 1.2. It will tend to leave the received signal x close to the original signal ai . . Thus Gray encoding will only change one bit across boundaries.. Solution: Here is one solution. If Gray encoding is used. For M -ary PAM. P (error) ≈ 1 P (symbol error) log2 M (32) • Maybe more.if you have to estimate a bit error rate for M -ary PAM . We will show that this approximation is very good. We will also show examples in multi-dimensional signalling (e. “110” “111” “101” “100” “000” “001” “011” “010” f1 -1 0 2 1 3 -3 -2 Figure 29: Gray encoding for 8-ary PAM. we need to study further the probability that x will jump more than one decision region.

Channel: The wideband radio channel is a sum of time-delayed impulses. Receiver: Bandpass filters are used to reduce interference. (33) where αl and φl are the amplitude and phase of the lth multipath component. and τl is its time delay. call its impulse response h(τ. Typically. 2. then r(t) = s(t) ⋆ h(τ. or scattering adds an additional impulse to (33) with a particular time delay (depending on the extra path length compared to the line-of-sight path) and complex amplitude (depending on the losses and phase shifts experienced along the path). RF filters are used to limit the spectral output of the signal (to meet out-of-band interference requirements). (2) N-Dim Detection Theory 21 Inter-symbol Interference 1. .1 Multipath Radio Channel (Background knowledge). diffraction. noise power entering receiver. Wired channels have (nonideal) bandwidth – the attenuation of a cable or twisted pair is a function of frequency. If we try (1) above and use FDMA (frequency division multiple access) then the interference is out-of-band interference. Note that the ‘matched filter’ receiver is also a filter! 21. 3.ECE 5520 Fall 2009 69 Lecture 13 Today: (1) Pulse Shaping / ISI. 1. If we try (2) above and put symbols right next to each other in time. the radio channel is modeled as L h(τ. Filters will be seen in 1. speed of digital circuitry. Consider the spectrum of the ideal 1-D PAM system with a square pulse. Waveguides have bandwidth limits (and modes). and (2) occupies too much time. consider the effect of filtering in the path between transmitter and receiver. We don’t want either: (1) occupies too much spectrum. In reality. Essentially. t) will be received. Transmitter: Baseband filters come from the limited bandwidth. t). each reflection. our own symbols experience interference called inter-symbol interference. The measured radio channel is a filter. but multipath with delays of hundreds of nanoseconds often exist in indoor links and delays of several microseconds often exist in outdoor links. 2. The amplitudes of the impulses tend to decay over time. If s(t) is transmitted. we want to compromise between (1) and (2) and experience only a small amount of both. 2. Consider the time-domain of the signal which is close to a rect function in the frequency domain. Now. t) = l=1 αl ejφl δ(τ − τl ).

ns 300 400 Figure 30: Multiple delay profiles measured on two example links: (a) 13 to 43. and (b) 14 to 43. each symbol would fully bleed into the next symbol.05 0 0 Amplitude Delay Profile 100 200 Delay.ECE 5520 Fall 2009 70 The channel in (33) has infinite bandwidth (why?) so isn’t really what we’d see if we looked just at a narrow band of the channel.2 0.2 Transmitter and Receiver Filters Example: Bipolar PAM Signal Input to a Filter How does this effect the correlation receiver and detector? Symbols sent over time are no longer orthogonal – one symbol ‘leaks’ into the next (or beyond). How about for a 5 kHz symbol rate? Why or why not? Other solutions: • OFDM (Orthogonal Frequency Division Multiplexing) • Direct Sequence Spread Spectrum / CDMA • Adaptive Equalization 21. Actually what we observe PDP(t) = r(t) ⋆ x(t) = (x(t) ⋆ h(t)) ⋆ x(t) PDP(t) = r(t) ⋆ x(t) = RX (t) ⋆ h(t) PDP(f ) = H(f )SX (f ) (34) Note that 200 ns is 0.05 0 0 100 (a) (b) 200 Delay.15 0. 0. ns 300 400 0. Key insight: .15 0. 22 Nyquist Filtering • We don’t need the leakage to be zero at all time.1 0.25 Amplitude Delay Profile 0. Example: 2400-2480 MHz Measured Radio Channel The ‘Delay Profile’ is shown in Figure 30 for an example radio channel.25 0. At this symbol rate.1 0.2 0.2µs is 1/ 5 MHz.

making the sum of ‘leakage’ or ISI from one symbol be exactly 0 at all other sampling times nTs .t. X f+ m Ts = Ts Proof: on page 833. of Rice book.−1.. symmetric about f = 1/(2Ts ). The Nyquist filtering theorem provides an easy means to come up with others. Where have you seen this condition before? This condition. What we need are pulse shapes (waveforms). we need p(t) such that {φn (t) = p(t − nTs )}n=. ∆ 1 −f 2Ts =∆ 1 +f 2Ts 1 Ts + X(f ) + X f + 1 Ts + ··· for −1/Ts ≤ f < 1/Ts . are orthogonal to each other..ECE 5520 Fall 2009 • The value out of the matched filter (correlator) is taken only at times nTs .e. for integer n. forms an orthonormal basis. Theorem: Nyquist Filtering Proof: A necessary and sufficient condition for x(t) to satisfy x(nTs ) = is that its Fourier transform X(f ) satisfy ∞ m=−∞ 1. X(f ) = 0 for all |f | > Ts ).. The square root raised cosine.w. In more mathematical language.. that there is an inner product of zero. Thus X(f ) is allowed to bleed into other frequency bands – but the neighboring frequency-shifted copy of X(f ) must be lower s. Appendix A. shown in Figure 32 accomplishes this. But lots of other functions also accomplish this. 71 Nyquist filtering has the objective of. we denote the difference between the ideal rect function and our X(f ) as ∆(f ). See Figure 31. o. exactly constant within − 2Ts < f < 1 2Ts band and zero outside. Essentially. the sum is constant.. • It may bleed into the next ‘channel’ but the sum of ··· + X f − must be constant across all f . See Figure 31 ..1. Basically. n = 0 0.. at the matched filter output. is the condition for orthogonality of two waveforms. X(f ) could be: 1 • X(f ) = rect(f Ts ).0. 1 If X(f ) only bleeds into one neighboring channel (that is. ∆(f ) = |X(f ) − rect(f Ts )| then we can rewrite the Nyquist Filtering condition as. that when shifted in time by Ts . i.
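The Nyquist criterion is easy to verify numerically for a candidate spectrum. The sketch below (my own, Python/NumPy; not from the notes) sums frequency-shifted copies of an example X(f) over one period and checks that the sum is constant. The example spectrum is a triangle, X(f) = Ts(1 - |f|Ts) for |f| <= 1/Ts, which corresponds to x(t) = sinc^2(t/Ts) and therefore satisfies x(nTs) = 0 for n != 0:

import numpy as np

Ts = 1.0
f = np.linspace(-0.5 / Ts, 0.5 / Ts, 1001)    # one period of the folded spectrum

def X_tri(f):
    # Example Nyquist spectrum: triangle of height Ts with support |f| <= 1/Ts
    return np.where(np.abs(f) <= 1 / Ts, Ts * (1 - np.abs(f) * Ts), 0.0)

# Sum the frequency-shifted copies X(f + m/Ts); a handful of shifts is enough here
total = sum(X_tri(f + m / Ts) for m in range(-3, 4))

print("max deviation from Ts:", np.max(np.abs(total - Ts)))   # ~0 -> criterion satisfied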

aM −1 .  √ 0 ≤ |f | ≤ 1−α  Ts . . Setup: • Transmit: one of M possible symbols. the middle is 0. ≤ |f | 1−α 2Ts ≤ 1+α 2Ts (35) But we usually need do design a system with two identical filters.5 × 1/Ts .ECE 5520 Fall 2009 72 X(f) + X(f+1/T) X(f) X(f+1/T) f 1 1 2T T Figure 31: The Nyquist filtering criterion – 1/Ts -frequency-shifted copies of X(f ) must add up to a constant. 23 M-ary Detection Theory in N -dimensional signal space We are going to start to talk about QAM. 22. one at the transmitter and one at the receiver.” Activity: Do-it-yourself Nyquist filter. In other words. which in series. |f | − 1−α 2Ts . Leave a tiny bit so that it is not completely disconnected into two halves. and then in half the other direction. We have developed M -ary detection theory for 1-D signal vectors.  Ts 2 1 + cos πTs α 22. and the vertical axis it X(f ). 1−α ≤ |f | ≤ 1+α 2 α 2Ts 2Ts 2Ts    0. Bateman presents this whole condition as “A Nyquist channel response is characterized by the transfer function having a transition band that is symmetrical about a frequency equal to 0. Take a sheet of paper and fold it in half. 2Ts   Ts (36) HRRC (f ) = 1 + cos πTs |f | − 1−α .w. and later even higher dimensional signal vectors. and now we will extend that to N -dimensional signal vectors. Unfold. a0 .2 Square-Root Raised Cosine Filtering   0. a modulation with two dimensional signal vectors. . we need |H(f )|2 to meet the Nyquist filtering condition.w.5/Ts .1 Raised Cosine Filtering HRC (f ) =   Ts . produce a zero-ISI filter. . . o. Cut a function in the thickest side (the edge that you just folded). . Drawing a horizontal line for the frequency f axis. 0 ≤ |f | ≤ 1−α 2Ts o.

• Assume: Symbols are equally likely.05 0 −0.2 0.15 0. delayed in time by nTs for any integer n. are orthogonal to each other.j )2 2 1 2 (2πσN )N/2 x − ai 2 2σN x − ai 2 2σN So when we want to solve (37) we can simplify quickly to: ˆ = argmax log i i 1 2 (2πσN )N/2 2 2 − ˆ = argmax − x − ai i 2 2σN i ˆ = argmin x − ai 2 i i ˆ = argmin x − ai i i (38) . each component ni is independent with zero mean and variance σN = N0 /2.ECE 5520 Fall 2009 73 0.1 0.25 Basis Function φ (t) 0.05 −6 −5 −4 −3 −2 −1 i 0 1 2 Time. the optimal decision turns out to be given by the maximum likelihood receiver. • Question: What are the optimal decision regions? When symbols are equally likely. t/Tsy 3 4 5 6 7 Figure 32: Two SRRC Pulses. ˆ = argmax log fX|H (x|Hi ) i (37) i i Here. • Receive: the symbol plus noise: H0 : HM −1 : ··· X = a0 + n X = aM −1 + n ··· • Assume: n is multivariate Gaussian. fX|Hi (x|Hi ) = which can also be written as fX|Hi (x|Hi ) = 1 2 (2πσN )N/2 exp − exp − j (xj 2 2σN − ai.
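A minimal sketch (mine, Python/NumPy) of the minimum-distance rule (38): compute ||x - a_i||^2 for every candidate and pick the smallest. The QPSK-like constellation and the received vector used here are only illustrative:

import numpy as np

def min_distance_detect(x, constellation):
    # Return the index i of the constellation point a_i closest to the received vector x
    x = np.asarray(x, dtype=float)
    a = np.asarray(constellation, dtype=float)
    d2 = np.sum((a - x) ** 2, axis=1)          # squared distances ||x - a_i||^2
    return int(np.argmin(d2))

A = 1.0
points = A * np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]])   # illustrative 2-D signal set
x = np.array([0.8, -1.3])                                     # a received (noisy) vector
print("decide symbol", min_distance_detect(x, points))        # -> 3 for this x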

and reorganize to find a linear equation in terms of x. p(t) is only non-zero within that window. Lecture 14 Today: (1) QAM & PSK. That is. Draw a line segment connecting ai and aj . Decision Regions Each pairwise comparison results in a linear division of space. This is left as an exercise. Draw the perpendicular bisector of the line segment through that point.ECE 5520 Fall 2009 74 Again: Just find the ai in the signal space diagram which is closest to x. To find this line: 1. 3. (2) QAM Probability of Error 24 Quadrature Amplitude Modulation (QAM) Quadrature Amplitude Modulation (QAM) is a two-dimensional signalling method which uses the in-phase and quadrature (cosine and sine waves. Draw a point in the middle of that line segment. (All conditions must be satisfied. respectively) as the two dimensions. we’ve separated frequency up-conversion and down-conversion from pulse shaping. Pairwise Comparisons When is x closer to ai than to aj for some other signal space point j? Solution: The two decision regions are separated by a straight line (Note: replace “line” with plane in 3-D. Thus QAM uses two basis functions. cancel. These are: √ 2p(t) cos(ω0 t) φ0 (t) = √ φ1 (t) = 2p(t) sin(ω0 t) where p(t) is a pulse shape (like the ones we’ve looked at previously) with support on T1 ≤ t ≤ T2 . or subspace in N -D). This definition of the orthonormal basis specifically considers a pulse shape at a frequency ω0 . 2. The combined decision region of Ri is the space which is the intersection of all pair-wise decision regions. Previously. We include it here because . Why? Solution: Try to find the locus of point x which satisfy x − ai 2 = x − aj 2 You can do this by using the inner product to represent the magnitude squared operator: (x − ai )T (x − ai ) = (x − aj )T (x − aj ) Then use FOIL (multiply out).) Example: Optimal Decision Regions See Figure 33.

Draw the optimal decision regions.

Figure 33: Example signal space diagrams (a)-(d).

it has low frequency content compared to ω0 t.ECE 5520 Fall 2009 76 it is critical to see how. 24. consider the constant p(t) = c for a constant c.1 Showing Orthogonality Let’s show that the basis functions are orthogonal. for T1 ≤ t ≤ T2 . φ1 (t) We can see that there are two cases: = c2 sin(ω0 (T2 + T1 )) sin(ω0 (T2 − T1 )) ω0 . we can have two orthogonal basis functions. with the same pulse shape p(t). φ0 (t). Are the two bases orthogonal? First. see how far we can get before plugging in for p(t): T2 φ0 (t). φ1 (t) = c2 T2 sin(2ω0 t)dt T1 = c2 − = −c2 cos(2ω0 t) 2ω0 T2 T1 cos(2ω0 T2 ) − cos(2ω0 T1 ) 2ω0 cos(2ω0 T2 ) − cos(2ω0 T1 ) = −c2 2ω0 I want the answer in terms of T2 − T1 (for reasons explained below). for T1 ≤ t ≤ T2 . that is. show that φ0 (t). We’ll need these cosine / sine function relationships: sin(2A) = 2 cos A sin A A+B cos A − cos B = −2 sin 2 sin A−B 2 As a first example. φ1 (t) = c2 sin(ω0 (T2 + T1 )) sin(ω0 (T2 − T1 )) ω0 Solution: First. φ1 (t) = T1 √ √ 2p(t) cos(ω0 t) 2p(t) sin(ω0 t)dt T2 = 2 T1 T2 p2 (t) cos(ω0 t) sin(ω0 t)dt = T1 p2 (t) sin(2ω0 t)dt Next. so φ0 (t). let p(t) = c for a constant c . (This is not intuitive!) We have two restrictions on p(t) that makes these two basis functions orthonormal: • p(t) is unit-energy. • p(t) is ‘low pass’.

when we divide by numbers on the order of 106 or 109 . . Proof? Solution: How many cycles are there? Consider the period [T1 . the numerator bounded above and below by +1 and -1. How many times can it be divided by π/ω0 ? Let the integer number be (T2 − T1 )ω0 . Zooming in on any few cycles. T1 + iπ/ω0 ]. . . The only remainder is the remainder (partial cycle) period. . this part of the integral is zero here. T2 ]. φ1 (t) ω0 ω0 which for very large ω0 . we can attempt the proof for the case of arbitrary pulse shape p(t). ak. That is. is nearly zero. for i = 1. [T1 + Lπ/ω0 . when we integrate p2 (t) sin(2ω0 t) across one cycle. Thus. This is well-illustrated in Figure 5. you can see that the pulse p2 (t) is largely constant across each cycle. φ1 (t) ≤ ω0 ω0 Typically. [T1 + (i − 1)π/ω0 . − c2 c2 ≤ φ0 (t). assume p2 (t) is nearly constant. In this case. L. For engineering purposes. . where each signal space vector ak is two-dimensional: ak = [ak.0 . we’re going to get a very small inner product. Otherwise. we’re going to end up with approximately zero. That is. and so the correlation is exactly zero. Finally. In this figure. L= π Then each cycle is in the period. M -ary QAM is defined as an arbitrary signal set a0 . Within each of these cycles. frequencies 2πω0 could be in the MHz or GHz ranges. . Certainly. Then.1 ]T . T2 ]. aM −1 . Thus φ0 (t).15 in the Rice book (page 298). .2 Constellation With these two basis functions. in the same way that the earlier integral was zero. In this case. φ1 (t) = p2 (T1 + Lπ/ω0 ) = p2 (T1 + Lπ/ω0 ) T2 sin(2ω0 t)dt T1 +Lπ/ω0 cos(2ω0 T2 ) − cos(2ω0 T1 + 2πL) 2ω0 sin(ω0 (T2 + T1 )) sin(ω0 (T2 − T1 )) = p2 (T1 + Lπ/ω0 ) ω0 Again. The pulse duration T2 − T1 is an integer number of periods. 24. . because it is a sinusoid. ω0 (T2 − T1 ) = πk for some integer k. For example. This assumption allows us to assume that p(t) is nearly constant over the course of one cycle of the carrier sinusoid. the inner product is bounded on either side: − p2 (T1 + Lπ/ω0 ) p2 (T1 + Lπ/ω0 ) ≤ φ0 (t). we see a pulse modulated by a sine wave at frequency 2πω0 . the right sin is zero. 2.ECE 5520 Fall 2009 77 1. ω0 is a large number. φ0 (t) and φ1 (t) are orthogonal. we use the ‘low-pass’ assumption that the maximum frequency content of p(t) is much lower than 2πω0 .

we can see that the signal space representation ak is given by ak = [ak. . or use squares with the corners cut out. • Figure 5. but you should be able to read other books and know what they’re talking about. . These constellations use M = 2a for some even integer a.1 ) This is called ‘Complex Baseband’.1 ]T for k = 0. • See Figure 5. you will see them write a QAM signal in shorthand as sCB (t) = p(t)(ak.ECE 5520 Fall 2009 78 The signal corresponding to symbol k in (M -ary) QAM is thus s(t) = ak. . M − 1 24. One such diagram for M = 64 square QAM is also given here in Figure 34.0 φ0 (t) + ak.1 ) (39) In many textbooks.0 .0 e−jω0 t + jak.1 sin(ω0 t)] = Note that we could also write the signal s(t) as √ 2p(t)R ak. If you do the following operation you can recover the real signal s(t) as √ s(t) = 2R e−jω0 t sSB (t) You will not be responsible for Complex Baseband notation.0 2p(t) cos(ω0 t) + ak.1 2p(t) sin(ω0 t) √ 2p(t) [ak.1 φ1 (t) √ √ = ak.1 e−jω0 t s(t) = √ = 2p(t)R e−jω0 t (ak.17 for examples of square QAM.0 cos(ω0 t) + ak.18 shows examples of constellations which use M = 2a for some odd integer a. These are either rectangular grids. . .0 + jak. and arrange the points in a grid.3 Signal Constellations M=64 QAM Figure 34: Square signal constellation for 64-QAM. ak.0 + jak. Then. and arrange the points in a grid.

24. each with up to 28 = 256 QAM.Lite uses up to 128 carriers. .e. We’ll work in class some examples of finding Eb in different constellation diagrams. M − 1 are uniformly spaced on the unit circle. • DSL.7 Systems which use QAM See [Couch 2007]. • Dial-up modems: use a M = 16 or M = 8 QAM constellation. and 11 GHz.4 Angle and Magnitude Representation a2 + a2 k. .0 k.0 . G. Thus BPSK is the same as bipolar (2-ary) PAM. binary phase-shift keying (BPSK) is the case of M = 2. |a0 | = |a1 | = · · · = |aM −1 | The points ai for i = 0.6 Phase-Shift Keying Some implementations of QAM limit the constellation to include only signal space vectors with equal magnitude.3kHz) carrier. What is the probability of error in BPSK? The same as in bipolar PAM. 6 GHz.5 Average Energy in M-QAM Recall that the average energy is calculated as: Es = Eb = 1 M M −1 i=0 |ai |2 M −1 i=0 1 M log2 M |ai |2 where Es is the average energy per symbol and Eb is the average energy per bit. G. . Note how QPSK is the same as M = 4 square QAM. and is shown in Figure 35(a). . 2Eb .1 ak. i. BPSK For example. . wikipedia. it can send up to 215 QAM (32. Some examples are shown in Figure 35. P [error] = Q N0 QPSK M = 4 PSK is also called quadrature phase shift keying (QPSK). i.768 QAM). You can plot ak in signal space and see that it has a magnitude (distance from the origin) of |ak | = and angle of ∠ak = tan−1 In the continuous time signal s(t) this is s(t) = √ 2p(t)|ak | cos(ω0 t + ∠ak ) 24. so both ‘versions’ are identical in concept (although would be a slightly different implementation).DMT uses multicarrier (up to 256 carriers) methods (OFDM).1 ak.. Note that the rotation of the signal space diagram doesn’t matter. 24. and the Rice book: • Digital Microwave Relay. and on each narrowband (4.ECE 5520 Fall 2009 79 24.e. various manufacturer-specific protocols. for equally probable symbols..

2. 25 25. Points with equal distances separating them and their neighboring points tend to reduce probability of error. • 802.11g: Adaptive modulation method. you should realize that there are two main considerations when a constellation is designed for a particular M : 1. Upstream: 6 MHz bandwidth channel. . Downstream: QPSK or 16 QAM. Some constellations have more complicated decision regions which require more computation to implement in a receiver.11a and 802. There are also some practical considerations that we will discuss 1. uses up to 64 QAM.1 QAM Probability of Error Overview of Future Discussions on QAM Before we get into details about the probabilities of error. • Digital Video Broadcast (DVB): APSK used in ETSI standard. 2. with 64 QAM or 256 QAM. The highest magnitude points (furthest from the origin) strongly (negatively) impact average bit energy. Some constellations are more difficult (energy-consuming) to amplify at the transmitter. • Cable modems.ECE 5520 Fall 2009 80 QPSK QPSK (a) M=8 M=16 (b) Figure 35: Signal constellations for (a) M = 4 PSK and (b) M = 8 and M = 16 PSK.

Similarly. It is not an approximation. consider M -ary PSK. P (symbol error) = 1 − r∈Ri 1 − e 2πσ 2 r−α i 2 2σ 2 (40) This integral is a double integral. This is a way to get a solution that is analytically easier to handle. We needed a Q function to get a tail probability for a 1-D Gaussian pdf. Now we need not just a tail probability. They require an integration of a N -D Gaussian pdf across an area. and x1 has no impact on the decision . the second bit decision is made using only x2 . E Typically these approximations are good at high Nb . 0 25. The decision is made using only one dimension. Exact formula. 3. Nearest Neighbor Approximation. For example..4 Probability of Error in QPSK In QPSK. Union bound. This is because our decisions regions are more complex than one threshold test. The decisions are decoupled – x2 has no impact on the decision about bit one. the probability of error is analytically tractable. specifically x1 .2 Options for Probability of Error Expressions In order of preference: 1. In a few cases. 2. Consider the QPSK constellation diagram. Essentially. the noise contribution to each element is independent. and we don’t generally have any exact expression to use to express the result in general. π minus the area in the sector within ± M of the correct angle φi .ECE 5520 Fall 2009 81 25. It can be used for “worst case” analysis which is often very useful for engineering design of systems. You have already calculated the decision regions for each symbol. This is a provable upper bound on the probability of error. but more like a section probability which is the volume under some part of the N -dimensional Gaussian pdf.3 Exact Error Analysis Exact probability of error formulas in N -dimensional modulations can be very difficult to find. there is an exact expression for P [error] in an AWGN environment. when Gray encoding is used. Also. 25.. This is. of the received signal vector x. now consider the decision region for the first bit. we must find calculate the probability of symbol error as 1 M=8 M=16 Figure 36: Signal space diagram for M -ary PSK for M = 8 and M = 16.

n P [symbol error] ≤ P [Ei ] i=1 (44) The union bound gives a conservative estimate. and the first axiom of probability. As we will show later. In this case. we have that P [E ∪ F ] ≤ P [E] + P [F ] .5 Union Bound From 5510 or an equivalent class (or a Venn diagram) you may recall the probability formula. Furthermore.ECE 5520 Fall 2009 82 on bit two. It is useful when the overlaps Ei ∩ Ej are small but difficult to calculate. know this one. 25. . P [symbol error] = 1 − 1−Q 2Eb N0 2 (45) In contrast. En we have that n n (42) P i=1 Ei ≤ P [Ei ] i=1 (43) This is called the union bound. when some other symbol not equal to i was actually sent. E2 . Assume s1 (t) is sent. (This holds for any events E and F !) Then. and it is very useful across communications. that for two events E and F that P [E ∪ F ] = P [E] + P [F ] − P [E ∩ F ] You can prove this from the three axioms of probability. let’s calculate the union bound on the probability of error. We define the two error events as: . This can be useful for quick initial study of a modulation type. using the above formula. let’s study the union bound for QPSK. . but in theory. our events are typically decision events. as shown in Figure 37(a). Example: QPSK First.6 Application of Union Bound In this class. . We know that (from previous analysis): 2Eb P [error] = Q N0 So the probability of symbol error is one minus the probability that there were no error in either bit. Since we know the bit error probability for each bit decision (it is the same as bipolar PAM) we can see that the bit error probability is also P [error] = Q 2Eb N0 (41) This is an extraordinary result – the bit rate will double in QPSK. The event Ei may represent the event that we decide Hi . the bandwidth of QPSK is identical to that of BPSK. from (42) it is straightforward to show that for any list of sets E1 . If you know one inequality. 25. the bit error rate does not increase.

(a) is QPSK with symbols of equal energy [0. Is E2 ∪ E0 ∪ E3 = E2 ∪ E0 ? Why or why not? Because of the answer to the question (2. and both are identical. both produce nearly identical numerical results. Then we write P [symbol error|H1 ] = P [E2 ∪ E0 ∪ E3 ] Questions: 1. • E2 . The amplitudes of the top and bottom symbols are 3 times the . Only at very low Eb /N0 is there any noticeable difference! Example: 4-QAM with two amplitude levels √ This is shown (poorly) in Figure 37(b). In (b) a1 = −a3 = E2 a2 E4 a0 Figure 37: Union bound examples. so P [symbol error|H1 ] ≤ 2Q 2Eb N0 What is missing / What is the difference in this expression compared to (45)? See Figure 38 to see the union bound probability of error plot. we use the union bound. and in fact. the event that r falls on the wrong side of the decision boundary with a2 . compared to the exact expression. However. The Rice book will not tell you to do this! His strategy is to ignore redundancies.). 0]T . Is this equation exact? 2. it is perfectly acceptable (and in fact a better bound) to remove redundancies and then apply the bound. • E0 . Then. Either way produces an upper bound. the event that r falls on the wrong side of the decision boundary with a0 . 3Es /2]T and a2 = −a0 = [ Es /2.ECE 5520 Fall 2009 83 a1 a1 E2 a2 (a) a3 E4 a0 (b) a3 √ Es . P [symbol error|H1 ] = P [E2 ∪ E0 ] But don’t be confused. because we are only looking for an upper bound any way. P [symbol error|H1 ] ≤ P [E2 ] + P [E0 ] These two probabilities are just the probability of error for a binary modulation.

the probability is the same as above. the exact probability of symbol error expression vs.5) = Es Eav = Es 4 Thus there is no advantage to the two-amplitude QAM modulation in terms of average energy. overall. dB 5 6 Figure 38: For QPSK. given H1 ? Solution: Given symbol 1. the union bound. Thus Eav = Es . What is the union bound on the probability of symbol error. it is 2Eb N0 P [symbol error|H2 ] ≤ 3Q So. these two distances between symbol a1 and a2 or a0 are the same: 2Es /N0 . the symbol energies are all equal. What is the union bound on the probability of symbol error.1 General Formula for Union Bound-based Probability of Error This is given in Rice 6.) I am calling this ”2-amplitude 4-QAM” (I made it up).ECE 5520 Fall 2009 84 Probability of Symbol Error 10 −1 10 −2 QPSK Exact QPSK Union Bound 0 1 2 3 4 Eb/N0 Ratio. For the twoamplitude 4-QAM modulation. given H2 ? Solution: Now. It is probably less than this value.76: P [symbol error] ≤ 1 M M −1 M −1 m=0 n=0 n=m P [decide Hn |Hm ] Note the ≤ sign. This means the actual P [symbol error] will be at most this value. This is a general formula and is not necessarily the best upper bound. amplitude of the symbols on the right and left. 2(0. Defining E2 and E0 as above.6.5) + 2(1. the union bound on probability of symbol error is P [symbol error|H2 ] ≤ 3·2+2·2 Q 4 2Eb N0 How about average energy? For QPSK. (They are positioned to keep the distance between points in the signal space equal to 2Es /N0 . Thus the formula for the union bound is the same. What do we mean? If I draw any . 25.

n . A little extra distance means a much lower value of the Q-function. then. M − 1.1) with the smallest dm. . In particular. the factors A will cancel out of the expression.n Es  = Q  (46) P [decide Hn |Hm ] = Q  2N0 2Es N0 where dm.6. which in turn means a lower value of the Q-function.ECE 5520 Fall 2009 85 function that is always above the actual P [symbol error] formula. Note that the Rice book uses Eavg where I use Es to denote average symbol energy. Proof of (46)? Lecture 14 Today: (1) QAM Probability of Error. the pairwise probability of error. . and n = 0. . and Nmin is the number of pairs of symbols which are separated by distance dmin . the probability of error is often well approximated by the terms in the Union Bound (Lecture 14. (2) Union Bound and Approximations Lecture 15 Today: (1) M-ary PSK and QAM Analysis 26 26.n will be proportional to A2 .1 QAM Probability of Error Nearest-Neighbor Approximate Probability of Error As it turns out. and do not need to be included.n . . is given by:     2 2 dm. some higher than others. If we use a constant A when describing the signal space vectors (as we usually do). I prefer Es to make clear we are talking about Joules per symbol. But I could draw lots of functions that are upper bounds. This is because higher dm.n dm. M − 1. decide Hn |Hm may be redundant. So approximately. The Rice book uses Eb where I use Eb to denote average bit energy. . .   Nmin  d2  min Q P [symbol error] ≈ M 2N0   Nmin  d2 Es  min P [symbol error] ≈ Q (47) M 2Es N0 where dmin = min dm. These neighbors are the ones that are necessary to include in the union bound. We will talk about the concept of “neighboring” symbols to symbol i.n is the Euclidean distance between the symbols m and n in the signal space diagram. for some of the error events. Section 2. P [decide Hn |Hm ]. I have drawn an upper bound. . since Es will be proportional to 2 A2 and dm.n means a higher argument in the Q-function. Given this. . . m=n m = 0.

j) and (j. These steps include: 1.2 Summary and Examples We’ve been formulating in class a pattern or step-by-step process for finding the probability of bit or symbol error in arbitrary M-ary PSK and QAM modulations. Calculate the average energy per bit. 4. di. 3. and Nmin is the number of ordered pairs (i. Calculate the average energy per symbol. if they are not already. Approximate using the nearest neighbor approximation if needed:   Nmin  d2  min Q P [symbol error] ≈ M 2N0 where dmin is the shortest pairwise distance in the constellation diagram. that is. 2. • The nearest-neighbor approximation. Eb . Convert distances to be in terms of Eb /N0 . 5. 8. using the answer to step (1.). i)!). Calculate the union bound on probability of symbol error using the formula derived in the previous lecture. .j = dmin (don’t forget to count both (i. Convert probability of symbol error into probability of bit error if Gray encoding can be assumed. P [error] ≈ 1 P [symbol error] log2 M Example: Additional QAM / PSK Constellations Solve for the probability of symbol error in the signal space diagrams in Figure 39. using Eb = Es / log2 M . You should calculate: • An exact expression if one should happen to be available. j) which are that distance apart. 6. P [symbol error] ≤ 1 M M −1 M −1 m=0 n=0 n=m P [decide Hn |Hm ] P [decide Hn |Hm ] = Q   d2 m.n  2N0  7. • The Union bound. an error region which is completely contained in a larger error region. as a function of any amplitude parameters (typically we’ve been using A). Calculate pair-wise distances between pairs of symbols which contribute to the decision boundaries. Es .ECE 5520 Fall 2009 86 26. Draw decision boundaries. It is not necessary to include redundant boundary information.

Figure 39: Signal space diagram for some example (made up) constellation diagrams.
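The step-by-step procedure above can be mechanized. The sketch below (my own, Python/SciPy, not from the notes) computes the general union bound and the nearest-neighbor approximation for any constellation supplied as a list of signal-space vectors; it keeps every pairwise term (the conservative form of the bound). As a usage example it evaluates QPSK, where the union bound should land close to 2Q(sqrt(2Eb/N0)):

import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

def symbol_error_estimates(points, EbN0_dB):
    # Union bound and nearest-neighbor approximation on P[symbol error].
    # points: (M, N) array of signal-space vectors (any amplitude scale).
    # Distances are converted via Es = mean ||a_i||^2 and Eb = Es / log2(M).
    a = np.asarray(points, dtype=float)
    M = a.shape[0]
    Es = np.mean(np.sum(a**2, axis=1))
    Eb = Es / np.log2(M)
    N0 = Eb / (10 ** (EbN0_dB / 10))

    d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=2)   # pairwise distances d_{m,n}
    off = ~np.eye(M, dtype=bool)                                 # exclude m == n terms
    union = np.sum(Q(d[off] / np.sqrt(2 * N0))) / M

    dmin = d[off].min()
    Nmin = np.sum(np.isclose(d[off], dmin))                      # ordered pairs at distance dmin
    nn = Nmin / M * Q(dmin / np.sqrt(2 * N0))
    return union, nn

# Usage: QPSK at Eb/N0 = 6 dB
qpsk = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
ub, nn = symbol_error_estimates(qpsk, 6.0)
print("union bound =", ub, " nearest-neighbor approx =", nn)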

this requires that ∆f Ts = n/2. φm (t) = ∞ t=−∞ Ts (2/Ts ) cos(2πfc t + 2πk∆f t) cos(2πfc t + 2πk∆f t) cos(2π(k − m)∆f t)dt + cos(4πfc t + 2π(k + m)∆f t)dt t=0 Ts t=0 = 1/Ts t=0 Ts 1/Ts = 1/Ts = sin(2π(k − m)∆f t) 2π(k − m)∆f sin(2π(k − m)∆f Ts ) 2π(k − m)∆f Ts Yes. . probability of symbol error. o.. fM −1 }. The plots are in terms of amplitude A. but we don’t want to waste spectrum. in frequency shift keying.e. they are orthogonal if 2π(k − m)∆f Ts is a multiple of π. Lecture 16 Today: (1) QAM Examples (Lecture 15) . ∆f = n fsy 1 =n 2Ts 2 . first. 27. but all probability of error expressions should be written in terms of Eb /N0 .) For general k = m. f1 .1 Orthogonal Frequencies We will want our cosine wave at these M different frequencies to be orthonormal. which for example. • Does this function have unit energy? • Are these functions orthogonal? Solution: φk (t). symbols are selected to be sinusoids with frequency selected among a set of M different frequencies {f0 . i. could be a rectangular pulse: √ 1/ Ts . (2) FSK 27 Frequency Shift Keying x(t) = cos(2πf (t)t).ECE 5520 Fall 2009 88 on. and thus √ (48) φk (t) = 2p(t) cos(2πfc t + 2πk∆f t) where p(t) is a pulse shape. Consider fk = fc + k∆f . and then also probability of bit error.w. (They are also approximately orthogonal if ∆f is realy big. . 0 ≤ t ≤ Ts p(t) = 0. Frequency modulation in general changes the center frequency over time: Now. .
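The orthogonality condition Delta-f = n/(2Ts) can be checked numerically. Below is a small sketch (mine, Python/NumPy; not from the notes); the symbol period and carrier frequency are arbitrary illustrative values, with fc chosen so that fc*Ts is an integer (so the double-frequency term also integrates to approximately zero):

import numpy as np

Ts = 1e-3                      # symbol period (illustrative)
fc = 10e3                      # carrier frequency (illustrative, fc*Ts = 10)
t = np.linspace(0, Ts, 20001)
dt = t[1] - t[0]

def inner_product(df, k=0, m=1):
    # Approximate integral over [0, Ts] of phi_k(t) * phi_m(t), unit-energy square pulse
    phi_k = np.sqrt(2 / Ts) * np.cos(2 * np.pi * (fc + k * df) * t)
    phi_m = np.sqrt(2 / Ts) * np.cos(2 * np.pi * (fc + m * df) * t)
    return np.sum(phi_k * phi_m) * dt

print("df = 1/(2Ts):", inner_product(1 / (2 * Ts)))    # ~ 0, i.e. orthogonal
print("df = 0.7/Ts :", inner_product(0.7 / Ts))        # noticeably nonzero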

. . bandwidth will be lower when we send less steep transitions. For M = 2 and M = 3 these vectors are plotted in Figure 40. Signal space vectors ai are given by a0 = [A. Note that we don’t need to sent square wave input into the VCO. aM −1 = [0. . 27. But usually it is generated from a single VCO. . A. (Otherwise. . . . . no they’re not.2 Transmission of FSK At the transmitter. 0. Signal f1(t) Signal f2(t) Switch FSK Out Figure 41: Block diagram of a binary FSK transmitter. FSK can be seen as a switch between M different carrier signals. 0. In fact. . A] √ What is the energy per symbol? Show that this means that A = Es . . 0] a1 = [0. . 0] . SRRC pulses.) 1 Thus we need to plug into (48) for ∆f = n 2Ts for some integer n in order to have an orthonormal basis. for example. as shown in Figure 41.ECE 5520 Fall 2009 89 for integer n. as seen in Figure 42. M=2 FSK M=3 FSK Figure 40: Signal space diagram for M = 2 and M = 3 FSK modulation. . . . What n? We will see that we either use an n of 1 or 2. . Def ’n: Voltage Controlled Oscillator (VCO) A sinusoidal generator with frequency that linearly proportional to an input voltage.

If we only correlate it with one (the sine wave. As we know. for every frequency. This will give us two inner products. respectively. though – we have a fundamental problem. for example). Also. the carrier cos(2πfk t + θk ) is sent with the transmitted signal. This is the only case where I’ve heard of using coherent FSK reception. so we’d need to know and be synchronized to M different phases θi . 1/M of the time (assuming equally-probable symbols). .4 Coherent Reception FSK reception can be done via a correlation receiver. and the phase makes the signal the other (a cosine wave) we would get a inner product of zero! The solution is that we need to correlate the received signal with both a sine and a cosine wave at the frequency fk . VCO Frequency Signal f(t) FSK Out Figure 42: Block diagram of a binary FSK transmitter. the sign or phase of the sinusoid is not very important – only one symbol exists in each dimension. lets call them xI using the capital I to denote in-phase and k xQ with Q denoting quadrature. we don’t know whether or not the energy at frequency fk correlates highly with the cosine wave or with the sine wave. and thus φ(t− + iTs ) = φ(t+ + iTs ) where t− and t+ are the limiting times just to the left and to the right of 0. These M PLLs must operate even though they can only synchronize when their symbol is sent. . there are two orthogonal functions. cosine and sine (see QAM and PSK). coherent detection becomes difficult. There are a variety of flavors of CPFSK.5 Non-coherent Reception Notice that in Figure 40. which are beyond the scope of this course. In one flavor of CPFSK (called Sunde’s FSK). ˆ Each phase θk is estimated to be θk by a separate phase-locked loop (PLL). cos(2πfc t + 2π(M − 1)∆f t + θM −1 ) 27. 27. This is more difficult than it sounds. the phase of the output signal φk (t) does not change instantaneously at symbol boundaries iTs for integer i. k . just as we’ve seen for previous modulation types. In non-coherent reception. to aid in demodulation (at the expense of the additional energy). Since we will not know the phase of the received signal.ECE 5520 Fall 2009 90 Def ’n: Continuous Phase Frequency Shift Keying FSK with no phase discontinuity between symbols. As M gets high. there are M possible carrier frequencies. Here. one for each symbol frequency: cos(2πfc t + θ0 ) cos(2πfc t + 2π∆f t + θ1 ) . we just measure the energy in each frequency. In other words. having M PLLs is a drawback.3 Reception of FSK FSK reception is either phase coherent or phase non-coherent. 27.

Compare this to the coherent FSK receiver in Figure 44. that is. M −1. xQ ]T .6 Receiver Block Diagrams The block diagram of the the non-coherent FSK receiver is shown in Figures 45 and 46 (copied from the Proakis & Salehi book).ECE 5520 Fall 2009 sin(2pfit) 91 xQ i length: (xI )2 + (xQ)2 i i xI i cos(2pfit) Figure 43: The energy in a non-coherent FSK receiver at one frequency fk is calculated by finding its correlation with the cosine wave (xI ) and sine wave (xQ ) at the frequency of interest. . and calculating the squared length k k of the vector [xI . 1. You could prove this. . k = 0. Figure 44: (Proakis & Salehi) Phase-coherent demodulation of M -ary FSK signals. Is this a threshold test? We would need new analysis of the FSK non-coherent detector to find its analytical probability of error. Q Efk = (xI )2 + (xk )2 k is calculated for each frequency fk . 27. fk . but the optimal detector (Maximum likelihood detector) is then to decide upon the frequency with the maximum energy Efk . . . k k The energy at frequency fk . .

Figure 45: (Proakis & Salehi) Demodulation of M-ary FSK signals for non-coherent detection.

Figure 46: (Proakis & Salehi) Demodulation and square-law detection of binary FSK signals.

binary FSK is about 1.5 dB worse than bipolar PAM (requires 1. . f2 . What do they do to prove this in Proakis & Salehi? They prove it for binary non-coherent FSK.8 Probability of Error for Noncoherent Binary FSK The energy detector shown in Figure 46 uses the energy in each frequency and selects the frequency with maximum energy.d.Hi (r|φ. What is the distance between points in the Binary FSK signal space? What is the probability of error for coherent binary FSK? It is the same as bipolar PAM.5 dB better than OOK (requires 1.7 Probability of Error for Coherent Binary FSK First. let’s look at coherent detection of binary FSK.5 dB more energy per bit). so rm is termed the envelope. 2π fr|Hi (r|Hi ) = = 0 2π 0 fr. 1.1  P [error]2−ary = Q  2N0 √ 2/2 compared to bipolar PAM. H0 ) is a 2-D Gaussian random vector with i. φ|Hi )dφ fr|θm. The ‘envelope’ is a term used for the square root of the energy. What is the detection threshold line separating the two decision regions? 2. Note that this depends on θm . but the symbols are spaced differently (more closely) as a function of Eb . . 27. but 1. 2. to d0. . and independent of the noise. sM . Hi )fθm |Hi (φ|Hi )dφ (49) Note that fr|θm. which is assumed to be uniform between 0 and 2π.θm|Hi (r. Now. components. 2 This energy is denoted rm in Figure 46 for frequency m and is 2 2 2 rm = rmc + rms This energy measure is a statistic which measures how much energy was in the signal at frequency fm .i. They formulate the prior probabilities fr|Hi (r|Hi ). the spacing between symbols has reduced by a factor of So P [error]2−Co−F SK = Q For the same probability of bit error.5 dB less energy per bit). 2 Question: What will rm equal when the noise is very small? As it turns out.ECE 5520 Fall 2009 93 27. 1. given the non-coherent receiver and rmc and rms . Define the received vector r as a 4 length vector of the correlation of r(t) with the sin and cos at each frequency f1 .H0 (r|φ.1 = Eb N0 √ 2Eb . It takes quite a bit of 5510 to do this proof. We had that   d2 0. the envelope rm is an optimum (sufficient) statistic to use to decide between s1 .
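For reference, here is a short sketch (my own, Python/SciPy; not from the notes) that evaluates the two binary FSK expressions side by side: coherent Q(sqrt(Eb/N0)) and non-coherent (1/2)exp(-Eb/(2N0)) from (50):

import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

EbN0_dB = np.arange(0, 15, 2)
EbN0 = 10 ** (EbN0_dB / 10)

Pe_coherent = Q(np.sqrt(EbN0))               # coherent binary FSK
Pe_noncoherent = 0.5 * np.exp(-EbN0 / 2)     # non-coherent binary FSK, eq. (50)

for d, pc, pn in zip(EbN0_dB, Pe_coherent, Pe_noncoherent):
    print(d, "dB: coherent", pc, " non-coherent", pn)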

You will use them to be able to design communication systems that use FSK. Example: Probability of Error for Non-coherent M = 2 case Use the above expressions to find the P [symbol error] and P [error] for binary non-coherent FSK. Otherwise. These energy values no longer have a Gaussian distribution (due to the squaring of the amplitudes in the energy calculation). page 430.6. Section 7. They instead are either Ricean (for the transmitted frequency) or Rayleigh distributed (for the “other” M − 1 frequencies).6. Where the joint probability fr∩H0 (r ∩ H0 ) is greater than fr|H1 (r|H1 ). and you should make note of them. The probability that the correct frequency is selected is the probability that the Ricean random variable is larger than all of the other random variables measured at the other frequencies. it decides H1 . and is presented since it does not appear in the Rice book. The proof is in Proakis & Salehi. 5.1 FSK Error Probabilities. are shown to reduce to this decision (given that P [H0 ] = P [H1 ]): 2 2 r1c + r1s H0 > < H1 2 2 r2c + r2s The “envelope detector” can equally well be called the “energy detector”. after manipulation of the pdfs. 94 4. Solution: 1 − 1 Eb P [symbol error] = P [bit error] = e 2 N0 2 M/2 P [symbol error] M −1 .9. Part 2 M -ary Non-Coherent FSK For M -ary non-coherent FSK.9 27.9 shows that P [symbol error] = and P [error]M −nc−F SK = M −1 n=1 (−1)n+1 n E M −1 1 − log2 M n+1 Nb 0 e n+1 n See Figure 47.9. Lecture 17 Today: (1) FSK Error Probabilities (2) OFDM 27. and it often is. Eb 1 (50) P [error]2−N C−F SK = exp − 2 2N0 The expressions for probability of error in binary FSK (both coherent and non-coherent) are important. An exact expression for the probability of error can be derived. which is posted on WebCT. Proof Summary: Our non-coherent receiver finds the energy in each frequency. section 7. as well. They formulate the joint probabilities fr∩H0 (r ∩ H0 ) and fr∩H1 (r ∩ H1 ).ECE 5520 Fall 2009 3. the derivation in the Proakis & Salehi book. The expression for probability of error in binary non-coherent FSK is given by. The above proof is simply FYI. The decisions in this last step. the receiver decides H0 .
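These expressions are straightforward to evaluate numerically. The Matlab sketch below reproduces curves like those in Figure 47 from the M-ary non-coherent FSK expressions above; for M = 2 the sum reduces to the expression in (50).

% Probability of bit error for non-coherent M-ary FSK (curves as in Fig. 47).
EbN0_dB = 0:0.25:12;
EbN0    = 10.^(EbN0_dB/10);
for M = [2 4 8 16 32]
    Ps = zeros(size(EbN0));
    for n = 1:M-1      % sum in the symbol-error expression
        Ps = Ps + (-1)^(n+1) * nchoosek(M-1, n) / (n+1) ...
                  .* exp(-(n/(n+1)) * log2(M) * EbN0);
    end
    Pb = (M/2)/(M-1) * Ps;          % convert symbol errors to bit errors
    semilogy(EbN0_dB, Pb); hold on;
end
xlabel('E_b/N_0, dB'); ylabel('Probability of bit error');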

B = (1 + α)/Ts . 10 log 10 |H(f )|2 .9.6dB. because phone lines were not designed for higher frequency signals. For the null-to-null bandwidth of raisedcosine pulse shaping. For M -ary FSK. so the sum of the energy is constant. which shows an example frequency selective fading pattern.10 Bandwidth of FSK Carson’s rule is used to calculate the bandwidth of FM signals. it tells us that the approximate bandwidth is. frequency dependent gains are also experienced in DSL. For each band. Our choice is arbitrary. Note that in the multiplexed system. But.it is related to the Shannon-Hartley 0 theorem on the limit of reliable communication. 28. you now have BT /K bandwidth on each subchannel. where H(f ) is an .ECE 5520 Fall 2009 −1 95 10 10 −2 Probability of Error 10 −3 10 −4 10 −5 10 −6 M=2 M=4 M=8 M=16 M=32 2 4 6 8 Eb/N0 Ratio. we might send a PSK or PAM or QAM signal on that band. each subchannel has a symbol period of Ts K. for the case when Nb > −1. B = 1/Ts . BT = (M − 1)∆f + 2B where B is the one-sided bandwidth of the digital baseband signal. you show that the total bit rate is the same in the multiplexed system.1 Frequency Selective Fading Frequency selectivity is primarily a problem in wireless channels. divide your BT bandwidth into K subchannels. For example. 28 Frequency Multiplexing Frequency multiplexing is the division of the total bandwidth (call it BT ) into many smaller frequency bands. so you don’t lose anything by this division. and sending a lower bit rate on each one. consider Figure 48. Furthermore. the probability of error E goes to zero. 27. The energy would be divided by K in each subchannel.2 Summary The key insight is that probability of error decreases for increasing M . As M → ∞. your transmission energy is the same. longer by a factor of K so that the bandwidth of that subchannel has narrower bandwidth. Note for square wave pulses. Remember this for later . 27. Specifically. dB 10 12 0 Figure 47: Probability of bit error for non-coherent reception of M-ary FSK. In HW 7.

If you put down the transmitter and receiver and have them operate at a particular f ′ .. Problems: 1. the channel stays constant over time. or is in a severe fade. The H(f ) acts as a filter. each of rate R/K.e.5 (the . ˜ i. than the error rate will be too high. The power is E increased by a fade margin which makes the Nb high enough for low BER demodulation except in the 0 most extreme fading situations. • Direct-sequence spread spectrum (DS-SS). (The f is relative. designers may use a wideband modulation method which is robust to frequency selective fading. Some channels experience gain.2 Benefits of Frequency Multiplexing Now.) • You can see that some channels experience severe attenuation. • Frequency multiplexing methods (including OFDM). and experiences a BER of 0. take the actual frequency f = f + fc for some center frequency fc . We’re going to mention frequency multiplexing. then the loss 10 log10 |H(f ′ )|2 will not change throughout the communication. Wideband: The bandwidth is used for one channel. there are K parallel bitstreams. As a first order approximation. 28.ECE 5520 Fall 2009 BT/K 96 Severely faded channel Figure 48: An example frequency selective fading pattern. or • Frequency-hopping spread spectrum (FH-SS). bit errors are not an ‘all or nothing’ game. and each subchannel’s channel filter is mostly constant. For example. The whole bandwidth is divided into K subchannels. at a lower bit rate. has a very low SNR. a subchannel either experiences a high SNR and makes no errors. while most experience little attenuation. In frequency multiplexing. example wireless channel. which introduces ISI. 2. • For stationary transmitter and receiver and stationary f . If the channel loss is too great. where R is the total bit rate. When designing for frequency selective fading. This H(f ) might be experienced in an average outdoor channel. Narrowband: Part of the bandwidth is used for one narrow channel.

The . Similarly. .w. 2/Ts sin(2πfc t + 2π(M − 1)∆f t).5β. 1 where ∆f = 2Ts .3 OFDM as an extension of FSK This is section 5. a wideband system with very high ISI might be completely unable to demodulate the received signal. (Note we have 2M basis functions here!) The signal on subchannel k might be represented as: xk (t) = 2/Ts [ak. 28.I (t) = φ1. . 1.w.w. In QAM. OFDM is the combination: φ0. Does this look like an inverse discrete fourier transform? If yes.Q (t) = .5 in the Rice book. 0 ≤ t ≤ Ts 0. 0 ≤ t ≤ Ts 0. FFT implementation: There is a particular implementation of the transmitter and receiver that use FFT/IFFT operations. 2/Ts cos(2πfc t + 2π(M − 1)∆f t). 0.I (t) + jak.5 probability of error.I (t) = φ0. the overall probability of error will be 0.I (t) + jak.Q (t). o.I (t) = φM −1. 2/Ts cos(2πfc t).w. which has probability β that it will have a 0.w.Q (t) sin(2πfk t)] The complex baseband signal of the sum of all K signals might then be represented as K xl (t) = xl (t) = 2/Ts R k=1 K (ak. we use a single basis function at each of different frequencies. These are all orthogonal functions! We can transmit much more information than possible in M -ary FSK.Q (t) = 0. 0 ≤ t ≤ Ts o. we use two basis functions at the same frequency. 0 ≤ t ≤ Ts o. If β is the probability that a subchannel experiences a severe fade. Frequency multiplexing is typically combined with channel coding designed to correct a small percentage of bit errors. o. than you can see why it is possible to use an IFFT and FFT to generate the complex baseband signal. 2/Ts cos(2πfc t + 2π∆f t). 2/Ts sin(2πfc t). 0 ≤ t ≤ Ts o.I (t) cos(2πfk t) + ak.Q (t))ej2πk∆f t Ak (t)ej2πk∆f t k=1 2/Ts R (51) where Ak (t) = ak.Q (t) = φ1. In FSK. 0. 0 ≤ t ≤ Ts 0.ECE 5520 Fall 2009 97 worst bit error rate!). Compare this to a single narrowband channel. This avoids having K independent transmitter chains and receiver chains. φM −1. 2/Ts sin(2πfc t + 2π∆f t). o.w.

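If the answer to the question above is yes (and it is: the sum over subcarriers has the form of an inverse DFT), then one OFDM symbol can be generated with a single IFFT. Below is a minimal complex-baseband sketch; the number of subcarriers K and the use of QPSK on every subcarrier are assumptions chosen for illustration, not the parameters of any particular standard.

% One OFDM symbol at complex baseband via the IFFT, as suggested by (51).
K  = 16;                                  % number of subcarriers (assumed)
bI = 2*randi([0 1], 1, K) - 1;            % in-phase data, +/-1
bQ = 2*randi([0 1], 1, K) - 1;            % quadrature data, +/-1
Ak = (bI + 1j*bQ)/sqrt(2);                % one QPSK symbol per subcarrier
xl = K * ifft(Ak);                        % time-domain samples of x_l(t)
% At the receiver, an FFT recovers Ak (in this noiseless sketch):
Ak_hat = fft(xl)/K;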
1 Differential Encoding for BPSK Def ’n: Coherent Reception The reception of a signal when its carrier phase is explicitly determined and used for demodulation. Each data subcarrier can be modulated in different ways. An example state space diagram for K = 3 and PAM on each channel is shown in Figure 49. Thus the bit rate is 250 × 103 OFDM symbols subcarriers coded bits Mbits 48 4 = 48 sec OFDM symbol subcarrier sec Lecture 18 Today: Comparison of Modulation Methods 29 Comparison of Modulation Methods This section first presents some miscellaneous information which is important to real digital communications systems but doesn’t fit nicely into other lectures. Example: 802. .11a is 250k/sec.ECE 5520 Fall 2009 98 FFT implementation (and the speed and ease of implementation of the FFT in hardware) is why OFDM is popular. Four of the subcarriers are reserved for pilot tones. The symbol rate in 802.11a uses OFDM with 52 subcarriers. 29.11a IEEE 802. Since the K carriers are orthogonal. But. rather than transmitting on one of the K carriers at a given time (like FSK) we transmit information in parallel on all K channels simultaneously. One example is to use 16 square QAM on each subcarrier (which is 4 bits per symbol per subcarrier). the signal is like K-ary FSK. so effectively 48 subcarriers are used for data. OFDM. 3 subchannels of 4-ary PAM Figure 49: Signal space diagram for K = 3 subchannel OFDM with 4-PAM on each channel.

For coherent reception of PSK, we will always need some kind of phase synchronization in BPSK. Typically, this means transmitting a training sequence. For non-coherent reception of PSK, we use differential encoding (at the transmitter) and decoding (at the receiver).

29.1.1 DPSK Transmitter
Now, consider the bit sequence {bn}, where bn is the nth bit that we want to send. The sequence bn is a sequence of 0's and 1's. How do we decide which phase to send? Prior to this, we've said, send a0 if bn = 0, and send a1 if bn = 1. Basically, a switch in the angle of the signal space vector from 0 to 180 degrees or vice versa indicates a bit 1, while staying at the same angle indicates a bit 0. Note that we have to just agree on the "zero" phase.

Now, in differential encoding, instead of setting kn (for the symbol ak) only as a function of bn, we also include kn−1:

kn = kn−1,       if bn = 0
kn = 1 − kn−1,   if bn = 1

Note that 1 − kn−1 is the complement or negation of kn−1: if kn−1 = 1 then 1 − kn−1 = 0, and if kn−1 = 0 then 1 − kn−1 = 1. Typically k0 = 0.

Example: Differential encoding
Let b = [1, 0, 1, 0, 1, 1, 1, 0]^T. Assume b0 = 0 and k0 = 0. What symbols k = [k0, ..., k8]^T will be sent?
Solution: Applying the rule above bit by bit,
k = [0, 1, 1, 0, 0, 1, 0, 1, 1]^T
These values of kn correspond to a symbol stream with phases:
∠ak = [0, π, π, 0, 0, π, 0, π, π]^T

29.1.2 DPSK Receiver
Now, at the receiver, we find bn by comparing the phase of xn to the phase of xn−1. What our receiver does, for differential BPSK, is to measure the statistic

Re{xn x*_{n−1}} = |xn| |xn−1| cos(∠xn − ∠xn−1)

If it is less than zero, decide bn = 1; if it is greater than zero, decide bn = 0.

Example: Differential decoding
1. Assuming no phase shift in the above encoding example, show that the receiver will decode the original bitstream with differential decoding.
Solution: Assuming the initial reference phase is 0, b̂n = [1, 0, 1, 0, 1, 1, 1, 0]^T, which matches the original bitstream.
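A minimal Matlab sketch of the encoding rule and the phase-comparison decoding above. This is a bit-level illustration only (no noise, no pulse shaping, and k0 = 0 as assumed in the notes), not the full DPSK receiver.

% Differential encoding (transmitter) and decoding (receiver) for BPSK.
b  = [1 0 1 0 1 1 1 0];          % data bits b_1..b_8 from the example
k  = zeros(1, length(b)+1);      % k_0..k_8, with k_0 = 0
for n = 1:length(b)
    if b(n) == 0
        k(n+1) = k(n);           % bit 0: keep the same symbol
    else
        k(n+1) = 1 - k(n);       % bit 1: flip the symbol
    end
end
a = exp(1j*pi*k);                % symbol phases: k = 0 -> 0, k = 1 -> pi
% Decoding: compare the phase of x_n to the phase of x_{n-1}.
x = a;                           % received symbols (no noise, no phase shift)
bhat = real(x(2:end) .* conj(x(1:end-1))) < 0;   % 1 if the phase changed
% bhat equals b; a common phase rotation applied to all x_n cancels out.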

29.2 Points for Comparison • Linear amplifiers (transmitter complexity) • Receiver complexity • Fidelity (P [error]) vs.1. 0. π. 0].3 Probability of Bit Error for DPSK The probability of bit error in DPSK is slightly worse than that for BPSK: P [error] = Eb 1 exp − 2 N0 Eb N0 For a constant probability of error. • Bandwidth efficiency Eb N0 . 0. 0. 1. 0. 0]. b Rotating all symbols by π radians only causes one bit error. π. 0.ECE 5520 Fall 2009 100 2. 1. 1. Both are plotted in Figure 50. π. assume that all bits are shifted π radians and we receive ∠x′ = [0. DPSK requires about 1 dB more of bit error Q 2Eb N0 than BPSK. which has probability . 0. dB 10 12 14 0 Figure 50: Comparison of probability of bit error for BPSK and Differential BPSK. 10 0 Probability of Bit Error 10 −2 10 −4 10 −6 BPSK DBPSK 2 4 6 8 Eb/N0. Now. 0. 29. 1. What will be decoded at the receiver? Solution: ˆn = [0.

3. BT = (M − 1)∆f + 2B .5 1.. or BT = where Rb = 1/Tb is the bit rate. we divide by log2 M bits per symbol to get Tb = Ts / log2 M seconds per bit.e.67 1. i.2 FSK (1 + α)Rb log2 M log2 M 1+α We’ve said that the bandwidth of FSK is. the null-null bandwidth is 1 + α times the bandwidth of the case when we use pure sinc pulses. and correspondingly increase the bandwidth. the quantity of spectrum our signal will use.67 4.0 6. i. Typically.0 101 Table 2: Bandwidth efficiency of PSK and QAM modulation methods using raised cosine filtering as a function of α.0 α = 0. The key figure of merit: bits per second / Hertz.0 4.0 α=1 0. 29.1 PSK.3 Bandwidth Efficiency We’ve talked about measuring data rate in bits per second.. For root raised cosine filtering. The transmission bandwidth (for a bandpass signal) is BT = 1+α Ts Since Ts is seconds per symbol.5 2.0 2. We’ve also talked about Hertz. PAM and QAM In these three modulation methods.5 0. 29. 29. This relationship is typically linearly proportional.ECE 5520 Fall 2009 η BPSK QPSK 16-QAM 64-QAM α=0 1.e. bps/Hz.3.33 2. Since it is usually used for comparative purposes. to increase the bit rate by decreasing the symbol period. Bandwidth efficiency is then η = Rb /BT = See Table 2 for some numerical examples. Def ’n: Bandwidth efficiency The bandwidth efficiency. typically denoted η. we just make sure we use the same definition of bandwidth throughout a comparison. is the ratio of bits per second to bandwidth: η = Rb /BT Bandwidth efficiency depends on the definition of “bandwidth”. the bandwidth is largely determined by the pulse shape. we can scale a system.0 3.

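The entries of Table 2 follow directly from η = log2(M)/(1 + α); a few lines of Matlab regenerate them.

% Bandwidth efficiency of PSK/QAM with SRRC pulses: eta = log2(M)/(1+alpha).
M     = [2 4 16 64];             % BPSK, QPSK, 16-QAM, 64-QAM
alpha = [0 0.5 1];
for m = M
    fprintf('M = %2d:', m);
    for a = alpha
        fprintf('  %5.2f', log2(m)/(1+a));
    end
    fprintf(' bps/Hz\n');
end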
DPSK. and 2. 2B = (1 + α)/Ts . log2 M (M − 1)∆f Ts + (1 + α) log2 M M +α η= 29. Non-square QAM constellations 5. and E • Energy per bit ( Nb ) requirements to achieve a given probability of error. . So. 0 29.5 to be about 14 dB. M-ary PSK.ECE 5520 Fall 2009 102 where B is the one-sided bandwidth of the digital baseband signal. 2. Nb for M = 8 PSK 0 E What is the required Nb for 8-PSK to achieve a probability of bit error of 10−6 ? What is the bandwidth 0 efficiency of 8-PSK when using 50% excess bandwidth? Solution: Given in Rice Figure 6. bandwidth efficiency) pairs. Eb N0 For each modulation format. 3. Eb N0 Eb N0 : Main modulations which we have evaluated probability of error vs. OOK. we have quantities of interest: • Bandwidth efficiency. E We can plot these (required Nb . 1. including QPSK. See also Rice pages 325-331. M-ary PAM. M-ary FSK In this part of the lecture we will break up into groups and derive: (1) the probability of error and (2) probability of symbol error formulas for these types of modulations.3. Square QAM 4. 0 E Example: Bandwidth efficiency vs.6. See Rice Figure 6. For the null-to-null bandwidth of raisedcosine pulse shaping. BT = (M − 1)∆f + (1 + α)/Ts = since Rb = 1/Ts for η = Rb /BT = Rb {(M − 1)∆f Ts + (1 + α)} log2 M If ∆f = 1/Ts (required for non-coherent reception).5 Fidelity (P [error]) vs.4 Bandwidth Efficiency vs.3. FSK. including Binary PAM or BPSK.

6 Transmitter Complexity What makes a transmitter more complicated? • Need for a linear power amplifier • Higher clock speed • Higher transmit powers • Directional antennas 29. If PDC is the input power (e.1 Linear / Non-linear Amplifiers An amplifier uses DC power to take an input signal and increase its amplitude at the output.6. and Pout is the output signal power. the conduction angle is slightly higher than 180o . • Class AB: Like class B. . and as a result of their configuration. Power is dissipated at all times. from battery) to the amplifier. Output signal is a scaled up version of the input signal. • Class A: linear amplifiers with maximum power efficiency of 50%. Amplifiers are separated into ‘classes’ which describe their configuration (circuits). their linearity and power efficiency.. the power efficiency is rated as ηP = Pout /PDC .5%.g. 29. Two in parallel are used to amplify a sinusoidal signal. That is.ECE 5520 Fall 2009 Name BPSK OOK DPSK M-PAM QPSK M-PSK Square M-QAM 2-non-co-FSK M-non-co-FSK 2-co-FSK M-co-FSK = Eb 1 2 exp − 2N0 M −1 M −1 (−1)n+1 n=1 ( n ) n+1 exp 103 P [symbol error] =Q =Q = = 1 2 2Eb N0 Eb N0 E − Nb 0 6 log2 M Eb M 2 −1 N0 P [error] same same same ≈ ≈ 1 log2 M P exp 2(M −1) Q M [symbol error] 2Eb N0 =Q ≤ 2Q E 2(log2 M ) sin2 (π/M ) Nb 0 ≈ = E − n log2 M Nb n+1 0 1 log2 M P [symbol error] √ 3 log2 M Eb ( M −1) 4 √ Q log2 M M −1 N0 M same = M/2 M −1 P M/2 M −1 P [symbol error] same =Q ≤ (M − 1)Q Eb N0 E log2 M Nb 0 = [symbol error] Table 3: Summary of probability of bit and symbol error formulas for several modulations. but each amplifier stays on slightly longer to reduce the “dead zone” at zero. • Class B: linear amplifiers turn on for half of a cycle (conduction angle of 180o ) with maximum power efficiency of 78. one for the positive part and one for the negative part.

battery-powered transmitters are often willing to use Class C amplifiers. We could have also written s(t) as √ s(t) = 2p(t) [a0 (t) cos(ω0 t) + a1 (t) sin(ω0 t)] The problem is that when the phase changes 180 degrees. you’ll see a circle. if you look at the constellation diagram. .ECE 5520 Fall 2009 104 • Class C : A class C amplifier is closer to a switch than an amplifier. Can only amplify signals with a nearly constant envelope. which precludes use of a class C amplifier. It is in a discrete set. The signal must never go through the origin (envelope of zero) or even near the origin. the signal s(t) will go through zero. we just need to delay the sampling on the quadrature half of a sample period with respect to the in-phase signal. In order to double the power efficiency. This generates high distortion. . This means that. For offset QPSK (OQPSK). π 3π 5π 7π ∠a(t) ∈ { . See Figure 52(b) and Figure 51 (a) to see this graphically. They can do this if their output signal has constant envelope. the envelope |s(t)| is largely constant. Rewriting s(t) √ √ s(t) = 2p(t)a0 (t) cos(ω0 t) + 2p(t − Ts /2)a1 (t − Ts /2) sin(ω0 (t − Ts /2)) At the receiver. we delay the quadrature Ts /2 with respect to the in-phase. } 4 4 4 4 and p(t) is the pulse shape. 29. See Figure 52 for a comparison of QPSK and OQPSK. However. we wrote the modulated signal as √ s(t) = 2p(t) cos(ω0 t + ∠a(t)) where ∠a(t) is the angle of the symbol chosen to be sent at time t.7 Offset QPSK QPSK without offset: Q-channel Offset QPSK: Q-channel I-channel I-channel (a) (b) Figure 51: Constellation Diagram for (a) QPSK and (b) O-QPSK. and has the same Eb /N0 vs. Class C non-linear amplifiers have power efficiencies around 90%. probability of bit error performance. The new transmitted signal takes the same bandwidth and average power. For QPSK. . but then is band-pass filtered or tuned to the center frequency to force out spurious frequencies.
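To see the offset idea of Section 29.7 concretely, here is a minimal Matlab sketch that builds the I and Q pulse trains and delays the quadrature stream by half a symbol. The rectangular pulse and all parameter values are assumptions chosen only to make the offset visible; the course project uses SRRC pulses instead.

% OQPSK sketch: delay the Q pulse train by half a symbol period.
Ns = 8;                            % samples per symbol (assumed)
a0 = 2*randi([0 1], 1, 20) - 1;    % I-channel symbols, +/-1
a1 = 2*randi([0 1], 1, 20) - 1;    % Q-channel symbols, +/-1
p  = ones(1, Ns);                  % rectangular pulse, for illustration only
sI = kron(a0, p);                  % I pulse train
sQ = kron(a1, p);                  % Q pulse train
sQ = [zeros(1, Ns/2), sQ(1:end-Ns/2)];   % offset Q by Ts/2
s  = sI + 1j*sQ;                   % complex baseband OQPSK
% With the offset, I and Q never switch sign at the same instant, so the
% envelope abs(s) never passes through zero.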

Figure 52: Matlab simulation of (a-b) QPSK and (c-d) O-QPSK, showing the (d) largely constant envelope of OQPSK, compared to (b) that for QPSK.

29.8 Receiver Complexity
What makes a receiver more complicated?
• Synchronization (carrier, phase, timing)
• Multiple parallel receiver chains

Lecture 19

Today: (1) Link Budgets and System Design

30 Link Budgets and System Design
As a digital communication system designer, your mission (if you choose to accept it) is to achieve:
1. High data rate
2. High fidelity (low bit error rate)
3. Low transmit power

This lecture is about system design. radio propagation. See Figure 53. Figure 53: Relationships between important system design variables (rectangular boxes). .. Typically some subset of requirements are given. For example. the relationship between probability of bit error and Nb . it is just a matter of setting some system variables (at the given limits) and determining what that means about the values of other system variables. division).. for example. So the system design depends on what the desired system really needs and what are the acceptable tradeoffs.ECE 5520 Fall 2009 106 4. what data rate can be achieved? Using which modulation? To answer this question. but in the Rice book it is typically denoted C.1 Link Budgets Given C/N0 The received power is denoted C.g. and define each of the variables we see in Figure 53. antennas. which is drawn as a dotted line. because you can’t have everything at the same time. the operator is drawn in. You are expected to apply what you have learned in other classes (or learn the functional relationships that we describe here). and bit error rate limit. Low bandwidth 5. optics. but it summarizes what we need to know about the signal and the noise for the purposes of system design. It is an odd quantity. if I was given the bandwidth limits and C/N0 ratio. What is C/N0 ? It is received power divided by noise energy. and it cuts across ECE classes.g. The effect of the choice of modulation impacts several functional relationships. I’d be able to determine the probability of error for several different modulation types. Functional relationships between variables are given as circles – if the relationship is a single operator (e. in particular circuits. 0 30. and the material from this class. Low transmitter/receiver complexity But this is truly a mission impossible. and to what they are related. E e. otherwise it is left as an unnamed function f (). In this lecture. given the bandwidth limits. we discuss this procedure. For example. C/N0 is shown to have a divide by relationship with C and N0 . it has units of Watts. • We often describe both the received power at a receiver PR . the received signal power and noise power.

4172 6 × 10−6 = 4.6114 3 × 10−6 = 4. We can write Eb = CTb .6708 2 × 10−6 = 4.5401 Q−1 3 × 10−4 = 3.5244 Q−1 4 × 10−1 = 0. great.2 MBps or 200 kBps.5548 Q−1 7 × 10−2 = 1.8461 7 × 10−5 = 3.3776 7 × 10−6 = 4. If you can program it into your calculator.5 × 10−6 = 4. since energy is power times time. we can now relate bit error rate.3408 Q−1 1 × 10−1 = 1.1701 Q−1 2 × 10−2 = 2. For example. Q Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 Q−1 −1 1× = 4.3263 Q−1 1.3145 9 × 10−6 = 4.8906 6 × 10−5 = 3.3528 Q−1 5 × 10−4 = 3.25335 Q−1 5 × 10−1 = 0 .5 × 10−4 = 3. bit rate. and bandwidth. But the bit rate is 8/5 Mbps or 1.7478 Q−1 4 × 10−3 = 2.7534 1. To separate the effect of Tb .5121 Q−1 7 × 10−3 = 2. The noise energy is N0 . which is 10 log10 C N0 Eb N 0 Rb What are the units of C/N0 ? Answer: • Be careful of Bytes per sec vs bits per sec. modulation. Otherwise. E Note: We typically use Q (·) and Q−1 (·) to relate BER and Nb in each direction. • Note that people often report C/N0 in dB Hz.9677 Q−1 2 × 10−3 = 2. then software often reports that the data rate is 1/5 = 0.775 9 × 10−5 = 3.6 × 106 bps. if it takes 5 seconds to transfer a 1MB file. I am not picky about getting lots of correct decimal places. 1/s.1559 Q−1 9 × 10−4 = 3.2389 Q−1 7 × 10−4 = 3. CS people use Bps (kBps or MBps) when describing data rate. we often denote: C C/N0 Eb = Tb = N0 N0 Rb where Rb = 1/Tb is the bit rate.2905 Q−1 6 × 10−4 = 3.2816 Q−1 1.4051 Q−1 9 × 10−2 = 1.3656 −1 FUNCTION: Q−1 1 × 10−2 = 2. Given C/N0 .7507 Q−1 5 × 10−2 = 1.5 × 10−2 = 2.0537 Q−1 3 × 10−2 = 1.1214 Q−1 1 × 10−3 = 3.6449 Q−1 6 × 10−2 = 1.0128 4 × 10−5 = 3.719 −1 Q 1.2649 1. Commonly.5758 Q−1 6 × 10−3 = 2.6153 −1 Q 2 × 10−4 = 3.5 × 10−3 = 2.8782 Q−1 3 × 10−3 = 2.5 × 10−5 = 4.0364 Q−1 2 × 10−1 = 0.4758 Q−1 8 × 10−2 = 1. the following tables/plots of Q−1 (x) will appear on Exam 2.4652 5 × 10−6 = 4. While you have Matlab.4089 Q−1 9 × 10−3 = 2.1947 Q−1 8 × 10−4 = 3.8808 Q−1 4 × 10−2 = 1. C/N0 = Hz. The 0 bit energy is Eb .4573 Q−1 8 × 10−3 = 2. this 0 is easy to calculate.2884 1 × 10−5 = 4.84162 Q−1 3 × 10−1 = 0.1075 3 × 10−5 = 4.6521 Q−1 5 × 10−3 = 2.ECE 5520 Fall 2009 107 E • We know the probability of bit error is typically written as a function of Nb . In other words.0902 Q−1 1. it’s really not a big deal to pull it off of a chart or table.5264 4 × 10−6 = 4.1735 2 × 10−5 = 4.5 × 10−1 = 1.8082 8 × 10−5 = 3.3439 8 × 10−6 = 4. For your convenience.7455 10−6 TABLE OF THE Q−1 (·) Q 1 × 10−4 = 3.9444 5 × 10−5 = 3.4316 Q−1 4 × 10−4 = 3.

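In Matlab, the Q and inverse-Q values tabulated above can be computed from erfc and erfcinv, and the C/N0-to-Eb/N0 conversion is one line. A small sketch, with placeholder numbers:

% Q and inverse-Q via erfc/erfcinv, and Eb/N0 from C/N0 and bit rate.
Q    = @(x) 0.5*erfc(x/sqrt(2));
Qinv = @(p) sqrt(2)*erfcinv(2*p);
CN0_dBHz = 82;                     % C/N0 in dB Hz (placeholder value)
Rb       = 2e6;                    % bit rate, bits/sec (placeholder value)
EbN0     = 10^(CN0_dBHz/10) / Rb;  % linear Eb/N0 = (C/N0)/Rb
EbN0_dB  = 10*log10(EbN0);
% Example: required Eb/N0 for BPSK at a target BER of 1e-6,
% since P[error] = Q(sqrt(2*Eb/N0)):
EbN0_req = (Qinv(1e-6))^2 / 2;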
2 Power and Energy Limited Channels Assume the C/N0 .5 1 0. Otherwise. You just need to try to solve the problem and see which one limits your system. Here is a step-by-step version of what you might need do in this case: Method A: Start with power-limited assumption: 1. In this case. Use the probability of error constraint to determine the of error formula for the modulation. but be sure to express both in linear units. your system is power limited. x −3 10 −2 10 −1 10 0 30.5 0 −7 10 10 −6 10 −5 10 −4 10 Value.5 Inverse Q function. and your Rb is achievable. Compare the bandwidth at maximum Rs to the bandwidth constraint: If BW at Rs is too high. Given a maximum bit rate. Such links (or channels) are called power limited channels. Rb log2 M and then compute the required 4. then the system is bandwidth limited.5 3 2. given the appropriate probability C/N0 Eb N0 constraint. 3. the link (or channel) is called a bandwidth limited channel. the maximum bandwidth.ECE 5520 Fall 2009 108 5 4.5 2 1. Given the C/N0 constraint and the Eb N0 Eb N0 constraint. 2. Sometimes power is the limiting factor in determining the maximum achievable bit rate. and the maximum BER are all given. Sometimes bandwidth is the limiting factor in determining the maximum achievable bit rate. Method B: Start with a bandwidth-limited assumption: . Q−1(x) 4 3. reduce your bit rate to conform to the BW constraint. Note that Rb = 1/Tb = . find the maximum bit rate. calculate the maximum symbol rate Rs = bandwidth using the appropriate bandwidth formula.

27×106 /4 = 5.5 MHz.585 × 108 = 2.84 and thus Rs = Rb / log2 M = 2. and can operate with bit rate 2. your system is bandwidth-limited. make sure that everything is in linear 3. the system is power limited.5(2. Find the probability of error at that using the appropriate probability of error formula. Use the previous method to find the maximum bit rate.33 Consider a bandpass communications link with a bandwidth of 1. For M = 16 PSK. Solving for Rb . what is the maximum achievable bit rate on this link? Is this a power limited or bandwidth limited channel? Solution: 1. The required bandwidth for this system is (1 + α)Rb BT = = 1. Find the units. If the modulation is 16-PSK using the SRRC pulse shape with α = 0.67×105 Msymbols/s.27×106 )/4 = 851 kHz log2 M This is clearly lower than the maximum bandwidth of 1. The maximum bit error rate is 10−6 .84 (52) Converting C/N0 to linear. Rb = C/N0 Eb N0 = 1.) Eb N0 at the given bit rate by computing Eb N0 .ECE 5520 Fall 2009 109 1. what is the maximum achievable bit rate on the link? Is this a power limited or bandwidth limited channel? 2. we would have needed to reduce Rb to meet the bandwidth limit.) . 2.27 Mbits/s 69. and you have found the correct maximum bit rate.585 × 108 . then your system is power limited. (If BT had come out > 1. C/N0 = 1082/10 = 1. If the modulation is square 16-QAM using the SRRC pulse shape with α = 0. Use the bandwidth constraint and the appropriate bandwidth formula to find the maximum symbol rate Rs and then the maximum bit rate Rb .5 MHz.5. Eb N0 = C/N0 Rb . we can find 10−6 = P [error] = 10−6 = Eb N0 Eb N0 = 2 Q 4 2 Eb N0 for the maximum BER: 2(log2 M ) sin2 (π/M ) Eb N0 2 2 Q log2 M Eb N0 2(4) sin2 (π/16) 1 Q−1 2 × 10−6 8 sin (π/16) = 69. 4. Try Method A. So.27×106 = 2. Otherwise. Example: Rice 6.5 MHz and with an available C/N0 = 82 dB Hz. (Again. If the computed P [error] is greater than the BER constraint.5. 1.27 Mbits/s.

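The step-by-step procedure of Method A is easy to script. The sketch below applies it to the 16-PSK case of the example above, using the same numbers (C/N0 = 82 dB Hz, 1.5 MHz bandwidth limit, BER of 10^-6, α = 0.5), and reproduces the 2.27 Mbits/s result.

% Method A sketch for the 16-PSK part of the example above.
Q    = @(x) 0.5*erfc(x/sqrt(2));
Qinv = @(p) sqrt(2)*erfcinv(2*p);
CN0   = 10^(82/10);     % C/N0 = 82 dB Hz, in linear units (1/s)
BWmax = 1.5e6;          % bandwidth limit, Hz
BER   = 1e-6;  M = 16;  alpha = 0.5;
% Steps 1-3: required Eb/N0 for M-PSK, then the power-limited bit rate.
EbN0_req = (Qinv(BER*log2(M)/2) / sin(pi/M))^2 / (2*log2(M));
Rb = CN0 / EbN0_req;                    % maximum bit rate if power-limited
% Step 4: check the bandwidth this bit rate would need.
BT = (1+alpha) * Rb / log2(M);
if BT > BWmax
    Rb = BWmax * log2(M) / (1+alpha);   % bandwidth-limited: reduce Rb
end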
How can we calculate how much power will be received? We can do this for both wireless and wired channels.5 MHz = 4 MHz 1+α 1. we can find 10 −6 10−6 Eb N0 Eb N0 Solving for Rb . 30. the wavelength is nearly constant across the bandwidth. Wireless propagation models are required. respectively. but there are others. so we must reduce the bit rate to Rb = BT log2 M 4 = 1.55 (53) Rb = C/N0 Eb N0 = 1. the received power is calculated from the Friis formula.75×106 = 5. we have a bandwidth-limited system with a bit rate of 4 MHz. Note that λ = c/fc where c = 3 × 108 meters per second is the speed of light.5 MHz.55 The required bandwidth for this bit rate is: BT = (1 + α)Rb = 1.16 MHz log2 M This is greater than the maximum bandwidth of 1.75 Mbits/s 27. = √ 4 ( M − 1) √ = P [error] = Q log2 M M   4 (4 − 1)  3(4) Eb  Q = 4 4 15 N0 15 −1 Q (4/3) × 10−6 12 2 = 27. But. C = PR = where • GT and GR are the antenna gains at the transmitter and receiver.585 × 108 = 5.ECE 5520 Fall 2009 Eb N0 110 for the maximum BER: 3 log2 M Eb M − 1 N0 2.75×106 )/4 = 2.3 Computing Received Power We need to design our system for the power which we will receive. Try Method A. so we just use the center frequency fc . and can really only be used for deep space communications. this section of the class provides two examples. For narrowband signals. 30. PT GT GR λ2 (4πR)2 . • λ is the wavelength at signal frequency.3.1 Free Space ‘Free space’ is the idealization in which nothing exists except for the transmitter and receiver.5 In summary.5(5. For M = 16 (square) QAM. this formula serves as a starting point for other radio propagation formulas. In free space.

The Rice book writes the Friis formula as [C]dBm = [GT ]dB + [GR ]dB + [PT ]dBm + [Lp ]dB where λ − 20 log 10 R 4π where [Lp ]dB is called the “channel loss”. But. for some constants n and L0 . For example. it should be called “channel gain” and have a negative gain value. 111 Here. it may be more like 1/Rn . My opinion is. and we will write [PR ]dBm or [GT ]dB to denote that they are given by: [PR ]dBm = 10 log 10 PR [GT ]dB = 10 log 10 GT (54) In lecture 2. the average loss may increase more quickly than 1/R2 .2 Non-free-space Channels We don’t study radio propagation path loss formulas in this course. etc. Lots of other formulas exist that approximate the received power as a function of distance. but it is typically very negative. Typically people use decibels to express these numbers. is λ [C]dBm = [GT ]dB + [GR ]dB + [PT ]dBm + 20 log 10 − 20 log 10 R 4π This says the received power. we showed that the Friis formula.3. is linearly proportional to the log of the distance R. [C]dBm = [PT ]dBm − R [L1m ]dB where PT is the transmit power and R is the cable length in meters. Rice lumps these in as the term L and writes: [Lp ]dB = +20 log10 [C]dBm = [GT ]dB + [GR ]dB + [PT ]dBm + [Lp ]dB + [L]dB 30.ECE 5520 Fall 2009 • PT is the transmit power. trees. a very short summary is that radio propagation on Earth is different than the Friis formula suggests. type of environment. and L1m is the loss per meter. (This name that Rice chose is unfortunate because if it was positive. everything is in linear terms.) There are also typically other losses in a transmitter / receiver system. but the loss is modeled as linear in the length of the cable. etc. 30. C.3.. because of shadowing caused by buildings.3 Wired Channels Typically wired channels are lossy as well. antenna heights. whenever path loss is linear with the log of distance. given in dB. Effectively. . it would be a “gain”. etc. [Lp ]dB = L0 − 10n log10 R. losses in a cable. For example. other imperfections. instead.
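Putting the dB-form link budget above together in Matlab: every numerical value below is a placeholder rather than a specific system, and miscellaneous losses are simply lumped into one negative-dB term.

% Received power from the dB form of the Friis formula, plus misc. losses.
c      = 3e8;             % speed of light, m/s
fc     = 4e9;             % carrier frequency, Hz (placeholder)
R      = 30e3;            % path length, m (placeholder)
PT_dBW = 10;  GT_dB = 20;  GR_dB = 20;             % placeholders
L_dB   = -5;              % atmospheric + implementation + other losses
lambda = c/fc;
Lp_dB  = 20*log10(lambda/(4*pi)) - 20*log10(R);    % "channel loss" (negative)
C_dBW  = PT_dBW + GT_dB + GR_dB + Lp_dB + L_dB;    % received power, dBW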

075m. The receiver has an equivalent noise temperature of 400 K and an im. incidental losses.48dBW − 20 log 10 R (55) . the wavelength is λ = 3 × 108 m/s/4 × 109 1/s = 0.4 dBW.ECE 5520 Fall 2009 112 30. so: λ [C]dBW = [GT ]dB + [GR ]dB + [PT ]dBW + 20 log10 − 20 log10 R − 2dB − 2dB − 2dB − 1dB 4π 0. 255 N0 2 3 log2 M Eb M − 1 N0 255 Q−1 24 32 −8 10 15 = 319. So Eb = 319.0 × 5. Neal’s hint: Use the dB version of Friis formula and subtract these mentioned dB losses: atmospheric losses.plementation loss of 1 dB. But in short. all receiver circuits add noise to the received signal. The modulation is 51.36 Consider a “point-to-point” microwave link. Basically. √ Solution: Starting with the modulation.075m − 20 log10 R − 7dB = 20dB + 20dB + 10dBW + 20 log10 4π = −1.4 Computing Noise Energy N0 = kTeq The noise energy N0 can be calculated as: where k = 1. C = (51.84 Mbits/sec 256 square QAM with a carrier frequency of 4 GHz. M = 16). Teq can be kept low. Since Eb = C/Rb and the bit rate Rb = 51.0 The noise power N0 = kTeq = 1.52 × 10−21 J. (Such links were the key component in the telephone company’s long distance network before fiber optic cables were installed. With proper design. the equivalent temperature is a function of the receiver design.84 × 106 )J(1.52 × 10−21 = 1. 10 −8 = 10−8 = 10−8 = Eb N0 = √ 4 ( M − 1) √ Q log2 m M   4 15  3(8) Eb  Q 8 16 255 N0 15 Q 32 24 Eb . How far away can the two towers be if the bit error rate is not to exceed 10−8 ? Include the pigeon.5 Examples Example: Rice 6.38 × 10−23 (J/K)400(K) = 5. Atmospheric losses are 2 dB and other incidental losses are 2 dB. and the pigeon. Teq is always going to be higher than the temperature of your receiver. M = 256 square QAM (log2 M = 8.) Both antenna gains are 20 dB and the transmit antenna power is 10 W.76 × 10−18 )1/sec = 9.38 × 10−23 J/K is Boltzmann’s constant and Teq is called the equivalent noise temperature in Kelvin.76 × 10−18 J. to achieve P [error] = 10−8 . or -100. implementation loss. Switching to finding an expression for C. A pigeon in the line-of-sight path causes an additional 2 dB loss. 30.13 × 10−11 W. This is a topic covered in another course. and so you are not responsible for that material here.84 × 106 bits/sec.

Lecture 20 Today: (1) Timing Synchronization Intro. note the ADC has nothing controlling it. Figure 55 shows a receiver with an ADC immediately after the downconverter. xn (k) = xn (k) = k m r(t). φn (t − kTs − τ ) (57) Note that if tk = kTs + τ . (2) Interpolation Filters 31 Timing Synchronization At the receiver. For the correlation with n. This tk is the correct timing delay at each correlator for the kth symbol. The received complex baseband signal (for QAM. Figure 54: Block diagram for a continuous-time receiver. let’s compare receiver implementations for a (mostly) continuous-time and a (more) discrete-time receiver.ECE 5520 Fall 2009 113 Plugging in [C]dBm = −100. or QPSK) can be written (assuming carrier synchronization) as r(t) = k m am (k)φm (t − kTs − τ ) (56) where ak (m) is the mth transmitted symbol. then the correlation φn (t − tk ). PAM. As a start.3 km (about 55 miles) apart. and for some delay tk . But. φn (t − kTs − τ ) is highest and closest to 1. The input signal is downconverted and then run through matched filters. . including analog timing synchronization (from Rice book.1).4 dBW = −1. an interpolator corrects the sampling time problems using discrete-time processing.3 km.3. a transmitted signal arrives with an unknown delay τ .48dBW − 20 log 10 R and solving for R. This interpolation is the subject of this section. Instead. these are generally unknown to the receiver until timing synchronization is complete. Figure 54 has a timing synchronization loop which controls the sampler (the ADC). after the matched filter. Thus microwave towers should be placed at most 88. we find R = 88. which correlate the signal with φn (t − tk ) for each n. Figure 8. Here. φn (t) am (k) φn (t − tk ).

but with a timing error. Some example sampling clocks. compared to the actual symbol clock. . 32 Interpolation In this class. we mean within an integer multiple. and reducing part counts and costs. we will emphasize digital timing synchronization using an interpolation filter. we’ll talk about interpolation. Also.5 −1 4 6 8 10 Time t 12 14 Figure 56: Samples of the matched filter output (BPSK. which is not preferred. unsynchronized with the symbol clock. including discrete-time timing synchronization. 1 0. after the signal has been sampled. In this figure. the timing synch control block is fed back to the ADC. and then. The industry trend is more and more towards digital implementations. These are shown in degrees of severity of correction for the receiver. When we say ‘synchronized in rate’. consider Figure 56. First. because of the processing delay. A ‘software radio’ follows the idea that as much of the radio is done in digital. this digital and analog feedback loop can be problematic. But again. a BPSK receiver samples the matched filter output at a rate of twice per symbol. The idea is to “bring the ADC to the antenna” for purposes of reconfigurability. Another implementation is like Figure 55 but instead of the interpolator. this requires a DAC and feedback to the analog part of the receiver. resulting in samples r(nT). If down-sampling (by 2) results in the symbol samples rk given by red squares. we’ll consider the control loop. then sampling sometimes reduces the magnitude of the desired signal. since the sampling clock must operate at (at least) double the symbol rate. RRC with α = 1) taken at twice the correct symbol rate (vertical lines). are shown in Figure 57.ECE 5520 Fall 2009 r(nTsa) ADC cts-time bandpass signal Matched Filter Digital Interpolator Timing Synchronization 114 r(kTsy) x(t) p/2 PLL Timing Synch Control Optimal Detector 16 2cos(2pfct+f) bm 2sin(2pfct+f) ADC Matched Filter Digital Interpolator r(nTsa) r(kTsy) Figure 55: Block diagram for a digital receiver for QAM/PSK. For example.5 Value 0 −0.

Figure 57: (1) Sampling clock and (2)-(4) possible actual symbol clocks, compared with the sample clock. The symbol clock may be (2) synchronized in rate and phase, (3) synchronized in rate but not in phase, or (4) synchronized neither in rate nor phase.

The situation shown in Figure 56 is case (3), where T/Ts = 1/2 (the clocks are commensurate), but the sampling clock does not have the correct phase (τ is not equal to an integer multiple of T).

Def'n: Incommensurate
Two clocks with rates T and Ts are incommensurate if the ratio T/Ts is irrational. In contrast, two clocks are commensurate if the ratio T/Ts can be written as n/m where n, m are integers.

For example, if T/Ts = 1/2, the two clocks are commensurate and we sample exactly twice per symbol period. As another example, if T/Ts = 2/5, we sample exactly 2.5 times per symbol, and every 5 samples the delay until the next correct symbol sample will repeat. Since clocks are generally incommensurate, we cannot count on them ever repeating. Instead, our receivers always deal with type (4) sampling clock error as drawn in Figure 57: the sampling clock has neither the same exact rate nor the same phase as the actual symbol clock.

32.1 Sampling Time Notation
In general, for both cases (3) and (4) in Figure 57, the correct sampling times should be kTs + τ. We can write that

kTs + τ = [m(k) + µ(k)]T

where m(k) is an integer, the highest integer such that nT < kTs + τ, that is,

m(k) = ⌊(kTs + τ)/T⌋        (58)

where ⌊x⌋ is the greatest integer less than function (the Matlab floor function), and 0 ≤ µ(k) < 1 is called the fractional interval. In other words, kTs + τ is always µ(k)T after the most recent sampling instant. This means that µ(k) is given by

µ(k) = (kTs + τ)/T − m(k)

Example: Calculation Example
Let Ts/T = 3.1 and τ/T = 1.8. Calculate (m(k), µ(k)) for k = 1, 2, 3.
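A quick numerical check of the example just posed (the worked solution follows below):

% Fractional-interval bookkeeping: m(k) = floor((k*Ts + tau)/T),
% mu(k) = (k*Ts + tau)/T - m(k).  Here Ts/T = 3.1 and tau/T = 1.8.
TsOverT = 3.1;  tauOverT = 1.8;
for k = 1:3
    x   = k*TsOverT + tauOverT;    % (k*Ts + tau)/T
    mk  = floor(x);
    muk = x - mk;
    fprintf('k=%d: m(k)=%d, mu(k)=%.1f\n', k, mk, muk);
end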

and Figure 8. The problem is that in general. The analog output of the matched filter could be represented as a function of its samples r(nT). Note: Good illustrations are given in M. Plugging these times in for t in (59). Figure 8. and in between samples 11 and 12. r([m(k) + µ(k)]T) = n r(nT)hI ([m(k) − n + µ(k)]T). Call the correct symbol sampling times as kTs + τ for integer k.1) + 1. 32.4. πt/T (59) where hI (t) = Why is this so? What are the conditions necessary for this representation to be accurate? If we wanted the signal at the correct sampling times. Re-indexing with i = m(k) − n.3 Approximate Interpolation Filters Clearly.8⌋ = 11. Instead. this requires an infinite sum over i from −∞ to ∞.ECE 5520 Fall 2009 116 Solution: m(1) = ⌊3. µ(k)) notation.1 m(3) = ⌊3(3. Rice. (60) is a filter.9 µ(1) = 0 µ(1) = 0. where the filter coefficients are dependent on µ(k). This isn’t so great of an approximation.13.1 + 1. since kTs + τ = [m(k) + µ(k)]T. in which we draw a straight line between the two nearest sampled values to approximate the values of the continuous signal between the samples. r([m(k) + µ(k)]T) = i r([m(k) − i]T)hI ([i + µ(k)]T).8⌋ = 8. . µ(1) = 0. we use polynomial approximations for hI (t): • The easiest one we’re all familiar with is linear interpolation (a first-order polynomial).12. r(t) = n r(nT)hI (t − nT) sin(πt/T) . using the (m(k).1) + 1. where Ts is the actual symbol period used by the transmitter. Thus your interpolation will be done: in between samples 4 and 5. 32. The desired sample at [m(k) + µ(k)]T is calculated by adding the weighted contribution from the signal at each sampling time. we could have it – we just need to calculate r(t) at another set of times (not nT). at sample 8. (60) This is a filter on samples of r(·). we have that r(kTs + τ ) = n r(nT)hI (kTs + τ − nT) Now.8⌋ = 4. m(2) = ⌊2(3.2 Seeing Interpolation as Filtering Consider the output of the matched filter.4. because the sinc function has infinite support. r(t) as given in (57).

. Note that the i indices seem backwards. and get a linear-phase filter. and one on the other. Farrow. 0 r([m(k) + µ(k)]T) = i=−1 r([m(k) − i]T)hI (i. we should take the r(m(k)T) sample exactly. of AT&T Bell Labs. Given three points.) Instead. the filter is hI (i) but its values are a function of µ(k).4 Implementations Note: These polynomial filters are called Farrow filters and are named after Cecil W. we could use a cubic interpolation filter.1 Higher order polynomial interpolation filters p In general. Rice handout. . . Thus we could re-write (60) as r([m(k) − i]T)hI (i. (To see this. we can use four points to fit a second-order polynomial. see M. µ(k)) What are the filter elements hI for a linear interpolation filter? Solution: r([m(k) + µ(k)]T) = µ(k)r([m(k) + 1]T) + [1 − µ(k)]r(m(k)T) Here we have used hI (−1.a parabola) is actually a very good approximation. Rice Figure 8. .ECE 5520 Fall 2009 117 • A second-order polynomial (i. As µ(k) → 1. we should take the r([m(k) + 1]T) sample exactly. This is temporal asymmetry. µ(k)) = 1 − µ(k). µ(k)) = µ(k) and hI (0. From (60). To see results for different order polynomial filters. the three point fit does not result in a linear-phase filter. Four points determine a 3rd order polynomial. Essentially. µ(k)).e. µ(k)1 . hI (i. The filter coefficients are a polynomial function of µ(k). µ(k)2 . 32. µ(k)p for a pth order polynomial filter. These Farrow filters started to be used in the 90’s and are now very common due to the dominance of digital processing in receivers.4. • However. • Finally. who has the US patent (1989) for the “Continuously variable digital delay circuit”. one can determine a parabola that fits those three points exactly.1 of the M.24. the fractional interval. given µ(k). consider the linear interpolator. . Example: First order polynomial interpolation For example. 32. (61) r([m(k) + µ(k)]T) = i That is. we form a weighted average of the two nearest samples. As µ(k) → 0. note in the time domain that two samples are on one side of the interpolation point. µ(k)) = bl (i)µ(k)l l=0 A full table of bl (i) is given in Table 8. they are a weighted sum of µ(k)0 . that is. and result in a linear filter. we can see that the filter coefficients are a function of µ(k).

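A small Matlab sketch of the first-order (linear) interpolator described above, using hI(−1, µ) = µ and hI(0, µ) = 1 − µ. The slow sinusoid here is only a stand-in for the matched filter output; the basepoint index and fractional interval are assumed values.

% Linear interpolation between the two nearest samples:
%   x([m(k)+mu]T) = mu * x([m(k)+1]T) + (1-mu) * x(m(k)T)
T  = 1;  n = 0:20;  x = cos(2*pi*0.05*n*T);    % samples at t = nT
mk = 10;  mu = 0.4;                            % interpolate at t = 10.4*T
xInterp = mu*x(mk+2) + (1-mu)*x(mk+1);         % +1 offsets for 1-indexing
trueVal = cos(2*pi*0.05*(mk+mu)*T);            % exact value, for comparison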
125 − 0. Example: 2nd order Farrow filter What is the Farrow filter for α = 0. and division by two is extremely easy in digital filters. 0.5) = αµ2 − αµ = 0. but people tend to use α = 0. timing synchronization for QPSK. Be careful.125 hI (−1. 0.625 (62) Does this make sense? Do the weights add up to 1? Does it make sense to subtract a fraction of the two more distant samples? Example: Matlab implementation of Farrow Filter My implementation is called ece5520 lec20. Filter Timing e(n) Loop Error Filter Detector s [ n] h[n]=p(-nT) 2 cos(W0n) v(n) symbol strobe VCC m Interp.25 + 1 = 0. 0.5) = −αµ2 + (α − 1)µ + 1 = −0.625 hI (1. we can calculate that hI (−2. Since µ2 = 0. as my implementation uses a loop. rather than vector processing.m and is posted on WebCT. Lecture 21 Today: (1) Timing Synchronization (2) PLLs 33 Final Project Overview xI(n) MF N/2 Interp.5) = αµ2 − αµ = 0.125 hI (0. 0.5) = −αµ2 + (1 + α)µ = −0. µ0 = 1.125 + 0.43 is best.5.25.25 = −0. Filter MF h[n]=p(-nT) 2 sin(W0n) N/2 rI(k) rQ(k) xQ(n) Symbol Decision All Bits Preamble Detector DATA Bits Figure 58: Flow chart of final project.5 which interpolates exactly half-way between sample points? Solution: From the problem statement.ECE 5520 Fall 2009 118 For the 2nd order Farrow filter.5 because it is only slightly worse. µ = 0. µ = 0.25 = −0.75 = 0. It has been shown by simulation that α = 0.125 − 0.5.125 − 0. there is an extra degree of freedom – you can select parameter α to be in the range 0 < α < 1. .

you can interpolate between samples to find the desired sample. Compared to last year’s class. This is a good source for detail on many possible methods. Section 8. Interpolation Filter 2. • Problem: After the matched filter. analogous to a phased locked loop. which has output denoted xI (n). Preamble Detector These notes address the motivation and overall design of each block. However this solution leads to new problems: • Problem: True interpolation requires significant calculation – the sinc filter has infinite impulse response. Loop Filter 4. there may be no analog feedback to the ADC. we leave out the Ts and simply talk about the index n or fractional delay µ. There are several possible timing error detection (TED) methods. you will need to understand how and why they work. Timing Error Detector (TED) 3. We will talk through two common discrete-time implementations: 1. and in modern discrete-time implementations. you have the advantage of having access to the the Rice book. We focus on the discrete time solutions. you will implement the above receiver.1 Review of Interpolation Filters Timing synchronization is necessary to know when to sample the matched filter output. discrete time. which contains much of the code for a BPSK implementation.4. • Problem: How do you know when the correct symbol sampling time should be? • Solution: Use a timing locked loop. it works nearly as well. or a mix.ECE 5520 Fall 2009 119 For your final project. 33.4. in order to fix any bugs.2 Timing Error Detection We consider timing error detection blocks that operate after the matched filter and interpolator in the receiver. Voltage Controlled Clock (VCC) 5.4.1. When you put these together and implement them for QPSK. as related in Rice 8. • Solution: From samples taken at or above the Nyquist rate. • Solution: Approximate the sinc function with a 2nd or 3rd order polynomial interpolation filter. Implementations may be continuous time. Often. It adds these blocks to your QPSK implementation: 1. Early-late timing error detector (ELTED) . the samples may be at incorrect times. We want to sample at times [n + µ]Ts . where n is the integer part and µ is the fractional offset. The job of the timing error detector is to produce a voltage error signal e(n) which is proportional to the correct fractional offset between the current sampling offset (ˆ) and the correct µ sampling offset. 33.

In general. at time n. When the bit sequence is The early-late TED is a discrete-time implementation of the continuous-time “early-late gate” timing error detector. In particular.5 −1 (b) −0..5 0 0. When the bit is constant (e. Zero-crossing timing error detector (ZCTED) We’ll use a continuous-time realization of a BPSK received matched filter output.5 0 0. to illustrate the operation of timing error detectors. You can imagine that the interpolator output xI (n) is a sampled version of x(t).5 −1 −4 (a) x 10 5 −0. hopefully with some samples taken at exactly the correct symbol sampling times.5 1 Time from Symbol Sample Time 1. ˙ 33.ECE 5520 Fall 2009 120 2. to 1. x(t). In ˙ both. the error e(n) is determined by the slope and the sign of the sample x(n).5 −1 −1. we can use it to indicate how far from the sampling time we might be. consider for BPSK the value of x(t)sgn {x(t)} ˙ .5 x(t) Value 0 −0. 1 0.5 1 Time from Symbol Sample Time 1.5 Slope of x(t) 0 −5 −1. 1. and increases away from the correct ˙ sampling time. to 1. to 1) the slope is close to zero.3 Early-late timing error detector (ELTED) Since the derivative x(t) is close to zero at the correct sampling time. Figure 59 shows that the derivative is a good indication of timing error when the bit is changing from -1. time 0 corresponds to a correct symbol sampling time. Here (a) shows the signal x(t) and (b) shows its derivative x(t). x(t). Figure 59(a) shows an example eye diagram of the signal x(t) and Figure 59(b) shows its derivative x(t). to -1.g.5 Figure 59: A sample eye-diagram of a RRC-shaped BPSK received signal (post matched filter).

which is a function of the excess bandwidth of the RRC filter used. but would mean that we’re behind for a -1 symbol. In particular. if the sign changes between n − 2 and n. ˆ ˆ (64) (65) The error detector is non-zero at symbol sampling times. ˆ a(k − 1) = sgn {x((k − 1)Ts + τ )} . we don’t divide by the time duration 2Ts because it is just a scale factor. This factor contributes to the gain Kp of this part in the loop. and we just need something proportional to the timing error.8 of the Rice book. The signum function sgn {x(t)} just corrects for the sign: • A positive slope would mean we’re behind for a +1 symbol. to 1. which will be 2 when the sign changes. then the error e(n) = 0.7 of the Rice book. the intermediate sample n − 1 should be approximately zero. we can send alternating bits during the synchronization period in order to help the ELTED more quickly converge. because every second sample is a symbol sampling time. The error is. but would mean that we’re ahead for a -1 symbol. This error detector assumes that the sampling rate is SP S = 2. as it is called in the Rice book) is high. In the project. If there was a symbol transmission. For a discrete time system. (63) can be re-written as e(n) = xI (n − 1)[sgn{xI (n − 2)} − sgn{xI (n)}] (66) This operation is drawn in Figure 60. the slope will be somewhat positive even when the sample is taken at the right time. This timing error detector has a theoretical gain Kp which shown in Figure 8. In particular see Figure 8. and then multiply it by 2 to find the gain Kp of your ZCTED. a negative slope would mean we’re ahead for a +1 symbol. that indicates a symbol transition.4. It is described here for BPSK systems. you will need to look up Kp in Figure 8. to 1.1 of the Rice book. if it was taken at the correct sampling time. and when the bit is changing from -1. if the symbol strobe is not activated at sample n.4. We’d estimate x(t) from the samples out of the interpolator and see ˙ that e(n − 1) = {xI (n) − xI (n − 2)} sgn {xI (n − 1)} Here.17 of Rice. . The factor of two comes from the difference of signs in (66). That is. to -1. you get only an approximation of the slope unless the sampling rate SP S (or N .4. the slope will be somewhat negative even when the sample is taken at the right time. to -1. 33. These are imperfections in the ELTED which (hopefully) average out over many bits. e(k) = x((k − 1/2)Ts + τ )[ˆ(k − 1) − a(k)] ˆ a ˆ (63) where the a(k) term is the estimate of the kth symbol (either +1 or −1). ˆ ˆ a(k) = sgn {x(kTs + τ )} .ECE 5520 Fall 2009 121 from 1. Basically. The theoretical gain in the ZCTED is twice that of the ELTED. • In the opposite manner. In terms of n.4 Zero-crossing timing error detector (ZCTED) The zero-crossing timing error detector (ZCTED) is also described in detail in Section 8.
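The per-symbol computation in (66) is essentially one line of Matlab. A minimal sketch, assuming BPSK with 2 samples per symbol and assuming (for simplicity) that every other sample, starting at n = 3, is a symbol strobe time; in the project the strobe comes from the NCO instead.

% Zero-crossing TED, evaluated when the symbol strobe is active at index n:
%   e(n) = xI(n-1) * ( sgn{xI(n-2)} - sgn{xI(n)} )
% Assumed input: xI, the interpolated matched filter output at 2 samples/symbol.
e = zeros(size(xI));
for n = 3:2:length(xI)                 % assumed: every other sample is a symbol time
    e(n) = xI(n-1) * (sign(xI(n-2)) - sign(xI(n)));
end
% e(n) is then scaled by the detector gain Kp and fed to the loop filter.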

) An example trace of the numerically controlled oscillator (NCO) which is the heart of the VCC is shown in Figure 62.. both the in-phase and quadrature signals are used to calculate the error term. 33.. the output is 2xI (n − 1). See (8. which would be approximately zero if the sample clock is synchronized.5 Voltage Controlled Clock (VCC) We’ll use a decrementing VCC in the project. NCO(n-1) m W(n) Figure 62: Numerically-controlled oscillator signal N CO(n).ECE 5520 Fall 2009 122 Timing Error Detector sgn sgn e(n) xI(n) z-1 xI(n-1) z-1 xI(n-2) symbol strobe Figure 60: Block diagram of the zero-crossing timing error detector (ZCTED). 1 when positive and -1 when negative. The error is simply the sum of the two errors calculated for each signal. This is shown in Figure 61. . and the symbol has switched.1 QPSK Timing Error Detection When using QPSK. (The choice of incrementing or decrementing is arbitrary.100) in the Rice book for the exact expression. When this happens. Strobe high: change m to m=NCO(n-1)/W(n) 1/2 v(n) W(n) symbol strobe m underflow? symbol strobe z -1 modulo-1 NCO(n-1) Voltage Controlled Clock (VCC) Figure 61: Block diagram of decrementing VCC with control voltage v(n). The input strobe signal activates the sampler when it is the correct symbol sampling time. 33. where sgn indicates the signum function.4. NCO(n-2) NCO(n) .

the NCO decrements by 1/SP S + v(n). If the phase estimate is exactly correct. This section is included for reference.. 33. Acos(2pfct+q(t)) Phase Detector g(q(t)-q(t)) Loop Filter v(t) cos(2pfct+q(t)) VCO t q(t)=k0òv(x)dx -¥ Figure 63: General block diagram of a phase-locked loop for carrier synchronization [Rice]. The incoming signal is a carrier wave. ˆ The job of the the PLL is to estimate φ(t). the phase detector produces a voltage to correct the phase estimate. µ(n) = µ(n − 1).e. that is. the phase offset is a ramp. This is the effect of the ‘S’ curve function g(θe ) in Figure 64. it may include a frequency offset. The error in the phase estimate is ˆ θe (t) = θ(t) − θ(t) If the PLL is perfect. i. meaning that a sample must be taken. (on average) the NCO voltage drops below zero. 1. . We call this estimate θ(t). Initially. It is. θe (t) = 0.1 Phase Detector If the estimate of the phase is not perfect. For example.6. ahead of the carrier. then g(θe ) = 0 and VCO output phase remains constant. then g(θe ) < 0 and the VCO decreases the phase of the VCO output. and then θ(t) = ∆f t + α.ECE 5520 Fall 2009 123 The NCO starts at 1. Once per symbol. When the symbol strobe is activated. A cos(2πfc t + θ(t)) where θ(t) is the phase offset. then g(θe ) > 0 and the VCO increases the phase of the VCO output. 2. You will prove that (67) is the correct form for µ(n) in the HW 9 assignment.6 Phase Locked Loops Carrier synchronization is done using a phase-locked loop (PLL). that is. behind the carrier. the operation of the timing synchronization loop may be best understood by analogy to the continuous time carrier synchronization loop. 3. 33. For those of you familiar with PLLs. µ(n) is kept the same. because it may not be constant. At each sample n. the strobe is set to high. Consider a generic PLL block diagram in Figure 63. When this happens. the fractional interval is also recalculated. In this case. let’s ignore the loop filter. The function g(θe ) may look something like the one shown in Figure 64. It is a function of time. µ(n) = N CO(n − 1) W (n) (67) When the strobe is not activated. If the phase estimate is too high. t. If the phase estimate is too low.

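Going back to the decrementing NCO of Figures 61 and 62, a minimal Matlab sketch of its per-sample update is below. The inputs v (loop filter output) and SPS (samples per symbol) are assumed to exist; the fractional interval update follows (67).

% Decrementing NCO / voltage-controlled clock, one iteration per sample n.
N = length(v);               % v = loop filter output, one value per sample
NCO = 1;  mu = 0;  strobe = false(1, N);
for n = 2:N
    W = 1/SPS + v(n);        % current decrement W(n)
    if NCO - W < 0           % underflow: symbol strobe fires at sample n
        strobe(n) = true;
        mu  = NCO / W;       % new fractional interval, as in (67)
        NCO = NCO - W + 1;   % modulo-1 wrap
    else
        NCO = NCO - W;
    end
end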
for continuous time.ECE 5520 Fall 2009 124 g(qe) qe Figure 64: Typical phase detector input-output relationship. 33. ˆ In the left half of Figure 65. We represent the loop filter in the Laplace or frequency domain as F (s) or F (f ). as we know. Note that if you just raise the voltage for a short period and then reduce it back (a constant plus rect function input). respectively.6.2 Loop Filter The loop filter is just a low-pass filter to reduce the noise.3 VCO A voltage-controlled oscillator (VCO) simply integrates the input and uses that integral as the phase of a cosine wave..6. The input is essentially the gas pedal for a race car driving around a track . This linear approximation is generally fine when the phase error is reasonably small. in the Laplace domain [Rice]. you change the phase of the output. If we do this.6. i. This phase-equivalent model of the PLL allows us to do analysis. phaseequivalent perspective Q(s) kp F(s) V (s ) Q(s) k0 s Figure 65: Phase-equivalent and linear model for the continuous-time PLL.e.increase the gas (voltage). but leave the frequency the same. Note that in addition to the A cos(2πfc t + θ(t)) signal term we also have channel noise coming into the system. In particular. We will be interested in discrete time later. is the cosine of 2πfc plus that phase. θe = 0. 33. we end up making simplifications. This curve is called an ‘S’ Curve. so we could also write the Z-transform response as F (z).4 Analysis To analyze the loop in Figure 63. we model the ‘S’ curve (Figure 64) as a line. . 33. and the engine (VCO) will increase the frequency of rotation around the track. we can re-draw Figure 63 as it is drawn in Figure 65. because the shape of the curve looks like the letter ‘S’ [Rice]. we write just Θ(s) and Θ(s) even though the physical process.

We can write that

    Θe(s) = Θ(s) − Θ̂(s) = Θ(s) − Θe(s) kp F(s) (k0/s)

To get the transfer function of the loop (when we consider the phase estimate to be the output), some manipulation shows that

    Ha(s) = Θ̂(s)/Θ(s) = kp F(s)(k0/s) / (1 + kp F(s)(k0/s)) = k0 kp F(s) / (s + k0 kp F(s))

You can use Ha(s) to design the loop filter F(s) and gains kp and k0 to achieve the desired PLL goals. Here those goals include:

1. Desired bandwidth.
2. Desired response to particular phase error models.

The latter deserves more explanation. For example, phase error models might be: a step input, or a ramp input (the latter in the case of a frequency offset). Designing a filter to respond to these, but not to AWGN noise (as much as possible), is a challenge.

The Rice book recommends a 2nd-order proportional-plus-integrator loop filter,

    F(s) = k1 + k2/s

This filter has a proportional part (k1), which is just proportional to the input and responds to the input during a phase offset. The filter also has an integrator part (k2/s), which integrates the phase input and, in the case of a frequency offset, will ramp up the phase at the proper rate.

Note: An accelerating frequency change wouldn't be handled properly by the given filter. For example, if the temperature of the transmitter or receiver varied quickly, the frequency of the crystal would also change quickly. However, this is not typically fast enough to be a problem in most radios.

33.6.5 Discrete-Time Implementations

You can alter F(s) to be a discrete-time filter using Tustin's approximation,

    1/s → (T/2) (1 + z^-1) / (1 − z^-1)

Because it is only a second-order filter, its implementation is quite efficient. See Figure C.2.1 (p. 733, Rice book) for diagrams of this discrete-time PLL. In this case there are four parameters to set: k1, k2, and the other gain constants K0 and Kp. Rice Section C.2 details how to set these parameters to achieve a desired bandwidth and a desired damping factor (to avoid ringing, but to converge quickly). Equations C.56 and C.57 (p. 736) show how to calculate the constants K1 and K2, given the desired natural frequency θn, damping coefficient ζ, the normalized bandwidth Bn T, and kp and k0. You will, in Homework 9, design the particular loop filter to use in your final project.
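To make the role of the integrator concrete, here is a minimal discrete-time sketch of the proportional-plus-integrator loop tracking a frequency offset, i.e., a ramp in θ(n). The gains below are illustrative placeholders of mine, not values computed from the C.56/C.57 design equations.

    % Discrete-time proportional-plus-integrator loop filter sketch:
    % v(n) = K1*e(n) + (running sum of K2*e), followed by an NCO/VCO that
    % accumulates v(n) into the phase estimate. Gains are illustrative only.
    K1 = 0.15; K2 = 0.01;
    dTheta = 0.02;                        % frequency offset: theta ramps by dTheta per sample
    N = 400; theta = dTheta*(1:N) + 0.5;  % true phase: ramp plus constant offset
    thetaHat = 0; integ = 0; e = zeros(1, N);
    for n = 1:N
        e(n)  = theta(n) - thetaHat;      % phase error (linearized detector)
        integ = integ + K2*e(n);          % integrator path
        v     = K1*e(n) + integ;          % loop filter output
        thetaHat = thetaHat + v;          % NCO/VCO accumulates the phase estimate
    end
    plot(e); xlabel('n'); ylabel('phase error');  % settles near zero despite the ramp

Without the integrator path (K2 = 0), this loop would settle to a constant, nonzero phase error under a frequency offset; the integrator is what ramps the phase estimate up at the proper rate.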

Note that the bandwidth Bn T is the most critical factor. It is typically much less than one (e.g., 0.005). This means that the loop filter effectively averages over many samples when setting the input to the VCO.

Lecture 22

Today: (1) Exam 2 Review

• Exam 2 is Tue Apr 14. The exam will be proctored.
• I will be out of town Mon-Wed. Please come to me with questions today or tomorrow.
• Office hours: today 2-3:30 and 3-4, Friday 11-12.

34 Exam 2 Topics

Where to study:

• Lectures covered: 8 (detection theory), 9, 11-19. Lectures 20-21 are not covered. No material from 1-7 will be directly tested, but of course you need to know some things from the start of the semester to be able to perform well on the second part of this course.
• Rice book: Chapters 5 and 6.
• Homeworks covered: 4-8. Please review these homeworks and know the correct solutions.

What topics:

1. Detection theory
2. Signal space
3. Digital modulation methods:
   • Binary, bipolar vs. unipolar
   • M-ary: PAM, QAM, PSK, FSK
4. Inter-symbol interference, Nyquist filtering, SRRC pulse shaping
5. Differential encoding and decoding for BPSK
6. Energy detection for FSK
7. Probability of Error vs. Eb/N0:
   • Exact expression, union bound, nearest-neighbor approximation
   • Standard modulation methods
   • A non-standard modulation method, given its signal space diagram
8. Gray encoding

9. Bandwidth, assuming SRRC pulse shaping, for PAM/QAM/PSK; FSK; the concept of frequency multiplexing
10. Comparison of digital modulation methods:
   • Need for linear amplifiers (transmitter complexity)
   • Receiver complexity
   • Bandwidth efficiency

Types of questions (format):

1. True / false or multiple choice.
2. Short answer, with a 1-2 sentence limit. There may be multiple correct answers, but there may be many wrong answers (or right answer / wrong reason combinations).
3. Work out a solution. Example: What is the probability of bit error vs. Eb/N0 for this signal space diagram?
4. System design: Here are some engineering requirements for this communication system. From this list of possible modulation methods, which one would you recommend? What bandwidth and bit rate would it achieve?

You will be provided the table of the Qinv function, and the plot of Qinv. You can use two sides of an 8.5 x 11 sheet of paper.

Lecture 23

Today: (1) Source Coding

• HW 9 is due today at 5pm. OH today 2-3pm, Monday 4-5pm.
• Final homework: HW 10, which will be due April 28 at 5pm.

35 Source Coding

This topic is a semester-long course at the graduate level in itself, but the basic ideas can be presented pretty quickly. One good reference:

• C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379-423 and 623-656, July and October, 1948. http://www.cs.bell-labs.com/cm/ms/what/shannonday/paper.html

We have done a lot of counting of bits as our primary measure of communication systems. Our information source is measured in bits, or in bits per second. Modulation schemes' bandwidth efficiency is measured in bits per Hertz, and energy efficiency is energy per bit over noise PSD, Eb/N0. Everything is measured in bits!

But how do we measure the bits of a source (e.g., audio, video, email, ...)? Information can often be represented in many different ways. Images and sound can be encoded in different ways. Text files can be presented in different ways. Here are two misconceptions:

1. Using the file size tells you how much information is contained within the file.
2. Take the log2 of the number of different messages you could send.

For example, consider a digital black & white image (not grayscale, but truly black or white).

1. You could store it as a list of pixels. Each pixel has two possibilities (possible messages), thus we could encode it in log2 2, or one bit, per pixel.
2. You could simply send the coordinates of the pixels of one of the colors (e.g., all black pixels).

How many bits would be used in these two representations? What would make you decide which one is more efficient?

You can see that equivalent representations can use different numbers of bits. This is the idea behind source compression. For example, .zip or .tar files represent the exact same information that was contained in the original files, but with fewer bits.

What if we had a fixed number of bits to send any image, and we used the sparse B&W image coding scheme (2.) above? Sometimes, the number of bits in the compressed image would exceed what we had allocated. This would introduce errors into the image.

Two types of compression algorithms:

• Lossless: e.g., Zip or compress.
• Lossy: e.g., JPEG, MP3.

Note: Both "zip" and the unix "compress" commands use the Lempel-Ziv algorithm for source compression.

So what is the intrinsic measure of bits of text, an image, audio, or video?

35.1 Entropy

Entropy is a measure of the randomness of a random variable. Randomness and information, in non-technical language, are just two perspectives on the same thing:

• If you are told the value of a r.v. that doesn't vary that much, that telling conveys very little information to you.
• If you are told the value of a very "random" r.v., that telling conveys quite a bit of information to you.

Our technical definition of entropy of a random variable is as follows.

Def'n: Entropy
Let X be a discrete random variable with pmf pX(xi) = P[X = xi]. Here, there is a finite or countably infinite set SX, and x ∈ SX. We will shorten the notation by using pi as follows:

    pi = pX(xi) = P[X = xi]

where {x1, x2, ...} is an ordering of the possible values in SX. Then the entropy of X, in units of bits, is defined as

    H[X] = − Σi pi log2 pi     (68)

Notes:

• H[X] is an operator on a random variable, not a function of a random variable. It returns a (deterministic) number, not another random variable. In this it is like E[X], another operator on a random variable.
• Entropy of a discrete random variable X is calculated using the probability values of the pmf of X, the pi. Nothing else is needed.
• The sum will be from i = 1 ... N when |SX| = N < ∞.
• Use the convention that 0 log 0 = 0. This is true in the limit of x log x as x → 0+.
• All "log" functions are log-base-2 in information theory unless otherwise noted. The "reason" the units are bits is because of the base-2 of the log. Actually, when theorists use loge, the natural log, they express information in "nats", short for "natural" digits. Keep this in mind when reading a book on information theory.

Example: Binary r.v.
A binary (Bernoulli) r.v. has pmf

    pX(x) = s for x = 1,  1 − s for x = 0,  and 0 otherwise.

What is the entropy H[X] as a function of s?
Solution: Entropy is given by (68) and is:

    H[X] = −s log2 s − (1 − s) log2 (1 − s)

The solution is plotted in Figure 66.

Figure 66: Entropy of a binary r.v. as a function of P[X = 1] = s.
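As a quick check of (68) (a sketch of mine, not from the notes), the one-line Matlab helper below evaluates H[X] for any pmf vector, uses the 0 log 0 = 0 convention, and reproduces the binary entropy curve of Figure 66.

    % Entropy in bits of a pmf vector p (drops zero-probability entries).
    entropyBits = @(p) -sum(p(p > 0) .* log2(p(p > 0)));

    entropyBits([0.5 0.5])       % binary r.v. with s = 1/2: 1 bit
    entropyBits([0.11 0.89])     % a less "random" binary r.v.: about 0.5 bits
    s = 0.001:0.001:0.999;       % sweep s to reproduce the curve of Figure 66
    H = arrayfun(@(x) entropyBits([x 1-x]), s);
    plot(s, H); xlabel('P[X=1]'); ylabel('Entropy H[X]');

The value of 1 bit at s = 1/2 is the maximum: a fair coin flip is the most "random" binary source.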

Example: Non-uniform source with five messages
Some signals are more often close to zero (e.g., audio). Model the r.v. X to have pmf

    pX(x) = 1/16 for x = 2,  1/4 for x = 1,  1/2 for x = 0,  1/8 for x = −1,  1/16 for x = −2,  and 0 otherwise.

What is its entropy H[X]?
Solution:

    H[X] = (1/2) log2 2 + (1/4) log2 4 + (1/8) log2 8 + 2 (1/16) log2 16 = 15/8 bits     (69)

Other questions:

1. Do you need to know what the symbol set SX is?
2. Would multiplying X by 2 change its entropy?
3. Would an arbitrary one-to-one function change the entropy of X?

35.2 Joint Entropy

Def'n: Joint Entropy
The joint entropy of two random variables X1, X2 with event sets SX1 and SX2 is defined as

    H[X1, X2] = − Σ_{x1 ∈ SX1} Σ_{x2 ∈ SX2} pX1,X2(x1, x2) log2 pX1,X2(x1, x2)     (70)

For N joint random variables, X1, ..., XN, the joint entropy is

    H[X1, ..., XN] = − Σ_{x1 ∈ SX1} ··· Σ_{xN ∈ SXN} pX1,...,XN(x1, ..., xN) log2 pX1,...,XN(x1, ..., xN)

What is the entropy for N i.i.d. random variables? You can show that

    H[X1, ..., XN] = −N Σ_{x1 ∈ SX1} pX1(x1) log2 pX1(x1) = N H(X1)

The entropy of N i.i.d. random variables is N times the entropy of any one of them. In addition, the entropy of any N independent (but possibly with different distributions) r.v.s is just the sum of the entropies of the individual r.v.s.

When r.v.s are not independent, the joint entropy of N r.v.s is less than N times the entropy of one of them. Intuitively, if you know some of them, the rest that you don't know become less informative. For example, in the B&W image, since pixels are correlated in space, the joint r.v. of several neighboring pixels will have less entropy than the sum of the individual pixel entropies.

35.3 Conditional Entropy

How much additional entropy is in the joint random variables X1, X2 compared to just one of them? This is often an important question because it answers the question, "How much additional information do I get from both, compared to just one of them?". We call this difference the conditional entropy, H[X2|X1]:

    H[X2|X1] = H[X2, X1] − H[X1].     (71)
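As a quick numerical sanity check of (70) and (71) (my addition, using the five-message source above): for two i.i.d. copies of that source, the joint entropy is exactly twice the single-variable entropy, so the conditional entropy H[X2|X1] equals H[X1].

    % Joint and conditional entropy check for two i.i.d. copies of the
    % five-message source (p = [1/2 1/4 1/8 1/16 1/16], H = 15/8 bits).
    entropyBits = @(p) -sum(p(p > 0) .* log2(p(p > 0)));
    p  = [1/2 1/4 1/8 1/16 1/16];
    P2 = p(:) * p;                 % 5x5 joint pmf of (X1, X2) under independence
    H1  = entropyBits(p)           % 1.875 bits, as in (69)
    H12 = entropyBits(P2(:))       % 3.75 bits = 2*H1, consistent with (70)
    Hc  = H12 - H1                 % conditional entropy via (71): 1.875 bits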

What is an equation for H[X2|X1] as a function of the joint probabilities pX1,X2(x1, x2) and the conditional probabilities pX2|X1(x2|x1)?

Solution: Plugging in (68) for H[X2, X1] and H[X1], with all sums over x1 ∈ SX1 and x2 ∈ SX2,

    H[X2|X1] = − Σ_{x1} Σ_{x2} pX1,X2(x1, x2) log2 pX1,X2(x1, x2) + Σ_{x1} pX1(x1) log2 pX1(x1)
             = − Σ_{x1} Σ_{x2} pX1,X2(x1, x2) log2 pX1,X2(x1, x2) + Σ_{x1} Σ_{x2} pX1,X2(x1, x2) log2 pX1(x1)
             = − Σ_{x1} Σ_{x2} pX1,X2(x1, x2) [ log2 pX1,X2(x1, x2) − log2 pX1(x1) ]
             = − Σ_{x1} Σ_{x2} pX1,X2(x1, x2) log2 [ pX1,X2(x1, x2) / pX1(x1) ]
             = − Σ_{x1} Σ_{x2} pX1,X2(x1, x2) log2 pX2|X1(x2|x1)     (72)

Note the asymmetry: there is the joint probability multiplied by the log of the conditional probability. This is not like either the joint or the marginal entropy.

We could also have multi-variate conditional entropy,

    H[XN | XN−1, ..., X1] = − Σ_{xN ∈ SXN} ··· Σ_{x1 ∈ SX1} pX1,...,XN(x1, ..., xN) log2 pXN|XN−1,...,X1(xN | xN−1, ..., x1)

which is the additional entropy (or information) contained in the Nth random variable, given the values of the N − 1 previous random variables.

35.4 Entropy Rate

Typically, we're interested in discrete-time random processes, in which we have random variables X1, X2, .... Since there are infinitely many of them, the joint entropy of all of them may go to infinity as N → ∞. For this case, we are more interested in the rate. How many additional bits, in the limit, are needed for the average r.v. as N → ∞?

Def'n: Entropy Rate
The entropy rate of a stationary discrete-time random process, in units of bits per random variable (a.k.a. source output), is defined as

    H = lim_{N→∞} H[XN | XN−1, ..., X1]

It can be shown that the entropy rate can equivalently be written as

    H = lim_{N→∞} (1/N) H[X1, X2, ..., XN]

Example: Entropy of English text
Let Xi be the ith letter or space in a common English sentence. What is the sample space SXi? Is Xi uniform on that space?

What is H[Xi]?
Solution: I had Matlab read in the text of Shakespeare's Romeo and Juliet. Using Matlab, I calculated an entropy of H = 4.1199 bits per letter. (The Proakis & Salehi book mentions that this value for general English text is about 4.3.) See Figure 67(a).

Figure 67: PMF of (a) single letters and (b) two-letter combinations (including spaces) in Shakespeare's Romeo and Juliet.

What is H[Xi, Xi+1]?
Solution: Again using Matlab on Shakespeare's Romeo and Juliet, I calculated the entropy of the joint pmf of each two-letter combination. This gives me the two-dimensional pmf shown in Figure 67(b). For this pmf, I calculate an entropy of 7.46, which is 2 · 3.73. For the three-letter combinations, the joint entropy was 10.04 = 3 · 3.35. For four-letter combinations, the joint entropy was 11.98 = 4 · 2.99.

What is the entropy rate, H?
Solution: For N = 10, we have H = 1.3 bits/letter [taken from Proakis & Salehi Section 6.2].

You can see that the average entropy in bits per letter is decreasing quickly.
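For reference, here is a rough Matlab sketch of the kind of script behind the Romeo and Juliet numbers above. This is my reconstruction, not the script from the notes, and the file name is hypothetical.

    % Estimate single-letter and letter-pair entropies from a text file.
    txt = lower(fileread('romeo.txt'));          % hypothetical file name
    txt = regexprep(txt, '[^a-z ]', '');         % keep letters and spaces only
    txt = regexprep(txt, ' +', ' ');             % collapse repeated spaces
    sym = double(txt);                           % numeric symbol stream
    [u, ~, idx] = unique(sym);  idx = idx(:);
    p1 = accumarray(idx, 1) / numel(idx);        % single-letter pmf
    H1 = -sum(p1 .* log2(p1));
    pairIdx = idx(1:end-1) + numel(u)*(idx(2:end) - 1);   % linear index of (X_i, X_{i+1})
    p2 = accumarray(pairIdx, 1, [numel(u)^2, 1]) / numel(pairIdx);
    p2 = p2(p2 > 0);                             % drop unseen pairs (0*log 0 = 0)
    H2 = -sum(p2 .* log2(p2));
    fprintf('H[Xi] = %.2f bits, H[Xi,Xi+1] = %.2f bits (%.2f per letter)\n', H1, H2, H2/2);

The exact values depend on how punctuation, capitalization, and whitespace are handled, which is one reason published English-text entropy figures vary somewhat.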

Then. That is. without loss. he published in the Bell System Technical Journal a paper on “Transmission of Information”. . X2 . In particular. Hartley Ralph V. we defined entropy. N →∞ N We showed that entropy can be used to quantify information. L. which says that a source with entropy rate H can be encoded with arbitrarily small error probability. we turn to the noisy channel. and entropy rate. if they . 1 ≤ 2B. . When transmitting a sequence of pulses. each pulse could represent more or less information. each of duration Tsy . Here he developed the “Hartley oscillator”. he developed relationships useful for determining the capacity of bandlimited communication channels. This discussion of entropy allows us to consider the maximum data rate which can be carried without error on a bandlimited channel. involved in radio telephony. Given our information source X or {Xi }. at any rate R (bits / source output) as long as R > H. 1 H[X1 . for a finite source.1 R. Hartley was particularly influenced by Nyquist’s result. which is affected by additive White Gaussian noise (AWGN). The major result was the Shannon’s source coding theorem.ECE 5520 Fall 2009 133 • The theorem does not tell us how well it can be done if N is not infinite. Hartley assumed that the maximum amplitude available to the transmitter was A. 1888) received the A. as described by Nyquist. degree from the University of Utah in 1909. Lecture 24 Today: (1) Channel Capacity 36 Review H[X] = − pi log2 pi i Last time. . XN ]. he considered digital transmission in pulse-amplitude modulated systems. H = lim 37 Channel Coding Now. V. Hartley (born Nov.B. But. at Bell Laboratories. . L. Afterwards. the value of H[X] or H gives us a measure of how many bits we actually need to use to encode. He worked as a researcher for the Western Electric Company. depending on how pulse amplitudes were chosen. The pulse rate was limited to 2B. Tsy In Hartley’s 1928 paper. In July 1928. Any lower rate than H would guarantee loss of information. the rate may need to be higher. 30. Nyquist determined that the pulse rate was limited to two times the available channel bandwidth B. Hartley made the assumption that the communication system could discern between pulse amplitudes. the source data. 37.

Given that a PAM system operates from 0 to A in increments of Aδ, the number of different pulse amplitudes (symbols) is

    M = 1 + A/Aδ

Note that early receivers were modified AM envelope detectors, and did not deal well with negative amplitudes.

Next, Hartley used the 'bit' measure to quantify the data which could be encoded using M amplitude levels,

    log2 M = log2 (1 + A/Aδ)

Finally, Hartley quantified the data rate using Nyquist's relationship to determine the maximum rate C, in bits per second, possible from the digital communication system,

    C = 2B log2 (1 + A/Aδ)

(Note that the Hartley transform came later in his career, in 1942.)

37.2 C. E. Shannon

What was left unanswered by Hartley's capacity formula was the relationship between noise and the minimum amplitude separation between symbols. Engineers would have to be conservative when setting Aδ to ensure a low probability of error. Further, the capacity formula was for a particular type of PAM system, and did not say anything fundamental about the relationship between capacity and bandwidth for arbitrary modulation.

37.2.1 Noisy Channel

Shannon did take into account an AWGN channel, and used statistics to develop a universal bound for capacity, regardless of modulation type. In this AWGN channel, the ith symbol sample at the receiver (after the matched filter, assuming perfect synchronization) is

    yi = xi + zi

where X is the transmitted signal and Z is the noise in the channel. The noise term zi is assumed to be i.i.d. Gaussian with variance PN.

37.2.2 Introduction of Latency

Shannon's key insight was to exchange latency (time delay) for reduced probability of error. In fact, his capacity bound considers simultaneously demodulating sequences of received symbols, y = [y1, ..., yn], of length n. All n symbols are received before making a decision. This late decision will decide all values of x = [x1, ..., xn] simultaneously. Shannon's proof considers the limiting case as n → ∞. This asymptotic limit as n → ∞ allows for a proof using the statistical convergence of a sequence of random variables.

In particular, we need a law called the law of large numbers (ECE 6962 topic). This law says that the following event happens with probability one, as n → ∞:

    (1/n) Σ_{i=1}^{n} (yi − xi)^2 ≤ PN

In other words, as n → ∞, the measured value y will be located within an n-dimensional sphere (hypersphere) of radius √(n PN) with center x.
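A quick numerical illustration of this law-of-large-numbers statement (my sketch, not from the notes): the sample average of the squared noise concentrates around PN as n grows.

    % Concentration of the average squared noise around PN for increasing n.
    PN = 2;
    for n = [10 1e3 1e5]
        z = sqrt(PN) * randn(n, 1);    % i.i.d. Gaussian noise samples, variance PN
        fprintf('n = %6d: (1/n)*sum(z.^2) = %.3f  (PN = %.1f)\n', n, mean(z.^2), PN);
    end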

37.2.3 Introduction of Power Limitation

Shannon also formulated the problem as a power-limited case, in which the average power in the desired signal xi was limited to P. That is,

    (1/n) Σ_{i=1}^{n} xi^2 ≤ P

This combination of signal power limitation and noise power results in the fact that, with probability one as n → ∞,

    (1/n) ||y||^2 = (1/n) Σ_{i=1}^{n} yi^2 ≤ (1/n) Σ_{i=1}^{n} xi^2 + (1/n) Σ_{i=1}^{n} (yi − xi)^2 ≤ P + PN

so ||y||^2 ≤ n(P + PN). This result says that the vector y, with probability one as n → ∞, is contained within a hypersphere of radius √(n(P + PN)) centered at the origin.

37.3 Combining Two Results

The two results, together, show how many different symbols we could have uniquely distinguished within a period of n sample times. Hartley asked how many symbol amplitudes could be fit into [0, A] such that they are all separated by Aδ. Shannon's formulation asks us how many multidimensional amplitudes xi can be fit into a hypersphere of radius √(n(P + PN)) centered at the origin, such that hyperspheres of radius √(n PN) do not overlap. This is shown in P&S in Figure 9.9 (pg. 584). The result of this geometrical problem is that

    M = (1 + P/PN)^(n/2)     (73)

37.3.1 Returning to Hartley

Adjusting Hartley's formula, if we could send M messages now in n pulses (rather than 1 pulse) we would adjust capacity to be:

    C = (2B/n) log2 M

Using the M from (73) above,

    C = (2B/n) (n/2) log2 (1 + P/PN) = B log2 (1 + P/PN)

37.3.2 Final Results

Finally, we can replace the noise variance PN using the relationship between it and the noise two-sided PSD N0/2, namely PN = N0 B.

So finally we have the Shannon-Hartley Theorem,

    C = B log2 (1 + P/(N0 B))     (74)

Typically, people also use W = B, so you'll also see

    C = W log2 (1 + P/(N0 W))     (75)

This result says that a communication system can operate at bit rate C (in a bandlimited channel with width W, given power limit P and noise value N0), with arbitrarily low probability of error. Shannon also proved that any system which operates at a bit rate higher than the capacity C will certainly incur a positive bit error rate. Any practical communication system must operate at Rb < C, where Rb is the operating bit rate.

Note that the ratio P/(N0 W) is the signal power divided by the noise power, or signal-to-noise ratio (SNR). Thus the capacity bound is also written C = W log2(1 + SNR).

37.4 Efficiency Bound

Recall that Eb = P Tb, that is, the energy per bit is the power multiplied by the bit duration. Thus from (74),

    C = W log2 (1 + (Eb/Tb) / (N0 W))

or, since Rb = 1/Tb,

    C = W log2 (1 + (Rb/W)(Eb/N0))

Here, C is just a capacity limit. We know that our bit rate Rb ≤ C, so

    Rb/W ≤ log2 (1 + (Rb/W)(Eb/N0))

Defining η = Rb/W,

    η ≤ log2 (1 + η Eb/N0)

This expression can't analytically be solved for η. Instead, you can look at it as a bound on the bandwidth efficiency as a function of the Eb/N0 ratio. This relationship is shown in Figure 68.

Lecture 25

Today: (1) Channel Capacity

• Lecture Tue April 28 is canceled. Instead, please attend the Robert Graves lecture, "Digital Terrestrial Television Broadcasting", 3:05 PM in Room 105 WEB.
• The final project is due Tue, May 5, 5pm. If you think you might be late, set your deadline to May 4. No late projects are accepted, because of the late due date.
• Everyone gets a 100% on the "Discussion Item", which I never implemented.

Figure 68: From the Shannon-Hartley theorem, bound on bandwidth efficiency, η, on a (a) linear plot and (b) log plot.
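The bound plotted in Figure 68 can be regenerated numerically; the sketch below (mine, not from the notes) solves η = log2(1 + η Eb/N0) for the boundary value of η at each Eb/N0 using fzero.

    % Reproduce the Figure 68 bound: largest eta with eta <= log2(1 + eta*Eb/N0).
    EbN0dB = 0:0.5:30;
    eta = zeros(size(EbN0dB));
    for k = 1:numel(EbN0dB)
        g = 10^(EbN0dB(k)/10);                   % Eb/N0 as a linear ratio
        f = @(x) x - log2(1 + g*x);              % zero at the boundary value of eta
        eta(k) = fzero(f, [1e-6, 60]);           % bracket away from the trivial root at 0
    end
    semilogy(EbN0dB, eta);
    xlabel('E_b/N_0 (dB)'); ylabel('Bits/second per Hertz');

At Eb/N0 = 0 dB the boundary is η = 1 bit/s/Hz, which matches the figure.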

38 Review

Last time, we defined entropy,

    H[X] = − Σi pi log2 pi

and entropy rate,

    H = lim_{N→∞} (1/N) H[X1, X2, ..., XN]

We showed that entropy can be used to quantify information. Given our information source X or {Xi}, the value of H[X] or H gives us a measure of how many bits we actually need to use to encode, without loss, the source data. Any lower rate than H would guarantee loss of information. The major result was Shannon's source coding theorem, which says that a source with entropy rate H can be encoded with arbitrarily small error probability, at any rate R (bits / source output) as long as R > H.

39 Channel Coding

Now, we turn to the noisy channel, which is affected by additive white Gaussian noise (AWGN). This discussion of entropy allows us to consider the maximum data rate which can be carried without error on a bandlimited channel.

39.1 R. V. L. Hartley

Ralph V. L. Hartley (born Nov. 30, 1888) received the A.B. degree from the University of Utah in 1909. He worked as a researcher for the Western Electric Company, involved in radio telephony. Afterwards, at Bell Laboratories, he developed relationships useful for determining the capacity of bandlimited communication channels. In July 1928, he published in the Bell System Technical Journal a paper on "Transmission of Information".

Hartley was particularly influenced by Nyquist's sampling theorem. When transmitting a sequence of pulses, each of duration Ts, Nyquist determined that the pulse rate was limited to two times the available channel bandwidth B,

    1/Ts ≤ 2B

In Hartley's 1928 paper, he considered digital transmission in pulse-amplitude modulated systems. The pulse rate was limited to 2B, as described by Nyquist. But, depending on how pulse amplitudes were chosen, each pulse could represent more or less information. Hartley assumed that the maximum amplitude available to the transmitter was A. Then, Hartley made the assumption that the communication system could discern between pulse amplitudes if they were separated by at least a voltage spacing of Aδ. Given that a PAM system operates from 0 to A in increments of Aδ, the number of different pulse amplitudes (symbols) is

    M = 1 + A/Aδ

Note that early receivers were modified AM envelope detectors, and did not deal well with negative amplitudes. Next, Hartley used the 'bit' measure to quantify the data which could be encoded using M amplitude levels,

    log2 M = log2 (1 + A/Aδ)

Finally, Hartley quantified the data rate using Nyquist's relationship to determine the maximum rate C, in bits per second, possible from the digital communication system,

    C = 2B log2 (1 + A/Aδ)

39.2 C. E. Shannon

What was left unanswered by Hartley's capacity formula was the relationship between noise and the minimum amplitude separation between symbols. Engineers would have to be conservative when setting Aδ to ensure a low probability of error. Further, the capacity formula was for a particular type of PAM system, and did not say anything fundamental about the relationship between capacity and bandwidth for arbitrary modulation.

39.2.1 Noisy Channel

Shannon did take into account an AWGN channel, and used statistics to develop a universal bound for capacity, regardless of modulation type. In this AWGN channel, the ith symbol sample at the receiver (after the matched filter, assuming perfect synchronization) is

    yi = xi + zi

where X is the transmitted signal and Z is the noise in the channel. The noise term zi is assumed to be i.i.d. Gaussian with variance EN = N0/2.

39.2.2 Introduction of Latency

Shannon's key insight was to exchange latency (time delay) for reduced probability of error. In particular, his capacity bound considers n-dimensional signaling. These might be truly an n-dimensional signal (e.g., FSK), or they might use multiple symbols over time (recall that symbols at different multiples of Ts are orthogonal). In either case, Shannon uses all n dimensions in the constellation: the detector must use all n samples of y to make a decision. So the received vector is y = [y1, ..., yn], of length n. All n symbols are received before making a decision; this late decision will decide all values of x = [x1, ..., xn] simultaneously. Shannon's proof considers the limiting case as n → ∞. This asymptotic limit as n → ∞ allows for a proof using the statistical convergence of a sequence of random variables.

In particular, we need a law called the law of large numbers. This law says that the following event happens with probability one, as n → ∞:

    (1/n) Σ_{i=1}^{n} (yi − xi)^2 ≤ EN

In other words, as n → ∞, the measured value y will be located within an n-dimensional sphere (hypersphere) of radius √(n EN) with center x.

39.2.3 Introduction of Power Limitation

Shannon also formulated the problem as an energy-limited case, in which the maximum symbol energy in the desired signal xi was limited to E. That is,

    (1/n) Σ_{i=1}^{n} xi^2 ≤ E

This combination of signal energy limitation and noise energy results in the fact that, with probability one as n → ∞,

    (1/n) ||y||^2 = (1/n) Σ_{i=1}^{n} yi^2 ≤ (1/n) Σ_{i=1}^{n} xi^2 + (1/n) Σ_{i=1}^{n} (yi − xi)^2 ≤ E + EN

so ||y||^2 ≤ n(E + EN). This result says that the vector y, with probability one as n → ∞, is contained within a hypersphere of radius √(n(E + EN)) centered at the origin.

Figure 69: Shannon's capacity formulation simplifies to the geometrical question of: how many hyperspheres of a smaller radius √(n EN) fit into a hypersphere of radius √(n(E + EN))?

39.3 Combining Two Results

The two results, together, show how many different symbols we could have uniquely distinguished within a period of n sample times. Hartley asked how many symbol amplitudes could be fit into [0, A] such that they are all separated by Aδ. Shannon's formulation asks us how many multidimensional amplitudes xi can be fit into a hypersphere of radius √(n(E + EN)) centered at the origin, such that hyperspheres of radius √(n EN) do not overlap. This is shown in Figure 69. This number M is the number of different messages that could have been sent in n pulses. The result of this geometrical problem is that

    M = (1 + E/EN)^(n/2)     (76)

39.3.1 Returning to Hartley

Adjusting Hartley's formula, if we could send M messages now in n pulses (rather than 1 pulse) we would adjust capacity to be:

    C = (2B/n) log2 M

Using the M from (76) above,

    C = (2B/n) (n/2) log2 (1 + E/EN) = B log2 (1 + E/EN)

39.3.2 Final Results

Since energy is power multiplied by time, E = P Ts = P/(2B), where P is the maximum signal power and B is the bandwidth, and EN = N0/2, so that E/EN = P/(N0 B). Thus we have the Shannon-Hartley Theorem,

    C = B log2 (1 + P/(N0 B))     (77)

This result says that a communication system can operate at bit rate C (in a bandlimited channel with width B, given power limit P and noise value N0), with arbitrarily low probability of error. Shannon also proved that any system which operates at a bit rate higher than the capacity C will certainly incur a positive bit error rate. Any practical communication system must operate at Rb < C, where Rb is the operating bit rate.

Note that the ratio P/(N0 B) is the signal power divided by the noise power, or signal-to-noise ratio (SNR). Thus the capacity bound is also written C = B log2(1 + SNR).

39.4 Efficiency Bound

Another way to write the maximum signal power P is to multiply it by the bit period and use it as the maximum energy per bit, i.e., Eb = P Tb: the energy per bit is the maximum power multiplied by the bit duration. Thus from (77),

    C = B log2 (1 + (Eb/Tb) / (N0 B))

or, since Rb = 1/Tb,

    C = B log2 (1 + (Rb/B)(Eb/N0))

Here, C is just a capacity limit. We know that our bit rate Rb ≤ C, so

    Rb/B ≤ log2 (1 + (Rb/B)(Eb/N0))

Defining η = Rb/B (the spectral efficiency),

    η ≤ log2 (1 + η Eb/N0)

This expression can't analytically be solved for η. However, you can look at it as a bound on the bandwidth efficiency as a function of the Eb/N0 ratio. This relationship is shown in Figure 70. Figure 71 is the plot on a log-y axis with some of the modulation types discussed this semester.
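Although the expression can't be solved in closed form for η, it can be inverted for Eb/N0: for a target spectral efficiency η, the bound requires Eb/N0 ≥ (2^η − 1)/η. The short check below is my addition; it evaluates this requirement, including the η → 0 limit of ln 2, about −1.59 dB, often called the Shannon limit.

    % Minimum Eb/N0 required by the bound, for several spectral efficiencies.
    eta = [0.001 0.5 1 2 4 8];
    EbN0min = (2.^eta - 1) ./ eta;      % linear ratio: eta <= log2(1 + eta*Eb/N0)
    EbN0min_dB = 10*log10(EbN0min)      % e.g. eta = 1 requires 0 dB, eta = 2 requires 1.76 dB
    10*log10(log(2))                    % the eta -> 0 limit: about -1.59 dB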

Figure 70: From the Shannon-Hartley theorem, bound on bandwidth efficiency, η.

Figure 71: From the Shannon-Hartley theorem bound, with achieved bandwidth efficiencies of M-QAM, M-PSK, and M-FSK.
