Algorithms for Communications Systems and their Applications
Nevio Benvenuto University of Padova, Italy
Giovanni Cherubini IBM Zurich Research Laboratory, Switzerland
Copyright © 2002 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England. Telephone (+44) 1243 779777
Email (for orders and customer service enquiries): csbooks@wiley.co.uk
Visit our Home Page on www.wileyeurope.com or www.wiley.com
Reprinted with corrections March 2003
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to permreq@wiley.co.uk, or faxed to (+44) 1243 770571.
Neither the author(s) nor John Wiley & Sons, Ltd accept any responsibility or liability for loss or damage occasioned to any person or property through using the material, instructions, methods or ideas contained herein, or acting or refraining from acting as a result of such use. The author(s) and Publisher expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose.
Designations used by companies to distinguish their products are often claimed as trademarks. In all instances where John Wiley & Sons is aware of a claim, the product names appear in initial capital or capital letters. Readers, however, should contact the appropriate companies for more complete information regarding trademarks and registration.
Other Wiley Editorial Offices
John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 33 Park Road, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
ISBN 0-470-84389-6
Produced from LaTeX files supplied by the authors, processed by Laserwords Private Limited, Chennai, India. Printed and bound in Great Britain by Biddles Ltd, Guildford and King's Lynn. This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two trees are planted for each one used for paper production.
To Adriana, and to Antonio, Claudia, and Mariuccia
Contents
Preface
Acknowledgements

1 Elements of signal theory
  1.1 Signal space
    Properties of a linear space
    Inner product
  1.2 Discrete signal representation
    The principle of orthogonality
    Signal representation
    Gram–Schmidt orthonormalization procedure
  1.3 Continuous-time linear systems
  1.4 Discrete-time linear systems
    Discrete Fourier transform (DFT)
    The DFT operator
    Circular and linear convolution via DFT
    Convolution by the overlap-save method
    IIR and FIR filters
  1.5 Signal bandwidth
    The sampling theorem
    Heaviside conditions for the absence of signal distortion
  1.6 Passband signals
    Complex representation
    Relation between x and x^(bb)
    Baseband equivalent of a transformation
    Envelope and instantaneous phase and frequency
  1.7 Second-order analysis of random processes
    1.7.1 Correlation
      Properties of the autocorrelation function
    1.7.2 Power spectral density
      Spectral lines in the PSD
      Cross-power spectral density
      Properties of the PSD
      PSD of processes through linear transformations
      PSD of processes through filtering
    1.7.3 PSD of discrete-time random processes
      Spectral lines in the PSD
      PSD of processes through filtering
      Minimum-phase spectral factorization
    1.7.4 PSD of passband processes
      PSD of the quadrature components of a random process
      Cyclostationary processes
  1.8 The autocorrelation matrix
    Definition
    Properties
    Eigenvalues
    Other properties
    Eigenvalue analysis for Hermitian matrices
  1.9 Examples of random processes
  1.10 Matched filter
    Matched filter in the presence of white noise
  1.11 Ergodic random processes
    1.11.1 Mean value estimators
      Rectangular window
      Exponential filter
      General window
    1.11.2 Correlation estimators
      Unbiased estimate
      Biased estimate
    1.11.3 Power spectral density estimators
      Periodogram or instantaneous spectrum
      Welch periodogram
      Blackman and Tukey correlogram
      Windowing and window closing
  1.12 Parametric models of random processes
    ARMA(p,q) model
    MA(q) model
    AR(N) model
    Spectral factorization of an AR(N) model
    Whitening filter
    Relation between ARMA, MA and AR models
    1.12.1 Autocorrelation of AR processes
    1.12.2 Spectral estimation of an AR(N) process
      Some useful relations
      AR model of sinusoidal processes
  1.13 Guide to the bibliography
  Bibliography
  Appendices
  1.A Multirate systems
    1.A.1 Fundamentals
    1.A.2 Decimation
    1.A.3 Interpolation
    1.A.4 Decimator filter
    1.A.5 Interpolator filter
    1.A.6 Rate conversion
    1.A.7 Time interpolation
      Linear interpolation
      Quadratic interpolation
    1.A.8 The noble identities
    1.A.9 The polyphase representation
      Efficient implementations
  1.B Generation of Gaussian noise

2 The Wiener filter and linear prediction
  2.1 The Wiener filter
    Matrix formulation
    Determination of the optimum filter coefficients
    The principle of orthogonality
    Expression of the minimum mean-square error
    Characterization of the cost function surface
    The Wiener filter in the z-domain
  2.2 Linear prediction
    Forward linear predictor
    Optimum predictor coefficients
    Forward "prediction error filter"
    Relation between linear prediction and AR models
    First and second order solutions
    2.2.1 The Levinson–Durbin algorithm
      Lattice filters
    2.2.2 The Delsarte–Genin algorithm
  2.3 The least squares (LS) method
    Data windowing
    Matrix formulation
    Correlation matrix
    Determination of the optimum filter coefficients
    2.3.1 The principle of orthogonality
      Expressions of the minimum cost function
      The normal equation using the T matrix
      Geometric interpretation: the projection operator
    2.3.2 Solutions to the LS problem
      Singular value decomposition of T
      Minimum norm solution
  Bibliography
  Appendices
  2.A The estimation problem
    The estimation problem for random variables
    MMSE estimation
    Extension to multiple observations
    MMSE linear estimation
    MMSE linear estimation for random vectors

3 Adaptive transversal filters
  3.1 Adaptive transversal filter: MSE criterion
    3.1.1 Steepest descent or gradient algorithm
      Stability of the steepest descent algorithm
      Conditions for convergence
      Choice of the adaptation gain for fastest convergence
      Transient behavior of the MSE
    3.1.2 The least mean-square (LMS) algorithm
      Implementation
      Computational complexity
      Canonical model
      Conditions for convergence
    3.1.3 Convergence analysis of the LMS algorithm
      Convergence of the mean
      Convergence in the mean-square sense (real scalar case)
      Convergence in the mean-square sense (general case)
      Basic results
      Observations
      Final remarks
    3.1.4 Other versions of the LMS algorithm
      Leaky LMS
      Sign algorithm
      Sigmoidal algorithm
      Normalized LMS
      Variable adaptation gain
      LMS for lattice filters
    3.1.5 Example of application: the predictor
  3.2 The recursive least squares (RLS) algorithm
    Normal equation
    Derivation of the RLS algorithm
    Initialization of the RLS algorithm
    Recursive form of Emin
    Convergence of the RLS algorithm
    Computational complexity of the RLS algorithm
    Example of application: the predictor
  3.3 Fast recursive algorithms
    3.3.1 Comparison of the various algorithms
  3.4 Block adaptive algorithms in the frequency domain
    3.4.1 Block LMS algorithm in the frequency domain: the basic scheme
      Computational complexity of the block LMS algorithm via FFT
    3.4.2 Block LMS algorithm in the frequency domain: the FLMS algorithm
      Computational complexity of the FLMS algorithm
      Convergence in the mean of the coefficients for the FLMS algorithm
  3.5 LMS algorithm in a transformed domain
    3.5.1 Basic scheme
      On the speed of convergence
    3.5.2 Normalized FLMS algorithm
    3.5.3 LMS algorithm in the frequency domain
    3.5.4 LMS algorithm in the DCT domain
    3.5.5 General observations
  3.6 Examples of application
    3.6.1 System identification
      Linear case
      Finite alphabet case
    3.6.2 Adaptive cancellation of interfering signals
      General solution
    3.6.3 Cancellation of a sinusoidal interferer with known frequency
    3.6.4 Disturbance cancellation for speech signals
    3.6.5 Echo cancellation in subscriber loops
    3.6.6 Adaptive antenna arrays
    3.6.7 Cancellation of a periodic interfering signal
  Bibliography
  Appendices
  3.A PN sequences
    Maximal-length sequences
    CAZAC sequences
    Gold sequences
  3.B Identification of a FIR system by PN sequences
    3.B.1 Correlation method
      Signal-to-estimation error ratio
    3.B.2 Methods in the frequency domain
      System identification in the absence of noise
      System identification in the presence of noise
    3.B.3 The LS method
      Formulation using the data matrix
      Computation of the signal-to-estimation error ratio
    3.B.4 The LMMSE method
    3.B.5 Identification of a continuous-time system

4 Transmission media
  4.1 Electrical characterization of a transmission system
    Simplified scheme of a transmission system
    Characterization of an active device
    Conditions for the absence of signal distortion
    Characterization of a 2-port network
    Measurement of signal power
  4.2 Noise generated by electrical devices and networks
    Thermal noise
    Shot noise
    Noise in diodes and transistors
    Noise temperature of a two-terminal device
    Noise temperature of a 2-port network
    Equivalent-noise models
    Noise figure of a 2-port network
    Cascade of 2-port networks
  4.3 Signal-to-noise ratio (SNR)
    SNR for a two-terminal device
    SNR for a 2-port network
    Relation between noise figure and SNR
  4.4 Transmission lines
    4.4.1 Fundamentals of transmission line theory
      Ideal transmission line
      Non-ideal transmission line
      Frequency response
      Conditions for the absence of signal distortion
      Impulse response of a non-ideal transmission line
      Secondary constants of some transmission lines
    4.4.2 Crosstalk
      Near-end crosstalk
      Far-end crosstalk
  4.5 Optical fibers
    Description of a fiber-optic transmission system
  4.6 Radio links
    4.6.1 Frequency ranges for radio transmission
      Radiation masks
    4.6.2 Narrowband radio channel model
      Equivalent circuit at the receiver
      Multipath
    4.6.3 Doppler shift
    4.6.4 Propagation of wideband signals
      Channel parameters in the presence of multipath
      Statistical description of fading channels
    4.6.5 Continuous-time channel model
      Power delay profile
      Doppler spectrum
      Doppler spectrum models
      Shadowing
      Final remarks
    4.6.6 Discrete-time model for fading channels
      Generation of a process with a preassigned spectrum
  4.7 Telephone channel
    4.7.1 Characteristics
      Linear distortion
      Noise sources
      Nonlinear distortion
      Frequency offset
      Phase jitter
      Echo
  4.8 Transmission channel: general model
    Power amplifier (HPA)
    Transmission medium
    Additive noise
    Phase noise
  Bibliography

5 Digital representation of waveforms
  5.1 Analog and digital access
    5.1.1 Digital representation of speech
      Some waveforms
      Speech coding
      The interpolator filter as a holder
      Sizing of the binary channel parameters
    5.1.2 Coding techniques and applications
  5.2 Instantaneous quantization
    5.2.1 Parameters of a quantizer
    5.2.2 Uniform quantizers
      Quantization error
      Relation between Δ, b and the saturation value
      Statistical description of the quantization noise
      Statistical power of the quantization error
      Design of a uniform quantizer
      Signal-to-quantization error ratio
      Implementations of uniform PCM encoders
  5.3 Nonuniform quantizers
    Three examples of implementation
    5.3.1 Companding techniques
      Signal-to-quantization error ratio
      Digital compression
      Signal-to-quantization noise ratio mask
    5.3.2 Optimum quantizer in the MSE sense
      Max algorithm
      Lloyd algorithm
      Expression of Λq for a very fine quantization
      Performance of nonuniform quantizers
  5.4 Adaptive quantization
    General scheme
    5.4.1 Feedforward adaptive quantizer
      Performance
    5.4.2 Feedback adaptive quantizers
      Estimate of σs(k)
  5.5 Differential coding (DPCM)
    5.5.1 Configuration with feedback quantizer
    5.5.2 Alternative configuration
    5.5.3 Expression of the optimum coefficients
      Effects due to the presence of the quantizer
    5.5.4 Adaptive predictors
      Adaptive feedforward predictors
      Sequential adaptive feedback predictors
      Performance
    5.5.5 Alternative structures for the predictor
      All-pole predictor
      All-zero predictor
      Pole-zero predictor
      Pitch predictor
      APC
  5.6 Delta modulation
    5.6.1 Oversampling and quantization error
    5.6.2 Linear delta modulation (LDM)
      LDM implementation
      Choice of system parameters
    5.6.3 Adaptive delta modulation (ADM)
      Continuously variable slope delta modulation (CVSDM)
      ADM with second-order predictors
    5.6.4 PCM encoder via LDM
    5.6.5 Sigma delta modulation (ΣΔM)
  5.7 Coding by modeling
    Vocoder or LPC
    RPE coding
    CELP coding
    Multipulse coding
  5.8 Vector quantization (VQ)
    5.8.1 Characterization of VQ
      Parameters determining VQ performance
      Comparison between VQ and scalar quantization
    5.8.2 Optimum quantization
      Generalized Lloyd algorithm
    5.8.3 LBG algorithm
      Choice of the initial codebook
      Description of the LBG algorithm with splitting procedure
      Selection of the training sequence
    5.8.4 Variants of VQ
      Tree search VQ
      Multistage VQ
      Product code VQ
  5.9 Other coding techniques
    Adaptive transform coding (ATC)
    Subband coding (SBC)
  5.10 Source coding
  5.11 Speech and audio standards
  Bibliography

6 Modulation theory
  6.1 Theory of optimum detection
    Statistics of the random variables {wi}
    Sufficient statistics
    Decision criterion
    Theorem of irrelevance
    Implementations of the maximum likelihood criterion
    Error probability
    6.1.1 Examples of binary signalling
      Antipodal signals (ρ = −1)
      Orthogonal signals (ρ = 0)
      Binary FSK
    6.1.2 Limits on the probability of error
      Upper limit
      Lower limit
  6.2 Simplified model of a transmission system and definition of binary channel
    Parameters of a transmission system
    Relations among parameters
  6.3 Pulse amplitude modulation (PAM)
  6.4 Phase-shift keying (PSK)
    Binary PSK (BPSK)
    Quadrature PSK (QPSK)
  6.5 Differential PSK (DPSK)
    6.5.1 Error probability for an M-DPSK system
    6.5.2 Differential encoding and coherent demodulation
      Binary case (M = 2, differentially encoded BPSK)
      Multilevel case
  6.6 AM-PM or quadrature amplitude modulation (QAM)
    Comparison between PSK and QAM
  6.7 Modulation methods using orthogonal and biorthogonal signals
    6.7.1 Modulation with orthogonal signals
      Probability of error
      Limit of the probability of error for M increasing to infinity
    6.7.2 Modulation with biorthogonal signals
      Probability of error
  6.8 Binary sequences and coding
    Optimum receiver
  6.9 Comparison between coherent modulation methods
    Trade-offs for QAM systems
    Comparison of modulation methods
  6.10 Limits imposed by information theory
    Capacity of a system using amplitude modulation
    Coding strategies depending on the signal-to-noise ratio
    Coding gain
    Cutoff rate
  6.11 Optimum receivers for signals with random phase
    ML criterion
    Implementation of a noncoherent ML receiver
    Error probability for a noncoherent binary FSK system
    Performance comparison of binary systems
  6.12 Binary modulation systems in the presence of flat fading
    Diversity
  6.13 Transmission methods
    6.13.1 Transmission methods between two users
      Three methods
    6.13.2 Channel sharing: deterministic access methods
  Bibliography
  Appendices
  6.A Gaussian distribution function and Marcum function
    6.A.1 The Q function
    6.A.2 The Marcum function
  6.B Gray coding
  6.C Baseband PPM and PDM
    Signal-to-noise ratio
  6.D Walsh codes

7 Transmission over dispersive channels
  7.1 Baseband digital transmission (PAM systems)
    Transmitter
    Transmission channel
    Receiver
    Power spectral density of a PAM signal
Contents
xvii
7.2   Passband digital transmission (QAM systems)   544
      Transmitter   544
      Power spectral density of a QAM signal   546
      Three equivalent representations of the modulator   547
      Coherent receiver   548
7.3   Baseband equivalent model of a QAM system   549
7.3.1  Signal analysis   550
      Signal-to-noise ratio   552
7.3.2  Characterization of system elements   553
      Transmitter   553
      Transmission channel   553
      Receiver   555
7.3.3  Intersymbol interference   556
      Discrete-time equivalent system   556
      Nyquist pulses   559
      Eye diagram   562
7.3.4  Performance analysis   565
      Symbol error probability in the absence of ISI   565
      Matched filter receiver   567
7.4   Carrierless AM/PM (CAP) modulation   568
7.5   Regenerative PCM repeaters   571
7.5.1  PCM signals over a binary channel   571
      Linear PCM coding of waveforms   572
      Overall system performance   573
7.5.2  Regenerative repeaters   575
      Analog transmission   576
      Digital transmission   577
      Comparison between analog and digital transmission   578
      Bibliography   581
      Appendices   583
7.A   Line codes for PAM systems   583
7.A.1  Line codes   583
      Non-return-to-zero (NRZ) format   583
      Return-to-zero (RZ) format   584
      Biphase (Bφ) format   584
      Delay modulation or Miller code   585
      Block line codes   585
      Alternate mark inversion (AMI)   586
7.A.2  Partial response systems   587
      The choice of the PR polynomial   590
      Symbol detection and error probability   594
      Precoding   596
      Error probability with precoding   597
      Alternative interpretation of PR systems   599
7.B   Computation of Pe for some cases of interest   602
7.B.1  Pe in the absence of ISI   602
7.B.2  Pe in the presence of ISI   604
      Exhaustive method   604
      Gaussian approximation   605
      Worst-case limit   605
      Saltzberg limit   606
      GQR method   607
7.C   Coherent PAM-DSB transmission   608
      General scheme   608
      Transmit signal PSD   609
      Signal-to-noise ratio   609
7.D   Implementation of a QAM transmitter   611
7.E   Simulation of a QAM system   613

8   Channel equalization and symbol detection   619
8.1   Zero-forcing equalizer (LE-ZF)   619
8.2   Linear equalizer (LE)   620
8.2.1  Optimum receiver in the presence of noise and ISI   620
      Alternative derivation of the IIR equalizer   622
      Signal-to-noise ratio   626
8.3   LE with a finite number of coefficients   627
      Adaptive LE   628
8.4   Fractionally spaced equalizer (FSE)   630
      Adaptive FSE   633
8.5   Decision feedback equalizer (DFE)   635
      Adaptive DFE   638
      Design of a DFE with a finite number of coefficients   639
      Design of a fractionally spaced DFE (FS-DFE)   642
      Signal-to-noise ratio   644
      Remarks   645
8.6   Convergence behavior of adaptive equalizers   645
      Adaptive LE   646
      Adaptive DFE   648
8.7   LE-ZF with a finite number of coefficients   648
8.8   DFE: alternative configurations   649
      DFE-ZF   649
      DFE-ZF as a noise predictor   655
      DFE as ISI and noise predictor   655
8.9   Benchmark performance for two equalizers   657
      Performance comparison   657
      Equalizer performance for two channel models   658
8.10  Optimum methods for data detection   659
8.10.1  Maximum likelihood sequence detection   662
      Lower limit to error probability using the MLSD criterion   663
      The Viterbi algorithm (VA)   663
      Computational complexity of the VA   667
8.10.2  Maximum a posteriori probability detector   668
      Statistical description of a sequential machine   668
      The forward-backward algorithm (FBA)   670
      Scaling   673
      Likelihood function in the absence of ISI   674
      Simplified version of the MAP algorithm (Max-Log-MAP)   675
      Relation between Max-Log-MAP and Log-MAP   677
8.11  Optimum receivers for transmission over dispersive channels   678
      Ungerboeck's formulation of the MLSD   680
8.12  Error probability achieved by MLSD   682
      Computation of the minimum distance   686
8.13  Reduced state sequence detection   691
      Reduced state trellis diagram   691
      RSSE algorithm   694
      Further simplification: DFSE   695
8.14  Passband equalizers   697
8.14.1  Passband receiver structure   698
      Joint optimization of equalizer coefficients and carrier phase offset   700
      Adaptive method   701
8.14.2  Efficient implementations of voiceband modems   703
8.15  LE for voiceband modems   705
      Detection of the training sequence   706
      Computations of the coefficients of a cyclic equalizer   707
      Transition from training to data mode   709
      Example of application: a simple modem   709
8.16  LE and DFE in the frequency domain with data frames using cyclic prefix   710
8.17  Numerical results obtained by simulations   713
      QPSK transmission over a minimum phase channel   713
      QPSK transmission over a non-minimum phase channel   715
      8-PSK transmission over a minimum phase channel   716
      8-PSK transmission over a non-minimum phase channel   716
8.18  Diversity combining techniques   717
      Antenna arrays   718
      Combining techniques   719
      Equalization and diversity   722
      Diversity in transmission   722
      Bibliography   726
      Appendices   731
8.A   Calculus of variations and receiver optimization   731
8.A.1  Calculus of variations   731
      Linear functional   731
      Quadratic functional   732
8.A.2  Receiver optimization   735
8.A.3  Joint optimization of transmitter and receiver   739
8.B   DFE design: matrix formulations   741
8.B.1  Method based on correlation sequences   741
8.B.2  Method based on the channel impulse response and i.i.d. symbols   744
8.B.3  Method based on the channel impulse response and any symbol statistic   746
8.B.4  FS-DFE   747
8.C   Equalization based on the peak value of ISI   749
8.D   Description of a finite state machine (FSM)   751

9   Orthogonal frequency division multiplexing   753
9.1   OFDM systems   753
9.2   Orthogonality conditions   755
      Time domain   755
      Frequency domain   755
      z-transform domain   755
9.3   Efficient implementation of OFDM systems   756
      OFDM implementation employing matched filters   757
      Orthogonality conditions in terms of the polyphase components   759
      OFDM implementation employing a prototype filter   760
9.4   Non-critically sampled filter banks   764
9.5   Examples of OFDM systems   769
      Discrete multitone (DMT)   770
      Filtered multitone (FMT)   771
      Discrete wavelet multitone (DWMT)   771
9.6   Equalization of OFDM systems   773
      Interpolator filter and virtual subchannels   773
      Equalization of DMT systems   775
      Equalization of FMT systems   777
9.7   Synchronization of OFDM systems   779
9.8   Passband OFDM systems   780
      Passband DWMT systems   780
      Passband DMT and FMT systems   781
      Comparison between OFDM and QAM systems   781
9.9   DWMT modulation   782
      Transmit and receive filter banks   783
      Approximate interchannel interference suppression   786
      Perfect interchannel interference suppression   788
      Bibliography   793

10   Spread spectrum systems   795
10.1  Spread spectrum techniques   795
10.1.1  Direct sequence systems   795
      Classification of CDMA systems   802
      Synchronization   804
10.1.2  Frequency hopping systems   804
      Classification of FH systems   806
10.2  Applications of spread spectrum systems   807
10.2.1  Anti-jam communications   808
10.2.2  Multiple-access systems   810
10.2.3  Interference rejection   811
10.3  Chip matched filter and rake receiver   811
      Number of resolvable rays in a multipath channel   811
      Chip matched filter (CMF)   813
10.4  Interference   816
      Detection strategies for multiple-access systems   818
10.5  Equalizers for single-user detection   818
      Chip equalizer (CE)   818
      Symbol equalizer (SE)   819
10.6  Block equalizer for multiuser detection   820
10.7  Maximum likelihood multiuser detector   823
      Correlation matrix approach   823
      Whitening filter approach   824
      Bibliography   824

11   Channel codes   827
11.1  System model   828
11.2  Block codes   830
11.2.1  Theory of binary codes with group structure   830
      Properties   830
      Parity check matrix   833
      Code generator matrix   836
      Decoding of binary parity check codes   837
      Cosets   837
      Two conceptually simple decoding methods   838
      Syndrome decoding   839
11.2.2  Fundamentals of algebra   842
      Modulo q arithmetic   843
      Polynomials with coefficients from a field   845
      The concept of modulo in the arithmetic of polynomials   846
      Devices to sum and multiply elements in a finite field   849
      Remarks on finite fields   851
      Roots of a polynomial   854
      Minimum function   857
      Methods to determine the minimum function   859
      Properties of the minimum function   861
11.2.3  Cyclic codes   862
      The algebra of cyclic codes   862
      Properties of cyclic codes   864
      Encoding method using a shift register of length r   869
      Encoding method using a shift register of length k   870
      Hard decoding of cyclic codes   871
      Hamming codes   872
      Burst error detection   875
11.2.4  Simplex cyclic codes   875
      Relation to PN sequences   877
11.2.5  BCH codes   878
      An alternative method to specify the code polynomials   878
      Bose–Chaudhuri–Hocquenghem (BCH) codes   880
      Binary BCH codes   883
      Reed–Solomon codes   885
      Decoding of BCH codes   887
      Efficient decoding of BCH codes   891
11.2.6  Performance of block codes   899
11.3  Convolutional codes   900
11.3.1  General description of convolutional codes   903
      Parity check matrix   905
      Generator matrix   906
      Transfer function   907
      Catastrophic error propagation   910
11.3.2  Decoding of convolutional codes   912
      Interleaving   913
      Two decoding models   913
      Viterbi algorithm   915
      Forward-backward algorithm   915
      Sequential decoding   917
11.3.3  Performance of convolutional codes   919
11.4  Concatenated codes   921
      Soft-output Viterbi algorithm (SOVA)   921
11.5  Turbo codes   924
      Encoding   924
      The basic principle of iterative decoding   929
      The forward-backward algorithm revisited   930
      Iterative decoding   939
      Performance evaluation   941
11.6  Iterative detection and decoding   943
11.7  Low-density parity check codes   946
      Encoding procedure   948
      Decoding algorithm   948
      Example of application   953
      Performance and coding gain   954
      Bibliography   956
      Appendices   960
11.A  Nonbinary parity check codes   960
      Linear codes   961
      Parity check matrix   962
      Code generator matrix   963
      Decoding of nonbinary parity check codes   964
      Coset   964
      Two conceptually simple decoding methods   965
      Syndrome decoding   965

12   Trellis coded modulation   967
12.1  Linear TCM for one- and two-dimensional signal sets   968
12.1.1  Fundamental elements   968
      Basic TCM scheme   970
      Example   970
12.1.2  Set partitioning   973
12.1.3  Lattices   975
12.1.4  Assignment of symbols to the transitions in the trellis   980
12.1.5  General structure of the encoder/bit-mapper   985
      Computation of d_free   987
12.2  Multidimensional TCM   990
      Encoding   990
      Decoding   993
12.3  Rotationally invariant TCM schemes   995
      Bibliography   996

13   Precoding and coding techniques for dispersive channels   999
13.1  Capacity of a dispersive channel   999
13.2  Techniques to achieve capacity   1002
      Bit loading for OFDM   1002
      Discrete-time model of a single carrier system   1003
      Achieving capacity with a single carrier system   1007
13.3  Precoding and coding for dispersive channels   1008
13.3.1  Tomlinson–Harashima (TH) precoding   1009
13.3.2  TH precoding and TCM   1012
13.3.3  Flexible precoding   1018
      Bibliography   1025

14   Synchronization   1027
14.1  The problem of synchronization for QAM systems   1027
14.2  The phase-locked loop   1029
14.2.1  PLL baseband model   1031
      Linear approximation   1032
14.2.2  Analysis of the PLL in the presence of additive noise   1034
      Noise analysis using the linearity assumption   1035
14.2.3  Analysis of a second-order PLL   1036
14.3  Costas loop   1040
14.3.1  PAM signals   1040
14.3.2  QAM signals   1042
14.4  The optimum receiver   1044
      Timing recovery   1046
      Carrier phase recovery   1050
14.5  Algorithms for timing and carrier phase recovery   1051
14.5.1  ML criterion   1051
      Assumption of slow time varying channel   1051
14.5.2  Taxonomy of algorithms using the ML criterion   1051
      Feedback estimators   1053
      Early-late estimators   1055
14.5.3  Timing estimators   1055
      Non-data aided   1055
      Non-data aided via spectral estimation   1057
      Data-aided and data-directed   1059
      Data- and phase-directed with feedback: differentiator scheme   1062
      Data- and phase-directed with feedback: Mueller & Muller scheme   1064
      Non-data aided with feedback   1065
14.5.4  Phasor estimators   1066
      Data- and timing-directed   1066
      Non-data aided for M-PSK signals   1066
      Data- and timing-directed with feedback   1067
14.6  Algorithms for carrier frequency recovery   1068
14.6.1  Frequency offset estimators   1069
      Non-data aided   1069
      Non-data aided and timing-independent with feedback   1071
      Non-data aided and timing-directed with feedback   1071
14.6.2  Estimators operating at the modulation rate   1072
      Data-aided and data-directed   1073
      Non-data aided for M-PSK   1073
14.7  Second-order digital PLL   1074
14.8  Synchronization in spread spectrum systems   1074
14.8.1  The transmission system   1074
      Transmitter   1074
      Optimum receiver   1075
14.8.2  Timing estimators with feedback   1076
      Non-data aided: noncoherent DLL   1076
      Non-data aided MCTL   1077
      Data- and phase-directed: coherent DLL   1077
      Bibliography   1081

15   Self-training equalization   1083
15.1  Problem definition and fundamentals   1083
      Minimization of a special function   1086
15.2  Three algorithms for PAM systems   1090
      The Sato algorithm   1090
      Benveniste–Goursat algorithm   1091
      Stop-and-go algorithm   1092
      Remarks   1092
15.3  The contour algorithm for PAM systems   1093
      Simplified realization of the contour algorithm   1095
15.4  Self-training equalization for partial response systems   1096
      The Sato algorithm for partial response systems   1096
      Contour algorithm for partial response systems   1098
15.5  Self-training equalization for QAM systems   1100
      The Sato algorithm for QAM systems   1100
15.5.1  Constant modulus algorithm   1101
      The contour algorithm for QAM systems   1102
      Joint contour algorithm and carrier phase tracking   1104
15.6  Examples of applications   1106
      Bibliography   1111
      Appendices   1113
15.A  On the convergence of the contour algorithm   1113

16   Applications of interference cancellation   1115
16.1  Echo and near-end crosstalk cancellation for PAM systems   1116
      Crosstalk cancellation and full duplex transmission   1117
      Polyphase structure of the canceller   1118
      Canceller at symbol rate   1119
      Adaptive canceller   1120
      Canceller structure with distributed arithmetic   1121
16.2  Echo cancellation for QAM systems   1124
16.3  Echo cancellation for OFDM systems   1128
16.4  Multiuser detection for VDSL   1131
16.4.1  Upstream power back-off   1136
16.4.2  Comparison of PBO methods   1137
      Bibliography   1142

17   Wired and wireless network technologies   1145
17.1  Wired network technologies   1145
17.1.1  Transmission over unshielded twisted pairs in the customer service area   1145
      Modem   1145
      Digital subscriber line   1146
17.1.2  High speed transmission over unshielded twisted pairs in local area networks   1152
17.1.3  Hybrid fiber/coaxial cable networks   1156
      Ranging and power adjustment for uplink transmission   1158
17.2  Wireless network technologies   1161
17.2.1  Wireless local area networks   1162
      Medium access control protocols   1164
17.2.2  MMDS and LMDS   1165
      Bibliography   1167
      Appendices   1170
17.A  Standards for wireless systems   1170
17.A.1  General observations   1171
      Wireless systems   1171
      Modulation techniques   1171
      Parameters of the modulator   1171
      Cells in a wireless system   1172
17.A.2  GSM standard   1172
      System characteristics   1172
      Radio subsystem   1175
      GSM-EDGE   1177
17.A.3  IS-136 standard   1177
17.A.4  JDC standard   1180
17.A.5  IS-95 standard   1180
17.A.6  DECT standard   1182
17.A.7  HIPERLAN standard   1185

18   Modulation techniques for wireless systems   1189
18.1  Analog front-end architectures   1189
      Conventional superheterodyne receiver   1189
      Alternative architectures   1190
      Direct conversion receiver   1190
      Single conversion to low-IF   1191
      Double conversion and wideband IF   1192
18.2  Three noncoherent receivers for phase modulation systems   1192
18.2.1  Baseband differential detector   1192
18.2.2  IF-band (1 bit) differential detector (1BDD)   1194
      Performance of M-DPSK   1196
18.2.3  FM discriminator with integrate and dump filter (LDI)   1197
18.3  Variants of QPSK   1198
18.3.1  Basic schemes   1198
      QPSK   1198
      Offset QPSK or staggered QPSK   1200
      Differential QPSK (DQPSK)   1201
      π/4-DQPSK   1202
18.3.2  Implementations   1203
      QPSK, OQPSK, and DQPSK modulators   1203
      π/4-DQPSK modulators   1203
18.4  Frequency shift keying (FSK)   1207
18.4.1  Power spectrum of M-FSK   1207
: : : : : : : : : : : : : : : : : : : : : :
: : : : : : : : : : : : : : : : : : : : : :
: : : : : : : : : : : : : : : : : : : : : :
: : : : : : : : : : : : : : : : : : : : : :
: : : : : : : : : : : : : : : : : : : : : :
: : : : : : : : : : : : : : : : : : : : : :
Contents
xxvii
Power spectrum of noncoherent binary FSK : : Power spectrum of coherent MFSK : : : : : : : 18.4.2 FSK receivers and corresponding performance : : : Coherent demodulator : : : : : : : : : : : : : : Noncoherent demodulator : : : : : : : : : : : : Limiterdiscriminator FM demodulator : : : : : : 18.5 Minimum shift keying (MSK) : : : : : : : : : : : : : : : : 18.5.1 Power spectrum of continuousphase FSK (CPFSK) 18.5.2 The MSK signal viewed from two perspectives : : Phase of an MSK signal : : : : : : : : : : : : : MSK as binary CPFSK : : : : : : : : : : : : : : MSK as OQPSK : : : : : : : : : : : : : : : : : Complex notation of an MSK signal : : : : : : : 18.5.3 Implementations of an MSK scheme : : : : : : : : 18.5.4 Performance of MSK demodulators : : : : : : : : : MSK with differential precoding : : : : : : : : : 18.5.5 Remarks on spectral containment : : : : : : : : : : 18.6 Gaussian MSK (GMSK) : : : : : : : : : : : : : : : : : : 18.6.1 GMSK via CPFSK : : : : : : : : : : : : : : : : : 18.6.2 Power spectrum of GMSK : : : : : : : : : : : : : 18.6.3 Implementation of a GMSK scheme : : : : : : : : Conﬁguration I : : : : : : : : : : : : : : : : : : Conﬁguration II : : : : : : : : : : : : : : : : : : Conﬁguration III : : : : : : : : : : : : : : : : : 18.6.4 Linear approximation of a GMSK signal : : : : : : Performance of GMSK demodulators : : : : : : Performance of a GSM receiver in the presence of multipath : : : : : : : : Bibliography : : : : : : : : : : : : : : : : : : : : : : : : : : : : Appendices : : : : : : : : : : : : : : : : : : : : : : : : : : : : 18.A Continuous phase modulation (CPM) : : : : : : : : : : : : Alternative deﬁnition of CPM : : : : : : : : : : Advantages of CPM : : : : : : : : : : : : : : : 19 Design of high speed transmission systems over unshielded twisted pair cables 19.1 Design of a quaternary partial response classIV system sion at 125 Mbit/s : : : : : : : : : : : : : : : : : : : : Analog ﬁlter design : : : : : : : : : : : : : : Received signal 
and adaptive gain control : : Nearend crosstalk cancellation : : : : : : : Decorrelation ﬁlter : : : : : : : : : : : : : : Adaptive equalizer : : : : : : : : : : : : : : Compensation of the timing phase drift : : :
: : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : :
: : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : :
: : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : :
: : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : :
: : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : :
: : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : :
: : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : :
1208 1209 1212 1212 1213 1213 1214 1217 1217 1217 1219 1220 1222 1224 1224 1227 1228 1229 1229 1231 1234 1234 1234 1236 1238 1238 1243 1244 1246 1246 1246 1248
1249 for data transmis: : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 1249 1249 1250 1251 1251 1252 1252
2.xxviii Contents Adaptive equalizer coefﬁcient adaptation : : : : : : Convergence behavior of the various algorithms : : 19.1.1 VLSI implementation : : : : : : : : : : : : : : : : : Adaptive digital NEXT canceller : : : : : : : : : : Adaptive digital equalizer : : : : : : : : : : : : : : Timing control : : : : : : : : : : : : : : : : : : : Viterbi detector : : : : : : : : : : : : : : : : : : : 19.2 Design of a dual duplex transmission system at 100 Mbit/s : : Dual duplex transmission : : : : : : : : : : : : : : Physical layer control : : : : : : : : : : : : : : : : Coding and decoding : : : : : : : : : : : : : : : : 19.A Interference suppression : : : : : : : : : : : : : : : : : : : : Index : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 1253 1253 1255 1255 1258 1261 1262 1263 1263 1265 1266 1269 1269 1270 1272 1273 1274 1274 1277 .1 Signal processing functions : : : : : : : : : : : : : : The 100BASET2 transmitter : : : : : : : : : : : : The 100BASET2 receiver : : : : : : : : : : : : : Computational complexity of digital receive ﬁlters : Bibliography : : : : : : : : : : : : : : : : : : : : : : : : : : : : : Appendices : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 19.
Preface

The motivation for this book is twofold. On the one hand, we present a discussion of fundamental algorithms and structures for telecommunication technologies; we focus on fundamental developments in the field in order to provide the reader with the necessary insight to design essential elements of various communications systems. On the other hand, we provide a didactic tool to students of communications systems. The text explains the procedures for solving problems posed by the design of systems for reliable communications over wired or wireless channels. The contents reflect our experience in teaching courses on Algorithms for Telecommunications at the University of Padova, Italy, as well as our professional experience acquired in industrial research laboratories.

The book is divided into nineteen chapters. We briefly indicate four tracks corresponding to specific areas and course work offered.

Track 1 includes the basic elements for a first course on telecommunications, which we regard as an introduction to the remaining tracks. It covers Chapter 1, which recalls fundamental concepts on signals and random processes, with an emphasis on second-order statistical descriptions; in particular, parametric models of random processes are analyzed in Chapter 1. The Wiener filter and the linear prediction theory, which constitute fundamental elements for receiver design, are dealt with in Chapter 2. Chapter 3 lists iterative methods to achieve the objectives stated in Chapter 2. A discussion of the characteristics of transmission media follows in Chapter 4; in this track we focus on the description of noise in electronic devices and on the laws of propagation in transmission lines and radio channels. The representation of waveforms by sequences of binary symbols is treated in Chapter 5; for a first course it is suggested that emphasis be placed on PCM. Chapter 6 examines the fundamental principles of a digital transmission system, where a sequence of information symbols is sent over a transmission channel. We refer to the Shannon theorem to establish the maximum bit rate that can be transmitted reliably over a noisy channel. Signal dispersion caused by a transmission channel is then analyzed in Chapter 7. The first three sections of Chapter 11, where we introduce methods for increasing transmission reliability by exploiting the redundancy added to the information bits, conclude the first track. Examples of elementary and practical implementations of transmission systems are presented, together with a brief introduction to computer simulations.

Track 2, which is an extension of Track 1, focuses on modulation techniques. Initially, single-carrier modulation systems are considered. In the first part of Chapter 8, channel equalization is examined as a further application of the Wiener filter. In the second part of the chapter, more sophisticated methods of equalization and symbol detection, which rely on the Viterbi algorithm and on the forward-backward algorithm, are analyzed. A further method to mitigate channel dispersion is precoding. In Chapter 9 we introduce multicarrier modulation techniques, which are preferable for transmission over very dispersive channels and/or applications that require flexibility in spectral allocation. In Chapter 10 spread spectrum systems are examined, with emphasis on applications for simultaneous channel access by several users that share a wideband channel. Channel coding techniques to improve the reliability of transmission are investigated in depth in Chapters 11 and 12. The operations of systems that employ joint precoding and channel coding are explained in Chapter 13.

Track 3 begins with a review of Chapters 2 and 3, where various applications of the Wiener filter, for example channel identification and interference cancellation, are analyzed; these applications are further developed in the first two sections of Chapter 16. The design of the receiver front-end, as well as various methods for timing and carrier recovery, are dealt with in Chapter 14. The assumption that the transmission channel characteristics are known a priori is removed in Chapter 15, where blind equalization techniques are discussed. Because of electromagnetic coupling, the desired signal at the receiver is often disturbed by other transmissions taking place simultaneously. Cancellation techniques to suppress interference signals are treated in Chapter 16. The inherent narrowband interference rejection capabilities of spread spectrum systems are also discussed.

Track 4 addresses various challenges encountered in designing wired and wireless communications systems. The elements introduced in Chapters 2 and 3, the principles of multicarrier and spread spectrum modulation techniques, which are investigated in depth in Chapters 9 and 10, respectively, as well as the algorithms introduced in Chapter 8, which investigates individual building blocks for channel equalization and symbol detection, are essential for this track. Applications of interference cancellation and multiuser detection are addressed in Chapter 16. An overview of wired and wireless access technologies appears in Chapter 17. This is followed by Chapter 18, which illustrates specific modulation techniques developed for mobile radio applications, and specific examples of system design, which illustrate the fundamental principles of transmission system design, are given in Chapters 18 and 19.

We observe the trend towards implementing transceiver functions using digital signal processors, which are increasingly being adopted in communications systems. Hardware devices are assigned wherever possible only the functions of analog front-end, fixed filtering, and digital-to-analog and analog-to-digital conversion. This approach enhances the flexibility of transceivers, which can be utilized for more than one transmission standard, and considerably reduces development time. Therefore, in line with the above considerations, the algorithmic aspects of a transmission system are becoming increasingly important.
Acknowledgements

We gratefully acknowledge all who have made the realization of this book possible. We gratefully acknowledge our colleague and mentor Jack Wolf for letting us include his lecture notes in the chapter on channel codes. A special acknowledgment goes to our colleagues Werner Bux and Evangelos Eleftheriou of the IBM Zurich Research Laboratory, and Silvano Pupolin of the University of Padua, for their continuing support.

We are pleased to thank the following colleagues for their invaluable assistance throughout the revision of the book: Antonio Assalini, Paola Bisaglia, Alberto Bononi, Giancarlo Calvagno, Giulio Colavolpe, Roberto Corvaja, Elena Costa, Andrea Galtarossa, Antonio Mian, Carlo Monti, Ezio Obetti, Riccardo Rahely, Roberto Rinaldo, Antonio Salloum, Fortunato Santucci, Andrea Scaggiante, Giovanna Sostrato, Stefano Tomasin, and Luciano Tomba. The editing of the various chapters would never have been completed without the contributions of numerous students in our courses on Algorithms for Telecommunications; although space limitations preclude mentioning them all by name, we nevertheless express our sincere gratitude.

For text processing of the Italian version, the contribution of Barbara Sicoli was indispensable; our thanks also go to Jane Frankenfield Zanin for her help in translating the text into English. We also thank Christian Bolis and Chiara Paci for their support in developing the software for the book, Urs Bitterli and Darja Kropaci for their help with the graphics editing, and Charlotte Bolliger and Lilli M. Pavka for their assistance in administering the project.

Nevio Benvenuto
Giovanni Cherubini
To make the reading of the adopted symbols easier, a table containing the Greek alphabet is included.

The Greek alphabet

   alpha    α  A        nu       ν  N
   beta     β  B        xi       ξ  Ξ
   gamma    γ  Γ        omicron  ο  O
   delta    δ  Δ        pi       π  Π
   epsilon  ε  E        rho      ρ  P
   zeta     ζ  Z        sigma    σ  Σ
   eta      η  H        tau      τ  T
   theta    θ  Θ        upsilon  υ  Υ
   iota     ι  I        phi      φ  Φ
   kappa    κ  K        chi      χ  X
   lambda   λ  Λ        psi      ψ  Ψ
   mu       μ  M        omega    ω  Ω
Chapter 1

Elements of signal theory

In the present chapter we recall fundamental concepts of signal theory and random processes. We will begin with the definition of signal space and its discrete representation, then move to the study of discrete-time linear systems (discrete Fourier transforms, IIR and FIR impulse responses) and signals (complex representation of passband signals and the baseband equivalent). We will conclude with the study of random processes, with emphasis on the statistical estimation of first- and second-order ergodic processes (periodogram, correlogram, ARMA, MA and especially AR models). A majority of readers will simply find this chapter a review of known principles, while others will find it a useful incentive for further in-depth study, for which we recommend the items in the bibliography.

1.1 Signal space

Definition 1.1
A linear space is a set of elements called vectors, together with two operators defined over the elements of the set: the sum between vectors and the multiplication of a vector by a scalar.

The Euclidean space is an example of a linear space in which the sum of two vectors coincides with the vector obtained by adding the individual components, and the product of a vector by a scalar coincides with the vector obtained by multiplying each component by that scalar. In our case, of particular interest is the set of complex vectors, i.e. those with complex-valued components.

Properties of a linear space
Let x, y, z and 0 be elements of a linear space, and let α and β be complex numbers (scalars).

1. Addition is commutative:
      x + y = y + x                                          (1.1)
2. Addition is associative:
      x + (y + z) = (x + y) + z                              (1.2)
3. There exists a unique vector 0, called null, such that
      0 + x = x                                              (1.3)
4. For each x, there is a unique vector −x, called additive inverse, such that
      x + (−x) = 0                                           (1.4)
5. Multiplication by scalars is associative:
      α(βx) = (αβ)x                                          (1.5)
   In particular, we have
      1x = x        0x = 0                                   (1.6)
6. Distributive laws:
      α(x + y) = αx + αy                                     (1.7)
      (α + β)x = αx + βx                                     (1.8)

As previously mentioned, the Euclidean space is an example of a linear space. Two other examples of linear spaces are the discrete-time signal space (a Euclidean space with infinite dimensions), whose elements are the signals

      {x(kTc)}    k integer                                  (1.9)

where Tc is the sampling period or interval,¹ and the continuous-time signal space, whose elements are the signals

      x(t)    t ∈ R                                          (1.10)

where R denotes the set of real numbers. A geometrical interpretation of the two elementary operations in a two-dimensional Euclidean space is given in Figure 1.1. [Figure 1.1. Geometrical interpretation in the two-dimensional space of the sum of two vectors and the multiplication of a vector by a scalar.]

¹ Later a discrete-time signal will be indicated simply as {x(k)}, omitting the indication of the sampling period. In general, we will indicate by {x_k} a sequence of real or complex numbers not necessarily generated at instants kTc.
Inner product

In an I-dimensional Euclidean space, given the two vectors² x = [x_1, ..., x_I]^T and y = [y_1, ..., y_I]^T, we indicate with ⟨x, y⟩ the inner product:

      ⟨x, y⟩ = Σ_{i=1}^{I} x_i y_i*                          (1.11)

Note that

      ⟨x, x⟩ = Σ_{i=1}^{I} |x_i|² = ‖x‖²                     (1.12)

Observation 1.1
From (1.12), ‖x‖ is the norm of x, that is the vector length.

There is an important geometrical interpretation of the inner product in the Euclidean space, which is obtained from the relation

      ⟨x, y⟩ = ‖x‖ ‖y‖ cos θ                                 (1.13)

where ‖x‖ denotes the norm or length of the vector x, and θ is the angle formed by the two vectors, as represented in Figure 1.2 (here I = 2). If ⟨x, y⟩ is real, ⟨x, y⟩/‖y‖ = ‖x‖ cos θ is the length of the projection of x onto y. [Figure 1.2. Geometrical representation of the inner product between two vectors.]

Definition 1.2
Two vectors x and y are orthogonal (x ⊥ y) if

      ⟨x, y⟩ = 0                                             (1.14)

that is if the angle they form is 90°.

² Henceforth: T stands for transpose, * for complex conjugate, and H for transpose complex conjugate or Hermitian.
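As a numerical illustration of (1.11)–(1.14) — a minimal NumPy sketch, with arbitrarily chosen example vectors not taken from the text:

```python
import numpy as np

def inner(x, y):
    # <x, y> = sum_i x_i * conj(y_i)  -- Eq. (1.11)
    # np.vdot conjugates its FIRST argument, hence the swapped order
    return np.vdot(y, x)

x = np.array([3.0, 0.0])
y = np.array([1.0, 1.0])

# norm from the inner product, Eq. (1.12): ||x|| = sqrt(<x, x>)
norm_x = np.sqrt(inner(x, x).real)
norm_y = np.sqrt(inner(y, y).real)

# angle from Eq. (1.13): <x, y> = ||x|| ||y|| cos(theta)
cos_theta = inner(x, y).real / (norm_x * norm_y)
print(cos_theta)        # 0.7071..., i.e. theta = 45 degrees

# orthogonality, Eq. (1.14): these two vectors form a 90-degree angle
print(inner(np.array([1.0, 1.0]), np.array([1.0, -1.0])))   # 0.0
```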
We can extend these concepts to a signal space, defining the inner product as

      ⟨x, y⟩ = Σ_{k=−∞}^{+∞} x(k) y*(k)                      (1.15)

for discrete-time signals, and as

      ⟨x, y⟩ = ∫_{−∞}^{+∞} x(t) y*(t) dt                     (1.16)

for continuous-time signals. In both cases it is assumed that the energy of the signals is finite; for continuous-time signals it must be

      ∫_{−∞}^{+∞} |x(t)|² dt < ∞   and   ∫_{−∞}^{+∞} |y(t)|² dt < ∞      (1.17)

Recall that the inner product enjoys the following properties:

1. ⟨x, y⟩ = ⟨y, x⟩*.
2. ⟨αx, y⟩ = α⟨x, y⟩.
3. ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩.
4. ⟨x, x⟩ > 0, for all x ≠ 0.
5. (Schwarz inequality) |⟨x, y⟩| ≤ ‖x‖ ‖y‖; equality holds if and only if x = Ky, with K a complex scalar.

1.2 Discrete signal representation

Let us consider the problem of associating a sequence (possibly finite) of numbers with a continuous-time signal.³

Definition 1.3
A basis of orthonormal signals (orthogonal signals with unit norm) {φ_i(t)}, t ∈ R, i ∈ I, where I is a finite or numerable set, is defined by

      ⟨φ_i, φ_j⟩ = ∫_{−∞}^{+∞} φ_i(t) φ_j*(t) dt = δ_{i−j} = { 1 if i = j ; 0 if i ≠ j }      (1.18)

In this text, δ(t) denotes the Dirac delta, whereas δ_n is the Kronecker delta. For a finite-energy signal x, the energy is

      E_x = ⟨x, x⟩ = ∫_{−∞}^{+∞} |x(t)|² dt                  (1.19)

³ This procedure can easily be extended to discrete-time signals.
Given a finite-energy signal x(t), t ∈ R, we want to express x(t) as a linear combination of the functions {φ_i(t)}, t ∈ R, i ∈ I. Consider the signal

      x̂(t) = Σ_{i∈I} x_i φ_i(t)                              (1.20)

If we define the error as

      e(t) = x(t) − x̂(t)                                     (1.21)

the most common method to determine the coefficients {x_i} in (1.20) is by minimizing the energy of e(t), defined as

      E_e = ⟨e, e⟩ = ∫_{−∞}^{+∞} | x(t) − Σ_{i∈I} x_i φ_i(t) |² dt      (1.22)

Let

      c_i = ⟨x, φ_i⟩                                         (1.23)

and express E_e as

      E_e = ⟨x − x̂, x − x̂⟩ = E_x + Σ_{i∈I} ( |x_i|² − x_i c_i* − x_i* c_i )      (1.24)

By adding and subtracting |c_i|², one finds

      |x_i|² − x_i c_i* − x_i* c_i + |c_i|² = (x_i − c_i)(x_i − c_i)* = |x_i − c_i|²

so that

      E_e = E_x − Σ_{i∈I} |c_i|² + Σ_{i∈I} |x_i − c_i|²      (1.25)

The minimum of (1.25) is obtained if the last term is zero. Hence the coefficients to be determined are given by

      x_i = c_i = ⟨x, φ_i⟩ = ∫_{−∞}^{+∞} x(t) φ_i*(t) dt      (1.26)

and, correspondingly,

      E_e = E_x − E_x̂      where      E_x̂ = Σ_{i∈I} |x_i|²   (1.27)–(1.28)

If E_e = 0, then {φ_i(t)}, i ∈ I, is a complete basis for x(t), and x(t) can be expressed as

      x(t) = Σ_{i∈I} x_i φ_i(t)                              (1.29)

where equality must be intended in terms of quadratic norm.
Therefore, if E_e = 0, from (1.29),

      E_x = Σ_{i∈I} |x_i|²                                   (1.30)

that is, the energy of the signal coincides with the sum of the squares of the coefficient amplitudes. In other words, given a sequence of orthonormal signals {φ_i(t)}, t ∈ R, i ∈ I, which form a complete basis for x(t), the signal x(t) can be represented by the sequence of numbers {x_i}, i ∈ I, given by (1.26).

The principle of orthogonality
The vector identified by the optimum coefficients (1.26) satisfies the following important property:

      ⟨e, φ_i⟩ = ⟨x − x̂, φ_i⟩ = x_i − x_i = 0    for all i ∈ I      (1.31)

that is,

      e ⊥ φ_i    for all i ∈ I                               (1.32)

Signal representation
The signal x̂(t), t ∈ R, given by (1.20) with {x_i = c_i}, is called the projection of x(t) onto the space spanned by the signals {φ_i(t)}, i ∈ I, that is, the space whose signals are expressed as linear combinations of {φ_i(t)}, i ∈ I. If E_e = 0, then x(t) belongs to the space spanned by {φ_i(t)}, i ∈ I. As an example, for a generic signal x and for a basis formed by two signals, one gets the geometrical representation in Figure 1.3. [Figure 1.3. Geometrical representation of the projection of x onto the space spanned by φ_1 and φ_2.]
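The principle of orthogonality can be checked numerically. The following sketch — with an arbitrarily chosen vector and basis, purely for illustration — projects a signal onto a two-signal orthonormal basis and verifies (1.31) and the energy relation (1.27):

```python
import numpy as np

# orthonormal basis for a 2-D subspace of R^3 (sampled signals)
phi1 = np.array([1.0,  1.0, 0.0]) / np.sqrt(2)
phi2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)

x = np.array([2.0, 1.0, 3.0])

# optimum coefficients x_i = <x, phi_i>, Eq. (1.26)
c = np.array([x @ phi1, x @ phi2])
x_hat = c[0] * phi1 + c[1] * phi2    # projection of x onto span{phi1, phi2}
e = x - x_hat                        # error signal, Eq. (1.21)

# principle of orthogonality, Eq. (1.31): e is orthogonal to each basis signal
assert abs(e @ phi1) < 1e-12 and abs(e @ phi2) < 1e-12

# energy relation, Eq. (1.27): E_e = E_x - sum_i |x_i|^2
E_x, E_e = x @ x, e @ e
assert abs(E_e - (E_x - np.sum(c**2))) < 1e-12
```

Here E_e > 0 because x has a component outside the spanned subspace; adding a third basis vector along that component would make the basis complete and drive E_e to zero.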
In short, we have the following correspondence between a signal and its vector representation:

      x(t), t ∈ R   ↔   x = [..., x_{−1}, x_0, x_1, ...]^T   (1.33)

It is useful to analyze the inner product between signals in terms of the corresponding vector representations; the signals are not necessarily real. Let

      x(t) = Σ_{i∈I} x_i φ_i(t)      y(t) = Σ_{i∈I} y_i φ_i(t)      (1.34)

Then

      ⟨x, y⟩ = ∫_{−∞}^{+∞} x(t) y*(t) dt = Σ_{i∈I} x_i y_i* = ⟨x, y⟩      (1.35)

In particular,

      E_x = ∫_{−∞}^{+∞} |x(t)|² dt = Σ_{i∈I} |x_i|² = ⟨x, x⟩ = ‖x‖²      (1.36)

We introduce now the Euclidean distance between two signals:

      d(x, y) = ( ∫_{−∞}^{+∞} |x(t) − y(t)|² dt )^{1/2}      (1.37)

      d²(x, y) = Σ_{i∈I} |x_i − y_i|² = d²(x, y)             (1.38)

In other words, the Euclidean distance between two signals coincides with the Euclidean distance between the corresponding vectors. In particular, we have E_e = d²(x, x̂). Moreover, the following relation holds:⁴

      d²(x, y) = E_x + E_y − 2 Re[⟨x, y⟩]                    (1.39)

Let

      ρ = ⟨x, y⟩ / ( ‖x‖ ‖y‖ )                               (1.40)

be the correlation coefficient between x and y. We note that, for ρ to be real, it is sufficient that the inner product between the signals is real; if ρ is real, observing (1.13), ρ = cos θ, with θ the angle formed by x and y, and (1.39) becomes

      d²(x, y) = E_x + E_y − 2ρ √(E_x E_y)                   (1.41)

⁴ The symbols Re[c] and Im[c] denote, respectively, the real and the imaginary part of c.
Example 1.2.1
Resorting to the sampling theorem, it can be shown that for a real-valued signal x(t), t ∈ R, with finite bandwidth B (see (1.140)), the sequence of functions

      φ_i(t) = (1/√Tc) sinc( (t − iTc)/Tc )      with      Tc = 1/(2B)      (1.42)

where sinc(a) = sin(πa)/(πa), forms a complete orthonormal basis for x(t). The coefficients are given by the samples of x(t) at the time instants t = iTc:

      x_i = ⟨x, φ_i⟩ = √Tc x(iTc)                            (1.43)

Gram–Schmidt orthonormalization procedure
Given a set of M signals

      {s_m(t)}      m = 1, 2, ..., M                         (1.44)

a procedure to derive an orthonormal basis for this set is now outlined. We indicate by {φ'_i(t)} a set of orthogonal functions, and by

      φ_i(t) = φ'_i(t) / √(E_{φ'_i})                         (1.45)

the orthonormal functions obtained from {φ'_i(t)}.

1. Let

      φ'_1(t) = s_1(t)                                       (1.46)
      φ_1(t) = φ'_1(t) / √(E_{φ'_1})                         (1.47)

2. Let

      φ'_2(t) = s_2(t) − ⟨s_2, φ_1⟩ φ_1(t)                   (1.48)

   where ⟨s_2, φ_1⟩ φ_1(t) is the projection of s_2 upon φ_1. From (1.48) it is easy to see that

      ⟨φ'_2, φ_1⟩ = ⟨s_2, φ_1⟩ − ⟨s_2, φ_1⟩ = 0              (1.49)

   that is, φ'_2 ⊥ φ_1, as illustrated in Figure 1.4. Then

      φ_2(t) = φ'_2(t) / √(E_{φ'_2})                         (1.50)
3. Let

      φ'_3(t) = s_3(t) − ⟨s_3, φ_1⟩ φ_1(t) − ⟨s_3, φ_2⟩ φ_2(t)      (1.51)

   then φ'_3 ⊥ φ_1 and φ'_3 ⊥ φ_2.

In general,

      φ'_i(t) = s_i(t) − Σ_{j=1}^{i−1} ⟨s_i, φ_j⟩ φ_j(t)     (1.52)

and, if φ'_i(t) is not identically zero, we choose

      φ_i(t) = φ'_i(t) / √(E_{φ'_i})                         (1.53)

It follows that φ_i ⊥ φ_j for j = 1, 2, ..., i − 1. The procedure is represented geometrically in Figure 1.4, limited to φ'_2 and φ'_3. [Figure 1.4. Geometrical representation of the Gram–Schmidt orthonormalization procedure.]

Observation 1.2
The set {φ_i(t)} is not unique; in any case the reciprocal distances between signals remain unchanged.

Observation 1.3
The number of dimensions I of {φ_i(t)} can be lower than M if the signals {s_m(t)}, m = 1, 2, ..., M, are linearly dependent, that is, if there exists a set of coefficients, not all equal to zero, such that

      Σ_{m=1}^{M} c_m s_m(t) = 0      for all t              (1.54)

In such a case, it happens that in (1.52) for some i the signal φ'_i(t) is identically zero; obviously, a null signal cannot be an element of the basis.

Let us look at a few examples of discrete representation of a set of signals.
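For sampled signals, with inner products computed as sums over the samples, the steps (1.46)–(1.53) can be sketched as follows (a minimal implementation for illustration, not taken from the book):

```python
import numpy as np

def gram_schmidt(signals, tol=1e-10):
    """Orthonormal basis for a list of sampled signals, following (1.46)-(1.53).
    Inner products are approximated by sums over the samples."""
    basis = []
    for s in signals:
        v = s.astype(complex)
        for phi in basis:
            # subtract the projection <s, phi> phi, as in Eq. (1.52)
            v = v - np.vdot(phi, v) * phi
        energy = np.vdot(v, v).real
        if energy > tol:      # a null phi' is skipped (Observation 1.3)
            basis.append(v / np.sqrt(energy))
    return basis

# s3 = s1 + s2: the signals are linearly dependent, hence I = 2 < M = 3
s1 = np.array([1.0, 0.0, 0.0])
s2 = np.array([1.0, 1.0, 0.0])
s3 = s1 + s2
basis = gram_schmidt([s1, s2, s3])
print(len(basis))                          # 2
print(abs(np.vdot(basis[0], basis[1])))    # 0.0 (orthogonal)
```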
Example 1.2.2
For the three signals

      s_1(t) = A sin(2πt/T)    0 < t < T/2,    0 elsewhere   (1.55)
      s_2(t) = A sin(2πt/T)    0 < t < T,      0 elsewhere   (1.56)
      s_3(t) = A sin(2πt/T)    T/2 < t < T,    0 elsewhere   (1.57)

which are depicted in Figure 1.5 [Figure 1.5. The three signals.], an orthonormal basis, represented in Figure 1.6, is the following:

      φ_1(t) = (2/√T) sin(2πt/T)    0 < t < T/2,    0 elsewhere   (1.58)
      φ_2(t) = (2/√T) sin(2πt/T)    T/2 < t < T,    0 elsewhere   (1.59)

[Figure 1.6. Orthonormal basis for signals of Figure 1.5.]

Moreover,

      s_1(t) = (A√T/2) φ_1(t)                                (1.60)
      s_2(t) = (A√T/2) φ_1(t) + (A√T/2) φ_2(t)               (1.61)
      s_3(t) = (A√T/2) φ_2(t)                                (1.62)

from which the correspondence between signals and their vector representation is (see Figure 1.7):

      s_1(t)   →   s1 = [ A√T/2 , 0 ]^T                      (1.63)
      s_2(t)   →   s2 = [ A√T/2 , A√T/2 ]^T                  (1.64)
      s_3(t)   →   s3 = [ 0 , A√T/2 ]^T                      (1.65)

[Figure 1.7. Vector representation or constellation of signals of Figure 1.5.]

We note that the three signals are represented as a linear combination of only two functions (I = 2).

Definition 1.4
The vector representation of a set of M signals is often called a signal constellation.

Example 1.2.3 (4PSK)
Given the set of four signals

      s_m(t) = A cos( 2πf_0 t + (m − 1/2) π/2 )    0 < t < T,    0 elsewhere,    m = 1, 2, 3, 4      (1.66)

depicted in Figure 1.8 [Figure 1.8. Modulated 4PSK signals for f_0 = 1/T.], to determine the basis functions we write s_m(t) as

      s_m(t) = A cos[ (m − 1/2) π/2 ] cos(2πf_0 t) − A sin[ (m − 1/2) π/2 ] sin(2πf_0 t)      (1.67)

We choose the following basis:

      φ_1(t) = +√(2/T) cos(2πf_0 t)    0 < t < T,    0 elsewhere
      φ_2(t) = −√(2/T) sin(2πf_0 t)    0 < t < T,    0 elsewhere      (1.68)

One finds that

      E_{φ_1} = 1 + sin(4πf_0 T)/(4πf_0 T)      E_{φ_2} = 1 − sin(4πf_0 T)/(4πf_0 T)      (1.69)

and

      ⟨φ_1, φ_2⟩ = sin²(2πf_0 T)/(2πf_0 T)                   (1.70)

Hence, if

      f_0 = k/(2T)  (k integer)      or      f_0 ≫ 1/T       (1.71)

then it results ⟨φ_1, φ_2⟩ ≃ 0 and E_{φ_i} ≃ 1 for i = 1, 2. Under these conditions for f_0, φ_1(t) and φ_2(t) form an orthonormal basis for the four signals in Figure 1.8, whose constellation is given in Figure 1.9. [Figure 1.9. 4PSK constellation.]

1.3 Continuous-time linear systems

A time-invariant continuous-time continuous-amplitude linear system, also called analog filter, is represented in Figure 1.10, where x and y are the input and output signals, respectively, and h denotes the filter impulse response. [Figure 1.10. Analog filter as a time-invariant linear system with continuous domain.]
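The conditions (1.71) can be verified numerically. The sketch below (with arbitrarily chosen sample values of T and f_0) approximates the inner products of (1.69)–(1.70) by Riemann sums; only the magnitude of the cross-term is checked, since its sign depends on the sign convention chosen for φ_2:

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 400001)
dt = t[1] - t[0]
ip = lambda u, v: np.sum(u * v) * dt      # inner product, Eq. (1.16)

def basis_metrics(f0):
    phi1 = np.sqrt(2/T) * np.cos(2*np.pi*f0*t)
    phi2 = np.sqrt(2/T) * np.sin(2*np.pi*f0*t)
    return ip(phi1, phi1), ip(phi2, phi2), ip(phi1, phi2)

# f0 = k/(2T): the basis is orthonormal, as predicted by (1.71)
E1, E2, cross = basis_metrics(f0=2.0)     # k = 4
assert abs(E1 - 1) < 1e-3 and abs(E2 - 1) < 1e-3 and abs(cross) < 1e-3

# generic f0 not satisfying (1.71): phi1 and phi2 are not orthogonal
_, _, cross_bad = basis_metrics(f0=0.3)
print(abs(cross_bad))     # about 0.48, matching sin^2(2 pi f0 T)/(2 pi f0 T)
```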
The output at a certain instant t ∈ R is given by the convolution integral

      y(t) = ∫_{−∞}^{+∞} h(τ) x(t − τ) dτ = ∫_{−∞}^{+∞} h(t − τ) x(τ) dτ      (1.72)

In short, we will use the notation

      y(t) = x * h (t)                                       (1.73)

where the symbol '*' denotes the convolution operation. We also introduce the Fourier transform of the signal x(t), t ∈ R:

      X(f) = F[x(t)] = ∫_{−∞}^{+∞} x(t) e^{−j2πft} dt      f ∈ R      (1.74)

The inverse Fourier transform is given by

      x(t) = ∫_{−∞}^{+∞} X(f) e^{j2πft} df                   (1.75)

In the frequency domain, (1.72) becomes

      Y(f) = X(f) H(f)      f ∈ R                            (1.76)

where H is the filter frequency response. The magnitude of the frequency response, |H(f)|, is usually called the magnitude response or amplitude response. General properties of the Fourier transform are given in Table 1.1.⁵ We introduce two functions that will be extensively used:

      rect(f/F) = 1 if |f| < F/2,    0 elsewhere             (1.77)

⁵ Two important functions that will be used very often are the step function, 1(t) = 1 for t > 0 and 1(t) = 0 for t < 0, and the sign function, sgn(t) = 1 for t > 0 and sgn(t) = −1 for t < 0.
Table 1.1 Some general properties of the Fourier transform (signal ↔ transform).

   linearity:           a x(t) + b y(t)                    ↔  a X(f) + b Y(f)
   duality:             X(t)                               ↔  x(−f)
   time inverse:        x(−t)                              ↔  X(−f)
   complex conjugate:   x*(t)                              ↔  X*(−f)
   real part:           Re[x(t)] = [x(t) + x*(t)]/2        ↔  [X(f) + X*(−f)]/2
   imaginary part:      Im[x(t)] = [x(t) − x*(t)]/(2j)     ↔  [X(f) − X*(−f)]/(2j)
   scaling:             x(at), a ≠ 0                       ↔  (1/|a|) X(f/a)
   time shift:          x(t − t_0)                         ↔  e^{−j2πf t_0} X(f)
   frequency shift:     x(t) e^{j2πf_0 t}                  ↔  X(f − f_0)
   modulation:          x(t) cos(2πf_0 t + φ)              ↔  (1/2)[e^{jφ} X(f − f_0) + e^{−jφ} X(f + f_0)]
                        x(t) sin(2πf_0 t + φ)              ↔  (1/2j)[e^{jφ} X(f − f_0) − e^{−jφ} X(f + f_0)]
   differentiation:     dx(t)/dt                           ↔  j2πf X(f)
   integration:         ∫_{−∞}^{t} x(τ) dτ                 ↔  X(f)/(j2πf) + (X(0)/2) δ(f)
   convolution:         x * y (t)                          ↔  X(f) Y(f)
   correlation:         [x(τ) * y*(−τ)](t)                 ↔  X(f) Y*(f)
   product:             x(t) y(t)                          ↔  X * Y (f)
   real signal:         x(t) = x*(t)                       ⇒  X(f) = X*(−f) (X Hermitian), |X(f)|² even
   imaginary signal:    x(t) = −x*(t)                      ⇒  X(f) = −X*(−f)
   real and even:       x(t) real and even                 ⇒  X real and even
   real and odd:        x(t) real and odd                  ⇒  X imaginary and odd
   Parseval's theorem:  E_x = ∫_{−∞}^{+∞} |x(t)|² dt = ∫_{−∞}^{+∞} |X(f)|² df = E_X
   Poisson sum formula: Σ_{k=−∞}^{+∞} x(kTc) = (1/Tc) Σ_{ℓ=−∞}^{+∞} X(ℓ/Tc)
and

      sinc(t) = sin(πt)/(πt)                                 (1.78)

The following relation holds:

      F[sinc(tF)] = (1/F) rect(f/F)                          (1.79)

as illustrated in Figure 1.11. [Figure 1.11. Example of signal and Fourier transform pair: sinc(tF) ↔ (1/F) rect(f/F).]

We reserve the notation H(s) to indicate the Laplace transform of h(t), t ∈ R:

      H(s) = ∫_{−∞}^{+∞} h(t) e^{−st} dt                     (1.80)

with s a complex variable. It is easy to observe that, if the curve s = j2πf in the s-plane belongs to the convergence region of the integral in (1.80), then H(f) is related to H(s) by

      H(f) = H(s) |_{s = j2πf}                               (1.81)

A class of functions H(s) often used in practice is characterized by the ratio of two polynomials in s, each with a finite number of coefficients. H(s) is also called the transfer function of the filter.
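As a quick check of (1.81) — a sketch with an arbitrarily chosen filter h(t) = e^{−at} 1(t), whose Laplace transform 1/(s + a) converges for Re[s] > −a, so the curve s = j2πf lies in the convergence region:

```python
import numpy as np

a = 2.0
t = np.linspace(0.0, 40.0, 800001)    # truncation: e^{-a*40} is negligible
dt = t[1] - t[0]
h = np.exp(-a*t)                      # h(t) = e^{-at} 1(t), a > 0

f = 0.7
# numerical Fourier integral, Eq. (1.74)
H_num = np.sum(h * np.exp(-2j*np.pi*f*t)) * dt
# Laplace transform H(s) = 1/(s + a) evaluated at s = j 2 pi f, Eq. (1.81)
H_closed = 1.0 / (a + 2j*np.pi*f)
assert abs(H_num - H_closed) < 1e-3
```

The same closed form appears as a tabulated Fourier pair (e^{−at} 1(t) ↔ 1/(a + j2πf)), so the two routes to H(f) agree.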
Table 1.2 Examples of Fourier transform signal pairs (signal ↔ transform).

   δ(t)                        ↔  1
   1                           ↔  δ(f)
   e^{j2πf_0 t}                ↔  δ(f − f_0)
   cos(2πf_0 t)                ↔  (1/2)[δ(f − f_0) + δ(f + f_0)]
   sin(2πf_0 t)                ↔  (1/2j)[δ(f − f_0) − δ(f + f_0)]
   1(t)                        ↔  (1/2) δ(f) + 1/(j2πf)
   sgn(t)                      ↔  1/(jπf)
   rect(t/T)                   ↔  T sinc(fT)
   sinc(t/T)                   ↔  T rect(fT)
   (1 − |t|/T) rect(t/(2T))    ↔  T sinc²(fT)
   e^{−at} 1(t), a > 0         ↔  1/(a + j2πf)
   t e^{−at} 1(t), a > 0       ↔  1/(a + j2πf)²
   e^{−a|t|}, a > 0            ↔  2a/(a² + (2πf)²)
   e^{−at²}, a > 0             ↔  √(π/a) exp(−π²f²/a)

1.4 Discrete-time linear systems

A discrete-time time-invariant linear system is shown in Figure 1.12, where x(k) and y(k), k ∈ Z, are respectively the input and output signals at the time instants kTc, with sampling period Tc; Z denotes the set of integers. The impulse response of the system is denoted by {h(k)}, k ∈ Z, or more simply by h(k), k ∈ Z. [Figure 1.12. Discrete-time linear system (filter).]
The relation between the input sequence {x(k)} and the output sequence {y(k)} is given by the convolution operation:

      y(k) = Σ_{n=−∞}^{+∞} h(k − n) x(n) = x * h (k)         (1.82)

In short, we will use the notation y(k) = [x(m) * h(m)](k). For discrete-time linear systems, in the frequency domain (1.82) becomes

      Y(f) = X(f) H(f)                                       (1.83)

where all functions are periodic of period 1/Tc, and H(f), the frequency response of the filter, is given by

      H(f) = F[h(k)] = Σ_{k=−∞}^{+∞} h(k) e^{−j2πfkTc}       (1.84)

The inverse Fourier transform of the frequency response yields

      h(k) = Tc ∫_{−1/(2Tc)}^{1/(2Tc)} H(f) e^{j2πfkTc} df   (1.85)

We define as transfer function of the filter the z-transform⁶ of the impulse response h(k), given by

      H(z) = Σ_{k=−∞}^{+∞} h(k) z^{−k}                       (1.86)

so that

      H(f) = H(z) |_{z = e^{j2πfTc}}                         (1.87)

We note the property that, for x(k) = b^k, where b is a complex constant, the output is given by y(k) = H(b) b^k. We say the system is causal (anticausal) if h(k) = 0, k < 0 (if h(k) = 0, k > 0). In Table 1.3 some further properties of the z-transform are summarized.

Example 1.4.1
A fundamental example of z-transform is that of the sequence

      h(k) = a^k for k ≥ 0,    h(k) = 0 for k < 0,    |a| < 1

Applying the transform formula (1.86), we find

      H(z) = 1/(1 − a z^{−1}) = z/(z − a)                    (1.88)

under the condition |a/z| < 1.

⁶ Sometimes the D transform is used instead of the z-transform, defined as H(D) = Σ_{k=−∞}^{+∞} h(k) D^k, where D = z^{−1}.
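A numerical check of (1.84), (1.87) and (1.88) — a sketch with arbitrarily chosen values of a, Tc and f; the infinite impulse response is truncated where a^k becomes negligible:

```python
import numpy as np

a, Tc = 0.8, 1.0
N = 400                  # truncation: 0.8**400 is far below machine precision
k = np.arange(N)
h = a**k                 # h(k) = a^k, k >= 0 (causal), |a| < 1

f = 0.15 / Tc
# frequency response as a (truncated) sum, Eq. (1.84)
H_sum = np.sum(h * np.exp(-2j*np.pi*f*k*Tc))

# closed form, Eqs. (1.87)-(1.88): H(z) = 1/(1 - a z^{-1}) at z = e^{j 2 pi f Tc}
z = np.exp(2j*np.pi*f*Tc)
H_closed = 1.0 / (1.0 - a/z)
assert abs(H_sum - H_closed) < 1e-10
```

Note that |z| = 1 > |a|, so the evaluation point lies in the region of convergence |a/z| < 1 required by (1.88).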
Table 1.3 Properties of the z-transform.

    Property             Sequence                  z-transform
    linearity            a x(k) + b y(k)           a X(z) + b Y(z)
    delay                x(k - m)                  z^{-m} X(z)
    complex conjugate    x*(k)                     X*(z*)
    inverse time         x(-k)                     X(1/z)
                         x*(-k)                    X*(1/z*)
    scaling              a^{-k} x(k)               X(az)
    convolution          [x(m) * y(m)](k)          X(z) Y(z)
    correlation          [x(m) * y*(-m)](k)        X(z) Y*(1/z*)
    real sequence        x(k) = x*(k)              X(z) = X*(z*)

Example 1.4.2
Let q(t), t in R, be a continuous-time signal, in general complex-valued, with Fourier transform Q(f), f in R. We now consider the sequence obtained by sampling q(t) with period Tc:

    h_k = q(kTc),  k in Z    (1.89)

Using the Poisson formula of Table 1.1, one demonstrates that the Fourier transform of the sequence {h_k} is related to Q(f) by

    H(f) = F[h_k] = H(e^{j2*pi*f*Tc}) = (1/Tc) sum_{l=-inf}^{+inf} Q(f - l/Tc)    (1.90)

Discrete Fourier transform (DFT)
For a sequence with a finite number of samples, {g_k}, k = 0, 1, ..., N-1, the expression of the Fourier transform becomes

    G(f) = sum_{k=0}^{N-1} g_k e^{-j2*pi*f*k*Tc}    (1.91)

Evaluating G(f) at the points f = m/(N*Tc), m = 0, 1, ..., N-1, and setting G_m = G(m/(N*Tc)), we obtain

    G_m = sum_{k=0}^{N-1} g_k W_N^{km},    W_N = e^{-j2*pi/N}    (1.92)
The sequence {G_m}, m = 0, 1, ..., N-1, is called the DFT of {g_k}. The inverse of (1.92) is given by

    g_k = (1/N) sum_{m=0}^{N-1} G_m W_N^{-km},   k = 0, 1, ..., N-1    (1.93)

We note that, besides the factor 1/N, the expression of the inverse DFT (IDFT) coincides with that of the DFT, provided W_N is substituted with W_N^{-1}.

The DFT operator can be expressed in matrix form as

        | 1    1            1              ...   1                |
        | 1    W_N          W_N^2          ...   W_N^{N-1}        |
    F = | 1    W_N^2        W_N^4          ...   W_N^{2(N-1)}     |
        | :    :            :              ...   :                |
        | 1    W_N^{N-1}    W_N^{2(N-1)}   ...   W_N^{(N-1)^2}    |

with elements [F]_{i,n} = W_N^{in}, i, n = 0, 1, ..., N-1. Introducing the vector formed by the samples of the sequence {g_k}, g^T = [g_0, g_1, ..., g_{N-1}], and the vector of transform coefficients G^T = [G_0, G_1, ..., G_{N-1}], observing (1.92) it is immediate to verify the relation

    G = F g = DFT[g]    (1.94)

The inverse operator (IDFT) is given by

    F^{-1} = (1/N) F*    (1.95)

hence

    g = (1/N) F* G    (1.96)

We note that F = F^T, and that (1/sqrt(N)) F is a unitary matrix.(7) We also observe that direct computation of (1.92) requires N(N-1) complex additions and N^2 complex multiplications; the algorithm known as fast Fourier transform (FFT), however, allows computation of the DFT by about N log2 N complex additions and (N/2) log2 N complex multiplications.(8)

The following property holds: if C is a right circulant square matrix, whose rows are obtained by successive shifts to the right of the first row, then F C F^{-1} is a diagonal matrix whose elements are given by the DFT of the first row of C.

(7) A square matrix A is unitary if A^H A = I, where I is the identity matrix, i.e. a matrix for which all elements are zero except the elements on the main diagonal, which are all equal to one.
(8) The computational complexity of the FFT is often expressed as N log2 N.
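The matrix relations (1.94)-(1.96) can be verified directly. The sketch below (standard-library Python, with our own helper names) builds F for N = 8, transforms a vector, and recovers it with (1/N) F*:

```python
import cmath

def dft_matrix(N):
    """Matrix F with elements [F]_{i,n} = W_N^{i n}, W_N = e^{-j 2 pi / N}."""
    W = cmath.exp(-2j * cmath.pi / N)
    return [[W ** (i * n) for n in range(N)] for i in range(N)]

def matvec(A, v):
    return [sum(A[i][n] * v[n] for n in range(len(v))) for i in range(len(A))]

N = 8
F = dft_matrix(N)
g = [complex(k, 0) for k in range(N)]
G = matvec(F, g)                                           # G = F g, (1.94)
Finv = [[F[i][n].conjugate() / N for n in range(N)] for i in range(N)]   # (1/N) F*, (1.95)
g_rec = matvec(Finv, G)
assert all(abs(g_rec[k] - g[k]) < 1e-9 for k in range(N))  # round trip (1.96)
assert abs(F[1][2] - F[2][1]) < 1e-12                      # F is symmetric, F = F^T
```

This direct matrix product costs O(N^2) operations, which is exactly the complexity the FFT reduces to O(N log2 N).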
Circular and linear convolution via DFT
Let the two sequences x and h have finite support of L_x and N samples, respectively (see Figure 1.13):

    x(k) = 0 for k < 0 and for k > L_x - 1    (1.100)
    h(k) = 0 for k < 0 and for k > N - 1    (1.101)

Figure 1.13. Time-limited signals: {x(k)}, k = 0, ..., L_x - 1, and {h(k)}, k = 0, ..., N - 1.

We define the periodic signals of period L

    x_repL(k) = sum_{l=-inf}^{+inf} x(k - lL)    (1.102)
    h_repL(k) = sum_{l=-inf}^{+inf} h(k - lL)    (1.103)

where, in order to avoid time aliasing, it must be

    L >= L_x  and  L >= N    (1.104)

Definition 1.6
The circular convolution between x and h is a periodic sequence of period L defined as

    y^(circ)(k) = sum_{i=0}^{L-1} x_repL(i) h_repL(k - i)    (1.105)

with main period corresponding to k = 0, 1, ..., L - 1.
Then, taking the L-point DFT of the two sequences, if we indicate with {X_m}, {H_m}, and {Y_m^(circ)}, m = 0, 1, ..., L - 1, the L-point DFTs of x, h, and y^(circ), respectively, (1.105) becomes

    Y_m^(circ) = X_m H_m,   m = 0, 1, ..., L - 1    (1.106)

In vector notation (1.106) becomes(9)

    Y^(circ) = [Y_0^(circ), Y_1^(circ), ..., Y_{L-1}^(circ)]^T = diag{DFT[x]} H    (1.107)

where H is the column vector given by the L-point DFT of the sequence h.

(9) The notation diag{v} denotes a diagonal matrix whose elements on the diagonal are equal to the components of the vector v.

We are often interested in the linear convolution between x and h given by (1.82):

    y(k) = x * h(k) = sum_{i=0}^{N-1} h(i) x(k - i)    (1.108)

whose support is k = 0, 1, ..., L_x + N - 2. We give below two relations between the circular convolution y^(circ) and the linear convolution y.

Relation 1. If

    L >= L_x + N - 1    (1.109)

then

    y^(circ)(k) = y(k),   k = 0, 1, ..., L - 1    (1.110)

To compute the convolution between the two finite-length sequences x and h, (1.109) and (1.110) require that both sequences be completed with zeros (zero padding) to get a length of L = L_x + N - 1 samples. Then, performing the product (1.106) and taking the inverse transform of the result, one obtains the desired linear convolution.

Relation 2. If instead we use L = L_x, with L > N (1.111), then, with reference to Figure 1.14, it is easy to see that the result of the circular convolution coincides with {y(k)} only for a delay k such that the product between non-zero samples of the two periodic sequences h_repL and x_repL near the wrap-around is avoided. This is achieved only for k >= N - 1 and k <= L - 1, hence y^(circ) and y coincide only for the instants

    y^(circ)(k) = y(k),   k = N - 1, ..., L - 1    (1.112)
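Relation 1 can be checked end to end with a naive DFT (standard-library Python; the helper names are ours). Both sequences are zero-padded to L = L_x + N - 1, multiplied bin by bin as in (1.106), and inverse-transformed; the result matches the direct linear convolution (1.108):

```python
import cmath

def dft(v):
    M = len(v)
    return [sum(v[n] * cmath.exp(-2j*cmath.pi*m*n/M) for n in range(M)) for m in range(M)]

def idft(V):
    M = len(V)
    return [sum(V[m] * cmath.exp(2j*cmath.pi*m*n/M) for m in range(M)) / M for n in range(M)]

x = [1.0, 2.0, -1.0, 0.5]          # L_x = 4
h = [0.5, 0.25, 0.125]             # N = 3
L = len(x) + len(h) - 1            # zero-pad both to L = L_x + N - 1, per (1.109)
X = dft(x + [0.0] * (L - len(x)))
H = dft(h + [0.0] * (L - len(h)))
y = idft([X[m] * H[m] for m in range(L)])   # product (1.106), then inverse transform

# direct linear convolution (1.108) for comparison
y_direct = [sum(h[i] * x[k-i] for i in range(len(h)) if 0 <= k-i < len(x)) for k in range(L)]
assert all(abs(y[k].real - y_direct[k]) < 1e-9 for k in range(L))
assert all(abs(y[k].imag) < 1e-9 for k in range(L))
```

With the FFT in place of the naive DFT, this is the standard O(L log L) fast convolution.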
Figure 1.14. Illustration of the circular convolution operation between {x(k)} and {h(k)}.

An alternative to (1.112) is to consider, instead of the finite sequence x, an extended sequence x^(px) that is obtained by partially repeating x with a cyclic prefix of N_px samples:

    x^(px)(k) = x(L_x + k)   for k = -N_px, ..., -1
    x^(px)(k) = x(k)         for k = 0, 1, ..., L_x - 1    (1.113)

Let y^(px) be the linear convolution between x^(px) and h. If N_px >= N - 1, it is easy to prove the following relation:

    y^(px)(k) = y^(circ)(k),   k = 0, 1, ..., L_x - 1    (1.114)

Let us define

    z(k) = y^(px)(k),   k = 0, 1, ..., L_x - 1    (1.115)

Then from (1.106) the following relation between the corresponding L_x-point DFTs is obtained:

    Z_m = X_m H_m,   m = 0, 1, ..., L_x - 1    (1.116)

Convolution by the overlap-save method
For a very long sequence x, the application of (1.112) leads to the overlap-save method to determine the linear convolution between x and h. Let us subdivide the sequence {x(k)} into blocks of L samples such that adjacent blocks are characterized by an overlap of (N - 1) samples. It is not restrictive to assume that the first (N - 1) samples of {x(k)} are zero; if this were not true, it would be sufficient to shift the input by (N - 1) samples and neglect the first (N - 1) samples of the sequence {y(k)}.
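The cyclic-prefix relation (1.114) is worth verifying numerically, since it is the mechanism that lets OFDM-style receivers replace linear channel convolution with a circular one. A minimal standard-library sketch (helper names are ours):

```python
def linear_conv(x, h):
    L = len(x) + len(h) - 1
    return [sum(h[i]*x[k-i] for i in range(len(h)) if 0 <= k-i < len(x)) for k in range(L)]

def circular_conv(x, h):
    """Circular convolution (1.105) with L = len(x)."""
    L = len(x)
    hp = h + [0.0] * (L - len(h))
    return [sum(hp[i] * x[(k - i) % L] for i in range(L)) for k in range(L)]

x = [1.0, -2.0, 0.5, 3.0, -1.0, 2.0]   # L_x = 6
h = [0.5, 0.25, -0.125]                 # N = 3
Npx = len(h) - 1                        # cyclic prefix of N - 1 samples
x_px = x[-Npx:] + x                     # prefix = last Npx samples of x, per (1.113)
y_px = linear_conv(x_px, h)
y_circ = circular_conv(x, h)

# discarding the first Npx outputs, the linear convolution of the extended
# sequence reproduces the circular convolution of x and h, as in (1.114)
assert all(abs(y_px[Npx + k] - y_circ[k]) < 1e-12 for k in range(len(x)))
```

Intuitively, the prefix pre-loads the convolution with exactly the "wrapped-around" samples that the circular operation would have used.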
A fast procedure to compute the linear convolution {y(k)}, k = 0, 1, ..., L - 1, is the following:(10)

1. Loading
    x'^T = [x(-(N-1)), ..., x(-1), x(0), x(1), ..., x(L-N)]    (1.117)
            |--- N-1 terms ----|
    h'^T = [h(0), h(1), ..., h(N-1), 0, ..., 0]    (1.118)
                                     |- L-N zeros -|
in which we have assumed x(k) = 0, k = -(N-1), ..., -1.

2. Transform
    H' = DFT[h']  (vector)    (1.119)
    X' = diag{DFT[x']}  (matrix)    (1.120)

3. Matrix product
    Y' = X' H'    (1.121)

4. Inverse transform
    y'^T = DFT^{-1}[Y']^T = [#, ..., #, y(0), y(1), ..., y(L-N)]    (1.122)
                             |- N-1 terms -|

where the symbol # denotes a component that is neglected.

Successive loadings are obtained by shifting the input by L - N + 1 samples: the i-th loading contains

    x'^T = [x(i(L-N+1) - (N-1)), ..., x(i(L-N+1)), ..., x(i(L-N+1) + L - N)]

and, after steps 2-4, yields the desired output samples y(k), k = i(L-N+1), ..., i(L-N+1) + L - N. The algorithm proceeds until the entire input sequence is processed.

(10) In this section the superscript ' indicates a vector of L components.
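The block bookkeeping of steps 1-4 can be sketched in standard-library Python (the function names are ours; a naive circular convolution stands in for the DFT/IDFT product of steps 2-4, which it equals by (1.106)):

```python
def linear_conv(x, h):
    L = len(x) + len(h) - 1
    return [sum(h[i]*x[k-i] for i in range(len(h)) if 0 <= k-i < len(x)) for k in range(L)]

def circular_conv(x, h, L):
    xp = x + [0.0] * (L - len(x))
    hp = h + [0.0] * (L - len(h))
    return [sum(hp[i] * xp[(k - i) % L] for i in range(L)) for k in range(L)]

def overlap_save(x, h, L):
    """Overlap-save: blocks of L samples overlapping by N - 1 samples."""
    N = len(h)
    state = [0.0] * (N - 1)                  # the N-1 samples preceding the block
    y = []
    for s in range(0, len(x), L - N + 1):
        block = state + x[s:s + L - N + 1]
        block += [0.0] * (L - len(block))    # pad a final partial block
        yc = circular_conv(block, h, L)      # equals DFT-product + IDFT
        y.extend(yc[N - 1:])                 # discard the first N-1 samples
        state = block[-(N - 1):]             # save the overlap for the next block
    return y[:len(x)]

x = [float(k % 5) for k in range(30)]
h = [0.5, -0.25, 0.125]
y_ref = linear_conv(x, h)
y = overlap_save(x, h, L=8)
assert all(abs(y[k] - y_ref[k]) < 1e-9 for k in range(len(x)))
```

Each block of L samples yields L - N + 1 valid output samples; the discarded first N - 1 samples are exactly the "#" components of (1.122).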
IIR and FIR filters
An important class of linear systems is identified by the input-output relation

    sum_{n=0}^{p} a_n y(k - n) = sum_{n=0}^{q} b_n x(k - n)    (1.127)

where we can set a_0 = 1 without loss of generality. If the system is causal, (1.127) becomes

    y(k) = - sum_{n=1}^{p} a_n y(k - n) + sum_{n=0}^{q} b_n x(k - n),   k >= 0    (1.128)

and the transfer function for such systems assumes the form

    H(z) = Y(z)/X(z) = ( sum_{n=0}^{q} b_n z^{-n} ) / ( 1 + sum_{n=1}^{p} a_n z^{-n} )
         = b_0 prod_{n=1}^{q} (1 - z_n z^{-1}) / prod_{n=1}^{p} (1 - p_n z^{-1})    (1.129)

where {z_n}, n = 1, ..., q, and {p_n}, n = 1, ..., p, are, respectively, the zeros and poles of H(z). Equation (1.129) generally defines an infinite impulse response (IIR) filter. In the case in which a_n = 0, n = 1, ..., p, (1.129) reduces to

    H(z) = sum_{n=0}^{q} b_n z^{-n}    (1.130)

and we obtain a finite impulse response (FIR) filter with h(n) = b_n, n = 0, 1, ..., q.

To get the impulse response coefficients, assuming known the z-transform H(z), if q < p and assuming that all poles are distinct, we can expand H(z) in partial fractions and apply the linear property of the z-transform (see Table 1.3, page 19):

    H(z) = sum_{n=1}^{p} r_n / (1 - p_n z^{-1})
    ==>  h(k) = sum_{n=1}^{p} r_n p_n^k  for k >= 0,   h(k) = 0 for k < 0    (1.131)

where

    r_n = H(z)[1 - p_n z^{-1}]|_{z = p_n}    (1.132)

Definition 1.7
A causal system is stable (bounded input-bounded output stability) if |p_n| < 1, for all n.
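The partial-fraction form (1.131) can be cross-checked against the recursion (1.128). The sketch below (standard-library Python, with poles and residues we chose for illustration) uses H(z) = 1 / [(1 - 0.5 z^{-1})(1 + 0.25 z^{-1})], i.e. a = [1, -0.25, -0.125], and compares the impulse response generated by the difference equation with r_1 p_1^k + r_2 p_2^k:

```python
# difference equation (1.128): y(k) = 0.25 y(k-1) + 0.125 y(k-2) + x(k)
a = [1.0, -0.25, -0.125]          # denominator coefficients, a_0 = 1
p1, p2 = 0.5, -0.25               # poles of H(z)
r1 = 1.0 / (1.0 - p2 / p1)        # residues via (1.132): r_n = prod_{m != n} 1/(1 - p_m/p_n)
r2 = 1.0 / (1.0 - p1 / p2)

K = 20
h_rec = []
for k in range(K):
    xk = 1.0 if k == 0 else 0.0   # unit impulse input
    yk = xk - sum(a[n] * h_rec[k - n] for n in (1, 2) if k - n >= 0)
    h_rec.append(yk)

h_pf = [r1 * p1**k + r2 * p2**k for k in range(K)]   # partial-fraction form (1.131)
assert all(abs(h_rec[k] - h_pf[k]) < 1e-12 for k in range(K))
```

Both poles have magnitude below one, so by Definition 1.7 this causal filter is stable and h(k) decays geometrically.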
Definition 1.8
The system is minimum phase (maximum phase) if |p_n| < 1 and |z_n| <= 1 (|p_n| > 1 and |z_n| > 1), for all n.

Example 1.4.3
It is interesting to determine the phase of a system for a given impulse response. Let us consider the system with transfer function H_1(z) and impulse response h_1(k) shown in Figure 1.15a. After determining the zeros of the transfer function, we factorize H_1(z) as:

    H_1(z) = b_0 prod_{n=1}^{4} (1 - z_n z^{-1})    (1.133)

H_1(z) is minimum phase. We now observe that the magnitude of the frequency response does not change if 1/z_n* is replaced with z_n in (1.133). If we move all the zeros outside the unit circle, we get a maximum-phase system H_2(z), whose impulse response is shown in Figure 1.15b. A general case, that is a transfer function with some zeros inside and others outside the unit circle, is given in Figure 1.15c. The coefficients of the impulse responses h_1, h_2, and h_3 are given in Table 1.4; they are normalized so that the three impulse responses have equal energy.

Table 1.4 Impulse responses h(0), ..., h(4) of systems having the same magnitude of the frequency response: h_1 (minimum phase), h_2 (maximum phase), h_3 (general case).
We define the partial energy of a causal impulse response as

    E(k) = sum_{i=0}^{k} |h(i)|^2    (1.134)

As shown in Figure 1.15, comparing the partial-energy sequences for the three impulse responses, one finds that the minimum (maximum) phase system yields the largest (smallest) {E(k)}, the magnitude of the frequency responses being equal. In other words, a minimum (maximum) phase system concentrates all its energy on the first (last) samples of the impulse response. Moreover, among all systems having the same magnitude response |H(e^{j2*pi*f*Tc})|, the minimum (maximum) phase system presents a phase response, arg H(e^{j2*pi*f*Tc}), which is below (above) the phase response of all other systems.

Extending our previous considerations also to IIR filters, H_1(z) is minimum phase if H_1(z) is a ratio of polynomials in z^{-1} with poles and zeros inside the unit circle. If h_1 is a causal minimum-phase filter, then

    H_max(z) = z^{-q} H_min*(1/z*)

is a causal maximum-phase filter. In this text we use the notation {h_2(n)} = {h_1*(q - n)}, n = 0, 1, ..., q, or equivalently {h_2(n)} = B{h_1*(n)}, where B is the backward operator that orders the elements of a sequence from the last to the first. Moreover,

    H_max(z) = K H_min*(1/z*)

where K is a constant, is an anticausal maximum-phase filter. H_max(z) is a ratio of polynomials in z with poles and zeros outside the unit circle.

Figure 1.15. Impulse response magnitudes and zero locations for three systems having the same frequency response magnitude.
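The minimum/maximum phase pairing {h_2(n)} = {h_1*(q - n)} can be verified numerically: time reversal (with conjugation) leaves the magnitude response untouched while reversing how the energy is distributed. A standard-library sketch with a real minimum-phase FIR filter of our choosing (zeros at 0.4 and 0.5, inside the unit circle):

```python
import cmath

h1 = [1.0, -0.9, 0.2]                     # minimum phase: zeros of 1 - 0.9 z^-1 + 0.2 z^-2
q = len(h1) - 1
h2 = [h1[q - n] for n in range(q + 1)]    # h2(n) = h1*(q - n): maximum phase

def mag(h, f):
    """|H(e^{j 2 pi f})| of an FIR filter with coefficients h (Tc = 1)."""
    return abs(sum(h[n] * cmath.exp(-2j * cmath.pi * f * n) for n in range(len(h))))

for f in (0.0, 0.13, 0.31, 0.5):
    assert abs(mag(h1, f) - mag(h2, f)) < 1e-9   # same magnitude response

# partial energies (1.134): the minimum-phase filter dominates at every k
E1 = [sum(v * v for v in h1[:k + 1]) for k in range(q + 1)]
E2 = [sum(v * v for v in h2[:k + 1]) for k in range(q + 1)]
assert all(E1[k] >= E2[k] - 1e-12 for k in range(q + 1))
assert abs(E1[-1] - E2[-1]) < 1e-12              # equal total energy
```

Here E1 = [1.0, 1.81, 1.85] grows fastest at the start, exactly the energy-concentration property stated above.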
In Appendix 1.A multirate transformations for systems are described, in which the time domain of the input is different from that of the output. In particular, decimator and interpolator filters are introduced, together with their efficient implementations.

1.5 Signal bandwidth

Definition 1.9
The support of a signal x(xi), xi in R, is the set of values xi in R for which |x(xi)| != 0.

Let us consider a filter with impulse response h and frequency response H(f), f in R. If h assumes real values, then H is Hermitian, H(-f) = H*(f), so that |H(f)| is an even function, and the classification of Figure 1.16 is usually done, in which the filter is a lowpass filter (LPF) if the support of |H(f)| includes the origin; otherwise it is a passband filter (PBF). If h assumes complex values, the terminology is less standard; we adopt the classification of Figure 1.17.

Figure 1.16. Classification of real-valued analog filters on the basis of the support of |H(f)|.
Figure 1.17. Classification of complex-valued analog filters on the basis of the support of |H(f)|.

Analogously, for a signal x we will use the same denomination, and we will say that x is a baseband (BB) or passband (PB) signal depending on whether the support of |X(f)| includes or not the origin.

Definition 1.10
In general, for a signal x, the set of positive frequencies such that |X(f)| != 0 is called passband or simply band B:

    B = {f >= 0 : |X(f)| != 0}    (1.135)

We note that, for a real-valued signal x, as |X(f)| is an even function, B is equivalent to the support of X limited to positive frequencies. The bandwidth(11) of x is given by the measure of B:

    B = integral over B of df    (1.140)

(11) The signal bandwidth may be given different definitions.

Let us consider an LPF having frequency response H(f), f in R. The filter gain H_0 is usually defined as H_0 = |H(0)|; other definitions are the average gain of the filter in the passband B, or max_f |H(f)|. We give the following four definitions for the bandwidth B of h.

a) First zero:

    B = min{f > 0 : H(f) = 0}    (1.136)
b) Based on amplitude, bandwidth at A dB:

    B = max{ f > 0 : |H(f)|/H_0 = 10^{-A/20} }    (1.137)

Typically A = 3, 40, or 60.

c) Based on energy, bandwidth at p%:

    ( integral from 0 to B of |H(f)|^2 df ) / ( integral from 0 to +inf of |H(f)|^2 df ) = p/100    (1.138)

Typically p = 90 or 99.

d) Equivalent noise bandwidth:

    B = ( integral from 0 to +inf of |H(f)|^2 df ) / H_0^2    (1.139)

For discrete-time filters, for which H is periodic of period 1/Tc, the same definitions hold, with the caution of considering the support of |H(f)| within a period. In the case of discrete-time highpass filters (HPF), the passband will extend from a certain frequency f_1 up to 1/(2Tc). In the case of a complex-valued signal x, B is equivalent to the support of X, and B is thus given by the measure of the entire support.

For example, with regard to the signals of Figure 1.16, we have that for an LPF B = f_2, whereas for a PBF B = f_2 - f_1. Figure 1.18 illustrates the various definitions for a particular |H(f)|.

The sampling theorem
As discrete-time signals are often obtained by sampling continuous-time signals, we will state the following fundamental theorem. Let q(t), t in R, be a continuous-time signal, in general complex-valued, whose Fourier transform Q(f) has support within an interval B of finite measure B_0. The samples of the signal q(t), taken with period Tc,

    h_k = q(kTc)    (1.141)

univocally represent the signal q(t), t in R, under the condition that the sampling frequency 1/Tc satisfies the relation

    1/Tc >= B_0    (1.142)

For the proof, we refer the reader to [1]. B_0 is often referred to as the minimum sampling frequency. If 1/Tc < B_0 the signal cannot be perfectly reconstructed from its samples, originating the so-called aliasing phenomenon in the frequency-domain signal representation.
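As a numerical illustration of definitions b) and d), consider the one-pole response H(f) = 1/(a + j2*pi*f) from Table 1.2. Its equivalent noise bandwidth works out analytically to a/4, and its 3 dB frequency to a/(2*pi); the standard-library sketch below (our own variable names, crude rectangular integration) confirms both:

```python
import math

a = 2.0
H0 = 1.0 / a                                   # |H(0)| for H(f) = 1/(a + j 2 pi f)
mag2 = lambda f: 1.0 / (a * a + (2 * math.pi * f) ** 2)   # |H(f)|^2

# equivalent noise bandwidth (1.139) by numerical integration
df, F = 1e-3, 200.0
energy = sum(mag2(k * df) for k in range(int(F / df))) * df
B_eq = energy / H0 ** 2
assert abs(B_eq - a / 4) < 1e-2                # analytical value a/4

# 3 dB bandwidth (1.137) with A = 3: |H(f)| = H0 / sqrt(2) at f = a/(2 pi)
f3 = a / (2 * math.pi)
assert abs(math.sqrt(mag2(f3)) - H0 / math.sqrt(2)) < 1e-12
```

For this filter the two bandwidth definitions differ (a/4 versus a/(2*pi) ~ a/6.28), a reminder that the quoted "bandwidth" of a system depends on which definition is in use.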
Figure 1.18. The real signal bandwidth following the definitions of: 1) bandwidth at first zero: B_z = 0.652 Hz; 2) amplitude-based bandwidth: B_{3 dB} = 0.5 Hz, B_{40 dB} = 0.87 Hz, B_{50 dB} = 1.62 Hz; 3) energy-based bandwidth: B_{E(p=90)} = 1.362 Hz, B_{E(p=99)} = 1.723 Hz; 4) equivalent noise bandwidth: B_{req} = 0.5 Hz.

In turn, the signal q(t), t in R, can be reconstructed from its samples {h_k} according to the scheme of Figure 1.19, where an interpolation filter is employed, having an ideal frequency response given by

    G_I(f) = 1 for f in B,  0 elsewhere    (1.143)

Figure 1.19. Operation of (a) sampling and (b) interpolation.

We note that for real-valued baseband signals B_0 = 2B. For passband signals, care must be taken in the choice of B_0 >= 2B to avoid aliasing between the positive and negative frequency components of Q(f).
Heaviside conditions for the absence of signal distortion
Let us consider a filter having frequency response

    H(f) = H_0 e^{-j2*pi*f*t_0},   f in B    (1.144)

where H_0 and t_0 are two non-negative constants, and B is the passband of the filter input signal x. Then the output is given by

    Y(f) = H(f) X(f) = H_0 X(f) e^{-j2*pi*f*t_0}    (1.145)

or, in the time domain,

    y(t) = H_0 x(t - t_0)    (1.146)

In other words, for a filter of the type (1.144), the signal at the input is reproduced at the output with a gain factor H_0 and a delay t_0. A filter of the type (1.144) satisfies the Heaviside conditions for the absence of signal distortion and is characterized by:

1. constant magnitude

    |H(f)| = H_0,   f in B    (1.147)

2. linear phase(12)

    arg H(f) = -2*pi*f*t_0,   f in B    (1.148)

3. constant group delay, also called envelope delay

    tau(f) = -(1/(2*pi)) d arg H(f)/df = t_0,   f in B    (1.149)

We emphasize that it is sufficient that the Heaviside conditions are verified within the support of X(f): as |X(f)| = 0 outside the support, there the filter frequency response may be arbitrary. We show in Figure 1.20 the frequency response of a PBF, with bandwidth B = f_2 - f_1, that satisfies the conditions stated by Heaviside.

(12) For a complex number c, arg c denotes the phase of c (see note 3, page 441).

Figure 1.20. Characteristics of a filter satisfying the conditions for the absence of signal distortion in the frequency interval (f_1, f_2).
1.6 Passband signals

Complex representation
For a passband signal x it is convenient to introduce an equivalent representation in terms of a baseband signal x^(bb). Let x be a PB real-valued signal with Fourier transform as illustrated in Figure 1.21, with passband that extends from f_1 to f_2. It is now convenient to introduce a suitable frequency f_0, called reference carrier frequency, which usually belongs to the passband (f_1, f_2) of x. The following two procedures can be adopted to obtain x^(bb).

1. PB filter. Referring to Figure 1.21 and to the transformations illustrated in Figure 1.22, given x we extract its positive frequency components using an analytic filter or phase splitter; x^(a) is called the analytic signal or pre-envelope of x. Let h^(a) be a complex PB filter having the following ideal frequency response:

    H^(a)(f) = 2 for f > 0,  0 for f < 0    (1.150)

In practice, it is sufficient that |H^(a)(f)| ~ 2 in the passband of x, which extends from f_1 to f_2, and |H^(a)(f)| ~ 0 in the stopband, which extends from -f_2 to -f_1.

Figure 1.21. Illustration of transformations to obtain the baseband equivalent signal x^(bb) around the carrier frequency f_0 using a phase splitter.
The filter output is

    x^(a)(t) = x * h^(a)(t)    (1.151)

and, in the frequency domain,

    X^(a)(f) = X(f) H^(a)(f) = 2 X(f) 1(f)    (1.152)

In other words, X^(a) is given by the components of X at positive frequencies, scaled by 2. The signal x^(bb), also called complex envelope of x around the carrier frequency f_0, is obtained from x^(a) by a frequency shift of f_0:

    x^(bb)(t) = x^(a)(t) e^{-j2*pi*f_0*t}   <-->   X^(bb)(f) = X^(a)(f + f_0)    (1.153)

The signal x^(bb) is the baseband equivalent of x.

2. BB filter. One gets the same result using a frequency shift of x followed by a lowpass filter (see Figures 1.23 and 1.24). It is immediate to determine the relation between the frequency responses of the filters of Figure 1.22 and Figure 1.23:

    H^(a)(f) = H(f - f_0)    (1.154)

From (1.154) one can derive the relation between the impulse response of the analytic filter and the impulse response of the lowpass filter:

    h^(a)(t) = h(t) e^{j2*pi*f_0*t}    (1.155)

Relation between x and x^(bb)
A simple analytical relation exists between a real signal x and its complex envelope. In fact, making use of the property X(-f) = X*(f), it follows

    X(f) = (1/2)[X^(a)(f) + X^(a)*(-f)]    (1.156)

hence

    x(t) = Re[x^(a)(t)]    (1.157)

and, equivalently,

    x(t) = Re[x^(bb)(t) e^{j2*pi*f_0*t}]    (1.158)

as illustrated in Figure 1.25.
Figure 1.23. Transformations to obtain the baseband equivalent signal x^(bb) around the carrier frequency f_0 using a lowpass filter.

Figure 1.24. Illustration of transformations to obtain the baseband equivalent signal x^(bb) around the carrier frequency f_0 using a lowpass filter.

Figure 1.25. Relation between a signal, its complex envelope and the analytic signal.
Baseband components of a PB signal. We introduce the notation

    x_I^(bb)(t) = Re[x^(bb)(t)]    (1.159)
    x_Q^(bb)(t) = Im[x^(bb)(t)]    (1.160)

called in-phase and quadrature components of x, respectively; both are real-valued baseband signals, and

    x^(bb)(t) = x_I^(bb)(t) + j x_Q^(bb)(t)    (1.161)

Substituting (1.161) in (1.158) we obtain

    x(t) = x_I^(bb)(t) cos(2*pi*f_0*t) - x_Q^(bb)(t) sin(2*pi*f_0*t)    (1.162)

as illustrated in Figure 1.26.

Figure 1.26. Relation between a signal and its baseband components.

Conversely, given x, to get the baseband components one can use the scheme of Figure 1.24 and the relations (1.159) and (1.160). If the frequency response H(f) has Hermitian-symmetric characteristics with respect to the origin, h is real and the scheme of Figure 1.27a holds. The scheme of Figure 1.27b employs instead an ideal Hilbert filter with frequency response given by

    H^(h)(f) = e^{-j(pi/2) sgn(f)} = -j sgn(f)    (1.163)

Magnitude and phase of H^(h)(f) are shown in Figure 1.28. The filter (1.163) has an impulse response given by (see Table 1.2 on page 17)

    h^(h)(t) = 1/(pi*t)    (1.164)

(13) We note that the ideal Hilbert filter h^(h) phase-shifts by -pi/2 the positive-frequency components of the input and by pi/2 the negative-frequency components. To simplify the notation, in block diagrams a Hilbert filter is indicated as "-pi/2".
Figure 1.27. Relations to derive the baseband signal components.

Comparing the frequency responses of the analytic filter (1.150) and of the Hilbert filter (1.163), we obtain the relation

    H^(a)(f) = 1 + j H^(h)(f)    (1.165)

Consequently, if x is the input signal, the output of the Hilbert filter (also denoted as Hilbert transform of x) is

    x^(h)(t) = x * h^(h)(t) = (1/pi) integral from -inf to +inf of x(tau)/(t - tau) dtau    (1.166)
Figure 1.28. Magnitude and phase responses of the ideal Hilbert filter.

From (1.165), letting x^(h)(t) = x * h^(h)(t), the analytic signal can be expressed as

    x^(a)(t) = x(t) + j x^(h)(t)    (1.167)

Then, from (1.153), (1.160) and (1.161), it results

    x_I^(bb)(t) = x(t) cos(2*pi*f_0*t) + x^(h)(t) sin(2*pi*f_0*t)    (1.168)
    x_Q^(bb)(t) = x^(h)(t) cos(2*pi*f_0*t) - x(t) sin(2*pi*f_0*t)    (1.169)

as illustrated in Figure 1.27b. Moreover, noting that (-j sgn f)(-j sgn f) = -1, taking the Hilbert transform of the Hilbert transform of a signal we get the initial signal with the sign changed.(14)

(14) We recall that the design of a filter, and in particular of a Hilbert filter, requires the introduction of a suitable delay. We note that in practical systems the transformations to obtain, e.g., the complex envelope, the analytic signal, or the Hilbert transform of a given signal, are implemented by filtering operations; consequently, we are only able to produce an output with a delay t_D, x(t - t_D). In the block diagram of Figure 1.27, also x(t) and the various sinusoidal waveforms must then be delayed accordingly.
However, it is usually more convenient to perform signal analysis in the frequency domain by the Fourier transform. In the following two examples we use frequency-domain techniques to obtain the complex envelope of a PB signal.

Example 1.6.1
Let x(t) be a sinusoidal signal,

    x(t) = A cos(2*pi*f_0*t + phi_0)    (1.172)

Then

    X(f) = (A/2) e^{j*phi_0} delta(f - f_0) + (A/2) e^{-j*phi_0} delta(f + f_0)    (1.173)

The analytic signal is given by

    X^(a)(f) = A e^{j*phi_0} delta(f - f_0)   <-->   x^(a)(t) = A e^{j*phi_0} e^{j2*pi*f_0*t}    (1.174)

and, using f_0 as reference carrier frequency,

    X^(bb)(f) = A e^{j*phi_0} delta(f)   <-->   x^(bb)(t) = A e^{j*phi_0}    (1.175)

Example 1.6.2
Let

    x(t) = A sinc(Bt) cos(2*pi*f_0*t)    (1.176)

with Fourier transform given by

    X(f) = (A/(2B)) [ rect((f - f_0)/B) + rect((f + f_0)/B) ]    (1.177)

We note that we have chosen as reference carrier frequency of the complex envelope the same carrier frequency as in (1.176). Then

    x^(bb)(t) = A sinc(Bt)   and   X^(bb)(f) = (A/B) rect(f/B)    (1.179)

as illustrated in Figure 1.29.

Another analytical technique to get the expression of the signal after the various transformations is obtained by applying the following theorem.
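Example 1.6.1 can be reproduced in discrete time: sampling the sinusoid, applying the analytic filter (1.150) in the DFT domain (double the positive-frequency bins, zero the negative ones), and frequency-shifting by f_0 recovers the constant complex envelope A e^{j*phi_0}. A standard-library sketch with parameters of our choosing:

```python
import cmath, math

N, k0, A, phi = 64, 5, 1.5, 0.7          # k0/N plays the role of f0*Tc
x = [A * math.cos(2 * math.pi * k0 * n / N + phi) for n in range(N)]

def dft(v):
    M = len(v)
    return [sum(v[n] * cmath.exp(-2j*cmath.pi*m*n/M) for n in range(M)) for m in range(M)]

X = dft(x)
# analytic filter (1.150): double positive frequencies, suppress negative ones
Xa = [X[0]] + [2 * X[m] for m in range(1, N // 2)] + [X[N // 2]] + [0.0] * (N // 2 - 1)
xa = [sum(Xa[m] * cmath.exp(2j*cmath.pi*m*n/N) for m in range(N)) / N for n in range(N)]

# |x^(a)(t)| is the envelope: constant and equal to A for a pure sinusoid
assert all(abs(abs(xa[n]) - A) < 1e-8 for n in range(N))
# complex envelope (1.153) around the carrier: x^(bb) = A e^{j phi}, as in (1.175)
xbb = [xa[n] * cmath.exp(-2j * cmath.pi * k0 * n / N) for n in range(N)]
assert all(abs(xbb[n] - A * cmath.rect(1, phi)) < 1e-8 for n in range(N))
```

The zeroed negative-frequency half of the spectrum is exactly the action of the phase splitter of Figure 1.22, here realized as a block (DFT-domain) operation.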
Corollary 1. 1.a/Ł .40 Chapter 1.1 Let the product of two real signals be x. then the analytic signal of x is related to that of c by: x .t/ and x .t/ 2 c .157). Theorem 1.180) where a is a BB signal with Ba D [0. f 0 C B].t/ D a.t/ D Substituting (1.183) have disjoint support in the frequency domain and (1.h/ . the two terms in (1. B/ and c is a PB signal with Bc D [ f 0 .t/ D a. C1/.t/c.a/Ł .t/ 1 c. Frequency response of a PB signal and corresponding complex envelope.t/ (1.182) in (1. Elements of signal theory A 2B −f 0 − B B − f0 −f 0 + 2 2 A B X (f) 0 X (bb)(f) f0 − f0 f0 + − B 2 0 B 2 f R Figure 1.181) (1. while that of the second is equal to .t/ D a.t/ c.183) 1 .a/ 1 .29.t/ C 2 c In the frequency domain the support of the ﬁrst term in (1.bb/ .184) ∋ ∋ B 2 B f R 2 (1.t/c.183) is given by the interval [ f 0 B.a/ .182) .a/ .1 From (1.h/ .a/ .185) (1.180) yields x.t/ C a.t/ 2 2 (1.181) we obtain x . We consider the general relation (1.t/ (1.181) is immediately obtained. Under the hypothesis that f 0 ½ B. valid for every real signal c.bb/ .t/ Proof.t/ D a. C1/. If f 0 > B.t/ c.t/ 1 c.t/ D a.
We recall that x^(h)(t) = Im[x^(a)(t)]    (1.186)

We list in Table 1.5 some properties of the Hilbert transformation (1.166), which are easily obtained by using the Fourier transform and the properties of Table 1.1.

Table 1.5 Some properties of the Hilbert transform.

    Property                     (Real) signal x(t)            (Real) Hilbert transform x^(h)(t)
    duality                      x^(h)(t)                      -x(t)
    inverse time                 x(-t)                         -x^(h)(-t)
    even signal                  x(t) even                     x^(h)(t) odd
    odd signal                   x(t) odd                      x^(h)(t) even
    product (see Theorem 1.1)    a(t) c(t)                     a(t) c^(h)(t)
    cosinusoidal signal          a(t) cos(2*pi*f_0*t + phi_0)  a(t) sin(2*pi*f_0*t + phi_0)

Moreover,

    energy:        E_x = integral of |x(t)|^2 dt = integral of |x^(h)(t)|^2 dt = E_{x^(h)}    (1.191)
    orthogonality: <x, x^(h)> = integral of x(t) x^(h)(t) dt = 0    (1.192)

Example 1.6.3
Let a modulated double sideband (DSB) signal be expressed as

    x(t) = a(t) cos(2*pi*f_0*t + phi_0)    (1.187)

where a is a BB signal with bandwidth B. Then, if f_0 > B, from the above theorem we have the following relations:

    x^(h)(t) = a(t) sin(2*pi*f_0*t + phi_0)    (1.188)
    x^(a)(t) = a(t) e^{j(2*pi*f_0*t + phi_0)}    (1.189)
    x^(bb)(t) = a(t) e^{j*phi_0}    (1.190)

An interesting application of (1.155) is in the design of a Hilbert filter h^(h) starting from a lowpass filter h.
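The energy and orthogonality properties (1.191)-(1.192) have exact discrete-time counterparts when the sums run over an integer number of carrier periods. A standard-library check, using the cosinusoidal row of Table 1.5 (the Hilbert transform of a sampled cosine is the corresponding sine):

```python
import math

# discrete-time stand-in: x[n] = cos(theta_n); its Hilbert transform is sin(theta_n)
N, k0, phi = 64, 5, 0.3
th = [2 * math.pi * k0 * n / N + phi for n in range(N)]
x  = [math.cos(t) for t in th]
xh = [math.sin(t) for t in th]          # Hilbert transform of a cosinusoid (Table 1.5)

Ex  = sum(v * v for v in x)
Exh = sum(v * v for v in xh)
inner = sum(x[n] * xh[n] for n in range(N))
assert abs(Ex - Exh) < 1e-9             # equal energies, as in (1.191)
assert abs(inner) < 1e-9                # orthogonality <x, x^(h)> = 0, as in (1.192)
```

Both sums evaluate to N/2 for the energies and 0 for the inner product, because the cross term sin(2*theta_n) averages to zero over whole periods.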
Baseband equivalent of a transformation
Given a transformation involving also passband signals, it is often useful to determine an equivalent relation between the baseband complex representations of input and output signals. Three transformations are given in Figure 1.30, together with their baseband equivalents.

Figure 1.30. Passband transformations and their baseband equivalent.

We will prove the relation illustrated in Figure 1.30b. Assuming h is the real-valued impulse response of an LPF and using (1.158),

    y(t) = { h(tau) * [ Re(x^(bb)(tau) e^{j2*pi*f_0*tau}) cos(2*pi*f_0*tau + phi_1) ] }(t)
         = Re[ { h(tau) * (1/2) x^(bb)(tau) e^{-j*phi_1} }(t) ]
         = Re[ (1/2) (h * x^(bb))(t) e^{-j*phi_1} ]    (1.193)

where the last equality follows because the term with frequency components around 2 f_0 is filtered out by the LPF.

We note that the filter h^(bb) in Figure 1.30c has in-phase component h_I^(bb) and quadrature component h_Q^(bb) that are related to H^(a) by (see (1.159) and (1.160))

    H_I^(bb)(f) = (1/2)[H^(bb)(f) + H^(bb)*(-f)] = (1/2)[H^(a)(f + f_0) + H^(a)*(-f + f_0)]    (1.194)
    H_Q^(bb)(f) = (1/2j)[H^(bb)(f) - H^(bb)*(-f)] = (1/2j)[H^(a)(f + f_0) - H^(a)*(-f + f_0)]    (1.195)

Consequently, if H^(a) has Hermitian symmetry around f_0, then H_Q^(bb)(f) = 0. In other words, h^(bb)(t) is real and the realization of the filter h^(bb) is simplified. In practice this condition is verified by ensuring that the filter h^(a) has symmetrical frequency specifications around f_0.

Envelope and instantaneous phase and frequency
We will conclude this section with a few definitions. Given a PB signal x, with reference to the analytic signal we define:

1. Envelope

    M_x(t) = |x^(a)(t)|    (1.196)

2. Instantaneous phase

    phi_x(t) = arg x^(a)(t)    (1.197)
3. Instantaneous frequency

    f_x(t) = (1/(2*pi)) (d/dt) arg x^(a)(t)    (1.198)

In terms of the complex envelope x^(bb), from (1.153) the equivalent relations follow:

    M_x(t) = |x^(bb)(t)|    (1.199)
    phi_x(t) = arg x^(bb)(t) + 2*pi*f_0*t    (1.200)
    f_x(t) = (1/(2*pi)) (d/dt) [arg x^(bb)(t)] + f_0    (1.201)

Then, from the polar representation x^(a)(t) = M_x(t) e^{j*phi_x(t)} and (1.157), a PB signal x can be written as

    x(t) = M_x(t) cos(phi_x(t))    (1.202)

With reference to the above relations, and to a reference amplitude A, carrier frequency f_0 and phase phi_0, three other definitions follow:

1. Envelope deviation

    dM_x(t) = M_x(t) - A    (1.203)

2. Phase deviation

    dphi_x(t) = phi_x(t) - (2*pi*f_0*t + phi_0)    (1.204)

3. Frequency deviation

    df_x(t) = f_x(t) - f_0 = (1/(2*pi)) (d/dt) dphi_x(t)    (1.205)

Then (1.202) becomes

    x(t) = [A + dM_x(t)] cos(2*pi*f_0*t + phi_0 + dphi_x(t))    (1.206)

For example, if x(t) = A cos(2*pi*f_0*t + phi_0), it follows that

    M_x(t) = A,   phi_x(t) = 2*pi*f_0*t + phi_0,   f_x(t) = f_0    (1.207)

and the three deviations are zero. Two simplified methods to get the envelope M_x(t) from the PB signal x(t) are given in Figure 6.58 on page 514.

1.7 Second-order analysis of random processes

We recall the functions related to the statistical description of random processes, especially those functions concerning second-order analysis.
1.7.1 Correlation

Let $x(t)$ and $y(t)$, $t \in \mathbb{R}$, be two continuous-time random processes. We indicate the delay or lag with $\tau$ and the expectation operator with $E$.

1. Mean value
$m_x(t) = E[x(t)]$   (1.210)

2. Statistical power
$M_x(t) = E[|x(t)|^2]$   (1.211)

3. Autocorrelation
$r_x(t, t-\tau) = E[x(t)\, x^*(t-\tau)]$   (1.212)

4. Autocovariance
$c_x(t, t-\tau) = E[(x(t) - m_x(t))(x(t-\tau) - m_x(t-\tau))^*] = r_x(t, t-\tau) - m_x(t)\, m_x^*(t-\tau)$   (1.213)

5. Cross-correlation
$r_{xy}(t, t-\tau) = E[x(t)\, y^*(t-\tau)]$   (1.214)

6. Cross-covariance
$c_{xy}(t, t-\tau) = E[(x(t) - m_x(t))(y(t-\tau) - m_y(t-\tau))^*] = r_{xy}(t, t-\tau) - m_x(t)\, m_y^*(t-\tau)$   (1.215)

Moreover, $x$ is wide-sense stationary (WSS) if

1. $m_x(t) = m_x$, $\forall t$;
2. $r_x(t, t-\tau) = r_x(\tau)$, $\forall t$.

Observation 1.4. Based on Definition 1.2:

- $x$ and $y$ are uncorrelated if $c_{xy}(t, t-\tau) = 0$, $\forall t, \tau$;
- $x$ and $y$ are orthogonal if $r_{xy}(t, t-\tau) = 0$, $\forall t, \tau$. In this case we write $x \perp y$.(15)
- if at least one of the two random processes has zero mean, orthogonality is equivalent to uncorrelation.

(15) We observe that the notion of orthogonality between two random processes is quite different from that of orthogonality between two deterministic signals. In fact, two random variables $v_1$ and $v_2$ are orthogonal if the condition $E[v_1 v_2^*] = 0$ is satisfied; hence, in the random case the cross-correlation must be zero for all the delays and not only for zero delay, while in the deterministic case it is sufficient that the inner product of the signals is zero.
Finally, $x(t)$ and $y(t)$ are jointly wide-sense stationary if $x$ and $y$ are individually WSS and $r_{xy}(t, t-\tau) = r_{xy}(\tau)$, $\forall t$.

Properties of the autocorrelation function

1. $r_x(0) = E[|x(t)|^2] = M_x$ is the statistical power, whereas $c_x(0) = \sigma_x^2 = M_x - |m_x|^2$ is the variance of $x$.
2. $r_x(\tau)$ is a function with Hermitian symmetry, $r_x(-\tau) = r_x^*(\tau)$; moreover $r_{x^*}(\tau) = r_x^*(\tau)$.
3. $r_x(0) \geq |r_x(\tau)|$.
4. $r_{yx}(\tau) = r_{xy}^*(-\tau)$.
5. $r_x(0)\, r_y(0) \geq |r_{xy}(\tau)|^2$.

1.7.2 Power spectral density

Given the WSS random process $x(t)$, its power spectral density (PSD) is defined as the Fourier transform of the autocorrelation function:

$P_x(f) = \mathcal{F}[r_x(\tau)] = \int_{-\infty}^{+\infty} r_x(\tau)\, e^{-j2\pi f\tau}\, d\tau$   (1.216)

The inverse transformation is given by the following formula:

$r_x(\tau) = \int_{-\infty}^{+\infty} P_x(f)\, e^{j2\pi f\tau}\, df$   (1.217)

In particular, from (1.217) one gets the statistical power

$M_x = r_x(0) = \int_{-\infty}^{+\infty} P_x(f)\, df$   (1.218)

hence the name PSD for the function $P_x(f)$: it represents the distribution of the statistical power in the frequency domain. The pair of equations (1.216) and (1.217) is obtained from the Wiener-Khintchine theorem [2].

Definition 1.11. The passband $B$ of a random process $x$ is defined with reference to its PSD function.
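The transform pair (1.216)-(1.218) can be verified numerically on a classical example, not taken from the book: the exponential autocorrelation $r_x(\tau) = e^{-a|\tau|}$, whose PSD is $P_x(f) = 2a/(a^2 + (2\pi f)^2)$; the constant $a$ and the integration grids are assumptions of the sketch.

```python
# Numeric check of the Wiener-Khintchine pair for r_x(tau) = exp(-a|tau|),
# whose PSD is P_x(f) = 2a / (a^2 + (2 pi f)^2). Grids are illustrative.
import numpy as np

a = 2.0
dtau = 1e-3
tau = np.arange(-40.0, 40.0, dtau)        # r_x decays to ~0 well inside the grid
r = np.exp(-a * np.abs(tau))

def psd(f):
    # direct numerical evaluation of the Fourier integral (1.216)
    return float(np.sum(r * np.cos(2 * np.pi * f * tau)) * dtau)

for f in (0.0, 0.5, 2.0):
    analytic = 2 * a / (a**2 + (2 * np.pi * f)**2)
    print(f, psd(f), analytic)

# statistical power: r_x(0) = integral of the PSD, as in (1.218)
df = 1e-3
fgrid = np.arange(-50.0, 50.0, df)
Px = 2 * a / (a**2 + (2 * np.pi * fgrid)**2)
print(np.sum(Px) * df)                    # close to r_x(0) = 1
```

The small residual in the last integral is the truncated PSD tail beyond the frequency grid.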
Spectral lines in the PSD

In many applications it is important to detect the presence of sinusoidal components in a random process. With this aim in mind we give the following theorem.

Theorem 1.2
The PSD of a WSS process can be uniquely decomposed into a component $P_x^{(c)}$ with no impulses and a discrete component $P_x^{(d)}$ consisting of impulses (spectral lines):

$P_x(f) = P_x^{(c)}(f) + P_x^{(d)}(f)$   (1.219)

where $P_x^{(c)}$ is an ordinary function and

$P_x^{(d)}(f) = \sum_{i \in I} M_i\, \delta(f - f_i)$   (1.220)

where $I$ identifies a discrete set of frequencies $\{f_i\}$, $i \in I$. The inverse Fourier transform of (1.219) yields the relation

$r_x(\tau) = r_x^{(c)}(\tau) + r_x^{(d)}(\tau)$   (1.221)

with

$r_x^{(d)}(\tau) = \sum_{i \in I} M_i\, e^{j2\pi f_i \tau}$   (1.222)

The most interesting consideration is that the following random process decomposition corresponds to the decomposition (1.219) of the PSD:

$x(t) = x^{(c)}(t) + x^{(d)}(t)$   (1.223)

where $x^{(c)}$ and $x^{(d)}$ are orthogonal processes having PSD functions

$P_{x^{(c)}}(f) = P_x^{(c)}(f) \qquad P_{x^{(d)}}(f) = P_x^{(d)}(f)$   (1.224)

Moreover, $x^{(d)}$ is given by

$x^{(d)}(t) = \sum_{i \in I} x_i\, e^{j2\pi f_i t}$   (1.225)

where $\{x_i\}$ are orthogonal random variables (r.v.s) having statistical power

$E[|x_i|^2] = M_i$   (1.226)

where $M_i$ is defined in (1.220).
Observation 1.5. The spectral lines of the PSD identify the periodic components in the process.

Definition 1.12
A WSS random process is said to be asymptotically uncorrelated if the following two properties hold:

1) $\lim_{\tau \to \infty} r_x(\tau) = |m_x|^2$   (1.227)

2) $c_x(\tau) = r_x(\tau) - |m_x|^2$ is absolutely integrable.   (1.228)

For such processes, $x(t)$ and $x(t-\tau)$ become uncorrelated for $\tau \to \infty$. Moreover, one can prove that

$r_x^{(c)}(\tau) = c_x(\tau) \qquad r_x^{(d)}(\tau) = |m_x|^2$   (1.229)

Hence

$P_x^{(d)}(f) = |m_x|^2\, \delta(f)$   (1.230)

and the process exhibits at most a spectral line at the origin.

Definition 1.13 (White random process)
The zero-mean random process $x(t)$, $t \in \mathbb{R}$, is called white if

$r_x(\tau) = \sigma_x^2\, \delta(\tau)$   (1.231)

In this case

$P_x(f) = \sigma_x^2$   (1.232)

i.e. $P_x$ is a constant.

Cross-power spectral density

One can extend the definition of PSD to two jointly WSS random processes:

$P_{xy}(f) = \mathcal{F}[r_{xy}(\tau)]$   (1.233)

Since in general $r_{xy}(\tau) \neq r_{xy}^*(-\tau)$, $P_{xy}(f)$ is in general a complex function; moreover, $P_{yx}(f) = P_{xy}^*(f)$, and the following inequality holds:

$0 \leq |P_{xy}(f)|^2 \leq P_x(f)\, P_y(f)$

Properties of the PSD

1. $P_x(f)$ is a real-valued function. This follows from the Hermitian symmetry of the autocorrelation.
2. $P_x(f)$ is a non-negative function.
3. $P_{x^*}(f) = P_x(-f)$.
4. If the process $x(t)$ is real valued, then both $r_x(\tau)$ and $P_x(f)$ are even functions.
5. $P_{xy}(f)$ is generally not an even function.
PSD of processes through linear transformations

By an example we will show how to determine PSDs of processes in a linear system, assuming the PSDs of the various input processes are known. We consider the scheme of Figure 1.31, in which the inputs $x_1$ and $x_2$ have the PSDs $P_{x_1}(f)$, $P_{x_2}(f)$, and cross-PSD $P_{x_1 x_2}(f)$.

[Figure 1.31. PSD of processes through filtering: $x_1$ is filtered by $h_1$ and $x_2$ by $h_2$; their sum gives $y_1$; the output of $h_2$ is further filtered by $h_3$ to give $y_2$.]

To determine the PSDs $P_{y_1}(f)$, $P_{y_2}(f)$, and $P_{y_1 y_2}(f)$, the procedure consists of three steps.

1. Determine the frequency responses of the various outputs in terms of the inputs. In our specific case, we have

$Y_1 = H_1 X_1 + H_2 X_2$   (1.234)
$Y_2 = H_2 H_3 X_2$   (1.235)

in which for simplicity we omit the argument $f$.

2. Construct the different products:

$Y_1 Y_1^* = |H_1|^2 X_1 X_1^* + |H_2|^2 X_2 X_2^* + H_1 H_2^* X_1 X_2^* + H_1^* H_2 X_1^* X_2$   (1.236)
$Y_2 Y_2^* = |H_2 H_3|^2 X_2 X_2^*$   (1.237)
$Y_1 Y_2^* = H_1 H_2^* H_3^* X_1 X_2^* + |H_2|^2 H_3^* X_2 X_2^*$   (1.238)

3. Substitute the expressions of the products in the previous equations using the rule

$Y_i Y_j^* \rightarrow P_{y_i y_j}$   (1.239)
$X_\ell X_m^* \rightarrow P_{x_\ell x_m}$   (1.240)

Then

$P_{y_1} = |H_1|^2 P_{x_1} + |H_2|^2 P_{x_2} + H_1 H_2^* P_{x_1 x_2} + H_1^* H_2 P_{x_1 x_2}^*$   (1.241)
$P_{y_2} = |H_2 H_3|^2 P_{x_2}$   (1.242)
$P_{y_1 y_2} = H_1 H_2^* H_3^* P_{x_1 x_2} + |H_2|^2 H_3^* P_{x_2}$   (1.243)

The proof of the above method is based on the relation (1.449).
PSD of processes through filtering

With reference to Figure 1.32 [Figure 1.32. Reference scheme of PSD computations: $x$ filtered by $h$ gives $y$; $x$ filtered by $g$ gives $z$], by applying the above method the following relations are easily obtained:

$P_{yx}(f) = P_x(f)\, H(f)$   (1.244)
$P_y(f) = P_x(f)\, |H(f)|^2$   (1.245)
$P_{yz}(f) = P_x(f)\, H(f)\, G^*(f)$   (1.246)

The relation (1.245) is of particular interest since it relates the spectral density of the output process of a filter to the spectral density of the input process, through the frequency response of the filter.

We note a further property: in the particular case in which $y$ and $z$ have disjoint passbands, $P_{yz}(f) = 0$; then $r_{yz}(\tau) = 0$ and $y \perp z$.   (1.247)

1.7.3 PSD of discrete-time random processes

Let $\{x(k)\}$ and $\{y(k)\}$ be two discrete-time random processes. Definitions and properties of Section 1.7.1 remain valid also for discrete-time processes: the only difference is that the correlation is now defined on discrete time and is called autocorrelation sequence (ACS). Given a discrete-time WSS random process $x$, the PSD is obtained as

$P_x(f) = T_c\, \mathcal{F}[r_x(n)] = T_c \sum_{n=-\infty}^{+\infty} r_x(n)\, e^{-j2\pi f n T_c}$   (1.248)

It is, however, interesting to note that $P_x(f)$ is a periodic function of period $1/T_c$. The inverse transformation yields:

$r_x(n) = \int_{-1/(2T_c)}^{1/(2T_c)} P_x(f)\, e^{j2\pi f n T_c}\, df$   (1.249)

In particular, the statistical power is given by

$M_x = r_x(0) = \int_{-1/(2T_c)}^{1/(2T_c)} P_x(f)\, df$
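The input-output relation (1.245) has a simple power-balance consequence that can be checked on a simulated realization: for white input, the output power equals $\sigma_x^2 \sum_k |h(k)|^2$. The filter taps and noise variance below are illustrative assumptions, not an example from the book.

```python
# Sketch: white noise through an assumed FIR filter. By (1.245) the output
# PSD is sigma^2 |H(f)|^2, so the output power is sigma^2 * sum(|h|^2).
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 2.0
x = rng.normal(scale=np.sqrt(sigma2), size=500_000)   # white input
h = np.array([0.5, 1.0, 0.5])                         # example FIR response
y = np.convolve(x, h, mode="valid")

My_est = np.mean(y**2)                 # sample statistical power of the output
My_theory = sigma2 * np.sum(h**2)      # integral of Px(f)|H(f)|^2 over a period
print(My_est, My_theory)
```

With a long realization the sample power matches the theoretical value to within a fraction of a percent.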
Definition 1.14 (White random process)
A discrete-time random process $\{x(k)\}$ is white if

$r_x(n) = \sigma_x^2\, \delta_n$   (1.250)

In this case the PSD is a constant:

$P_x(f) = \sigma_x^2\, T_c$   (1.251)

Definition 1.15
If the samples of the random process $\{x(k)\}$ are statistically independent and identically distributed, we say that $\{x(k)\}$ has i.i.d. samples.

Spectral lines in the PSD

Even for a discrete-time random process the PSD can be decomposed into ordinary components and spectral lines, provided the decomposition (1.219) is limited to a period of the PSD. In particular, for a discrete-time WSS asymptotically uncorrelated random process, relations analogous to (1.229) hold:

$r_x^{(d)}(n) = |m_x|^2$   (1.252)

$P_x^{(d)}(f) = |m_x|^2 \sum_{\ell=-\infty}^{+\infty} \delta\!\left(f - \frac{\ell}{T_c}\right)$   (1.253)

We note that, if the process has nonzero mean value, the PSD exhibits lines at multiples of $1/T_c$.

Example 1.7.1
We calculate the PSD of an i.i.d. sequence $\{x(k)\}$. From

$r_x(n) = \begin{cases} M_x & n = 0 \\ |m_x|^2 & n \neq 0 \end{cases}$   (1.254)

it follows that

$c_x(n) = \sigma_x^2\, \delta_n$   (1.255)

Then

$P_x^{(c)}(f) = \sigma_x^2\, T_c$   (1.256)

$P_x^{(d)}(f) = |m_x|^2 \sum_{\ell=-\infty}^{+\infty} \delta\!\left(f - \frac{\ell}{T_c}\right)$   (1.257)
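The ACS of Example 1.7.1 is easy to reproduce empirically: a sketch (the mean, variance and sample size are illustrative assumptions) estimates $r_x(n)$ for an i.i.d. sequence with nonzero mean and recovers $M_x$ at lag 0 and $|m_x|^2$ at nonzero lags.

```python
# Sketch for Example 1.7.1: sample ACS of an i.i.d. sequence with nonzero
# mean. Parameters (m, sigma^2, N) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
m, sigma2, N = 1.5, 0.64, 400_000
x = m + rng.normal(scale=np.sqrt(sigma2), size=N)

def acs(x, n):
    # sample estimate of r_x(n) = E[x(k) x(k-n)] (real-valued process)
    return float(np.mean(x[n:] * x[:len(x) - n]))

print(acs(x, 0))   # ~ M_x = sigma^2 + m^2 = 2.89
print(acs(x, 7))   # ~ |m_x|^2 = 2.25
```

The lag-0 value is the statistical power, while every other lag settles on the squared mean, which is the origin of the spectral lines in (1.257).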
PSD of processes through filtering

Given a filtering system with input $x$ and output $y = h * x$, we want to find a relation between the PSDs of the input and output signals, assuming these processes are individually as well as jointly WSS. We introduce the z-transform of the correlation sequence:

$P_x(z) = \sum_{n=-\infty}^{+\infty} r_x(n)\, z^{-n}$   (1.258)

From the comparison of (1.258) with (1.248),

$P_x(f) = T_c\, P_x(z)\big|_{z = e^{j2\pi f T_c}}$   (1.259)

Let the deterministic autocorrelation of $h$ be defined as(16)

$r_h(n) = \sum_{k=-\infty}^{+\infty} h(k)\, h^*(k - n)$   (1.260)

whose z-transform is given by

$P_h(z) = H(z)\, H^*\!\left(\frac{1}{z^*}\right)$   (1.261)

From (1.261) one deduces that, if $P_h(z)$ has a pole (zero) of the type $e^{j\varphi}|a|$, it also has a corresponding pole (zero) of the type $e^{j\varphi}/|a|$. Consequently, in case $P_h(z)$ is a rational function, the poles (and zeros) of $P_h(z)$ come in pairs of the type $e^{j\varphi}|a|$, $e^{j\varphi}/|a|$.

We obtain the relations between ACS and PSD listed in Table 1.6.

Table 1.6 Relations between ACS and PSD for discrete-time processes through a linear filter.

ACS: $r_{yx}(n) = r_x * h\,(n)$        PSD: $P_{yx}(z) = P_x(z)\, H(z)$
ACS: $r_{xy}(n) = r_x * h^{*}(-\cdot)\,(n)$        PSD: $P_{xy}(z) = P_x(z)\, H^*(1/z^*)$
ACS: $r_y(n) = r_x * r_h\,(n)$        PSD: $P_y(z) = P_x(z)\, H(z)\, H^*(1/z^*)$

From the last relation in Table 1.6 one gets the relation between the PSDs of input and output signals:

$P_y(z) = P_x(z)\, H(z)\, H^*\!\left(\frac{1}{z^*}\right)$   (1.262)

In the case of real filters,

$H^*\!\left(\frac{1}{z^*}\right) = H(z^{-1})$   (1.263)

In the case of white noise input,

$P_y(z) = \sigma_x^2\, H(z)\, H^*\!\left(\frac{1}{z^*}\right)$

(16) In this text we use the same symbol to indicate the correlation between random processes and the correlation between deterministic functions.
and

$P_y(f) = T_c\, \sigma_x^2\, |H(e^{j2\pi f T_c})|^2$   (1.264)

In other words, $P_y(f)$ has the same shape as the squared magnitude of the filter frequency response. Among the various applications of (1.264), it is worth mentioning the process synthesis, which deals with the generation of a random process having a preassigned PSD. Two methods are shown in Section 4.

Minimum-phase spectral factorization

In the previous section we introduced the relation between an impulse response $\{h(k)\}$ and its autocorrelation sequence $\{r_h(n)\}$ in terms of the z-transform. In many practical applications it is interesting to determine the minimum-phase impulse response for a given autocorrelation function: with this intent we state the following theorem [3]. Let the process $y$, with autocorrelation sequence $\{r_y(n)\}$ having z-transform $P_y(z)$, satisfy the Paley-Wiener condition for discrete-time systems:

$\int_{-1/(2T_c)}^{1/(2T_c)} \left|\log P_y(e^{j2\pi f T_c})\right|\, df < \infty$   (1.265)

Then the function $P_y(z)$ can be factorized as follows:

$P_y(z) = f_0^2\, \tilde F(z)\, \tilde F^*\!\left(\frac{1}{z^*}\right)$   (1.267)

where

$\tilde F(z) = 1 + \tilde f_1 z^{-1} + \tilde f_2 z^{-2} + \cdots$   (1.268)

is monic, minimum phase, and associated with a causal sequence $\{1, \tilde f_1, \tilde f_2, \ldots\}$, and

$\log f_0^2 = T_c \int_{-1/(2T_c)}^{1/(2T_c)} \log P_y(e^{j2\pi f T_c})\, df$   (1.266)

where the integration is over an arbitrarily chosen interval of length $1/T_c$, so that the factor $f_0^2$ in (1.267) is the geometric mean of $P_y(e^{j2\pi f T_c})$. The logarithms in (1.265) and (1.266) may have any common base. The function $f_0 \tilde F(z)$ is obtained by extracting the poles and zeros of $P_y(z)$ that lie inside the unit circle (see (1.526) and the considerations relative to (1.261)); $f_0 \tilde F^*(1/z^*)$ is the z-transform of an anticausal sequence $f_0\{\ldots, \tilde f_2^*, \tilde f_1^*, 1\}$, associated with the poles and zeros of $P_y(z)$ that lie outside the unit circle. Moreover, the spectral factorization (1.267) (with the constraint that $\tilde F(z)$ is monic and minimum phase) is unique.
For rational $P_y(z)$, $\tilde F(z)$ may have only a discrete set of zeros on the unit circle.

Theorem 1.3 (Spectral factorization for discrete-time processes)
Let the process $y$ be given, with autocorrelation sequence $\{r_y(n)\}$ whose z-transform $P_y(z)$ satisfies the Paley-Wiener condition for discrete-time systems; then $P_y(z)$ admits the unique factorization (1.267) with $\tilde F(z)$ monic and minimum phase.
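For a rational spectrum, the factorization can be carried out by root flipping: compute the zeros of $z^M P_y(z)$ and keep those inside the unit circle for the minimum-phase factor. The sketch below is one possible implementation, not the book's algorithm; the test filter $g$ is an assumed non-minimum-phase example with $\sigma^2 = 1$.

```python
# Root-flipping sketch of the minimum-phase spectral factorization: given the
# ACS of an FIR-filtered white process, recover f0 and the monic factor F(z).
# The filter g is an assumed real, non-minimum-phase example.
import numpy as np

g = np.array([1.0, -2.5, 1.0])              # G(z) has a zero outside |z| = 1
r = np.correlate(g, g, mode="full")         # r_y(n), n = -M..M  (sigma^2 = 1)

# zeros of z^M P_y(z); they come in pairs (z_i, 1/z_i*)
roots = np.roots(r)
inside = roots[np.abs(roots) < 1]           # zeros of the minimum-phase factor

F = np.real(np.poly(inside))                # monic: F(z) = 1 + f1 z^-1 + ...
f0 = np.sqrt(r[len(g) - 1] / np.sum(F**2))  # match r_y(0) = f0^2 * sum |F_k|^2
fmin = f0 * F                               # minimum-phase impulse response

# fmin has the same autocorrelation as g, but all zeros inside the unit circle
print(fmin)
```

For this example `fmin` is close to `[2, -2, 0.5]`: the zero of $G(z)$ at $z = 2$ is reflected to $z = 0.5$ while the magnitude spectrum is preserved.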
1.7.4 PSD of passband processes

Definition 1.16
A WSS random process $x$ is said to be PB (BB) if its PSD is of PB (BB) type.

PSD of the quadrature components of a random process

Let $x$ be a real PB, WSS process. Our aim is to derive the power spectral density of the in-phase and quadrature components of the process. We assume that $x$ does not have DC components, i.e. frequency components at $f = 0$; hence its mean is zero. We introduce two filters having non-overlapping passbands and ideal frequency responses given by

$H^{(+)}(f) = \mathbb{1}(f) \qquad H^{(-)}(f) = \mathbb{1}(-f)$   (1.270)

For the same input $x$, the outputs of the two filters are respectively $x^{(+)}(t)$ and $x^{(-)}(t)$, with $x(t) = x^{(+)}(t) + x^{(-)}(t)$. The following relations are valid:

$P_{x^{(+)}}(f) = |H^{(+)}(f)|^2\, P_x(f) = \mathbb{1}(f)\, P_x(f)$   (1.271)
$P_{x^{(-)}}(f) = |H^{(-)}(f)|^2\, P_x(f) = \mathbb{1}(-f)\, P_x(f)$   (1.272)
$P_x(f) = P_{x^{(+)}}(f) + P_{x^{(-)}}(f)$   (1.273)

Moreover, since $x^{(+)}$ and $x^{(-)}$ have non-overlapping passbands, using Property 5 of the PSD it follows that $x^{(+)} \perp x^{(-)}$, i.e.

$P_{x^{(+)} x^{(-)}}(f) = 0$   (1.274)

Being $x$ real, we find that $x^{(-)} = x^{(+)*}$, hence

$P_{x^{(-)}}(f) = P_{x^{(+)}}(-f)$   (1.275)

The analytic signal $x^{(a)}$ is equal to $2x^{(+)}$, hence

$r_{x^{(a)}}(\tau) = 4\, r_{x^{(+)}}(\tau) \qquad P_{x^{(a)}}(f) = 4\, P_{x^{(+)}}(f)$   (1.276)

and, as $x^{(a)} \perp x^{(a)*}$,

$r_{x^{(a)} x^{(a)*}}(\tau) = 0$   (1.277)

The complex envelope $x^{(bb)}$ is related to $x^{(a)}$ by (1.152), $x^{(bb)}(t) = x^{(a)}(t)\, e^{-j2\pi f_0 t}$, hence

$r_{x^{(bb)}}(\tau) = r_{x^{(a)}}(\tau)\, e^{-j2\pi f_0 \tau}$   (1.278)

$P_{x^{(bb)}}(f) = P_{x^{(a)}}(f + f_0)$   (1.279)

Since $x$ does not have DC components, $x^{(a)}$ and $x^{(bb)}$ have zero mean.
In particular,

$P_{x^{(bb)}}(f) = P_{x^{(a)}}(f + f_0) = 4\, P_{x^{(+)}}(f + f_0)$   (1.280)

Moreover, from (1.277) it follows that

$r_{x^{(bb)} x^{(bb)*}}(\tau) = 0$   (1.281)

Referring to the block diagram in Figure 1.27b, the in-phase and quadrature components are

$x_I^{(bb)}(t) = \mathrm{Re}[x^{(bb)}(t)] = \frac{1}{2}\left(x^{(bb)}(t) + x^{(bb)*}(t)\right)$   (1.282)

$x_Q^{(bb)}(t) = \mathrm{Im}[x^{(bb)}(t)] = \frac{1}{2j}\left(x^{(bb)}(t) - x^{(bb)*}(t)\right)$   (1.283)

Using (1.281), we obtain the following relations:

$r_{x_I^{(bb)}}(\tau) = r_{x_Q^{(bb)}}(\tau) = \frac{1}{2}\,\mathrm{Re}[r_{x^{(bb)}}(\tau)]$   (1.284)

$r_{x_Q^{(bb)} x_I^{(bb)}}(\tau) = -r_{x_I^{(bb)} x_Q^{(bb)}}(\tau) = \frac{1}{2}\,\mathrm{Im}[r_{x^{(bb)}}(\tau)]$   (1.285)

In the frequency domain,

$P_{x_I^{(bb)}}(f) = P_{x_Q^{(bb)}}(f) = \frac{1}{4}\left[P_{x^{(bb)}}(f) + P_{x^{(bb)}}(-f)\right] = P_{x^{(+)}}(f + f_0) + P_{x^{(+)}}(-f + f_0)$   (1.286)

$P_{x_Q^{(bb)} x_I^{(bb)}}(f) = \frac{1}{4j}\left[P_{x^{(bb)}}(f) - P_{x^{(bb)}}(-f)\right]$   (1.287)

The second equality in (1.285) follows from Property 4 of the ACS. We note that $r_{x_Q^{(bb)} x_I^{(bb)}}(\tau)$ is an odd function, so that $r_{x_Q^{(bb)} x_I^{(bb)}}(0) = 0$: in any case, the random variables $x_I^{(bb)}(t)$ and $x_Q^{(bb)}(t)$ taken at the same instant are always orthogonal. As processes, however, $x_I^{(bb)} \perp x_Q^{(bb)}$ only if $P_{x_Q^{(bb)} x_I^{(bb)}}(f) = 0$, i.e. only if $P_{x^{(bb)}}(f)$ is an even function.

In terms of the autocorrelation of $x$ and of the cross-correlation with its Hilbert transform $x^{(h)}$, one also finds

$r_{x_I^{(bb)}}(\tau) = r_x(\tau)\cos 2\pi f_0\tau + r_{x^{(h)} x}(\tau)\sin 2\pi f_0\tau$   (1.290)
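The in-phase/quadrature PSD relations above can be exercised numerically on an assumed baseband spectrum. In the sketch below the asymmetric Gaussian-shaped $P_{x^{(bb)}}(f)$ and the grids are pure illustration assumptions; it computes the quadrature-component PSDs and confirms that the cross-correlation at lag 0 vanishes.

```python
# Numeric sketch of the quadrature-component PSD relations: starting from an
# assumed asymmetric baseband PSD, compute P_{xI}=P_{xQ} and the cross-PSD of
# (x_Q, x_I); the lag-0 cross-correlation is zero.
import numpy as np

f = np.linspace(-10.0, 10.0, 4001)           # symmetric frequency grid
df = f[1] - f[0]
Pbb = np.exp(-(f - 1.5)**2)                  # assumed (asymmetric) P_x^(bb)(f)

PxI = 0.25 * (Pbb + Pbb[::-1])               # P_{xI}(f) = P_{xQ}(f)
PxQI = (Pbb - Pbb[::-1]) / 4j                # cross-PSD, odd and imaginary

r0_cross = np.sum(PxQI) * df                 # r_{xQ xI}(0): integral of cross-PSD
MxI = np.sum(PxI) * df                       # statistical power of x_I
print(MxI, r0_cross)
```

The power of each quadrature component is half the baseband power (here $\frac{1}{2}\sqrt{\pi}$), while the odd imaginary cross-PSD integrates to zero, the lag-0 orthogonality noted above.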
and

$r_{x_Q^{(bb)} x_I^{(bb)}}(\tau) = r_{x^{(h)} x}(\tau)\cos 2\pi f_0\tau - r_x(\tau)\sin 2\pi f_0\tau$   (1.293)

In terms of statistical power the following relations hold:

$r_{x^{(bb)}}(0) = 4\, r_{x^{(+)}}(0) = 2\, r_x(0)$   (1.294)

$r_{x_I^{(bb)}}(0) = r_{x_Q^{(bb)}}(0) = \frac{1}{2}\, r_{x^{(bb)}}(0) = r_x(0)$   (1.295)

$r_{x_Q^{(bb)} x_I^{(bb)}}(0) = 0$   (1.296)

In short, we have seen that, if $x$ is a real passband WSS process, then its complex envelope is WSS, with $r_{x^{(bb)} x^{(bb)*}}(\tau) = 0$. The converse is also true: if $x^{(bb)}$ is a WSS process and $r_{x^{(bb)} x^{(bb)*}}(\tau) = 0$, then $x(t) = \mathrm{Re}[x^{(bb)}(t)\, e^{j2\pi f_0 t}]$ is WSS, with PSD given by $P_x(f) = \frac{1}{4}[P_{x^{(bb)}}(f - f_0) + P_{x^{(bb)}}(-f - f_0)]$. If, however,

$r_{x^{(bb)} x^{(bb)*}}(\tau) \neq 0$   (1.298)

the process $x$ is cyclostationary, as discussed below.

Example 1.7.2
Let $x$ be a WSS process with power spectral density

$P_x(f) = \frac{N_0}{2}\left[\mathrm{rect}\!\left(\frac{f - f_0}{B}\right) + \mathrm{rect}\!\left(\frac{f + f_0}{B}\right)\right]$   (1.299)

depicted in Figure 1.33. It is immediate to get

$P_{x^{(a)}}(f) = 2N_0\, \mathrm{rect}\!\left(\frac{f - f_0}{B}\right)$   (1.300)

$P_{x^{(bb)}}(f) = 2N_0\, \mathrm{rect}\!\left(\frac{f}{B}\right)$   (1.301)

Then

$P_{x_I^{(bb)}}(f) = P_{x_Q^{(bb)}}(f) = N_0\, \mathrm{rect}\!\left(\frac{f}{B}\right)$   (1.302)

and, being $P_{x^{(bb)}}(f)$ an even function, $P_{x_Q^{(bb)} x_I^{(bb)}}(f) = 0$ and we find that $x_I^{(bb)} \perp x_Q^{(bb)}$.
[Figure 1.33. Spectral representation of a PB process and its BB components: $P_x(f)$ of height $N_0/2$ on the bands $\pm f_0 \pm B/2$; $P_{x^{(a)}}(f)$ of height $2N_0$ around $f_0$; $P_{x^{(bb)}}(f)$ of height $2N_0$ on $[-B/2, B/2]$; $P_{x_I^{(bb)}}(f) = P_{x_Q^{(bb)}}(f)$ of height $N_0$ on $[-B/2, B/2]$.]

Cyclostationary processes

If $r_{x^{(bb)} x^{(bb)*}}(\tau) \neq 0$, we find that the autocorrelation of $x(t) = \mathrm{Re}[x^{(bb)}(t)\, e^{j2\pi f_0 t}]$ is a periodic function in $t$:

$r_x(t, t-\tau) = \frac{1}{4}\Big[r_{x^{(bb)}}(\tau)\, e^{j2\pi f_0\tau} + r_{x^{(bb)}}^*(\tau)\, e^{-j2\pi f_0\tau} + r_{x^{(bb)} x^{(bb)*}}(\tau)\, e^{-j2\pi f_0\tau} e^{j4\pi f_0 t} + r_{x^{(bb)} x^{(bb)*}}^*(\tau)\, e^{j2\pi f_0\tau} e^{-j4\pi f_0 t}\Big]$   (1.304)

In other words, $x$ is a cyclostationary process of period $T_0 = 1/f_0$.(17) In this case it is convenient to introduce the average correlation

$\bar r_x(\tau) = \frac{1}{T_0}\int_0^{T_0} r_x(t, t-\tau)\, dt$   (1.305)

(17) To be precise, $x$ is cyclostationary in mean value with period $T_0 = 1/f_0$, while it is cyclostationary in correlation with period $T_0/2$.
whose Fourier transform is the average power spectral density

$\bar P_x(f) = \mathcal{F}_\tau[\bar r_x(\tau)] = \frac{1}{T_0}\int_0^{T_0} P_x(f, t)\, dt$   (1.306)

where

$P_x(f, t) = \mathcal{F}_\tau[r_x(t, t-\tau)]$   (1.307)

In (1.307), $\mathcal{F}_\tau$ denotes the Fourier transform with respect to the variable $\tau$.

Example 1.7.3
Let $x(t)$ be a modulated DSB signal (see (1.189)),

$x(t) = a(t)\cos(2\pi f_0 t + \varphi_0)$   (1.308)

with $a(t)$ a real random BB WSS process with autocorrelation $r_a(\tau)$ and bandwidth $B_a < f_0$. Hence we have

$r_x(t, t-\tau) = \frac{1}{2}\, r_a(\tau)\left[\cos 2\pi f_0\tau + \cos(4\pi f_0 t - 2\pi f_0\tau + 2\varphi_0)\right]$   (1.310)

Because $r_a(\tau)$ is not identically zero, $x(t)$ is cyclostationary with period $1/f_0$. From (1.305)-(1.307) the average PSD of $x$ is given by

$\bar P_x(f) = \frac{1}{4}\left[P_a(f - f_0) + P_a(f + f_0)\right]$   (1.311)

Therefore $x$ has a bandwidth equal to $2B_a$ and an average statistical power

$\bar M_x = \frac{1}{2}\, M_a$   (1.312)

We note that one finds the same result (1.311) assuming that $\varphi_0$ is a uniform r.v. in $[0, 2\pi)$; in this case $x$ turns out to be WSS.

Example 1.7.4
Let $x(t)$ be a modulated single sideband (SSB) signal with an upper sideband,

$x(t) = \frac{1}{2}\, a(t)\cos(2\pi f_0 t + \varphi_0) - \frac{1}{2}\, a^{(h)}(t)\sin(2\pi f_0 t + \varphi_0) = \mathrm{Re}\left[\frac{1}{2}\left(a(t) + j a^{(h)}(t)\right)e^{j(2\pi f_0 t + \varphi_0)}\right]$   (1.313)

where $a^{(h)}(t)$ is the Hilbert transform of $a(t)$, with $a(t)$ a real WSS random process with autocorrelation $r_a(\tau)$ and bandwidth $B_a$.
We note that the modulating signal $(a(t) + j a^{(h)}(t))$ coincides with the analytic signal $a^{(a)}(t)$ and has spectral support only for positive frequencies. Being

$x^{(bb)}(t) = \frac{1}{2}\left(a(t) + j a^{(h)}(t)\right)e^{j\varphi_0}$

it results that $x^{(bb)}$ and $x^{(bb)*}$ have non-overlapping passbands and

$r_{x^{(bb)} x^{(bb)*}}(\tau) = 0$   (1.315)

The process (1.313) is then stationary, with

$P_x(f) = \frac{1}{4}\left[P_a(f - f_0)\,\mathbb{1}(f - f_0) + P_a(f + f_0)\,\mathbb{1}(-f - f_0)\right]$   (1.314)

In this case $x$ has bandwidth equal to $B_a$ and statistical power given by

$M_x = \frac{1}{4}\, M_a$   (1.316)

Example 1.7.5 (DSB and SSB demodulators)
Let the signal

$r(t) = x(t) + w(t)$   (1.317)

be the sum of a desired part $x(t)$ and additive white noise $w(t)$ with PSD equal to $P_w(f) = N_0/2$, where the signal $x$ is modulated DSB (1.308). To obtain the signal $a(t)$ from $r(t)$, one can use the coherent demodulation scheme illustrated in Figure 1.34 (see Figure 1.30b), where $h$ is an ideal lowpass filter having a frequency response

$H(f) = H_0\, \mathrm{rect}\!\left(\frac{f}{2B_a}\right)$   (1.318)

[Figure 1.34. Coherent DSB demodulator and baseband-equivalent scheme.]

Let $r_o$ be the output signal of the demodulator, given by the sum of the desired part $x_o$ and noise $w_o$:

$r_o(t) = x_o(t) + w_o(t)$   (1.319)
We evaluate now the ratio between the powers of the signals in (1.319),

$\Lambda_o = \frac{M_{x_o}}{M_{w_o}}$   (1.320)

in terms of the reference ratio

$\Gamma = \frac{M_x}{(N_0/2)\, 2B_a}$   (1.321)

Using the baseband-equivalent scheme of Figure 1.34 and (1.192), with receive carrier phase $\varphi_1$ we have

$r^{(bb)}(t) = a(t)\, e^{j\varphi_0} + w^{(bb)}(t)$   (1.322)

Being $(h * a)(t) = H_0\, a(t)$, it results

$x_o(t) = \frac{H_0}{2}\, a(t)\cos(\varphi_0 - \varphi_1)$   (1.324)

Hence we get

$M_{x_o} = \frac{H_0^2}{4}\, M_a \cos^2(\varphi_0 - \varphi_1)$   (1.325)

In the same baseband-equivalent scheme, we consider the noise $w_{eq}$ at the output of the filter $h$. Being $w$ WSS, and $x^{(bb)}$ uncorrelated with $w^{(bb)}$, we find

$P_{w_{eq}}(f) = \frac{H_0^2}{2}\, N_0\, \mathrm{rect}\!\left(\frac{f}{2B_a}\right)$   (1.326)

Then, from $w_o(t) = w_{eq,I}(t)$ and using (1.285), it follows that

$P_{w_o}(f) = \frac{H_0^2}{4}\, N_0\, \mathrm{rect}\!\left(\frac{f}{2B_a}\right)$   (1.327)

and

$M_{w_o} = \frac{H_0^2}{4}\, N_0\, 2B_a$   (1.328)

Hence we get

$\Lambda_o = \frac{(H_0^2/4)\, M_a \cos^2(\varphi_0 - \varphi_1)}{(H_0^2/4)\, N_0\, 2B_a} = \Gamma \cos^2(\varphi_0 - \varphi_1)$   (1.329)
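The coherent demodulation above can be sketched numerically; the carrier, modulating tone, phases and the crude FFT-based ideal lowpass filter below are all illustrative assumptions. The point to observe is that the recovered amplitude scales as $\cos(\varphi_0 - \varphi_1)$.

```python
# Sketch of a coherent DSB demodulator (noiseless, parameters assumed):
# the demodulated amplitude scales as cos(phi0 - phi1).
import numpy as np

fs, f0 = 100_000.0, 10_000.0
t = np.arange(20_000) / fs                   # 0.2 s, integer number of periods
a = np.cos(2 * np.pi * 150.0 * t)            # slow modulating signal
phi0, phi1 = 0.4, 0.1                        # transmit / receive carrier phases

x = a * np.cos(2 * np.pi * f0 * t + phi0)    # DSB signal
v = 2 * x * np.cos(2 * np.pi * f0 * t + phi1)  # mixer (gain 2, i.e. H0 = 2)

# crude ideal LPF: zero all FFT bins above 2 kHz (removes the image at 2 f0)
V = np.fft.rfft(v)
fr = np.fft.rfftfreq(len(v), 1 / fs)
V[fr > 2000.0] = 0.0
xo = np.fft.irfft(V, len(v))

# xo ~ a(t) * cos(phi0 - phi1)
gain = np.sqrt(np.mean(xo**2) / np.mean(a**2))
print(gain, np.cos(phi0 - phi1))
```

Because every tone falls exactly on an FFT bin, the circular filtering is exact here and the measured gain matches $\cos(\varphi_0 - \varphi_1)$ to machine precision.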
bb/ . .t/ D 1 2 h .312). (a) Coherent SSB demodulator and (b) basebandequivalent scheme. We will now analyze the case of a SSB signal x.313)). using (1. coherently demodulated.t/ PB (1. following the scheme of Figure 1.335) PB Ba the ﬁlter of the basebandequivalent scheme is given by h eq .35.331) It is interesting to observe that.'0 2 .t/ (see (1.N0 =2/ 4Ba 2 (1.330) becomes 3o D 0 (1.H0 =4/ '1 / N0 2Ba D 0 cos2 .35.H0 =4/ Ma cos2 . Secondorder analysis of random processes 61 In conclusion.'0 '1 / (1. f / D rect Ba Ba Note that in this scheme we have assumed the phase of the receiver carrier equal to that of the transmitter. Being Â Ã f Ba =2 H. to avoid distortion of the desired signal. The ideal frequency response of h P B is given by Ã Â Ã Â f f 0 Ba =2 f f 0 Ba =2 C rect (1.333) Mx 0 D . f / D 2 rect (1. the ratio between the power of the desired signal and the power of the noise in the passband of x is given by 3i D For '1 D '0 then 3o D 23i (1.334) H P B .330) For '1 D '0 (1. after the mixer. the DSB demodulator yields a gain of 2 in signaltonoise ratio. would have fallen within the passband of the desired signal.1.7.332) In other words.bb/ Ł h. where h P B is a ﬁlter used to eliminate the noise that otherwise. we have 3o D 2 . measuring the noise power in a passband equal to that of the desired signal.336) Figure 1. at the demodulator input.
with frequency response

$H_{eq}(f) = H_0\, \mathrm{rect}\!\left(\frac{f - B_a/2}{B_a}\right)$   (1.337)

We now evaluate the desired component $x_o(t) = \mathrm{Re}[(h_{eq} * x^{(bb)})(t)\, e^{-j\varphi_0}]$. Using the fact that $x^{(bb)}(t) = \frac{1}{2}(a(t) + j a^{(h)}(t))\, e^{j\varphi_0}$, it results in

$x_o(t) = \frac{H_0}{4}\, \mathrm{Re}[a(t) + j a^{(h)}(t)] = \frac{H_0}{4}\, a(t)$   (1.338)

In the baseband-equivalent scheme, the noise $w_{eq}$ at the output of $h_{eq}$ has a PSD given by

$P_{w_{eq}}(f) = \frac{H_0^2}{2}\, N_0\, \mathrm{rect}\!\left(\frac{f - B_a/2}{B_a}\right)$   (1.339)

From the relation $w_o = w_{eq,I}$, and using (1.285), which is valid because $w_{eq} \perp w_{eq}^*$, we have

$P_{w_o}(f) = \frac{1}{4}\left[P_{w_{eq}}(f) + P_{w_{eq}}(-f)\right] = \frac{H_0^2}{8}\, N_0\, \mathrm{rect}\!\left(\frac{f}{2B_a}\right)$   (1.340)

and

$M_{w_o} = \frac{H_0^2}{8}\, N_0\, 2B_a$   (1.341)

Then we obtain

$\Lambda_o = \frac{(H_0^2/16)\, M_a}{(H_0^2/8)\, N_0\, 2B_a}$   (1.342)

which, using (1.316) and (1.321), can be written as

$\Lambda_o = \Gamma$   (1.343)

Moreover, at the demodulator input it results in

$\Lambda_i = \frac{M_x}{(N_0/2)\, 2B_a} = \Lambda_o$   (1.344)

We note that the SSB system yields the same performance (for $\varphi_1 = \varphi_0$) as a DSB system, even though half of the bandwidth is required.

Observation 1.6
We note that, also for the simple examples considered in this section, the desired signal is analyzed via the various transformations, whereas the noise is analyzed via the PSD: typically, we are interested only in the statistical power of the noise at the system output. The demodulated signal $x_o(t)$, on the other hand, must in general be expressed as the sum of a desired component proportional to $a(t)$ and an orthogonal component that represents the distortion, which is typically small and has the same effects as noise. In the previous examples the considered systems do not introduce any distortion, since $x_o(t)$ is proportional to $a(t)$.
1.8 The autocorrelation matrix

Definition
Given the discrete-time wide-sense stationary random process $\{x(k)\}$, we introduce the random vector with $N$ components

$\mathbf{x}^T(k) = [x(k),\ x(k-1),\ \ldots,\ x(k-N+1)]$   (1.345)

The $N \times N$ autocorrelation matrix of $\mathbf{x}(k)$ is given by

$\mathbf{R} = E[\mathbf{x}^*(k)\,\mathbf{x}^T(k)] = \begin{bmatrix} r_x(0) & r_x(-1) & \cdots & r_x(-N+1) \\ r_x(1) & r_x(0) & \cdots & r_x(-N+2) \\ \vdots & \vdots & \ddots & \vdots \\ r_x(N-1) & r_x(N-2) & \cdots & r_x(0) \end{bmatrix}$   (1.346)

with $r_x(-n) = r_x^*(n)$.

Properties
1. $\mathbf{R}$ is Hermitian: $\mathbf{R}^H = \mathbf{R}$. For real random processes $\mathbf{R}$ is symmetric: $\mathbf{R}^T = \mathbf{R}$.
2. $\mathbf{R}$ is a Toeplitz matrix: all elements along any diagonal are equal.
3. $\mathbf{R}$ is positive semidefinite and almost always positive definite. Indeed, taking an arbitrary vector $\mathbf{v}^T = [v_0, \ldots, v_{N-1}]$ and letting $y = \mathbf{x}^T(k)\,\mathbf{v}$, we have

$E[|y|^2] = E[\mathbf{v}^H \mathbf{x}^*(k)\,\mathbf{x}^T(k)\,\mathbf{v}] = \mathbf{v}^H \mathbf{R}\,\mathbf{v} = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} v_i^*\, r_x(i-j)\, v_j \geq 0$   (1.347)

If $\mathbf{v}^H \mathbf{R}\,\mathbf{v} > 0$ for all $\mathbf{v} \neq \mathbf{0}$, then $\mathbf{R}$ is said to be positive definite and all its principal minor determinants are positive; in particular $\mathbf{R}$ is nonsingular.

Eigenvalues
We indicate by $\det[\mathbf{R}]$ the determinant of a matrix $\mathbf{R}$. The eigenvalues of $\mathbf{R}$ are the solutions $\lambda_i$, $i = 1, \ldots, N$, of the characteristic equation of order $N$

$\det[\mathbf{R} - \lambda\mathbf{I}] = 0$   (1.348)

and the corresponding column eigenvectors $\mathbf{u}_i$ satisfy the equation

$\mathbf{R}\,\mathbf{u}_i = \lambda_i\,\mathbf{u}_i$   (1.349)

Example 1.8.1
Let $\{w(k)\}$ be a white noise process. Its autocorrelation matrix $\mathbf{R}$ assumes the form

$\mathbf{R} = \begin{bmatrix} \sigma_w^2 & 0 & \cdots & 0 \\ 0 & \sigma_w^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_w^2 \end{bmatrix}$   (1.350)
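The structural properties of the autocorrelation matrix are easy to exhibit numerically. The sketch below uses an assumed AR(1)-type ACS $r_x(n) = \rho^{|n|}$ (an illustration, not an example from the book) and confirms that the resulting Toeplitz matrix is symmetric and positive definite.

```python
# Sketch: autocorrelation matrix built from the assumed ACS r_x(n) = rho^|n|;
# it is symmetric (real process), Toeplitz and positive definite.
import numpy as np

N, rho = 8, 0.9
idx = np.arange(N)
R = rho ** np.abs(np.subtract.outer(idx, idx))   # [R]_{mn} = r_x(m - n)

lam = np.linalg.eigvalsh(R)                      # eigenvalues of a Hermitian matrix
print(np.allclose(R, R.T), lam.min() > 0)
print(np.trace(R), lam.sum())                    # trace equals the eigenvalue sum
```

Here `eigvalsh` exploits the Hermitian structure; all eigenvalues come out real and strictly positive.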
from which it follows that

$\lambda_1 = \lambda_2 = \cdots = \lambda_N = \sigma_w^2$   (1.351)

and $\mathbf{u}_i$ can be any arbitrary vector.

Example 1.8.2
We define a complex-valued sinusoid as

$x(k) = e^{j(\omega k + \varphi)} \qquad \omega = 2\pi f T_c$   (1.352)

with $\varphi$ uniform r.v. in $[0, 2\pi)$. The matrix $\mathbf{R}$ is given by

$\mathbf{R} = \begin{bmatrix} 1 & e^{-j\omega} & \cdots & e^{-j(N-1)\omega} \\ e^{j\omega} & 1 & \cdots & e^{-j(N-2)\omega} \\ \vdots & \vdots & \ddots & \vdots \\ e^{j(N-1)\omega} & e^{j(N-2)\omega} & \cdots & 1 \end{bmatrix}$   (1.353)

One can see that the rank of $\mathbf{R}$ is 1, and it will therefore have only one nonzero eigenvalue. A possible solution is given by

$\lambda_1 = N$   (1.354)

and the relative eigenvector is

$\mathbf{u}_1 = [1,\ e^{j\omega},\ \ldots,\ e^{j(N-1)\omega}]^T$   (1.355)

Other properties
1. From $\mathbf{R}^m \mathbf{u} = \lambda^m \mathbf{u}$ we obtain the relations of Table 1.7.

Table 1.7 Correspondence between eigenvalues and eigenvectors of four matrices.

Matrix: $\mathbf{R}$, eigenvalue $\lambda_i$, eigenvector $\mathbf{u}_i$
Matrix: $\mathbf{R}^m$, eigenvalue $\lambda_i^m$, eigenvector $\mathbf{u}_i$
Matrix: $\mathbf{R}^{-1}$, eigenvalue $\lambda_i^{-1}$, eigenvector $\mathbf{u}_i$
Matrix: $\mathbf{I} - \mu\mathbf{R}$, eigenvalue $1 - \mu\lambda_i$, eigenvector $\mathbf{u}_i$

2. If the eigenvalues are distinct, then the eigenvectors are linearly independent:

$\sum_{i=1}^{N} c_i\, \mathbf{u}_i \neq \mathbf{0}$   (1.356)

for all combinations of $\{c_i\}$ not all equal to zero. Therefore, in this case, the eigenvectors form a basis in $\mathbb{C}^N$.
Eigenvalue analysis for Hermitian matrices

As previously seen, the autocorrelation matrix $\mathbf{R}$ is Hermitian. Consequently, it enjoys the following properties, valid for Hermitian matrices.

1. The eigenvalues of a Hermitian matrix are real. By left multiplying both sides of (1.349) by $\mathbf{u}_i^H$, it follows that

$\mathbf{u}_i^H \mathbf{R}\,\mathbf{u}_i = \lambda_i\, \mathbf{u}_i^H \mathbf{u}_i$   (1.357)

from which one gets

$\lambda_i = \frac{\mathbf{u}_i^H \mathbf{R}\,\mathbf{u}_i}{\mathbf{u}_i^H \mathbf{u}_i} = \frac{\mathbf{u}_i^H \mathbf{R}\,\mathbf{u}_i}{\|\mathbf{u}_i\|^2}$   (1.358)

The ratio (1.358) is defined as the Rayleigh quotient. As $\mathbf{R}$ is positive semidefinite, $\mathbf{u}_i^H \mathbf{R}\,\mathbf{u}_i \geq 0$, from which it follows that $\lambda_i \geq 0$.

2. If the eigenvalues of $\mathbf{R}$ are distinct, then the eigenvectors are orthogonal. In fact, from (1.349) one gets

$\mathbf{u}_i^H \mathbf{R}\,\mathbf{u}_j = \lambda_j\, \mathbf{u}_i^H \mathbf{u}_j$   (1.359)
$\mathbf{u}_i^H \mathbf{R}\,\mathbf{u}_j = \lambda_i\, \mathbf{u}_i^H \mathbf{u}_j$   (1.360)

where the second equation follows from the Hermitian symmetry of $\mathbf{R}$. Subtracting the second equation from the first:

$0 = (\lambda_j - \lambda_i)\, \mathbf{u}_i^H \mathbf{u}_j$   (1.361)

and, since $\lambda_j \neq \lambda_i$ by hypothesis, it follows that $\mathbf{u}_i^H \mathbf{u}_j = 0$.

3. The trace of a matrix $\mathbf{R}$, indicated with $\mathrm{tr}[\mathbf{R}]$, is defined as the sum of the elements of the main diagonal. It holds that

$\mathrm{tr}[\mathbf{R}] = \sum_{i=1}^{N} \lambda_i$   (1.362)

4. If the eigenvalues of $\mathbf{R}$ are distinct and their corresponding eigenvectors are normalized,

$\|\mathbf{u}_i\|^2 = \mathbf{u}_i^H \mathbf{u}_i = 1, \qquad \mathbf{u}_i^H \mathbf{u}_j = 0 \ \text{for}\ i \neq j$   (1.364)

then the matrix

$\mathbf{U} = [\mathbf{u}_1,\ \mathbf{u}_2,\ \ldots,\ \mathbf{u}_N]$   (1.365)

whose columns are the eigenvectors of $\mathbf{R}$,
is a unitary matrix, that is

$\mathbf{U}^{-1} = \mathbf{U}^H$   (1.366)

This property is an immediate consequence of the orthogonality of the eigenvectors $\{\mathbf{u}_i\}$. Moreover, if we define the matrix

$\mathbf{\Lambda} = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_N \end{bmatrix}$   (1.367)

we get

$\mathbf{U}^H \mathbf{R}\,\mathbf{U} = \mathbf{\Lambda}$   (1.368)

From (1.368) we obtain the following important relations:

$\mathbf{R} = \mathbf{U}\mathbf{\Lambda}\mathbf{U}^H = \sum_{i=1}^{N} \lambda_i\, \mathbf{u}_i \mathbf{u}_i^H$   (1.369)

and

$\mathbf{I} - \mu\mathbf{R} = \mathbf{U}(\mathbf{I} - \mu\mathbf{\Lambda})\mathbf{U}^H = \sum_{i=1}^{N} (1 - \mu\lambda_i)\, \mathbf{u}_i \mathbf{u}_i^H$   (1.370)

5. The eigenvalues of a positive semidefinite autocorrelation matrix $\mathbf{R}$ and the PSD of $x$ are related by the inequalities

$\min_f\{P_x(f)\} \leq \lambda_i \leq \max_f\{P_x(f)\} \qquad i = 1, \ldots, N$   (1.371)

In fact, let $U_i(f)$ be the Fourier transform of the sequence represented by the elements of $\mathbf{u}_i$:

$U_i(f) = \sum_{n=1}^{N} u_{i,n}\, e^{-j2\pi f n T_c}$   (1.372)

where $u_{i,n}$ is the $n$th element of the eigenvector $\mathbf{u}_i$. Observing that

$\mathbf{u}_i^H \mathbf{R}\,\mathbf{u}_i = \sum_{n=1}^{N}\sum_{m=1}^{N} u_{i,n}^*\, r_x(n-m)\, u_{i,m}$   (1.373)

and using (1.249), the preceding equation can be written as

$\mathbf{u}_i^H \mathbf{R}\,\mathbf{u}_i = \int_{-1/(2T_c)}^{1/(2T_c)} P_x(f) \left(\sum_{n=1}^{N} u_{i,n}^*\, e^{j2\pi f n T_c}\right)\left(\sum_{m=1}^{N} u_{i,m}\, e^{-j2\pi f m T_c}\right) df = \int_{-1/(2T_c)}^{1/(2T_c)} P_x(f)\, |U_i(f)|^2\, df$   (1.374)
Substituting the latter result in the Rayleigh quotient, one finds

$\lambda_i = \frac{\displaystyle\int_{-1/(2T_c)}^{1/(2T_c)} P_x(f)\, |U_i(f)|^2\, df}{\displaystyle\int_{-1/(2T_c)}^{1/(2T_c)} |U_i(f)|^2\, df}$   (1.375)

from which (1.371) follows. If we indicate with $\lambda_{min}$ and $\lambda_{max}$, respectively, the minimum and maximum eigenvalue of $\mathbf{R}$, in view of the latter point we can define the eigenvalue spread as

$\chi(\mathbf{R}) = \frac{\lambda_{max}}{\lambda_{min}} \leq \frac{\max_f\{P_x(f)\}}{\min_f\{P_x(f)\}}$   (1.376)

From (1.376) we observe that $\chi(\mathbf{R})$ may assume large values in the case $P_x(f)$ exhibits large variations; moreover, $\chi(\mathbf{R})$ assumes the minimum value of 1 for a white process.

1.9 Examples of random processes

Before reviewing some important random processes, we recall the definition of a Gaussian complex-valued random vector.

Example 1.9.1
A r.v. $x_i \sim \mathcal{N}(m_i, \sigma_i^2)$ with a Gaussian distribution can be generated from two r.v.s with uniform distribution (see Appendix 1.B for an illustration of the method).

Example 1.9.2
Let $\mathbf{x}^T = [x_1, \ldots, x_N]$ be a real Gaussian random vector. The joint probability density function is

$p_{\mathbf{x}}(\boldsymbol{\xi}) = \left[(2\pi)^N \det\mathbf{C}_N\right]^{-1/2} e^{-\frac{1}{2}(\boldsymbol{\xi} - \mathbf{m}_x)^T \mathbf{C}_N^{-1} (\boldsymbol{\xi} - \mathbf{m}_x)}$   (1.377)

where $\boldsymbol{\xi}^T = [\xi_1, \ldots, \xi_N]$, $\mathbf{m}_x = E[\mathbf{x}]$ is the vector of mean values, and $\mathbf{C}_N = E[(\mathbf{x} - \mathbf{m}_x)(\mathbf{x} - \mathbf{m}_x)^T]$ is the covariance matrix.

Example 1.9.3
Let $\mathbf{x}^T = [x_{1,I} + jx_{1,Q}, \ldots, x_{N,I} + jx_{N,Q}]$ be a complex-valued Gaussian random vector. If the in-phase component $x_{i,I}$ and the quadrature component $x_{i,Q}$ are uncorrelated, with

$\sigma_{x_{i,I}}^2 = \sigma_{x_{i,Q}}^2 = \frac{1}{2}\sigma_{x_i}^2$   (1.378)

$E[(x_{i,I} - m_{x_{i,I}})(x_{i,Q} - m_{x_{i,Q}})] = 0 \qquad i = 1, \ldots, N$   (1.379)
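The eigenvalue bounds and the eigenvalue-spread inequality (1.376) can be checked numerically; the AR(1)-type ACS, its closed-form PSD and the choice $T_c = 1$ below are illustrative assumptions of the sketch.

```python
# Numeric check (assumed AR(1) example, Tc = 1) of the bounds
# min_f Px(f) <= lambda_i <= max_f Px(f) and of the eigenvalue spread.
import numpy as np

N, rho = 16, 0.8
idx = np.arange(N)
R = rho ** np.abs(np.subtract.outer(idx, idx))   # r_x(n) = rho^|n|
lam = np.linalg.eigvalsh(R)

# PSD of r_x(n) = rho^|n| with Tc = 1 (closed form of the geometric series)
f = np.linspace(-0.5, 0.5, 20001)
Px = (1 - rho**2) / (1 + rho**2 - 2 * rho * np.cos(2 * np.pi * f))

chi = lam.max() / lam.min()                      # eigenvalue spread
print(Px.min() <= lam.min(), lam.max() <= Px.max())
print(chi, Px.max() / Px.min())
```

For this strongly coloured process the PSD ratio is $81$, so the spread can be large, whereas for a white process $\chi(\mathbf{R}) = 1$.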
Then the joint probability density function is

p_x(ξ) = [π^N det C_N]^{−1} e^{−(ξ − m_x)^H C_N^{−1} (ξ − m_x)}    (1.380)

with the vector of mean values and the covariance matrix given by

m_x = E[x] = E[x_I] + j E[x_Q]    (1.381)

C_N = E[(x − m_x)(x − m_x)^H]    (1.382)

The vector x is called a circularly symmetric Gaussian random vector.

Example 1.9.4
Let x^T = [x(t_1), x(t_2), ..., x(t_N)] be a complex-valued Gaussian (vector) process, with each element x(t_i) having real and imaginary components that are uncorrelated Gaussian r.v.s with zero mean and equal variance for all values of t_i. The joint probability density function in this case results in

p_x(ξ) = [π^N det C]^{−1} e^{−ξ^H C^{−1} ξ}    (1.383)

where C is the covariance matrix of [x(t_1), ..., x(t_N)]. The vector x is called a circularly symmetric Gaussian random process.

Example 1.9.5
Let x(t) = A sin(2πf t + φ) be a real-valued sinusoidal signal with φ a r.v. uniform in [0, 2π), for which we will use the notation φ ∈ U[0, 2π). The mean of x is

m_x(t) = E[x(t)] = (1/(2π)) ∫_0^{2π} A sin(2πf t + a) da = 0    (1.384)

and the autocorrelation function is given by

r_x(τ) = (1/(2π)) ∫_0^{2π} A sin(2πf t + a) A sin[2πf (t − τ) + a] da = (A²/2) cos(2πf τ)    (1.385)

Example 1.9.6
Given N real-valued sinusoidal signals,

x(t) = Σ_{i=1}^{N} A_i sin(2πf_i t + φ_i)    (1.386)
with {φ_i} statistically independent uniform r.v.s in [0, 2π), following a procedure similar to that used in Example 1.9.5 we find the mean value

m_x(t) = 0    (1.387)

and the autocorrelation function

r_x(τ) = Σ_{i=1}^{N} (A_i²/2) cos(2πf_i τ)    (1.388)

We note that, according to Definition 1.12, page 48, the process (1.386) is not asymptotically uncorrelated.

Example 1.9.7
Given N complex-valued sinusoidal signals,

x(t) = Σ_{i=1}^{N} A_i e^{j(2πf_i t + φ_i)}    (1.389)

with {φ_i} statistically independent uniform r.v.s in [0, 2π), we find

r_x(τ) = Σ_{i=1}^{N} |A_i|² e^{j2πf_i τ}    (1.390)

We note that the process (1.389) is not asymptotically uncorrelated.

Example 1.9.8
Let the discrete-time random process y(k) be given by the sum of the random process x(k) of Example 1.9.7 and white noise w(k) with variance σ_w²; moreover, we assume {x(k)} and {w(k)} are uncorrelated. In this case

r_y(n) = Σ_{i=1}^{N} |A_i|² e^{j2πf_i nT_c} + σ_w² δ_n    (1.391)

Example 1.9.9
We consider a signal obtained by pulse-amplitude modulation (PAM), expressed as

y(t) = Σ_{k=−∞}^{+∞} x(k) h_{Tx}(t − kT)    (1.392)

where h_{Tx} is a finite-energy pulse and {x(k)} is a discrete-time (with T-spaced samples) WSS sequence.
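The averages over the random phase in Examples 1.9.5 and 1.9.6 can be verified by a quick Monte Carlo experiment. This is a sketch with arbitrarily chosen amplitude, frequency, observation time and lag (none of these values come from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A, f0 = 2.0, 0.05            # arbitrary amplitude and frequency
t, tau = 3.7, 1.3            # arbitrary observation time and lag
phi = rng.uniform(0, 2 * np.pi, 200_000)    # one random phase per realization

x_t = A * np.sin(2 * np.pi * f0 * t + phi)
x_lag = A * np.sin(2 * np.pi * f0 * (t - tau) + phi)

mean_est = x_t.mean()                  # ensemble estimate of m_x(t): should vanish
r_est = (x_t * x_lag).mean()           # ensemble estimate of r_x(tau)
r_theory = A**2 / 2 * np.cos(2 * np.pi * f0 * tau)   # (1.385)
```

Changing t leaves both estimates unchanged, consistent with the fact that the random-phase sinusoid is WSS.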
The signal y(t) is the output of the system shown in Figure 1.36. In general y is a cyclostationary process of period T. In fact we have:

1. Mean

m_y(t) = m_x Σ_{k=−∞}^{+∞} h_{Tx}(t − kT)    (1.393)

2. Correlation

r_y(t, t − τ) = Σ_{i=−∞}^{+∞} r_x(i) Σ_{m=−∞}^{+∞} h_{Tx}(t − (i + m)T) h*_{Tx}(t − τ − mT)    (1.394)

If we introduce the average spectral analysis, we have

m̄_y = (1/T) ∫_0^T m_y(t) dt = (m_x/T) H_{Tx}(0)    (1.395)

Let r_{h_Tx}(τ) be the deterministic autocorrelation of the signal h_{Tx},

r_{h_Tx}(τ) = h_{Tx}(τ) ∗ h*_{Tx}(−τ) = ∫_{−∞}^{+∞} h_{Tx}(t) h*_{Tx}(t − τ) dt    (1.396)

with Fourier transform equal to |H_{Tx}(f)|². Then

r̄_y(τ) = (1/T) ∫_0^T r_y(t, t − τ) dt = (1/T) Σ_{i=−∞}^{+∞} r_x(i) r_{h_Tx}(τ − iT)    (1.397)

and

P̄_y(f) = F[r̄_y(τ)] = |(1/T) H_{Tx}(f)|² P_x(f)    (1.398)

We note that P_x(f) is a periodic function of period 1/T. From (1.398) we observe that the modulator of a PAM system may be regarded as an interpolator filter with frequency response H_{Tx}/T.

3. Average power for a white noise input
For a white noise input with power M_x, the average statistical power of the output signal is given by

M̄_y = M_x E_h / T    (1.399)

where E_h = ∫_{−∞}^{+∞} |h_{Tx}(t)|² dt is the energy of h_{Tx}.

Figure 1.36. Modulator of a PAM system as interpolator filter.
4. Moments of y for a circularly symmetric i.i.d. input
Let x(k) be a complex-valued random circularly symmetric sequence with zero mean (see (1.378) and (1.379)); letting x_I(k) = Re[x(k)] and x_Q(k) = Im[x(k)], this requires that the following two conditions are verified:

E[x_I²(k)] = E[x_Q²(k)]    (1.400)

and

E[x_I(k) x_Q(k)] = 0    (1.401)

These two relations can be merged into one, i.e.

E[x²(k)] = E[x_I²(k)] − E[x_Q²(k)] + 2j E[x_I(k) x_Q(k)] = 0    (1.402)

Filtering the i.i.d. input {x(k)} by the scheme depicted in Figure 1.36, and observing the relation

r_{xx*}(i) = E[x(k) x(k − i)] = E[x²(k)] δ(i)    (1.403)

we have

r_{yy*}(t, t − τ) = Σ_{i=−∞}^{+∞} r_{xx*}(i) Σ_{m=−∞}^{+∞} h_{Tx}(t − (i + m)T) h_{Tx}(t − τ − mT) = 0    (1.405)

that is y(t) ⊥ y*(t), i.e. y(t) is circularly symmetric. We note that this condition can be obtained assuming the less stringent condition that x ⊥ x*, i.e.

r_{xx*}(i) = 0    (1.407)

Observation 1.7
It can be shown that if the filter h_{Tx} has a bandwidth smaller than 1/(2T) and x is a WSS sequence, then y is WSS with spectral density given by (1.398).
Example 1.9.10
Let us consider a PAM signal sampled with period T_Q = T/Q_0, where Q_0 is a positive integer. Let h_p = h_{Tx}(p T_Q); from (1.392) it follows that

y_q = y(q T_Q) = Σ_{k=−∞}^{+∞} x(k) h_{q − k Q_0}    (1.411)

Equation (1.411) describes the input–output relation of an interpolator filter (see (1.609)). We denote with H(f) the Fourier transform (see (1.84)) and with r_h(n) the deterministic autocorrelation (see (1.260)) of the sequence {h_p}. If Q_0 ≠ 1, {y_q} is a cyclostationary random sequence of period Q_0. Recalling the statistical analysis given in Table 1.6, page 52, we have:

1. Mean

m̄_y = (1/Q_0) Σ_{q=0}^{Q_0−1} m_y(q) = (m_x/Q_0) H(0)    (1.413)

2. Correlation

r̄_y(n) = (1/Q_0) Σ_{q=0}^{Q_0−1} r_y(q, q − n) = (1/Q_0) Σ_{i=−∞}^{+∞} r_x(i) r_h(n − i Q_0)    (1.415)

In general, the average PSD is given by

P̄_y(f) = T_Q F[r̄_y(n)] = |(1/Q_0) H(f)|² P_x(f)    (1.417)

If {x(k)} is white noise with power M_x, then

r̄_y(n) = (M_x/Q_0) r_h(n)    (1.418)

and the average power of the filter output signal is given by

M̄_y = r̄_y(0) = M_x Ē_h / Q_0    (1.419)

where Ē_h = Σ_{p=−∞}^{+∞} |h_p|² is the energy of {h_p}. We point out that the condition M̄_y = M_x is satisfied if the energy of the filter impulse response is equal to the interpolation factor Q_0.
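The relation between input power, pulse energy and interpolation factor can be checked by simulation. A minimal sketch, with a hypothetical short pulse {h_p} and unit-power white data (values chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
Q0 = 4                                    # interpolation factor T / T_Q
K = 50_000
x = rng.standard_normal(K)                # white data sequence with power M_x = 1
h = np.array([0.2, 0.5, 1.0, 0.5, 0.2])   # hypothetical short pulse {h_p}

up = np.zeros(K * Q0)
up[::Q0] = x                              # insert Q0 - 1 zeros between samples
y = np.convolve(up, h)[:K * Q0]           # y_q = sum_k x(k) h_{q - k Q0}

My = np.mean(np.abs(y) ** 2)              # time-averaged output power
Eh = np.sum(np.abs(h) ** 2)               # energy of {h_p}
# expected: My is close to Eh / Q0
```

Scaling the pulse so that its energy equals Q_0 would make the output power equal to the input power.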
1.10 Matched filter

Referring to Figure 1.37, we consider a finite-energy signal pulse g in the presence of additive noise w having zero mean and power spectral density P_w(f). The signal

x(t) = g(t) + w(t)    (1.420)

is filtered with a filter having impulse response g_M. The output is expressed as

y(t) = g_u(t) + w_u(t)    (1.421)

We indicate with g_u and w_u respectively the desired signal and the noise component at the output:

g_u(t) = g_M ∗ g(t)    w_u(t) = g_M ∗ w(t)    (1.422)

We now suppose that y is observed at a given instant t_0. The problem is to determine g_M so that the ratio between the squared amplitude of g_u(t_0) and the power of w_u(t_0) is maximum, i.e.

max_{g_M} |g_u(t_0)|² / E[|w_u(t_0)|²]    (1.424)

The optimum filter has frequency response

G_M(f) = K (G*(f) / P_w(f)) e^{−j2πf t_0}    (1.425)

where K is a constant. Implicitly it is assumed that P_w(f) ≠ 0. In other words, the best filter selects the frequency components of the desired input signal and weights them with weights that are inversely proportional to the noise level.

Proof. The desired signal g_u(t_0) coincides with the inverse Fourier transform of G_M(f) G(f) evaluated in t = t_0, while the power of the noise component w_u(t_0) is equal to

E[|w_u(t_0)|²] = r_{w_u}(0) = ∫_{−∞}^{+∞} P_w(f) |G_M(f)|² df    (1.426)

Figure 1.37. Reference scheme for the matched filter: x(t) = g(t) + w(t) is filtered by g_M, with G_M(f) = K (G*(f)/P_w(f)) e^{−j2πf t_0}, giving y(t) = g_u(t) + w_u(t), sampled at t_0.
Then

|g_u(t_0)|² = |∫_{−∞}^{+∞} G_M(f) G(f) e^{j2πf t_0} df|²    (1.427)

Dividing and multiplying the integrand by √P_w(f),

|g_u(t_0)|² = |∫_{−∞}^{+∞} [√P_w(f) G_M(f)] [G(f) e^{j2πf t_0} / √P_w(f)] df|²    (1.428)

Applying the Schwarz inequality (see Section 1.1) to the functions √P_w(f) G_M(f) and G(f) e^{j2πf t_0} / √P_w(f), it turns out

|g_u(t_0)|² ≤ ∫_{−∞}^{+∞} P_w(f) |G_M(f)|² df ∫_{−∞}^{+∞} (|G(f)|² / P_w(f)) df    (1.429)

Therefore the maximum value of the ratio (1.424) is equal to

∫_{−∞}^{+∞} |G(f)|² / P_w(f) df    (1.430)

and is achieved for

√P_w(f) G_M(f) = K (G(f) e^{j2πf t_0} / √P_w(f))*    (1.431)

where K is a constant. From (1.431) the solution (1.425) follows immediately.

Matched filter in the presence of white noise
If w is white, then P_w(f) = P_w is a constant and the optimum solution (1.425) becomes

G_M(f) = K G*(f) e^{−j2πf t_0}    (1.433)

Correspondingly, the filter has impulse response

g_M(t) = K g*(t_0 − t)    (1.434)

from which comes the name of matched filter (MF), matched to the input signal pulse. The desired signal pulse at the filter output has the frequency response

G_u(f) = K |G(f)|² e^{−j2πf t_0}
In the time domain, the signal component at the filter output is

g_u(t) = K r_g(t − t_0)    (1.435)

where r_g is the autocorrelation function of g,

r_g(τ) = ∫_{−∞}^{+∞} g(a) g*(a − τ) da    (1.436)

If E_g is the energy of g then, using the relation E_g = r_g(0), the maximum of the functional (1.424) becomes

|g_u(t_0)|² / r_{w_u}(0) = |K|² r_g²(0) / (P_w |K|² r_g(0)) = r_g(0) / P_w = E_g / P_w    (1.437)

Figure 1.38. Matched filter for an input pulse in the presence of white noise: x(t) = g(t) + w(t) is filtered by g_M(t) = K g*(t_0 − t), yielding y(t) = K r_g(t − t_0) + w_u(t), sampled at t_0.

Example 1.10.1 (MF for a rectangular pulse)
Let

g(t) = w_T(t) = rect((t − T/2)/T)    (1.438)

For t_0 = T, the matched filter is proportional to g,

g_M(t) = K w_T(t_0 − t)    (1.439)

with

r_g(τ) = T (1 − |τ|/T) rect(τ/(2T))    (1.440)

and the output pulse in the absence of noise is equal to

g_u(t) = K T (1 − |t − T|/T)  for 0 < t < 2T,  and 0 elsewhere    (1.441)

The different pulse shapes are illustrated in Figure 1.39 for a signal pulse g with limited duration t_g. Note that in this case the matched filter also has limited duration, and it is causal if t_0 ≥ t_g.
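A discrete-time sketch of the rectangular-pulse example (T_c = 1, K = 1, values chosen for illustration): the noiseless matched-filter output is the triangular autocorrelation of the pulse, peaking at the sampling instant with value equal to the pulse energy E_g.

```python
import numpy as np

T = 16                                   # pulse duration in samples (T_c = 1)
g = np.ones(T)                           # rectangular pulse g
gM = np.conj(g[::-1])                    # matched filter: time-reversed conjugate pulse
gu = np.convolve(g, gM)                  # noiseless output: shifted autocorrelation of g

Eg = np.sum(np.abs(g) ** 2)              # pulse energy E_g
peak_index = int(gu.argmax())            # triangular output peaks at index T - 1
peak_value = gu.max()                    # equals r_g(0) = E_g
```

With additive white noise of variance σ², the noise power at the output is σ² E_g, so the peak signal-to-noise ratio is E_g/σ², in agreement with (1.437).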
Figure 1.39. Various pulse shapes related to a matched filter.

1.11 Ergodic random processes

The functions that have been introduced in the previous sections for the analysis of random processes give a valid statistical description of an ensemble of realizations of a random process. We investigate now the possibility of moving from ensemble averaging to time averaging, that is we consider the problem of estimating a statistical descriptor of a random process from the observation of a single realization. Let x be a discrete-time WSS random process having mean m_x = E[x(k)]. If in the limit it holds(18)

lim_{K→∞} (1/K) Σ_{k=0}^{K−1} x(k) = m_x    (1.442)

(18) The limit is meant in the mean-square sense, that is the variance of the r.v. (1/K) Σ_{k=0}^{K−1} x(k) − m_x vanishes for K → ∞.
then x is said to be ergodic in the mean. In other words, the time average of the samples tends to the statistical mean as the number of samples increases. We note that the existence of the limit (1.442) implies the condition

lim_{K→∞} E[ |(1/K) Σ_{k=0}^{K−1} x(k) − m_x|² ] = 0    (1.443)

or equivalently

lim_{K→∞} (1/K) Σ_{n=−(K−1)}^{K−1} (1 − |n|/K) c_x(n) = 0    (1.444)

From (1.444) we see that for a random process to be ergodic in the mean, some conditions on the second-order statistics must be verified. Analogously to definition (1.442), we say that x is ergodic in correlation if in the limit it holds:

lim_{K→∞} (1/K) Σ_{k=0}^{K−1} x(k) x*(k − n) = E[x(k) x*(k − n)] = r_x(n)    (1.445)

Letting y(k) = x(k) x*(k − n), we find that the ergodicity in correlation of the process x is equivalent to the ergodicity in the mean of the process y. Therefore it is easy to deduce that the condition (1.444) for y translates into a condition on the statistical moments of the fourth order for x. Ergodicity is, however, difficult to prove for non-Gaussian random processes. Alternatively, one could prove that under the hypothesis(19)

Σ_{n=−∞}^{+∞} |n| |r_x(n)| < ∞    (1.446)

the following limit holds:

lim_{K→∞} E[ (1/(K T_c)) |T_c Σ_{k=0}^{K−1} x(k) e^{−j2πf kT_c}|² ] = P_x(f)    (1.447)
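Ergodicity in the mean can be illustrated by simulation: for a process whose covariance decays fast enough, a single realization's time average converges to the ensemble mean. A sketch with a hypothetical first-order recursive process (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
a, mx, K = 0.9, 3.0, 100_000
x = np.empty(K)
x[0] = mx
for k in range(1, K):
    # first-order recursion with mean m_x; its covariance c_x(n) decays as a^|n|,
    # so condition (1.444) is satisfied
    x[k] = mx * (1 - a) + a * x[k - 1] + rng.standard_normal()

time_avg = x.mean()      # time average over a single realization
```

For a counterexample, x(k) = A with A a random variable has time average A, not E[A], and is not ergodic.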
Then, exploiting the ergodicity of a WSS random process, from a single realization one obtains the relations among the process itself, its autocorrelation function and power spectral density shown in Figure 1.40. The property of ergodicity assumes a fundamental importance if we observe that from a single realization it is possible to obtain an estimate of the autocorrelation function and, from this, the power spectral density. In practice, we will assume all stationary processes to be ergodic. We will not consider particular processes that are not ergodic, such as x(k) = A, where A is a random variable, or x(k) equal to the sum of sinusoidal signals (see (1.386)).

(19) We note that for random processes with nonzero mean and/or sinusoidal components this property is not verified. Therefore it is usually recommended that the deterministic components of the process be removed before the spectral estimation is performed.
Figure 1.40. Relation between ergodic processes and their statistical description.

We note how the direct computation of the PSD, given by (1.449), makes use of a statistical ensemble of the Fourier transform of the process x, while the indirect method via the ACS makes use of a single realization. If we let

X̃_{KT_c}(f) = T_c F[x(k) w_K(k)]    (1.448)

where w_K is the rectangular window of length K (see (1.474)), and T_d = K T_c, then X̃_{T_d}(f) denotes the Fourier transform of the windowed realization of the process, and (1.447) becomes

P_x(f) = lim_{T_d→∞} E[|X̃_{T_d}(f)|²] / T_d    (1.449)

The relation (1.449) holds also for continuous-time ergodic random processes.

1.11.1 Mean value estimators

Given the random process {x(k)}, we wish to estimate the mean value of a related process {y(k)}: for example, to estimate the statistical power of x we set y(k) = |x(k)|², while for the estimation of the correlation of x with lag n we set y(k) = x(k) x*(k − n). Based on a realization of {y(k)}, from (1.442) an estimate of the mean value of y is given by the expression

m̂_y = (1/K) Σ_{k=0}^{K−1} y(k)    (1.450)
In fact, (1.450) attempts to determine the average component of the signal {y(k)}; in general we can think of extracting the average component of {y(k)} using a lowpass (LPF) filter h having unit gain, H(0) = 1, and suitable bandwidth B, as illustrated in Figure 1.41a. Let K be the length of the impulse response, with support from k = 0 to k = K − 1. Therefore we assume

m̂_y = z(k) = h ∗ y(k)    for k ≥ K − 1    (1.452)

Note that for a unit step input signal the transient part of the output signal will last K − 1 time instants. We now compute mean and variance of the estimate. From (1.452), the mean value is given by

E[m̂_y] = m_y H(0) = m_y    (1.452)

Figure 1.41. (a) Time average as output of a narrowband lowpass filter. (b) Typical impulse responses: exponential filter with parameter a = 1 − 2^{−5} and rectangular window with K = 33. (c) Corresponding frequency responses.
as H(0) = 1. Using the expression in Table 1.6 of the correlation of a filter output signal given the input, the variance of the estimate is given by

var[m̂_y] = Σ_{n=−∞}^{+∞} r_h(n) c_y(n)    (1.453)

Assuming

S = Σ_{n=−∞}^{+∞} |c_y(n)| = σ_y² Σ_{n=−∞}^{+∞} (|c_y(n)| / σ_y²) < ∞    (1.454)

and being |r_h(n)| ≤ r_h(0), the variance in (1.453) is bounded by

var[m̂_y] ≤ E_h S    (1.455)

where E_h = r_h(0). We introduce the criterion that for a good estimate it must be

var[m̂_y] ≤ ε    with ε ≪ |m_y|²    (1.456)

For an ideal lowpass filter

H(f) = rect(f/(2B))    |f| < 1/(2T_c)    (1.457)

it results in E_h = 2B and K ≃ 1/B, assuming as filter length K that of the principal lobe of {h(k)}. Then, for a fixed ε, from (1.455) and (1.456) it follows

B ≤ ε/(2S)    (1.458)

and

K ≥ 2S/ε    (1.459)

In other words, to obtain good estimates for those processes {y(k)} that exhibit larger variance and/or larger correlation among samples, the length K of the filter impulse response must be larger, or equivalently the bandwidth B must be smaller. Because of their simple implementation, two commonly used filters are the rectangular window and the exponential filter, whose impulse responses are shown in Figure 1.41.

Rectangular window

h(k) = 1/K    for k = 0, 1, ..., K − 1,  and 0 elsewhere    (1.460)
The frequency response is given by

H(f) = (1/K) e^{−j2πf ((K−1)/2) T_c} (sin(πf K T_c) / sin(πf T_c))    (1.461)

For the rectangular window we have E_h = 1/K and, adopting as bandwidth the frequency of the first zero of |H(f)|, B = 1/(K T_c).

Exponential filter

h(k) = (1 − a) a^k    for k ≥ 0,  and 0 elsewhere    (1.462)

with |a| < 1. The frequency response is given by

H(f) = (1 − a) / (1 − a e^{−j2πf T_c})    (1.463)

Moreover, E_h = (1 − a)/(1 + a). The 3 dB filter bandwidth is equal to

B = (1 − a)/(2π T_c)    for a > 0.9    (1.464)

Adopting as length of h the time constant of the filter, i.e. the interval it takes for the amplitude of the impulse response to decrease by a factor e,

K − 1 = 1/ln(1/a) ≃ 1/(1 − a)    (1.465)

where the approximation holds for a ≃ 1. The filter output has a simple expression given by the recursive equation

z(k) = a z(k − 1) + (1 − a) y(k)    (1.466)

We note that, choosing a as

a = 1 − 2^{−l}    (1.467)

the expression (1.466) becomes

z(k) = z(k − 1) + 2^{−l} (y(k) − z(k − 1))    (1.468)

whose computation requires only two additions and one shift of l bits. Moreover, from (1.465), the filter time constant is given by

K − 1 = 2^l    (1.469)
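The shift-and-add recursion above is a standard way to track a slowly varying mean with minimal arithmetic. A minimal sketch (noise statistics and l are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
l = 5                                  # a = 1 - 2^{-l}: time constant about 2^l samples
my = 5.0
y = my + rng.standard_normal(50_000)   # noisy observations with true mean m_y = 5

z = y[0]
for yk in y[1:]:
    z = z + 2.0 ** -l * (yk - z)       # z(k) = z(k-1) + 2^{-l} (y(k) - z(k-1))
```

In a fixed-point implementation the multiplication by 2^{−l} is a right shift by l bits, which is why this filter is popular in hardware; the steady-state estimator variance is (1 − a)/(1 + a) times the observation variance.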
General window
In addition to the two filters described above, a general window can be defined as

h(k) = A w(k)    (1.472)

with {w(k)} a window(20) of length K. The factor A in (1.472) is introduced to normalize the area of h to 1. We note that, for random processes with slowly time-varying statistics, the recursive equations (1.466) and (1.468) give an expression to update the estimates.

(20) We define the continuous-time rectangular window with duration T_d as w_{T_d}(t) = rect((t − T_d/2)/T_d), equal to 1 for 0 < t < T_d and 0 elsewhere. Commonly used discrete-time windows, defined for k = 0, 1, ..., D − 1 and equal to 0 elsewhere, where D denotes the window length expressed in number of samples, are:

1. Rectangular window: w(k) = 1    (1.474)

2. Triangular or Bartlett window: w(k) = 1 − |k − (D−1)/2| / ((D−1)/2)    (1.475)

3. Hann window: w(k) = 0.50 + 0.50 cos(2π (k − (D−1)/2)/(D−1))    (1.476)

4. Raised cosine or Hamming window: w(k) = 0.54 + 0.46 cos(2π (k − (D−1)/2)/(D−1))    (1.477)

1.11.2 Correlation estimators

Let {x(k)}, k = 0, 1, ..., K − 1, be a realization of a random process with K samples. We examine two estimates.

1. Unbiased estimate

r̂_x(n) = (1/(K − n)) Σ_{k=n}^{K−1} x(k) x*(k − n)    n = 0, 1, ..., K − 1    (1.478)
The unbiased estimate has mean value equal to

E[r̂_x(n)] = (1/(K − n)) Σ_{k=n}^{K−1} E[x(k) x*(k − n)] = r_x(n)    (1.479)

If the process is Gaussian, one can show that the variance of the estimate is approximately given by

var[r̂_x(n)] ≃ (K/(K − n)²) Σ_{m=−∞}^{+∞} [r_x²(m) + r_x(m + n) r_x(m − n)]    (1.480)

Note that the variance of the estimate increases with the correlation lag n. It should also be noted that the estimate does not necessarily yield sequences that satisfy the properties of autocorrelation functions: for example, the following property may not be verified:

r̂_x(0) ≥ |r̂_x(n)|    n ≠ 0    (1.481)

2. Biased estimate

ř_x(n) = (1/K) Σ_{k=n}^{K−1} x(k) x*(k − n)    (1.482)

The mean value of the biased estimate satisfies the following relations:

E[ř_x(n)] = (1 − |n|/K) r_x(n) → r_x(n)    as K → ∞    (1.483)

Unlike the unbiased estimate, the mean of the biased estimate is not equal to the autocorrelation function, but approaches it as K increases. The biased estimate differs from the autocorrelation function by an additive term, denoted as BIAS:

BIAS = E[ř_x(n)] − r_x(n) = −(|n|/K) r_x(n)    (1.484)

For a Gaussian process, the variance of the biased estimate is expressed as

var[ř_x(n)] ≃ (1/K) Σ_{m=−∞}^{+∞} [r_x²(m) + r_x(m + n) r_x(m − n)]    (1.485)

from which it follows var[ř_x(n)] → 0 as K → ∞. In general, especially for large values of n, the unbiased estimate of the ACS exhibits a mean-square error(21) larger than the biased estimate.

(21) For example, for the estimator (1.482) the mean-square error is defined as E[|ř_x(n) − r_x(n)|²] = var[ř_x(n)] + |BIAS|²    (1.486)
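The two estimators differ only in the normalization, so the biased one is a deterministic shrinkage of the unbiased one. A minimal sketch on a white process (illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(4)
K = 10_000
x = rng.standard_normal(K)      # white process: r_x(0) = 1, r_x(n) = 0 for n != 0

def acs_estimate(x, n, biased):
    K = len(x)
    s = np.sum(x[n:] * np.conj(x[:K - n]))
    return s / K if biased else s / (K - n)    # (1.482) vs (1.478)

r0 = acs_estimate(x, 0, biased=True)           # both estimates agree at n = 0
n = 100
ratio_ok = np.isclose(acs_estimate(x, n, True),
                      (1 - n / K) * acs_estimate(x, n, False))
```

The second check makes the bias factor (1 − n/K) of (1.483) explicit: the biased estimate equals the unbiased one scaled by it.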
Based on (1.478), an estimate of the statistical power of {x(k)} is given by

M̂_x = (1/K) Σ_{k=0}^{K−1} |x(k)|² = (1/(K T_c)) ∫_{−1/(2T_c)}^{1/(2T_c)} |X̃(f)|² df    (1.487)

using the properties of the Fourier transform (Parseval theorem), where X̃(f) is the Fourier transform of {x(k)}, k = 0, ..., K − 1.

1.11.3 Power spectral density estimators

After examining ACS estimators, we review some spectral density estimation methods.

Periodogram or instantaneous spectrum
A PSD estimator called a periodogram is given by

P_PER(f) = (1/(K T_c)) |X̃(f)|²    (1.488)

We can write (1.488) as

P_PER(f) = T_c Σ_{n=−(K−1)}^{K−1} ř_x(n) e^{−j2πf nT_c}    (1.489)

that is, P_PER(f) is computed using the samples of the biased estimate ř_x(n) for lags up to K − 1. The mean of the estimate is

E[P_PER(f)] = T_c Σ_{n=−(K−1)}^{K−1} (1 − |n|/K) r_x(n) e^{−j2πf nT_c} = T_c W_B ∗ P_x(f)    (1.490)

where ∗ denotes periodic convolution in frequency and W_B(f) is the Fourier transform of the Bartlett window

w_B(n) = 1 − |n|/K    for |n| ≤ K − 1,  and 0 for |n| > K − 1    (1.491)

that is

W_B(f) = (1/K) [sin(πf K T_c) / sin(πf T_c)]²    (1.492)

We note that the periodogram estimate is affected by BIAS for finite K. Moreover, since P_PER(f) is computed using estimates ř_x(n) even for lags up to K − 1, whose variance is very large, it also exhibits a large variance.
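The large variance of the periodogram is easy to see in practice: its bin-average matches the true power, but individual bins fluctuate on the order of the PSD itself, no matter how large K is. A minimal sketch on white noise (illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(5)
Tc, K = 1.0, 1024
x = rng.standard_normal(K)                 # white noise, P_x(f) = 1

X = Tc * np.fft.fft(x)                     # samples of the Fourier transform of {x(k)}
Pper = np.abs(X) ** 2 / (K * Tc)           # periodogram (1.488) on K frequency bins

avg = Pper.mean()                          # averages to the power r_x(0) = 1 (Parseval)
spread = Pper.std()                        # per-bin fluctuations remain O(P_x(f))
```

Averaging over windows (Welch) or windowing the ACS (Blackman–Tukey), described next, trades resolution for a reduction of this variance.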
Welch periodogram
This method is based on applying (1.447) for finite K. Given a sequence of K samples, different subsequences of consecutive D samples are extracted. Subsequences may partially overlap, each being characterized by S samples in common with the preceding subsequence and with the following one, 0 ≤ S ≤ D/2, with the choice S = 0 yielding subsequences with no overlap and therefore with less correlation. The number of subsequences is(22)

N_s = ⌈K/(D − S)⌉    (1.494)

Let w be a window (see footnote 20 on page 82) of D samples, and let x^(s) be the s-th subsequence,

x^(s)(k) = w(k) x(k + s(D − S))    k = 0, ..., D − 1,  s = 0, ..., N_s − 1    (1.495)

For each s, compute the Fourier transform

X̃^(s)(f) = T_c Σ_{k=0}^{D−1} x^(s)(k) e^{−j2πf kT_c}    (1.496)

and obtain

P^(s)_PER(f) = (1/(D T_c M_w)) |X̃^(s)(f)|²    (1.497)

where

M_w = (1/D) Σ_{k=0}^{D−1} w²(k)    (1.498)

is the normalized energy of the window. As a last step, for each frequency, average the periodograms:

P_WE(f) = (1/N_s) Σ_{s=0}^{N_s−1} P^(s)_PER(f)    (1.499)

The mean of the estimate is given by

E[P_WE(f)] = (T_c/(D M_w)) [|W(f)|² ∗ P_x(f)]    (1.500)

where W(f) is the Fourier transform of the window w.

(22) The symbol ⌊a⌋ denotes the function floor, that is the largest integer smaller than or equal to a; the symbol ⌈a⌉ denotes the function ceiling, that is the smallest integer larger than or equal to a.
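The steps above can be sketched directly from the defining equations. This is an illustrative implementation on white noise (parameter values are arbitrary; only full windows are formed, a slight variation of (1.494) that avoids a partial last subsequence):

```python
import numpy as np

rng = np.random.default_rng(6)
Tc = 1.0
K, D, S = 8192, 256, 128                   # D-sample windows with S samples of overlap
x = rng.standard_normal(K)                 # white noise, P_x(f) = 1

w = np.hamming(D)
Mw = np.mean(w ** 2)                       # normalized window energy (1.498)
Ns = (K - D) // (D - S) + 1                # number of full windows that fit

P = np.zeros(D)
for s in range(Ns):
    seg = w * x[s * (D - S): s * (D - S) + D]
    Xs = Tc * np.fft.fft(seg)              # X^(s)(f) on a grid of D frequencies
    P += np.abs(Xs) ** 2 / (D * Tc * Mw)   # per-subsequence periodogram (1.497)
P /= Ns                                    # Welch average (1.499)
```

With the 1/(D T_c M_w) normalization, the estimate for unit-variance white noise is flat at 1, and its per-bin fluctuations shrink roughly as 1/N_s compared with the raw periodogram.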
f / (1. we see that the variance of the estimate is reduced by increasing the number of subsequences. To simplify the notation.505) Windowing and window closing The operation of windowing time samples in the periodogram.0/ D 1. f /] D Tc W.n/ e O j2³ f nTc (1. any truncation of a sequence is equivalent to a windowing operation. n D O transform PBT . In fact. Blackman and Tukey correlogram For an unbiased estimate of the ACS. f / D Tc L X nD L L .D both cases. we will use the same symbol in . From (1.86 Chapter 1. consider the Fourier w. leakage can mask spectral components that are further apart and have different amplitudes. f /] D 1 2 2L 2 Px . has a strong effect on the performance of the estimate. In terms of the mean value of the estimate. Smearing yields a lower spectral resolution. If K is the number of samples of the realization sequence. 23 The symbol ‘/’ indicates proportional. f /E w D P . 1/=2. 24 The windows used in (1. and the autocorrelation sequence in the correlogram.502).n/g. with w.502)).503) are the same as those introduced in note 20: the only difference is that they are now centered around zero instead of .n/rx . Elements of signal theory Assuming the process is Gaussian and the different subsequences are statistically independent. carried out via the function “rect”. that is the capability to distinguish two spectral lines that are close.502) Note that the partial overlap introduces correlation between subsequences. On the other hand. L. if the Bartlett window is chosen.504) For a Gaussian process. we ﬁnd E[PBT . Then if the Bartlett window (1.493) is chosen. : : : . f / ½ 0. D must be large enough so that the generic subsequence represents the process and also Ns must be large to obtain a reliable estimate (see (1. one ﬁnds that PBT . f /] / 1 2 P .f/ K 3K x (1. In general. 
The choice of the window type in the frequency domain depends on the compromise between a narrow central lobe (to reduce smearing) and a fast decay of the secondary lobes (to reduce leakage). The choice of the window length is based on the compromise between spectral resolution and the variance of the estimate.

For a given observation of K samples, it is initially better to choose a small number of samples over which to perform the DFT, and therefore a large number of windows (subsequences) over which to average the estimate. The estimate is then repeated by increasing the number of samples per window, thus decreasing the number of windows. In this way we get estimates with a higher resolution, but also characterized by an increasing variance. The procedure is terminated once it is found that the increase in variance is no longer compensated by an increase in the spectral resolution. The aforementioned method is called window closing. An example has already been seen in the correlogram, where the condition L ≤ K/5 must be satisfied; another example is the Welch periodogram.
Example 1.11.1
Consider a realization of K = 10000 samples of the signal

y(kT_c) = (1/√Ē_h) Σ_{n=−16}^{16} h(nT_c) w((k − n)T_c) + A_1 cos(2πf_1 kT_c + φ_1) + A_2 cos(2πf_2 kT_c + φ_2)    (1.506)

where w(nT_c) is a white random process with zero mean and variance σ_w² = 5, φ_1, φ_2 ∈ U[0, 2π), and h is the pulse, truncated to 33 samples,

h(kT_c) = { sin[π(1 − ρ) kT_c/T] + 4ρ (kT_c/T) cos[π(1 + ρ) kT_c/T] } / { π (kT_c/T) [1 − (4ρ kT_c/T)²] } · rect(kT_c/(8T + T_c))    (1.507)

with T = 4T_c and ρ = 0.32, and Ē_h = Σ_{n=−16}^{16} h²(nT_c). The parameters are A_1 = 1/20, A_2 = 1/40, f_1 = 1.5, f_2 = 1.75, T_c = 0.2. Hence y is the sum of two sinusoidal signals and white noise filtered through h. Consequently, observing (1.264) and (1.388),

P̄_y(f) = (σ_w² T_c/Ē_h) |H(f)|² + (A_1²/4)[δ(f − f_1) + δ(f + f_1)] + (A_2²/4)[δ(f − f_2) + δ(f + f_2)]    (1.509)

where H(f) is the Fourier transform of {h(kT_c)}. The shape of the PSD in (1.509) is shown in Figures 1.42 to 1.44 as a solid line. A Dirac impulse is represented by an isosceles triangle having a base equal to twice the desired frequency resolution.
In particular, an impulse of area A_1²/4 will have a height equal to A_1²/(4F_q), where F_q is the frequency resolution, thus maintaining the equivalence in statistical power between the different representations.

We state beforehand the following result. Windowing a complex sinusoidal signal {e^{j2πf_1 kT_c}} with {w(k)} produces a signal having Fourier transform equal to W(f − f_1), where W(f) is the Fourier transform of w(k); in other words, in the frequency domain the spectral line of a sinusoidal signal becomes a signal with shape W(f) centered around f_1. Consequently, from (1.497), the periodogram of a real sinusoidal signal with amplitude A_1 and frequency f_1 is

P_PER(f) = (A_1²/4) (T_c/(D M_w)) |W(f − f_1) + W(f + f_1)|²    (1.510)

We now compare several spectral estimates, obtained using the previously described methods; in particular, we will emphasize the effect on the resolution of the type of window used and of the number of samples for each window. Figure 1.42 shows, in addition to the analytical PSD (1.509), the estimate obtained by the Welch periodogram method using the Hamming or the rectangular window. Parameters used in (1.496) and (1.499) are: D = 1000, N_s = 19 and 50% overlap between windows. We observe that the use of the Hamming window yields an improvement of the estimate due to less leakage. Likewise, Figure 1.43 shows how the Hamming window also improves the estimate carried out with the correlogram; the estimates of Figure 1.43 were obtained using L = 500 in (1.503). Finally, Figure 1.44 shows how the resolution and the variance of the estimate obtained by the Welch periodogram vary with the parameters D and N_s.

Figure 1.42. Comparison between spectral estimates obtained with the Welch periodogram method, using the Hamming or the rectangular window, and the analytical PSD given by (1.509).
Figure 1.43. Comparison between spectral estimates obtained with the correlogram using the Hamming or the rectangular window, and the analytical PSD given by (1.509).

Figure 1.44. Comparison of spectral estimates obtained with the Welch periodogram method, using the Hamming window, by varying the parameters D and N_s.
Note that by increasing D, and hence decreasing N_s, both the resolution and the variance of the estimate increase.

1.12 Parametric models of random processes

ARMA(p,q) model
Let us consider the realization of a random process x according to the autoregressive moving average model illustrated in Figure 1.45. The process x(k), also called observed sequence, is the output of an IIR filter having as input white noise with variance σ_w², and is given by the recursive equation(25)

x(k) = −Σ_{n=1}^{p} a_n x(k − n) + Σ_{n=0}^{q} b_n w(k − n)    (1.511)

Rewriting (1.511) in terms of the input–output relation of the linear system, from (1.129) we find in general

x(k) = Σ_{n=0}^{+∞} h_ARMA(n) w(k − n)    (1.512)

Figure 1.45. ARMA model of a process x.

(25) In a simulation of the process, the first samples x(k) generated by (1.511) should be ignored because they depend on the initial conditions.
which indicates that the filter used to realize the ARMA model is causal. From (1.129) one finds that the filter transfer function is given by

  H_ARMA(z) = B(z)/A(z)    (1.513)

where

  B(z) = Σ_{n=0}^{q} b_n z^{−n},  A(z) = Σ_{n=0}^{p} a_n z^{−n},  assuming a₀ = 1    (1.514)

Using (1.264), the power spectral density of the process x is given by

  P_x(f) = T_c σ_w² |B(f)|² / |A(f)|²    (1.515)

where

  B(f) = B(e^{j2πfT_c})  and  A(f) = A(e^{j2πfT_c})    (1.516)

MA(q) model

If we particularize the ARMA model, assuming a_i = 0, i = 1, 2, …, p, we get the moving-average model of order q. The equations of the MA model therefore are reduced to

  H_MA(z) = B(z),  with A(z) = 1,  and  P_x(f) = T_c σ_w² |B(f)|²    (1.517)

If we represent the function P_x(f) of a process obtained by the MA model, as illustrated in Figure 1.46, we see that its behavior is generally characterized by wide "peaks" and narrow "valleys".

AR(N) model

The autoregressive model of order N is shown in Figure 1.47. The output process is described by the recursive equation

  x(k) = −Σ_{n=1}^{N} a_n x(k − n) + w(k)    (1.518)

where w is white noise with variance σ_w². The transfer function is given by

  H_AR(z) = 1/A(z)    (1.519)
where

  A(z) = 1 + Σ_{n=1}^{N} a_n z^{−n}    (1.520)

We observe that (1.519) describes a filter having N poles; for a causal filter, the stability condition is that all poles must be inside the unit circle of the z-plane, i.e. |p_i| < 1, i = 1, 2, …, N. Therefore A(z) can be expressed as

  A(z) = (1 − p₁z^{−1})(1 − p₂z^{−1}) ··· (1 − p_N z^{−1})    (1.521)

Figure 1.46. Power spectral density of an MA process with q = 4.

Figure 1.47. AR model of a process x.
In the case of the AR model, from Table 1.6 the z-transform of the ACS of x is given by

  P_x(z) = P_w(z) (1/A(z)) (1/A*(1/z*)) = σ_w² / (A(z) A*(1/z*))    (1.522)

Hence the function P_x(z) has poles of the type

  |p_i| e^{jφ_i}  and  (1/|p_i|) e^{jφ_i}    (1.523)

Letting

  A(f) = A(e^{j2πfT_c})    (1.524)

one obtains the power spectral density of x, given by

  P_x(f) = T_c σ_w² / |A(f)|²    (1.525)

Typically, reciprocal to the MA model, the function P_x(f) of an AR process will have narrow "peaks" and wide "valleys" (see Figure 1.48).

Figure 1.48. Power spectral density of an AR process with N = 4.
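The PSD expressions (1.515) and (1.525) are evaluated by computing the polynomials on the unit circle. A minimal sketch (the function name is ours; the AR case corresponds to b = [1.0]):

```python
import cmath

def arma_psd(b, a, sigma_w2, f, Tc=1.0):
    """P_x(f) = Tc * sigma_w^2 * |B(f)|^2 / |A(f)|^2, as in (1.515).

    b = [b_0, ..., b_q]; a = [a_1, ..., a_p] with A(z) = 1 + sum a_n z^-n.
    """
    zinv = cmath.exp(-2j * cmath.pi * f * Tc)
    B = sum(bn * zinv ** n for n, bn in enumerate(b))
    A = 1.0 + sum(an * zinv ** (n + 1) for n, an in enumerate(a))
    return Tc * sigma_w2 * abs(B) ** 2 / abs(A) ** 2

# AR(1) with a_1 = -0.5: at f = 0, |A|^2 = |1 - 0.5|^2 = 0.25, so P = 4 sigma_w^2
p0 = arma_psd([1.0], [-0.5], 1.0, 0.0)
```

Sweeping f over [−1/(2T_c), 1/(2T_c)] reproduces plots such as Figures 1.46 and 1.48.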
Spectral factorization of an AR(N) model

Consider the AR process described by (1.522). From (1.523), we have the following decomposition:

  P_x(z) = σ_w² / { (1 − |p₁|e^{jφ₁}z^{−1}) ··· (1 − |p_N|e^{jφ_N}z^{−1}) (1 − (1/|p₁|)e^{jφ₁}z^{−1}) ··· (1 − (1/|p_N|)e^{jφ_N}z^{−1}) }    (1.526)

For a given P_x(z), observing (1.522), it is clear that the N zeros of A(z) in (1.522) can be chosen in 2^N different ways. Two examples are illustrated in Figure 1.49. The selection of the zeros of A(z) is called spectral factorization. As stated by the spectral factorization theorem (see page 53), there exists a unique spectral factorization that yields a minimum-phase A(z), which is obtained by associating with A(z) only the poles of P_x(z) that lie inside the unit circle of the z-plane.

Whitening filter

We observe an important property, illustrated in Figure 1.50. Suppose x is modeled as an AR process of order N and has PSD given by (1.525). If x is input to a filter having transfer function A(z), the output of this latter filter would be white noise. In this case the filter A(z) is called whitening filter (WF). If A(z) is minimum phase, the white process w is also called innovation of the process x, in the sense that the new information associated with the sample x(k) is carried only by w(k).

Relation between ARMA, MA and AR models

The relations between the three parametric models are expressed through the following propositions.

Wold decomposition. Every WSS random process y can be decomposed into

  y(k) = s(k) + x(k)    (1.527)

where s and x are uncorrelated processes. The process s, called predictable process, is described by the recursive equation

  s(k) = −Σ_{n=1}^{+∞} α_n s(k − n)    (1.528)

while x is obtained as filtered white noise:

  x(k) = Σ_{n=0}^{+∞} h(n) w(k − n)    (1.529)

Figure 1.49. Two examples of possible choices of the zeros (×) of A(z) among the poles of P_x(z).
Figure 1.50. Whitening filter for an AR process of order N.

Theorem 1.4 (Kolmogorov theorem)
Any ARMA or MA process can be represented by an AR process of infinite order, provided that the order is sufficiently high. Therefore, any one of the three descriptions (ARMA, MA, or AR) can be adopted to approximate the spectrum of a process.

1.12.1 Autocorrelation of AR processes

It is interesting to evaluate the autocorrelation function of a process x obtained by the AR model. Multiplying both members of (1.518) by x*(k − n), we find

  x(k)x*(k − n) = −Σ_{m=1}^{N} a_m x(k − m)x*(k − n) + w(k)x*(k − n)    (1.530)

Taking expectations, and observing that w(k) is uncorrelated with all past values of x, for n ≥ 0 one gets

  E[x(k)x*(k − n)] = −Σ_{m=1}^{N} a_m E[x(k − m)x*(k − n)] + σ_w² δ_n    (1.531)

From (1.531), it follows

  r_x(n) = −Σ_{m=1}^{N} a_m r_x(n − m) + σ_w² δ_n    (1.532)

In particular we have

  r_x(n) = −Σ_{m=1}^{N} a_m r_x(n − m)               n > 0
  r_x(0) = −Σ_{m=1}^{N} a_m r_x(−m) + σ_w²           n = 0    (1.533)
  r_x(n) = r_x*(−n)                                   n < 0
We observe that, for n > N, r_x(n) satisfies an equation analogous to (1.533); in other words, if {p_i} are the zeros of A(z), r_x can be written as

  r_x(n) = Σ_{i=1}^{N} c_i p_i^n    n > 0    (1.534)

Assuming an AR process with |p_i| < 1, i = 1, 2, …, N, we get

  r_x(n) → 0  as  n → ∞    (1.535)

Using (1.533) for n = 1, 2, …, N, and simplifying the notation r_x(n) with r(n), one gets a set of equations that in matrix notation are expressed as

  ⎡ r(0)     r(−1)    ···  r(−N+1) ⎤ ⎡ a₁  ⎤       ⎡ r(1) ⎤
  ⎢ r(1)     r(0)     ···  r(−N+2) ⎥ ⎢ a₂  ⎥  = −  ⎢ r(2) ⎥    (1.536)
  ⎢  ⋮        ⋮        ⋱      ⋮    ⎥ ⎢  ⋮  ⎥       ⎢  ⋮   ⎥
  ⎣ r(N−1)   r(N−2)   ···   r(0)  ⎦ ⎣ a_N ⎦       ⎣ r(N) ⎦

that is

  R a = −r    (1.537)

with obvious definition of the vectors. In the hypothesis that the matrix R has an inverse, the solution for the coefficients {a_i} is given by

  a = −R^{−1} r    (1.538)

Equations (1.537) and (1.538), called Yule–Walker equations, allow us to obtain the coefficients of an AR model for a process having autocorrelation function r_x(n). The variance σ_w² of the white noise at the input can be obtained from (1.533) for n = 0, which yields

  σ_w² = r_x(0) + Σ_{m=1}^{N} a_m r_x(−m) = r_x(0) + r^H a    (1.539)

Observation 1.8
• From (1.537) and (1.539) one finds that a does not depend on r_x(0), but only on the correlation coefficients

  ρ_x(n) = r_x(n)/r_x(0)    (1.540)

• We note that the knowledge of r_x(0), r_x(1), …, r_x(N) univocally determines the ACS of an AR(N) process. This implies that, for n > N,

  r_x(n) = −Σ_{m=1}^{N} a_m r_x(n − m)    (1.541)

• Exploiting the fact that R is Toeplitz and Hermitian, the set of equations (1.537) can be numerically solved by the Levinson–Durbin or by the Delsarte–Genin algorithms, with a computational complexity proportional to N² (see Sections 2.2.1 and 2.2.2).
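Equations (1.537)–(1.539) translate directly into a small solver. This sketch assumes a real-valued process and a NumPy environment (the function name is ours); for large N, the Levinson–Durbin algorithm mentioned above would replace the generic linear solve:

```python
import numpy as np

def yule_walker(r):
    """Solve R a = -r and sigma_w^2 = r(0) + r^H a, per (1.537)-(1.539).

    r = [r_x(0), r_x(1), ..., r_x(N)] for a real process.
    """
    r = np.asarray(r, dtype=float)
    N = len(r) - 1
    # R is Toeplitz and Hermitian; real case: R[i, j] = r(|i - j|)
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
    a = -np.linalg.solve(R, r[1:])
    sigma_w2 = r[0] + np.dot(r[1:], a)
    return a, sigma_w2

# ACS of an AR(1) process with a_1 = -0.5, sigma_w^2 = 1:
# r(0) = 4/3, r(1) = 2/3, so the solver should recover a_1 and sigma_w^2
a, s2 = yule_walker([4.0 / 3.0, 2.0 / 3.0])
```

The N = 1 check above uses the closed-form AR(1) autocorrelation derived later in this section.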
1.12.2 Spectral estimation of an AR(N) process

Assuming an AR(N) model for a process x, which implies an estimate of the ACS up to lag N is available, (1.538) yields the coefficient vector a. From (1.525), we define as spectral estimate

  P_AR(f) = T_c σ_w² / |A(f)|²    (1.542)

Usually the estimate (1.542) allows a better resolution than estimates obtained by other methods, such as P_BT(f), because it does not show the effects of ACS truncation. In fact the AR model yields

  P_AR(f) = T_c Σ_{n=−∞}^{+∞} r̂_x(n) e^{−j2πfnT_c}    (1.543)

where r̂_x(n) is estimated for |n| ≤ N with one of the two methods of Section 1.11.2, while for |n| > N the recursive equation (1.541) is used.

For example, a spectral estimate for the process of Example 1.11.1 on page 87 obtained by an AR(12) model is depicted in Figure 1.51. The correlation coefficients were obtained by a biased estimate on 10000 samples. Note that the continuous part of the spectrum is estimated only approximately; on the other hand, the choice of a larger order N would result in an estimate with larger variance. The AR model accurately estimates processes with a spectrum similar to that given in Figure 1.48.

Figure 1.51. Comparison between the spectral estimate obtained by an AR(12) process model and the analytical PSD given by (1.509).
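The first step of the pipeline just described is the biased ACS estimate of Section 1.11.2, which feeds the Yule–Walker solver. A minimal sketch (the function name is ours):

```python
def biased_acs(x, max_lag):
    """Biased ACS estimate: r_hat(n) = (1/K) * sum_{k=n}^{K-1} x[k] x[k-n],
    for a real sequence x of length K and lags n = 0, ..., max_lag."""
    K = len(x)
    return [sum(x[k] * x[k - n] for k in range(n, K)) / K
            for n in range(max_lag + 1)]

r = biased_acs([1.0, 1.0, 1.0, 1.0], 1)   # r(0) = 1, r(1) = 3/4
```

The 1/K normalization (rather than 1/(K − n)) is what makes the estimate biased but guarantees a positive semidefinite correlation matrix R, which helps keep the resulting AR model stable.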
Also note that the presence of spectral lines in the original process leads to zeros of the polynomial A(z) near the unit circle (see page 101). In practice, the correlation estimation method and the choice of a large N may result in an ill-conditioned matrix R. In this case the solution may have poles outside the unit circle, and hence the system would be unstable.

Some useful relations

We will illustrate some examples of AR models; in particular, we will focus on the Yule–Walker equations and the relation (1.539), for N = 1 and N = 2.

AR(1). From

  r_x(n) = −a₁ r_x(n − 1)   n > 0,    r_x(0) = −a₁ r_x(−1) + σ_w²    (1.544)

we obtain

  r_AR(1)(n) = (σ_w² / (1 − |a₁|²)) (−a₁)^{|n|}    (1.545)

from which the spectral density is

  P_AR(1)(f) = T_c σ_w² / |1 + a₁ e^{−j2πfT_c}|²    (1.546)

The behavior of the spectral density of an AR(1) process is illustrated in Figure 1.52.

Figure 1.52. Spectral density of an AR(1) process.
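The closed form (1.545) can be checked against the recursion (1.544) it was derived from; a small numerical verification (real a₁ assumed):

```python
def r_ar1(n, a1, sigma_w2):
    """Closed form (1.545): r(n) = sigma_w^2 / (1 - a1^2) * (-a1)^|n|,
    for a real-valued AR(1) process."""
    return sigma_w2 / (1.0 - a1 * a1) * (-a1) ** abs(n)

a1, s2 = -0.8, 1.0
# recursion (1.544): r(n) = -a1 r(n-1) for n > 0, and r(0) = -a1 r(-1) + s2
chk0 = abs(r_ar1(0, a1, s2) - (-a1 * r_ar1(1, a1, s2) + s2))
chk2 = abs(r_ar1(2, a1, s2) - (-a1 * r_ar1(1, a1, s2)))
```

Both residuals should vanish to machine precision, confirming that (1.545) satisfies (1.544).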
AR(2). Let p_{1,2} = ρe^{±jφ₀}, with φ₀ = 2πf₀T_c, be the two complex roots of A(z) = 1 + a₁z^{−1} + a₂z^{−2}. We consider a real process:

  a₁ = −2ρ cos(2πf₀T_c),  a₂ = ρ²    (1.547)

Letting

  θ = tan^{−1}[ ((1 + ρ²)/(1 − ρ²)) tan(2πf₀T_c) ]    (1.548)

we find

  r_AR(2)(n) = r_x(0) ρ^{|n|} sin(2πf₀|n|T_c + θ)/sin θ,  with  r_x(0) = σ_w²(1 + ρ²) / [(1 − ρ²)(1 − 2ρ² cos(4πf₀T_c) + ρ⁴)]    (1.549)

The spectral density is thus given by

  P_AR(2)(f) = T_c σ_w² / ( |1 − ρe^{−j2π(f−f₀)T_c}|² |1 − ρe^{−j2π(f+f₀)T_c}|² )    (1.550)

We observe that, as ρ → 1, P_AR(2)(f) has a peak around f₀ that becomes more pronounced, and x(k) tends to exhibit a sinusoidal behavior, as illustrated in Figure 1.53.

Figure 1.53. Spectral density of an AR(2) process.

Solutions of the Yule–Walker equations are

  a₁ = −r_x(1)(r_x(0) − r_x(2)) / (r_x²(0) − r_x²(1))
  a₂ = −(r_x(0)r_x(2) − r_x²(1)) / (r_x²(0) − r_x²(1))    (1.551)
  σ_w² = r_x(0) + a₁r_x(1) + a₂r_x(2)

Solving the previous set of equations with respect to r_x(0), r_x(1) and r_x(2), one obtains

  r_x(0) = ((1 + a₂)/(1 − a₂)) σ_w² / ((1 + a₂)² − a₁²)
  r_x(1) = −(a₁/(1 + a₂)) r_x(0)    (1.552)
  r_x(2) = (−a₂ + a₁²/(1 + a₂)) r_x(0)

In general, for n > 0,

  r_x(n) = r_x(0) [ p₁^{n+1}(p₂² − 1) − p₂^{n+1}(p₁² − 1) ] / [ (p₂ − p₁)(p₁p₂ + 1) ]    (1.553)

AR model of sinusoidal processes

The general formulation of a sinusoidal process is

  x(k) = A cos(2πf₀kT_c + φ)    (1.554)

with φ ∈ U[0, 2π). We observe that the process described by (1.554) satisfies the following difference equation for k ≥ 0:

  x(k) = 2cos(2πf₀T_c) x(k − 1) − x(k − 2) + δ_k A cos φ − δ_{k−1} A cos(2πf₀T_c − φ)    (1.555)

with x(−1) = x(−2) = 0. We note that the Kronecker impulses determine only the amplitude and phase of x(k). In the z-domain, we get the homogeneous equation

  A(z) = 1 − 2cos(2πf₀T_c) z^{−1} + z^{−2}    (1.556)

The zeros of A(z) are

  p_{1,2} = e^{±j2πf₀T_c}    (1.557)

It is important to verify that these zeros belong to the unit circle of the z-plane; consequently, the representation of a sinusoidal process via the AR model is not possible, as the stability condition is not satisfied.
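The homogeneous recursion implied by (1.555)–(1.556) is easy to verify numerically: away from the two initial impulses, any sampled sinusoid satisfies x(k) = 2cos(2πf₀T_c)x(k−1) − x(k−2). A small check (parameter values are arbitrary):

```python
import math

f0Tc, A, phi = 0.123, 1.7, 0.4
x = [A * math.cos(2 * math.pi * f0Tc * k + phi) for k in range(50)]
c = 2.0 * math.cos(2 * math.pi * f0Tc)
# homogeneous part of (1.555): x(k) = c x(k-1) - x(k-2) for k >= 2
residual = max(abs(x[k] - (c * x[k - 1] - x[k - 2])) for k in range(2, 50))
```

The residual is at machine-precision level regardless of A and φ, which confirms that the impulses in (1.555) fix only the amplitude and phase.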
Moreover, the input in (1.555) is not white noise. In any case, we can try to find an approximation. In the hypothesis of uniform φ, from (1.388) it results

  r_x(n) = (A²/2) cos(2πf₀nT_c)    (1.558)

This autocorrelation function can be approximated by the autocorrelation of an AR(2) process for ρ → 1 and σ_w² → 0. Using the formula (1.549), for ρ ≃ 1 we find

  r_AR(2)(n) ≃ [ σ_w² / (2(1 − ρ²) sin²(2πf₀T_c)) ] cos(2πf₀nT_c)    (1.559)

and impose the condition

  A²/2 = lim_{ρ→1, σ_w²→0} σ_w² / (2(1 − ρ²) sin²(2πf₀T_c))    (1.560)

Observation 1.9
We can observe the following facts about the order of an AR model approximating a sinusoidal process.
• Observing (1.555), one sees that an AR process of order 2N is required to model N real sinusoids; from (1.390) one finds that an AR process of order N is required to model N complex sinusoids.
• An ARMA process of order (2N, 2N) is required to model N real sinusoids plus white noise having variance σ_b². In this case, from (1.513), it results σ_w² → σ_b² and B(z) → A(z).

A better estimate is obtained by separating the continuous part of the spectrum from the spectral lines; the two components are then estimated by different methods, for example by the scheme illustrated in Figure 3.38.

1.13 Guide to the bibliography

Many of the topics surveyed in this chapter are treated in general in several texts on digital communications, in particular [4, 5, 6]. In-depth studies on deterministic systems and signals are found in [3, 8, 11]. For a statistical analysis of random processes we refer to [1, 9, 12, 15]. Finally, the subject of spectral estimation is discussed in detail in [2, 7, 13, 14, 16].
Bibliography

[1] A. Papoulis, Probability, random variables and stochastic processes. 3rd ed. New York: McGraw-Hill, 1991.
[2] M. B. Priestley, Spectral analysis and time series. New York, NY: Academic Press, 1981.
[3] A. Papoulis, Signal analysis. New York: McGraw-Hill, 1984.
[4] S. Benedetto and E. Biglieri, Principles of digital transmission with wireless applications. New York: Kluwer Academic Publishers, 1999.
[5] D. Messerschmitt and E. Lee, Digital communication. 2nd ed. Boston, MA: Kluwer Academic Publishers, 1994.
[6] J. Proakis, Digital communications. 3rd ed. New York: McGraw-Hill, 1995.
[7] S. L. Marple Jr., Digital spectral analysis with applications. Englewood Cliffs, NJ: Prentice-Hall, 1987.
[8] A. Papoulis, The Fourier integral and its applications. New York: McGraw-Hill, 1962.
[9] A. Shiryayev, Probability. New York: Springer-Verlag, 1984.
[10] R. Crochiere and L. Rabiner, Multirate digital signal processing. Englewood Cliffs, NJ: Prentice-Hall, 1983.
[11] P. Vaidyanathan, Multirate systems and filter banks. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[12] W. Davenport and W. Root, An introduction to the theory of random signals and noise. New York: IEEE Press, 1987.
[13] S. Kay, Modern spectral estimation — theory and applications. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[14] P. Stoica and R. Moses, Introduction to spectral analysis. Englewood Cliffs, NJ: Prentice-Hall, 1997.
[15] A. Oppenheim and R. Schafer, Discrete-time signal processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[16] G. Cariolaro, La teoria unificata dei segnali. Turin: UTET, 1996.
[17] L. Erup, F. M. Gardner, and R. A. Harris, "Interpolation in digital modems — Part II: implementation and performance", IEEE Trans. on Communications, vol. 41, pp. 998–1008, June 1993.
[18] H. Meyr, M. Moeneclaey, and S. Fechtel, Digital communication receivers. New York: John Wiley & Sons, 1998.
Appendix 1.A Multirate systems

The first part of this appendix is a synthesis from [10, 11].

1.A.1 Fundamentals

We consider the discrete-time linear transformation of Figure 1.54, with impulse response h(t), t ∈ ℝ; the sampling period of the input signal is T_c, whereas that of the output signal is T_c′. The input–output relation is given by the equation

  y(kT_c′) = Σ_{n=−∞}^{+∞} h(kT_c′ − nT_c) x(nT_c)    (1.561)

We will use the following simplified notation:

  x_n = x(nT_c)    (1.562)
  y_k = y(kT_c′)    (1.563)

If we assume that h has a finite support, say between t₁ and t₂, that is

  h(kT_c′ − nT_c) ≠ 0  for  t₁ < kT_c′ − nT_c < t₂    (1.564)

or equivalently for

  (kT_c′ − t₂)/T_c < n < (kT_c′ − t₁)/T_c    (1.565)

then, letting

  n₁ = ⌈(kT_c′ − t₂)/T_c⌉    (1.566)
  n₂ = ⌊(kT_c′ − t₁)/T_c⌋    (1.567)

(1.561) can be written as

  y_k = Σ_{n=n₁}^{n₂} h(kT_c′ − nT_c) x_n = x_{n₁} h(kT_c′ − n₁T_c) + ··· + x_{n₂} h(kT_c′ − n₂T_c)    (1.568)

Figure 1.54. Discrete-time linear transformation.
One observes from (1.561) that:
• the values of h that contribute to y_k are equally spaced by T_c;
• the limits of the summation (1.568) are a complicated function of T_c, T_c′, t₁, and t₂.

Introducing the change of variable

  i = ⌊kT_c′/T_c⌋ − n    (1.569)

and setting

  Δ_k = kT_c′/T_c − ⌊kT_c′/T_c⌋    (1.570)
  I₁ = ⌈t₁/T_c + Δ_k⌉    (1.571)
  I₂ = ⌊t₂/T_c + Δ_k⌋    (1.572)

(1.568) becomes

  y_k = Σ_{i=I₁}^{I₂} h((i + Δ_k)T_c) x_{⌊kT_c′/T_c⌋ − i}    (1.573)

From the definition (1.570) it is clear that Δ_k represents the truncation error of kT_c′/T_c and that 0 ≤ Δ_k < 1. In the special case

  T_c′/T_c = M/L    (1.574)

with M and L integers, we get

  Δ_k = kM/L − ⌊kM/L⌋ = (1/L)(kM − L⌊kM/L⌋) = ((kM) mod L)/L    (1.575)

We observe that Δ_k can assume only the L values {0, 1/L, 2/L, …, (L−1)/L} for any value of k. Hence there are only L univocally determined sets of values of h that are used in the computation of {y_k}; in particular, if M = 1 there are L such sets, while if L = 1 only one set of coefficients exists.

Summarizing, the output of a filter with impulse response h and with different input and output time domains can be expressed as

  y_k = Σ_{i=−∞}^{+∞} g_{k,i} x_{⌊kM/L⌋ − i}    (1.576)

where

  g_{k,i} = h((i + Δ_k)T_c)    (1.577)

We note that the system is linear and periodically time-varying. For T_c′ = T_c, that is for L = M = 1, we get Δ_k = 0, and the input–output relation is the usual convolution

  y_k = Σ_{i=−∞}^{+∞} g_{0,i} x_{k−i}    (1.578)

We will now analyze a few elementary multirate transformations.

1.A.2 Decimation

Figure 1.55 represents a decimator or downsampler, with the output sequence related to the input sequence {x_n} by

  y_k = x_{kM}    (1.579)

where M, the decimation factor, is an integer number. We will show that

  Y(z) = (1/M) Σ_{m=0}^{M−1} X(z^{1/M} W_M^m)    (1.580)

where W_M = e^{−j2π/M} is defined in (1.92). Equivalently, in terms of the radian frequency normalized by the sampling frequency, ω′ = 2πf/F_c′, (1.580) can be written as

  Y(e^{jω′}) = (1/M) Σ_{m=0}^{M−1} X(e^{j(ω′−2πm)/M})    (1.581)

Figure 1.55. Decimation or downsampling transformation by a factor M.
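The aliasing relation (1.581) can be checked numerically on a finite sequence, for which it holds exactly; a minimal sketch (the helper names and the test signal are ours):

```python
import cmath

def dtft(x, omega):
    """X(e^{j*omega}) for a finite causal sequence x."""
    return sum(xk * cmath.exp(-1j * omega * k) for k, xk in enumerate(x))

M = 3
x = [0.9 ** n for n in range(30)]
y = x[::M]                                   # y_k = x_{kM}, per (1.579)
w1 = 1.1                                     # an arbitrary omega' at the output rate
lhs = dtft(y, w1)
rhs = sum(dtft(x, (w1 - 2 * cmath.pi * m) / M) for m in range(M)) / M
err = abs(lhs - rhs)
```

The left- and right-hand sides of (1.581) agree to machine precision for any ω′ and any finite {x_n}.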
A graphical interpretation of (1.581) is shown in Figure 1.56:
• expand X(e^{jω}) by a factor M, obtaining X(e^{jω′/M});
• create M − 1 replicas of the expanded version, and frequency-shift them uniformly with increments of 2π for each replica;
• sum all the replicas and divide the result by M.

We observe that, after summation, the result is periodic in ω′ with period 2π, as we would expect from a discrete Fourier transform. It is also useful to give the expression of the output sequence in the frequency domain:

  Y(f) = (1/M) Σ_{m=0}^{M−1} X(f − m/(MT_c))    (1.582)

where

  X(f) = X(e^{j2πfT_c})    (1.583)
  Y(f) = Y(e^{j2πfMT_c})    (1.584)

The relation (1.582) for the signal of Figure 1.56 is represented in Figure 1.57. Note that the only difference with respect to the previous representation is that all frequency responses are now functions of the frequency f.

Figure 1.56. Decimation by a factor M = 3: (a) in the time domain, and (b) in the normalized radian frequency domain.

Proof of (1.580)
The z-transform of {y_k} can be written as

  Y(z) = Σ_{k=−∞}^{+∞} y_k z^{−k} = Σ_{k=−∞}^{+∞} x_{Mk} z^{−k}    (1.585)

We define the intermediate sequence

  x′_k = x_k  for k = 0, ±M, ±2M, …,  and 0 otherwise    (1.586)

so that y_k = x_{Mk} = x′_{Mk}. This relation is valid because x′ is nonzero only at multiples of M; hence

  Y(z) = Σ_{k′=−∞}^{+∞} x′_{k′M} z^{−k′} = Σ_{k=−∞}^{+∞} x′_k z^{−k/M} = X′(z^{1/M})    (1.587)

It only remains to express X′(z) in terms of X(z); to do this, we note that (1.586) can be expressed as

  x′_k = c_k x_k    (1.588)

where c_k is defined as

  c_k = 1  for k = 0, ±M, ±2M, …,  and 0 otherwise    (1.589)

Note that (1.589) can be written as

  c_k = (1/M) Σ_{m=0}^{M−1} W_M^{−km}    (1.590)

Hence we obtain

  X′(z) = Σ_{k=−∞}^{+∞} (1/M) Σ_{m=0}^{M−1} W_M^{−km} x_k z^{−k} = (1/M) Σ_{m=0}^{M−1} Σ_{k=−∞}^{+∞} x_k (zW_M^m)^{−k}    (1.591)

The inner summation yields X(zW_M^m); hence, observing (1.587), from (1.591) we get (1.580).

Figure 1.57. Effect of decimation in the frequency domain.
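The comb identity (1.590), which is the key step of the proof, is easy to verify numerically:

```python
import cmath

M = 4

def c(k):
    """(1.590): c_k = (1/M) sum_m W_M^{-km}, with W_M = e^{-j*2*pi/M}."""
    return sum(cmath.exp(2j * cmath.pi * k * m / M) for m in range(M)) / M

vals = [c(k) for k in range(9)]
```

The sum equals 1 whenever k is a multiple of M (all phasors align) and 0 otherwise (the phasors cancel), reproducing (1.589).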
1.A.3 Interpolation

Figure 1.58 represents an interpolator or upsampler, with the input sequence {x_n} related to the output sequence by

  y_k = x(k/L)  for k = 0, ±L, ±2L, …,  and 0 otherwise    (1.592)

where L, the interpolation factor, is an integer number. We will show that the input–output relation in terms of the z-transforms Y(z) and X(z) is given by

  Y(z) = X(z^L)    (1.593)

Equivalently, in terms of the radian frequency normalized by the sampling frequency, ω′ = 2πf/F_c′, (1.593) can be expressed as

  Y(e^{jω′}) = X(e^{jω′L})    (1.594)

The graphical interpretation of (1.594) is illustrated in Figure 1.59: Y(e^{jω′}) is the compressed version by a factor L of X(e^{jω}); moreover, there are L − 1 replicas of the compressed spectrum, called images. The creation of images implies that a lowpass signal does not remain lowpass after interpolation. It is also useful to give the expression of the output sequence in the frequency domain:

  Y(f) = X(f)    (1.595)

where

  X(f) = X(e^{j2πfT_c})    (1.596)
  Y(f) = Y(e^{j2πfT_c/L})    (1.597)

We note that the only effect of the interpolation is that the signal X must be regarded as periodic with period F_c′ = LF_c rather than F_c. The relation (1.595) for the signal of Figure 1.59 is illustrated in Figure 1.60.

Figure 1.58. Interpolation or upsampling transformation by a factor L.

Figure 1.59. Interpolation by a factor L = 3: (a) in the time domain, and (b) in the normalized radian frequency domain.

Proof of (1.593)
Observing (1.592), we get

  Y(z) = Σ_{k=−∞}^{+∞} y_k z^{−k} = Σ_{n=−∞}^{+∞} y_{nL} z^{−nL} = Σ_{n=−∞}^{+∞} x_n z^{−nL} = X(z^L)    (1.598)

Figure 1.60. Effect of interpolation in the frequency domain.

1.A.4 Decimator filter

In most applications, to avoid aliasing in the downsampling process, a downsampler is preceded by a lowpass digital filter, to form a decimator filter as illustrated in Figure 1.61. The filter ensures that the signal v_n is bandlimited.
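The upsampling relation (1.592) and the identity (1.594) can be checked on a short sequence (helper names are ours):

```python
import cmath

def dtft(x, omega):
    """X(e^{j*omega}) for a finite causal sequence x."""
    return sum(xk * cmath.exp(-1j * omega * k) for k, xk in enumerate(x))

L = 3
x = [1.0, -2.0, 0.5, 4.0]
y = [0.0] * (len(x) * L)
y[::L] = x               # (1.592): y_k = x_{k/L} at multiples of L, zero elsewhere
w1 = 0.7                 # arbitrary omega'
err = abs(dtft(y, w1) - dtft(x, w1 * L))   # (1.594): Y(e^{jw'}) = X(e^{jw'L})
```

The identity holds exactly, since inserting zeros only re-indexes the exponents of the z-transform.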
Figure 1.61. Decimator filter.

Let h_n = h(nT_c). Then we have

  y_k = v_{kM}    (1.599)

and

  v_n = Σ_{i=−∞}^{+∞} h_i x_{n−i}    (1.600)

The output can be expressed as

  y_k = Σ_{i=−∞}^{+∞} h_i x_{kM−i} = Σ_{n=−∞}^{+∞} h_{kM−n} x_n    (1.601)

Using definition (1.577) we get

  g_{k,i} = h_i  ∀k    (1.602)

Note that the overall system is not time invariant, unless the delay applied to the input is constrained to be a multiple of M. From V(z) = X(z)H(z) it follows that

  Y(z) = (1/M) Σ_{m=0}^{M−1} H(z^{1/M}W_M^m) X(z^{1/M}W_M^m)    (1.603)

or, equivalently, recalling that ω′ = 2πfMT_c,

  Y(e^{jω′}) = (1/M) Σ_{m=0}^{M−1} H(e^{j(ω′−2πm)/M}) X(e^{j(ω′−2πm)/M})    (1.604)

If

  H(e^{jω}) = 1  for |ω| ≤ π/M,  and 0 otherwise    (1.605)

we obtain

  Y(e^{jω′}) = (1/M) X(e^{jω′/M}),  |ω′| ≤ π    (1.606)

In this case h is a lowpass filter that avoids aliasing caused by sampling. The decimator filter transformations are illustrated in Figure 1.62 for M = 4; moreover, if x is bandlimited, the specifications of h can be made less stringent.
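The decimator filter of (1.599)–(1.601) — filter at the input rate, then keep every M-th sample — can be sketched as (function name and test values are ours):

```python
def decimator_filter(x, h, M):
    """(1.601): y_k = sum_i h_i x_{kM-i}; FIR filter, then downsample by M."""
    v = [sum(h[i] * x[n - i] for i in range(len(h)) if 0 <= n - i < len(x))
         for n in range(len(x))]
    return v[::M]

y = decimator_filter([1, 2, 3, 4, 5, 6], [1, 1], 2)   # -> [1, 5, 9]
```

Computing all of v and then discarding M − 1 out of every M samples is wasteful; the polyphase implementation described at the end of this appendix computes only the retained outputs.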
Figure 1.62. Frequency responses related to the transformations in a decimator filter for M = 4.

1.A.5 Interpolator filter

An interpolator filter is given by the cascade of an upsampler and a digital filter, as illustrated in Figure 1.63; the task of the digital filter is to suppress the images created by upsampling [17]. Let h_n = h(nT_c′). Then we have the following input–output relations:

  w_k = x(k/L)  for k = 0, ±L, ±2L, …,  and 0 otherwise    (1.607)

  y_k = Σ_{j=−∞}^{+∞} h_{k−j} w_j    (1.608)

Therefore

  y_k = Σ_{r=−∞}^{+∞} h_{k−rL} x_r    (1.609)
Figure 1.63. Interpolator filter.

Let i = ⌊k/L⌋ − r and

  g_{k,i} = h_{iL+(k) mod L}    (1.610)

From (1.609) we get

  y_k = Σ_{i=−∞}^{+∞} g_{k,i} x_{⌊k/L⌋ − i}    (1.611)

We note that g_{k,i} is periodic in k with period L. In the z-transform domain we find

  W(z) = X(z^L)    (1.612)

  Y(z) = H(z)W(z) = H(z)X(z^L)    (1.613)

If

  H(e^{jω′}) = L  for |ω′| ≤ π/L,  and 0 elsewhere    (1.614)

we find

  Y(e^{jω′}) = L X(e^{jω′L})  for |ω′| ≤ π/L,  and 0 elsewhere    (1.615)

The relation between the input and output signal power for an interpolator filter is expressed by (1.419). The interpolator filter transformations in the time and frequency domains are illustrated in Figure 1.64 for L = 3.

1.A.6 Rate conversion

Decimator and interpolator filters can be employed to vary the sampling frequency of a signal by an integer factor; in some applications, however, it is necessary to change the sampling frequency by a rational factor L/M. A possible procedure consists of first converting the discrete-time signal into a continuous-time signal by a digital-to-analog converter (DAC), and then resampling it at the new frequency. It is, however, easier and more convenient to change the sampling frequency by discrete-time transformations, using, for example, the structure of Figure 1.65.

Figure 1.64. Time and frequency responses related to the transformations in an interpolator filter for L = 3.

Figure 1.65. Sampling frequency conversion by a rational factor L/M.
This system can be thought of as the cascade of an interpolator and a decimator filter, as illustrated in Figure 1.66, where h = h₁ * h₂; the desired result is obtained by a response

  H(e^{jω′}) = 1  for |ω′| ≤ min(π/L, π/M),  and 0 elsewhere    (1.616)

In the time domain the following relation holds:

  y_k = Σ_{i=−∞}^{+∞} g_{k,i} x_{⌊kM/L⌋ − i}    (1.617)

where

  g_{k,i} = h((iL + (kM) mod L)T_c′)    (1.618)

is the time-varying impulse response. In the frequency domain we get

  V(e^{jω′}) = H(e^{jω′}) X(e^{jω′L})    (1.619)

  Y(e^{jω″}) = (1/M) Σ_{l=0}^{M−1} H(e^{j(ω″−2πl)/M}) X(e^{j(ω″−2πl)L/M})    (1.620)

Observing the fact that H(e^{jω′}) is zero for π/M ≤ ω′ ≤ 2π − π/M, we obtain

  Y(e^{jω″}) = (1/M) H(e^{jω″/M}) X(e^{jω″L/M}),  |ω″| ≤ min(π, πM/L)    (1.621)

or

  Y(f) = (1/M) X(f)  for |f| ≤ min(1/(2T_c), L/(2MT_c))    (1.622)

Example 1.A.1 (M > L: M = 5, L = 4)
Transformations for M = 5 and L = 4 are illustrated in Figure 1.67. The desired result is obtained by a response H(e^{jω′}) that has the stopband cutoff frequency within the interval indicated by (1.616).

Example 1.A.2 (M < L: M = 4, L = 5)
The inverse transformation of the above example is obtained by a transformation with M = 4 and L = 5, as illustrated in Figure 1.68.

Figure 1.66. Decomposition of the system of Figure 1.65, where h = h₁ * h₂.
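The structure of Figure 1.65 — upsample by L, filter, downsample by M — can be sketched in a few lines (the function name, the sample-and-hold filter, and the test values are ours; a production resampler would use a properly designed lowpass h):

```python
def rate_convert(x, h, L, M):
    """Figure 1.65: upsample by L, FIR-filter with h, downsample by M."""
    w = [0.0] * (len(x) * L)
    w[::L] = [float(v) for v in x]
    y = [sum(h[i] * w[n - i] for i in range(len(h)) if 0 <= n - i < len(w))
         for n in range(len(w))]
    return y[::M]

# L/M = 2/1 with h = [1, 1] acts as a sample-and-hold interpolator
y = rate_convert([1, 2, 3], [1, 1], L=2, M=1)   # -> [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]
```

With L = M = 1 and h = [1] the structure reduces to the identity, as expected from (1.616).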
Figure 1.67. Rate conversion by a rational factor L/M where M > L.

Figure 1.68. Rate conversion by a rational factor L/M where M < L.

1.A.7 Time interpolation

Referring to the interpolator filter h of Figure 1.63, one finds that if L is large the filter implementation may require non-negligible complexity; in fact, in the case of a very large interpolation factor L, the number of coefficients required for an FIR filter implementation can be very large. Consequently, after a first interpolator filter with a moderate value of the interpolation factor, the samples {y_k = y(kT_c′)} may be further time interpolated until the desired sampling accuracy is reached [17]. We describe below two time interpolation methods, linear and quadratic.

Linear interpolation

Let {y_k} be the sequence that we need to interpolate to produce the signal z(t), t ∈ ℝ. Given two samples y_{k−1} and y_k, the signal z(t), limited to the interval [(k−1)T_c′, kT_c′], obtained by linear interpolation is

  z(t) = y_{k−1} + (t − (k−1)T_c′)(y_k − y_{k−1})/T_c′    (1.623)

Figure 1.69. Linear interpolation in time by a factor P = 4.
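Equation (1.623) translates directly into code; this sketch assumes uniformly spaced samples and that t lies within the available record (the function name is ours):

```python
def lin_interp(y, t, Tc1):
    """(1.623): z(t) on [(k-1)*Tc', k*Tc'] from the samples y[k-1], y[k].

    y is the sample list {y_0, y_1, ...}, Tc1 is the sample spacing Tc'.
    """
    k = int(t // Tc1) + 1          # t falls in the interval [(k-1)Tc', kTc']
    frac = t / Tc1 - (k - 1)       # normalized position within the interval
    return y[k - 1] + frac * (y[k] - y[k - 1])

z = lin_interp([0.0, 2.0, 4.0], 0.5, 1.0)   # midway between y_0 and y_1 -> 1.0
```

At t = (k−1)T_c′ the formula returns y_{k−1} exactly, so consecutive segments join continuously.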
For an interpolation factor P of y_k, we consider the sampling instants nT_c″ = nT_c′/P; their linear interpolation originates the sequence of P values z_n = z(nT_c″) given by

  z_n = y_{k−1} + (n − (k−1)P)(y_k − y_{k−1})/P    (1.624)

where n = (k−1)P, (k−1)P + 1, …, kP − 1. The case k = 1 is of particular interest: regarding y₀ and y₁ as the two most recent input samples, for n = 0, 1, …, P − 1 we have

  z_n = y₀ + n(y₁ − y₀)/P    (1.626)

Quadratic interpolation

In many applications linear interpolation does not always yield satisfactory results. In this case, instead of connecting two points with a straight line, one resorts to a polynomial of degree Q − 1 passing through Q points that are determined by the samples of the input sequence. For this purpose the Lagrange interpolation is widely used. As an example we report here the case of quadratic interpolation: we consider a polynomial of degree 2 that passes through 3 points that are determined by the input samples. Let y_{k−1}, y_k and y_{k+1} be the samples to interpolate by a factor P in the interval [(k−1)T_c′, (k+1)T_c′]. The quadratic interpolation yields the values

  z_n = (n′/(2P))(n′/P − 1) y_{k−1} + (1 − n′/P)(1 + n′/P) y_k + (n′/(2P))(n′/P + 1) y_{k+1}    (1.627)

with n′ = 0, 1, …, P − 1.
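With μ = n′/P, (1.627) is the Lagrange parabola through the three points (−1, y_{k−1}), (0, y_k), (1, y_{k+1}); a minimal sketch (the function name is ours):

```python
def quad_interp(y_prev, y_cur, y_next, mu):
    """(1.627) with mu = n'/P in [0, 1): Lagrange quadratic through
    (-1, y_prev), (0, y_cur), (1, y_next), evaluated at t = mu."""
    return (0.5 * mu * (mu - 1.0) * y_prev
            + (1.0 - mu) * (1.0 + mu) * y_cur
            + 0.5 * mu * (mu + 1.0) * y_next)

# exact on a parabola y = t^2: samples at t = -1, 0, 1 are 1, 0, 1
z = quad_interp(1.0, 0.0, 1.0, 0.5)   # t = 0.5 -> 0.25
```

By construction the formula reproduces any polynomial of degree ≤ 2 exactly, and reduces to z = y_k at μ = 0.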
1.A.8 The noble identities

We recall some important properties of decimator and interpolator filters, known as noble identities; they will be used extensively in the next section on polyphase decomposition. Let G(z) be a rational transfer function, i.e., a function expressed as the ratio of two polynomials in z or in z^{−1}. Then it is possible to exchange the order of downsampling and filtering, and the order of upsampling and filtering, as illustrated in Figure 1.70: the system of Figure 1.70a is equivalent to that of Figure 1.70b, and the system of Figure 1.70c is equivalent to that of Figure 1.70d.

Figure 1.70. Noble identities.

The proof of the noble identities is simple. For the first identity, it is sufficient to note that W_M^{mM} = 1; hence, by (1.580),

  Y₂(z) = (1/M) Σ_{m=0}^{M−1} X(z^{1/M}W_M^m) G((z^{1/M}W_M^m)^M) = G(z) (1/M) Σ_{m=0}^{M−1} X(z^{1/M}W_M^m) = Y₁(z)    (1.628)

For the second identity it is sufficient to observe that

  Y₄(z) = G(z^L) X(z^L) = Y₃(z)    (1.629)

1.A.9 The polyphase representation

The polyphase representation allows considerable simplifications in the analysis of transformations via interpolator and decimator filters, as well as the efficient implementation of such filters. To explain the basic concept, let us consider a filter having transfer function H(z) = Σ_{n=0}^{+∞} h_n z^{−n}. Separating the coefficients with even and odd time indices, we can write H(z) as

  H(z) = Σ_{m=0}^{+∞} h_{2m} z^{−2m} + z^{−1} Σ_{m=0}^{+∞} h_{2m+1} z^{−2m}    (1.630)

Defining

  E^{(0)}(z) = Σ_{m=0}^{+∞} h_{2m} z^{−m},  E^{(1)}(z) = Σ_{m=0}^{+∞} h_{2m+1} z^{−m}    (1.631)

we get

  H(z) = E^{(0)}(z²) + z^{−1} E^{(1)}(z²)    (1.632)

To expand this idea, let M be an integer; we can always decompose H(z) as

  H(z) = Σ_{m=0}^{+∞} h_{mM} z^{−mM} + z^{−1} Σ_{m=0}^{+∞} h_{mM+1} z^{−mM} + ··· + z^{−(M−1)} Σ_{m=0}^{+∞} h_{mM+M−1} z^{−mM}    (1.633)

Letting

  e_m^{(ℓ)} = h_{mM+ℓ},  0 ≤ ℓ ≤ M − 1    (1.634)
The polyphase representation of an impulse response {h_n} with 7 coefficients (n = 0, ..., 6) is illustrated in Figure 1.71 for M = 3.

[Figure 1.71. Polyphase representation of the impulse response {h_n}, showing the components e^(0), e^(1) and e^(2).]

We can express compactly the previous equation as

H(z) = Σ_{ℓ=0}^{M−1} z^{−ℓ} E^(ℓ)(z^M)  (1.635)

where

E^(ℓ)(z) = Σ_{i=0}^{∞} e^(ℓ)_i z^{−i}  (1.636)

The expression (1.635) is called the type 1 polyphase representation (with respect to M), and the E^(ℓ)(z), ℓ = 0, ..., M − 1, the polyphase components of H(z). A variation of (1.635), called type 2 polyphase representation, is given by

H(z) = Σ_{ℓ=0}^{M−1} z^{−(M−1−ℓ)} R^(ℓ)(z^M)  (1.637)

where the components R^(ℓ)(z) are permutations of E^(ℓ)(z), that is R^(ℓ)(z) = E^(M−1−ℓ)(z).

Efficient implementations

The polyphase representation is the key to obtaining efficient implementations of decimator and interpolator filters. In the following, we will first consider the efficient implementations for M = 2 and L = 2, then we will extend the results to the general case.
by the noble identities. Implementation of a decimator ﬁlter using the type 1 polyphase representation for M D 2.73b.0/ and . let N be the number of coefﬁcients of h.1. the ﬁlter representation can be drawn as in Figure 1. we can represent H .74. but.635). The efﬁcient implementation for the general case is obtained as an extension of the case for M D 2 and is shown in Figure 1. we consider a decimator ﬁlter with M D 2. each operating at half the input frequency and having half the number of coefﬁcients as the original ﬁlter.0/ and e . By (1. N In this implementation e . so that N D N . Decimator ﬁlter. Figure 1. and N . as e . this latter operation is generally called serial–to–parallel (S/P) conversion.1/ . The structure can be also drawn as in Figure 1. the computational complexity in terms of multiplications per second (MPS) is N Fc 2 while the number of additions per second (APS) is given by MPS D APS D . the total cost is still N multiplications and N 1 additions.`/ 1 additions.A.`/ operates at half the input rate.1/ be the number of coefﬁcients of e . Multirate systems 121 Figure 1. Optimized implementation of a decimator ﬁlter using the type 1 polyphase representation for M D 2.72.1/ .73a.z/ as illustrated in Figure 1.61.73.1/ . respectively.0/ and e .`/ multiplications and N .`/ requires N . Referring to Figure 1.639) 2 Therefore the complexity is about one half the complexity of the original ﬁlter. .0/ C N . Note that the system output is now given by the sum of the outputs of two ﬁlters. where input samples fxn g are alternately presented at the input to the two ﬁlters e .72.N (1. To formalize the above ideas.638) 1/Fc (1.
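The equivalence between the direct decimator (full-rate filter followed by the downsampler) and the type-1 polyphase structure, in which each branch operates at 1/M of the input rate, can be verified with the following sketch (example values mine):

```python
import numpy as np

# Sketch of the type-1 polyphase decimator: branch l filters the
# serial-to-parallel converted subsequence x(mM - l) with e^(l)_k = h(kM + l),
# so every branch runs at the low rate.
rng = np.random.default_rng(2)
M = 2
h = rng.standard_normal(8)
x = rng.standard_normal(64)

y_direct = np.convolve(x, h)[::M]     # full-rate filtering, then keep 1 in M

y_poly = np.zeros(len(y_direct))
for l in range(M):
    e_l = h[l::M]                                  # e^(l)_k = h(kM + l)
    x_l = np.concatenate((np.zeros(l), x))[::M]    # x(mM - l), zeros for n < 0
    branch = np.convolve(x_l, e_l)                 # low-rate convolution
    n = min(len(branch), len(y_poly))
    y_poly[:n] += branch[:n]
assert np.allclose(y_direct, y_poly)
```

Both structures produce the same output, but the polyphase form performs all multiplications at the output rate, which is the source of the factor-M complexity saving discussed above.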
0/ and e .76a. we can represent H .nTQ / 0 xq D x.75. In the general case. by the noble identities. this latter operation is generally called parallel–to–serial (P/S) conversion. Interpolatordecimator ﬁlter. The type 2 polyphase implementations of interpolator ﬁlters are depicted in Figure 1.z/ as illustrated in Figure 1. we consider an interpolator ﬁlter with L D 2. With reference to Figure 1.76b.q TQ Let rn D r. ÜÒ ¹ Ì ¾ ¹ ¹ ´¼µ ´ ¾µ Þ ¹ ¹ Þ ¹ ½ ´½µ ´ ¾µ Þ Ý Ì ¼ ¹ Ì ¾ Figure 1.nTQ /g from TQ to TQ to get the signal 0 /g. Implementation of a decimator ﬁlter using the type 1 polyphase representation.1/ .78. where output samples are alternately taken from the output of the two ﬁlters e . efﬁcient implementations are easily obtainable as extensions of the case for L D 2 and are shown in Figure 1. By (1. fx. Implementation of an interpolator ﬁlter using the type 1 polyphase representation for L D 2.122 Chapter 1.77.635). Interpolator ﬁlter.79.640) .63. at the receiver of a transmission 0 system it is often useful to interpolate the signal fr.75. The structure can be also drawn as in Figure 1. Remarks on the computational complexity are analogous to those of the decimator ﬁlter case. Elements of signal theory Figure 1.74. the ﬁlter representation can be drawn as in Figure 1. As illustrated in Figure 1.q TQ / (1.
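The corresponding polyphase interpolator, in which the branch outputs are interleaved by a parallel-to-serial (P/S) converter, can be sketched in the same way (example values mine):

```python
import numpy as np

# Sketch of the type-1 polyphase interpolator: zero insertion followed by
# full-rate filtering versus L low-rate branches whose outputs are
# interleaved by a parallel-to-serial converter.
rng = np.random.default_rng(3)
L = 2
h = rng.standard_normal(8)
x = rng.standard_normal(32)

xu = np.zeros(len(x) * L)
xu[::L] = x                            # upsampling: insert L-1 zeros
y_direct = np.convolve(xu, h)          # full-rate filtering

y_poly = np.zeros(len(y_direct))
for l in range(L):
    branch = np.convolve(x, h[l::L])   # branch l: e^(l)_k = h(kL + l), low rate
    out = y_poly[l::L]                 # P/S: branch l supplies samples mL + l
    out[:len(branch)] = branch[:len(out)]
assert np.allclose(y_direct, y_poly)
```

Here the branch filters never multiply by the inserted zeros, which is why the complexity remarks for the decimator case carry over.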
A. yk D x.1. we refer to [18] (see also Chapter 14). Moreover. .641) (1. Implementation of an interpolator ﬁlter using the type 1 polyphase representation.77. Let yk be the output with sampling period Tc . t0 0 D `0 C L0 L TQ (1.q TQ /g is then downsampled with timing phase t0 . we assume that t0 is a multiple of TQ . : : : .643) where `0 2 f0. For the general 0 case of an interpolatordecimator ﬁlter where t0 and the ratio Tc =TQ are not constrained. Figure 1.76.kTc C t0 / To simplify the notation. we assume the following relations: LD TQ 0 TQ MD Tc TQ (1. 1. L 1g. Multirate systems 123 P/S xn E (0) (z) 2 xn Tc z 1 E (0) (z) yk E (1) (z) (b) E (1) (z) 2 (a) yk T’ = Tc c 2 Figure 1. Optimized implementation of an interpolator ﬁlter using the type 1 polyphase representation for L D 2. 0 The sequence fx. and L0 is a nonnegative integer number.642) 0 with L and M positive integer numbers.
Implementation of an interpolator ﬁlter using the type 2 polyphase representation.77. where yk D vkCL0 (1. the implementation of the interpolatordecimator ﬁlter is given in Figure 1.80.124 Chapter 1. Based on the above equations we have yk D x k M LC`0 CL0 L 0 We now recall the polyphase representation of fh. Figure 1.78.644) fE . For the special case M D 1. : : : .646) .`/ .z/g ` D 0. Interpolatordecimator ﬁlter. Elements of signal theory xn R(0)(z) xn L z1 R(0)(z) P/S k=L1 R(1)(z) L z1 R(1)(z) k=L2 yk R(L1) (z) (a) yk L R(L1) (z) k=0 (b) Figure 1. that is for Tc D TQ . 1.79.nTQ /g with L phases (1.645) 0 The interpolator ﬁlter structure from TQ to TQ is illustrated in Figure 1. L 1 (1.
A. Implementation of an interpolatordecimator ﬁlter with timing phase t0 D 0 .81. Multirate systems 125 Figure 1. Polyphase implementation of an interpolatordecimator ﬁlter with timing phase t0 D . Q Figure 1.`0 C L0 L/TQ .80.`0 C L0 L/T0 .1. .
of L0 samples. let q D ` C n L.`0 . the branch is identiﬁed (say `0 ) and its output must be downsampled by a factor M L. L 1. First. to downsample the signal interpolated 0 at TQ one can still use the polyphase structure of Figure 1.649) As a result. In fact.647) We now consider the general case M 6D 1.643) to be considered.80.z L / in M phases: E .z L M / m D 0. we determine a positive integer N0 so that L0 C N0 is a multiple of M. the signal is not modiﬁed before the downsampler. we have 0 0 0 x`Cn L D x. considering only branch `0 . Elements of signal theory In other words. . fyk g coincides with the signal fvn g at the output of branch `0 of the polyphase structure. M 1 (1. as the relation between fvn g and fyk g must take into account a lead. is equivalent to that given in Figure 1.`0 / . Using now the representation of E . once t0 is chosen. In practice we need to ignore the ﬁrst L0 samples of fvn g. that is L0 C N0 D M0 M (1. the output fxq g at instants that are multiples of TQ is given by the outputs of the various polyphase branches in sequence. In particular we have r 0p D r p N0 and x 0p D x p N0 (1.m/ .80. in which we have introduced a lag of N0 samples on the sequence frn g and a further lead of N0 samples before the downsampler. and n integer.nTQ C `TQ / (1. With 0 reference to Figure 1. : : : .n L TQ C `TQ / D x. 1.126 Chapter 1.650) an efﬁcient implementation of the interpolatordecimator ﬁlter is given in Figure 1.81b.81a. z L0 . 1. Given L0 . In any case. Notice that there is the timing lead L0 L in (1.80. ` D 0. : : : .648) The structure of Figure 1.
Appendix 1.B  Generation of Gaussian noise

Let w = wI + j wQ be a complex Gaussian r.v. with zero mean and unit variance, whose real and imaginary components wI = Re[w] and wQ = Im[w], being statistically independent with equal variance, have a circularly symmetric Gaussian joint probability density function; for this reason w is also called a circularly symmetric Gaussian r.v. In polar notation,

w = A e^{jφ}  (1.651)

It can be shown that φ is a uniform r.v. in the interval [0, 2π), and A is a Rayleigh r.v. with probability distribution

P[A ≤ a] = 1 − e^{−a²} for a > 0, and P[A ≤ a] = 0 for a < 0  (1.652)

Observing (1.651), if u1 and u2 are two uniform r.v.s in [0, 1), then

A = √(−ln(1 − u1))  (1.653)

and

φ = 2π u2  (1.654)

In terms of real components, it results that

wI = A cos φ   and   wQ = A sin φ  (1.655)

are two statistically independent Gaussian r.v.s, each with zero mean and variance equal to 0.5.
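The polar construction A = √(−ln(1 − u1)), φ = 2πu2 can be validated statistically; the sample size and tolerances below are mine:

```python
import numpy as np

# Sketch of the polar method: map two independent uniform variables
# u1, u2 in [0,1) to a zero-mean, unit-variance complex Gaussian sample
# w = wI + j*wQ (each component has variance 1/2).
rng = np.random.default_rng(4)
n = 200_000
u1 = rng.random(n)
u2 = rng.random(n)

A = np.sqrt(-np.log(1.0 - u1))     # Rayleigh amplitude, P[A <= a] = 1 - e^{-a^2}
phi = 2.0 * np.pi * u2             # uniform phase in [0, 2*pi)
w = A * np.exp(1j * phi)

assert abs(np.mean(w.real)) < 0.01 and abs(np.mean(w.imag)) < 0.01
assert abs(np.var(w.real) - 0.5) < 0.01 and abs(np.var(w.imag) - 0.5) < 0.01
```

Since E[A²] = 1 and the phase is uniform, the total power of w is 1, split evenly between the two components.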
k/ D N 1 X nD0 cn x.:::.k n/ (2. the coefﬁcients of the ﬁlter are determined using the minimum meansquare error (MMSE) criterion. The development of this theory assumes the knowledge of the correlation functions of the relevant processes. we deﬁne the estimation error as e.k/.1 is called transversal ﬁlter. 2. : : : . 1. n D 0. the problem is to determine the FIR ﬁlter so that.1 The Wiener ﬁlter With reference to Figure 2. If we indicate with fcn g. let x and d be two individually and jointly wide sense stationary random processes with zero mean.1. the output y.k/ D d.k/ y.k/ replicates as closely as possible d. to cause the ﬁlter output y.k/ (2. the N coefﬁcients of the ﬁlter. through realizations of the processes involved. 2] that will be presented in this chapter is fundamental to the comprehension of several important applications.k/j2 ] and the coefﬁcients of the optimum ﬁlter are those that minimize J : fcn g.2) In the Wiener theory. Therefore the cost function is deﬁned as J D E[je.k/ is the desired sample at the ﬁlter output at instant k.k/.nD0.k/.4) . as the output is formed by summing the products of the delayed input samples by suitable coefﬁcients.1) If d.N 1 (2. if the ﬁlter input is x.1. N 1.Chapter 2 The Wiener ﬁlter and linear prediction The theory of the Wiener ﬁlter [1. The FIR ﬁlter in Figure 2. The Wiener theory provides the means to design the required ﬁlter. we have y.k/ to replicate as closely as possible d. An approximation of the Wiener ﬁlter can be obtained by least squares methods.3) min J (2.
A.k N C 1/.k/ by a linear combination of x.k/ D [x.5) 1 The components of an N dimensional vector are usually identiﬁed by an index varying either from 1 to N or from 0 to N 1. : : : . : : : . Coefﬁcient vector cT D [c0 . The Wiener ﬁlter and linear prediction Figure 2. x.k/ c and the estimation error as e.k The ﬁlter output at instant k is expressed as y. c1 .1. Matrix formulation The problem introduced in the previous section is now formulated using matrix notation. Filter input vector at instant k xT .130 Chapter 2.k/ (2.k/ cT x. .k/. We deﬁne:1 1.k/ D cT x. the formulation of the Wiener theory is further extended to the case of vector signals. in the second half of the Appendix. The Wiener ﬁlter problem can be formulated as the problem of estimating d. x. of which reading should be deferred until the end of this section.k/ D xT . A brief introduction to estimation theory is given in Appendix 2.6) 1] (2. The Wiener ﬁlter with N coefﬁcients.8) (2. : : : .k N C 1/] (2.7) 1/.k/. c N 2. x.k/ D d.
k/d.346). it holds p H D E[d Ł . J admits one and only one minimum value.k/ c H xŁ . Correlation between the desired output at instant k and the ﬁlter input vector at the same instant 2 3 E[d.k/xT .k/ (2. .k/x Ł .k/ c H xŁ . N ð N correlation matrix of the ﬁlter input vector. Recalling the deﬁnition J D E[e.k/] Then 2 J D ¦d n/] D rdx .k//] (2.k/]c C c H E[xŁ . is a quadratic function. it follows J D E[d Ł .k/x Ł .15) (2.k/] 6 7 6 E[d.16) cH p p H c C c H Rc (2. N 1 (2.k/] D E[.2. y Ł . Then.k/c/.k/]c Assuming that x and d are individually and jointly WSS.k/] c H E[xŁ .k/x Ł .13) 6 7 : : 6 7 : 4 5 E[d.k/ D d Ł .k Moreover. considered as a function of c.1. if R is positive deﬁnite.k/ and computing the products.k/x Ł .k/xT . we introduce the following quantities: 1. Variance of the desired signal 2 ¦d D E[d.k/cŁ eŁ .k/xŁ .k 1/] 7 6 7 7Dp rdx D E[d.17) The cost function J . We will then seek that particular vector copt that minimizes J .k/] D 6 (2.9) We express now the cost function J as a function of the vector c.k/d.k/eŁ .k N C 1/] The components of p are given by [p]n D E[d. Plots of J are shown in Figure 2. The Wiener ﬁlter 131 Moreover.11) xT .k/d Ł .n/ n D 0.k/] 3.k/xT .12) 2. as deﬁned in (1.k/ D x H .k/xT .d Ł . R D E[xŁ .k/] (2.14) (2.k/] C (2.d. for the particular cases N D 1 and N D 2.k/ D c H xŁ . : : : .10) E[d Ł . 1.2.
the real independent variables are 2N .17) with respect to c.19) In general. as c D c I C jc Q . Plot of J for the cases N D 1 and N D 2. Recognizing that.18) D6 rc J D 7 6 7 @c : : 6 7 : 6 7 4 @J 5 @J Cj @c N 1.I @c N 1. because the vector p and the autocorrelation matrix R are complex.Q (2.2.Q 6 7 6 7 @J @J 6 7 Cj @J 6 7 @c1.Q and also rc J D rc I J C jrc Q J (2.I @c1.20) . we also write p D p I C jp Q (2. Determination of the optimum ﬁlter coefﬁcients It is now a matter of ﬁnding the minimum of (2.132 Chapter 2. to accomplish this task we deﬁne the derivative of J with respect to c as the gradient vector 2 3 @J @J Cj 6 7 @c0.I @c0. The Wiener ﬁlter and linear prediction Figure 2.
27) (2.c I C jc Q / D p H rc Q p H c D rc Q p H . we ﬁnd rc I p H c D rc I p H .26) (2.19).34) .24) (2.29) (2.2. In scalar form.30) (2.1.31) For the optimum coefﬁcient vector copt the components of rc J are all equal to zero.c H Rc/ D 2R I c Q C 2R Q c I From the above equations we obtain rc p H c D 0 rc c p D 2p rc c H Rc D 2Rc Substituting the above results into (2. the Wiener–Hopf equation is a system of N equations in N unknowns: N 1 X i D0 (2.17) using (2.n/ n D 0.23) (2.i rx .cT I rc Q c H p D rc Q .c I C jc Q / D jp H rc I c H p D rc I . Observation 2.n i/ D rdx . N 1 (2.1 The computation of the optimum coefﬁcients copt requires the knowledge only of the input correlation matrix R and of the crosscorrelation vector p between the desired output and the input vector.17).25) (2.c H Rc/ D 2R I c I jcT /p D p Q jcT /p D Q 2R Q c Q jp (2.22) (2.32) is copt D R 1 p (2.32) copt.cT I rc I .Rc p/ H (2. The Wiener ﬁlter 133 and R D R I C jR Q If now we take the derivative of the terms of (2. the solution of (2. : : : .33) If R 1 exists.28) (2. it turns out rc J D 2p C 2Rc D 2. 1.32) is called the Wiener–Hopf equation (WH). hence we get Rcopt D p Then (2.21) rc Q .
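The Wiener–Hopf solution c_opt = R⁻¹p can be illustrated with a small simulation; the scenario and all numerical values below are mine, not from the text. The desired signal is a fixed FIR combination of a white input plus independent noise, so the optimum filter must recover that FIR and the residual error power must equal the noise power:

```python
import numpy as np

# Toy illustration of the Wiener-Hopf equation: estimate R and p by time
# averages, solve R c = p, and check c_opt against the known FIR.
rng = np.random.default_rng(5)
h_true = np.array([1.0, -0.5, 0.25, 0.1])       # illustrative FIR (mine)
N, K = len(h_true), 200_000
x = rng.standard_normal(K)
d = np.convolve(x, h_true)[:K] + 0.1 * rng.standard_normal(K)

# Time-average estimates of R = E[x*(k) x^T(k)] and p = E[d(k) x*(k)]
X = np.lib.stride_tricks.sliding_window_view(x, N)[:, ::-1]  # rows [x(k),...,x(k-N+1)]
D = d[N - 1:]
R = (X.conj().T @ X) / len(X)
p = (X.conj().T @ D) / len(X)

c_opt = np.linalg.solve(R, p)                   # Wiener-Hopf solution
J_min = np.mean(np.abs(D - X @ c_opt) ** 2)     # residual MSE
assert np.allclose(c_opt, h_true, atol=0.02)
assert abs(J_min - 0.01) < 0.005                # noise power sigma^2 = 0.1^2
```

This also previews the least squares method of Section 2.3, where exactly such time averages replace the expectations.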
k/xŁ . Orthogonality of signals for an optimum ﬁlter. 2 Note that orthogonality holds only if e and x are considered at the same instant.2 In scalar form.1 (Principle of orthogonality) The condition of optimality for c is satisﬁed if e. that is E[e.k/.k/. (2. : : : .k/ which for c D copt yields E[e.d.k/ are orthogonal: E[e. 1.k/] D 0 In fact. Figure 2.k/] D E[e.1 For c D copt .k/xŁ .k/] D 0 Formally the following is true.k n/] D 0 n D 0.36) xT .k/. e. and y. N 1. N 1 (2.k/y Ł . e.k/. the notion of orthogonality between random variables is used.37) (2. .3 depicts the relation between the three signals d.k/x Ł . : : : . Theorem 2.39) for c D copt (2.k/] D cH 0 D0 For an optimum ﬁlter.k/] D E[xŁ .k/] D c H E[e.134 Chapter 2.k/ and x. the ﬁlter is optimum if e. The Wiener ﬁlter and linear prediction The principle of orthogonality It is interesting to observe the relation E[e.k/c/] D p Rc (2. n D 0.k/y Ł .35) Corollary 2. 1. E[e. In other words.3.k/ and y.k/xŁ . using the orthogonality principle.38) d(k) e(k) y(k) Figure 2.k/c H xŁ .k/ is orthogonal to fx.k n/g.k/ are orthogonal.
17). we get 2 Jmin D ¦d 2 D ¦d H copt p H p H copt C copt p p H copt (2.46) where ¹i D uiH .1.2): d. using the decomposition (1.c : ¹N copt / (2.32) of copt in (2.40) we can ﬁnd an alternative expression to (2.c copt /. Then J assumes the form: J D Jmin C ν H ν D Jmin C N X i D1 N X i D1 ½i j¹i j2 (2.40) Another useful expression of Jmin is obtained from (2.c copt / (2.c Let us now deﬁne 2 copt / H U U H .c copt / is nonnegative and in particular it vanishes for c D copt .43) Using (2.44) Recalling that the autocorrelation matrix is positive semideﬁnite.41) (2.42) whereby it follows 2 Jmin D ¦d 2 ¦y (2. Characterization of the cost function surface The result expressed by (2.c copt / H R.44) allows further observations on J . The vector ν may be interpreted as a translation and a rotation of the vector c.17) for the cost function J : J D Jmin C .k/ are orthogonal for c D copt .47) copt /j2 D Jmin C ½i juiH .k/ C y.c copt / (2.c copt / H R. it follows that the quantity . The Wiener ﬁlter 135 Expression of the minimum meansquare error We now determine the value of the cost function J in correspondence of copt . Substituting the expression (2.k/ D e.45) 3 ¹1 6 : 7 ν D 4 : 5 D U H .369) we get J D Jmin C .c . In fact.k/ As e.k/ and y.2. then 2 2 ¦d D Jmin C ¦ y (2.
5. The Wiener ﬁlter and linear prediction Figure 2.n i/ D rdx . Let u½max and u½min denote the eigenvalues of R in correspondence of ½max and ½min . The above observation allows us to deduce that J increases more rapidly in the direction of the eigenvector corresponding to the maximum eigenvalue ½max .136 Chapter 2. The result (2. Likewise the increase is slower in the direction of the eigenvector corresponding to the minimum eigenvalue ½min .50) is employed to analyze the system in the general case of an IIR ﬁlter.i rx . In the 2dimensional case they trace ellipses with axes that are parallel to the direction of the eigenvectors and ratio of axes that is related to the value of the eigenvalues.z/ D Pdx .k/. it follows that rc J is largest along u½max . The Wiener ﬁlter in the zdomain For a ﬁlter with an inﬁnite number of coefﬁcients.49) We note that while (2.4. This is also observed in Figure 2.4. equation (2.33) of the optimum ﬁlter becomes C1 X iD 1 copt.48) Taking the ztransform of both members yields Copt . Example 2.z/ Then the transfer function of the optimum ﬁlter is given by Copt . respectively.34) is useful in evaluating the coefﬁcients of the optimum FIR ﬁlter.z/Px .50) (2. Note that each component is proportional to the corresponding eigenvalue.z/ (2. where sets (loci) of points c for which a constant value of J is obtained are graphically represented. not necessarily causal.k/ D h Ł x.z/ (2. In this case. Pdx .51) .3. Loci of points with constant J (contour plots).z/ D Px .47) expresses the excess meansquare error J Jmin as the sum of N components in the direction of each eigenvector of R.1 Let d.z/ D Pdx . from Table 1.n/ 8n (2.z/H .1.z/ Px . the equation (2. as shown in Figure 2.
i/ dx (2.50): Jmin D 2 ¦d Z 1 2Tc 1 2Tc Ł Pdx . We assume the desired signal is given by d.e j2³ f Tc /j2 df Px .0. The Wiener ﬁlter 137 h d(k) + c x(k) y(k) e(k) Figure 2.k/ (2. f /C opt .i rŁ . The autocorrelation function of x and the .k D/C'] (2.e. applying Fourier transform properties. we get 2 Jmin D ¦d N 1 X i D0 (2.55) !0 D 2³ f 0 Tc is the tone radian frequency normalized to the sampling period.5.2 We want to ﬁlter the noise from a signal given by one complex sinusoid (tone) plus noise.z/ From (2. f / D 2 ¦d Z 1 2Tc 1 2Tc jPdx . 2³ /.z/ D H . in radians.e j2³ f Tc / d f 2 D ¦d Z 1 2Tc 1 2Tc Using (2.56) where D is a known delay.2.1.40) in scalar notation. An application of the Wiener ﬁlter theory.!0 kC'/ C w.1.53) Ł Pdx . The optimum ﬁlter is given by Copt . f / df Px . x. f / jPdx .k/ D Ae j . uncorrelated with '. f /j2 df Px .52) copt.k/ D B e j[!0 . f / Pdx .55) In (2. and w is white noise with 2 zero mean and variance ¦w .e j2³ f Tc / (2. i.54) D 2 ¦d Z Tc 1 2Tc 1 2Tc Example 2. We also assume that ' 2 U.
57) (2.!0 / 1/ ] (2.40) the minimum value of the cost function J is given by Jmin D B 2 ABe j!0 D E H .N 1/ 1/ 2 A 2 C ¦w : : : : : : A2 e j!0 . using (2. From (2.!0 / R 1D 2 I 2 ¦w ¦w C N A 2 Hence.!0 / ABe j!0 D B2 D 2 1 C N3 ¦w C N A 2 (2. the inverse of R is given by # " 1 A2 E.!/E.!0 /E H .62) (2.138 Chapter 2. The Wiener ﬁlter and linear prediction crosscorrelation between d and x are given by 2 rx .64) (2.!0 / Observing that E H .N ::: 2 A 2 C ¦w 3 7 7 7 AB e 7 5 j!0 D (2.n/ D A2 e j!0 n C ¦w Žn (2.58) rdx .61) (2.N we can express R and p as 2 R D ¦w I C A2 E.!0 / D 2 A 1 C N3 ¦w C N A 2 (2.!0 /E H .!0 /E.N 1 e j!0 : : : e j!0 . the autocorrelation matrix R and the vector p have the following structure: 2 6 6 RD6 6 4 2 6 6 pD6 6 4 2 A 2 C ¦w A2 e j!0 : : : A2 e j!0 .n/ D AB e j!0 .60) Deﬁning ET .N :: : ::: 2/ A2 e j!0 .34): copt D ABe j!0 D B 3e j!0 D E.!0 / E. e j! . : : : .n D/ For a Wiener ﬁlter with N coefﬁcients.N 1/ 2/ 3 7 7 7 7 5 (2.63) p D ABe j!0 D E.65) 2 where 3 D A2 =¦w is the signaltonoise ratio.66) . e j!.!/ D [1.59) A2 e j!0 : : : A2 e j!0 .!/ D N .
!/copt D that is.! !0 /i (2. 3 D 30 dB.e j2³ fTc / given by (2. jCopt .e j! / D N 1 X i D0 copt. copt D B e AN j!0 D E. the optimum ﬁlter frequency response is given by Copt . Jmin becomes negligible. . 2. A Figure 2. Magnitude of Copt .! !0 / (2.68) for f0 Tc D 1=2.!0 /. The Wiener ﬁlter 139 Deﬁning ! D 2³ fTc .e j! / D > B 3e j!0 D 1 e > : A 1 C N3 1 e ! D !0 j .i e j!i D E H .68) ! 6D !0 We observe that. and N D 35.67) 8 j!0 D > B N 3e > < A 1 C N3 Copt . for 3 × 1.6. X B 3e j!0 D N 1 e A 1 C N 3 i D0 j . 3.! !0 /N j . B D A.1.e j!0 /j D B . 1.2.
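The closed-form solution of the tone-plus-noise example can be cross-checked against a direct solve of the Wiener–Hopf equation; the parameter values below are mine. With Λ = A²/σw² denoting the signal-to-noise ratio, the closed form is c_opt = (B/A) Λ/(1 + NΛ) e^{−jω0 D} E(ω0) and J_min = B²/(1 + NΛ):

```python
import numpy as np

# Numerical check of the tone-in-noise Wiener filter example:
# R = sigma_w^2 I + A^2 E E^H,  p = A B e^{-j w0 D} E(w0).
N, A, B, D = 8, 1.5, 2.0, 3
sigma_w2 = 0.1
w0 = 2 * np.pi * 0.2
E = np.exp(1j * w0 * np.arange(N))              # E(w0) = [1, e^{j w0}, ...]^T
R = sigma_w2 * np.eye(N) + A**2 * np.outer(E, E.conj())
p = A * B * np.exp(-1j * w0 * D) * E
c_direct = np.linalg.solve(R, p)

snr = A**2 / sigma_w2                           # the ratio Lambda
c_closed = (B / A) * snr / (1 + N * snr) * np.exp(-1j * w0 * D) * E
assert np.allclose(c_direct, c_closed)

J_min = B**2 - np.real(np.vdot(p, c_direct))    # J_min = sigma_d^2 - p^H c_opt
assert np.isclose(J_min, B**2 / (1 + N * snr))
```

The agreement follows from the matrix inversion lemma applied to the rank-one update σw²I + A²EE^H.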
k j x. the system is called the onestep backward predictor of order N .k O 1// D N X i D1 ci x. Let x be a discretetime WSS random process with zero mean.k N /.70) The block diagram of the linear predictor is represented in Figure 2. it results in 1. i.e j2³ f Tc /j is given in Figure 2.e.k 1/. given the values of x. Indeed.k N /] (2. The Wiener ﬁlter and linear prediction Conversely. Jmin D B 2 . The plot of jCopt . x. attempts to estimate the value of x. . x(k) Tc x(k1) Tc x(k2) Tc c N1 x(kN) c1 c2 cN ^ x(k x (k1) ) Figure 2. 2. 2.k/. In this case.k/. as the signaltonoise ratio vanishes the best choice is to set the output y to zero.69) The onestep forward predictor of order N . when the power of the useful signal is negligible with respect to the power of the additive noise.k 2/.k/ is expressed as a linear combination of the preceding N samples: x. copt D 0.k 1/.e j!0 /j D 0.k i/ (2.2 Linear prediction The Wiener theory considered in the previous section has an important application to the solution of the following problem.k 1/ D [x. Forward linear predictor The estimate of x. jCopt . : : : . Linear predictor of order N. In particular.7. given xT . x.7. There exists also the problem of predicting x. x.6. : : : . prediction consists in estimating a “future” value of the process starting from a set of known “past” values. let us deﬁne the vector xT . for 3 ! 0. 3.140 Chapter 2.k N C 1/.
78) (2. 1.76) E[x .77) Applying (2.2. the optimum coefﬁcients satisfy the equation R N copt D r N Moreover.k/ D x.k/xŁ .k/j2 ] (2.1/ rx .32). from (2. Linear prediction 141 This estimate will be subject to a forward prediction error given by f N . Desired signal d.0/ H r N copt (2.74) (2.75) (2.k T 1/] D R N 2 6 6 1/] D 6 6 4 3 rx . Cost function J (given by (2.79) We can combine the latter two equations to get an augmented form of the WH equation for the linear predictor: " #" # " # H 1 JN rx .k Ł 1/x .k/ 2.k/ x. J D E[j f N .k j x.2.72)). We recall the following deﬁnitions.0/ (2.k 3. we can use the optimization results according to Wiener. Then it turns out: 2 2 ¦d D E[x.2/ 7 7 7 : 7 D rN : 5 : rx .80) 0N copt rN RN where 0 N is the column vector of N zeros.72) to determine the predictor coefﬁcients.N / with R N N ð N correlation matrix.40) we get the minimum value of the cost function J . Filter input vector (deﬁned by (2.k/xŁ .k 1/] D E[x. and p D E[d.69)) xT .k/ D x. .k/] D ¦x D rx .k/x Ł .k/ D x.0/ r N D (2.71) i/ ci x.73) 1/ (2.k O N X i D1 1// (2.k Optimum predictor coefﬁcients If we adopt the criterion of minimizing the meansquare prediction error.k (2. Jmin D J N D rx .
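The equations above can be illustrated for an AR(2) process, whose autocorrelation follows from the Yule–Walker equations; the parameter values are mine. The order-2 predictor obtained from R_N c_opt = r_N recovers the AR parameters, and J_N = r_x(0) − r_N^H c_opt equals the driving-noise power:

```python
import numpy as np

# Sketch: for x(k) = a1 x(k-1) + a2 x(k-2) + w(k), with w white of power sw2,
# the order-2 forward predictor coincides with the AR parameters.
a1, a2, sw2 = 0.6, -0.3, 1.0

# Autocorrelation of the AR(2) process from the Yule-Walker equations
rho1 = a1 / (1.0 - a2)                                   # r(1)/r(0)
r0 = sw2 / (1.0 - a1 * rho1 - a2 * (a1 * rho1 + a2))
r1 = rho1 * r0
r2 = a1 * r1 + a2 * r0

R = np.array([[r0, r1], [r1, r0]])
r = np.array([r1, r2])
c_opt = np.linalg.solve(R, r)           # optimum predictor coefficients
J = r0 - r @ c_opt                      # minimum prediction error power
assert np.allclose(c_opt, [a1, a2]) and np.isclose(J, sw2)
```

This is the numerical counterpart of the statement in the next subsection that, for an AR process of order N, the optimum predictor coefficients coincide with the model parameters and J_N = σw².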
To reproduce fx. Analysis and synthesis of AR . Figure 2. producing at the output only the uncorrelated or “white” component. while prediction can be interpreted as the analysis of an AR process.518) we ﬁnd f N . for copt D a. copt ].2. we can derive the ﬁlter that gives the backward linear prediction error. Moreover.86) It can be shown that the optimum coefﬁcients are given by BŁ gopt D copt (2. J N D ¦w .k/g of power ¦w D J N as input.k/ D f N .78) with the Yule–Walker equation (1. by estimating the autocorrelation sequence over a suitable observation window.9.k/ D w.k/g. b N . in that it is capable of removing the correlated signal component that is present at the input. .k i C 1/ (2.N/ processes. As illustrated in Figure 2.k N/ N X i D1 gi x. Linear prediction 143 With a similar procedure.537) allows us to state what follows: given an AR process x of order N .9.k/. we can observe that this ﬁlter has whitening properties. the AR model may be regarded as the synthesis of the process. Actually. if the order of the prediction error ﬁlter is large enough. the optimum prediction coefﬁcients copt coincide 2 with the parameters a of the process and. that is. moreover. from the last to the ﬁrst (see page 27).87) where B is the backward operator that orders the elements of a vector backward.2.k/ D x. the parameters copt and J N can be determined.k/g. for a process x.k/g. given a realization of the process fx. Relation between linear prediction and AR models The similarity of (2.81) with (1. can be used. Using the predictor then we determine the prediction error f f N . In general. an allpole ﬁlter with 0T 2 coefﬁcients a N D [1. comparing (2. the prediction error f N coincides with white noise having statistical power J N . having white noise fw.
1/ x rx .1/ x r2 .2/ j².1/j2 C ² Ł2 .1/ >c : opt.90) D rx .1/². introduced in (1.1/ >c > opt.1/j2 (2.2 D 1 j².0/ rŁ .0/ rŁ .1 # D " J1 0 # (2.1/ þ þ þ x x 1r D þ þ D r2 .1 8 > J D 1r > 1 > < rx .1/².1 (2.1/j2 > > ². it turns out 0.2. The Wiener ﬁlter and linear prediction First and second order solutions We give below formulae to compute the predictor ﬁlter coefﬁcients and prediction error ﬁlter coefﬁcients for orders N D 1 and N D 2.0/ rŁ .1 a0 1.2 and J2 1 D rx .1/rx .2/ ² 2 .1 x < 1r > > 0 >a D : 1.1/ rx .0/rx .1/j2 j². These results extend to the complex case the formulae obtained in Section 1.12.1/j2 x rx .1/ 1r (2.89) jrx .2/ r2 .88) it results 8 > a0 D J1 r .1/ rx .92) We note that in the above equations ².1 where þ þ þ r .2/ C j².1/j2 x ) 8 ².0/ > > 0. .0/ x þ rx .2/ x r2 .93) 1.2 > < > > 0 >a D : 2. From " rx .0/ þ As a0 D 1.0/ #" a0 0.0/ > > 0 >a : ž N D 2.1/j2 (2.1/ rx .91) j².0/ 2j².1/j2 ² Ł .1 D > < > > : a0 D ². ž N D 1.1/j2 J1 D1 rx .540). 8 > a0 D > 1.0/ jrx .1 J1 rx .1 D > < 1 ² Ł .1/ 1.n/ is the correlation coefﬁcient of x.0/ ) 8 > copt.2/ 1 j².0/ jrx .0/ rx .144 Chapter 2.2/j2 (2.1/rx .
k/j2 ] It results in 0 Ä Jn Ä Jn with J0 D rx .n 1 1 C Cn an 0Ł k.n k. 1.n 1 0 D 1 and an.100) Jn D Jn 1 . Here.94) (2. : : : .2. B 0 1n D .103) .rnC1 /T an (2. In the case of real signals.1/ 2.2.97) 1 Then (2.0/ and J N D J0 N Y nD1 1 (2. in which R N C1 is positive deﬁnite. : : : .104) .96) # C Cn " 0 an 0 BŁ 0 an 1 0 # (2. Moreover. R N C1 is symmetric and the computational complexity of the Delsarte–Genin algorithm (DGA). Linear prediction 145 2.99) (2. nth iteration.2.97) corresponds to the scalar equations: a0 D a0 k. We set: J0 D rx . n (2.1 jCn j2 / (2. 2. We calculate Cn D " 0 an D (2.n with a0 0. is halved with respect to that of LDA. and Toeplitz. Initialization.1 jCn j2 / We now interpret the physical meaning of the parameters in the algorithm.2.0/ 10 D rx .n 1 k D 0.1 The Levinson–Durbin algorithm The Levinson–Durbin algorithm (LDA) yields the solution of matrix equations like (2. Hermitian. n D 1.98) D 0. we report a stepbystep description of the LDA: 1. given in Section 2.85).101) n½1 (2.2. Jn represents the statistical power of the forward prediction error at the nth iteration: Jn D E[j f n .102) (2.95) 1n Jn 1 1 (2. N . instead of N 3 as happens with algorithms that make use of the inverse matrix. with a computational complexity proportional to N 2 .
k n/ D anT xnC1 .k/ D a0 Ł x. by substitution. : : : .107) 1/j2 ].n (2. Cn satisﬁes the following property: 0 Cn D an.k/ C Cn " 0 an 0 BŁ #T xnC1 .k (2.k/ C Cn bn 1 .k/ C Ð Ð Ð C an.k/ D [x.112) 1 .k/ xn . Lattice ﬁlters We have just described the Levinson–Durbin algorithm.k/j2 ] 1/] 1 .107) we have (2.k 1/ # (2.k/ D xn .k/j ] E[ f n Ł 1 .k Ł 1/ C Cn f n 1 . 1n can be interpreted as the crosscorrelation between the forward linear prediction error and the backward linear prediction error delayed by one sample.110) n/]T (2.97) we obtain " f n .k we can write: " xnC1 .k n/ # D " x. Deﬁning xnC1 .k/ n nC1 n.k/ 0. along with (2.k/.n n 0.k/ 1 (2.101) and (2. we get Cn D and.k 1/ By a similar procedure we also ﬁnd bn .n x.k/j ] D E[jbn jCn j Ä 1 The coefﬁcients fCn g are called reﬂection coefﬁcients or partial correlation coefﬁcients (PARCOR).109) We recall the relation for forward and backward linear prediction error ﬁlters of order n: 8 0 0 < f n .k/ D a0 x. noting that E[j f n 2 1 . Its analysis permits us to implement the prediction error ﬁlter via a modular structure.k/ (2.k/ D bn 1 . x. from (2.n (2.106) Finally.146 Chapter 2.96).k/bn 1 .105).105) In other words. from (2.k/bn 1 .k E[j f n 1 .108) D E[jbn 2 1 .k/ x.k n/ D a0 B H x . The Wiener ﬁlter and linear prediction The following relation holds for 1n : 1n 1 D E[ f n Ł 1 .111) : b .k/ C Ð Ð Ð C a0 Ł x.k/ D D fn 0 an 1 0 #T xnC1 .n From (2.113) .k 1/] (2.
The optimum coefﬁcients Cn . Observation 2. the ﬁlter is minimum phase. Initialization. in which the output is given by f N . are veriﬁed. n D 1.1/ C rx . 3. 1.2.log N /2 .0/ C rx . The lattice ﬁlters are quite insensitive to coefﬁcient quantization. N .2 From the above property 2 and (2.3 Here is the stepbystep description.116) 3 Faster algorithms. This property is useful if the ﬁlter length is unknown and must be estimated. Finally. with a complexity proportional to N .114) 2. (2. we ﬁnd that all predictor error ﬁlters are minimum phase.1/ 0 1 D rx .2/ (2. Lattice ﬁlter. 1.10 is obtained. : : : . n D 1.2. 1]T þ0 D rx . therefore one can change N without having to recalculate all the coefﬁcients. N . are independent of the order of the ﬁlter. 2.2 The Delsarte–Genin algorithm In the case of real signals. If the conditions jCn j Ä 1.10. further reduces the number of operations with respect to the LDA. the DGA. : : : .115) (2.1/ D rx .0 the block diagram of Figure 2. taking into account the initial conditions. have been proposed by Kumar [4]. at least for N ½ 10. Linear prediction 147 f0 (k) f1 (k) f m1 (k) fm (k) f N1 (k) fN (k) CN C1 x(k) Cm C* 1 b0 (k) * Cm * CN Tc b1 (k) bm1 (k) Tc bm (k) bN1 (k) Tc bN (k) Figure 2. .k/ and a0 D 1 0.2. also known as the split Levinson algorithm [3]. f 0 .k/ D x.k/ D b0 .0/ þ1 D rx .108). We set v0 D 1 v1 D [1. We list the following fundamental properties.
119) T D rnC1 vn D .k/ y. from a practical point of view. In this case the matrix is only Hermitian and the solution that we are going to illustrate is of the LS type [1. : : : .n C 1// C [vn ]2 .k/g are available. The Wiener ﬁlter and linear prediction 2. Based on the observation of the sequences fx.k/ (2. in which from the estimate of rx we construct R as a Toeplitz correlation matrix. 2. Therefore to get the solution it is necessary to determine estimates of rx and rdx .rx .120) exploits the symmetry of the vector vn .122) (2.124) Jn D þn Cn D 1 We note that (2.þn . in particular it is [vn ]1 D [vn ]nC1 D 1.126) k D 0. K 1 and fd.127) . introducing a new cost function. and various alternatives emerge.rx . according to the least squares method the optimum ﬁlter coefﬁcients yield the minimum of the sum of the squared errors: fcn g. and 2) the covariance method.k/ D d.1. We reconsider the problem of Section 2. K 1 (2. However. : : : .118) (2. N .3 The least squares (LS) method The Wiener ﬁlter will prove to be a powerful analytical tool in various applications.1.1).k/g k D 0.k/g and fd. We compute Þn D .120) ½n D þn þn 1 ½n ½n ½n (2.k/g and of the error e. often only realizations of the processes fx.125) where y.n// C Ð Ð Ð (2. : : : .121) Ä 0 vn n 1 1 0 an D vn ½ (2. one of which is indeed prediction.nD0.117) 2 þn D 2þn Þn þn ½ C Ä vn 1 0 0 vn 1 ½ 0 Þn 4 vn 2 5 0 2 3 (2.k/ is given by (2. Two possible methods are: 1) the autocorrelation method. nth iteration.þn Ä vn D n 1 2 1 n 1/ n 2/ (2.2/ C rx .148 Chapter 2.123) (2. n D 2.1/ C rx . in which we estimate each element of R by (2. 2].:::.N 1 min E (2.130).
where

  E = Σ_{k=N−1}^{K−1} |e(k)|²   (2.128)

Note that in the LS method a time average is substituted for the expectation.

Data windowing

In matrix notation, the relation between input and output can be expressed as

  | y(N−1) |   | x(N−1)  x(N−2)  · · ·  x(0)   | | c_0     |
  | y(N)   | = | x(N)    x(N−1)  · · ·  x(1)   | | c_1     |
  |  ...   |   |  ...     ...    · · ·  ...    | |  ...    |
  | y(K−1) |   | x(K−1)  x(K−2)  · · ·  x(K−N) | | c_{N−1} |   (2.129)

where the L × N matrix, with L = K − N + 1, is the data matrix T. In (2.129) we note that the input data sequence actually used goes from x(0) to x(K−1). Other choices are possible for the input data window; the case examined is called the covariance method.

Matrix formulation

We define

  Φ(i, n) = Σ_{k=N−1}^{K−1} x*(k−i) x(k−n)   i, n = 0, 1, ..., N−1   (2.130)

  ϑ(n) = Σ_{k=N−1}^{K−1} d(k) x*(k−n)   n = 0, 1, ..., N−1   (2.131)

in which the values of Φ(i, n) depend on both indices (i, n) and not only upon their difference. Using (1.478) for an unbiased estimate of the correlation, the following identities hold:

  Φ(i, n) ≃ (K − N + 1) r̂_x(i − n)    ϑ(n) ≃ (K − N + 1) r̂_dx(n)   (2.132), (2.133)

We give some definitions:

1. Energy of {d(k)}:

  E_d = Σ_{k=N−1}^{K−1} |d(k)|²   (2.134)
2. Input autocorrelation matrix:

  Φ = | Φ(0,0)    Φ(0,1)    · · ·  Φ(0,N−1)   |
      | Φ(1,0)    Φ(1,1)    · · ·  Φ(1,N−1)   |
      |  ...       ...       · · ·   ...      |
      | Φ(N−1,0)  Φ(N−1,1)  · · ·  Φ(N−1,N−1) |   (2.135)

3. Crosscorrelation vector between d and x:

  ϑ^T = [ϑ(0), ϑ(1), ..., ϑ(N−1)]   (2.136)

Then the cost function can be written as

  E = E_d − c^H ϑ − ϑ^H c + c^H Φ c   (2.137)

Properties of Φ

1. Φ is the time average of x*(k) x^T(k):

  Φ = Σ_{k=N−1}^{K−1} x*(k) x^T(k)   (2.138)

and can be written as

  Φ = T^H T   (2.139)

with T the input data matrix defined by (2.129). We note that the matrix T is Toeplitz.
2. Φ is Hermitian.
3. Φ is positive semidefinite.
4. The eigenvalues of Φ are real and non-negative.

Determination of the optimum filter coefficients

By analogy with (2.17), the gradient of (2.137) is given by

  ∇_c E = 2 (Φ c − ϑ)   (2.140)

Then the vector of optimum coefficients based on the LS method, c_ls, satisfies the normal equation

  Φ c_ls = ϑ   (2.141)
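As a numerical sketch of (2.129)–(2.141) — our own example data, real-valued signals, N = 2 — the fragment below builds the data matrix T, forms Φ = T^H T and ϑ = T^H d, and solves the normal equation by Cramer's rule. Since d(k) is generated exactly as 0.5 x(k) − 0.25 x(k−1), the LS solution recovers these coefficients.

```python
def ls_coefficients(x, d, N):
    """Covariance-method LS solution of the normal equation (N = 2 sketch)."""
    K = len(x)
    # rows k = N-1, ..., K-1; columns x(k), x(k-1), ..., x(k-N+1)   (2.129)
    T = [[x[k - i] for i in range(N)] for k in range(N - 1, K)]
    dv = d[N - 1:]
    # Phi(i,n) = sum_k x(k-i) x(k-n), theta(n) = sum_k d(k) x(k-n)  (2.130)-(2.131)
    Phi = [[sum(row[i] * row[n] for row in T) for n in range(N)] for i in range(N)]
    theta = [sum(dk * row[n] for dk, row in zip(dv, T)) for n in range(N)]
    # solve the 2x2 normal equation Phi c = theta by Cramer's rule
    det = Phi[0][0] * Phi[1][1] - Phi[0][1] * Phi[1][0]
    c0 = (theta[0] * Phi[1][1] - theta[1] * Phi[0][1]) / det
    c1 = (Phi[0][0] * theta[1] - Phi[1][0] * theta[0]) / det
    return [c0, c1]

x = [1.0, 2.0, -1.0, 3.0, 0.0, 1.0, 2.0]
d = [0.0] + [0.5 * x[k] - 0.25 * x[k - 1] for k in range(1, len(x))]
c_ls = ls_coefficients(x, d, 2)   # ~ [0.5, -0.25]
```

The value d(0) is never used, since the rows of T start at k = N − 1 = 1.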
If Φ^{-1} exists, the solution to (2.141) is given by

  c_ls = Φ^{-1} ϑ   (2.142)

In the solution of the LS problem, the equation (2.141) corresponds to the Wiener–Hopf equation (2.32). As for an ergodic process, (2.132) and (2.133) yield

  (1/(K−N+1)) Φ → R   for K → ∞   (2.143)

  (1/(K−N+1)) ϑ → p   for K → ∞   (2.144)

We find that the LS solution tends toward the Wiener solution for sufficiently large K, that is

  c_ls → c_opt   for K → ∞   (2.145)

In other words, for K → ∞ the covariance method gives the same solution as the autocorrelation method.

2.3.1 The principle of orthogonality

From (2.128), taking the gradient with respect to c_n, we have

  ∇_{c_n} E = ∇_{c_n,I} E + j ∇_{c_n,Q} E
            = Σ_{k=N−1}^{K−1} [−x*(k−n) e(k) − x(k−n) e*(k) + j (−j x(k−n) e*(k) + j x*(k−n) e(k))]   (2.146)
            = −2 Σ_{k=N−1}^{K−1} x*(k−n) e(k)   (2.147)

If we denote by {e_min(k)} the estimation error found with the optimum coefficient values, then the optimum coefficients must satisfy the conditions

  Σ_{k=N−1}^{K−1} e_min(k) x*(k−n) = 0   n = 0, 1, ..., N−1   (2.148)
which represent the time-average version of the statistical orthogonality principle (2.36). Note that for c = c_ls we have

  d(k) = y(k) + e_min(k)

then, being y(k) a linear combination of {x(k−n)}, n = 0, ..., N−1, because of the orthogonality (2.148) we have

  Σ_{k=N−1}^{K−1} e_min(k) y*(k) = 0   (2.149)

Equation (2.149) expresses the fundamental result: the optimum filter output sequence is orthogonal to the minimum estimation error sequence.

Expressions of the minimum cost function

Substituting (2.141) in (2.137), we get

  E_min = E_d − ϑ^H c_ls   (2.150)

An alternative expression of E_min uses the energy of the output sequence:

  E_y = Σ_{k=N−1}^{K−1} |y(k)|² = c_ls^H Φ c_ls   (2.151)

Moreover, because of the orthogonality (2.149) between y and e_min, observing (2.151), it follows that

  E_d = E_y + E_min   (2.152)

from which, substituting (2.141) in (2.151), the minimum cost function can be written as

  E_min = E_d − E_y   with E_y = c_ls^H ϑ   (2.153), (2.154)

The normal equation using the T matrix

Defining the vector of desired samples

  d^T = [d(N−1), d(N), ..., d(K−1)]   (2.155)

from the definition (2.131) of ϑ(n), observing (2.129), we get

  ϑ = T^H d   (2.156)
Thus, the normal equation (2.141) becomes

  T^H T c_ls = T^H d   (2.157)

Associated with system (2.157), it is useful to introduce the system of equations for the minimization of E,

  T c = d   (2.159)

The system of equations (2.159) must be overdetermined, with more equations than unknowns; this requires at least K − N + 1 > N. Moreover, the solution c is unique only if the columns of T are linearly independent, that is the case of nonsingular T^H T. From (2.157), if (T^H T)^{-1} exists, the solution is

  c_ls = (T^H T)^{-1} T^H d   (2.160)

and correspondingly (2.150) becomes

  E_min = d^H d − d^H T (T^H T)^{-1} T^H d   (2.161)

We note how both formulae (2.160) and (2.161) depend only on the desired signal samples and the input samples.

Geometric interpretation: the projection operator

Using (2.129), the vector of filter output samples

  y^T = [y(N−1), y(N), ..., y(K−1)]   (2.162)

can be related to the input data matrix T as

  y = T c   (2.163)

This relation is still valid for c = c_ls; from (2.160) we get

  y = T c_ls = T (T^H T)^{-1} T^H d   (2.164)

The matrix

  O = T (T^H T)^{-1} T^H   (2.165)

can be thought of as a projection operator defined on the space generated by the columns of T. Correspondingly, the estimation error vector is given by e_min = d − y. Let I be the identity matrix; the difference

  O⊥ = I − O = I − T (T^H T)^{-1} T^H   (2.166)

is the complementary projection operator, orthogonal to O.
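A tiny numerical illustration of the projection interpretation, with a single column (N = 1) chosen by us for simplicity: y = Od lies along the column of T, e_min = O⊥d is orthogonal to it, and the energies split as E_d = E_y + E_min.

```python
# Single-column case: O = T (T^H T)^{-1} T^H reduces to scalar arithmetic.
t = [1.0, 2.0, 2.0]          # the single column of T
d = [3.0, 0.0, 3.0]

tt = sum(ti * ti for ti in t)                       # T^H T (a scalar here)
c_ls = sum(ti * di for ti, di in zip(t, d)) / tt    # (T^H T)^{-1} T^H d
y = [c_ls * ti for ti in t]                         # y = O d = T c_ls
e_min = [di - yi for di, yi in zip(d, y)]           # e_min = O_perp d

# orthogonality: sum_k e_min(k) y(k) = 0
dot = sum(ei * yi for ei, yi in zip(e_min, y))
# Pythagorean split of the energies: Ed = Ey + Emin
Ed = sum(di * di for di in d)
Ey = sum(yi * yi for yi in y)
Emin = sum(ei * ei for ei in e_min)
```

Here c_ls = 1, y = [1, 2, 2], e_min = [2, −2, 1], and 18 = 9 + 9.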
Figure 2.11. Relations among the vectors d, y, and e_min in the LS minimization.

In fact, from (2.164),

  y = O d   (2.167)

and from (2.166),

  e_min = d − y = O⊥ d   (2.168)

where e_min ⊥ y (see (2.149)). In Figure 2.11 an example illustrating the relation among d, y, and e_min is given. Moreover, (2.161) can be written as

  E_min = e_min^H e_min = d^H e_min = d^H O⊥ d   (2.169)

2.3.2 Solutions to the LS problem

If the inverse of (T^H T) does not exist, the solution (2.160) must be re-examined. This is what we will do in this section, after taking a closer look at the associated system of equations. In general, let us consider the solutions to a linear system of equations

  T c = d   (2.170)

with T an N × N square matrix. If T is nonsingular, the solution c = T^{-1} d is unique and can be obtained in various ways [5]:

1. If T is triangular and nonsingular, the solution to the system (2.170) can be found by the successive substitutions method with O(N²) operations.
2. In general, one can use the Gauss method, which involves three steps:
   a. Factorization of T,

     T = LU   (2.171)

   with L lower triangular having all ones along the diagonal and U upper triangular.
   b. Solution of the system in z, Lz = d, through the successive substitutions method.
   c. Solution of the system in c, Uc = z, through the successive substitutions method.
   This method requires O(N³) operations and O(N²) memory locations.
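The three steps of the Gauss method in point 2 can be sketched as follows — a minimal Python illustration of our own, without pivoting, so it is adequate only when the leading minors of T are nonzero.

```python
def lu(T):
    """Doolittle factorization T = LU, L with unit diagonal (no pivoting)."""
    n = len(T)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in T]
    for j in range(n):
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]          # assumes U[j][j] != 0
            L[i][j] = m
            U[i] = [U[i][k] - m * U[j][k] for k in range(n)]
    return L, U

def forward_sub(L, d):
    """Solve Lz = d (L unit lower triangular) by successive substitutions."""
    z = []
    for i in range(len(d)):
        z.append(d[i] - sum(L[i][j] * z[j] for j in range(i)))
    return z

def back_sub(U, z):
    """Solve Uc = z (U upper triangular) by successive substitutions."""
    n = len(z)
    c = [0.0] * n
    for i in reversed(range(n)):
        c[i] = (z[i] - sum(U[i][j] * c[j] for j in range(i + 1, n))) / U[i][i]
    return c

L, U = lu([[2.0, 1.0], [4.0, 5.0]])
c = back_sub(U, forward_sub(L, [3.0, 9.0]))   # solves Tc = d, giving [1, 1]
```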
3. If T is Hermitian and nonsingular, the factorization (2.171) becomes the Cholesky decomposition,

  T = L L^H   (2.172)

with L lower triangular having nonzero elements on the diagonal. This method requires O(N³) operations and O(N²) memory locations, about half as many as the Gauss method.
4. If T is Toeplitz and nonsingular, one can use the generalized Schur algorithm with a complexity of O(N²); generally, it is applicable to all structured matrices [6]. We also recall the Kumar fast algorithm [4].

However, if T^{-1} does not exist, e.g. because T is not a square matrix, it is necessary to use alternative methods to solve the system (2.170) [5]: in particular, we will consider the method of the pseudoinverse.

Singular value decomposition (SVD) of T

We have seen in (1.369) how the N × N Hermitian matrix R can be decomposed in terms of a matrix U of eigenvectors and a diagonal matrix of eigenvalues. Now we extend this concept to an arbitrary complex matrix T. First, we state the following result. Given an L × N matrix T of rank R, with R ≤ min(L, N),⁴ two unitary matrices U and V exist, so that

  T = U Σ V^H   (2.173)

with

  Σ = | D  0 |
      | 0  0 |  (L × N)   (2.174)

  U U^H = I  (L × L)    V V^H = I  (N × N)   (2.175), (2.176)

  D = diag(σ_1, σ_2, ..., σ_R)    σ_1 ≥ σ_2 ≥ · · · ≥ σ_R > 0   (2.177)

  U = [u_1, u_2, ..., u_L]  (L × L)    V = [v_1, v_2, ..., v_N]  (N × N)   (2.178), (2.179)

⁴ We will denote the rank of T by rank(T).
Figure 2.12. Singular value decomposition of matrix T.

Again, being U and V unitary, it follows that

  U^H T V = Σ   (2.180)

as illustrated in Figure 2.12. In (2.177) the {σ_i}, i = 1, ..., R, are the singular values of T.

Definition 2.1
The pseudoinverse of T, L × N, of rank R, is given by the matrix

  T# = V Σ# U^H = Σ_{i=1}^{R} σ_i^{-1} v_i u_i^H   (2.181)

where

  Σ# = | D^{-1}  0 |
       | 0       0 |    D^{-1} = diag(σ_1^{-1}, σ_2^{-1}, ..., σ_R^{-1})   (2.182)

We find an expression of T# for the two cases in which T has full rank.

1. Case of an overdetermined system (L > N) and R = N. Note that in this case the system (2.170) has more equations than unknowns. Using the above relations, it can be shown that

  T# = (T^H T)^{-1} T^H   (2.183)

In this case T# d coincides with the solution (2.160).
2. Case of an underdetermined system (L < N) and R = L. Note that in this case there are fewer equations than unknowns; hence there are infinite solutions to the system (2.170). It can be shown that

  T# = T^H (T T^H)^{-1}   (2.184)
Minimum norm solution

Definition 2.2
The solution of a least squares problem is given by the vector

  c_ls = T# d   (2.185)

where T# is the pseudoinverse of T; in other words, it solves the problem of finding the vector c that minimizes the squared error (2.128), E = ||e||² = ||y − d||² = ||Tc − d||², and simultaneously minimizes the norm of the solution, ||c||². The constraint on ||c||² is needed in those cases in which there is more than one vector that minimizes ||Tc − d||².

We list the different cases.

1. If L > N and
   a. rank(T) = N, then

     T# = (T^H T)^{-1} T^H   (2.186)

   and, applying (2.185),

     c_ls = Σ_{i=1}^{N} (v_i^H T^H d / σ_i²) v_i   (2.187)

   is the LS solution of an overdetermined system of equations;
   b. rank(T) = R (also < N), then

     c_ls = Σ_{i=1}^{R} (v_i^H T^H d / σ_i²) v_i   (2.188)

2. If L = N and rank(T) = N, T is nonsingular and T# = T^{-1}.
3. If L < N and
   a. rank(T) = L, then

     T# = T^H (T T^H)^{-1}   (2.189)

   and c_ls is the minimum norm solution of an underdetermined system of equations;
   b. rank(T) = R (also < L), then

     c_ls = Σ_{i=1}^{R} (u_i^H d / σ_i²) T^H u_i   (2.190)

Only the solutions (2.186) and (2.187) coincide with the solution (2.142).

The computation of the pseudoinverse T# directly from the SVD, and the expansion of c_ls in terms of {u_i}, {v_i} and {σ_i²}, have two advantages with respect to the direct computation of T# in the form (2.186) or in the form (2.189).
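Case 3a can be illustrated with a deliberately tiny example of ours: one equation in two unknowns (L = 1 < N = 2, rank(T) = 1), where T# = T^H (TT^H)^{-1} reduces to scalar arithmetic and the pseudoinverse picks, among the infinite solutions of Tc = d, the one of minimum norm.

```python
# Minimum-norm solution of an underdetermined system (case 3a).
T = [3.0, 4.0]        # a single row: one equation, two unknowns
d = 5.0

tt = sum(ti * ti for ti in T)          # T T^H (a scalar here)
c_ls = [ti * d / tt for ti in T]       # c_ls = T^H (T T^H)^{-1} d = [0.6, 0.8]

residual = sum(ti * ci for ti, ci in zip(T, c_ls)) - d   # satisfies T c = d
norm2 = sum(ci * ci for ci in c_ls)    # squared norm = d^2 / (T T^H) = 1
# any other solution, e.g. [5/3, 0], has a larger squared norm (25/9 > 1)
```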
1. The required accuracy in computing T# via the SVD is almost halved with respect to the computation of (T^H T)^{-1} or (T T^H)^{-1}.
2. The SVD also gives the rank of T through the number of nonzero singular values.

There are two algorithms to determine the SVD of T: the Jacobi algorithm and the Householder transformation [7]. We conclude by citing two texts [8, 9], which report examples of realizations of the algorithms described in this section.

Bibliography

[1] S. Haykin, Adaptive filter theory, 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 1996.
[2] M. Honig and D. Messerschmitt, Adaptive filters: structures, algorithms and applications. Boston, MA: Kluwer Academic Publishers, 1984.
[3] P. Delsarte and Y. Genin, "The split Levinson algorithm", IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 34, pp. 470–478, June 1986.
[4] R. Kumar, "A fast algorithm for solving a Toeplitz system of equations", IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 33, pp. 254–267, Feb. 1985.
[5] G. H. Golub and C. F. van Loan, Matrix computations, 2nd ed. Baltimore and London: The Johns Hopkins University Press, 1989.
[6] N. Al-Dhahir and J. M. Cioffi, "Fast computation of channel-estimate based equalizers in packet data transmission", IEEE Trans. on Signal Processing, vol. 43, pp. 2462–2473, Nov. 1995.
[7] W. H. Press, B. P. Flannery, S. A. Teukolsky and W. T. Vetterling, Numerical recipes. New York: Cambridge University Press, 1988.
[8] L. Marple Jr., Digital spectral analysis with applications. Englewood Cliffs, NJ: Prentice-Hall, 1987.
[9] S. M. Kay, Modern spectral estimation: theory and applications. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[10] S. M. Kay, Fundamentals of statistical signal processing: estimation theory. Englewood Cliffs, NJ: Prentice-Hall, 1993.
Appendix 2.A  The estimation problem

The estimation problem for random variables

Let d and x be two r.v.s, somehow related via the function f, that is x = f(d). The estimation problem is the following: on the basis of an observation, let the value of x equal β, determine what the corresponding value of d is. Obviously, if f were known and the inverse function f^{-1} existed, the solution would be trivial. In any case, however, we often know only the joint probability density function of the two r.v.s, p_dx(α, β). Using as estimate of d the function d̂ = h(x), the estimation error is given by

  e = d − d̂   (2.191)

MMSE estimation

Let p_d(α) and p_x(β) be the probability density functions of d and x, respectively, and p_d|x(α | β) the conditional probability density function of d given x = β. We wish to determine the function h that minimizes the mean-square error, that is

  J = E[e²] = ∫∫ [α − h(β)]² p_dx(α, β) dα dβ = ∫ p_x(β) { ∫ [α − h(β)]² p_d|x(α | β) dα } dβ   (2.192)–(2.193)

where the relation p_dx(α, β) = p_x(β) p_d|x(α | β) is used.

Theorem 2.2
The estimator h(β) that minimizes J is given by the expected value of d given x = β:

  h(β) = E[d | x = β]   (2.194)

Proof. The integral (2.193) is minimum when the function

  ∫ [α − h(β)]² p_d|x(α | β) dα   (2.195)

is minimized for every value of β such that p_x(β) ≠ 0. Using the variational method (see Appendix 8.A), we find that this occurs if

  ∫ [α − h(β)] p_d|x(α | β) dα = 0   ∀β   (2.196)
that is, for

  h(β) = ∫ α p_d|x(α | β) dα = ∫ α p_dx(α, β) dα / p_x(β)   (2.197)

from which (2.194) follows.

An alternative to the MMSE criterion for determining d̂ is given by the maximum a posteriori probability (MAP) criterion, which yields

  d̂ = arg max_α p_d|x(α | β)   (2.198)

where the notation arg max is defined in (6.21). If the distribution of d is uniform, the MAP criterion becomes the maximum likelihood (ML) criterion, where

  d̂ = arg max_α p_x|d(β | α)   (2.199)

Examples of both MAP and ML criteria are given in Chapters 6 and 14.

Example 2.A.1
Let d and x be two jointly Gaussian r.v.s with mean values m_d and m_x, respectively, and covariance c = E[(d − m_d)(x − m_x)]. It can be shown that

  h(β) = m_d + (c / σ_x²)(β − m_x)   (2.200)

The corresponding mean-square error is equal to

  J_min = σ_d² − (c / σ_x)²   (2.201)

Example 2.A.2
Let x = d + w, where d and w are two statistically independent r.v.s. For w ∈ N(0, 1) and d ∈ {−1, 1} with P[d = 1] = P[d = −1] = 1/2, after several steps it can be shown that [10]

  h(β) = tanh(β)   (2.202)

Extension to multiple observations

In the case of several observations, x_1 = β_1, ..., x_N = β_N, the estimation of d is obtained by applying the following theorem, whose proof is similar to the case of a single observation.
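Example 2.A.2 can be verified numerically. The sketch below (our own check, not the book's derivation) evaluates the conditional mean E[d | x = β] directly from Bayes' rule for the binary-in-Gaussian-noise model and compares it with tanh(β); the equal priors 1/2 cancel in the ratio.

```python
import math

def gauss(t):
    """Standard Gaussian density, the pdf of the noise w."""
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def h(beta):
    """E[d | x = beta] = sum_a a P[d = a | x = beta] for a in {-1, +1}."""
    num = gauss(beta - 1.0) - gauss(beta + 1.0)   # priors 1/2 cancel
    den = gauss(beta - 1.0) + gauss(beta + 1.0)
    return num / den

# agrees with tanh(beta) for any beta, since
# gauss(b-1)/gauss(b+1) = exp(2b) and (e^{2b}-1)/(e^{2b}+1) = tanh(b)
vals = [(b, h(b), math.tanh(b)) for b in (-2.0, -0.5, 0.0, 1.0, 3.0)]
```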
Theorem 2.3
The estimator of d that minimizes J = E[(d − d̂)²] is given by

  h(β_1, ..., β_N) = E[d | x_1 = β_1, ..., x_N = β_N]
                   = ∫ α p_d|x_1···x_N(α | x_1 = β_1, ..., x_N = β_N) dα
                   = ∫ α p_d,x_1···x_N(α, β_1, ..., β_N) dα / p_x_1···x_N(β_1, ..., β_N)   (2.203)–(2.206)

In the following, to simplify the formulation, we will refer to r.v.s with zero mean.

Example 2.A.3
Let d, x_1, ..., x_N be real-valued jointly Gaussian r.v.s with zero mean. Using the definitions (2.205) and (2.206), it can be shown that

  h(β) = p^T R^{-1} β   (2.207)

and

  J_min = σ_d² − p^T R^{-1} p   (2.208)

In other words, in the case of multiple observations the estimate is a linear combination of the observations.

MMSE linear estimation

For a low complexity of implementation, it is often convenient to consider a linear function h. In the case of real-valued r.v.s, the estimate has the expression

  d̂ = c^T x + b   (2.209)

where b is a constant. Let d and the vector of observations x^T = [x_1, ..., x_N] have zero mean and the following second-order description:
- correlation matrix of the observations, R = E[x x^T];
- crosscorrelation vector, p = E[d x].

Theorem 2.4
Given the vector of observations x, the MMSE linear estimator of d has the following expression:

  d̂ = p^T R^{-1} x   (2.210)
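A numerical sketch of Theorem 2.4 with a model of our own choosing: two noisy looks at the same variable, x_i = d + w_i, with d, w_1, w_2 zero-mean, unit-variance and mutually independent, so that R = [[2, 1], [1, 2]] and p = [1, 1]. The optimum linear estimator turns out to weight both observations equally.

```python
R = [[2.0, 1.0], [1.0, 2.0]]
p = [1.0, 1.0]
sigma_d2 = 1.0

# invert the 2x2 matrix R directly
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
Rinv = [[ R[1][1] / det, -R[0][1] / det],
        [-R[1][0] / det,  R[0][0] / det]]
c_opt = [sum(Rinv[i][j] * p[j] for j in range(2)) for i in range(2)]  # R^{-1} p

J_min = sigma_d2 - sum(p[i] * c_opt[i] for i in range(2))
# c_opt = [1/3, 1/3]: the estimate is d_hat = (x_1 + x_2)/3, with J_min = 1/3,
# smaller than the J_min = 1/2 obtained from a single observation
```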
s.b (2. rd2 x . dT D [d1 .217) (2. modeled as a vector of N r. and of the M ð 1 vector b.216) (2.208). respectively.220) In other words.210) and (2. x2 .218) Rx D E[xŁ xT ] Rd D E[d d ] The problem is to determine a linear transformation of x.211) with (2. d2 . modeled as a vector of M r. linear estimation coincides with optimum MMSE estimation.s. Let x be an observation. copt D R and the corresponding meansquare error is 2 Jmin D ¦d 1 p 1 pT R p (2. let d be the desired vector.213) (2.219) that minimizes the cost function J D E[jjd O djj2 ] D M X mD1 E[jdm O dm j2 ] (2.211) Note that the r. The Wiener ﬁlter and linear prediction In other words.s. the optimum coefﬁcients C and b are the solution of the following problem: min J C. d M ] We introduce the following correlation matrices: rdi x D E[di xŁ ] Rxd D E[x d ] D [rd1 x .v.214) (2. : : : . we note that.212) (2. rd M x ] Ł T H Rdx D E[dŁ xT ] D Rxd (2.v. xT D [x1 . consisting of the N ð M matrix C.v. coincides with the linear function of the observations (2.v. and for a desired vector signal. Observation 2.162 Chapter 2.s.215) (2. in the case of jointly Gaussian r.v.3 Comparing (2.219) Deﬁnition 2.3 The linear minimum meansquare error (LMMSE) estimator. : : : . MMSE linear estimation for random vectors We extend the results of the previous section to the case of complexvalued r.s are assumed to have zero mean.207) and (2. given by O d D CT x C b O such that d is a close replica of d in the meansquare error sense.221) . : : : . x N ] Moreover. Ł T (2.
We now determine the expressions of C and b in terms of the correlation matrices introduced above. First of all, we observe that, if d and x have zero mean, then b = 0. In fact,

  J = E[||d − C^T x − b||²]
    = E[||d − C^T x||²] − 2 Re{E[(d − C^T x)^H b]} + ||b||²
    = E[||d − C^T x||²] + ||b||²   (2.222)–(2.224)

being E[d] = E[x] = 0. Equation (2.224) implies that the choice b = 0 yields the minimum value of J, since any choice b̃ ≠ 0 gives an estimator C^T x + b̃ with a larger value of the cost function. Without loss of generality, we will therefore assume that both x and d are zero-mean random vectors.

Scalar case. For M = 1, d = d_1 and the matrix C becomes a column vector c_1 with N coefficients; hence

  d̂ = d̂_1 = c_1^T x = x^T c_1   (2.225)

In this case the problem (2.221) leads again to the Wiener filter. In fact, in the formulation of Section 2.1 we have x^T = [x(k), x(k−1), ..., x(k−N+1)] and d = d(k); the solution is given by

  R_x c_1 = r_{d_1 x}   (2.226)

where r_{d_1 x} is defined by (2.214).

Vector case. For M > 1, d̂ and d are M-dimensional vectors. Since the cost function (2.220) operates on single components, the vector problem (2.221) leads to M scalar problems, each with input x and output d̂_1, ..., d̂_M, respectively. Therefore the columns of the matrix C, c_m, satisfy equations of the type (2.226),

  R_x c_m = r_{d_m x}   m = 1, ..., M   (2.227)

hence it results in

  C = R_x^{-1} R_xd   (2.228)

Thus, based on the definition (2.215), the optimum estimator in the LMMSE sense is given by

  d̂ = (R_x^{-1} R_xd)^T x   (2.229)
Value of the cost function. Consider the estimation error

  e = d − d̂   (2.230)

with correlation matrix

  R_e = E[e* e^T] = R_d − R_dx C − C^H R_xd + C^H R_x C   (2.231)

The cost function (2.220) is given by the trace of R_e:

  J = tr[R_e]   (2.232)

Substituting (2.228) in (2.232) yields

  J_min = tr[R_d − R_dx R_x^{-1} R_xd]   (2.233)
Chapter 3
Adaptive transversal filters

We reconsider the Wiener filter introduced in Section 2.1. Given two random processes x and d, we want to determine the coefficients of a FIR filter having input x, so that the filter output y is a replica as accurate as possible of the process d. Adopting, for example, the mean-square error criterion, it is required that the autocorrelation matrix R of the filter input vector, and the crosscorrelation p between the desired output and the input vector, be known (see Section 2.1).¹ Estimating these correlations is usually difficult; moreover, the optimum solution requires solving a system of equations with a computational complexity that is at least proportional to the square of the number of filter coefficients. Some caution is therefore necessary in using the equations of an adaptive filter. In this chapter, we develop iterative algorithms with low computational complexity to obtain an approximation of the Wiener solution.

We will consider transversal FIR filters² with N coefficients. In general the coefficients may vary with time. The filter structure at instant k is illustrated in Figure 3.1. We define:

1. Input vector at instant k:

  x^T(k) = [x(k), x(k−1), ..., x(k−N+1)]   (3.1)

2. Coefficient vector at instant k:

  c^T(k) = [c_0(k), c_1(k), ..., c_{N−1}(k)]   (3.2)

The output signal is given by

  y(k) = Σ_{i=0}^{N−1} c_i(k) x(k−i) = x^T(k) c(k)   (3.3)

Comparing y(k) with the desired response d(k), we obtain the estimation error³

  e(k) = d(k) − y(k)   (3.4)

¹ Two estimation methods are presented in Chapter 1.
² For the analysis of IIR adaptive filters we refer the reader to [1, 2].
³ In this chapter the definition of the estimation error is given as the difference between the desired signal and the filter output. Depending on the application, the estimation error may be defined using the opposite sign.
Depending on the cost function associated with {e(k)}, in Chapter 2 two classes of algorithms have been developed:

1. mean-square error (MSE);
2. least squares (LS).

In the following sections we will present iterative algorithms for each of the two classes.

Figure 3.1. Structure of an adaptive transversal filter at instant k.

3.1 Adaptive transversal filter: MSE criterion

The cost function, or functional, to minimize is

  J(k) = E[|e(k)|²]   (3.5)

Assuming that x and d are individually and jointly WSS, analogously to (2.15), J(k) can be written as

  J(k) = σ_d² − c^H(k) p − p^H c(k) + c^H(k) R c(k)   (3.6)

where R and p are defined respectively in (2.16) and (2.17). The optimum Wiener–Hopf solution is c(k) = c_opt, where c_opt is given by (2.34); the corresponding minimum value of J(k) is J_min, given by (2.40).

3.1.1 Steepest descent or gradient algorithm

Our first step is to realize a deterministic iterative procedure to compute c_opt; note, however, that it requires that R and p be known. We will see that this method avoids the computation of the inverse R^{-1}.
The steepest descent or gradient algorithm is defined as

  c(k+1) = c(k) − (1/2) μ ∇_{c(k)} J(k)   (3.7)

where ∇_{c(k)} J(k) denotes the gradient of J(k) with respect to c (see (2.18)), μ is the adaptation gain, a real-valued positive constant, and k is the iteration index, in general not necessarily coinciding with time instants. We recall that the gradient vector for c = c(k) is, from (2.31),

  ∇_{c(k)} J(k) = 2 (R c(k) − p)   (3.8)

hence

  c(k+1) = c(k) − μ (R c(k) − p)   (3.9)

In the scalar case (N = 1), for real-valued signals the above relations become

  J(k) = J_min + r_x(0) (c_0(k) − c_opt,0)²   (3.10)

  ∇_{c_0(k)} J(k) = 2 r_x(0) (c_0(k) − c_opt,0)   (3.11)

and the iterative algorithm is given by

  c_0(k+1) = c_0(k) − μ r_x(0) (c_0(k) − c_opt,0)   (3.12)

The behavior of J and the sign of ∇_{c_0} J(k) are illustrated in Figure 3.2. As J(k) is a quadratic function of the vector of coefficients, in the two-dimensional case (N = 2) the trajectory of the algorithm is as illustrated in Figure 3.3; ∇_{c(k)} J(k) is orthogonal to the locus of points with constant J that includes c(k).

Figure 3.2. Behavior of J and sign of the gradient vector ∇_c in the scalar case (N = 1).
Figure 3.3. Loci of points with constant J and trajectory of ∇_c J in the two-dimensional case (N = 2).

Stability of the steepest descent algorithm

Substituting for p the expression given by (2.32), (3.9) can be written as

  c(k+1) = [I − μR] c(k) + μR c_opt   (3.13)

Starting at k = 0 with an arbitrary c(0), we determine now the conditions for the convergence of c(k) to c_opt. Defining the coefficient error vector as

  Δc(k) = c(k) − c_opt   (3.14)

from (3.13) we obtain

  Δc(k+1) = [I − μR] Δc(k)   (3.15)

or, equivalently, the conditions for the convergence of Δc(k) to 0. Using the decomposition (1.369), R = U Λ U^H, where U is the unitary matrix formed of the eigenvectors of R and Λ is the diagonal matrix of the eigenvalues {λ_i}, and setting (see (2.46))

  ν(k) = U^H Δc(k)   (3.16)

equation (3.15) becomes

  ν(k+1) = [I − μΛ] ν(k)   (3.17)
Conditions for convergence

As Λ is diagonal, the i-th component of the vector ν(k) in (3.17) satisfies the difference equation

  ν_i(k+1) = (1 − μλ_i) ν_i(k)   i = 1, 2, ..., N   (3.18)

hence

  ν_i(k) = (1 − μλ_i)^k ν_i(0)   (3.19)

The i-th component of the vector ν(k) converges, that is

  ν_i(k) → 0   as k → ∞,  ∀ν_i(0)   (3.20)

if and only if

  |1 − μλ_i| < 1   (3.21)

or, equivalently,

  −1 < 1 − μλ_i < 1   (3.22)

As λ_i is positive (see (1.360)), we have that the algorithm converges if and only if

  0 < μ < 2/λ_i   i = 1, 2, ..., N   (3.23)

If λ_max (λ_min) is the largest (smallest) eigenvalue of R, observing (3.23), the convergence condition can be expressed as

  0 < μ < 2/λ_max   (3.24)

The behavior of ν_i(k) as a function of k is given by (3.19) (see Figure 3.4). In the case μλ_i > 1 and |1 − μλ_i| < 1, ν_i(k) is still decreasing in magnitude, but it assumes alternating positive and negative values.

Figure 3.4. Plot of ν_i(k) as a function of k for μλ_i < 1 and |1 − μλ_i| < 1.
Correspondingly, from (3.16) and (3.19) we obtain the expression of the vector of coefficients, given by

  c(k) = c_opt + U ν(k) = c_opt + [u_1, u_2, ..., u_N] ν(k) = c_opt + Σ_{i=1}^{N} u_i (1 − μλ_i)^k ν_i(0)   (3.25)

Therefore, for each coefficient it results in

  c_n(k) = c_opt,n + Σ_{i=1}^{N} u_{i,n} (1 − μλ_i)^k ν_i(0)   n = 0, ..., N−1   (3.26)

where u_{i,n} is the n-th component of u_i. In (3.26) the term u_{i,n} (1 − μλ_i)^k ν_i(0) characterizes the i-th mode of convergence. Note that, if the convergence condition (3.24) is satisfied, each coefficient c_n(k), n = 0, ..., N−1, converges to the optimum solution as a weighted sum of N exponentials, each with the time constant⁴

  τ_i = −1 / ln|1 − μλ_i| ≃ 1/(μλ_i)   i = 1, ..., N   (3.27)

where the approximation is valid if μλ_i ≪ 1.

⁴ The time constant is the number of iterations needed to reduce the signal associated with the i-th mode by a factor e.

Choice of the adaptation gain for fastest convergence

The speed of convergence, which is related to the inverse of the convergence time, depends on the choice of μ. We define as μ_opt the value of μ that minimizes the time constant of the slowest mode. If we let

  ξ(μ) = max_i |1 − μλ_i|   (3.28)

then we need to determine

  min_μ ξ(μ)   (3.29)

As illustrated in Figure 3.5, the solution is obtained for

  1 − μλ_min = −(1 − μλ_max)   (3.30)
from which we get

  μ_opt = 2/(λ_max + λ_min)   (3.31)

and

  ξ(μ_opt) = 1 − 2λ_min/(λ_max + λ_min) = (λ_max − λ_min)/(λ_max + λ_min) = (χ(R) − 1)/(χ(R) + 1)   (3.32)

where χ(R) = λ_max/λ_min is the eigenvalue spread (1.376). We emphasize that ξ(μ_opt) is a monotonic function of the eigenvalue spread (see Figure 3.6); consequently, the larger the eigenvalue spread, the slower the convergence. We note that other values of μ (associated with λ_max or λ_min) cause a slower mode.

Figure 3.5. Plot of ξ and |1 − μλ_i| as a function of μ for different values of λ_i : λ_min < λ_i < λ_max.

Transient behavior of the MSE

From (2.47), the general relation holds:

  J(k) = J_min + Σ_{i=1}^{N} λ_i |ν_i(k)|²   (3.33)

Now, using (3.19),

  J(k) = J_min + Σ_{i=1}^{N} λ_i (1 − μλ_i)^{2k} ν_i²(0)   (3.34)

Consequently, for k → ∞, if the condition for convergence is verified, J(k) → J_min as a weighted sum of exponentials; the i-th mode has a time constant given by

  τ_MSE,i = −1/(2 ln|1 − μλ_i|) ≃ 1/(2μλ_i)   (3.35)
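The modal behavior (3.18)–(3.24) and the choice of μ_opt can be illustrated with a small sketch of ours, using an assumed pair of eigenvalues with spread χ(R) = 4: each decoupled component evolves geometrically, decaying inside the stability range and diverging outside it.

```python
lambdas = [1.0, 4.0]                   # assumed eigenvalues of R: chi(R) = 4
mu_opt = 2.0 / (max(lambdas) + min(lambdas))       # mu_opt = 2/5 = 0.4
xi = max(abs(1.0 - mu_opt * l) for l in lambdas)   # (chi-1)/(chi+1) = 0.6

def modes(mu, k, nu0=1.0):
    """nu_i(k) = (1 - mu*lambda_i)^k nu_i(0), one value per eigenvalue."""
    return [(1.0 - mu * l) ** k * nu0 for l in lambdas]

# inside the stability range 0 < mu < 2/lambda_max = 0.5 both modes decay;
# for mu = 0.6 > 0.5 the mode of lambda_max grows as |1 - 0.6*4| = 1.4 > 1
decaying = modes(mu_opt, 20)
diverging = modes(0.6, 20)
```

At μ = μ_opt both modes shrink with the same factor 0.6 per iteration, which is the fastest the slowest mode can be made.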
We note that (3.34) is different from (3.26) because of the presence of λ_i as weight of the i-th mode: consequently, the modes associated with small eigenvalues tend to weigh less in the convergence of J(k).

In particular, let us examine the two-dimensional case (N = 2), recalling the observation that J(k) increases more rapidly (slowly) in the direction of the eigenvector corresponding to λ = λ_max (λ = λ_min). Let u_λmin and u_λmax be the eigenvectors corresponding to λ_min and λ_max, respectively; choosing μ = μ_opt, the algorithm exhibits monotonic convergence along u_λmin and an oscillatory behavior around the minimum along u_λmax. If no further information is given regarding the initial condition c(0), we have the following two cases.

Case 1, for λ_1 ≪ λ_2. Choosing c(0) on the ν_2 axis (in correspondence of λ_max), we have the following situations:

  if μ < 1/λ_max, the iterative algorithm has a non-oscillatory behavior;
  if μ = 1/λ_max, the iterative algorithm converges in one iteration;
  if μ > 1/λ_max, the iterative algorithm has a trajectory that oscillates around the minimum.   (3.36)

Figure 3.6. ξ(μ_opt) as a function of the eigenvalue spread χ(R).
Case 2, for λ_2 = λ_1. Choosing μ = 1/λ_max, the algorithm converges in one iteration, independently of the initial condition c(0).

3.1.2 The least mean-square (LMS) algorithm

The LMS or stochastic gradient algorithm is an algorithm with low computational complexity that provides an approximation to the optimum Wiener–Hopf solution without requiring knowledge of R and p. Actually, the following instantaneous estimates are used:

  R̂(k) = x*(k) x^T(k)   (3.37)

and

  p̂(k) = d(k) x*(k)   (3.38)

The gradient vector (3.8) is thus estimated to be⁵

  ∇̂_{c(k)} J(k) = −2 d(k) x*(k) + 2 x*(k) x^T(k) c(k)
               = −2 x*(k) [d(k) − x^T(k) c(k)]
               = −2 e(k) x*(k)   (3.39)

The equation for the adaptation of the filter coefficients, where k now denotes a given instant, becomes

  c(k+1) = c(k) − (1/2) μ ∇̂_{c(k)} J(k)   (3.40)

that is

  c(k+1) = c(k) + μ x*(k) [d(k) − x^T(k) c(k)]   (3.41)
         = c(k) + μ e(k) x*(k)   (3.42)

Observation 3.1
The same equation is obtained for a cost function equal to |e(k)|².

⁵ We note that (3.39) represents an unbiased estimate of the gradient (3.8); in general, however, it also exhibits a large variance.

Implementation

The block diagram for the implementation of the LMS algorithm is shown in Figure 3.7, with reference to the following parameters and equations.
[Figure 3.7. Block diagram of an adaptive transversal filter adapted according to the LMS algorithm: the input x(k) feeds a delay line of N taps; each tap output is multiplied by a coefficient cn(k) held in an accumulator (ACC); the tap products are summed to form y(k), which is subtracted from d(k) to give e(k), scaled by μ for the coefficient update.]

Parameters. The required parameters are:
1. N, number of coefficients of the filter;
2. μ, adaptation gain, with 0 < μ < 2/(statistical power of the input vector).

Filter. The filter output is given by

  y(k) = xT(k)c(k)    (3.43)

Estimation error.

  e(k) = d(k) − y(k)    (3.44)

Coefficient vector adaptation.

  c(k + 1) = c(k) + μe(k)x*(k)    (3.45)

Initialization. If no a priori information is available, we set

  c(0) = 0    (3.46)

The accumulators (ACC) in Figure 3.7 are used to memorize the coefficients, which are updated by the current value of μe(k)x*(k).
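The three steps above (filter, error, update) fit in a few lines of code. The following is a minimal NumPy sketch, not taken from the book; the identified system h, the seed, and all parameter values are assumptions for the demo.

```python
import numpy as np

def lms_update(c, x, d, mu):
    """One LMS iteration: filter output, estimation error, coefficient update."""
    y = np.dot(x, c)                  # y(k) = x^T(k) c(k)
    e = d - y                         # e(k) = d(k) - y(k)
    c_new = c + mu * e * np.conj(x)   # c(k+1) = c(k) + mu e(k) x*(k)
    return c_new, y, e

# Demo: identify a short FIR system with unit-power white input (assumed data).
rng = np.random.default_rng(0)
h = np.array([0.5, -0.2, 0.1])        # unknown system, assumed for the demo
N = len(h)
c = np.zeros(N)                       # initialization c(0) = 0
sig = rng.standard_normal(4000)
for k in range(N, len(sig)):
    x = sig[k:k - N:-1]               # input vector [x(k), x(k-1), x(k-2)]
    d = np.dot(h, x)                  # desired response
    c, _, _ = lms_update(c, x, d, mu=0.05)
```

With a noiseless desired signal the coefficients converge to h; choosing μ above 2 divided by the input-vector power makes the recursion diverge, consistent with the bound stated above.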
Computational complexity
For every iteration we have 2N + 1 complex multiplications (N due to filtering and N + 1 due to adaptation) and 2N complex additions. Therefore the LMS algorithm has a complexity of O(N).

Canonical model
The LMS algorithm operates with complex-valued signals and coefficients. We can rewrite the complex-valued quantities as follows.

  Input vector: x(k) = xI(k) + jxQ(k)    (3.47)
  Desired signal: d(k) = dI(k) + jdQ(k)    (3.48)
  Filter output: y(k) = yI(k) + jyQ(k)    (3.49)
  Estimation error: e(k) = eI(k) + jeQ(k)    (3.50)
  Coefficient vector: c(k) = cI(k) + jcQ(k)    (3.51)

Using the above definitions and considering separately real and imaginary terms, we derive the new equations:

  yI(k) = xIT(k)cI(k) − xQT(k)cQ(k)    (3.52)
  yQ(k) = xIT(k)cQ(k) + xQT(k)cI(k)    (3.53)
  eI(k) = dI(k) − yI(k)    (3.54)
  eQ(k) = dQ(k) − yQ(k)    (3.55)
  cI(k + 1) = cI(k) + μ[eI(k)xI(k) + eQ(k)xQ(k)]    (3.56)
  cQ(k + 1) = cQ(k) + μ[eQ(k)xI(k) − eI(k)xQ(k)]    (3.57)

Therefore a complex-valued LMS algorithm is equivalent to a set of two real-valued LMS algorithms with cross-coupling. This scheme is adopted in practice if only processors that use real arithmetic are available.

Conditions for convergence
Recalling that the objective of the LMS algorithm is to approximate the Wiener–Hopf solution, we introduce two criteria for convergence.

1. Convergence of the mean:

  E[c(k)] → copt as k → ∞    (3.58)
  E[e(k)] → 0 as k → ∞    (3.59)
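The equivalence between the complex-valued update and the pair of cross-coupled real-valued updates (3.52)-(3.57) can be checked numerically. This is a hedged sketch with assumed names; running one step in both forms on the same data yields identical coefficients.

```python
import numpy as np

def complex_lms_step(c, x, d, mu):
    """Complex LMS: c(k+1) = c(k) + mu e(k) x*(k), e(k) = d(k) - x^T(k) c(k)."""
    e = d - np.dot(x, c)
    return c + mu * e * np.conj(x), e

def real_lms_step(cI, cQ, xI, xQ, dI, dQ, mu):
    """Equivalent pair of real-valued LMS updates with cross-coupling,
    following the real/imaginary decomposition (3.52)-(3.57)."""
    yI = xI @ cI - xQ @ cQ
    yQ = xI @ cQ + xQ @ cI
    eI, eQ = dI - yI, dQ - yQ
    cI_new = cI + mu * (eI * xI + eQ * xQ)
    cQ_new = cQ + mu * (eQ * xI - eI * xQ)
    return cI_new, cQ_new
```

Comparing cI + j·cQ after the real-arithmetic step with the complex-arithmetic step confirms the cross-coupled structure used when only real processors are available.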
In other words, it is required that the mean of the iterative solution converges to the Wiener–Hopf solution, and that the mean of the estimation error approaches zero. To show the weakness of this criterion, in Figure 3.8 we illustrate the results of a simple experiment for an input x given by a real-valued AR(1) random process:

  x(k) = −a x(k − 1) + w(k),  w(k) ~ N(0, σw²)    (3.60)

where a = 0.95 and σx² = 1 (i.e. σw² = 0.097). For a first-order predictor (N = 1) we adapt the coefficient c(k) according to the LMS algorithm with μ = 0.1. For c(0) = 0 and k ≥ 0, we compute:

1. Predictor output:

  y(k) = c(k)x(k − 1)    (3.61)

2. Prediction error:

  e(k) = d(k) − y(k) = x(k) − y(k)    (3.62)

3. Coefficient update:

  c(k + 1) = c(k) + μe(k)x(k − 1)    (3.63)

[Figure 3.8. Realizations of input {x(k)}, coefficient {c(k)}, and squared error {|e(k)|²} for a one-coefficient predictor (N = 1), adapted according to the LMS algorithm; the ensemble averages E[c(k)] → −a and J(k) = E[|e(k)|²] → Jmin are also shown.]
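A minimal simulation of this experiment, under the stated parameters (a = 0.95, σx² = 1, μ = 0.1), might read as follows; the seed and the run length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
a, mu, K = 0.95, 0.1, 2000
sigma_w = np.sqrt(1 - a ** 2)        # chosen so that sigma_x^2 = 1, as in the text
x = np.zeros(K)
for k in range(1, K):
    x[k] = -a * x[k - 1] + sigma_w * rng.standard_normal()   # AR(1) process (3.60)

c = 0.0                              # c(0) = 0
for k in range(1, K):
    y = c * x[k - 1]                 # predictor output (3.61)
    e = x[k] - y                     # prediction error (3.62)
    c = c + mu * e * x[k - 1]        # LMS update (3.63)
```

A single run shows c(k) fluctuating around the optimum −a = −0.95, while averaging many such runs gives a smooth E[c(k)] curve; these are exactly the two behaviors contrasted in Figure 3.8.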
In Figure 3.8, realizations of the processes {x(k)}, {c(k)} and {|e(k)|²} are illustrated, as well as the mean values E[c(k)] and J(k) = E[|e(k)|²], estimated by averaging over 500 realizations. From the plots in Figure 3.8 we observe two facts:

1. The random processes x and c exhibit a completely different behavior: c follows mean statistical parameters associated with the process x, and not the process itself.
2. We note that the coefficients are obtained by averaging in time the quantity μe(k)x*(k); at convergence, c still exhibits fluctuations around the mean value.

Convergence of the mean is an easily reachable objective; by itself, however, it does not yield the desired results, because the iterative solution c may exhibit very large oscillations around the optimum solution. A constraint on the amplitude of the oscillations must be introduced.

2. Convergence in the mean-square sense:

  E[‖c(k) − copt‖²] → constant as k → ∞    (3.64)
  J(k) = E[|e(k)|²] → J(∞) constant as k → ∞    (3.65)

In other words, both the mean of the coefficient error vector norm and the output mean-square error must be finite. The quantity J(∞) − Jmin = Jex(∞) is the MSE in excess, and represents the price paid for using a random adaptation algorithm for the coefficients rather than a deterministic one, such as the steepest-descent algorithm. Choosing a small μ, the adaptation will be slow and the effect of the gradient noise on the coefficients will be strongly attenuated. In any case, we will see that the ratio Jex(∞)/Jmin can be made small by choosing a small adaptation gain μ.

3.1.3 Convergence analysis of the LMS algorithm

We recall the following definitions.

1. Coefficient error vector:

  Δc(k) = c(k) − copt    (3.66)

2. Optimum filter output error:

  emin(k) = d(k) − xT(k)copt    (3.67)

where copt and Jmin represent the Wiener–Hopf solution.

We also make the following assumptions.

1. The input vectors {x(k)} are statistically independent. It is interesting to observe that this hypothesis corresponds to assuming the filter input vectors xT(k), xT(k − 1), ..., spaced in time, for which they may be considered uncorrelated.
2. The components of the coefficient error vector, transformed according to UH, are orthogonal:

  E[νi(k)νn*(k)] = 0,  i ≠ n,  i, n = 1, ..., N    (3.68)

This assumption is justified by the observation that the same linear transformation orthogonalizes both x(k) and Δc(k).
3. Fourth-order moments can be expressed as products of second-order moments (see (1.368)).
The coefficient error vector transformed according to (3.16) in the gradient algorithm is given by

  ν(k) = [ν1(k), ..., νN(k)]T = UHΔc(k)

The adaptation equation of the LMS algorithm (3.41) can be written as

  Δc(k + 1) = Δc(k) + μx*(k)[d(k) − xT(k)c(k)]    (3.69)

Adding and subtracting xT(k)copt within the square brackets, and observing (3.67), we obtain

  Δc(k + 1) = [I − μx*(k)xT(k)]Δc(k) + μemin(k)x*(k)    (3.70)

We note that Δc(k) depends only on the terms x(k − 1), x(k − 2), ..., emin(k − 1), emin(k − 2), ...; consequently, by Assumption 1, Δc(k) is statistically independent of x(k). Moreover, the cost function⁶

  J(k) = E[|e(k)|²]    (3.71)

with the change of variables (3.33), can be written as

  J(k) = Jmin + Σ_{i=1}^{N} λi E[|νi(k)|²]    (3.72)

Convergence of the mean
Taking the expectation of (3.70) and exploiting the statistical independence between x(k) and Δc(k), we obtain

  E[Δc(k + 1)] = [I − μE[x*(k)xT(k)]]E[Δc(k)] + μE[emin(k)x*(k)]    (3.73)

As E[x*(k)xT(k)] = R, and the second term on the right-hand side of (3.73) vanishes for the orthogonality property (2.97) of the optimum filter, we obtain the same equation as in the case of the steepest-descent algorithm:

  E[Δc(k + 1)] = [I − μR]E[Δc(k)]    (3.74)

⁶ E denotes the expectation with respect to x and c.
Consequently, for the LMS algorithm the convergence of the mean is obtained if

  0 < μ < 2/λmax    (3.75)

Observing (3.25) and (3.32), and choosing the value μ = 2/(λmax + λmin), the vector E[Δc(k)] is reduced at each iteration at least by the factor (λmax − λmin)/(λmax + λmin). We can therefore assume that E[Δc(k)] becomes rapidly negligible with respect to the mean-square error during the process of convergence.

Convergence in the mean-square sense (real scalar case)
The assumption of a filter with real-valued input and only one coefficient c(k) allows us to deduce, by a simple analysis, important properties of the convergence in the mean-square sense. From

  Δc(k + 1) = (1 − μx²(k))Δc(k) + μemin(k)x(k)    (3.76)

where x(k) and Δc(k) are assumed to be statistically independent, and x(k) is orthogonal to emin(k), we get

  E[Δc²(k + 1)] = E[(1 − μx²(k))²]E[Δc²(k)] + μ²E[x²(k)]E[e²min(k)]
               = (1 − 2μrx(0) + μ²E[x⁴(k)])E[Δc²(k)] + μ²rx(0)Jmin    (3.77)

where the cross term vanishes because emin(k) has zero mean and is statistically independent of all other terms. Assuming⁷ moreover E[x⁴(k)] = r²x(0), (3.77) becomes

  E[Δc²(k + 1)] = (1 − 2μrx(0) + μ²r²x(0))E[Δc²(k)] + μ²rx(0)Jmin    (3.78)

Let

  γ = 1 − 2μrx(0) + μ²r²x(0)    (3.79)

whose behavior as a function of μ is given in Figure 3.9. For the convergence of the difference equation (3.78) it must be |γ| < 1; consequently, μ must satisfy the condition

  0 < μ < 2/rx(0)    (3.80)

Moreover, at steady state,

  E[Δc²(∞)] = μ²rx(0)Jmin/(1 − γ) = μJmin/(2 − μrx(0)) ≈ μJmin/2    (3.81)

assuming μrx(0) ≪ 1.

⁷ In other texts the Gaussian assumption is made, whereby E[x⁴(k)] = 3r²x(0); the conclusions of the analysis are similar.
[Figure 3.9. Plot of γ as a function of μ: γ = 1 at μ = 0, reaches its minimum at μ = 1/rx(0), and returns to 1 at μ = 2/rx(0).]

Likewise, from e(k) = emin(k) − x(k)Δc(k), it turns out that

  E[e²(k)] = E[e²min(k)] + E[x²(k)]E[Δc²(k)]    (3.82)

that is

  J(k) = Jmin + rx(0)E[Δc²(k)]    (3.83)

In particular, for k → ∞, we have

  J(∞) ≈ Jmin + rx(0)(μJmin/2) = Jmin(1 + μrx(0)/2)    (3.84)

hence

  Jex(∞) = J(∞) − Jmin ≈ (μ/2)rx(0)Jmin    (3.85)

The relative MSE deviation, or misadjustment, is

  MSD = (J(∞) − Jmin)/Jmin = Jex(∞)/Jmin = μrx(0)/2    (3.86)

Convergence in the mean-square sense (general case)
The convergence theory given here follows the method developed in [3]. With the change of variables (3.16), (3.70) becomes

  ν(k + 1) = [I − μUHx*(k)xT(k)U]ν(k) + μemin(k)UHx*(k)    (3.87)
Let us define

  x~(k) = [x~1(k), ..., x~N(k)]T = UTx(k)    (3.88)

and the N × N matrix

  Φ = x~*(k)x~T(k)    (3.89)

with elements Φ(i, n) = x~i*(k)x~n(k). Then (3.87) becomes

  ν(k + 1) = [I − μΦ]ν(k) + μemin(k)x~*(k)    (3.90)

From (1.368) we get E[Φ] = diag(λ1, ..., λN); hence the components {x~i(k)} are not only orthogonal,

  E[x~i*(k)x~n(k)] = 0,  i ≠ n    (3.91)

but, by Assumption 1, also statistically independent. Recalling Assumption 1 of the convergence analysis, ν(k) is statistically independent of x(k), and consequently of x~(k); moreover, assuming emin(k) independent of x~(k), the cross terms vanish and the correlation matrix of ν* at the instant k + 1 can be expressed as

  E[ν*(k + 1)νT(k + 1)] = E[(I − μΦ*)ν*(k)νT(k)(I − μΦT)] + μ²Jmin E[Φ*]    (3.92)

Recalling Assumption 2 of the convergence analysis, the matrix E[ν*(k)νT(k)] is diagonal, with elements on the main diagonal given by the vector

  ηT(k) = [η1(k), ..., ηN(k)],  ηi(k) = E[|νi(k)|²]    (3.93)

Observing (3.92), the elements with indices (i, i) are given by

  ηi(k + 1) = ηi(k) − 2μλiηi(k) + μ² Σ_{n=1}^{N} E[|x~i(k)|²|x~n(k)|²]ηn(k) + μ²Jminλi    (3.94)

where, recalling Assumption 3 of the convergence analysis,

  E[|x~i(k)|²|x~n(k)|²] = E[|x~i(k)|²]E[|x~n(k)|²] = λiλn    (3.95)

so that

  ηi(k + 1) = ηi(k) − 2μλiηi(k) + μ²λi Σ_{n=1}^{N} λnηn(k) + μ²Jminλi    (3.96)

with

  E[|x~i(k)|²] = λi    (3.97)
Using steps similar to those applied to obtain (3.25), we obtain from (3.96) the vector difference equation

  η(k + 1) = Bη(k) + μ²Jminλ    (3.98)

where λT = [λ1, ..., λN] is the vector of eigenvalues of R, and B is the N × N symmetric positive definite matrix, with positive elements,

  B = [ (1 − μλ1)²   μ²λ1λ2   ...   μ²λ1λN
        μ²λ2λ1   (1 − μλ2)²   ...   μ²λ2λN
        ...
        μ²λNλ1   μ²λNλ2   ...   (1 − μλN)² ]    (3.99)

Using the properties of B, the general decomposition (2.175) becomes

  B = V diag(σ1, ..., σN) VH    (3.100)

where {σi} denote the eigenvalues of B, and V is the unitary matrix formed by the eigenvectors {vi} of B. Using the relation

  N rx(0) = tr[R] = Σ_{i=1}^{N} λi    (3.102)

the solution of the vector difference equation (3.98) is given by

  η(k) = Σ_{i=1}^{N} Ki σi^k vi + [μJmin/(2 − μN rx(0))] 1,  k ≥ 0    (3.103)

where 1 = [1, ..., 1]T, and the constants {Ki} are determined by the initial conditions:

  Ki = viH (η(0) − [μJmin/(2 − μN rx(0))] 1),  i = 1, ..., N    (3.104)

The components of η(0) depend on the choice of c(0), according to

  ηn(0) = E[|νn(0)|²] = E[|unH Δc(0)|²],  n = 1, ..., N    (3.105)

After simple steps, the cost function (3.72) becomes

  J(k) = Jmin + λT η(k)    (3.106)
Substituting the result (3.103) in (3.106), we find

  J(k) = Σ_{i=1}^{N} Ci σi^k + 2Jmin/(2 − μN rx(0))    (3.107)

where

  Ci = Ki λT vi    (3.108)

The first term on the right-hand side of (3.107) describes the convergence behavior of the mean-square error, whereas the second term gives the steady-state value. Therefore, further investigation of the properties of the matrix B will allow us to characterize the transient behavior of J.

Basic results
From the above convergence analysis we will obtain some fundamental properties of the LMS algorithm.

1. The LMS algorithm converges if the adaptation gain μ satisfies the condition

  0 < μ < 2/(statistical power of the input vector)    (3.109)

In fact, the adaptive system is stable and J converges to a constant steady-state value under the conditions

  |σi| < 1,  i = 1, ..., N    (3.110)

From (3.99), the sum of the elements of the i-th row of B satisfies

  Σ_{n=1}^{N} [B]i,n = 1 − μλi(2 − μN rx(0)) < 1    (3.111)

This happens if 0 < μ < 2/(N rx(0)), and a matrix with these properties, whose elements are all positive, has eigenvalues with absolute value less than one. Conversely, if μ does not satisfy this condition, the adaptive system is unstable. Being

  N rx(0) = Σ_{i=0}^{N−1} E[|x(k − i)|²]    (3.112)

the statistical power of the input vector, the condition coincides with (3.109).

2. The transient behavior of J does not exhibit oscillations; this result also follows from the properties of the eigenvalues of B, which, B being symmetric positive definite, are real and positive.
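The role of B in the stability condition can be verified numerically. The sketch below (assumed names and eigenvalues, not from the book) builds B of (3.99) for an example set of eigenvalues of R and checks that its spectral radius crosses 1 exactly at μ = 2/tr(R), consistent with (3.109)-(3.111).

```python
import numpy as np

def B_matrix(lam, mu):
    """Matrix B of (3.99): (1 - mu*lam_i)^2 on the diagonal,
    mu^2 * lam_i * lam_n off the diagonal."""
    lam = np.asarray(lam, dtype=float)
    B = (mu ** 2) * np.outer(lam, lam)
    np.fill_diagonal(B, (1.0 - mu * lam) ** 2)
    return B

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

lam = np.array([0.2, 1.0, 2.8])          # assumed eigenvalues of R (N = 3)
mu_max = 2.0 / lam.sum()                 # bound 2 / tr(R) = 2 / (N rx(0))
rho_below = spectral_radius(B_matrix(lam, 0.9 * mu_max))   # stable regime
rho_above = spectral_radius(B_matrix(lam, 1.1 * mu_max))   # unstable regime
```

Because every row of B sums to 1 − μλi(2 − μN rx(0)), Perron-Frobenius row-sum bounds put the spectral radius below 1 for μ below the bound and above 1 beyond it, regardless of the eigenvalue spread.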
Proof. As shown below, for convergence of the mean it must be

  0 < μ < 2/λmax    (3.113)

but, since

  Σ_{i=1}^{N} λi > λmax    (3.114)

the condition for convergence in the mean-square implies convergence of the mean. In other words, convergence in the mean-square imposes a tighter bound on the allowable values of the adaptation gain μ than that imposed by convergence of the mean. The new bound depends on the number of coefficients and on the power of the input signal, rather than on the eigenvalue distribution of the matrix R. The relation (3.111) can be intuitively explained by noting that, for a given value of μ, an increase in the number of coefficients causes an increase in the excess mean-square error due to fluctuations of the coefficients around the mean value; increasing the number of coefficients without reducing the value of μ leads to instability of the adaptive system.

3. Equation (3.107) reveals a simple relation between the adaptation gain μ and the value of J(k) in the steady state (k → ∞):

  J(∞) = 2Jmin/(2 − μN rx(0))    (3.115)

from which the excess MSE is given by

  Jex(∞) = J(∞) − Jmin = μN rx(0)Jmin/(2 − μN rx(0))    (3.116)

and the misadjustment has the expression

  MSD = Jex(∞)/Jmin ≈ μN rx(0)/2    (3.117)

for μ ≪ 2/(N rx(0)).

Observations
1. A small eigenvalue of the matrix R (λi → 0) determines a large time constant for one of the convergence modes of J, as σi → 1. However, a large time constant of one of the modes implies a low probability that the corresponding term contributes significantly to the mean-square error.

Proof. If λi = 0, the i-th row of B becomes (0, ..., 0, 1, 0, ..., 0); consequently σi = 1 and viT = (0, ..., 0, 1, 0, ..., 0). As λT vi = 0, from (3.108) we get Ci = 0, and the corresponding mode does not contribute to J(k).
Therefore, the convergence of J is less influenced by the eigenvalue spread of R than the convergence of Δc(k) would be. It is generally correct to state that a large eigenvalue spread of R determines a slow convergence of J to the steady state; however, the fact that modes with a large time constant usually contribute to J less than the modes that converge more rapidly mitigates this effect. Moreover, for μ → 0 all the eigenvalues of B tend toward 1.

2. If all eigenvalues of the matrix R are equal,⁸ λi = rx(0), i = 1, ..., N, the maximum eigenvalue of the matrix B is given by

  σimax = 1 − μrx(0)(2 − μN rx(0))    (3.118)

The remaining eigenvalues of B do not influence the transient behavior of J, since Ci = 0, i ≠ imax.

Proof. The Perron–Frobenius theorem affirms that the maximum eigenvalue of a positive matrix B is a positive real number, and that the elements of the corresponding eigenvector are positive real numbers [4]. It is easily verified that σimax given by (3.118) is an eigenvalue of B, and that vimaxT = N^(−1/2)[1, ..., 1] is the corresponding eigenvector. Since all elements of vimax are positive, it follows that σimax is the maximum eigenvalue of B. Moreover, because vimax is parallel to λT, the other eigenvectors of B are orthogonal to λ; hence Ci = 0, i ≠ imax.

3. If all eigenvalues of the matrix R are equal, combining (3.107) with (3.118) and considering a time-varying adaptation gain μ(k), we obtain

  J(k + 1) ≈ [1 − μ(k)rx(0)(2 − μ(k)N rx(0))]J(k) + 2μ(k)rx(0)Jmin    (3.119)

The maximum convergence rate of J is obtained for the adaptation gain

  μopt(k) = (J(k) − Jmin)/(N rx(0)J(k))    (3.120)

As the condition J(k) ≫ Jmin is normally verified at the beginning of the iteration process, it results

  μopt(k) ≈ 1/(N rx(0))    (3.121)

and

  J(k + 1) ≈ (1 − 1/N)J(k)    (3.122)

⁸ This occurs, for example, if the input x is white noise.
We note that the number of iterations required to reduce the value of J(k) by one order of magnitude is approximately 2.3N. Note, however, that the convergence behavior depends on the initial condition c(0).

Final remarks
1. Choosing a small μ results in a slow adaptation and in a small excess MSE at convergence, Jex(∞); for a large μ, instead, the adaptation is fast at the expense of a large Jex(∞).
2. The speed of convergence of E[c(k)] is imposed by λmin, whereas Jex(∞) is determined by the large eigenvalues of R.
3. If the eigenvalue spread of R increases, the convergence of E[c(k)] becomes slower; the convergence of J(k), instead, is less sensitive to this parameter.
4. In steady state, the mean of c(k) corresponds to the optimum vector copt; moreover, recalling Assumption 2 of the convergence analysis, (3.103) indicates that at steady state all elements of η become equal, that is, the filter coefficients are uncorrelated random variables with equal variance.
5. In case the LMS algorithm is used to estimate the coefficients of a system that slowly changes in time, the value of J "in steady state" varies with time and is given by the sum of three terms:

  Jtot(k) = Jmin(k) + Jex(∞) + Jl    (3.123)

where Jmin(k) corresponds to the Wiener–Hopf solution, Jex(∞) depends on the LMS algorithm and is directly proportional to μ, and Jl depends on the ability of the LMS algorithm to track the system variations and expresses the lag error in the estimate of the coefficients; it turns out that Jl is inversely proportional to μ. Consequently, for time-varying systems, μ must be chosen as a compromise between Jex and Jl and cannot be arbitrarily small [5, 7]; in this case the adaptation gain μ has a lower bound larger than 0 [8].
6. The LMS algorithm is easy to implement. The relatively slow convergence is influenced by μ, the number of coefficients, and the eigenvalues of R; in particular, it must be

  0 < μ < 2/(N rx(0)) = 2/(statistical power of the input vector)    (3.124)

3.1.4 Other versions of the LMS algorithm

In the previous section the basic version of the LMS algorithm was presented. We now give a brief introduction to other versions that can be used for various applications.
Leaky LMS
The leaky LMS algorithm is a variant of the LMS algorithm that uses the following adaptation equation:

  c(k + 1) = (1 − μα)c(k) + μe(k)x*(k)    (3.125)

with 0 < α ≪ rx(0). This equation corresponds to the cost function

  J(k) = E[|e(k)|²] + αE[‖c(k)‖²]    (3.126)

where, as usual, e(k) = d(k) − cT(k)x(k). In other words, the cost function includes an additional term proportional to the norm of the vector of coefficients. In steady state we get

  lim_{k→∞} E[c(k)] = (R + αI)^(−1) p    (3.127)

It is interesting to give another interpretation to what has been stated: observing (3.127), the application of the leaky LMS algorithm results in the addition of a small constant α to the terms on the main diagonal of the correlation matrix of the input process; for the same J(∞), one obtains the same result by summing white noise with statistical power α to the input process. Both approaches are useful to make invertible an ill-conditioned matrix R. Therefore, the leaky LMS algorithm is used in cases where the Wiener problem is ill-conditioned and multiple solutions exist. In order not to modify substantially the original Wiener–Hopf solution, it is usually sufficient to choose α two or three orders of magnitude smaller than rx(0).

Sign algorithm
There are adaptation equations that are simpler to implement. Three versions are:⁹

  c(k + 1) = c(k) + μ sgn(e(k)) x*(k)
  c(k + 1) = c(k) + μ e(k) sgn(x*(k))    (3.130)
  c(k + 1) = c(k) + μ sgn(e(k)) sgn(x*(k))

Note that the first version has as objective the minimization of the cost function

  J(k) = E[|e(k)|]    (3.131)

⁹ The sign of a vector of complex-valued elements is defined as follows:

  sgn(x(k)) = [sgn(xI(k)) + j sgn(xQ(k)), ..., sgn(xI(k − N + 1)) + j sgn(xQ(k − N + 1))]    (3.129)
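Hedged sketches of the leaky-LMS update (3.125) and of the three sign versions (3.130), for real-valued signals; the function names and all test data are assumptions.

```python
import numpy as np

def leaky_lms_update(c, x, d, mu, alpha):
    """Leaky LMS (3.125): c(k+1) = (1 - mu*alpha) c(k) + mu e(k) x*(k)."""
    e = d - np.dot(x, c)
    return (1 - mu * alpha) * c + mu * e * np.conj(x)

def sign_lms_update(c, x, d, mu, version=1):
    """The three sign-LMS variants of (3.130), for real-valued signals."""
    e = d - np.dot(x, c)
    if version == 1:
        return c + mu * np.sign(e) * x           # sign of the error
    if version == 2:
        return c + mu * e * np.sign(x)           # sign of the data
    return c + mu * np.sign(e) * np.sign(x)      # sign-sign
```

For a one-tap system with unit-power white input, the leaky update settles on average at (R + αI)^(−1) p rather than the Wiener solution, illustrating the bias introduced by the leakage term.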
Sigmoidal algorithm
As extensions of the algorithms given in (3.130), the following adaptation equations may be considered:

  c(k + 1) = c(k) + μ φ(e(k)) x*(k)
  c(k + 1) = c(k) + μ e(k) φ(x*(k))    (3.132)
  c(k + 1) = c(k) + μ φ(e(k)) φ(x*(k))

where φ(a) is the sigmoidal function (see Figure 3.10) [9]:

  φ(a) = tanh(βa/2) = (1 − e^(−βa))/(1 + e^(−βa))    (3.133)

and β is a positive parameter. There also exists a piecewise linear version of the sigmoidal function, defined as

  φ(a) = −1 for a < −A;  φ(a) = a/A for −A ≤ a ≤ A;  φ(a) = 1 for a > A    (3.134)

where A is a positive parameter.

[Figure 3.10. Sigmoidal function for various values of the parameter β (β = 6, 12, 24, 48).]
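The two nonlinearities (3.133) and (3.134) are straightforward to implement; for example:

```python
import numpy as np

def sigmoid_phi(a, beta):
    """Sigmoidal function (3.133): phi(a) = tanh(beta*a/2)."""
    return np.tanh(beta * a / 2.0)

def piecewise_phi(a, A):
    """Piecewise linear version (3.134): a/A clipped to [-1, 1]."""
    return np.clip(a / A, -1.0, 1.0)
```

For large β the sigmoid approaches the sign function, so the sigmoidal algorithm interpolates between the standard LMS (small β, nearly linear around the origin) and the sign algorithm (large β).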
Normalized LMS
In the LMS algorithm, if some samples x(k) assume large values, the adaptation algorithm is affected by strong noise in the gradient. This problem can be overcome by choosing an adaptation gain μ of the type

  μ(k) = μ~/(p + N M^x(k))    (3.135)

where 0 < μ~ < 2, and M^x(k) is an estimate of the statistical power of x(k), for example

  N M^x(k) = ‖x(k)‖² = Σ_{i=0}^{N−1} |x(k − i)|²    (3.136)

or, alternatively, the iterative estimate (see (1.468))

  M^x(k) = a M^x(k − 1) + (1 − a)|x(k)|²    (3.138)

where 0 < a < 1, with time constant given by

  τ = −1/ln a ≈ 1/(1 − a)    (3.139)

for a ≈ 1. In (3.135), p is a positive parameter that is introduced to avoid the denominator becoming too small; typically

  p ≈ Mx/10    (3.137)

The normalized LMS algorithm has a speed of convergence that is potentially higher than that of the standard algorithm, for uncorrelated as well as correlated input signals [10]. To be able to apply the normalized algorithm, however, some knowledge of the input process is necessary, in order to assign the values of Mx and p so that the adaptation process does not become unstable.

Variable adaptation gain
In the following variants of the LMS algorithm the coefficient μ varies with time: initially, a large value of μ is chosen for fast convergence; subsequently, μ is reduced to achieve a smaller Jex(∞).
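Returning to the normalized LMS: the gain (3.135) combined with the iterative power estimate (3.138) can be sketched as below. The initial power estimate, the parameter values, and the identification data are all assumptions for the demo.

```python
import numpy as np

def nlms_update(c, x, d, Mx_prev, mu_tilde=1.0, a=0.97, p=0.1):
    """Normalized-LMS step with the running power estimate of (3.138):
    Mx(k) = a*Mx(k-1) + (1-a)*|x(k)|^2,  mu(k) = mu_tilde / (p + N*Mx(k))."""
    N = len(x)
    Mx = a * Mx_prev + (1.0 - a) * abs(x[0]) ** 2   # x[0] is the newest sample x(k)
    mu = mu_tilde / (p + N * Mx)
    e = d - np.dot(x, c)
    return c + mu * e * np.conj(x), Mx, e

# Hypothetical identification run with unit-power white input.
rng = np.random.default_rng(1)
h = np.array([0.4, 0.3, -0.2])
N = len(h)
c, Mx = np.zeros(N), 1.0             # Mx initialized to the assumed input power
sig = rng.standard_normal(4000)
for k in range(N, len(sig)):
    x = sig[k:k - N:-1]
    c, Mx, _ = nlms_update(c, x, np.dot(h, x), Mx)
```

Initializing Mx at the assumed input power, rather than at zero, keeps the early gains bounded; this is exactly the prior knowledge of the input process that the text says is needed to avoid instability.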
1. Two values of μ. For a choice of μ of the type

  μ = μ1 for 0 ≤ k ≤ K1 − 1;  μ = μ2 for k ≥ K1    (3.141)

the behavior of J is illustrated in Figure 3.11.

[Figure 3.11. Behavior of J(k) obtained by using two values of μ, with μ2 = μ1/2: after the switch at k = K1, the curve settles toward a steady-state value closer to Jmin.]

2. Decreasing μ. The following expression of μ is used:

  μ(k) = μ1/(μ2 + k),  k ≥ 0    (3.142)

3. μ proportional to e(k). For a time-invariant system, the adaptation gain usually selected for application with the sign algorithm (3.130) is given by

  μ(k + 1) = α1μ(k) + α2|e(k)|²    (3.143)

with μ limited to the range [μmin, μmax]. Typical values are α1 ≈ 1 and α2 ≪ 1.

4. Vector of values of μ. Let μT = [μ0, ..., μ_{N−1}] be a vector of adaptation gains, one per coefficient. Two approaches are possible:

a. Initially, larger values μi are chosen in correspondence of those coefficients ci that have larger amplitude.
b. μi changes with time following the rule

  μi(k + 1) = μi(k)/α, if the i-th component of the gradient has always changed sign in the last m0 iterations;
  μi(k + 1) = μi(k)α, if the i-th component of the gradient has never changed sign in the last m1 iterations;    (3.144)

with μi limited to the range [μmin, μmax]. Typical values are m0, m1 ∈ {1, 3} and α = 2.

LMS for lattice filters
We saw in Section 2.1 that filters with a lattice structure have some interesting properties. The application of the LMS algorithm to lattice filters, however, is not as simple as for transversal filters; for this reason such filters are now rarely used, although they were popular in the past when fast hardware implementations were rather costly. For the study of the LMS algorithm for lattice filters we refer the reader to [11, 12].

3.1.5 Example of application: the predictor

We consider a real AR(2) process of unit power, described by the equation

  x(k) = −a1 x(k − 1) − a2 x(k − 2) + w(k)    (3.145)

with w additive white Gaussian noise (AWGN), and

  a1 = 1.3,  a2 = 0.995    (3.146)

From (1.547), the roots of A(z) are given by ϱe^(±jφ0), where

  ϱ = √a2 = 0.997    (3.147)

and

  φ0 = cos^(−1)(−a1/(2ϱ)) = 2.28 rad    (3.148)

From (2.552) we find that the statistical power of w is given by

  σ²w = [(1 − a2)/(1 + a2)][(1 + a2)² − a1²] = 0.0057 = −22.4 dB    (3.149)

Being rx(0) = σ²x = 1, we expect to find in steady state

  c ≈ −a    (3.150)

We construct a predictor for x of order N = 2, with coefficients cT = [c1, c2], using the LMS algorithm and some of its variants [13].

[Figure 3.12. Predictor of order N = 2.]

The predictor output is given by

  y(k) = cT(k)x(k − 1) = c1(k)x(k − 1) + c2(k)x(k − 2)    (3.151)

with prediction error

  e(k) = x(k) − y(k)    (3.152)

For the predictor of Figure 3.12 we now consider various versions of the adaptive LMS algorithm and their relative performance.

Example 3.1.1 (Standard LMS)
The equation for updating the coefficient vector is

  c(k + 1) = c(k) + μe(k)x(k − 1)    (3.153)

Convergence curves are plotted in Figure 3.13, for μ = 0.04, for a single realization and for the mean (estimated over 500 realizations) of the coefficients and of the squared prediction error. In Figure 3.14 a comparison is made between the curves of convergence of the mean for three values of μ. We observe that, by decreasing μ, the excess error decreases, thus giving a more accurate solution, but the convergence time increases.

Example 3.1.2 (Leaky LMS)
The equation for updating the coefficient vector is

  c(k + 1) = (1 − μα)c(k) + μe(k)x(k − 1)    (3.154)
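A hedged reproduction of the standard-LMS run of Example 3.1.1, with the AR(2) parameters of (3.146) and μ = 0.04; the seed and the run length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
a1, a2 = 1.3, 0.995                              # AR(2) coefficients, (3.146)
sw = np.sqrt((1 - a2) / (1 + a2) * ((1 + a2) ** 2 - a1 ** 2))  # sigma_w, (3.149)
K = 20000
x = np.zeros(K)
for k in range(2, K):                            # generate the AR(2) process (3.145)
    x[k] = -a1 * x[k - 1] - a2 * x[k - 2] + sw * rng.standard_normal()

mu = 0.04
c = np.zeros(2)
for k in range(2, K):
    xv = np.array([x[k - 1], x[k - 2]])          # input vector x(k-1)
    e = x[k] - c @ xv                            # prediction error (3.152)
    c = c + mu * e * xv                          # standard LMS update (3.153)
```

In steady state c should fluctuate around −a = [−1.3, −0.995], as predicted by (3.150), with the residual error power approaching σ²w.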
[Figure 3.13. Convergence curves for the predictor of order N = 2, obtained by the standard LMS algorithm: c1(k) approaches −a1 and c2(k) approaches −a2, while J(k) = E[|e(k)|²] decreases toward σ²w ≈ −22.4 dB.]

[Figure 3.14. Comparison among curves of convergence of the mean obtained by the standard LMS algorithm for three values of μ (μ = 0.01, 0.04, 0.1).]
[Figure 3.15. Convergence curves for the predictor of order N = 2, obtained by the leaky LMS algorithm, for μ = 0.04 and α = 0.01.]

We note that the steady-state values are worse than in the previous case.

Example 3.1.3 (Normalized LMS)
The equation for updating the coefficient vector is

  c(k + 1) = c(k) + μ(k)e(k)x(k − 1)    (3.155)

where the adaptation gain μ(k) is of the type

  μ(k) = μ~/(p + N σ^²x(k)),  k ≥ 0    (3.156)

with the power estimate updated as

  σ^²x(k) = a σ^²x(k − 1) + (1 − a)|x(k)|²    (3.157)

initialized as σ^²x(−1) = (1/2)[|x(−1)|² + |x(−2)|²], and with

  a = 1 − 2^(−5) ≈ 0.97    (3.159)

and

  p = (1/10)E[‖x‖²] = 0.2    (3.160)
[Figure 3.16. Convergence curves for the predictor of order N = 2, obtained by the normalized LMS algorithm.]

Convergence curves are plotted in Figure 3.16 for a single realization and for the mean (estimated over 500 realizations) of the coefficients and of the squared prediction error. We note that, with respect to the standard LMS algorithm, the convergence is considerably faster. A direct comparison of the convergence curves obtained in the previous examples is given in Figure 3.17, for μ = 0.04.

Example 3.1.4 (Sign LMS algorithm)
We consider the three versions of the sign LMS algorithm:

  (1) c(k + 1) = c(k) + μ sgn(e(k)) x(k − 1)
  (2) c(k + 1) = c(k) + μ e(k) sgn(x(k − 1))
  (3) c(k + 1) = c(k) + μ sgn(e(k)) sgn(x(k − 1))

A comparison of the convergence curves obtained by the three versions, for μ = 0.08, is given in Figure 3.18. It turns out that version (2), where the estimation error in the adaptation equation is not quantized, yields the best performance in steady state; version (3), however, yields the fastest convergence. To decrease the prediction error in steady state for versions (1) and (3), the value of μ could be further lowered, at the expense of reducing the speed of convergence.
[Figure 3.17. Comparison of convergence curves for the predictor of order N = 2, obtained by three versions of the LMS algorithm.]

[Figure 3.18. Comparison of convergence curves obtained by the three versions of the sign LMS algorithm: J(k) in dB versus k, with the steady-state level σ²w also shown.]
Observation 3.2
As observed on page 97, for an AR process x, if the order of the predictor is greater than the required minimum, the correlation matrix R is ill-conditioned, with a large eigenvalue spread. Thus the convergence of the LMS prediction algorithm can be extremely slow, and can lead to a solution quite different from the Yule–Walker solution. In this case it is necessary to adopt a method that ensures the stability of the error prediction filter, such as the leaky LMS.

3.2 The recursive least squares (RLS) algorithm

We now consider a recursive algorithm to estimate the vector of coefficients c by an LS method, named the recursive least squares (RLS) algorithm. The RLS algorithm is characterized by a speed of convergence that can be one order of magnitude faster than that of the LMS algorithm, obtained at the expense of a larger computational complexity. With reference to the system illustrated in Figure 3.19, we introduce the following quantities:

1. Coefficient vector at instant k:

  cT(k) = [c0(k), c1(k), ..., c_{N−1}(k)]    (3.161)

2. Input vector at instant i:

  xT(i) = [x(i), x(i − 1), ..., x(i − N + 1)]    (3.162)

3. Filter output signal at instant i, obtained for the vector of coefficients c(k):

  y(i) = xT(i)c(k) = cT(k)x(i)    (3.163)

[Figure 3.19. Reference system for an RLS adaptive algorithm: a transversal filter with coefficients c0(k), ..., c_{N−1}(k), output y(i), and error e(i) = d(i) − y(i).]
4. Desired output at instant i: d(i)                           (3.164)

At instant k, based on the observation of the sequences {x(i)} and {d(i)}, i = 1, 2, …, k, the criterion for the optimization of the vector of coefficients c(k) is

   min over c(k) of  E(k) = Σ_{i=1}^{k} λ^(k−i) |e(i)|²        (3.165)

where the error signal is

   e(i) = d(i) − x^T(i) c(k)                                   (3.166)

Two observations arise:
- This problem is the classical LS problem (2.128), applied to a sequence of prewindowed samples with the exponential weighting factor λ^(k−i).
- λ is a forgetting factor that enables proper filtering operations even with nonstationary signals or slowly time-varying systems. The memory of the algorithm is approximately 1/(1−λ).

Normal equation

Defining

   Φ(k) = Σ_{i=1}^{k} λ^(k−i) x*(i) x^T(i)                     (3.168)

   θ(k) = Σ_{i=1}^{k} λ^(k−i) d(i) x*(i)                       (3.169)

and using the gradient method, the optimum value of c(k) satisfies the normal equation

   Φ(k) c(k) = θ(k)                                            (3.170)

From (3.170), if Φ^(−1)(k) exists, the solution is given by

   c(k) = Φ^(−1)(k) θ(k)                                       (3.171)

E_min(k) denotes the minimum sum of squared errors up to instant k, obtained for the vector of coefficients c(k).
Derivation of the RLS algorithm

To solve the normal equation by the inversion of Φ(k) may be too hard, especially if N is large. Therefore we seek a recursive algorithm for k = 1, 2, …. Both expressions of Φ(k) and θ(k) can be written recursively. From

   Φ(k) = Σ_{i=1}^{k−1} λ^(k−i) x*(i) x^T(i) + x*(k) x^T(k)    (3.172)

it follows that

   Φ(k) = λ Φ(k−1) + x*(k) x^T(k)                              (3.173)

and similarly

   θ(k) = λ θ(k−1) + d(k) x*(k)                                (3.174)

We now recall the following identity, known as the matrix inversion lemma [12]. Let

   A = B^(−1) + C D^(−1) C^H                                   (3.175)

where A, B and D are positive definite matrices. Then

   A^(−1) = B − B C (D + C^H B C)^(−1) C^H B                   (3.176)

For

   A = Φ(k),  B^(−1) = λ Φ(k−1),  C = x*(k),  D = 1            (3.177)

the equation (3.176) becomes

   Φ^(−1)(k) = λ^(−1) Φ^(−1)(k−1)
             − [λ^(−2) Φ^(−1)(k−1) x*(k) x^T(k) Φ^(−1)(k−1)] / [1 + λ^(−1) x^T(k) Φ^(−1)(k−1) x*(k)]   (3.178)

We introduce two quantities:

   P(k) = Φ^(−1)(k)                                            (3.179)

and

   k*(k) = [λ^(−1) P(k−1) x*(k)] / [1 + λ^(−1) x^T(k) P(k−1) x*(k)]   (3.180)

also called the Kalman vector gain. From (3.178) we have the recursive relation

   P(k) = λ^(−1) P(k−1) − λ^(−1) k*(k) x^T(k) P(k−1)           (3.181)
We derive now a simpler expression for k*(k). From (3.180),

   k*(k) [1 + λ^(−1) x^T(k) P(k−1) x*(k)] = λ^(−1) P(k−1) x*(k)          (3.182)

that is

   k*(k) = [λ^(−1) P(k−1) − λ^(−1) k*(k) x^T(k) P(k−1)] x*(k)            (3.183)

Using (3.181), it follows that

   k*(k) = P(k) x*(k)                                                    (3.184)

In other words, k*(k) is the input vector filtered by P(k). From (3.171) and (3.174),

   c(k) = P(k) θ(k) = λ P(k) θ(k−1) + P(k) x*(k) d(k)                    (3.185)

Substituting the recursive expression (3.181) for P(k) in the first term, we get

   c(k) = P(k−1) θ(k−1) − k*(k) x^T(k) P(k−1) θ(k−1) + P(k) x*(k) d(k)   (3.186)
        = c(k−1) − k*(k) x^T(k) c(k−1) + k*(k) d(k)                      (3.187)
        = c(k−1) + k*(k) [d(k) − x^T(k) c(k−1)]                          (3.188)

where in the last step (3.184) has been used. Defining the a priori estimation error

   ε(k) = d(k) − x^T(k) c(k−1)                                           (3.189)

that is computed before updating c(k), the recursive equation to update the estimate of c is given by c(k) = c(k−1) + k*(k) ε(k). We note that x^T(k) c(k−1) is the filter output at instant k obtained by using the old coefficient estimate; we could say that ε(k) is an approximated value of e(k), as distinct from the a posteriori estimation error e(k) = d(k) − x^T(k) c(k).

In summary, the RLS algorithm consists of four equations:

   k*(k) = P(k−1) x*(k) / [λ + x^T(k) P(k−1) x*(k)]                      (3.190)

   ε(k) = d(k) − x^T(k) c(k−1)                                           (3.191)

   c(k) = c(k−1) + k*(k) ε(k)                                            (3.192)

   P(k) = λ^(−1) [P(k−1) − k*(k) x^T(k) P(k−1)]                          (3.193)
In (3.190), k*(k) is the input vector filtered by P(k−1) and normalized by the scalar λ + x^T(k) P(k−1) x*(k); the term x^T(k) P(k−1) x*(k) may be interpreted as the energy of the filtered input. In Table 3.1 we give a version of the RLS algorithm that exploits the fact that P(k), inverse of the Hermitian matrix Φ(k), is Hermitian; hence, defining π*(k) = P(k−1) x*(k), we have x^T(k) P(k−1) = [P(k−1) x*(k)]^H = π^T(k).

Table 3.1  RLS algorithm.

Initialization:
   c(0) = 0
   P(0) = δ^(−1) I
For k = 1, 2, …:
   π*(k) = P(k−1) x*(k)
   k*(k) = π*(k) / [λ + x^T(k) π*(k)]
   ε(k) = d(k) − x^T(k) c(k−1)
   c(k) = c(k−1) + k*(k) ε(k)
   P(k) = λ^(−1) [P(k−1) − k*(k) π^T(k)]

Initialization of the RLS algorithm

We need to assign a value to P(0). We modify the definition of Φ(k) in (3.168) as

   Φ(k) = Σ_{i=1}^{k} λ^(k−i) x*(i) x^T(i) + δ λ^k I,  with δ ≪ 1    (3.194)

so that

   Φ(0) = δ I                                                        (3.195)

   P(0) = δ^(−1) I                                                   (3.196)

Typically

   δ^(−1) = 100 / r_x(0)                                             (3.197)

where r_x(0) is the statistical power of the input signal. This is equivalent to having, for k ≤ 0, an all-zero input with the exception of

   x(−N+1) = (λ^(−N+1) δ)^(1/2)                                      (3.198)
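The four update equations of Table 3.1 can be sketched directly, for real-valued signals. This is a minimal sketch: the system-identification test signal, forgetting factor and δ below are illustrative assumptions.

```python
import numpy as np

def rls(x, d, N, lam=0.99, delta=0.01):
    """Exponentially weighted RLS following (3.190)-(3.193), real case.

    Returns the final coefficient vector and the a priori errors eps(k).
    P(0) = (1/delta) * I as in the initialization above.
    """
    c = np.zeros(N)
    P = np.eye(N) / delta
    eps_hist = np.zeros(len(x))
    for k in range(N, len(x)):
        xk = x[k - N + 1:k + 1][::-1]        # [x(k), x(k-1), ..., x(k-N+1)]
        pi = P @ xk                           # pi(k) = P(k-1) x(k)
        kal = pi / (lam + xk @ pi)            # Kalman gain vector (3.190)
        eps = d[k] - c @ xk                   # a priori error (3.191)
        c = c + kal * eps                     # coefficient update (3.192)
        P = (P - np.outer(kal, xk @ P)) / lam # Riccati update (3.193)
        eps_hist[k] = eps
    return c, eps_hist
```

With a white input and a short unknown FIR system, the coefficients settle close to the true response within a few tens of iterations, consistent with the fast convergence discussed below.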
from (3.k/ c H .k 1/ 1/ ϑ H .k//Ł ] (3.k/xŁ .k/ that is ž.k/ž.204) (3.k/kŁ . (3.k D ½Ed .k/] H xŁ .k/ž.k/ D ½Emin .150).205) .k/ D Ed .k/] (3.k/ D ½Ed .k/ϑ .k/ D x H .k D ½Emin .k/ž.192) we get Emin .199) From the general LS expression (2.200) [½ϑ H .k/xT .k/ D c H .k/ D ž Ł .k/[d Ł .k/xŁ .k/ Then (3.k/ .k 1/c.k/j2 1/ C xT .k/eŁ .k ½ϑ H .k 1/ C jd.201) becomes Emin .201) ϑ H .k/ (3.k We note that.k/e. we get ž.k/ .k 1/ C d Ł .k d Ł .k/d Ł .k/ ϑ H .206) 1/ C ž.k/eŁ .k/ observing (3. the recursive relation is given by Emin . as Emin .k/c.k/ 1/ C ž.k/ (3.k/c.k/eŁ .202 Chapter 3.179).k/ Moreover from (3.k/ (3.k 1/ C jd.171) it follows that ϑ H .k/d Ł .k/][c.k/ is real.203) Finally.k/ 1/ C ž.k/ž.k/ is a real scalar value.184) we obtain 1 Using the expression (3.k/ D k X i D1 ½k i jd. Adaptive transversal ﬁlters Recursive form of E min We set Ed . Emin .i/j2 D ½Ed .k/ D ϑ H .k 1/ C d Ł .k/xŁ .202) .k/c.k/j2 (3.174) and (3.k/ is Hermitian.k/ D ½Emin .k 1/ C d.k/cŁ .k/kŁ .k/ D ½Emin .k/kŁ .184) and (3.k/ž.xT .k/kŁ .k/kŁ .k/ D[ 1 . and recalling that ϑ H .
Convergence of the RLS algorithm

We make some remarks on the convergence of the RLS algorithm.

- The RLS algorithm converges in the mean-square sense in about 2N iterations, independently of the eigenvalue spread of R.
- For k → ∞ there is no excess error and the misadjustment MSD is zero. This is true for λ = 1.
- For λ < 1, the "memory" of the algorithm is approximately 1/(1−λ) samples, and

     MSD = N (1−λ)/(1+λ)                                       (3.207)

- From the above observation it follows that the RLS algorithm for λ < 1 gives origin to noisy estimates.
- On the other hand, the RLS algorithm for λ < 1 can be used for tracking slowly time-varying systems: for 1/(1−λ) much less than the time interval it takes for the input samples to change statistics, the algorithm is capable of "tracking" the changes.

Computational complexity of the RLS algorithm

Exploiting the symmetry of P(k), the computational complexity of the RLS algorithm, expressed as the number of complex multiplications per output sample, is given by

   CC_RLS = 2N² + 4N                                           (3.208)

For a number of K output samples, the direct method (3.171) requires instead

   CC_DIR = N² + N + N³/K                                      (3.209)

We note that, if K ≫ N, the direct method is more convenient. In any case the RLS solution has other advantages:

1. It provides an estimate of the coefficients at each step and not only at the end of the data sequence.
2. It can be numerically more stable than the direct method.

Example of application: the predictor

With reference to the AR(2) process considered in Section 3.1, convergence curves for the RLS algorithm, for λ = 1, are plotted in Figure 3.20 for a single realization and for the mean (estimated over 500 realizations) of the coefficients and of the squared estimation error. We note that a different scale is used for the abscissa as compared to the LMS method.
Figure 3.20. Convergence curves for the predictor of order N = 2, obtained by the RLS algorithm.

3.3 Fast recursive algorithms

As observed in the previous section, the RLS algorithm has the disadvantage of requiring (2N² + 4N) multiplications per iteration. Therefore we will list a few fast algorithms, whose computational complexity increases linearly with N, the number of dimensions of the coefficient vector c(k).

1. Algorithms for transversal filters. Exploiting some properties of the correlation matrix Φ(k), Falconer and Ljung [14] have shown that the recursive equation (3.193) requires only 10(2N+1) multiplications. Cioffi and Kailath [15], with their fast transversal filter (FTF), have further reduced the number of multiplications to 7(2N+1). The fast Kalman algorithm has the same speed of convergence as the RLS, but with a computational complexity comparable to that of the LMS algorithm. The implementation of these algorithms still remains relatively simple; their weak point resides in the sensitivity of the operations to round-off errors in the various coefficients and signals. As a consequence the fast algorithms may become numerically unstable.

2. Algorithms for lattice filters. There are versions of the RLS algorithm for lattice structures, called in the literature recursive least squares lattice (LSL), that have, in addition to a lower computational complexity than the standard RLS form, strong and weak points similar to those already discussed in the case of the LMS algorithm for lattice structures [12, 16].
3. Algorithms for filters based on systolic structures. A particular structure is the QR decomposition-based LSL. The name comes from the use of an orthogonal triangularization process, usually known as QR decomposition, that leads to a systolic-type structure with the following characteristics:
   - high speed of convergence;
   - numerical stability, owing to the QR decomposition and lattice structure;
   - a very efficient and modular structure, which does not require the a priori knowledge of the filter order and is suitable for implementation in very large-scale integration (VLSI) technology.
   For further study on the subject we refer the reader to [17, 18, 19, 20, 21, 22, 23].

3.3.1 Comparison of the various algorithms

In practice the choice of an algorithm must be made bearing in mind some fundamental aspects:
- computational complexity;
- performance in terms of speed of convergence, error in steady state, and tracking capabilities under non-stationary conditions;
- robustness, that is good performance achieved in the presence of a large eigenvalue spread and finite-precision arithmetic [5, 22].

Regarding the computational complexity per output sample, a brief comparison among LMS, RLS and FTF is given in Table 3.2.

Table 3.2  Comparison of three adaptive algorithms in terms of computational complexity.

   algorithm   cost function   multiplications   additions/subtractions   divisions
   LMS         MSE             2N + 1            2N                       0
   RLS         LS              2N² + 7N + 5      2N² + 6N + 4             N² + 4N + 3
   FTF         LS              7(2N + 1)         6(2N + 1)                4

Although the FTF method exhibits a lower computational complexity than the RLS method, its implementation is rather laborious; therefore it is rarely used.

3.4 Block adaptive algorithms in the frequency domain

In this section some algorithms are examined that transform the input signal, for example from the time to the frequency domain, before adaptive filtering. With respect to the LMS algorithm, this approach may exhibit: a) lower computational complexity, or b) improved
convergence properties of the adaptive process. We will first consider some adaptive algorithms in the frequency domain that offer some advantages from the standpoint of computational complexity [24, 25, 26, 27].

3.4.1 Block LMS algorithm in the frequency domain: the basic scheme

The basic scheme includes a filter that performs the equivalent operation of a circular convolution in the frequency domain. As illustrated in Figure 3.21 for N-sample real input vectors, the method operates over blocks of N samples; the instant at which a block is processed is k = nN, where n is an integer number. Each input block is transformed using the DFT (see Section 1.4). The samples of the transformed sequence are denoted by {X_i(nN)}, i = 0, 1, …, N−1. We indicate with {D_i(nN)} and {Y_i(nN)}, i = 0, 1, …, N−1, respectively, the DFT of the desired output and of the adaptive filter output. Defining

   E_i(nN) = D_i(nN) − Y_i(nN)

the LMS adaptation algorithm is expressed as:

   C_i((n+1)N) = C_i(nN) + μ E_i(nN) X_i*(nN),  i = 0, 1, …, N−1        (3.210)

In the following, lower-case letters will be used to indicate sequences in the time domain, while upper-case letters will denote sequences in the frequency domain.

Figure 3.21. Adaptive transversal filter in the frequency domain.

Computational complexity of the block LMS algorithm via FFT

We consider the computational complexity of the scheme of Figure 3.21. The algorithm requires three N-point FFTs and 2N complex multiplications to update {C_i} and compute {Y_i}. As for real data the complexity of an N-point FFT in
terms of complex multiplications is given by

   N-point FFT of N real samples = (N/2)-point FFT + N/2 = (N/4) log₂(N/2) + N/2     (3.211)

then, using the fact that {Y_i} and {C_i}, i = 0, 1, …, N−1, are Hermitian sequences, the algorithm requires a number of complex multiplications per output sample equal to

   CC_LMSf = (3/4) log₂(N/2) + 1                               (3.212)

As each complex multiplication requires four real multiplications, the complexity in terms of real multiplications per output sample becomes

   CC_LMSf = 3 log₂(N/2) + 4                                   (3.213)

We note that the complexity in terms of real multiplications per output sample of the standard LMS algorithm is

   CC_LMSt = 2N + 1 ≃ 2N                                       (3.214)

A comparison between the computational complexity of the LMS algorithm via FFT and the standard LMS algorithm is given in Table 3.3.

Table 3.3  Comparison between the computational complexity of the LMS algorithm via FFT and the standard LMS for various values of the filter length N.

   N        CC_LMSf / CC_LMSt
   16       0.41
   64       0.15
   1024     0.015

We note that the advantage of the LMS algorithm via FFT is non-negligible even for small values of N. However, as the product between DFTs of two time sequences is equivalent to a circular convolution, the direct application of the scheme of Figure 3.21 is appropriate only if the relation between y and x is a circular convolution rather than a linear convolution.

3.4.2 Block LMS algorithm in the frequency domain: the FLMS algorithm

We consider a block LMS adaptive algorithm in the time domain. Let us define:
1. error at instant nN+i

   e(nN+i) = d(nN+i) − y(nN+i)                                  (3.215)

2. input vector at instant k

   x^T(k) = [x(k), x(k−1), …, x(k−N+1)]                         (3.216)

3. coefficient vector at instant nN

   c^T(nN) = [c0(nN), c1(nN), …, c_{N−1}(nN)]                   (3.217)

4. filter output signal at instant nN+i

   y(nN+i) = c^T(nN) x(nN+i),  i = 0, 1, …, N−1                 (3.218)

The equation for updating the coefficients according to the block LMS algorithm is given by

   c((n+1)N) = c(nN) + μ Σ_{i=0}^{N−1} e(nN+i) x*(nN+i)         (3.219)

As in the case of the standard LMS algorithm, the updating term is the estimate of the gradient at instant nN, ∇(nN). The above equations can be efficiently implemented in the frequency domain by the overlap-save technique (see (1.112)). Assuming L-point blocks, where for example L = 2N, we define¹⁰

   C'^T(nN) = DFT[c^T(nN), 0, …, 0]   (N zeros)                 (3.220)

   X'(nN) = diag{DFT[x(nN−N), …, x(nN−1), x(nN), …, x(nN+N−1)]}   (block n−1, block n)   (3.221)

Then the filter output for block n,

   y'(nN) = [y(nN), y(nN+1), …, y(nN+N−1)]^T                    (3.222)

is obtained as

   y'(nN) = last N elements of DFT^(−1)[X'(nN) C'(nN)]          (3.223)

¹⁰ The superscript ' denotes a vector of 2N elements.
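Before moving to the frequency domain, the time-domain block update (3.215)–(3.219) can be sketched as follows; the gradient estimate is accumulated over one block of N samples and the coefficients are updated once per block. The noiseless test system below is an illustrative assumption.

```python
import numpy as np

def block_lms(x, d, N, mu):
    """Time-domain block LMS (3.219): one coefficient update per N samples."""
    c = np.zeros(N)
    y = np.zeros(len(x))
    for n in range(1, len(x) // N):
        grad = np.zeros(N)                    # gradient estimate at instant nN
        for i in range(N):
            k = n * N + i
            xk = x[k - N + 1:k + 1][::-1]     # [x(k), ..., x(k-N+1)]
            y[k] = c @ xk
            grad += (d[k] - y[k]) * xk        # accumulate e(nN+i) x(nN+i)
        c = c + mu * grad                     # block update
    return c, y
```

Since the gradient sums N terms, the effective per-block step is roughly N times larger than in the sample-by-sample LMS, which is why μ must be reduced by a factor N for stability (see the convergence discussion below).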
We give now the equations to update the coefficients in the frequency domain. Let us consider the mth component of the gradient,

   [∇(nN)]_m = Σ_{i=0}^{N−1} e(nN+i) x*(nN+i−m),  m = 0, 1, …, N−1     (3.224)

This component is given by the correlation between the error sequence {e(k)} and the input {x(k)}, which is also equal to the convolution between e(k) and x*(−k). In the following, 0 is the null vector with N elements, 0_{N×N} is the N×N all-zero matrix, I_{N×N} the N×N identity matrix, and F the 2N×2N DFT matrix. The following equations define the fast LMS (FLMS):

   d'^T(nN) = [0^T, d(nN), …, d(nN+N−1)]                              (3.225)

   y'(nN) = [0_{N×N} 0_{N×N}; 0_{N×N} I_{N×N}] F^(−1) X'(nN) C'(nN)   (3.226)

   e'(nN) = d'(nN) − y'(nN)   (errors in block n)                     (3.227)

   E'(nN) = F e'(nN)                                                  (3.228)

   ∇(nN) = first N elements of F^(−1)[X'*(nN) E'(nN)]                 (3.229)

so that, in the frequency domain, the adaptation equation (3.219) becomes

   C'((n+1)N) = C'(nN) + μ F [∇^T(nN), 0^T]^T                         (3.230)

that is

   C'((n+1)N) = C'(nN) + μ F [I_{N×N} 0_{N×N}; 0_{N×N} 0_{N×N}] F^(−1) X'*(nN) E'(nN)   (3.231)

The implementation of the FLMS algorithm is illustrated in Figure 3.22.

Computational complexity of the FLMS algorithm

For N output samples we have to evaluate five 2N-point FFTs and 4N complex multiplications. For real input samples, the complexity in terms of real multiplications per output sample is given by

   CC_FLMS = 10 log₂ N + 8                                            (3.232)

A comparison between the computational complexity of the FLMS algorithm and the standard LMS is given in Table 3.4.
Figure 3.22. Implementation of the FLMS algorithm.
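The scheme of Figure 3.22 can be sketched with FFTs as follows. This is a minimal sketch of the overlap-save FLMS with L = 2N and the gradient constraint of (3.229)–(3.231); the test system below is an illustrative assumption.

```python
import numpy as np

def flms(x, d, N, mu):
    """Fast LMS: block LMS computed with 2N-point FFTs (overlap-save)."""
    C = np.zeros(2 * N, dtype=complex)                  # C'(nN)
    y = np.zeros(len(x))
    for n in range(1, len(x) // N):
        k = n * N
        X = np.fft.fft(x[k - N:k + N])                  # blocks n-1 and n, (3.221)
        yb = np.real(np.fft.ifft(X * C))[N:]            # last N elements, (3.223)
        y[k:k + N] = yb
        e = d[k:k + N] - yb                             # errors in block n
        E = np.fft.fft(np.concatenate([np.zeros(N), e]))     # (3.225)-(3.228)
        grad = np.real(np.fft.ifft(np.conj(X) * E))[:N]      # first N elements, (3.229)
        C = C + mu * np.fft.fft(np.concatenate([grad, np.zeros(N)]))  # (3.230)
    return C, y
```

The first N elements of the inverse DFT of C give the time-domain coefficients; the zero-padding of the gradient is the constraint that makes the circular correlation equal to the linear one.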
Convergence in the mean of the coefficients for the FLMS algorithm

The FLMS algorithm converges in the mean to the same solution of the LMS. Observing (3.217) and (3.218), and taking the expectation of both members of the adaptation equation (3.219), we get

   E[c((n+1)N)] = E[c(nN)] + μN (p − R E[c(nN)]) = (I − μN R) E[c(nN)] + μN p     (3.233)

where, as usual, R = E[x*(k) x^T(k)] and p = r_dx = E[d(k) x*(k)]. Recalling the analysis of the convergence of the steepest-descent algorithm of Section 3.1.1, from these equations we can conclude:

1. lim_{n→∞} E[c((n+1)N)] = R^(−1) p                            (3.234)

   for 0 < μ < 2/(N λ_max), where λ_max is the maximum eigenvalue of R; with respect to the LMS algorithm, μ must be smaller by a factor N in order to guarantee stability.

2. The time constant for the convergence of the ith mode (for μ ≪ 1) is

   τ_i = 1/(μ λ_i N) blocks = 1/(μ λ_i) samples                 (3.235)

   equal to that of the LMS algorithm.

3. It can be seen that the misadjustment is equal to that of the LMS algorithm:

   MSD = (μ/2) tr[R] = (μ/2) N r_x(0)                           (3.236)

Table 3.4  Computational complexity comparison between FLMS and LMS.

   N        CC_FLMS / CC_LMS
   16       1.5
   32       0.85
   64       0.53
   1024     0.05

3.5 LMS algorithm in a transformed domain

We consider now some adaptive algorithms in the frequency domain that offer some advantages in terms of speed of convergence [28].
3.5.1 Basic scheme

Referring to Figure 3.23, we define the following quantities.

1. Input vector at instant k

   x^T(k) = [x(k), x(k−1), …, x(k−N+1)]                        (3.237)

   with correlation matrix

   R_x = E[x*(k) x^T(k)]                                       (3.238)

2. Transformed vector

   z^T(k) = [z_0(k), z_1(k), …, z_{N−1}(k)]                    (3.239)

   z(k) = G x(k)                                               (3.240)

   where G is a unitary matrix of rank N:

   G^(−1) = G^H                                                (3.241)

3. Coefficient vector at instant k: c^T(k) = [c_0(k), c_1(k), …, c_{N−1}(k)]

Figure 3.23. General scheme for a LMS algorithm in a transformed domain.
1.k/ (3..k/ (3. (3. for a suitable choice of ¼.247) lim c.251) .k/xŁ .GŁ Rx G/ DG 1 1 (3.k/ y.k/zT .249) (3.k/xT . The various powers can be estimated. : : : . Let N D diagfE[jz 0 .k/] D GŁ rdx Then copt D .k/ where ¼i D ¼ Q E[jz i .k/zŁ .k C 1/ D c.Rx 1 rdx / where Rx 1 rdx is the optimum Wiener solution without transformation. e. LMS type: ci .k C 1/ D ci .5.k/j2 ].k/z iŁ .k/ D zT . E[jz 1 .243) (3. Filter output signal y. by considering a small window of input samples or recursively.k/j ]g (3. : : : . Q k!1 1 Ł N z . LMS algorithm in a transformed domain 213 4.k/j2 ] (3.244) can be written in vector notation as c.242) 6.3.250) GŁ rdx 1 Rx 1 GŁ GŁ rdx D G H Rx 1 rdx D G H . Estimation error e. E[jz N 2 1 .k/c.k/ C ¼e.k/] D GŁ E[d.k/ C ¼i e.246) Then (3. Equation for updating the coefﬁcients.k/GT ] D GŁ Rx GT and rdz D E[d.k/ D cT .k/ D d.k/ 5.k C 1/ D copt D Rz 1 rdz (3.k/z.k/] D E[GŁ xŁ .g.245) i D 0.k/j2 ].244) We note that each component of the adaptation gain vector has been normalized using the statistical power of the corresponding component of the transformed input vector.248) where Rz D E[zŁ .k/ Q We ﬁnd that. N 1 (3.
On the speed of convergence

The speed of convergence depends on the eigenvalue spread (recalling the definition (1.376)) of the matrix Ξ^(−1) R_z. If R_z is diagonal, then the eigenvalue spread of Ξ^(−1) R_z is equal to one: in this case the adaptation algorithm reduces to N independent scalar adaptation algorithms in the transformed domain, the adaptation gain μ_i is adjusted to the various modes, and the N modes of convergence do not influence each other. Consequently, a transformation with these characteristics exhibits the best convergence properties.

Common choices for G are the following:

1. Karhunen-Loève transform (KLT). With this transformation the components of z(k) are indeed uncorrelated; the KLT, however, depends on R_x, and consequently is difficult to evaluate in real time.
2. DFT and discrete cosine transform (DCT). These two transformations whiten the signal x, with reduced spectral variations, by operating on the different subbands; moreover, they reduce the number of computations to evaluate z(k) in (3.239) from O(N²) to O(N log₂ N).
3. Lower triangular matrix transformation, used in lattice filters.

Observation 3.3
A filter bank can be more effective in separating the various subchannels in frequency; if the resulting signal, with reduced spectral variations, is used for the adaptation process, the convergence of the various modes improves. This procedure, however, is more costly from the point of view of the computational complexity.

3.5.2 Normalized FLMS algorithm

The convergence of the FLMS algorithm can be improved by dividing each component of the vector [X'*(nN) E'(nN)] in (3.231) by the power of the respective component of X'(nN).

3.5.3 LMS algorithm in the frequency domain

In this case

   G = F,  N×N DFT matrix

implemented by either 1) FFT with parallel input, or 2) recursively with serial input to implement the equations

   z_i(k) = Σ_{m=0}^{N−1} x(k−m) e^(−j2π mi/N),  i = 0, 1, …, N−1      (3.252)

or, in a simpler recursive form, observing that

   z_i(k) − e^(−j2π i/N) z_i(k−1) = x(k) − x(k−N)                      (3.253)

we obtain

   z_i(k) = e^(−j2π i/N) z_i(k−1) + x(k) − x(k−N)                      (3.254)

The filters are of passband comb type. In both cases the computational complexity to evaluate the output sample y(k) is O(N log₂ N).
- If {x(k)} and {d(k)} are real-valued signals, the filter coefficients satisfy the Hermitian property: c_i(k) = c*_{N−1−i}(k).
- There are versions of the algorithm where each output z_i(k) is decimated, with the aim of reducing the number of operations.

Figure 3.24. Adaptive filter in the frequency domain.

3.5.4 LMS algorithm in the DCT domain

The LMS algorithm in the DCT domain is obtained by filtering the input by the filter bank of Figure 3.24, where the ith filter has impulse response and transfer function given by, respectively,

   g_i(k) = cos( π(2k+1)i / (2N) ),  k = 0, 1, …, N−1           (3.255)

and

   G_i(z) = Z[g_i(k)]                                           (3.256)

          = cos( πi/(2N) ) (1 − z^(−1)) (1 − (−1)^i z^(−N)) / (1 − 2 cos(πi/N) z^(−1) + z^(−2))   (3.257)
Correspondingly, we have

   z_i(k) = √(2/N) Σ_{m=0}^{N−1} x(k−m) cos( π(2m+1)i / (2N) ),  i = 1, 2, …, N−1   (3.258)

   z_0(k) = (1/√N) Σ_{m=0}^{N−1} x(k−m)                          (3.259)

Ignoring the gain factor cos((π/2N) i), that can be included in the coefficient c_i, even the filtering operation determined by G_i(z) can be implemented recursively [12]. We note that, if all the signals are real, the scheme can be implemented by using real arithmetic.

3.5.5 General observations

- Orthogonalization algorithms are useful if the input has a large eigenvalue spread and fast adaptation is required. In general, however, they require larger computational complexity than the standard LMS.
- If the signals exhibit time-varying statistical parameters, usually these methods do not offer any advantage over the standard LMS algorithm.

3.6 Examples of application

We give now some examples of applications of the algorithms investigated in this chapter [1, 2, 25, 29, 30].

3.6.1 System identification

We want to determine the relation between the input x and the output z of the system illustrated in Figure 3.25. We note that the observation d is affected by additive noise w, assumed statistically independent of x, having zero mean and variance σ_w².

Figure 3.25. System model in which we want to identify the relation between x and z.
Linear case

We analyze the specific case of an unknown linear FIR system whose impulse response has N_h coefficients, h_0, h_1, …, h_{N_h−1}, so that

   d(k) = h_0 x(k) + h_1 x(k−1) + ··· + h_{N_h−1} x(k−(N_h−1)) + w(k)   (3.260)

In this case, the experiment illustrated in Figure 3.26 can be adopted to estimate the filter impulse response. Using an input x(k), known to both systems,¹¹ we determine the output of the transversal filter c with N coefficients,

   y(k) = Σ_{i=0}^{N−1} c_i(k) x(k−i) = c^T(k) x(k)              (3.261)

and the estimation error

   e(k) = d(k) − y(k)                                            (3.262)

The LMS adaptation equation follows,

   c(k+1) = c(k) + μ e(k) x*(k)                                  (3.263)

Assuming N ≥ N_h, we introduce the vector h with N components,

   h^T = [h_0, h_1, …, h_{N_h−1}, 0, …, 0]                       (3.264)

so that d(k) = h^T x(k) + w(k). For N ≥ N_h, and assuming the input x is white noise with statistical power r_x(0), we get

   R = E[x*(k) x^T(k)] = r_x(0) I                                (3.265)

Figure 3.26. Adaptive scheme to estimate the impulse response of the unknown system.

¹¹ Typically x is generated by repeating a PN sequence of length L > N (see Appendix 3.A).
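The identification experiment of Figure 3.26 can be sketched with the LMS update (3.263), for real signals. This is a minimal sketch; the white-noise input, the unknown response h and the noise level below are illustrative assumptions rather than the book's Example 3.6.1 setup.

```python
import numpy as np

def identify_fir_lms(x, d, N, mu):
    """LMS identification of an unknown FIR system: the same input x drives
    the unknown system (observed as the noisy d) and the adaptive filter c."""
    c = np.zeros(N)
    for k in range(N, len(x)):
        xk = x[k - N + 1:k + 1][::-1]     # [x(k), ..., x(k-N+1)]
        e = d[k] - c @ xk                 # estimation error (3.262)
        c = c + mu * e * xk               # LMS update (3.263)
    return c
```

With a white input, R = r_x(0) I and the coefficients converge toward h independently of each other, up to a residual fluctuation set by μ and the noise power, as derived below.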
and

   p = E[d(k) x*(k)] = r_x(0) h                                 (3.266)

Then the Wiener–Hopf solution to the system identification problem is given by

   c_opt = R^(−1) p = h                                         (3.267)

and

   J_min = σ_w²                                                 (3.268)

From (3.267) we see that the noise w does not affect the solution c_opt; consequently, the solution reached by the adaptive LMS algorithm (3.263) for k → ∞ (equal to c_opt) is also not affected by w. The noise, however, influences the convergence process. In any case, if N < N_h, then c_opt in (3.267) coincides with the first N coefficients of h, and

   J_min = σ_w² + r_x(0) ||Δh(∞)||²                             (3.269)

where Δh(∞) = [0, …, 0, h_N, h_{N+1}, …, h_{N_h−1}]^T represents the residual error vector, with Δh(∞) ≠ 0.

Let

   Δc(k) = c(k) − c_opt                                         (3.270)

As the input x is white, we get

   J(k) = E[|e(k)|²] = J_min + r_x(0) E[||Δc(k)||²]             (3.271)

Let a be defined as in (3.79). Then

   E[||Δc(k)||²] = a^k E[||Δc(0)||²] + μ² N r_x(0) J_min (1 − a^k)/(1 − a),  k ≥ 0   (3.272)

The result (3.272) is obtained by (3.70) and the following assumptions:

1. the approximation x^T(k) x*(k) ≃ N r_x(0) holds;
2. Δc(k) is statistically independent of x(k);
3. e_min(k) is orthogonal to x(k).

The larger the power of the noise, the smaller μ must be so that E[||Δc(k)||²] approaches zero.
At convergence, for μ r_x(0) ≪ 1, it results in

   E[||Δc(∞)||²] = μ N J_min / 2                                (3.273)

and

   J(∞) = J_min (1 + μ N r_x(0)/2)                              (3.274)

Indeed, for fixed μ, a faster convergence and a more accurate estimate are obtained by choosing a smaller value of N; this, however, may increase the residual estimation error (3.269).

Example 3.6.1
Consider an unknown system whose impulse response, given in Table 1.4 on page 26 as h^(1), has energy equal to 1. The noise is additive, white, and Gaussian with statistical power σ_w² = 0.01. Identification via standard LMS and RLS adaptive algorithms is obtained using as input a maximal-length PN sequence of length L = 31 and unit power, M_x = 1. For a filter with N = 5 coefficients, the convergence curves of the mean-square error (estimated over 500 realizations) are shown in Figure 3.27. For the LMS algorithm, μ = 0.1 is chosen, which leads to a misadjustment equal to MSD = 0.26. As discussed in Appendix 3.B, as index of the estimate quality we adopt the ratio

   Λ_n = σ_w² / E[||Δh||²]                                      (3.275)

Figure 3.27. Convergence curves of the mean-square error for system identification using LMS and RLS.
where Δh = ĉ − h is the estimate error vector. At convergence, that is for k = 30 in our example, it results:

   Λ_n = 3.9 for LMS,  Λ_n = 7.8 for RLS                        (3.276)

We note that, at convergence, the RLS algorithm usually yields a better estimate than the LMS, even if the input signal is white. However, for systems with a large noise power and/or slow time-varying impulse responses, the two methods tend to give the same performance in terms of speed of convergence and error in steady state. As a result it is usually preferable to adopt the LMS algorithm, as it leads to easier implementation.

Finite alphabet case

Assume a more general, nonlinear relation between z(k) and x(k), given by

   z(k) = g[x(k), x(k−1), x(k−2)] = g(x(k))                     (3.277)

where x(i) ∈ A, a finite alphabet with M elements. Then z(k) assumes values in an alphabet with at most M³ values, which can be identified by a table or random-access memory (RAM) method, as illustrated in Figure 3.28. The cost function to be minimized is expressed as

   E[|e(k)|²] = E[|d(k) − ĝ(x(k))|²]                            (3.278)

and the gradient estimate is given by

   ∇_ĝ |e(k)|² = −2 e(k)                                        (3.279)

Figure 3.28. Adaptive scheme to estimate the input-output relation of a system.
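The table (RAM) identification method introduced above can be sketched as follows. This is a minimal sketch for a memoryless nonlinearity z(k) = g(x(k)); in the general case of (3.277) the RAM would be addressed by the whole input vector. The alphabet, nonlinearity and step size below are illustrative assumptions.

```python
import numpy as np

def identify_table_lms(x, d, mu):
    """RAM identification: each input value addresses one table location,
    whose content is updated by a term proportional to the error."""
    g = {}                                  # RAM, initialized to zero
    for k in range(len(x)):
        addr = float(x[k])                  # RAM address selected by the input
        y = g.get(addr, 0.0)
        e = d[k] - y                        # error for the addressed location
        g[addr] = y + mu * e                # update only that location
    return g
```

Each location behaves as an independent scalar LMS loop, so its content converges to the conditional mean of d given the address, i.e. to g at that input value, averaging out the observation noise.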
k/g is i.284) . and the system output signal is determined on Tc =F0 . so that the ci .k/ with s ? w0 (3.4 In this section and in Appendix 3.A. i D 0.k/ D s. it is convenient to represent the estimate of h determined on Tc =F0 as F0 estimates determined on Tc . Primary input.k/ D N 1 X i D0 1.6.283) 2. d.k/.x/ C w] O (3.281) To complete the identiﬁcation process.k// D g.280) In other words. where the RAM is initialized to zero. Using the polyphase representation (see Section 1. 1. however.k/ C w0 . however.29.k/ identiﬁes a particular RAM location whose content is updated by adding a term proportional to the error. we consider two sensors: 1. In the absence of noise.k/w1 .x.k/ O O (3. if the sequence fx.k// D g. Examples of application 221 Therefore the LMS adaptation equation becomes g.k/ O O k D 0. Often. the value at each RAM location is scaled by the number of updates that have taken place for that location. This is equivalent to considering g. consisting of the noise signal w1 . and ¼ is given by the relative frequency of each address.. N ﬁlter output. x.2 Adaptive cancellation of interfering signals With reference to Figure 3. In practice.k/ D d.k/ D 0 during the entire time interval devoted to system identiﬁcation. and to update the RAM with the values of fd.x.k// C d.x/ D E[g.B. : : : .282) We note that this method is a block version of the LMS algorithm with block length equal to the input sequence.6.k i/ (3. : : : (3.9) of d. it is necessary to access each memory location several times to average out the noise. with s ? w1 . given by y.k/ selects in the average each RAM location the same number of times. so that e. the content of a memory location can be immediately identiﬁed by looking at the output.k// C ¼e. consisting of the desired signal s corrupted by additive noise w0 . We assume that w0 and w1 are in general correlated. 1.x. w1 is ﬁltered by an adaptive ﬁlter with coefﬁcients fci g. 
the input is determined on the domain with sampling period Tc .x.d.3. the observation d and the input x are determined on the same time domain with sampling period Tc . 3. Reference input. Observation 3.k/g. if the RAM is initialized to zero.i. We note that. according to the equation g. An alternative method consists of setting y. the input vector x.
2.z/ (3. Deﬁning the error e.w0 .k/.k/ D d.k/ D d.k/ y. assuming realvalued signals and recalling that s is orthogonal to the noise signals.287) for y. the Wiener–Hopf solution in the ztransform domain is given by (see (2. is the most accurate replica of w0 .k/ c c y.k/ We have two cases.k//2 ] C min E[y 2 . In this case e.29.0/ C min E[.z/ D Pdx .k/ C w0 . for a general input x to the adaptive ﬁlter. Adaptive transversal ﬁlters Figure 3.s.k//2 ] for y.222 Chapter 3.k/.0/ (3.k/] D E[s 2 .k/ D 0.z/ Px .k/ C w0 . w1 and w0 are uncorrelated: min J D E[. w1 and w0 are correlated: min J D rs .k/ and the noise w0 is not cancelled.286) y. 1. In this case e.k/ D w0 .k/ D s.50)) Copt .30.k/] c c (3.k/ (3.285) the cost function.s.k/] C E[.k/.288) D E[. is given by J D E[e2 .k/ D s.289) . General solution With reference to Figure 3.k//2 ] D rs .k/ C w0 .w0 .k//2 ] (3. General conﬁguration of an interference canceller.k/ y.
Specific configuration
Adopting for d and x the model of Figure 3.31, in which w0' and w1' are additive noise signals uncorrelated with w and s, and using Table 1.6, (3.289) becomes

C_opt(z) = P_w(z) H*(1/z*) / [P_w(z) H(z) H*(1/z*) + P_{w1'}(z)]   (3.290)

If w1' ≡ 0, (3.290) becomes

C_opt(z) = 1 / H(z)   (3.291)

Figure 3.31. Specific configuration of an interference canceller.
3.6.3 Cancellation of a sinusoidal interferer with known frequency
Let

d(k) = s(k) + A cos(2π f0 k Tc + φ0)   (3.292)

where s is the desired signal and the sinusoidal term is the interferer. As shown in Figure 3.32, we take as reference signals

x1(k) = B cos(2π f0 k Tc + φ)   (3.293)

x2(k) = B sin(2π f0 k Tc + φ)   (3.294)

We note that in this case x2 can be obtained as a delayed version of x1; it is easy to see that x2 is obtained from x1 via a Hilbert filter (see Figure 1.28). The adaptation equations of the LMS algorithm are

c1(k+1) = c1(k) + μ e(k) x1(k)   (3.295)

c2(k+1) = c2(k) + μ e(k) x2(k)   (3.296)

At convergence, the two coefficients c1 and c2 change the amplitude and phase of the reference signal to cancel the interfering tone. The relation between d and the output e corresponds to a notch filter, as illustrated in Figure 3.33.

Figure 3.32. Configuration to cancel a sinusoidal interferer of known frequency.

3.6.4 Disturbance cancellation for speech signals
With reference to Figure 3.34, the primary signal is a speech waveform affected by interference signals such as echoes and/or environmental disturbances.
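The two-coefficient tone canceller described above can be sketched as follows; the desired-signal model, the interferer amplitude and phase, and all parameter values are illustrative assumptions.

```python
import math

def tone_canceller(d, f0Tc, B=1.0, mu=0.01):
    """Two-coefficient LMS canceller for a sinusoidal interferer of known
    frequency: c1 and c2 adjust amplitude and phase of the quadrature
    references to null the tone; e(k) is the notch-filter output."""
    c1 = c2 = 0.0
    e_out = []
    for k in range(len(d)):
        x1 = B * math.cos(2 * math.pi * f0Tc * k)   # in-phase reference
        x2 = B * math.sin(2 * math.pi * f0Tc * k)   # quadrature reference
        y = c1 * x1 + c2 * x2                       # synthesized interferer replica
        e = d[k] - y
        c1 += mu * e * x1                           # LMS updates on both coefficients
        c2 += mu * e * x2
        e_out.append(e)
    return e_out

K = 8000
f0Tc = 0.11                                             # normalized tone frequency
s = [math.sin(2 * math.pi * 0.013 * k) for k in range(K)]  # desired low-frequency signal
d = [s[k] + 2.0 * math.cos(2 * math.pi * f0Tc * k + 0.7) for k in range(K)]
e = tone_canceller(d, f0Tc)
res = sum((e[k] - s[k]) ** 2 for k in range(K - 1000, K)) / 1000  # residual tone power
```

Since the references contain only the frequency f0, the desired signal at a different frequency passes through the notch essentially undistorted.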
The reference signal consists of a replica of the disturbances. At convergence, the adaptive filter output will attempt to subtract from the primary signal the interference, which is correlated with the reference signal. The output signal is then a replica of the speech waveform, obtained by removing to the best possible extent the disturbances from the input signal.

Figure 3.33. Frequency response of a notch filter.

Figure 3.34. Disturbance cancellation for speech signals.

3.6.5 Echo cancellation in subscriber loops
With reference to the simplified scheme of Figure 3.35, the speech signal of user A is transmitted over a transmission line consisting of a pair of wires (local loop) [31] to the central office A, where the signals in the two directions of transmission, i.e. the signal transmitted by user A and the signal received from user B, are separated by a device called a hybrid. A similar situation takes place at the central office B, with the roles of the signals A and B reversed. Because of impedance mismatch, the hybrids give origin to echo signals that are added to the desired speech signals. For speech waveforms, the echo of signal A that is generated at the hybrid A can be ignored, because it is not perceived by the human ear; the case of digital transmission is different, as will be discussed in Chapter 16.

A method to remove echo signals is illustrated in Figure 3.36, where y is a replica of the echo. At convergence, e will consist of the speech signal B only.

Figure 3.35. Transmission between two users in the public network.

Figure 3.36. Configuration to remove the echo of signal A caused by the hybrid B.

3.6.6 Adaptive antenna arrays
In radio systems, to equalize the desired signal and remove interference, discriminating them through their angle of arrival, it is convenient to use several sensors, i.e. an antenna array, with the task of filtering signals in space. A general scheme for wideband signals is illustrated in Figure 3.37; the signals of the array are then equalized to compensate for the linear distortion introduced by the radio channel. For narrowband signals, it is sufficient to substitute for each sensor filter a single complex-valued coefficient [32, 33] (see Section 8.18).
Figure 3.37. Antenna array to filter and equalize wideband radio signals.

3.6.7 Cancellation of a periodic interfering signal
For the cancellation of a periodic interfering signal we can use the scheme of Figure 3.38, where:

ž we note the absence of an external reference signal: the reference signal is generated by delaying the primary input;
ž a delay Δ = D Tc, where D is an integer, is needed to decorrelate the desired component of the primary signal from that of the reference signal, otherwise part of the desired signal would also be cancelled.

On the other hand, to cancel a wideband interferer from a periodic signal it is sufficient to take the output of the adaptive filter (see Figure 3.39).
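The delay-based scheme just described (an adaptive predictor on the delayed primary input) can be sketched as follows; the delay D, filter length N, step size and signal models are all illustrative choices.

```python
import math, random

def line_enhancer(d, D=8, N=16, mu=0.005):
    """Adaptive cancellation of a periodic interferer: the reference is the
    primary input delayed by D samples, so the N-tap predictor can only
    track the periodic (long-correlation) component. The filter output y
    tracks the periodic interferer; the error e = d - y retains the
    wideband component."""
    c = [0.0] * N
    y_out, e_out = [], []
    for k in range(len(d)):
        x = [d[k - D - i] if k - D - i >= 0 else 0.0 for i in range(N)]
        y = sum(ci * xi for ci, xi in zip(c, x))       # prediction of the periodic part
        e = d[k] - y
        c = [ci + mu * e * xi for ci, xi in zip(c, x)]  # LMS update
        y_out.append(y)
        e_out.append(e)
    return y_out, e_out

random.seed(3)
K = 20000
tone = [math.cos(2 * math.pi * 0.05 * k) for k in range(K)]  # periodic interferer
wide = [random.gauss(0.0, 0.5) for _ in range(K)]            # wideband desired signal
d = [wide[k] + tone[k] for k in range(K)]
y, e = line_enhancer(d)
tail = range(K - 4000, K)
tone_in_e = sum((e[k] - wide[k]) ** 2 for k in tail) / 4000  # residual tone power in e
```

The delay D decorrelates the wideband components of primary and reference, so only the periodic interferer remains predictable; swapping which output is taken (e or y) selects which of the two signals is treated as desired.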
Figure 3.38. Scheme to remove a periodic interferer from a wideband desired signal.

Figure 3.39. Scheme to remove a wideband interferer from a periodic desired signal.

Figure 3.40. Scheme to remove a sinusoidal interferer from a wideband signal.
Exploiting the general concept described above, an alternative scheme to that of Figure 3.32 is illustrated in Figure 3.40, where the knowledge of the frequency of the interfering signal is not required. Note that in both schemes the adaptive filter acts as a predictor; observing (1.555), if the wideband signal can be modeled as white noise, then D = 1. In general, for a sinusoidal interferer a second-order predictor is sufficient. However, for D > 1 the scheme of Figure 3.40 requires many more than two coefficients; hence it has a higher implementation complexity than the scheme of Figure 3.32.

Bibliography

[1] J. R. Treichler, C. R. Johnson Jr., and M. G. Larimore, Theory and design of adaptive filters. New York: John Wiley & Sons, 1987.
[2] J. J. Shynk, "Adaptive IIR filtering", IEEE ASSP Magazine, vol. 6, pp. 4-21, Apr. 1989.
[3] G. Ungerboeck, "Theory on the speed of convergence in adaptive equalizers for digital communication", IBM Journal of Research and Development, vol. 16, pp. 546-555, Nov. 1972.
[4] G. H. Golub and C. F. van Loan, Matrix computations. 2nd ed. Baltimore and London: The Johns Hopkins University Press, 1989.
[5] S. Haykin, Neural networks: a comprehensive foundation. New York: Macmillan Publishing Company, 1994.
[6] E. Eweda, "Comparison of RLS, LMS and sign algorithms for tracking randomly time varying channels", IEEE Trans. on Signal Processing, vol. 42, pp. 2937-2944, Nov. 1994.
[7] W. A. Gardner, "Nonstationary learning characteristics of the LMS algorithm", IEEE Trans. on Circuits and Systems, vol. 34, pp. 1199-1207, Oct. 1987.
[8] V. Solo, "The limiting behavior of LMS", IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 37, pp. 1909-1922, Dec. 1989.
[9] S. H. Ardalan and S. T. Alexander, "Fixed-point roundoff error analysis of the exponentially windowed RLS algorithm for time varying systems", IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 35, pp. 770-783, June 1987.
[10] S. C. Douglas, "A family of normalized LMS algorithms", IEEE Signal Processing Letters, vol. 1, pp. 49-51, Mar. 1994.
[11] B. Porat and T. Kailath, "Normalized lattice algorithms for least-squares FIR system identification", IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 31, pp. 122-128, Feb. 1983.
[19] P. on Signal Processing. Adaptive ﬁlters. 3rd ed. on Signal Processing. “Adaptive frequency sampling ﬁlters”. IEEE Trans. on Acoustics. algorithms and applications. IEEE Trans. 34. Zeidler. M. T. May 1993. 43. [17] J. [21] J.N / complexity in adaptive parameter estimation”. 28. on Signal Processing. 631–653. and J. 1781–1806. Englewood Cliffs. “A normalized frequency domain LMS adaptive algorithm”. [14] D. G.S.230 Chapter 3. on Acoustics. [22] Z. Grant. 1439–1446. O. vol. Ciofﬁ. June 1993. Adaptive ﬁlters: structures. 34. Adaptive ﬁlter theory. Speech and Signal Processing. IEEE Proceedings. 20–30. 1993. L. [25] C. J. 814–820. vol. 1985. Regalia. 1584–1588. vol. L. vol. Kailath. Speech and Signal Processing. MA: Kluwer Academic Publishers. M. “Fast. IEEE Trans. 837–845. Liu. IEEE Trans. vol. 1990. and W. June 1986. vol. NJ: PrenticeHall. Feintuch. R. Adaptive transversal ﬁlters [12] S. “Numerical stability properties of a QRbased fast least squares algorithm”. Mar. vol. IEEE Trans. on Information Theory. Dec. July 1987. on Communications. vol. Bucklew. Manolakis. “Performance analysis of LMS adaptive prediction ﬁlters”. pp. ICASSP. pp. Haykin. [26] N. Messerschmitt. “The fast adaptive rotor’s RLS algorithm”. D. A. Englewood Cliffs. L. “High speed systolic implementation of fast QR adaptive ﬁlters”. [15] J. [23] J. Anderson. pp. NJ: PrenticeHall. “A method for recursive leastsquares ﬁltering based upon an inverse QR decomposition”. 78. Sethares. Proakis. 2096–2109. Boston. G. IEEE Trans. pp. 34. Ghirnikar. vol. Ling. Speech and Signal Processing. N. 304– 337. vol. D. Widrow. pp. pp. T. Jan. Ciofﬁ and T. Honig and D. 1990. pp. F. 1984. M. pp. A. 1978. 41. Cowan and P. “Fundamental relations between the LMS algorithm and the DFT”. recursiveleastsquares transversal ﬁlter for adaptive ﬁltering”. “Application of fast Kalman estimation to adaptive equalization”. [13] J. [27] R. [18] F. Aug. pp. G. 38. June 1981. vol. [24] B. 1986. Ljung. [16] M. vol. IEEE Trans. 
1988. Kurtz. Falconer and L. on Acoustics. on Circuits and Systems. pp. Apr. 452– 461. Ciofﬁ. “Numerically robust leastsquares latticeladder algorithm with direct updating of the reﬂection coefﬁcients”. “Weak convergence and local stability properties of ﬁxed step size recursive algorithms”. pp. pp. 524–543. IEEE Trans. on Acoustics. Bershad and P. 41. on Circuits and Systems. 1995. IEEE Trans. in Proc. 966–978. Oct. IEEE Trans. 32. M. IEEE Trans. Alexander and A.. [20] S. 39. 720–729. 1984. “QR methods of O. pp. P. 26. Bitmead and B. Speech and Signal Processing. Apr. 1996. . A.
1994. 1996. J. MA: Kluwer Academic Publishers. 1962. Macchi. pp. [41] R. 1995. Sequence design for communications applications. 1983. Mar. pp. on Information Theory. San Francisco: HoldenDay. June 1981. Gupta and A. IEEE Trans. “Phase shift pulse codes with good periodic correlation properties”. 474–484. 28. [42] R. [38] R. vol. Mar. “Maximal recursive sequences with 3valued recursive crosscorrelation functions”. “Echo cancellation in speech and data transmission”. IBM Journal of Research and Development. on Acoustics. 562–576. pp. G. New York: John Wiley & Sons. Jenkins. pp. 1967. Gold.. 1968. 1967. A. 29. 13. J. Chu. 1984. Golomb. pp. F. IEEE Journal on Selected Areas in Communications. Ziemer. July 1972. Darnell. [31] D. Apr. Shift register sequences.. A. Gold. Englewood Cliffs. vol. 154–155. Messerschmitt and E. Oct. 420–426. Stearns. Speech and Signal Processing. pp. “Optimal binary sequences for spread spectrum multiplexing”. and D. Bibliography 231 [28] D. 34. L. W. Fan and M. Frank and S. Milewsky. IEEE Trans. “Adaptive antenna array for weak interfering signals”. [33] L. . L. Zadoff. [40] R. 2nd ed. Jan. pp. vol. 426–431. on Information Theory. IEEE Trans. on Information Theory. pp. [30] O. [35] S. [43] S. Senne. NJ: PrenticeHall. Widrow and S. Ksienski. vol. Taunton: Research Studies Press. Adaptive processing: the LMS approach with applications in transmission. 1985. 1989. Marshall. 283–297. 14. IRE Trans. Introduction to spread spectrum communications. L. A. [36] P. vol. IEEE Trans. W. on Information Theory. “Polyphase codes with good periodic correlation properties”. Horowitz and K. G. vol. Lee. Sept. 36. Englewood Cliffs. “Performance advantage of complex LMS for controlling narrowband adaptive array”. on Circuits and Systems. D. IEEE Trans. 18. L. 531–532. vol. and J. [32] I. 27. [29] B. [37] D. IEEE Trans. Borth. Digital communication. [39] A. NJ: PrenticeHall. Oct.. vol. 62–73. D. 
“Periodic sequences with optimal properties for channel estimation and fast startup equalization”. IEEE Trans. Messerschmitt. C. Feb.3. “The use of orthogonal transforms for improving performance of adaptive ﬁlters”. 8. “Efﬁcient least squares FIR system identiﬁcation”. Murphy. vol. 1995. [34] D. E. Adaptive signal processing. pp. 1981. Peterson. 1986. 619–621. on Antennas Propag. 2. R. vol. on Circuits and Systems. E. K. Boston. pp. Marple Jr. 381–382.
D. I. Dec. Nagumo and A. Aug. “A learning method for system identiﬁcation”. AT&T Bell Laboratories Technical Journal. pp. 371–378. vol. 138. “Distortion analysis on measuring the impulse response of a system using a crosscorrelation method”. pp. 12.232 Chapter 3. [46] N. N. 1984. [45] S. 282–287. 1991. Adaptive transversal ﬁlters [44] J. Benvenuto. 2171–2192. “Least sum of squared errors (LSSE) channel estimation”. Crozier. A. Noda. 63. D. IEEE Trans. IEE ProceedingsF. vol. on Automatic Control. pp. Falconer. . vol. Mahmoud. and S. June 1967.
Appendix 3.A PN sequences

In this Appendix we introduce three classes of deterministic periodic sequences having spectral characteristics similar to those of a white noise signal, hence the name pseudonoise (PN) sequences.

Maximal-length sequences
Maximal-length sequences, also called r-sequences, are binary PN sequences, p(ℓ) ∈ {0, 1}, that are generated recursively, e.g. using a shift register (see page 877), and have period equal to L = 2^r − 1. Let {p(ℓ)}, ℓ = 0, 1, ..., L−1, be the values assumed by the sequence in a period. It can be shown that maximal-length sequences enjoy the following properties [34, 35].

ž The number of bits equal to "1" in a period is 2^(r−1), and the number of bits equal to "0" is 2^(r−1) − 1.
ž Every nonzero sequence of r bits appears exactly once in each period; therefore all binary sequences of r bits, except the all-zero sequence, are generated.
ž The relative frequency of any nonzero subsequence of length i ≤ r is

2^(r−i) / (2^r − 1) ≃ 2^(−i)   (3.297)

and the relative frequency of a subsequence of length i < r with all bits equal to zero is

(2^(r−i) − 1) / (2^r − 1) ≃ 2^(−i)   (3.298)

In both formulae the approximation is valid for a sufficiently large r. A subsequence is intended here as a set of consecutive bits of the r-sequence.
ž The sum of two r-sequences, which are generated by the same shift register but with different initial conditions, is still an r-sequence.
ž The linear span, which determines the predictability of a sequence, is equal to r [36]; in other words, the elements of a sequence can be determined by any 2r consecutive elements of the sequence itself, while the generator can be recovered by a recursive algorithm (see, e.g., the Berlekamp-Massey algorithm on page 891).

A practical example is given in Figure 3.41 for a sequence with L = 15 (r = 4), which is generated by the recursive equation

p(ℓ) = p(ℓ−3) ⊕ p(ℓ−4)   (3.299)

where ⊕ denotes the modulo-2 sum.
Obviously, the all-zero initial condition must be avoided. Assuming initial conditions p(−1) = p(−2) = p(−3) = p(−4) = 1, applying (3.299) we obtain the sequence

1 0 0 0 1 0 0 1 1 0 1 0 1 1 1 ...   (3.300)

To generate sequences with a larger period L we refer to Table 3.5. Mapping "0" to "−1" and "1" to "+1", we get the following correlation properties.

1. Mean

(1/L) Σ_{ℓ=0}^{L−1} p(ℓ) = 1/L   (3.301)

2. Correlation (periodic of period L)

r_p(n) = (1/L) Σ_{ℓ=0}^{L−1} p(ℓ) p*((ℓ−n) mod L) = 1 for (n) mod L = 0, −1/L otherwise   (3.302)

3. Spectral density (periodic of period L)

P_p(m/(L Tc)) = Tc Σ_{n=0}^{L−1} r_p(n) e^(−j2πmn/L) = Tc/L for (m) mod L = 0, Tc (1 + 1/L) otherwise   (3.303)

The above properties make an r-sequence, even if deterministic and periodic, appear as a random i.i.d. sequence from the point of view of the relative frequency of subsequences of bits. It turns out that an r-sequence appears as random i.i.d. also from the point of view of the autocorrelation function.

Figure 3.41. Generation of a PN sequence with period L = 15.
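The recursion (3.299) and the correlation properties above can be checked with a short script; the shift-register implementation below is a sketch, with the all-ones initial state of the example.

```python
def msequence(taps, r):
    """Generate one period (L = 2**r - 1 bits) of a maximal-length sequence
    from the recursion p(l) = p(l - t1) xor p(l - t2) ..., with taps=(3, 4)
    reproducing (3.299)."""
    state = [1] * r                    # state[t-1] holds p(l - t); all-ones start
    out = []
    for _ in range(2 ** r - 1):
        bit = 0
        for t in taps:
            bit ^= state[t - 1]
        out.append(bit)
        state = [bit] + state[:-1]     # shift the register
    return out

def periodic_autocorr(p, n):
    """Periodic autocorrelation (3.302) of the sequence mapped to +/-1."""
    L = len(p)
    q = [2 * b - 1 for b in p]         # map "0" -> -1, "1" -> +1
    return sum(q[l] * q[(l - n) % L] for l in range(L)) / L

p = msequence((3, 4), 4)               # the L = 15 sequence of Figure 3.41
```

The checks below confirm the balance property (eight "1"s, seven "0"s) and the two-valued autocorrelation {1, −1/L}.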
`/ D p. L 1 1 (3.` 15/ 13/ ý p.`/ D p.` 19/ 14/ ý p.` 16/ 11/ ý p.`/ D p.`/ D p.`C1/ L ej It can be shown that.` p.3.` p.`/ D M³ `2 L ` D 0.`/ D p.` 14/ ý p.`/ D p.` 3/ ý p.`/ D p.` 14/ ý p.` 3/ ý p.`/ D p.` p.n/mod L 6D 0.` p.` 12/ ý p. for L even for L odd p.` 7/ ý p.` 13/ ý p.` 1 2/ 3/ 4/ 5/ 6/ 7/ 3/ ý p. The CAZAC sequences are deﬁned as. these sequences have the following properties.`/ D p. for different values of r.` 17/ 18/ 17/ ý p.` 2/ ý p.` p.` 5/ ý p.`/ D p.` p.305) M³ `.` 9/ ý p.` p.A. .` 2/ ý p.` p.` 2/ ý p.` 20/ 18/ ý p.` 9/ 10/ 11/ 10/ ý p. 1.` p.`/ D e j p.` p.` 6/ ý p. L ` D 0. 38.` p.` 11/ ý p.`/ D p.` p.` 5/ ý p.` 12/ ý p.` p.` Period L D 2r 1/ 1/ ý p.304) (3.`/ D p.` 14/ ý p.` p.` 11/ ý p.` p.` 2/ ý p.` p.`/ D p.`/ D p. in both cases.n/ equal to zero for .`/ D p. Let L and M be two integer numbers that are relatively prime. : : : . r 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 p.`/ D p.`/ D p. Because of these characteristics they are also called polyphase sequences [37. 39].` p. : : : .` 12/ 13/ 14/ 4/ ý p. PN sequences 235 Table 3.`/ D p.`/ D p.` 17/ ý p.` p.` p.` 1/ ý p.5 Recursive equations to generate PN sequences of length L D 2r 1.`/ D p. 1.` 8/ CAZAC sequences The constant amplitude zero autocorrelation (CAZAC) sequences are complexvalued PN sequences with constant amplitude (assuming values on the unit circle) and autocorrelation function r p .` 11/ ý p.
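The CAZAC construction above can be sketched and its zero-autocorrelation property verified numerically; the lengths L = 16 and L = 31 and the choice M = 1 below are arbitrary (any L, M relatively prime work).

```python
import cmath

def cazac(L, M=1):
    """CAZAC (polyphase) sequence: constant amplitude on the unit circle,
    zero periodic autocorrelation, for L and M relatively prime. The even
    case uses the exponent M*pi*l^2/L, the odd case M*pi*l*(l+1)/L."""
    if L % 2 == 0:
        return [cmath.exp(1j * cmath.pi * M * l * l / L) for l in range(L)]
    return [cmath.exp(1j * cmath.pi * M * l * (l + 1) / L) for l in range(L)]

def periodic_autocorr(p, n):
    """r_p(n) = (1/L) sum_l p(l) p*((l - n) mod L)."""
    L = len(p)
    return sum(p[l] * p[(l - n) % L].conjugate() for l in range(L)) / L

p = cazac(16)          # an even-length example
```

All samples lie on the unit circle, and the autocorrelation is exactly 1 at zero lag and 0 elsewhere, which is what makes these sequences attractive for channel estimation.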
In both cases, these sequences have the following properties.

1. Mean

(1/L) Σ_{ℓ=0}^{L−1} p(ℓ) = 0   (3.306)

2. Correlation (periodic of period L)

r_p(n) = 1 for (n) mod L = 0, 0 otherwise   (3.307)

3. Spectral density (periodic of period L)

P_p(m/(L Tc)) = Tc   (3.308)

Gold sequences
In a large number of applications, as for example in spread-spectrum systems with code-division multiple access (see Chapter 10), sets of sequences having one or both of the following properties [40] are required:

ž each sequence of the set must be easily distinguishable from its own time-shifted versions;
ž each sequence of the set must be easily distinguishable from any other sequence of the set and from its time-shifted versions.

In general, the cross-correlation sequence (CCS) between two r-sequences may assume three, four, or maybe even a greater number of values. An important class of periodic binary sequences that satisfy these properties is the set of Gold sequences [41, 42]. We show now the construction of a pair of r-sequences, called preferred r-sequences [36], whose CCS assumes only three values.

Construction of pairs of preferred r-sequences. Let a = {a(ℓ)} be an r-sequence with period L = 2^r − 1. We now define another r-sequence of length L = 2^r − 1, obtained from the sequence a by decimation by a factor M, that is

b = {b(ℓ)} = {a((Mℓ) mod L)}

We make the following assumptions:

ž r mod 4 ≠ 0, that is, r must be odd or equal to an odd multiple of 2 (r mod 4 = 2);
ž the factor M satisfies one of the following properties:

M = 2^k + 1   (3.309)

or

M = 2^(2k) − 2^k + 1,  k integer   (3.310)
ž For k determined as in (3.309) or (3.310), defining g.c.d.(r, k) as the greatest common divisor of r and k, let

e = g.c.d.(r, k) = 1 for r odd, 2 for r mod 4 = 2   (3.311)

Then the CCS between the two r-sequences a and b,

r_ab(n) = (1/L) Σ_{ℓ=0}^{L−1} a(ℓ) b*((ℓ−n) mod L)

assumes only three values [35, 36]:

r_ab(n) = (1/L) × { −1 + 2^((r+e)/2) (value assumed 2^(r−e−1) + 2^((r−e−2)/2) times); −1 (value assumed 2^r − 2^(r−e) − 1 times); −1 − 2^((r+e)/2) (value assumed 2^(r−e−1) − 2^((r−e−2)/2) times) }   (3.312)

Example 3.A.1 (Construction of a pair of preferred r-sequences)
Let the following r-sequence of period L = 2^5 − 1 = 31 be given:

{a(ℓ)} = (0000100101100111110001101110101)   (3.313)

As r = 5 and r mod 4 = 1, we take k = 1; then e = g.c.d.(5, 1) = 1 and M = 2^k + 1 = 2^1 + 1 = 3. We note that, if we had chosen k = 2, then e = g.c.d.(5, 2) = 1 and M = 2^2 + 1 = 5, or else M = 2^(2·2) − 2^2 + 1 = 13. The sequence {b(ℓ)} obtained by decimation of {a(ℓ)} is then given by

{b(ℓ)} = {a((3ℓ) mod L)} = (0001010110100001100100111110111)   (3.314)

The CCS between the two sequences, assuming "0" is mapped to "−1", takes only the three values −1/31, 7/31 and −9/31; in agreement with (3.312), the value 7/31 occurs 10 times, −1/31 occurs 15 times, and −9/31 occurs 6 times over the 31 lags.   (3.315)

Construction of a set of Gold sequences. A set of Gold sequences can be constructed from any pair {a(ℓ)} and {b(ℓ)} of preferred r-sequences of period L = 2^r − 1. We define the set of sequences

G(a, b) = {a, b, a ⊕ b, a ⊕ Zb, a ⊕ Z^2 b, ..., a ⊕ Z^(L−1) b}   (3.316)

where Z is the shift operator that cyclically shifts a sequence to the left by one position. The set (3.316) contains L + 2 = 2^r + 1 sequences of length L = 2^r − 1 and is called the set of Gold sequences.
1.A. 9. 1. 1. 1. 1.n/ D (3. 1.A. 9. 1. 31 1. 7. 1. 1. 9. 7.`/ ý b. 1.a. 1. b/. 1. 7. 1 31 1. 1. with the exception of zero lag. 1. 1. 1. 1. 1.321) frb0 . 1. 7. 1. 1. 1. 1. 1.`/g: fa. from which it is possible to generate the whole set of Gold sequences. 1. 7.319) 1 . 1. 1. Example 3. 1. 7. 1.`/g and fb0 . 9. the two sequences (3.317) r C2 r C2 L> : 1 1C2 2 rmod 4 D 2 1 2 2 Clearly. 7.1. 9.313) and (3. and the CCS between fa. 1. 1. 7.31.n/gD 1 . 1.n/gD (3. 1. 1. 1. the ACS of a Gold sequence no longer has the characteristics of an rsequence. 1. 1. 7.320) 1 . 1. 1. 1. 1. 1. 1. 1. 1. 1. 31 1 9. Adaptive transversal ﬁlters belonging to the set G. 9. 1. 1. 1. 1. 1. 1.n/gD frab0 . 1. 1. 7. 7. 1. 1/ fb0 . 7. 1. 1. 1/ 1. as is seen in the next example. 1. 1. 1. 1. 1. 7. 1. 1. hence L D 25 1 D 31. 1. 1. 1/ (3. 7. 1. 1. 1. 1. For example we calculate the ACS of fa. 1/ (3. 1. 1.31. 9.2 (Gold sequence properties) Let r D 5. 1. 1. 1. 1. 1. assume only three values: 8 r C1 r C1 > 1C2 2 r odd 1 2 2 1< 1 ra 0 b0 . 1. 1.` 2/g D a ý Z 2 b. 1. 1. 1. 1. 7. 9. 1. 7.`/g D fa. 1. 1.314) are a pair of preferred rsequences. From Example 3. 1.322) . 1. 1. 9. 1.`/gD. 7. 1.`/gD. 1. 9. 1. 1. 1. 7.`/g and fb0 . 1. 9. 1. 1. 1. 1. 1. 1. (3. 1. 1. 1. 1. 1. 7. 7. the CCS as well as the ACS.238 Chapter 3. 1/ fra .318) (3. 7. 1. 1. 1.
Appendix 3.B Identification of a FIR system by PN sequences

3.B.1 Correlation method
With reference to (3.264), which describes the relation between the input and the output of an unknown system with impulse response {h_i}, i = 0, 1, ..., N−1, we consider the scheme illustrated in Figure 3.42. To estimate the impulse response of the linear system, we take as input signal white noise with statistical power r_x(0); in fact, in this case we have

r_dx(n) = r_zx(n) = (r_x * h)(n) = r_x(0) h_n   n = 0, 1, ..., N−1   (3.323)

in other words, the cross-correlation between d and x is proportional, with the factor r_x(0), to the impulse response {h_i}. In practice, instead of noise, a PN sequence {p(i)}, i = 0, 1, ..., L−1, with period L, where we choose L ≥ N, is used as input: x(k) = p((k) mod L), k = 0, 1, ..., is obtained by repeating {p(i)}, and an input sequence x with a length of at least (N−1) + L samples is required. We recall that the autocorrelation of a PN sequence is also periodic with period L and is given by (see Appendix 3.A)

r_p(n) = 1 for n = 0;  r_p(n) ≃ 0 for n = 1, ..., L−1   (3.324)

Moreover, we recall that if the input to a time-invariant filter is periodic with period L, the output will also be periodic with period L. We assume a delay m ∈ {0, 1, ..., N−1}, a correlation filter given by the rectangular window g_Rc, and that the system is started at instant k = 0. In the regime condition, the correlator output v(k) is given by

Figure 3.42. Correlation method to estimate the impulse response of an unknown system.
v(k) = (1/L) Σ_{ℓ=0}^{L−1} d(k−ℓ) p*((k−ℓ−m) mod L)   (3.325)

= (1/L) Σ_{ℓ=0}^{L−1} [ Σ_{i=0}^{N−1} h_i p((k−ℓ−i) mod L) + w(k−ℓ) ] p*((k−ℓ−m) mod L)   (3.326)

= Σ_{i=0}^{N−1} h_i r_p((m−i) mod L) + (1/L) Σ_{ℓ=0}^{L−1} w(k−ℓ) p*((k−ℓ−m) mod L)   (3.327)

If L ≫ 1, the second term on the right-hand side of (3.327) can be ignored, hence, observing (3.324), we get

v(k) ≃ h_m   (3.328)

Mean and variance of the estimate of h_m given by (3.327) are obtained as follows.

1. Mean

E[v(k)] = Σ_{i=0}^{N−1} h_i r_p((m−i) mod L)   (3.329)

assuming w has zero mean.

2. Variance

var[v(k)] = (1/L^2) Σ_{ℓ=0}^{L−1} var[w(k−ℓ)] |p((k−ℓ−m) mod L)|^2 ≃ σ_w^2 / L   (3.330)

assuming w white and |p(ℓ)| ≤ 1.
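The correlation method can be sketched as follows; the 4-tap channel h is a hypothetical example, and the ±1 sequence used as input is the L = 15 sequence of Figure 3.41 mapped as "0" → −1, "1" → +1.

```python
def pn_correlation_estimate(d, p, N, L):
    """Correlation estimate of an N-tap impulse response from L output
    samples taken after the transient: for each delay m,
    hat_h_m = (1/L) sum_j d(N-1+j) p((N-1+j-m) mod L)."""
    return [sum(d[N - 1 + j] * p[(N - 1 + j - m) % L] for j in range(L)) / L
            for m in range(N)]

# Hypothetical 4-tap FIR channel probed by the L = 15 PN sequence.
h = [1.0, -0.5, 0.25, 0.1]
N, L = len(h), 15
p = [1, -1, -1, -1, 1, -1, -1, 1, 1, -1, 1, -1, 1, 1, 1]   # +/-1 m-sequence, L = 15
x = [p[k % L] for k in range(N - 1 + L)]                    # periodic input
d = [sum(h[i] * x[k - i] for i in range(N) if k - i >= 0)   # noiseless system output
     for k in range(len(x))]
h_hat = pn_correlation_estimate(d, p, N, L)
```

Even without noise the estimate carries a small bias of order 1/L per tap, coming from the residual −1/L autocorrelation of the PN sequence; this matches the mean expression derived above.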
Using the scheme of Figure 3.42, by varying m from 0 to N−1 it is possible to get an estimate of the samples of the impulse response of the unknown system {h_i}. However, this scheme has two disadvantages:

1. it requires a very long computation time (N L);
2. it requires synchronization, at the transmitter and the receiver, between the two PN sequences.

Both problems can be resolved by memorizing L consecutive output samples {d(k)} in a buffer and computing the correlation off line:

r_dx(m) = (1/L) Σ_{k=N−1}^{(N−1)+(L−1)} d(k) p*((k−m) mod L) ≃ h_m   m = 0, 1, ..., N−1   (3.331)

An alternative scheme is represented in Figure 3.43: with steps analogous to those of the preceding scheme, we get

v(k) = (1/L) Σ_{ℓ=0}^{L−1} d(k−(L−1)+ℓ) p*((ℓ + k − (N−1) − (L−1)) mod L) ≃ h_{(k−(N−1)−(L−1)) mod L}   (3.332)

In other words, after a transient of (N−1)+(L−1) samples, the samples at the correlator output, that is from instant k = (N−1)+(L−1), give an estimate of the samples of the impulse response of the unknown system:

ĥ_i = v(i + (N−1) + (L−1))   i = 0, 1, ..., N−1   (3.333)

Figure 3.43. Correlation method via correlator to estimate the impulse response of an unknown system.

Signal-to-estimation error ratio
Let hᵀ = [h_0, h_1, ..., h_{N−1}] be the filter coefficients to be estimated and ĥᵀ = [ĥ_0, ĥ_1, ..., ĥ_{N−1}] those estimated, and let Δh = h − ĥ be the estimation error vector.
The quality of the estimate is measured by the signal-to-estimation error ratio

Λ_e = ||h||^2 / E[||Δh||^2]   (3.334)

On the other hand, we have to take into consideration the noise present in the observed system, measured by (see Figure 3.42)

Λ = M_x ||h||^2 / σ_w^2   (3.335)

where M_x is the statistical power of the input signal; in our case M_x = 1. As a consequence, we refer to the normalized ratio

Λ_n = Λ_e / Λ = σ_w^2 / (M_x E[||Δh||^2])   (3.336)

We note that, if we indicate with d̂(k) the output of the identified system,

d̂(k) = Σ_{i=0}^{N−1} ĥ_i x(k−i)   (3.337)

the fact that ĥ ≠ h causes d̂(k) ≠ z(k), with an error given by

z(k) − d̂(k) = Σ_{i=0}^{N−1} (h_i − ĥ_i) x(k−i)   (3.338)

having variance M_x E[||Δh||^2] for a white noise input. From (3.338) we note that the difference

d(k) − d̂(k) = (z(k) − d̂(k)) + w(k)   (3.339)

consists of two terms, one due to the estimation error and the other due to the noise of the system; the ratio (3.336) thus measures the ratio between the variance of the additive noise of the observed system and the variance of the error at the output of the identified system.

3.B.2 Methods in the frequency domain

System identification in the absence of noise
In the absence of noise (w = 0), the output signal of the unknown system, where a PN sequence of period L = N, equal to the length of the impulse response to be estimated, is assumed as input signal x(k), is given by

z(k) = Σ_{n=0}^{L−1} x((k−n) mod L) h_n   (3.340)
After an initial transient of L−1 samples, using the output samples {z(k)}, k = L−1, ..., 2(L−1), we obtain a system of L linear equations in L unknowns, which in matrix notation can be written as

z = M h   (3.341)

where, assuming k = L−1, zᵀ = [z(k), z(k+1), ..., z(k+(L−1))], and M is a circulant matrix whose first row is [x((k) mod L), x((k−1) mod L), ..., x((k−(L−1)) mod L)]. The system (3.341) admits a unique solution if and only if the matrix M is nonsingular. Being M circulant, rather than inverting the matrix, the system (3.341) can be solved very efficiently, from a computational complexity point of view, in the frequency domain. Because the input sequence is periodic, the product in (3.341) can be substituted by the circular convolution (see (1.105))

z(k) = (x ⊛ h)(k)   (3.342)

Letting Z_m = DFT[z(k)], X_m = DFT[x(k)], and H_m = DFT[h_k], (3.342) can be rewritten in terms of the discrete Fourier transforms as

Z_m = X_m H_m   (3.343)

from which we get

h_k = DFT^(−1)[Z_m / X_m]   k = 0, 1, ..., L−1   (3.344)

or, setting s(k) = DFT^(−1)[1/X_m],

h_k = (s ⊛ z)(k)   (3.345)

System identification in the presence of noise
Substituting in (3.345) the expression of the output signal obtained in the presence of noise, d(k) = z(k) + w(k), k = L−1, ..., 2(L−1), the estimate of the coefficients of the unknown system is given by

ĥ_k = (s ⊛ d)(k) = h_k + (s ⊛ w)(k)   (3.346)

Assuming that w is zero-mean white noise with power σ_w^2, mean and variance of the estimate (3.346) are obtained as follows.

1. Mean

E[ĥ] = h   (3.347)

2. Variance

E[||Δh||^2] = Σ_{k=0}^{L−1} E[|ĥ_k − h_k|^2] = L σ_w^2 Σ_{i=0}^{L−1} |s(i)|^2   (3.348)
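A sketch of the frequency-domain identification follows; the length-7 channel is a hypothetical example, the input is one period of an L = 7 ±1 PN sequence, and the plain O(L^2) DFT below simply stands in for an FFT.

```python
import cmath

def dft(v):
    L = len(v)
    return [sum(v[n] * cmath.exp(-2j * cmath.pi * m * n / L) for n in range(L))
            for m in range(L)]

def idft(V):
    L = len(V)
    return [sum(V[m] * cmath.exp(2j * cmath.pi * m * n / L) for m in range(L)) / L
            for n in range(L)]

def identify_freq_domain(d, x):
    """Frequency-domain identification with a periodic input of period
    L = N: h_k = DFT^{-1}[Z_m / X_m], from one period of the output."""
    Z, X = dft(d), dft(x)
    return idft([Zm / Xm for Zm, Xm in zip(Z, X)])

x = [-1, 1, -1, -1, 1, 1, 1]            # +/-1 PN sequence of period L = 7
h = [0.9, 0.3, -0.2, 0.1, 0.0, 0.05, -0.02]   # hypothetical channel, length N = L
L = len(x)
z = [sum(h[n] * x[(k - n) % L] for n in range(L)) for k in range(L)]  # circular convolution
h_hat = identify_freq_domain(z, x)
```

In the noiseless case the recovery is exact (up to floating-point roundoff), provided no DFT coefficient X_m vanishes; for the ±1 m-sequence used here |X_m|^2 = L + 1 for m ≠ 0 and X_0 = 1, so the division is always well defined.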
(3. CAZAC sequences yield 3 dB improvement with respect to the maximallength sequences.i/j2 D L 1 L 1 1 X 1 X 1 jS j j2 D L jD0 L jD0 jX j j2 (3. in the best case. x. x. the noisy output of the unknown system can be written as d.3 The LS method 1/.B.k/ C w. it has the disadvantage that. it turns out X0 D 1 jX1 j2 D jX2 j2 D Ð Ð Ð D jX L hence. 44.L 1//g.244 Chapter 3. O O d. The unknown system can be identiﬁed using the LS criterion O [43. : : : .. . observing (3. L 1 (3.336) becomes 3n D L C1 2L (3.353) we see that the observation of L samples of the received signal requires the transmission of L T S D L C N 1 symbols of the training sequence fx.3.354) .k/ D [x.k/j2 (3.352) 2 and the minimum of (3.355) N X 1 1CL kDN 1 jd.348) is equal to ¦w . : : : . (3. 3. In the ﬁrst case. : : : .N 1//]. 45].303). letting xT .k/.L 1/ From (3. 1.351) 2 1j (3. from (3.N 1/.k/ D hT x.k x. Although this method is very simple.42. it gives an estimate with variance equal to the noise variance of the original system. we have that all terms jX j j2 are equal jX j j2 D L j D 0. if L is large. : : : .336) for PN maximallength and CAZAC sequences.350) D L C1 For CAZAC sequences.1/.353) With reference to the system of Figure 3.k/ As for the analysis of Section 2. In other words. the sum of squared errors at the output is given by ED where. For a certain estimate h of the unknown system.337). x. from (3. Adaptive transversal ﬁlters Using the Parseval theorem L 1 X i D0 js.0/.k .348). therefore 3n D 1. from (3.N 1/ C .349) it is possible to particularize (3. we introduce the following quantities.k/ D hT x.k/ k D .308).k/ O d. (3.N 1/ C .
For the minimization of (3.354), we introduce the following quantities.

1. Energy of the desired signal:

    E_d = Σ_{k=N−1}^{N−1+(L−1)} |d(k)|²    (3.356)

2. Correlation matrix of the input signal, Φ = [Φ(i,n)] (3.357), where

    Φ(i,n) = Σ_{k=N−1}^{N−1+(L−1)} x*(k−i) x(k−n),    i, n = 0, 1, ..., N−1    (3.358)

3. Crosscorrelation vector, ϑᵀ = [ϑ(0), ϑ(1), ..., ϑ(N−1)], where

    ϑ(n) = Σ_{k=N−1}^{N−1+(L−1)} d(k) x*(k−n),    n = 0, 1, ..., N−1    (3.359)

Then the cost function (3.354) becomes

    E = E_d − ĥᴴ ϑ − ϑᴴ ĥ + ĥᴴ Φ ĥ    (3.361)

We observe that the matrix Φ, because it depends only on the training sequence, can be precomputed and memorized. As the matrix is determined by a suitably chosen training sequence, we can assume that Φ is positive definite and therefore the inverse Φ⁻¹ exists. The solution to the LS problem yields

    ĥ_ls = Φ⁻¹ ϑ    (3.362)

with a corresponding error equal to

    E_min = E_d − ϑᴴ ĥ_ls    (3.363)

In some applications it is useful to estimate the variance of the noise signal w, which, for ĥ ≃ h, can be assumed equal to

    σ̂_w² = (1/L) E_min    (3.364)

Formulation using the data matrix

From the general analysis given on page 152, we recall the following definitions.

1. L × N observation matrix, whose rows are the vectors xᵀ(k), k = N−1, ..., N−1+(L−1):

    I = [ x(N−1)      ···  x(0)
          ⋮            ⋱   ⋮
          x(N−1+(L−1)) ··· x(L−1) ]    (3.365)

2. Desired sample vector:

    oᵀ = [d(N−1), d(N), ..., d(N−1+(L−1))]    (3.366)

Observing (2.131), (2.139), and (2.160), we have

    ϑ = Iᴴ o    (3.367)

    Φ = Iᴴ I    (3.368)

and therefore

    ĥ_ls = (Iᴴ I)⁻¹ Iᴴ o    (3.369)

which coincides with (3.362). We note the introduction of the new symbols I and o, in relation to an alternative LMMSE estimation method, which will be given in Section 3.B.4.

Computation of the signal-to-estimation error ratio

We now evaluate the performance of the LS method for the estimation of h. Letting

    ξ = Σ_{k=N−1}^{N−1+(L−1)} w(k) x*(k)    (3.370)

and substituting (3.353) in (3.359), we obtain the relation

    ϑ = Φ h + ξ    (3.371)

Consequently, the estimation error vector can be expressed as

    Δh = ĥ_ls − h = Φ⁻¹ ξ    (3.372)
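The data-matrix formulation above lends itself directly to code. The sketch below builds the observation matrix, forms the normal equations, and checks them against a library least-squares solve; the ±1 training sequence, tap values and noise level are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 4, 64
h = np.array([0.9, 0.4, -0.3, 0.15])         # unknown system
x = rng.choice([-1.0, 1.0], size=L + N - 1)  # L_TS = L + N - 1 training symbols

# Noisy output d(k) = h^T x(k) + w(k), observed for k = N-1, ..., N-1+L-1.
d = np.convolve(x, h)[N - 1:N - 1 + L] + 0.01 * rng.standard_normal(L)

# L x N observation matrix: row k holds x^T(k) = [x(k), ..., x(k-N+1)].
I_mat = np.array([x[k - N + 1:k + 1][::-1] for k in range(N - 1, N - 1 + L)])

Phi = I_mat.conj().T @ I_mat      # correlation matrix, Phi = I^H I
theta = I_mat.conj().T @ d        # crosscorrelation vector, theta = I^H o
h_ls = np.linalg.solve(Phi, theta)
```

Solving the normal equations and calling `np.linalg.lstsq(I_mat, d)` give the same estimate, close to the true `h` at this noise level.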
If w is zero-mean white noise with variance σ_w², then ξ* is a zero-mean random vector with correlation matrix

    R_ξ = E[ξ* ξᵀ] = σ_w² Φ*    (3.373)

Therefore, Δh has mean zero and correlation matrix

    R_Δh = σ_w² (Φ*)⁻¹    (3.374)

In particular,

    E[‖Δh‖²] = σ_w² tr[(Φ*)⁻¹]    (3.375)

and, from (3.336),

    Λ_n = (tr[Φ⁻¹])⁻¹    (3.376)

Using as training sequence a CAZAC sequence, the matrix Φ is diagonal: Φ = L I, where I is the N × N identity matrix. The elements on the diagonal of Φ⁻¹ are equal to 1/L, and (3.376) yields

    Λ_n = L/N    (3.377)

Now, using as training sequence a maximal-length sequence of periodicity L, and indicating with 1_{N×N} the matrix with all elements equal to 1, the correlation matrix can be written as

    Φ = (L+1) I − 1_{N×N}    (3.379)

From (3.379) the inverse is given by

    Φ⁻¹ = (1/(L+1)) [ I + 1_{N×N}/(L+1−N) ]    (3.380)

which, substituted in (3.376), yields

    Λ_n = (L+1)(L+1−N) / (N (L+2−N))    (3.378)

The (3.378) gives a good indication of the relation between the number of observations L, the number of system coefficients N, and Λ_n. For example, doubling the length of the training sequence, Λ_n also doubles.

In Figure 3.44 the behavior of Λ_n is represented as a function of N, with parameter L, for CAZAC sequences (solid line) and for maximal-length sequences (dotted-dashed line).

Figure 3.44. Λ_n vs. N for CAZAC sequences (solid line) and maximal-length sequences (dotted-dashed line), for various values of L.

We make the following observations.

- For a given N, choosing L ≫ N, the two sequences yield approximately the same Λ_n.
- For a given value of L, the worst case is obtained for L = N; for example, for L = 15 the maximal-length sequence yields a value of Λ_n that is about 3 dB lower than the upper bound (3.377). We note that the frequency method operates for L = N.
- The estimate of the coefficients becomes worse if the number of coefficients N is larger than the number of coefficients N_h of the system: because of the presence of the noise w, the estimate is usually very noisy. This is relevant for sparse systems, where the number of coefficients may be large, but only a few of them are nonzero. Therefore, after obtaining the estimate, it is necessary to set to zero all coefficients whose amplitude is below a certain threshold. On the other hand, if N is smaller than N_h, the estimation error may assume large values (see (3.270)).
- If the correlation method (3.331) is adopted, we get

    ĥ = (1/L) ϑ    (3.381)

  where ϑ is given by (3.359). Observing (3.371), we get

    Δh = ((1/L) Φ − I) h + (1/L) ξ    (3.382)
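The structure of Φ for a full-period maximal-length sequence, the gap between Λ_n and the bound L/N at L = N = 15, and the bias of the correlation method can all be checked numerically. In the sketch below, the LFSR feedback taps are one possible choice for the primitive polynomial x⁴ + x + 1; the test taps h are arbitrary.

```python
import numpy as np

# Generate a period-15 maximal-length sequence with a 4-stage LFSR
# (feedback taps chosen for x^4 + x + 1; any maximal 4-stage LFSR works).
state, bits = [1, 0, 0, 0], []
for _ in range(15):
    bits.append(state[-1])
    state = [state[-1] ^ state[0]] + state[:-1]
x = 1.0 - 2.0 * np.array(bits)             # map {0,1} -> {+1,-1}
L = N = 15

# Periodic training over one full period: build the correlation matrix.
x_long = np.tile(x, 2)[:L + N - 1]
I_mat = np.array([x_long[k - N + 1:k + 1][::-1]
                  for k in range(N - 1, N - 1 + L)])
Phi = I_mat.T @ I_mat                      # should equal (L+1) I - 1_{NxN}

lam_mseq = 1.0 / np.trace(np.linalg.inv(Phi))   # Lambda_n via the trace
loss_dB = 10 * np.log10((L / N) / lam_mseq)     # gap to the CAZAC value L/N

# Bias of the correlation method, noiseless case: h_corr = theta / L.
h = np.arange(1.0, N + 1.0) / N
theta = I_mat.T @ (I_mat @ h)                   # noiseless theta = Phi h
bias = theta / L - h                            # equals ((1/L) Phi - I) h
```

At L = N = 15 the loss relative to L/N comes out a bit under 3 dB, and the bias per tap reduces to (h_i − H(0))/L with H(0) the sum of the taps.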
Consequently, the estimate is affected by a bias term equal to ((1/L)Φ − I)h, and has a covariance matrix equal to (1/L²) R_ξ. In general, it turns out

    E[‖Δh‖²] = ‖((1/L)Φ − I)h‖² + (σ_w²/L²) tr[Φ]    (3.383)

and

    Λ_n = σ_w² / ( ‖((1/L)Φ − I)h‖² + (σ_w²/L²) tr[Φ] )    (3.384)

Using a CAZAC sequence, as Φ = L I, the first term of the denominator in (3.384) vanishes and

    tr[Φ] = N L    (3.385)

hence Λ_n is given by (3.377), and we obtain the same value of Λ_n as for the LS method. In fact, for a CAZAC sequence (3.324) is strictly true and the correlation method (3.331) coincides with the LS method (3.362).

Using instead a maximal-length sequence, from (3.379) we get

    ‖((1/L)Φ − I)h‖² = (1/L²) Σ_{i=0}^{N−1} |h_i − H(0)|² = (1/L²) [ ‖h‖² + (N−2) |H(0)|² ]    (3.386)

where H(0) = Σ_{i=0}^{N−1} h_i, hence

    Λ_n ≃ L / ( N + [Λ + (N−2) |H(0)|²/σ_w²] / L )    (3.387)

where Λ is defined in (3.335). In particular, if L is large enough to satisfy the condition

    Λ + (N−2) |H(0)|²/σ_w² ≪ L N

we obtain approximately the same values of Λ_n as given by (3.378).

3.B.4 The LMMSE method

We refer to the system model of Figure 3.42.
Let us assume that w and h are statistically independent random processes, whose second-order statistics are known. For a known input sequence x, we desire to estimate h using the LMMSE method given in Appendix 2.A, from the observation of the noisy output sequence d. The observation is now given by d, and the desired signal is the system impulse response h. We note that in the LS method the observation was the transmitted signal x, while the desired signal was given by the system noisy output d. Consequently, some caution is needed to apply (2.229) to the problem under investigation.

Assuming E[o] = 0 and E[h] = 0, from (2.229) the LMMSE estimator is given by

    ĥ_LMMSE = (R_o⁻¹ R_oh)ᵀ o    (3.388)

Recalling the definition (3.366) of the observation vector o, and letting

    wᵀ = [w(N−1), ..., w(N−1+(L−1))]    (3.389)

be a random vector with noise components, we can write

    o = I h + w    (3.390)

Assuming that the sequence {x(k)} is known, we have

    R_o = E[o* oᵀ] = I* R_h Iᵀ + R_w    (3.391)

and

    R_oh = E[o* hᵀ] = I* R_h    (3.392)

Then (3.388) becomes

    ĥ_LMMSE = [ (I* R_h Iᵀ + R_w)⁻¹ I* R_h ]ᵀ o    (3.393)

Using the matrix inversion lemma, (3.393) can be rewritten as

    ĥ_LMMSE = [ (R_h*)⁻¹ + Iᴴ (R_w*)⁻¹ I ]⁻¹ Iᴴ (R_w*)⁻¹ o    (3.394)

If R_w = σ_w² I, we have

    ĥ_LMMSE = [ σ_w² (R_h*)⁻¹ + Iᴴ I ]⁻¹ Iᴴ o    (3.395)

We note that, with respect to the LS method (3.369), the LMMSE method (3.395) introduces a weighting of the components of ϑ = Iᴴ o, which depends on the ratio between the noise variance and the variance of h. If the variance of the components of h is large, then R_h is also large and likely R_h⁻¹ can be neglected in (3.395). We observe that (3.388) provides an estimate only of the random (i.e., the nondeterministic) component of the channel impulse response, and the statistics of the components of h are derived from the power delay profile. We conclude by recalling that R_h is diagonal for a WSSUS radio channel model (see (4.221)).
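A sketch of the regularized solution above, with an assumed diagonal R_h (exponential power delay profile) and an illustrative noise level; letting the regularization term go to zero recovers the LS solution.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, sigma_w = 4, 32, 0.1
R_h = np.diag(0.5 ** np.arange(N))           # assumed power delay profile
h = np.sqrt(np.diag(R_h)) * rng.standard_normal(N)

x = rng.choice([-1.0, 1.0], size=L + N - 1)  # known training sequence
I_mat = np.array([x[k - N + 1:k + 1][::-1] for k in range(N - 1, N - 1 + L)])
o = I_mat @ h + sigma_w * rng.standard_normal(L)

# LMMSE: h = [sigma_w^2 R_h^-1 + I^H I]^-1 I^H o
A = sigma_w**2 * np.linalg.inv(R_h) + I_mat.conj().T @ I_mat
h_lmmse = np.linalg.solve(A, I_mat.conj().T @ o)
h_ls = np.linalg.lstsq(I_mat, o, rcond=None)[0]   # LS, for comparison
```

With sigma_w = 0 the regularization vanishes and the two estimates coincide; the LMMSE estimate shrinks the noisier taps toward zero according to the prior R_h.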
For an analysis of the estimation error we can refer to (2.233), which uses the error vector Δh = ĥ_LMMSE − h, having a correlation matrix

    R_Δh = { [ (R_h*)⁻¹ + Iᴴ (R_w*)⁻¹ I ]* }⁻¹    (3.396)

If R_w = σ_w² I, we get

    R_Δh = σ_w² { [ σ_w² (R_h*)⁻¹ + Iᴴ I ]* }⁻¹    (3.397)

Moreover, in general,

    E[‖Δh‖²] = tr[R_Δh]    (3.398)

This result can be compared with that of the LS method given by (3.375).

3.B.5 Identification of a continuous-time system

In the case of continuous-time systems, the scheme of Figure 3.42 can be modified to that of Figure 3.45 [46]. A PN sequence of period L, repeated several times, is used to modulate in amplitude the pulse

    g(t) = w_Tc(t) = rect( (t − T_c/2) / T_c )    (3.399)

The modulated output signal x is therefore given by

    x(t) = Σ_{i=0}^{+∞} p(i mod L) g(t − i T_c)    (3.400)

The autocorrelation of x, a periodic function of period L T_c, is expressed by

    r_x(τ) = (1/(L T_c)) ∫_{−L T_c/2}^{+L T_c/2} x(η) x*(η − τ) dη    (3.401)

Figure 3.45. Basic scheme to measure the impulse response of an unknown system.
As g has finite support of length T_c, we obtain

    r_x(t) = (1/T_c) Σ_{ℓ=0}^{L−1} r_p(ℓ) r_g(t − ℓ T_c),    0 ≤ t ≤ L T_c    (3.402)

where r_g is the autocorrelation of g and, for a maximal-length PN sequence,

    r_p(0) = 1,    r_p(ℓ) = −1/L,    ℓ = 1, ..., L−1    (3.403)

Substituting (3.403) in (3.402), in the case of g(t) given by (3.399), r_x(t) has the simple expression

    r_x(t) = (1 + 1/L) (1 − |t|/T_c) rect( t/(2T_c) ) − 1/L,    |t| ≤ L T_c / 2    (3.404)

as shown in Figure 3.46 for L = 8.

Figure 3.46. Autocorrelation function of x(t) given by (3.404).

If the output z of the unknown system is multiplied by a delayed version of the input, x*(t − τ), and the result is filtered by an ideal integrator between 0 and L T_c with impulse response

    w_{LT_c}(t) = (1/(L T_c)) rect( (t − L T_c/2) / (L T_c) )    (3.405)

we obtain

    v(t) = (1/(L T_c)) ∫_{t−LT_c}^{t} z(η) x*(η − τ) dη
         = (1/(L T_c)) ∫_{t−LT_c}^{t} [ ∫_{−∞}^{+∞} h(ξ) x(η − ξ) dξ ] x*(η − τ) dη
         = ∫_{−∞}^{+∞} h(ξ) r_x(τ − ξ) dξ = (h * r_x)(τ) = v_τ    (3.406)

Therefore the output assumes a constant value v_τ, equal to the convolution between the unknown system h and the autocorrelation of x evaluated in τ.

The scheme represented in Figure 3.47 is an alternative to that of Figure 3.45, of simpler implementation because it does not require synchronization of the two PN sequences at transmitter and receiver. In this latter scheme, the output z of the unknown system is multiplied by a PN sequence x′ having the same characteristics of the transmitted sequence, but a different clock frequency f′₀ = 1/T′_c, related to the clock frequency f₀ = 1/T_c of the transmitter by the relation

    f′₀ = f₀ (1 − 1/K)    (3.407)

where K is a parameter of the system. As time elapses, the delay between the two sequences diminishes by the quantity t (T′_c − T_c)/T′_c = t/K. We consider the function

    r_{x′x}(τ) = (1/(L T_c)) ∫_0^{L T_c} [x′(η)]* x(η − τ) dη    (3.408)

where τ is the delay at time t = 0 between the two sequences. Assuming 1/T_c is larger than the maximum frequency of the spectral components of h, and L is sufficiently large, we can assume that r_{x′x}(τ) ≃ r_x(τ). Therefore, at the output of the integrator we have

    v(t) = (1/(L T_c)) ∫_{t−LT_c}^{t} z(η) [x′(η)]* dη    (3.409)
         ≃ ∫_{−∞}^{+∞} h(ξ) r_x( (t − L T_c)/K − ξ ) dξ    (3.410)

or, with the substitution t′ = (t − L T_c)/K,

    y(K t′ + L T_c) ≃ ∫_0^{+∞} h(ξ) r_x(t′ − ξ) dξ    (3.412)

where the integral in (3.412) coincides with the integral in (3.406). If K is sufficiently large (an increase of K clearly requires a greater precision, and hence a greater cost, of the frequency synthesizer that generates f′₀), it can be shown that the approximations in (3.409) and (3.410) are valid. Therefore the systems in Figure 3.47 and in Figure 3.45 are equivalent.

Figure 3.47. Sliding window method to measure the impulse response of an unknown system.
Chapter 4

Transmission media

The first two sections of this chapter introduce several parameters that are associated with the electrical characteristics of electronic devices. The fundamental properties of various transmission media will be discussed in the remaining sections.

4.1 Electrical characterization of a transmission system

Simplified scheme of a transmission system

We consider a message source, which could be for example a machine that generates speech signals and/or sequences of symbols, expressed as voltage signals. To be able to convey the information represented by these messages to a user situated at a certain distance from the source, a transmission system can be configured as illustrated in Figure 4.1. The transmitter is a device that converts the source message into a signal that can physically propagate through the transmission medium. The transmission medium may consist of one or more of the following media: twisted-pair cable, coaxial cable, waveguide, radio link, or optical fiber. An intermediate device may compensate for attenuation and/or disturbances introduced by the medium: it may consist of a simple amplifier or, more generally, of a repeater, or even, for data signals, of a regenerator that attempts to restore the original source message. The task of the receiver is to yield an accurate replica of the original message. The word channel is often used in practice to indicate in abstract terms a transmission medium that allows the propagation of a signal.

Figure 4.1. Simplified scheme of a transmission system.

From an electrical point of view, many components (e.g., amplifiers, filters, cables, ...) of a transmission system may be interpreted as a cascade of 2-port linear networks. A 2-port network is a device that transfers a signal v_i from a source to a load, as shown in Figure 4.2a, where we identify input and output two-terminal devices. The corresponding abstract model is illustrated in Figure 4.2b. If Z₁ and Z₂ are respectively the input and output impedances of the 2-port network, and v_o is the open-circuit voltage at the output, we obtain the equivalent electrical scheme of Figure 4.2c, where Z_i denotes the source impedance, Z_L the load impedance, and v₁ and v_L are, respectively, the input and output signals of the 2-port network: the signal v_i produces v₁ through the source voltage divider, v₁ gives origin to v_o according to the 2-port network transfer characteristics, and the signal v_L is obtained from v_o through the load voltage divider.

Figure 4.2. Connection of a source to a load through a 2-port linear network.

To analyze the characteristics of the network of Figure 4.2a, we will refer to the study of two-terminal devices. With reference to Figure 4.2c, the frequency response G_Ch(f) of each network is given by the ratio V_L/V₁; therefore we have

    G_Ch(f) = G_{1o}(f) G_L(f)    (4.1)

where G_{1o}(f) and G_L(f) denote the frequency responses of the 2-port network and of the load, respectively. We note that in some cases the frequency response could be defined as V_L/I₁, I_L/V₁, or I_L/I₁; for these cases, the expression of G_Ch(f) will be different from (4.1).
Characterization of an active device

We consider the two-terminal device of Figure 4.3, which consists of a generator with an open-circuit voltage v_b, modelled as a random WSS process with statistical power spectral density P_vb (V²/Hz), and an impedance Z_b. The device is connected to a load with impedance Z_c.

Figure 4.3. Active two-terminal device (voltage generator) connected to a load.

If v and i are, respectively, the voltage and the current at the load, the average power (W) transferred to the load is defined by the relation¹

    P = lim_{t→∞} (1/t) ∫_{−t/2}^{t/2} v(τ) i(τ) dτ = E[v(t) i(t)]    (4.2)

assuming that v(t) i(t) is an ergodic process in mean (see (1.442)). If P_vi is the cross-spectral density between v and i, from (1.230) we have that

    P = ∫_{−∞}^{+∞} P_vi(f) df = ∫_{−∞}^{+∞} Re[P_vi(f)] df    (4.3)

In fact, the cross-correlation r_vi is a real function; hence P_vi is Hermitian, with even real part and odd imaginary part.

Definition 4.1
The function

    p(f) = Re[P_vi(f)]    (W/Hz)    (4.4)

is called average power density transferred to the load and expresses the average power per unit of frequency.

¹ In propagation theory [1], P is sometimes defined as P = (1/2) lim_{t→∞} (1/t) ∫_{−t/2}^{t/2} v(τ) i*(τ) dτ.

We now obtain P_vi in terms of P_vb using the method shown on page 49. Being

    I = V_b / (Z_b + Z_c),    V = V_b Z_c / (Z_b + Z_c)    (4.6)

then

    V I* = V_b V_b* Z_c / |Z_b + Z_c|²    (4.7)

hence

    P_vi(f) = P_vb(f) Z_c / |Z_b + Z_c|²    (4.8)

In general, if v is the voltage at the load impedance Z_c,

    p(f) = P_vb(f) R_c / |Z_b + Z_c|²,    R_c = Re[Z_c]    (4.9)

Definition 4.2
The available power per unit of frequency of an active two-terminal device is defined as the maximum of (4.9) with respect to Z_c, and is obtained for Z_c = Z_b*:

    p_d(f) = P_vb(f) / (4 R_b),    R_b = Re[Z_b]    (4.11)

We note that (4.11) is a parameter of the active device that expresses the maximum power per unit of frequency that can be delivered to the load.

Active two-terminal device as a current generator

For the circuit of Figure 4.4, which consists of a current source i_b with admittance Y_b = G_b + j B_b and a load with admittance Y_c = Y_b*, the available power per unit of frequency is given by

    p_d(f) = P_ib(f) / (4 G_b)    (4.12)

where G_b = Re[Y_b], and P_ib (A²/Hz) is the PSD of the signal i_b.

Figure 4.4. Active two-terminal device (current generator) connected to a load.
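The maximization behind Definition 4.2 can be checked numerically; the source PSD and impedance values below are arbitrary.

```python
import numpy as np

P_vb = 1e-9                  # open-circuit voltage PSD of the source, V^2/Hz
Zb = 50.0 + 20.0j            # source impedance

def p_load(Zc):
    """Power density transferred to a load Zc: P_vb * Rc / |Zb + Zc|^2."""
    return P_vb * Zc.real / abs(Zb + Zc) ** 2

p_matched = p_load(np.conj(Zb))      # conjugate match, Zc = Zb*
p_avail = P_vb / (4 * Zb.real)       # available power density, P_vb / (4 Rb)

# Sweep other loads: none beats the conjugate match.
loads = [Rc + 1j * Xc for Rc in np.linspace(1, 200, 40)
                      for Xc in np.linspace(-100, 100, 41)]
p_best = max(p_load(Zc) for Zc in loads)
```

The matched load attains exactly P_vb/(4 R_b); every other load on the sweep delivers strictly less.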
We have a simple relation between the circuit of Figure 4.3 and that of Figure 4.4: from the Norton theorem we get

    Y_b = 1/Z_b    (4.13)

and

    I_b = V_b / Z_b    (4.14)

Note that, for Z_b = R_b + j X_b,

    Y_b = Z_b* / |Z_b|² = R_b/|Z_b|² − j X_b/|Z_b|²    (4.15)

Conditions for the absence of signal distortion

We now return to the circuit of Figure 4.3 and consider the relation between the load voltage v and the source voltage v_b. The conditions for the absence of signal distortion, as well as for maximum transfer of power to the load, are verified in the following two cases.

1. For a narrowband signal v_b, for which the impedances are regarded as complex-valued constants within the passband B (see Definitions 1.10 on page 29 and 1.11 on page 46), the voltage v_b is transferred to the load without distortion if

    G(f) = Z_c(f) / (Z_b(f) + Z_c(f))    (4.16)

is a complex constant in the passband of v_b. Distortion is not a problem because G(f) is a complex constant in the passband, and the phase term is equivalent to a delay; the condition for maximum transfer of power, Z_c = Z_b*, is easily verified.

2. For a broadband signal v_b, regarding the impedances as complex-valued functions of the frequency within the passband B, according to the Heaviside conditions (1.144) the absence of distortion requires

    Z_c(f) / (Z_c(f) + Z_b(f)) = K₁,    f ∈ B    (4.17)

where K₁ is a constant. The only way to verify (4.17), as well as the condition for maximum transfer of power, is that both the load impedance (R_c) and the source impedance (R_b) are purely resistive. Note that for R_b = R_c also the condition for maximum transfer of power is satisfied.

In a connection between a source and a load, with reference to Figure 4.2c, the frequency responses of the various blocks are given by

    G_i(f) = Z₁(f)/(Z_i(f) + Z₁(f)),    G_{1o}(f) = V_o(f)/V₁(f),    G_L(f) = Z_L(f)/(Z_L(f) + Z₂(f)),    f ∈ B    (4.18)

The conditions for the absence of distortion between v_i and v_L are verified if

    G(f) = V_L(f)/V_i(f) = G_i(f) G_{1o}(f) G_L(f)    (4.19)

is a constant in the passband of v_i. Note that also in this case the presence of a constant delay factor is possible.

Characterization of a 2-port network

Let p_i(f) and p_o(f) be the average power densities at the input of the network and at the load, respectively:

    p_i(f) = P_v1(f) R₁ / |Z₁|²    (4.21)

    p_o(f) = P_vL(f) R_L / |Z_L|²    (4.22)

Definition 4.3
The network power gain is defined as the ratio

    g(f) = p_o(f) / p_i(f)    (4.23)

Using the expressions of the various frequency responses, and observing (see Figure 4.2c) that

    P_vL(f) = |G_Ch(f)|² P_v1(f)    (4.24)

we get

    g(f) = |G_Ch(f)|² (|Z₁|²/R₁) (R_L/|Z_L|²)    (4.25)

In the presence of match for maximum transfer of power only at the source, we introduce the notion of transducer gain, defined as

    g_t(f) = p_o(f) / p_{i,d}(f)    (4.26)

where p_{i,d}(f) is the available power per unit of frequency at the source.

Definition 4.4
A 2-port network is said to be perfectly matched if the conditions for maximum transfer of power are established at the source as well as at the load:

    Z₁ = Z_i*    and    Z_L = Z₂*    (4.27)

In this case the powers become available powers:

- at the source:    p_{i,d}(f) = P_vi(f) / (4 R_i)    (4.28)
- at the load:     p_{o,d}(f) = P_vo(f) / (4 R₂)    (4.29)

Definition 4.5
The available power gain is defined as the ratio between p_{o,d}(f) and p_{i,d}(f):

    g_d(f) = p_{o,d}(f) / p_{i,d}(f)    (4.30)

In particular, for Z₁ = Z₂, that is when the input and output network impedances coincide, we get

    g_d(f) = |G_Ch(f)|²    (4.31)

If in the passband B of v_i we have g_d > 1, the network is said to be active; if instead g_d < 1, the network is passive. In this latter case we speak of available attenuation of the network:

    a_d = 1/g_d    (4.32)

In dB,

    (g_d)_dB = 10 log₁₀ g_d    (4.33)

and

    (a_d)_dB = −(g_d)_dB    (4.34)

Definition 4.6
Apart from a possible delay, for an ideal distortionless network with power gain (attenuation) g_d, we will assume that the frequency response of the network is

    G_Ch(f) = G_o  constant,    f ∈ B    (4.35)

in this case, the impulse response is given by

    g_Ch(t) = G_o δ(t)    (4.36)

where, observing (4.31),

    G_o = √g_d = 1/√a_d    (4.37)

We note that, in case the conditions leading to (4.31) are not verified, the relation between G_o and g_d is more complicated (see (4.25)).
Measurement of signal power

Typically g_d and a_d are expressed in dB. In fact, as G_o denotes a ratio of voltages, from (4.37) it follows that

    (G_o)_dB = 20 log₁₀ G_o = 10 log₁₀ g_d = (g_d)_dB    (4.38)

The power P is expressed in W, or in dBW, dBm (reference 1 mW = 10⁻³ W), or dBrn (reference 1 pW = 10⁻¹² W):

    (P)_dBW = 10 log₁₀ (P in W)    (4.39)
    (P)_dBm = 10 log₁₀ (P in mW)    (4.40)
    (P)_dBrn = 10 log₁₀ (P in pW)    (4.41)

Some relations are

    (P)_dBm = (P)_dBW + 30    and    (P)_dBrn = (P)_dBW + 120    (4.42)

Example 4.1.1
For P = 0.5 W we have (P)_dBW = −3 dBW and (P)_dBm = 27 dBm.

For telephone signals, a further power unit is given by dBrnc, which expresses the power in dBrn of a signal filtered according to the mask given in Figure 4.5 [2]. The filter reflects the perception of the human ear and is known as C-message weighting.

Figure 4.5. Frequency weighting known as C-message weighting. Reproduced with permission of Lucent Technologies, Inc./Bell Labs. [© 1982 Bell Telephone Laboratories.]
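The unit conversions above are easily wrapped in helper functions, reproducing Example 4.1.1:

```python
import math

def dBW(p_watt):  return 10 * math.log10(p_watt)           # ref 1 W
def dBm(p_watt):  return 10 * math.log10(p_watt / 1e-3)    # ref 1 mW
def dBrn(p_watt): return 10 * math.log10(p_watt / 1e-12)   # ref 1 pW

P = 0.5   # the example value, P = 0.5 W
# dBW(P) ~ -3 dBW, dBm(P) ~ 27 dBm; dBm = dBW + 30, dBrn = dBW + 120
```

The fixed offsets of 30 and 120 dB between scales hold for any power, since they depend only on the reference levels.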
. Actually. if we represent the motion of an electron within a conductor in a twodimensional plane. the sources of the two motions do not interact with each other. For a conductor of resistance R. In addition to interference caused by electromagnetic coupling between various system elements and noise coming from the surrounding environment. f / D 2kTR . the power spectral density of the open circuit voltage w at the conductor terminals is given by Pw .2 Noise generated by electrical devices and networks Various noise and disturbance signals are added to the desired signal at different points of a transmission system. Noise generated by electrical devices and networks 263 4. the large number of electrons and collisions gives origin to a measurable alternating component. the typical behavior is represented in Figure 4. We will analyze two types of noise generated by transmission devices: thermal noise and shot noise.6b. Between two consecutive collisions the electron produces a current that is proportional to the projection of the velocity onto the axis of the conductor. there is also noise generated by the transmission devices themselves.6.4.6a is illustrated in Figure 4. For example.44) Figure 4. f / where k D 1:3805 Ð 10 23 (4. at an absolute temperature of T Kelvin. As each electron carries a unit charge. f/ D e kT 1 kT (4. Representation of electron motion and current produced by the motion.43) J/K is the Boltzmann constant and Â hf Ã 1 hf . its motion between collisions with atoms produces a short impulse of current.2. Such noise is very important because it determines the limits of the system. If a current ﬂows through the conductor. Thermal noise Thermal noise is a phenomenon associated with Brownian or random motion of electrons in a conductor.6a where the changes in the direction of the electron motion are determined by random collisions with atoms at the set of instants ftk g. the behavior of instantaneous current for the path of Figure 4. 
Although the average value (DC component) is zero. an orderly motion is superimposed on the disorderly motion of electrons.
We note that, for f ≪ kT/h = 6 · 10¹² Hz (at room temperature T = 290 K), Γ(f) ≃ 1. Therefore the PSD of w is approximately white,

    P_w(f) ≃ 2kTR    (4.45)

We adopt the electrical model of Figure 4.7, where a conductor is modelled as a noiseless device having in series a generator of noise voltage w.² Because at each instant the noise voltage w(t) is due to the superposition of several current pulses, a suitable model for the amplitude distribution of w(t) is the Gaussian distribution with zero mean. Note that, because of the wide support of P_w(f), the variance is very large.

Figure 4.7. Electrical model of a noisy conductor.

In the case of a linear two-terminal device with impedance Z = R + jX at absolute temperature T, the spectral density of the open-circuit voltage w is still given by (4.45), where R = Re[Z]: only the resistive component of the impedance gives origin to thermal noise.

² An equivalent model assumes a noiseless conductor in parallel to a generator of noise current j(t) with PSD P_j(f) = 2kTG, where G = 1/R.

Let us consider the scheme of Figure 4.8, where a noisy impedance Z = R + jX is matched to the load for maximum transfer of power.

Figure 4.8. Electrical circuit to measure the available source noise power at the load.
In this case, observing (4.11), the available noise power per unit of frequency is given by

    p_{w,d}(f) = 2kTR / (4R) = kT/2    (W/Hz)    (4.46)

At room temperature T = 290 K,

    p_{w,d}(f) = 2 · 10⁻²¹ W/Hz,  i.e.  (p_{w,d}(f))_dBm ≃ −177 dBm/Hz    (4.47)

If the circuit of Figure 4.8 has a bandwidth B, the power delivered to the load is equal to

    P_w = (kT/2) 2B = kTB    (W)    (4.48)

For T = 290 K,

    (P_w)_dBm = −174 + 10 log₁₀ B    (dBm)    (4.49)

We note that a noisy impedance produces an open-circuit voltage w with a root mean-square (rms) value equal to

    σ_w = √(p_{w,d} · 4R · 2B) = √(4kTRB)    (V)    (4.50)

We also note from (4.48) that the total available power of a thermal noise source is proportional to the product of the system bandwidth and the absolute temperature of the source.

Shot noise

Most devices are affected by shot noise, which is due to the discrete nature of electron flow: also in this case the noise represents the instantaneous random deviation of current or voltage from the average value. Shot noise, expressed as a current signal, can also be modelled as Gaussian noise with a constant PSD given by

    P_{i,shot}(f) = eI    (A²/Hz)    (4.51)

where e = 1.6 · 10⁻¹⁹ C is the electron charge and I is the average current that flows through the device.

Noise in diodes and transistors

Models are given in the literature to describe the different noise sources in electronic devices; for example, in [2] shot noise is evaluated for a junction diode, and shot and thermal noise for a transistor. In any case, the total output noise power of a device is usually not described by p_{w,d}(f), but rather by an equivalent function called noise temperature.

Noise temperature of a two-terminal device

Let p_{w,d}(f) be the available power per unit of frequency, due to the presence of noise in a device. The noise temperature is defined as

    T_w(f) = p_{w,d}(f) / (k/2)    (4.52)
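The relations (4.46)-(4.49) give the familiar −174 dBm/Hz noise floor at room temperature; a quick check:

```python
import math

k = 1.3805e-23                 # Boltzmann constant, J/K
T0 = 290.0                     # room temperature, K

def thermal_noise_dBm(B_hz):
    """Available thermal-noise power P_w = k T B, expressed in dBm."""
    return 10 * math.log10(k * T0 * B_hz / 1e-3)

floor_per_hz = thermal_noise_dBm(1.0)    # ~ -174 dBm in a 1 Hz bandwidth
p_1MHz = thermal_noise_dBm(1e6)          # ~ -114 dBm in a 1 MHz bandwidth
```

Widening the bandwidth by a factor of 10⁶ raises the noise power by exactly 60 dB, as (4.49) predicts.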
In other words, T_w represents the absolute temperature that a thermal noise source should have in order to produce the same available noise power as the device. We note that if a device at absolute temperature T contains more than one noise source, then T_w > T. This concept can be extended and applied to the output of an amplifier or an antenna, expressing the noise power in terms of an effective noise temperature.

Noise temperature of a 2-port network

We will consider the circuit of Figure 4.9a, where both the source impedance and the load impedance are matched for maximum transfer of power. The source, with noise temperature T_S, generates a noise voltage w_i^(S), which if measured at the output is equal to w_o^(S), with available power at the load given by

    p_wo^(S)(f) = g_d(f) (k T_S)/2    (4.53)

where g_d is the available power gain of the 2-port network defined in (4.30). If, in addition to the source, the network also introduces noise, the noise voltage generated at the network output because of the presence of internal sources is w_o^(A), with available power p_wo^(A)(f); we will then have a total output noise signal given by w_o = w_o^(S) + w_o^(A).

Figure 4.9. Noise source connected to a noisy 2-port network: three equivalent models.

Assuming the two noise signals w_o^(S) and w_o^(A) are uncorrelated, the available power at the load will be equal to the sum of the two powers:

    p_wo(f) = p_wo^(S)(f) + p_wo^(A)(f) = g_d(f) (k T_S)/2 + p_wo^(A)(f)    (4.54)

Definition 4.7
The effective noise temperature T_A of the 2-port network is defined as

    T_A(f) = p_wo^(A)(f) / (g_d(f) k/2)    (4.55)

and denotes the temperature of a thermal noise source connected to a 2-port noiseless network that produces the same output noise power. Note that the dependency on frequency of T_A and T_S is determined by g_d(f). Then (4.54) becomes³

    p_wo(f) = g_d(f) (k/2) [T_S + T_A]    (4.56)

³ To simplify the notation we have omitted indicating the dependency on frequency of all noise temperatures.

Definition 4.8
The effective input temperature of a system consisting of a source connected to a 2-port network is

    T_wi = T_S + T_A    (4.57)

Definition 4.9
The effective output temperature of a system consisting of a source connected to a 2-port network is

    T_wo = g_d(f) [T_S + T_A]    (4.58)

so that

    p_wo(f) = (k/2) T_wo    (4.59)

Equivalent-noise models

By the previous considerations, we introduce the equivalent circuits illustrated in Figures 4.9b and 4.9c. The scheme of Figure 4.9b assumes the network to be noiseless and considers an equivalent noise source at the input, with p_wi(f) = (k/2) T_wi; the scheme of Figure 4.9c, on the other hand, considers all noise sources at the output. The effects on the load for the three schemes of Figure 4.9 are the same.
Noise figure of a 2-port network

Usually the noise of a 2-port network is not directly characterized through T_A, but through the noise figure F. We set the source at a noise temperature equal to the room temperature, T_{S₀} = T₀ = 290 K. The noise figure is given by the ratio between the available power at the load due to the total noise, p_wo = p_wo^(A) + p_wo^(S₀), and that due only to the source, p_wo^(S₀):

    F(f) = p_wo(f) / p_wo^(S₀)(f)    (4.60)

A useful relation to determine F, equivalent to (4.60), employs the PSDs of the output noise signals:

    F(f) = P_wo(f) / P_wo^(S₀)(f)

Being p_wo^(A) independent of T_S, F is a parameter of the network and does not depend on the noise temperature of the source to which it is connected. As the source and network noise signals are generated by uncorrelated phenomena,

    F(f) = 1 + p_wo^(A)(f) / p_wo^(S₀)(f)    (4.61)

Recognizing that p_wo^(S₀)(f) = g_d(f) kT₀/2, and substituting for p_wo^(A) the expression obtained from (4.55), we obtain the important relation

    F(f) = 1 + T_A/T₀    (4.62)

from which

    T_A = (F − 1) T₀    (4.63)

We note that F is always greater than 1, and it equals 1 in the ideal case of a noiseless 2-port network. From the above considerations we deduce that, to describe the noise of an active 2-port network, we must assign the gain g_d and the noise figure F (or equivalently the noise temperature T_A); as F and T_A are related by (4.62), it is sufficient to assign only one of the two parameters. We now see that for a passive network at temperature T₀, as for example a transmission line, for which g_d < 1, the noise figure follows directly from the attenuation.
To determine the noise ﬁgure let us assume as source an impedance. f / pwo. that employs the PSDs of the output noise signals. From (4. Transmission media Noise ﬁgure of a 2port network Usually the noise of a 2port network is not directly characterized through T A .61) D1C Being pwo.61). f / pwo. f / D k . Moreover.A/ the expression (4.
62). Noise generated by electrical devices and networks 269 Assuming the load is matched for maximum transfer of power. Note that also in this case. f / D . pwi. the effective input temperature of the system can be expressed as Twi D T S C T A D T S C .kT0 =2/. Summarizing.1 Let us consider the conﬁguration of Figure 4. An electrical model of the connection is given in Figure 4. Twi is given by (4. Antennapreampliﬁer conﬁguration and electrical model. in a connection between a source and a 2port network. Z 2 D Z Ł .S0 / . T A . where an antenna with noise temperature T S is connected to a preampliﬁer with available power gain g and noise ﬁgure F.S0 / . where the antenna is modelled as a resistance with noise temperature T S .4. f / D 1 D ad gd (4.64) where ad is the power attenuation of the network. .61) we have F. (a) (b) Figure 4.S0 / . according to (4. given F.2. we can determine the effective noise temperature of the network. and pw0. f / D . f / D gd pwi.65).10a. Hence from the ﬁrst of (4.66) 1/T0 (4.2.46) L at the output we have pw0 .65) Example 4.10. f / kTwi 2 (4. f / D g d .kT0 =2/. If the impedances of the two devices are matched.e.F and the available noise power at the load is given by p w0 . On the other hand.10b. from (4. i.
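As a numerical check of the relations (4.62), (4.64) and (4.65), the following Python sketch converts between noise figure and effective noise temperature (the function names are ours, not from the text; T0 = 290 K as in the definition of the noise figure):

```python
import math

T0 = 290.0  # room temperature [K], the reference used in (4.62)

def noise_temp_from_F(F_dB: float) -> float:
    """Effective noise temperature T_A = (F - 1) * T0, from (4.65), with F in dB."""
    F = 10 ** (F_dB / 10)
    return (F - 1) * T0

def F_dB_from_noise_temp(T_A: float) -> float:
    """Noise figure F = 1 + T_A / T0, from (4.62), returned in dB."""
    return 10 * math.log10(1 + T_A / T0)

def F_dB_passive(attenuation_dB: float) -> float:
    """For a passive network at T0, F = a_d, (4.64): the noise figure in dB
    equals the attenuation in dB."""
    return attenuation_dB
```

For instance, a 3 dB noise figure corresponds to T_A of about 289 K, and a lossy cable with 6 dB attenuation at room temperature has F = 6 dB.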
Cascade of 2-port networks
As shown in Figure 4.11, we consider the cascade of two 2-port networks A_1 and A_2, with available power gains g_1 and g_2 and noise figures F_1 and F_2, respectively; we wish to determine the parameters of a network equivalent to the cascade of the two networks. With regard to the power gain, assuming the impedances are matched for maximum transfer of power between different networks, the overall network has a gain g equal to the product of the gains of the individual networks:

    g = g_1 g_2    (4.67)

Figure 4.11. Equivalent scheme of a cascade of two 2-port networks.

With regard to the noise characteristics, it is sufficient to determine the noise figure of the cascade of the two networks. For a source at room temperature T0, from (4.66) for T_S = T0, the noise power at the output of the first network is given by

    p_wo,1(f) = (k/2) T0 F_1 g_1    (4.68)

At the output of the second network, using (4.63) to express the noise power due to the second network only, we have

    p_wo,2(f) = p_wo,1(f) g_2 + (k/2) (F_2 − 1) T0 g_2    (4.69)

Then the noise figure of the overall network is given by

    F = p_wo,2(f) / ((k/2) T0 g_1 g_2)    (4.70)

Simplifying (4.70) we get

    F = F_1 + (F_2 − 1)/g_1    (4.71)

Extending this result to the cascade of N 2-port networks A_i, i = 1, ..., N, characterized by gains g_i and noise figures F_i, we obtain the Friis formula of the total noise figure:

    F = F_1 + (F_2 − 1)/g_1 + (F_3 − 1)/(g_1 g_2) + ··· + (F_N − 1)/(g_1 g_2 ··· g_{N−1})    (4.72)

We observe that F strongly depends on the gain and noise figure parameters of the first stages: in particular, the smaller F_1 and the larger g_1, the more F will be reduced. Moreover, using (4.62), which relates the noise figure to the effective noise temperature, we have that the equivalent noise temperature of the cascade of N 2-port networks, characterized by noise temperatures T_Ai, i = 1, ..., N, is given by

    T_A = T_A1 + T_A2/g_1 + T_A3/(g_1 g_2) + ··· + T_AN/(g_1 g_2 ··· g_{N−1})    (4.73)

Obviously the total gain of the cascade is given by

    g = g_1 g_2 ··· g_N    (4.74)

Example 4.2.2
The idealized configuration of a transmission medium consisting of a very long cable, where amplifiers are inserted at equally spaced points, is illustrated in Figure 4.12. Each section of the cable, with power attenuation a_c and noise figure F_c = a_c (see (4.64)), cascaded with an amplifier with gain g_A and noise figure F_A, is called a repeater section. To compensate for the attenuation of the cable we choose g_A = a_c. Then, denoting by g_c = 1/a_c the cable gain, each section has a gain

    g_sr = g_c g_A = (1/a_c) g_A = 1    (4.75)

and noise figure

    F_sr = F_c + (F_A − 1)/g_c = a_c + a_c (F_A − 1) = a_c F_A = g_A F_A    (4.76)

Therefore the N sections have overall unit gain and, substituting (4.76) in (4.72), noise figure

    F = F_sr + (F_sr − 1) + ··· + (F_sr − 1) = N (F_sr − 1) + 1 ≈ N F_sr    (4.77)

where F_sr is given by (4.76). We note that the output noise power of N repeater sections is N times the noise power introduced by an individual section.

Figure 4.12. Transmission channel composed of N repeater sections.
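The Friis formula (4.72) and the repeater-section result (4.77) can be sketched in a few lines of Python (a minimal illustration, with function names of our own choosing; all quantities are in linear units, not dB):

```python
def friis_noise_figure(stages):
    """Total noise figure of cascaded 2-port networks, per the Friis formula (4.72).
    `stages` is a list of (F, g) pairs in linear units, source side first."""
    F_tot, g_acc = 0.0, 1.0
    for i, (F, g) in enumerate(stages):
        # First stage contributes F_1; later stages contribute (F_i - 1)
        # divided by the gain accumulated up to the previous stage.
        F_tot += F if i == 0 else (F - 1) / g_acc
        g_acc *= g
    return F_tot

def repeater_sections_noise_figure(N, a_c, F_A):
    """N identical repeater sections: cable (F_c = a_c) plus amplifier with
    g_A = a_c, so each section has unit gain and F_sr = a_c * F_A,
    giving F = N*(F_sr - 1) + 1 as in (4.76)-(4.77)."""
    F_sr = a_c * F_A
    return friis_noise_figure([(F_sr, 1.0)] * N)
```

Swapping the order of a low-noise high-gain stage and a noisy stage shows why the first stage dominates the total noise figure.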
4.3 Signal-to-noise ratio (SNR)

SNR for a two-terminal device
Let us consider a circuit in which a source v_b, with internal impedance Z_b, generates a desired signal s and a noise signal w:

    v_b(t) = s(t) + w(t)    (4.78)

To measure the level of the desired signal with respect to the noise, one of the most widely used methods considers the signal-to-noise ratio (SNR), defined as the ratio of the statistical powers:

    Λ_s = M_s/M_w = E[s²(t)]/E[w²(t)] = ∫ P_s(f) df / ∫ P_w(f) df    (4.79)

On the other hand, the effects of the two signals on a certain load Z_c are measured by the average powers. Therefore we also introduce the following signal-to-noise ratio of average powers:

    Λ_p = P_s/P_w = ∫ p_s(f) df / ∫ p_w(f) df    (4.80)

where

    p_s(f) = P_s(f) R_c/|Z_b + Z_c|²    (4.81)

    p_w(f) = P_w(f) R_c/|Z_b + Z_c|²    (4.82)

Therefore Λ_s and Λ_p are in general different; however, if the term R_c/|Z_b + Z_c|² is a constant within the passband of s and w, then the two SNRs coincide. Note that if Z_b = Z_c^*, then R_c/|Z_b + Z_c|² = 1/(4R_b), that is, the condition for maximum transfer of power is satisfied. Hence it is sufficient that R_b is constant within the passband of s and w to have

    Λ = Λ_s = Λ_p    (4.83)

Moreover, from (4.9), assuming p_w is constant within the passband of w, with bandwidth B, we have

    P_w = (k/2) T_w 2B = k T_w B    (4.84)

where T_w is the noise temperature of w, and

    Λ = P_s / (k T_w B)    (4.85)

where P_s is the available average power of the desired signal. Later we will often use this relation.

SNR for a 2-port network
Let us consider now the connection of a source to the linear 2-port network of Figure 4.2b, where v_i has a desired component s and a noise component w_i (see Figure 4.9b):

    v_i(t) = s(t) + w_i(t)    (4.86)

and w_i has an effective noise temperature T_wi = T_S + T_A. The open-circuit voltage at the network output is given by

    v_o(t) = s_o(t) + w_o(t)    (4.87)

where s_o and w_o depend on s and w_i, respectively. With reference to the above configuration, we observe that the power of w_i could be very high if T_wi is constant over a wide band; w_o, however, has much smaller power, since its passband coincides with that of the network frequency response. At the network output we obtain

    Λ_out = E[s_o²(t)]/E[w_o²(t)] = P_so/P_wo    (4.88)

We indicate with B the passband of the network frequency response, usually equal to or including the passband of s, and with B its bandwidth. Under matched load conditions (that is, Z_L = Z_2^*), from the expressions (4.4) and (4.30) we have

    P_so = ∫ p_so(f) df = 2 ∫_B p_s(f) g(f) df    (4.89)

and

    P_wo = 2 ∫_B p_wi(f) g(f) df    (4.90)

assuming that also the source is matched for maximum transfer of power, so that p_wi(f) = k T_wi/2. Assuming now that g(f) is constant within B, and assuming (4.83) holds, we have

    P_wo = (k/2) T_wi g 2B = k T_wi g B    (4.91)

Finally, denoting by P_s the available power of the desired signal at the network input, we get P_so = P_s g and

    Λ_out = P_s / (k T_wi B)    (4.92)

From (4.50), the effective input noise due to the connection source-network has, for T_S = T0 (hence T_wi = F T0), an average power equal to

    (P_wi)_dBm = −114 + 10 log10 B|MHz + (F)_dB    (T_S = T0)    (4.93)

where B|MHz denotes the bandwidth in MHz, and the average power of the effective output noise is given by

    (P_wo)_dBm = (P_wi)_dBm + (g)_dB    (4.94)
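The handy dBm relation (4.93) follows directly from P_wi = k T0 B scaled by the noise figure; the short Python sketch below (ours, not from the book) computes it from first principles, so the familiar −114 dBm/MHz noise floor can be verified numerically:

```python
import math

k = 1.380e-23   # Boltzmann constant [J/K]
T0 = 290.0      # room temperature [K]

def input_noise_power_dBm(B_MHz: float, F_dB: float = 0.0) -> float:
    """Available noise power k*T0*B scaled by the noise figure, in dBm.
    Reproduces (4.93): approximately -114 + 10 log10 B_MHz + F_dB."""
    P_watt = k * T0 * B_MHz * 1e6 * 10 ** (F_dB / 10)
    return 10 * math.log10(P_watt * 1000)  # W -> mW, then to dBm
```

For B = 1 MHz and F = 0 dB the result is about −114 dBm, and every 10 dB of noise figure raises the floor by 10 dB.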
Example 4.3.1
A station, receiving signals from a satellite, has an antenna with gain g_ant of 40 dB and a noise temperature T_S of 60 K (that is, the antenna acts as a noisy resistor at a temperature of 60 K). The antenna feeds a preamplifier with a noise temperature T_A1 of 125 K and a gain g_1 of 20 dB; the preamplifier is followed by an amplifier with a noise figure F_2 of 10 dB and a gain g_2 of 80 dB. The transmitted signal bandwidth is 1 MHz. The satellite has an antenna with a power gain of g_sat = 6 dB, and the total attenuation a_ℓ due to the distance between transmitter and receiver is 190 dB. We want to find:

1. the average power of the thermal noise at the receiver output;
2. the minimum power of the signal transmitted by the satellite to obtain a SNR of 20 dB at the receiver output.

1. The two receiver amplifiers can be modelled as one amplifier with gain

    (g_A)_dB = (g_1)_dB + (g_2)_dB = 20 + 80 = 100 dB

and effective noise temperature

    T_A = T_A1 + T_A2/g_1 = T_A1 + (F_2 − 1) T0/g_1 = 125 + (10^{10/10} − 1) 290 / 10^{20/10} = 151 K    (4.95)

From (4.91), the average power of the output noise is

    P_wo = k (T_S + T_A) g_A B = 1.38·10⁻²³ (60 + 151) 10^{100/10} 10⁶ = 2.91·10⁻⁵ W, i.e. −15.36 dBm    (4.96)

2. As

    P_so = P_s g_sat (1/a_ℓ) g_ant g_A = P_s 10^{−44/10}    (4.97)

from Λ_out = (P_so/P_wo) ≥ 20 dB, i.e. (P_so/P_wo) ≥ 100, it follows that

    P_s ≥ 73 W    (4.98)

Relation between noise figure and SNR
For a source at room temperature T_S = T0, given the average power of the noise generated by the source at room temperature,

    P_wi^(S0) = (k/2) T0 2B = k T0 B    (4.99)

it can be shown that

    F = (p_s(f)/p_wi^(S0)(f)) / (p_so(f)/p_wo^(S0)(f))    (4.100)

that is, F equals the ratio between the SNR densities at the input and at the output. A more useful relation is obtained assuming that g(f) is constant within the passband B of the network, given that p_wi^(S0)(f) = kT0/2 is a constant. With

    Λ_in = P_s / P_wi^(S0)    (4.101)

then, from (4.92), we have

    Λ_out = Λ_in / F    (T_S = T0)    (4.102)

In other words, F is a measure of the reduction of the SNR at the output due to the noise introduced by the network. In Table 4.1 the typical values of F, T_A, and gain g are given for three devices; in the last column the frequency range usually considered for the operation of each device is also given.

Table 4.1 Parameters of three devices.

    Device           F (dB)   T_A (K)   g (dB)    Frequency
    maser            0.16     11        20 ÷ 30   6 GHz
    TWT amplifier    2.7      250       20 ÷ 30   3 GHz
    IC amplifier     7.0      1163      50        ≤ 70 MHz

4.4 Transmission lines

4.4.1 Fundamentals of transmission line theory
In this section, the principles of signal propagation in transmission lines are briefly reviewed. A uniform transmission line consists of a two-conductor cable with a uniform cross-section, which supports the propagation of transverse electromagnetic (TEM) waves [3, 1]; examples of transmission lines are twisted-pair cables and coaxial cables. We now develop the basic transmission line theory. With reference to Figure 4.13, which illustrates a uniform line, let x denote the distance from the origin and L be the length of the line.

Figure 4.13. Uniform transmission line of length L.
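The arithmetic of Example 4.3.1 is easy to get wrong when mixing dB and linear units, so the following Python sketch reproduces the whole link budget step by step (variable names are ours; the numerical inputs are those of the example):

```python
k = 1.38e-23    # Boltzmann constant [J/K]
T0 = 290.0      # room temperature [K]

def db_to_lin(x_db: float) -> float:
    """Convert a power ratio from dB to linear units."""
    return 10 ** (x_db / 10)

# Receiver: antenna T_S = 60 K; preamp T_A1 = 125 K, g1 = 20 dB;
# second amplifier F2 = 10 dB, g2 = 80 dB; bandwidth B = 1 MHz.
T_S, T_A1 = 60.0, 125.0
g1, g2 = db_to_lin(20), db_to_lin(80)
F2 = db_to_lin(10)
B = 1e6

T_A = T_A1 + (F2 - 1) * T0 / g1      # cascade noise temperature, (4.95): ~151 K
g_A = g1 * g2                        # overall receiver gain, 100 dB
P_wo = k * (T_S + T_A) * g_A * B     # output noise power, (4.96): ~2.9e-5 W

# Link: g_sat = 6 dB, path attenuation a_l = 190 dB, g_ant = 40 dB,
# so P_so = P_s * 10^(-44/10); require SNR >= 20 dB (factor 100).
link_gain = db_to_lin(6 - 190 + 40 + 100)
P_s_min = 100 * P_wo / link_gain     # (4.98): ~73 W
```

Running the script confirms T_A ≈ 151 K, P_wo ≈ −15.4 dBm and a minimum transmit power of about 73 W.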
Let us consider a uniform line segment of infinitesimal length dx, which we assume to be time invariant, depicted in Figure 4.14; the termination is found at distance x = 0 and the signal source at x = L. Let v = v(x, t) and i = i(x, t) be, respectively, the voltage and current at distance x at time t. The parameters r, ℓ, g, c are known as the primary constants of the line; they define, respectively, the resistance, inductance, conductance and capacitance of the line per unit length. The primary constants are in general functions of the frequency and slowly time varying; in this context, however, they will be considered time invariant. The model of Figure 4.14 is obtained using the first-order Taylor series expansion of v(x, t) and i(x, t) as a function of the distance x.

Figure 4.14. Line segment of infinitesimal length dx.

Ideal transmission line
We initially assume an ideal lossless transmission line, characterized by r = g = 0. To determine the law that establishes the voltage and current along the line, we observe that the voltage and current variations in the segment dx are given by

    (∂v/∂x) dx = −(ℓ dx) ∂i/∂t
    (∂i/∂x) dx = −(c dx) ∂v/∂t    (4.103)

Differentiating the first equation with respect to distance and the second with respect to time, we obtain

    ∂²v/∂x² = −ℓ ∂²i/∂x∂t
    ∂²i/∂t∂x = −c ∂²v/∂t²    (4.104)

Substituting ∂²i/∂t∂x in the first equation with the expression obtained from the second, we get the wave equation

    ∂²v/∂x² = ℓc ∂²v/∂t² = (1/ν²) ∂²v/∂t²    (4.105)

where ν = 1/sqrt(ℓc) represents the velocity of propagation of the signal on a lossless transmission line. The general solution to the wave equation for a lossless transmission line is given by

    v(x, t) = φ_1(t − x/ν) + φ_2(t + x/ν)    (4.106)

where φ_1 and φ_2 are arbitrary functions. Noting that from (4.103) ∂i/∂t = −(1/ℓ) ∂v/∂x, we get

    ∂i/∂t = (1/(ℓν)) [φ'_1(t − x/ν) − φ'_2(t + x/ν)]    (4.107)

where φ'_1 and φ'_2 are the derivatives of φ_1 and φ_2. Integrating by parts (4.107) we get

    i(x, t) = (1/(ℓν)) [φ_1(t − x/ν) − φ_2(t + x/ν)] + φ(x)    (4.108)

where φ(x) is time independent and can therefore be ignored in the study of propagation. Defining the characteristic impedance of a lossless transmission line as

    Z_0 = ℓν = sqrt(ℓ/c)    (4.109)

the expression for the current is given by

    i(x, t) = (1/Z_0) [φ_1(t − x/ν) − φ_2(t + x/ν)]    (4.110)

From the general solution to the wave equation we find that the voltage (or the current), considered as a function of the distance along the line, consists of two waves that propagate in opposite directions: the wave that propagates from the source to the line termination is called the source or incident wave; that which propagates in the opposite direction is called the reflected wave.

We consider now the propagation of a sinusoidal wave with frequency f = ω/2π in an ideal transmission line. The voltage at distance x = 0 is given by

    v(0, t) = V_0 cos(ωt)    (4.111)

The wave propagating in the positive direction of x is given by v_+(x, t) = |V_+| cos[ω(t − x/ν)]; that propagating in the negative direction is given by v_−(x, t) = |V_−| cos[ω(t + x/ν) + θ_p]. The transmission line voltage is obtained as the sum of the two components and is given by

    v(x, t) = |V_+| cos[ω(t − x/ν)] + |V_−| cos[ω(t + x/ν) + θ_p]    (4.112)

The current has the expression

    i(x, t) = (|V_+|/Z_0) cos[ω(t − x/ν)] − (|V_−|/Z_0) cos[ω(t + x/ν) + θ_p]    (4.113)
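The definitions Z_0 = sqrt(ℓ/c) and ν = 1/sqrt(ℓc) are worth checking with concrete numbers; the Python sketch below uses illustrative per-metre values (ℓ = 250 nH/m, c = 100 pF/m, our choice, roughly coax-like), which give the familiar 50 Ω and a velocity of 2·10⁸ m/s:

```python
import math

def char_impedance(l_per_m: float, c_per_m: float) -> float:
    """Characteristic impedance Z_0 = sqrt(l/c) of a lossless line, (4.109)."""
    return math.sqrt(l_per_m / c_per_m)

def phase_velocity(l_per_m: float, c_per_m: float) -> float:
    """Propagation velocity v = 1/sqrt(l*c) of a lossless line, (4.105)."""
    return 1.0 / math.sqrt(l_per_m * c_per_m)

# Illustrative line: l = 250 nH/m, c = 100 pF/m (assumed values)
Z0 = char_impedance(250e-9, 100e-12)   # 50 ohm
v = phase_velocity(250e-9, 100e-12)    # 2e8 m/s
wavelength_100MHz = v / 100e6          # lambda = v/f, see (4.116): 2 m
```

At 100 MHz this line has a wavelength of 2 m, which is why lines even a few metres long must be treated with wave propagation in mind.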
Let us consider a point on the x-axis individuated at each time instant t by the condition that the argument of the function F(t − x/ν) is a constant: this point is seen by an observer as moving at velocity ν in the positive direction of the x-axis. For sinusoidal waves, the velocity for which the phase is a constant is called the phase velocity ν. For example, propagation in free space is characterized by ν = c = 3·10⁸ m/s.

It is useful to write (4.112) and (4.113) in complex notation:

    V = V_+ e^{−jβx} + V_− e^{jβx}    (4.114)

    I = (1/Z_0) (V_+ e^{−jβx} − V_− e^{jβx})    (4.115)

where β = ω/ν denotes the phase constant, and the phasors V and I represent amplitude and phase at distance x of the sinusoidal signals (4.112) and (4.113), respectively. If V_+ is taken as the reference phasor with phase equal to zero, then V_− = |V_−| e^{jθ_p}, where θ_p is the phase rotation between the incident and the reflected waves at x = 0. We define the wavelength as λ = 2π/β; we note that frequency f and wavelength λ are related by

    λ = ν/f    (4.116)

Let us consider a transmission line having as termination an impedance Z_L. By the Kirchhoff laws, the voltage and current at the termination are given by

    V_L = V_+ + V_−
    I_L = V_L/Z_L = (V_+ − V_−)/Z_0    (4.117)

The reflection coefficient is defined as the ratio between the phasors representing, respectively, the reflected and the incident waves, ρ = V_−/V_+. From (4.117) it turns out that

    ρ = V_−/V_+ = (Z_L − Z_0)/(Z_L + Z_0) = |ρ| e^{jθ_p}    (4.118)

The transmission coefficient is defined as the ratio between the phasors representing, respectively, the termination voltage and the incident wave, τ = V_L/V_+:

    τ = 2Z_L/(Z_L + Z_0)    (4.119)

Let us consider some specific cases:

• if Z_L = Z_0, ρ = 0 and there is no reflection;
• if Z_L = ∞, the line is open-circuited, ρ = 1 and V_− = V_+;
• if Z_L = 0, the line is short-circuited, ρ = −1 and V_− = −V_+.

Defining the incident power as P_+ = |V_+|²/Z_0 and the reflected power as P_− = |V_−|²/Z_0, we obtain P_−/P_+ = |ρ|²; the ratio between the power delivered to the load and the incident power is hence given by 1 − |ρ|².
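The relations (4.118) and (4.119) and the delivered-power ratio 1 − |ρ|² translate directly into code; the following sketch (function names are ours) also works for complex loads:

```python
def reflection_coefficient(Z_L: complex, Z0: float) -> complex:
    """rho = (Z_L - Z0) / (Z_L + Z0), per (4.118)."""
    return (Z_L - Z0) / (Z_L + Z0)

def transmission_coefficient(Z_L: complex, Z0: float) -> complex:
    """tau = 2 Z_L / (Z_L + Z0), per (4.119); note that tau = 1 + rho."""
    return 2 * Z_L / (Z_L + Z0)

def delivered_power_fraction(Z_L: complex, Z0: float) -> float:
    """Fraction of the incident power delivered to the load, 1 - |rho|^2."""
    return 1 - abs(reflection_coefficient(Z_L, Z0)) ** 2
```

For a 100 Ω load on a 50 Ω line, ρ = 1/3 and 8/9 of the incident power reaches the load; the matched, open and short cases reproduce the three bullets above.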
Non-ideal transmission line
Typically, in a transmission line the primary constants r and g are different from zero. For sinusoidal waves in steady state, the changes in voltage and current in a line segment of infinitesimal length, characterized by an impedance Z and an admittance Y per unit length, can be expressed using complex notation as

    dV/dx = −Z I
    dI/dx = −Y V    (4.120)

Differentiating, and substituting in the first equation the expression of dI/dx obtained from the second, we get

    d²V/dx² = γ² V    (4.121)

where

    γ = sqrt(ZY)    (4.122)

is a characteristic constant of the transmission line called the propagation constant. Let α and β be, respectively, the real and imaginary parts of γ: α is the attenuation constant, measured in neper per unit of length, and β is the phase constant, measured in radians per unit of length. The solution of the differential equation for the voltage can be expressed in terms of exponential functions as

    V = V_+ e^{−γx} + V_− e^{γx} = V_+ e^{−αx} e^{−jβx} + V_− e^{αx} e^{jβx}    (4.123)

The expression of the current is given by

    I = (1/Z_0) (V_+ e^{−γx} − V_− e^{γx})    (4.124)

where

    Z_0 = sqrt(Z/Y)    (4.125)

is the characteristic impedance of the transmission line. The propagation constant and the characteristic impedance are also known as the secondary constants of the transmission line.

Frequency response
Let us consider the transmission line of Figure 4.15, with a sinusoidal voltage source v_i, having impedance Z_i, and a load Z_L.

Figure 4.15. Transmission line with sinusoidal voltage generator v_i and load Z_L.

From (4.123), the voltage at the load (x = 0) can be expressed as V_L = V_+(1 + ρ). Recalling that V_−/V_+ = ρ, we define the open-circuit output voltage

    V_o = V_L|_{Z_L=∞} = V_+(1 + ρ)|_{ρ=1} = 2V_+

For the voltage V_1 and the current I_1 at the line input (x = L) we find

    V_1 = V_i − Z_i I_1 = V_+ (e^{γL} + ρ e^{−γL})
    I_1 = (V_+/Z_0) (e^{γL} − ρ e^{−γL})    (4.126)

where Z_i denotes the generator impedance. The input impedance of the 2-port network is given by

    Z_1 = V_1/I_1 = Z_0 (1 + ρ e^{−2γL}) / (1 − ρ e^{−2γL})    (4.127)

and an analogous expression holds for the output impedance Z_2 (4.128), with ρ replaced by the reflection coefficient of the generator impedance Z_i. We now want to determine the ratio between the voltage V_L and the voltage V_1. Observing the above relations, we find the following frequency responses:

    G_L = V_L/V_o = V_+(1 + ρ)/(2V_+) = (1 + ρ)/2 = Z_L/(Z_L + Z_0)    (4.129)

    G_1 = V_o/V_1 = 2 e^{−γL} / (1 + ρ e^{−2γL})    (4.130)

    G_i = V_1/V_i = Z_1/(Z_i + Z_1) = Z_0 (1 + ρ e^{−2γL}) / [Z_0 (1 + ρ e^{−2γL}) + Z_i (1 − ρ e^{−2γL})]    (4.131)

Then, the channel frequency response is given by

    G_Ch = V_L/V_1 = G_1 G_L    (4.132)

Let us consider some specific cases.

• Matched transmission line: ρ = 0 for Z_i = Z_L = Z_0 (so that G_i = 1/2), and

    G_Ch = e^{−γL}    (4.133)

• Short-circuited transmission line: ρ = −1, and

    G_Ch = 0    (4.134)

• Open-circuited transmission line: ρ = 1, and

    G_Ch = 2 e^{−γL}/(1 + e^{−2γL}) = 1/cosh(γL)    (4.135)

We note that, for a matched transmission line, the available attenuation is given by

    a_d(f) = 1/|e^{−γL}|² = e^{2αL}    (4.136)

where α = Re[γ]. To determine the power gain of the network in the general case, we can use the general equation (4.25); we obtain

    g(f) = (1 − |ρ|²) e^{−2αL} / (1 − |ρ|² e^{−4αL})    (4.137)

Alternatively, one can introduce an attenuation in dB per unit of length, (ã_d(f))_dB, as

    ã_d(f) = 10^{(a_d(f))_dB/(10 L)}    (4.138)

The relation between α and (ã_d(f))_dB is given by

    (ã_d(f))_dB = 8.68 α    (4.139)

where α expresses the attenuation in neper per unit of length. From (4.139), the attenuation in dB introduced by a matched transmission line is equal to

    (a_d(f))_dB = (ã_d(f))_dB L    (4.140)

In a transmission line with a non-matched resistive load that satisfies the condition Z_L ≪ Z_0, from (4.118) we get 1 + ρ ≈ 2Z_L/Z_0 and ρ² ≈ 1 − 4Z_L/Z_0; moreover, ρ² e^{−4αL} ≈ 0. Then (4.137) yields

    (a_d(f))_dB = (ã_d(f))_dB L − 10 log10 |4 Z_L/Z_0|    (4.141)
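The 8.68 factor in (4.139) is simply 20/ln(10), the conversion between neper and dB for an amplitude ratio; the short sketch below (our helper, not from the text) applies it to a matched line:

```python
import math

NEPER_TO_DB = 20 / math.log(10)   # = 8.686, the "8.68" factor of (4.139)

def matched_line_attenuation_dB(alpha_neper_per_m: float, L_m: float) -> float:
    """(a_d)_dB = 8.68 * alpha * L for a matched line, per (4.136)-(4.140)."""
    return NEPER_TO_DB * alpha_neper_per_m * L_m
```

As a cross-check against Table 4.2 further below: a gauge-24 pair at 2 kHz has α = 0.23 neper/km, which gives very nearly the tabulated 2.00 dB/km.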
Conditions for the absence of signal distortion
We recall that the Heaviside conditions for the absence of signal distortion are satisfied if G_Ch(f) has a constant amplitude and a linear phase, at least within the passband of the source. For a matched transmission line, these conditions are satisfied if α is a constant and β is a linear function of the frequency. The secondary parameters of the transmission line can be expressed as

    γ = α + jβ = sqrt((r + jωℓ)(g + jωc)),    Z_0 = sqrt((r + jωℓ)/(g + jωc))

It can be shown that the Heaviside conditions are equivalent to the condition

    r c = g ℓ    (4.142)

In the special case g = 0, we obtain

    α = ω sqrt(ℓc/2) {[1 + r²/(ω²ℓ²)]^{1/2} − 1}^{1/2}    (4.143)

and

    β = ω sqrt(ℓc/2) {[1 + r²/(ω²ℓ²)]^{1/2} + 1}^{1/2}    (4.144)

For frequencies at which r ≪ ωℓ, using the approximation

    (1 + r²/(ω²ℓ²))^{1/2} ≈ 1 + r²/(2ω²ℓ²)    (4.145)

we find

    α ≈ (r/2) sqrt(c/ℓ)    and    β ≈ ω sqrt(ℓc)    (4.146)

An expression of the propagation constant generally used to characterize the propagation of TEM waves over a metallic transmission line [1] is

    γ = K sqrt(ω/2) + jK sqrt(ω/2) + jω sqrt(ℓc)    (4.147)

where K is a constant that depends on the transmission line. This more accurate model takes into account the variation of r with the frequency due to the skin effect, and shows that both the attenuation constant and the phase constant must include a term proportional to the square root of the frequency. The expression (4.147) is valid for both coaxial and twisted-pair cables insulated with plastic material. The attenuation constant of the transmission line is therefore given by

    α(f) = K sqrt(πf)    (neper/m)    (4.148)

and the attenuation introduced by the transmission line can be expressed as

    (ã_d(f))_dB = 8.68 K sqrt(πf)    (dB/m)    (4.149)

Impulse response of a non-ideal transmission line
For a matched transmission line, from the expression (4.133) of the frequency response, with γ given by (4.147), the impulse response has the following expression:

    g_Ch(t) = (KL / (2 sqrt(π t³))) e^{−(KL)²/(4t)}    (4.150)

where the delay sqrt(ℓc) L introduced by the term jω sqrt(ℓc) has not been considered. The pulse g_Ch is shown in Figure 4.16 for various values of the product KL; we note a larger dispersion of g_Ch for increasing values of KL.

Figure 4.16. Impulse response g_Ch(t) of a matched transmission line for various values of KL (curves for KL = 2, 3, 4, 5, 6; t in s).

Secondary constants of some transmission lines
In Table 4.2 we give the values of Z_0 and γ = α + jβ experimentally measured for some telephone transmission lines, characterized by a certain diameter, which is usually indicated by a parameter called gauge. From the value of α(f) at a certain frequency f = f_0, given (4.148), we can obtain the value of K; therefore it is possible to determine the attenuation constant at every other frequency. The behavior of α as a function of frequency is given in Figure 4.17 for four telephone lines [2]; we may note that it follows the sqrt(f) law in the range of frequencies f < 10 kHz. For some transmission lines this law is followed also for f > 100 kHz,
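The impulse response (4.150) can be explored numerically; a useful sanity check is that it integrates to the DC gain of the matched line, which is 1, and in closed form the integral up to time T equals erfc(KL/(2 sqrt(T))). The following Python sketch (our code, not from the book) evaluates g_Ch and verifies this:

```python
import math

def g_ch(t: float, KL: float) -> float:
    """Impulse response (4.150) of a matched line with gamma from (4.147);
    the constant delay sqrt(l*c)*L is omitted, as in the text."""
    if t <= 0:
        return 0.0
    return KL / (2 * math.sqrt(math.pi * t ** 3)) * math.exp(-KL ** 2 / (4 * t))

def g_ch_integral(T: float, KL: float, n: int = 200_000) -> float:
    """Midpoint-rule integral of g_ch over [0, T]; should approach
    erfc(KL / (2 sqrt(T))), which tends to 1 as T grows."""
    dt = T / n
    return sum(g_ch((i + 0.5) * dt, KL) for i in range(n)) * dt
```

The peak of g_Ch occurs at t = (KL)²/6, and the heavy tail of the pulse is what produces the increasing dispersion visible in Figure 4.16 as KL grows.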
albeit with a different constant of proportionality.

Table 4.2 Secondary constants of some telephone lines. [© 1982 Bell Telephone Laboratories. Reproduced with permission of Lucent Technologies, Inc./Bell Labs.]

    Gauge          Frequency   Characteristic impedance   Propagation constant α + jβ   Attenuation ã_d = 8.68 α
    (diameter, mm) (Hz)        Z_0 (Ω)                    (neper/km) (rad/km)           (dB/km)
    19 (0.9119)    1000        297 − j278                 0.09 + j0.09                  0.78
                   2000        217 − j190                 0.12 + j0.14                  1.07
                   3000        183 − j150                 0.15 + j0.18                  1.27
    22 (0.6426)    1000        414 − j401                 0.13 + j0.14                  1.13
                   2000        297 − j279                 0.18 + j0.19                  1.57
                   3000        247 − j224                 0.22 + j0.24                  1.90
    24 (0.5105)    1000        518 − j507                 0.16 + j0.17                  1.43
                   2000        370 − j355                 0.23 + j0.24                  2.00
                   3000        306 − j286                 0.28 + j0.30                  2.42
    26 (0.4039)    1000        654 − j645                 0.21 + j0.21                  1.81
                   2000        466 − j453                 0.29 + j0.30                  2.55
                   3000        383 − j367                 0.35 + j0.37                  3.10

Figure 4.17. Attenuation as a function of frequency for some telephone transmission lines: three are polyethylene-insulated cables (PIC) and one is a coaxial cable with a diameter of 9.525 mm. [© 1982 Bell Telephone Laboratories. Reproduced with permission of Lucent Technologies, Inc./Bell Labs.]

In the local loop,⁵ the phase β(f) may result very distorted in the passband. To force the primary constants to satisfy the Heaviside conditions in the voice band, which goes from 300 to 3400 Hz, formerly some lump inductors were placed at equidistant points along the transmission line. This procedure, called inductive loading, causes α(f) to be flat in the voice band, but considerably increases the attenuation outside of the voice band. Typical behavior of α and β in the frequency band 0 ÷ 4000 Hz, with and without loading, is given in Figure 4.18 for a transmission line with gauge 22 [2].

Figure 4.18. Attenuation constant α and phase constant β for a telephone transmission line with and without loading. [© 1982 Bell Telephone Laboratories. Reproduced with permission of Lucent Technologies, Inc./Bell Labs.]

The digital subscriber line (DSL) technologies, introduced for data transmission in the local loop, require a bandwidth much greater than 4 kHz, up to about 20 MHz for the VDSL technology (see Chapter 17). For DSL applications it is therefore necessary to remove possible loading coils that are present in the local loops.

The frequency response of a DSL transmission line can also be modified by the presence of one or more bridged taps. A bridged tap consists of a twisted-pair cable of a certain length L_BT, terminated by an open circuit and connected in parallel to a local loop. At the connection point, the incident signal separates into two components; the component propagating along the bridged tap is reflected at the point of the open circuit, and the component propagating on the transmission line must therefore be calculated taking also into consideration this reflected component. At the frequencies f_BT = ν/λ_BT, where λ_BT satisfies the condition (2n + 1) λ_BT/4 = L_BT, n = 0, 1, ..., we get destructive interference at the connection point between the reflected and incident components: this interference reveals itself as a notch in the frequency response of the transmission line.

⁵ By local loop we intend the transmission line that goes from the user telephone set to the central office.
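The bridged-tap notch condition (2n + 1) λ_BT/4 = L_BT is easy to turn into frequencies via f = ν/λ; the sketch below does so, assuming an illustrative propagation velocity ν ≈ 2·10⁸ m/s for twisted pair (an assumption of ours, not a value from the text):

```python
def bridged_tap_notches(L_BT_m: float, v_m_per_s: float, f_max_hz: float):
    """Notch frequencies f = (2n+1) * v / (4 * L_BT), n = 0, 1, ...,
    caused by destructive interference with the open-circuit reflection."""
    notches = []
    n = 0
    while True:
        f = (2 * n + 1) * v_m_per_s / (4 * L_BT_m)
        if f > f_max_hz:
            return notches
        notches.append(f)
        n += 1
```

A hypothetical 150 m tap with ν = 2·10⁸ m/s puts the first notch near 333 kHz, with further notches at odd multiples, i.e. well inside the band used by DSL systems.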
4.4.2 Crosstalk
The interference signal that is commonly referred to as crosstalk is determined by magnetic coupling and unbalanced capacitance between two adjacent transmission lines. Given the large number of transmission lines actually in use, to evaluate the performance of DSL systems we usually refer to a limited number of loop characteristics, which can be viewed as samples taken from the ensemble of frequency responses.

On the other hand, the transmission characteristics of unshielded twisted-pair (UTP) cables commonly used for data transmission over local area networks are defined by the EIA/TIA and ISO/IEC standards. The cables are divided into different categories according to the values of: 1) the signal attenuation per unit of length; 2) the attenuation of the near-end crosstalk signal, or NEXT, that will be defined in the next section; and 3) the characteristic impedance. Cables of category three (UTP3) are commonly called voice-grade; those of categories four and five (UTP4 and UTP5) are data-grade. As illustrated in Table 4.3, the signal attenuation and the intensity of NEXT are substantially larger for UTP3 cables than for UTP4 and UTP5 cables.

Table 4.3 Transmission characteristics defined by the EIA/TIA for unshielded twisted-pair (UTP) cables.

           Signal attenuation at 16 MHz   NEXT attenuation at 16 MHz   Characteristic impedance
    UTP3   13.15 dB/100 m                 ≥ 23 dB                      100 Ω ± 15%
    UTP4   8.85 dB/100 m                  ≥ 38 dB                      100 Ω ± 15%
    UTP5   8.20 dB/100 m                  ≥ 44 dB                      100 Ω ± 15%

Let us consider the two transmission lines of Figure 4.19, where the terminals (1, 1') belong to the disturbing transmission line and the terminals (2, 2') belong to the disturbed transmission line. Depending on whether the receiver side of the disturbed line is the same as the transmitter side of the disturbing line, or the opposite side, the interference signals are called near-end crosstalk, or NEXT, or far-end crosstalk, or FEXT, respectively.

Figure 4.19. Transmission-lines configuration for the study of crosstalk.

We will assume that the length of the transmission line is much longer than the wavelength corresponding to the maximum transmitted frequency, and that the impedance Z_0 is much higher than the inductor reactance. In the study of the interference signal produced by magnetic coupling, we consider Figure 4.20, where I_1 ≈ V_1/Z_0 and m denotes the coupling inductance. The induced electromagnetic force (EMF) is given by E = j2πf m I_1. The EMF produces a current

    I_m = E/(2Z_0) = j2πf (m/(2Z_0)) I_1

Figure 4.20. Interference signal produced by magnetic coupling.

To study the interference signal due to unbalanced capacitance, we consider the circuit of Figure 4.21a, which can be redrawn in an equivalent way as illustrated in Figure 4.21b. We assume that the impedance Z_0 is much smaller than the reactance of the capacitors that can be found on the bridge. Applying the principle of the equivalent generator to the capacitance bridge, we find

    I_c = V_{2'2}|_{I_c = 0} / Z_{2'2}    (4.151)

from which we obtain

    I_c = j2πf V_1 (c_{12} c_{1'2'} − c_{12'} c_{1'2}) / (c_{12} + c_{1'2} + c_{12'} + c_{1'2'}) = j2πf Δc V_1    (4.152)

Figure 4.21. Interference signal produced by unbalanced capacitance: (a) circuit and (b) equivalent bridge representation.

Recalling that the current I_c is equally divided between the impedances Z_0 on which the transmission line terminates, we find that the crosstalk current produced at the transmitter-side termination is I_p = I_m + I_c/2, while at the receiver-side termination the magnetic contribution changes sign, giving I_t = I_c/2 − I_m. We now evaluate the total contribution of the near-end and far-end crosstalk signals for lines with distributed impedances.

Near-end crosstalk
Let

    a_p(x) = m(x)/(2Z_0) + Δc(x) Z_0/2    (4.153)

be the near-end crosstalk coupling function at distance x from the origin. As illustrated in Figure 4.22, in complex notation the NEXT signal is expressed as

    V_p = Z_0 I_p = ∫_0^L V_1 e^{−2γx} j2πf a_p(x) dx    (4.154)

Figure 4.22. Illustration of near-end crosstalk (NEXT) and far-end crosstalk (FEXT) signals.

To calculate the power spectral density of NEXT, we need to know the autocorrelation function of the random process a_p(x). A model commonly used in practice assumes that a_p(x) is a white stationary random process, with autocorrelation

    r_{a_p}(z) = E[a_p(x + z) a_p*(x)] = r_p(0) δ(z)    (4.155)

For NEXT the following relation holds:

    E[|V_p(f)|²] = E[|V_1(f)|²] (2πf)² r_p(0) (1 − e^{−4K sqrt(πf) L}) / (4K sqrt(πf))    (4.156)

where K is defined by (4.148). Using (1.449), for 4K sqrt(πf) L ≫ 1 we obtain

    E[|V_p(f)|²] ≈ k_p f^{3/2} E[|V_1(f)|²]    (4.157)

where

    k_p = π^{3/2} r_p(0) / K    (4.158)

As |G_p(f)|² = E[|V_p(f)|²] / E[|V_1(f)|²], the level of NEXT coupling is given by⁶

    |G_p(f)|² = k_p f^{3/2}    (4.159)

To perform computer simulations of data transmission systems over metallic lines in the presence of NEXT, it is required to characterize not only the amplitude, but also the phase of NEXT coupling. In addition to experimental models obtained through laboratory measurements, the following stochastic model is used:

    a_p(x) = Σ_{i=0}^{L/Δx − 1} a_i w_Δx(x − iΔx)    (4.160)

where

    w_Δx(x) = 1 if x ∈ [0, Δx), 0 otherwise    (4.161)

and a_i, i = 0, ..., L/Δx − 1, denote statistically independent Gaussian random variables with zero mean and variance E[a_i²] = r_p(0)/Δx. A NEXT coupling function is thus given by

    G_NEXT(f) = j2πf Σ_{i=0}^{L/Δx − 1} a_i Δx e^{−2γ(i + 1/2)Δx}    (4.162)

If we know the parameters of the transmission line, K and k_p, then from (4.158) the variance of a_i to be used in the simulations is given by

    E[a_i²] = K k_p / (π^{3/2} Δx)    (4.163)

⁶ Observing (1.449), |G_p(f)|² is also equal to the ratio between the PSDs of v_p and v_1.
Transmission media Farend crosstalk Let at . Deviations from the characteristic expressed by (4. we assume that at is a white stationary random process.168) where kt D .2 dB has been included to take into account the attenuation caused by the possible presence of connectors. with autocorrelation rat .147) may be caused by losses in the dielectric material of the cable.2³ /2 rt .164) be the farend crosstalk coupling function at distance x from the origin. .1 For localarea network (LAN) applications. f /j2 ]e p 2K ³ f L (4. The level of FEXT coupling is given by jGt . A frequency independent attenuation of 1. f /j2 ] p 2K ³ f L (4. For the IEEE Standard 100BASET2.2³ f /2 rt . We note that the signal attenuation at the frequency of 16 MHz is equal to 14. f /j2 ] D E[jV1 . f /j2 ] D kt f 2 Le E[jV1 . the presence of connectors. Example 4. as it indicates a constant propagation delay.169) where f is expressed in MHz and L in meters.x/] D rt .z/ For the FEXT signal the following relation holds E[jVt .23 [4].x/ D m. the FEXT signal is given by Z L V1 e L j2³ f at . We note that for highspeed data transmission systems over unshielded twistedpair cables.0/L (4. nonhomogeneity of the transmission line.0/. which deﬁnes the physical layer for data transmission at 100 Mb/s over UTP3 cables in Ethernet LANs (see Chapter 17). the following worstcase frequency response is considered: GCh .290 Chapter 4.x C z/a tŁ . NEXT usually represents the dominant source of interference. f / D 10 p 1:2 20 e .x/ dx (4.6 dB. In (4.0/Ž.3 for UTP3 cables. f /j2 D E[jVt .0:00385 j f C0:00028 f /L p (4. etc.165) Vt D Z 0 It D 0 Analogously to the case of NEXT.z/ D E[a t .x/ 1c. The amplitude of the frequency response obtained for a cable length L D 100 m is shown in Figure 4.x/ Z0 C 2Z 0 2 (4.167) where L is the length of the transmission line. a higher value than that indicated in Table 4. In complex notation.4.166) .169). the term e j2³ f `cL is ignored. 
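The worst-case UTP3 response (4.169) is easy to evaluate directly; the short computation below reproduces the attenuation quoted in the text for L = 100 m at 16 MHz (about 14.6 dB, of which 1.2 dB accounts for connectors). The delay term e^{-j2πf ℓc L} is ignored, as stated in the text.

```python
import numpy as np

# Worst-case 100BASE-T2 cable frequency response, eq. (4.169):
#   G_Ch(f) = 10^(-1.2/20) * exp(-(0.00385*sqrt(j*f) + 0.00028*f)*L)
# with f in MHz and L in meters (constants taken from the text).
def g_ch(f_mhz, length_m):
    s = np.sqrt(1j*f_mhz)                 # sqrt(j*f), principal branch
    return 10**(-1.2/20)*np.exp(-(0.00385*s + 0.00028*f_mhz)*length_m)

att_db = -20*np.log10(abs(g_ch(16.0, 100.0)))
print(att_db)    # ~14.55 dB: the ~14.6 dB quoted in the text for 16 MHz, 100 m
```

Since Re{√(jf)} = √(f/2), the first exponent term gives the √f (skin-effect) attenuation typical of metallic lines, while the 0.00028·f term models dielectric losses.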
the maximum length of cables connecting stations is typically limited to 100 m.
23.23 as a dotted line. 6. due to the factor f 3=2 . We recall that for electromagnetic wave propagation in free space. For indepth study of optical ﬁber properties and of optical component characteristics we refer the reader to the vast literature existing on the subject [5. we note that the useful interval for transmission is in the range from 800 to 1600 nm. and four realizations of NEXT coupling function. we note the increase as a function of frequency of 15 dB/decade. f in MHz –20 (dB) –30 Amplitude –40 Four NEXT coupling functions –50 –60 0 5 10 15 20 f (MHz) 25 30 35 40 Figure 4. [ c 1997 IEEE.158) is illustrated in Figure 4. the wavelength rather than the frequency is normally used.116) holds: a frequency of 3 Ð 1014 Hz corresponds therefore to a wavelength of 1 µm for transmission over optical ﬁbers. in this section we limit ourselves to introducing some fundamental concepts.0 dB 16 MHz NEXT coupling envelope curve –21 + 15 log10 ( f/16 ) .24 [8. The signal attenuation as a function of the wavelength exhibits the behavior shown in Figure 4.5 Optical ﬁbers Transmission systems using light pulses that propagate over thin glass ﬁbers were introduced in the 1970s and have since then undergone continuous development and experienced an increasing penetration. Optical ﬁbers 291 0 Amplitude characteristic for 100 m cable length –10 –14. 4. that are found in the optical band and are much higher than the frequency of radio waves or microwaves. 9].3 for UTP3 cables.] The level of NEXT coupling (4. The level of NEXT coupling equal to 21 dB at the frequency of 16 MHz is larger than that given in Table 4.23. The amplitude characteristics of four realizations of the NEXT coupling function (4. to the point that they now constitute a fundamental element of modern information highways. to identify a transmission band.6 dB –21. Amplitude of the frequency response for a voicegrade twistedpair cable with length equal to 100 m.5. that corresponds . the relation (4.4. 
The term “optical communications” is used to indicate the transmission of information by the propagation of electromagnetic ﬁelds at frequencies typically of the order of 1014 ł 1015 Hz. 7].162) are also shown in Figure 4.
c 1980 IEEE.2. see also Miya et al.292 Chapter 4. equivalent to that needed for the transmission of ¾300:000 television signals. Three regions are typically used for transmission: the ﬁrst window goes from 800 to 900 nm. We immediately realize the enormous capacity of ﬁber transmission systems: for example. [From Li (1980).24. A fundamental device in optical communications is represented by the laser. such as wavelengthdivision multiplexing (WDM) and optical frequencydivision multiplexing (OFDM). Optical transmission lines with lengths of over a few hundred meters use ﬁber glass. a system that uses only 1% of the 2 Ð 1014 Hz bandwidth mentioned above. although the propagation of electromagnetic ﬁelds in the atmosphere at these frequencies is also considered for transmission (see Section 17.] to a bandwidth of 2 Ð 1014 Hz. the second from 1250 to 1350 nm. Description of a ﬁberoptic transmission system The main components of a ﬁberoptic transmission system are illustrated in Figure 4.25 [10]. has an available bandwidth of 2 Ð 1012 Hz. Attenuation curve as a function of wavelength for an optical ﬁber. because they present less attenuation with respect to ﬁbers using plastic material. Dispersion in the transmission medium causes “spreading” of the transmitted pulses. and the third from 1500 to 1600 nm. beginning in the 1970s. the majority of optical communication systems employ as transmission medium an optical ﬁber. multiplexing techniques using optical devices have been developed. this phenomenon in turn causes intersymbol interference and limits the available bandwidth of the transmission . we note that.1). which acts as a waveguide. made coherent light sources available for the transmission of signals. which. To efﬁciently use the band in the optical spectrum. Moreover. Transmission media Figure 4. (1979). each with a bandwidth of 6 MHz.
with values near zero around the wavelength of 1300 nm for conventional ﬁbers. 1300. normalized by the length of the optical ﬁber. because of the low attenuation and dispersion.170) where M is the dispersion coefﬁcient of the material. The stepindex (SI) ﬁber is characterized by a constant value of the refraction index. Mg is the dispersion coefﬁcient related to the geometry of the waveguide. monomodal ﬁbers are preferred for applications that require wide transmission bandwidth and very long transmission lines. Multimode ﬁbers allow the propagation of more than one mode of the electromagnetic ﬁeld. L denotes the length of the ﬁber and 1½ denotes the spectral width of the light source. Special ﬁbers are designed to compensate for the dispersion introduced by the material. The bandwidth of the transmission medium is inversely proportional to the dispersion. and 1550 nm. are given for different types of ﬁbers. whereas the gradedindex (GRIN) ﬁber has a refraction index decreasing with the distance from the ﬁber axis.M C Mg / L 1½ (4.5. Elements of a typical ﬁberoptic transmission system. we note that the dispersion is minimum in the second window. the monomodal ﬁbers are characterized by larger bandwidths.4 typical values of the transmission bandwidth. and 15 ps/(nmðkm) at wavelengths of 850. In this case the medium introduces signal distortion caused by the fact that propagation of energy for different modes has different speeds: for this reason multimodal ﬁbers are used in applications where the transmission bandwidth and the length of the transmission line are not large. In Table 4. Monomode ﬁbers limit the propagation to a single mode. Optical ﬁbers 293 Figure 4. to limit the number of modes . 0.25.M C Mg / has values near 120. thus eliminating the dispersion caused by multimode propagation. Because in this case the dispersion is due only to the material and the geometry of the waveguide.4. A measure of the pulse dispersion is given by 1− D . 
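The pulse-broadening relation (4.170) can be checked with a one-line computation. The total dispersion value of 15 ps/(nm·km) at 1550 nm comes from the text; the fiber length and source spectral width below are illustrative assumptions.

```python
# Pulse broadening from dispersion, eq. (4.170): dtau = (M + Mg) * L * dlambda
D_total = 15.0      # total dispersion M + Mg at 1550 nm, ps/(nm km) (from text)
L_km    = 10.0      # fiber length, km (assumed)
dlam_nm = 2.0       # source spectral width, nm (assumed, LED-like)

dtau_ps = D_total * L_km * dlam_nm
print(dtau_ps)      # 300.0 ps of pulse spreading
```

This also shows why laser diodes, with their smaller Δλ, yield lower dispersion than LEDs, and why operating near the zero-dispersion wavelength (about 1300 nm for conventional fibers) is attractive.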
The total dispersion . As noticed previously. respectively. medium. these ﬁbers are normally used in very long distance connections.
173) the ﬁrst term is due to shot noise and the second term to thermal noise.gi ² P Rc /2 R L 2e R L B. and R L is the resistance of the load that follows the photodetector. . 16]. n is a parameter that indicates the photodetector excess noise. P Rc is the power of the incident optical signal and ² is the photodetector response.6 Radio links The term radio is used to indicate the transmission of an electromagnetic ﬁeld that propagates in free space. e is the charge of the electron. B is the receiver bandwidth. The conversion from a current signal to an electromagnetic ﬁeld that propagates along the ﬁber can be described in terms of light signal power by the relation PT x D k0 C k1 i (4.172) where i is the device output current. Transmission media Table 4.294 Chapter 4.4 Characteristic parameters of various types of optical ﬁbers. ž mobile terrestrial communication systems [12. k is Boltzmann constant. 15. Laser diodes are characterized by a smaller spectral width 1½ as compared to that of LEDs. 14. 13. the diameter of the monomodal ﬁber is related to the wavelength and is normally about one order of magnitude smaller than that of multimodal ﬁbers. The more widely used photodetector devices are semiconductor photodiodes. Semiconductor laser diodes (LD) or lightemitting diodes (LED) are used as signal light sources in most applications. I D is the photodetector dark current. Signal quality is measured by the signaltonoise ratio expressed as 3D gin . The transmitted waveform can therefore be seen as a replica of the modulation signal. which convert the optical signal into a current signal according to the relation i D ² P Rc (4. Tw is the effective noise temperature in Kelvin. Some examples of radio transmission systems are: ž pointtopoint terrestrial links [11]. 4. and therefore lead to a lower dispersion (see (4.170)). We note that in the denominator of (4. 
Fiber multimode SI multimode GRIN multimode GRIN monomode monomode Wavelength (nm) 850 850 1300 1300 1550 Source LED LD LD o LED LD LD Bandwidth (MHzÐkm) 30 500 1000 >10000 >10000 to one.5 mA/mW.173) where gi is the photodetector current gain. these sources are usually modulated by electronic devices.171) where k0 and k1 are constants. in this case the current signal. Typical values of ² are of the order of 0.I D C ² P Rc / C 4kTw B (4.
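The photodetector signal-to-noise ratio (4.173) combines a shot-noise term and a thermal-noise term in the denominator. The sketch below assumes a PIN photodiode (current gain g_i = 1, excess-noise exponent n = 2) and illustrative values for the dark current, load resistance, bandwidth and incident optical power; the responsivity 0.5 A/W corresponds to the 0.5 mA/mW typical value quoted in the text.

```python
import math

# SNR of (4.173):
#   SNR = (g_i*rho*P)^2 * R_L / (g_i^n * 2*e*R_L*B*(I_D + rho*P) + 4*k*T_w*B)
# first denominator term: shot noise; second: thermal noise.
e_charge = 1.602e-19   # electron charge, C
k_boltz  = 1.381e-23   # Boltzmann constant, J/K

def photodetector_snr(P_rc, rho=0.5, g_i=1.0, n=2.0,
                      I_D=1e-9, R_L=50.0, B=1e9, T_w=300.0):
    # rho = 0.5 A/W (i.e. 0.5 mA/mW); other parameters are assumptions
    i_sig   = g_i*rho*P_rc                              # signal photocurrent
    shot    = g_i**n * 2*e_charge*R_L*B*(I_D + rho*P_rc)
    thermal = 4*k_boltz*T_w*B
    return i_sig**2 * R_L / (shot + thermal)

snr = photodetector_snr(P_rc=1e-5)        # 10 uW incident optical power
print(10*math.log10(snr), "dB")
```

At this operating point the thermal term dominates the shot term by roughly two orders of magnitude, which is the usual situation for PIN receivers without optical preamplification.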
26. via reﬂection and scattering in the atmosphere (or via tropospheric scattering). give also origin to signal reﬂection and/or diffusion.26. In particular. We will now consider the types of propagation associated with frequency bands.6. where c is the speed of light in free space. or as a direct wave. ž earthsatellite links (with satellites employed as signal repeaters) [17]. Very low frequency (VLF) for f 0 < 0:3 MHz.6. requires an antenna of at least 30 m. humidity. This means that an AM radio station. these are phenomena that permit transmission between two points that are not in lineofsight (LOS). the electromagnetic propagation depends on the changes of the refraction index of the medium. one of the dimensions of the antenna must be at least equal to 1=10 of the carrier wavelength.1 Frequency ranges for radio transmission Frequencies used for radio transmission are in the range from about 100 kHz to some tens of GHz. . buildings. to achieve an efﬁcient radiation of electromagnetic energy. : : : ). Radio links 295 Figure 4. Radio link model. In fact. Obstacles such as mountains. In any case. where we assume that the transmit antenna input impedance and the receive antenna output impedance are matched for maximum transfer of power. The choice of the carrier frequency depends on various factors.. if the atmosphere is nonhomogeneous (in terms of temperature. We speak of diffusion or scattering phenomena if molecules that are present in the atmosphere absorb part of the power of the incident wave and then reemit it in all directions. A radio link model is illustrated in Figure 4. pressure. ž deepspace communication systems (with space probes at a large distance from earth). Recall that. etc. among which the dimensions of the transmit antenna play an important role.4. At these frequencies the signals propagate around the earth. 4. A radio wave usually propagates as a ground wave (or surface wave). 
The earth and the ionosphere form a waveguide for the electromagnetic waves. this gives origin to the reﬂection of electromagnetic waves. with carrier frequency f 0 D 1 MHz and wavelength ½ D c= f 0 D 300 m.
due to rain: for f 0 > 10 GHz. Extremely high frequency (EHF) for f 0 > 30 GHz. a ﬁlter is usually employed at the transmitter frontend. with peak attenuation at around 20 GHz. regulatory bodies specify power radiation masks: a typical example is given in Figure 4. respectively. Nevertheless. High frequency (HF) for 3 < f 0 < 30 MHz. The waves propagate as ground waves up to a distance of 160 km. for our purposes a very simple model. 2. Therefore these frequencies are adopted for satellite communications. The limit to the coverage is set by the earth curvature.6. due to water vapor: for f 0 > 20 GHz. They are also employed for lineofsight transmissions. ionospheric and tropospheric scattering (at an altitude of 16 km or less) are present at frequencies in the range 30–60 MHz and 40–300 MHz. To comply with these limits. if h D 100 m. If h is the height of the tower in meters. For f 0 > 30 MHz.27. Very high frequency (VHF) for 30 < f 0 < 300 MHz. the signal propagates through the ionosphere with small attenuation. At frequencies of about 10 GHz. the range p covered expressed in km is r D 1:3 h: for example. which cause additional signal attenuation: 1. coverage is up to about r D 13 km.296 Chapter 4. using high towers where the antennas are positioned to cover a wide area. 3. We note the following absorption phenomena. which cause the signal to propagate over long distances with large attenuations. with peak attenuation at 60 GHz. 4. The waves are reﬂected by the ionosphere at an altitude that may vary between 50 and 400 km. Super high frequency (SHF) for 3 < f 0 < 30 GHz. Ultra high frequency (UHF) for 300 MHz < f 0 < 3 GHz. if the antennas are not positioned high enough above the ground. Transmission media Medium frequency (MF) for 0:3 < f 0 < 3 MHz. assuming the diameter of the rain drops is of the order of the signal wavelength. In any case. However. 
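Two rules of thumb from this section lend themselves to a quick computation: an efficient antenna needs one dimension of at least λ/10, and for line-of-sight transmission the text gives the coverage radius of a tower of height h meters as r = 1.3√h km. Both figures below reproduce the examples in the text (a 30 m antenna for AM broadcasting at 1 MHz, and 13 km of coverage from a 100 m tower).

```python
import math

c = 3e8  # speed of light in free space, m/s

def min_antenna_size(f0_hz):
    """Minimum antenna dimension, lambda/10, in meters."""
    return (c/f0_hz)/10.0

def los_range_km(h_m):
    """LOS coverage radius r = 1.3*sqrt(h) km, the rule used in the text."""
    return 1.3*math.sqrt(h_m)

print(min_antenna_size(1e6))    # AM broadcast at 1 MHz -> 30.0 m
print(los_range_km(100.0))      # 100 m tower -> 13.0 km
```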
where the plot represents the limit on the power spectrum of the transmitted signal with reference to the power of a nonmodulated carrier. the electromagnetic ﬁeld propagates not only into the free space but also through ground waves. Radiation masks A radio channel by itself does not set constraints on the frequency band that can be used for transmission. We note that. to prevent interference among radio transmissions. due to oxygen: for f 0 > 30 GHz. which . atmospheric conditions play an important role in signal propagation.2 Narrowband radio channel model The propagation of electromagnetic waves should be studied using Maxwell equations with appropriate boundary conditions.
27. the power density is concentrated within a cone and is given by 8 D G T x 80 D GT x PT x 4³ d 2 (4. usually. consists in approximating an electromagnetic wave as a ray (in the optical sense). the power density is 80 D PT x (W/m2 ) 4³ d 2 (4. Radio links 297 Figure 4. GT x D 1 for an isotropic antenna. is often adequate.6.175) where GT x is the transmit antenna gain.174) where 4³ d 2 is the surface of a sphere of radius d that is uniformly illuminated by the antenna. . At a distance d from the antenna. In the case of a directional antenna. Obviously. The deterministic model is used to evaluate the power of the received signal when there are no obstacles between the transmitter and receiver. that is in the presence of line of sight: in this case we can think of only one wave that propagates from the transmitter to the receiver. We observe that the power density decreases with the square of the distance. This situation is typical of transmissions between satellites and terrestrial radio stations in the microwave frequency range (3 < f 0 < 70 GHz). Let PT x be the power of the signal transmitted by an ideal isotropic antenna. On a logarithmic scale (dB) this is equivalent to a decrease of 20 dBperdecade with the distance.4. which uniformly radiates in all directions in the free space. GT x × 1 for a directional antenna. Radiation mask of the GSM system with a bandwidth of 200 kHz around the carrier.
f 0 is the carrier frequency and Á is the efﬁciency factor.½=4³ d/2 is called free space path loss. The (4.GT x /d B .ad /d B D 10 log10 PT x D 32:4 C 20 log10 djkm C 20 log10 f 0 jMHz P Rc . and . The available attenuation of the medium. f 0 in MHz. The factor Á Rc < 1 takes into account the fact that the antenna does not capture all the incident radiation.176) where P Rc is the received power. For GT x D G Rc D 1. 0:6] for parabolic antennas. while Á ' 0:8 for horn antennas.178).177) where 32:4 D 10 log10 . because of the factor A=½2 . b) it increases with frequency as log10 f 0 . .181) d is expressed in km. whereas for metallic transmission lines the dependency is linear (see (4.181): a) it increases with distance as log10 d. . the available power in conditions of matched impedance is given by P Rc D 8A Rc Á Rc (4.180) 4³ which represents the power of a signal received at the distance of 1 meter from the transmitter. Usually Á 2 [0:5. ½ D c= f 0 is the wavelength of the transmitted signal.ad /d B coincides with the free space path loss. To conclude. we will use the following deﬁnition: Â Ã2 ½ P0 D PT x GT x G Rc (4. expressed in dB. A Rc is the effective area of the receive antenna and Á Rc is the efﬁciency of the receive antenna.G Rc /d B in dB. Observing (4.178) ½2 where A is the effective area of the antenna. we get Â Ã ½ 2 (4. (4.179) is known as the Friis transmission equation and is valid in conditions of maximum transfer of power. Later. The term . is .GT x /d B and .4³=c/2 . the power of the received signal is given by P Rc D PT x The antenna gain can be expressed as [1] 4³ A Á (4. because a part is reﬂected or lost. for a given G.140)).179) P Rc D PT x GT x G Rc 4³ d GD The (4. (4. In any case. We note that.179) does not take into account attenuation due to rain or other environmental factors.178) holds for the transmit as well as for the receive antenna. working at higher frequencies presents the advantage of being able to use smaller antennas. 
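The decibel form of the free-space attenuation, (a_d)_dB = 32.4 + 20 log₁₀ d|km + 20 log₁₀ f₀|MHz, can be cross-checked against the Friis factor (λ/4πd)² from which it is derived. The distance and frequency below are chosen only for illustration.

```python
import math

# Free-space path loss in dB, eq. (4.181), valid for unit antenna gains;
# the constant 32.4 = 20*log10(4*pi*1e9/c) absorbs the km and MHz units.
def free_space_loss_db(d_km, f0_mhz):
    return 32.4 + 20*math.log10(d_km) + 20*math.log10(f0_mhz)

# Same quantity from the free-space factor (lambda/(4*pi*d))^2, as a check
def friis_loss_db(d_m, f0_hz):
    lam = 3e8/f0_hz
    return -20*math.log10(lam/(4*math.pi*d_m))

print(free_space_loss_db(1.0, 1000.0))   # ~92.4 dB at 1 km, 1 GHz
print(friis_loss_db(1000.0, 1e9))        # ~92.4 dB, consistent
```

Note the two trends stated in the text: 20 dB per decade of distance, and 20 dB per decade of carrier frequency, against which the advantage of smaller antennas at high frequency must be weighed.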
Transmission media At the receive antenna.G Rc /d B A Rc GT x Á Rc 4³ d 2 (4. nor the possibility that the antennas may not be correctly positioned.298 Chapter 4. It is worthwhile making the following observations on the attenuation ad expressed by (4.
f / D . and T A D . Electrical equivalent circuit at the receiver. The antenna produces the desired signal s. The noise temperature of the antenna depends on the direction in which the antenna is pointed: for example T S. and the available noise power per unit of frequency is pw .k=2/Tw .10. for matched input and output circuits. which implies using a directional antenna.Sun > T S. The spectral density of the open circuit noise voltage is Pw . and w represents the total noise due to the antenna and the ampliﬁer. the signaltonoise ratio at the ampliﬁer output is equal to 3D available power of received desired signal P Rc D kTw B available power of effective input noise (4. using a slightly different notation from that of Figure 4. The ampliﬁer has a bandwidth B around the carrier frequency f 0 . introduced by the antenna (w S ) and by the receiver (w A ).4. Let sT x be a narrowband Figure 4. From (4.F 1/T0 . The effective noise temperature at the input is Tw D T S C . f / D 2kTw Ri .F 1/T0 is the noise temperature of the ampliﬁer. where T S is the effective noise temperature of the antenna.28 the electrical equivalent circuit at the receiver. .atmosphere (4.182) We note that there are two noise sources. T0 is the room temperature and F is the noise ﬁgure of the ampliﬁer.6.183) Multipath It is useful to study the propagation of a sinusoidal signal hypothesizing that the oneray model is adequate.92).28. Radio links 299 Equivalent circuit at the receiver We redraw in Figure 4.
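The receiver signal-to-noise ratio Λ = P_Rc/(k T_w B) of (4.182) follows directly from the effective noise temperature T_w = T_S + (F − 1)T₀. The antenna temperature, noise figure, received power and bandwidth below are illustrative assumptions.

```python
import math

k = 1.381e-23      # Boltzmann constant, J/K

def effective_noise_temp(T_S, F_db, T_0=290.0):
    """T_w = T_S + (F - 1)*T_0, with the noise figure F given in dB."""
    F = 10**(F_db/10.0)
    return T_S + (F - 1.0)*T_0

T_w  = effective_noise_temp(T_S=150.0, F_db=3.0)   # ~439 K
P_rc = 1e-12                                        # received power, W (assumed)
B    = 1e6                                          # bandwidth, Hz (assumed)
snr_db = 10*math.log10(P_rc/(k*T_w*B))
print(round(T_w), round(snr_db, 1))
```

The example shows why a low-noise first amplifier matters: with F = 3 dB the receiver contributes about twice as much noise temperature as this antenna does.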
bb/ ST x .187) Limited to signals sT x of the type (4. the delay per unit of distance is equal to 3. (4. a part of its power is absorbed by the surface while the rest is retransmitted in another direction. f / D 0 for f < f 0 .3 ns/m. If a ray undergoes a reﬂection caused by a surface.a/ e h . If the ith ray has undergone K i reﬂections before arriving at the receiver and if ai j is a complex number denoting the reﬂection coefﬁcient of the jth reﬂection of the ith ray.185) where −1 D d=c denotes the propagation delay. hence A Rc / A T x =d.a/ h . As the propagation delay is given by − D d=c. (4. the amplitude of the received signal decreases linearly with the distance. then A Rc D A0 =d.bb/ f 0 was removed because the input already satisﬁes the condition . in particular. and ' Rc D 2³ f 0 −1 D 2³ f 0 d=c is the phase of the received signal.188) indicates that the received signal exhibits a phase shift of ' Rc D 2³ f 0 −1 with respect to the transmitted signal. As the power decreases with the square of the distance between transmitter and receiver.184).184) ] D Re[A Rc e j' Rc e j2³ f 0 t ] (4. Choosing f 0 as the carrier frequency.bb/ gCh . the total reﬂection factor is ai D Ki Y jD1 ai j (4.189) 7 The constraint that GCh . because of the propagation delay.188) Thus.150) of h .186) can be rewritten as Ä ½ A Rc j' Rc . that is sT x .t/ D Re[A Rc e j2³ f 0 . . f / D 0 for f < .− / D 2A Rc e AT x j2³ f 0 −1 Ž.186) gCh . Transmission media transmitted signal.300 Chapter 4.185) has impulse response Ä ½ A Rc . and is not adequate to characterize radio channels.− / D Re AT x that is the channel attenuates the signal and introduces a delay equal to −1 .t −1 / (4. Using the deﬁnition (1. the baseband equivalent of gCh is given by7 .− / D Re AT x (4.− −1 / (4. A Rc is the amplitude of the received signal.− −1 / (4.t/ D Re[A T x e j2³ f 0 t ] The received signal at a distance d from the transmitter is given by s Rc .t/. 
such as for example the channel between a ﬁxed radio station and a mobile receiver. Rc Reﬂection and scattering phenomena imply that the oneray model is applicable only to propagation in free space. the radio channel associated with (4. We will now consider the propagation of a narrowband signal in the presence of reﬂections.− / gCh .a/ . and the power of the received signal is given by P Rc D A2 =2. if A0 is the amplitude of the received signal at the distance of 1 meter from the transmitter.
the received signal can still be written as s Rc .bb/ gCh .− −i / (4.− / D Nc 2A0 X ai e A T x i D1 di j2³ f 0 −i Ž. If Nc is the number of paths and di is the distance traveled by the ith ray. The total phase shift asociated with each ray is obtained by summing the phase shifts introduced by the various reﬂections and the phase shift due to the distance traveled.29. corresponding to rays that are not the direct or line of sight ray. of the term A0 .188) to the case of many reﬂections.193) the resulting signal is given by the sum of Ai e j i . undergo an attenuation due to reﬂections that is added to the attenuation due to distance.193) with 'i D 2³ f 0 −i . Radio links 301 Therefore signal amplitudes. i D 1. : : : .186) we get " # Nc A0 X ai . from (4.29.4.6.− −i / (4.190) A T x i D1 di where −i D di =c is the delay of the ith ray. . extending the channel model (4. Limited to narrowband signals.ai =di /e j'i .− / D Re h .191) We note that the only difference between the passband model and its baseband equivalent is constituted by the additional phase term e j2³ f 0 −i for the ith ray. extending the channel model (4. As P0 D A2 =2. Representation of (4. as represented in Figure 4.t/ D Re[A Rc e j' Rc e j2³ f 0 t ] where now amplitude and phase are given by A Rc e j' Rc D A0 Nc X ai e j'i di i D1 (4.192) (4. Let Ai and i be amplitude and phase.193) in the complex plane. the received power is 0 ψ3 ARc φR c ψ2 ψ 1 Figure 4.190) around f 0 is equal to . The complex envelope of the channel impulse response (4. Nc . respectively.a/ gCh .
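The vector sum (4.192)–(4.193), illustrated in Figure 4.29, is straightforward to evaluate: each ray contributes a phasor (a_i/d_i)e^{jψ_i}, with ψ_i collecting the reflection phase and the phase rotation 2πf₀τ_i due to the path delay. The ray data below (path lengths and reflection coefficients) are illustrative assumptions.

```python
import numpy as np

# Narrowband multipath resultant: A_Rc * e^{j phi_Rc} = A0 * sum_i (a_i/d_i) e^{j psi_i}
f0 = 900e6                 # carrier frequency, Hz (assumed)
c  = 3e8
A0 = 1.0                   # amplitude at 1 m from the transmitter (assumed)

d   = np.array([100.0, 112.0, 135.0])     # path lengths, m (assumed)
a   = np.array([1.0, -0.6, 0.4 + 0.3j])   # total reflection factors (assumed)
tau = d/c                                 # per-ray propagation delays

# multiplying a_i by exp(-j*2*pi*f0*tau_i) realizes psi_i of (4.193)
phasor = A0*np.sum((a/d)*np.exp(-1j*2*np.pi*f0*tau))
A_rc, phi_rc = abs(phasor), np.angle(phasor)
print(A_rc, phi_rc)        # amplitude and phase of the received carrier
```

Shifting any d_i by a fraction of a wavelength (0.33 m here) changes the phasor alignment and hence A_Rc drastically, which is the mechanism behind the multipath fading discussed next.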
P_{Rc} = P_0 \left| \sum_{i=1}^{N_c} \frac{a_i}{d_i} e^{j\psi_i} \right|^2   (4.194)

and is independent of the total phase of the first ray. We will now give two examples of application of the previous results.

Example 4.6.1 (Power attenuation as a function of distance in mobile radio channels)
We consider two antennas, one transmitting and the other receiving, with heights h_1 and h_2, respectively, that are placed at a distance d. We consider the case of two paths: one is the straight path (LOS), and the other is reflected by the earth surface with reflection coefficient a_1 = -1, i.e. the earth acts as an ideal reflecting surface and does not absorb power. Moreover, it is assumed that d \gg h_1 and d \gg h_2 (see Figure 4.30).

Figure 4.30. Two-ray propagation model.

Observing (4.194), and considering that for the above assumptions the lengths of the two paths are both approximately equal to d, the received power is given by

P_{Rc} \simeq \frac{P_0}{d^2} \, |1 - e^{j\Delta\varphi}|^2   (4.195)

where \Delta\varphi = 2\pi f_0 \Delta d / c = 2\pi \Delta d / \lambda is the phase shift between the two paths, and \Delta d = 2 h_1 h_2 / d is the difference between the lengths of the two paths. For small values of \Delta\varphi we obtain

|1 - e^{j\Delta\varphi}|^2 \simeq |\Delta\varphi|^2 = \frac{16 \pi^2 h_1^2 h_2^2}{\lambda^2 d^2}   (4.196)

from which, by substituting (4.180) in (4.195), we get

P_{Rc} = P_{Tx} G_{Tx} G_{Rc} \frac{h_1^2 h_2^2}{d^4}   (4.197)

We note that the received power decreases as the fourth power of the distance d, that is 40 dB/decade instead of 20 dB/decade as in the case of free space. Therefore the law of power attenuation as a function of distance changes in the presence of multipath with respect to the case of propagation in free space.

Example 4.6.2 (Fading caused by multipath)
Consider again the previous example, but assume that transmitter and receiver are positioned in a room, so that the inequalities between the antenna heights and the distance d are no longer valid. It is assumed, moreover, that the rays that reach the receive antenna are due, respectively, to LOS, reflection from the floor, and reflection from the ceiling. As a result the received power is given by

P_{Rc} = P_0 \left| \sum_{i=1}^{3} \frac{a_i}{d_i} e^{j\psi_i} \right|^2   (4.198)

where the reflection coefficients are a_1 = 1 for the LOS path, and a_2 = a_3 = 0.7. In this case, depending on the position, the phases of the various rays change and the sum in (4.193) also varies: in some positions all rays are aligned in phase and the received power is high, whereas in others the rays cancel each other and the received power is low. In fact, one finds that the power decreases with the distance in an erratic way, in the sense that by varying the position of the antennas the received power presents fluctuations of about 20-30 dB. In the previous example this phenomenon is not observed because the distance d is much larger than the antenna heights, and the phase difference between the two rays remains always small.

4.6.3 Doppler shift
In the presence of relative motion between transmitter and receiver, the frequency of the received signal undergoes a shift with respect to the frequency of the transmitted signal, known as a Doppler shift. We now analyze in detail the Doppler shift. With reference to Figure 4.31, we consider a transmitter radio Tx and a receiver radio that moves with speed v_p from a point P to a point Q. The variation in distance between the transmitter and the receiver is \Delta\ell = v_p \Delta t \cos\theta, where v_p is the speed of the receiver relative to the transmitter, \Delta t is the time required for the receiver to go from P to Q, and \theta is the angle of incidence of the signal with respect to the direction of motion (\theta is assumed to be the same in P and in Q).

Figure 4.31. Illustration of the Doppler shift.

The phase variation of the received signal because of the different path length in P and Q is

\Delta\varphi = \frac{2\pi \Delta\ell}{\lambda} = \frac{2\pi v_p \Delta t}{\lambda} \cos\theta   (4.199)

and hence the apparent change in frequency, or Doppler shift, is

f_s = \frac{1}{2\pi} \frac{\Delta\varphi}{\Delta t} = \frac{v_p}{\lambda} \cos\theta   (4.200)
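The two-ray result of Example 4.6.1 can be verified numerically by comparing the exact expression (4.195) with the far-distance approximation (4.197): beyond the last fade the received power falls off as d⁻⁴ (40 dB/decade). The antenna heights, carrier frequency and normalization P₀ below are illustrative assumptions.

```python
import numpy as np

# Two-ray ground-reflection model: exact (4.195) vs. approximation (4.197).
f0  = 900e6
lam = 3e8/f0                 # carrier wavelength, m
h1, h2 = 30.0, 2.0           # antenna heights, m (assumed)
P0  = 1.0                    # received power at 1 m in free space (normalized)

def p_exact(d):
    """Eq. (4.195): P_Rc = (P0/d^2)*|1 - exp(j*dphi)|^2."""
    dphi = 2*np.pi*(2*h1*h2/d)/lam      # dphi = 2*pi*(2*h1*h2/d)/lambda
    return (P0/d**2)*abs(1 - np.exp(1j*dphi))**2

def p_approx(d):
    """Eq. (4.197) rewritten with P0 of (4.180): decays exactly as d^-4."""
    return P0*(16*np.pi**2*h1**2*h2**2)/(lam**2*d**4)

for d in (5000.0, 50000.0):
    print(d, p_exact(d), p_approx(d))
# a tenfold increase in d reduces the approximate power by 10^4 (40 dB)
```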
for example.201) The (4. For a vehicle traveling at 96. Therefore..203) (4. The wavelength is 3 ð 108 c D D 0:162 m f0 1850 ð 106 a) The Doppler shift is positive. must be much smaller than the inverse of the Doppler spread of the channel. each with a different length.304 Chapter 4.82 m/s). is usually referred to areas outside of buildings: these environments can be of various types. b) going away from the transmitter. e. the received signal is no longer monochromatic. This phenomenon manifests itself also if both the transmitter and the receiver are static.200) the frequency shift f s depends on the angle Â . we want to evaluate the frequency of the received carrier if the vehicle is moving: a) approaching the transmitter. If the signal propagation were taking place through only one ray. The term outdoor.55 km/h (26. but a person or an object moves modifying the signal propagation. if it moves away from the transmitter the Doppler shift is negative. and f 0 jMHz is the carrier frequency in MHz.202) þ where v p þkm=h is the speed of the mobile in km/h. It is intuitive that the more the characteristics of the radio channel vary with time. the received frequency is 26:82 f Rc D f 0 C f s D 1850 ð 106 C D 1850:000166 MHz 0:162 ½D 8 (4. and height. c) perpendicular to the direction of arrival of the transmitted signal.184) is transmitted. We now consider a narrowband signal transmitted in an indoor environment8 where the signal received by the antenna is given by the contribution of many rays. Example 4.t/ D Re[A Rc e j2³. instead. the received signal is s Rc . . if v p D 100 km/h and f 0 D 900 MHz we have f s D 83 Hz. An important consequence of this observation is that the convergence time of algorithms used in receivers. Transmission media This implies that if a narrowband signal given by (4. etc. urban. thus enabling the adaptive algorithms to follow the channel variations. 
which measures the dispersion in the frequency domain that is experienced by a transmitted sinusoidal signal. the received signal would undergo only one Doppler shift. in particular.6.g. rural. We note that if the receiver moves towards the transmitter the Doppler shift is positive. material. The Doppler spectrum is characterized by the Doppler spread. For example. possibly separated by walls of various thickness.200) relates the Doppler shift to the speed of the receiver and the angle Â .3 (Doppler shift) Consider a transmitter that radiates a sinusoidal carrier at the frequency of f 0 D 1850 MHz. and we speak of a Doppler spectrum to indicate the spectrum of the received signal around f 0 . f 0 f s /t ] (4. to perform adaptive equalization. for Â D 0 we get f s D 9:259 10 4 v p jkm=h f 0 jMHz (Hz) (4.204) The term indoor is usually referred to areas inside buildings. But according to (4. suburban. because of the different paths. the larger the Doppler spread will be.
t/e j2³ f −2 .t// Q (4. − / D Nc X i D1 gi .t/.206) where gi represents the complexvalued gain of the ith ray that arrives with delay −i . c) Doppler shift.t/Ž.t. − / D g1 . i. or to both factors. If the duration of the channel impulse response is very small with respect to the duration of the symbol period.4 Propagation of wideband signals For a wideband signal with spectrum centered around the carrier frequency f 0 .t/Ž.t. (4. which introduces a random frequency modulation that is in general different for different rays. The transmitted signal undergoes three phenomena: a) fading of some gains gi due to multipath.208) where b is a complex number. b) time dispersion of the impulse response caused by diverse propagation delays of multipath rays. which implies rapid changes of the received signal power over short distances (of the order of the carrier wavelength) and brief time intervals.191) is still valid.209) 9 If we normalize the coefﬁcients with respect to g1 . In a digital transmission system the effect of multipath depends on the relative duration of the symbol period and the channel impulse response.205) 4. in this time interval the impulse response is only a function of − .bb/ GCh .6. f / D 1 C b e j2³ f − (4. f / D g1 . the channel is equivalent to a ﬁlter with impulse response illustrated in Figure 4.32 and frequency response given by:9 . the received frequency is 26:82 f Rc D f 0 f s D 1850 ð 106 D 1849:999834 MHz 0:162 c) In this case cos.t// (4. the channel model (4. For a given receiver location. or at least it is timeinvariant within a short time interval.− / C g2 .t/Ž. therefore there is no Doppler shift. (4.bb/ gCh . or to changes in the surrounding environment.bb/ gCh . Q Neglecting the absolute delay −1 . if the gain of the single ray varies in time we speak of a ﬂat fading channel. (4.209) becomes .t/ Q (4.t. Otherwise.Â / D 0. Radio links 305 b) The Doppler shift is negative. 
we rewrite the channel impulse response as a function of both the time variable t and the delay − for a given t: .e.− −i .208) is called Rummler model of the radio channel. .− −2 .t/ C g2 . where the channel variability is due to the motion of transmitter and/or receiver. the transmitted signal is narrowband with respect to the channel. If the channel is timeinvariant. then the oneray model is a suitable channel model.207) At a given instant t.206) models the channel as a linear ﬁlter having timevarying impulse response.4. In the literature (4. letting −2 D −2 −1 a simple tworay radio channel model has impulse response . an adequate model must include several rays: in this case if the gains vary in time we speak of a frequency selective fading channel.bb/ GCh .6.
t/g2 . For g1 and g2 realvalued.t// shown in Figure 4. in the transmission of narrowband signals the received power is the square of the vector amplitude resulting from the vector sum of all the received rays. It is evident that the channel has a selective frequency behavior.32. that is they do not interact with each other.t/ C 2g1 . In this case from (4.211) From (4.t. for wideband communications.306 Chapter 4.210) þGCh . the signal distortion depends on the signal bandwidth in comparison to 1=−2 .32. Physical representation and model of a tworay radio channel.2³ f −2 . Therefore. f /þ D g1 . where g1 and g2 are assumed to be positive. from (4. for a given transmitted power. In any case.206) the received power is P Rc D PT x Nc X i D1 jgi j2 (4.t/ C g2 . .211) we note that the received power is given by the sum of the squared amplitude of all the rays. rays with different delays are assumed to be independent.209) the following frequency response is obtained þ2 þ þ þ . the received power will be lower for a narrowband signal as compared to a wideband signal. Conversely.bb/ 2 2 Q (4.t/ cos. Transmission media Figure 4. as the attenuation depends on frequency. Q Going back to the general case.
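The frequency-selective behavior of the two-ray model can be checked numerically from the squared magnitude response above, |G(f)|² = g1² + g2² + 2 g1 g2 cos(2π f τ2). A minimal sketch (the 1 µs differential delay is an illustrative value of ours):

```python
import math

def two_ray_gain_sq(f, g1, g2, tau2):
    """Squared magnitude of the two-ray frequency response:
    |G(f)|^2 = g1^2 + g2^2 + 2*g1*g2*cos(2*pi*f*tau2), with g1, g2 real."""
    return g1 ** 2 + g2 ** 2 + 2.0 * g1 * g2 * math.cos(2.0 * math.pi * f * tau2)

tau2 = 1e-6  # 1 us differential delay between the two rays (illustrative)
# with g1 = g2 the response has nulls at f = (2k+1)/(2*tau2):
peak = two_ray_gain_sq(0.0, 1.0, 1.0, tau2)                  # constructive sum
null = two_ray_gain_sq(1.0 / (2.0 * tau2), 1.0, 1.0, tau2)   # destructive sum
```

A signal with bandwidth much smaller than 1/τ2 sees a nearly constant gain (flat fading), whereas a wideband signal spans both peaks and notches, consistent with the discussion above.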
A parameter that is normally used to deﬁne conveniently the MDS of the channel is the rootmean square (rms) delay spread. this time is also called excess delay spread (EDS). Typical values of (average) rms delay spread are of the order of µs in outdoor mobile radio channels. we use the (average) rms delay spread − r ms obtained by substituting in (4. −r ms . all the other rays are assumed to have only a random component: therefore the distribution . In other words. Statistical description of fading channels The most widely used statistical description of the gains fgi g is given by Q g1 D C C g1 Q gi D gi i D1 i D 2.213) The above formulae give the rms delay spread for an instantaneous channel impulse response. the simplest measure is the delay time that it takes for the amplitude of the ray to decrease by x dB below the maximum value. : : : . whereas the ﬁrst ray contains a direct (deterministic) component in addition to a random component.3 on page 67). the EDS is not a very meaningful parameter. E[jgi j2 ].6.213) in place of jgi j2 its expectation.212) jgi j2 −in n D 1. the expectation of the squared amplitude of the channel impulse response. that is −r ms D where Nc X q −2 −2 (4.5 power delay proﬁles are given for some typical channels. as a function of delay −i . which corresponds to the secondorder central moment of the channel impulse response. and of the order of some tenths of ns in indoor channels. Nc (4. In Table 4.214) where C is a realvalued constant and gi is a complexvalued random variable with zero Q mean and Gaussian distribution (see Example 1.4. The MDS is the measure of the time interval that elapses between the arrival of the ﬁrst and the last ray. In this case − r ms measures the mean time dispersion that a signal undergoes because of multipath.9. also called delay power spectrum or multipath intensity proﬁle. We deﬁne as power delay proﬁle. However. With reference to the timevarying characteristics of the channels. 
because channels that exhibit considerably different distributions of the gains gi may have the same value of EDS. 2 jgi j 2 −n D i D1 Nc X i D1 (4. Radio links 307 Channel parameters in the presence of multipath To study the performance of mobile radio systems it is convenient to introduce a measure of the channel dispersion in the time domain known as multipath delay spread (MDS).
In general for a model with more rays we take K D C 2 =Md .a/ N (4. In this case the expression of the received signal is given by (4. Q D1 Assuming that the power delay proﬁle is normalized such that Nc X i D1 E[jgi j2 ] D 1 (4.215) is given in Figure 4. i.33 for various values of K .184).Q . no direct component exists.217) p we obtain C D K =.1 C K /a 2 ]I0 [2a K . For K ! 1.e. i. is equal to the ratio between the power of the direct component and the power of the reﬂected and/or scattered component.1 C K /]1. we ﬁnd the model having only the deterministic component.t/ D [g1.214) the phase of gi is uniformly distributed in [0. Transmission media Table 4.5 Values of E[jgi j2 ] (in dB) and −i (in ns) for three typical channels. which we rewrite as follows: Q s Rc .t/ C C] cos 2³ f 0 t g1. For a oneray channel Q Q model. To justify the Rice model for jg1 j we consider the transmission of a sinusoidal signal (4.a/ N where I0 is the modiﬁed Bessel function of the ﬁrst type and order zero.I . i 6D 1. 2³ /. where Md is the P statistical power of all reﬂected and/or scattered components.308 Chapter 4. we have N p pjg1 j . with no reﬂected and/or scattered components and. hence.t/ sin 2³ f 0 t Q (4. In (4.e. Standard GSM −i 0 200 500 1600 2300 5000 E[jgi j2 ] 3:0 0 2:0 6:0 8:0 10:0 Indoor ofﬁces −i 0 50 150 325 550 700 E[jgi j2 ] 0:0 1:6 4:7 10:1 17:1 21:7 Indoor business −i 0 50 150 225 400 525 750 E[jgi j2 ] 4:6 0 4:3 6:5 3:0 15:2 21:7 of jgi j will be a Rice distribution for jg1 j and a Rayleigh distribution for jgi j.218) . it is K D 0.192). known as the Rice factor.1 C K / a exp[ K .K C 1/.x/ D 2³ ³ (4. and the Rayleigh distribution is obtained for all the gains fgi g. Z ³ 1 e x cos Þ dÞ I0 . Typical reference values for K are 3 and 10 dB. that is Md D iNc E[jgi j2 ].a/ D 2a e a 1.a/ D 2.215) 2 pjgi j . If C D 0. the parameter K D C 2 =E[jg1 j2 ]. letting gi D gi = E[jgi j2 ]. In p particular. C D 1.216) The probability density (4.
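With the normalization E[|g1|²] = 1, the Rice factor fixes the direct component C = √(K/(1+K)) and the scattered power 1/(1+K), as stated above. A minimal sketch for drawing unit-power Rice-fading gains (function name, seed and sample sizes are ours):

```python
import math, random

def rice_gain_samples(K, n, seed=0):
    """n samples of g1 = C + g1~, with Rice factor K = C^2 / E[|g1~|^2],
    normalized so that E[|g1|^2] = 1: C = sqrt(K/(1+K)), and each of the
    two Gaussian components of g1~ has variance 1/(2*(1+K))."""
    rng = random.Random(seed)
    C = math.sqrt(K / (1.0 + K))
    s = math.sqrt(1.0 / (2.0 * (1.0 + K)))
    return [C + complex(rng.gauss(0.0, s), rng.gauss(0.0, s)) for _ in range(n)]

# K = 0 gives Rayleigh fading; K -> infinity leaves only the direct ray.
g = rice_gain_samples(10 ** 0.3, 20000)          # K = 3 dB, a typical value
power = sum(abs(x) ** 2 for x in g) / len(g)     # should be close to 1
```

The amplitude |g1| of each sample follows the Rice density (4.215), and its squared magnitude averages to one by construction.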
[Figure 4.33. The Rice probability density function for various values of K; the Rayleigh density function is obtained for K = 0.]

As the gains g1,I and g1,Q are given by the sum of a large number of random components, which in turn are subject to a very large number of random phenomena, they can be approximated by independent Gaussian random processes with zero mean; C represents the contribution of the possible direct component of the propagated signal. The instantaneous envelope of the received signal is then given by

    √( [g1,I(t) + C]² + g1,Q²(t) )                                    (4.219)

which, under the assumptions just formulated, is a Rice random variable for each instant t.

4.6.5 Continuous-time channel model

The channel model previously studied is especially useful for system simulations, as will be discussed later. A general continuous-time model is now presented. Assuming that the signal propagation occurs through a large number of paths, the (baseband equivalent) channel impulse response can be represented with good approximation as a time-varying complex-valued Gaussian random process g(t, τ), where g(t, τ) represents the channel output at the instant t in response to an impulse applied at the instant (t − τ). We now evaluate the autocorrelation function of the impulse response evaluated at two different instants and two different delays:

    r_g(t, t − Δt; τ, τ − Δτ) = E[ g(t, τ) g*(t − Δt, τ − Δτ) ]       (4.220)
− /. − /j2 ]. the autocorrelation is nonzero only for impulse responses that are considered for the same delay time.− − /2 M. if the delay time is the same. Exponential. Gaussian.226) The inverse of the (average) rms delay spread is called the coherence bandwidth of the channel. − 1− / D rg .− r ms / 2− r ms / (4.− /Ž. . For a Rayleigh channel model. the values of g for rays that arrive with different delays are uncorrelated. Therefore we have rg .− / d− (4.− / D 1 − r ms e −=.− 2 2 2. unilateral M.− / C 1 Ž. Moreover. Two rays.225): 1.−.225) − r ms D M. the autocorrelation only depends on the difference of the times at which the two impulse responses are evaluated.− / is above a certain threshold is called (average) excess delay spread of the channel.t. As in the case of the discrete channel model previously studied.− / D E[jg.224) The measure of the set of values − for which M. Transmission media According to the model known as the widesense stationary uncorrelated scattering (WSSUS).1− / (4.310 Chapter 4. with equal power M.− / d− 1 where −D Z 1 1 − M.− / d− 1 2 Z 1 (4. t 1t. three typical curves are now given for M.t.− / D 3.1t. unilateral r M. we deﬁne the (average) rms delay spread as Z 1 .t. − / for a given delay − . that is called channel power delay proﬁle and represents the statistical power of the gain g. as g is stationary in t.222) 2 1 e ³ − r ms 2 − 2 =.2− r ms / − ½0 (4.223) − ½0 (4. Power delay proﬁle For 1t D 0 we deﬁne the function M.− / D 1 Ž.221) In other words. where − r ms is the parameter deﬁned in (4. and g is stationary in t.
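For instance, for the exponential profile M(τ) = (1/τ̄rms) e^(−τ/τ̄rms), τ ≥ 0, the second-order central moment indeed equals τ̄rms. A numerical check by direct integration (the discretization parameters and the 0.5 µs value are ours):

```python
import math

def pdp_rms_delay_spread(M, tau_max, n=50000):
    """rms delay spread of a power delay profile M(tau): square root of the
    second-order central moment, by midpoint-rule integration on [0, tau_max]."""
    d = tau_max / n
    taus = [(i + 0.5) * d for i in range(n)]
    w = [M(t) * d for t in taus]
    norm = sum(w)
    m1 = sum(t * wi for t, wi in zip(taus, w)) / norm
    m2 = sum(t * t * wi for t, wi in zip(taus, w)) / norm
    return math.sqrt(m2 - m1 * m1)

trms = 0.5e-6  # 0.5 us, an illustrative value

def exp_profile(t):
    return math.exp(-t / trms) / trms

est = pdp_rms_delay_spread(exp_profile, 20 * trms)  # should return ~ trms
```

The same routine applies unchanged to the Gaussian or two-ray profiles listed above.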
so that a suitably designed receiver can recover the transmitted information. we observe that if − r ms is of the order of 20% of the symbol period. f /G Ł .230) M( τ ) (dB) 0 10 20 30 0 1 2 5 τ ( µs) Figure 4.t.4. if the coherence bandwidth of the channel is lower than 5 times the modulation rate of the transmission system. in the presence of ﬂat fading the received signal may vanish completely.t. Radio links 311 For digital transmission over such channels.6.34.0/ D 4:38 µ s 0:01 C 0:1 C 0:1 C 1 (4. Doppler spectrum We now analyze the WSSUS channel model with reference to time variations.34.t 1t.227) Consequently the coherence bandwidth of the channel is equal to Bc D 146 kHz. deﬁned as Bc D 5−r ms .1/2 C . and. First we introduce the correlation function of the channel frequency response taken at instants t and t 1t.2/ C .213) we have −D N and .5/ C . f 1 f /] (4.0:1/.0:1/.6.1/.5/2 C . Example 4.228) . f 1 f / D E[G. whereas frequency selective fading produces several replicas of the transmitted signal at the receiver. or larger then signal distortion is nonnegligible. 1 and determine the coherence bandwidth.0:01/. f. rG .µ s/2 −N2 D 0:01 C 0:1 C 0:1 C 1 Therefore we get −r ms D N p 21:07 . respectively.0:01/.4:38/2 D 1:37 µ s (4. otherwise the channel is ﬂat fading. Equivalently.0/ D 21:07 . However.1/.0:1/.4 (Power delay proﬁle) We compute the average rms delay spread for the multipath delay proﬁle of Figure 4. ¯ From (4. at frequencies f and f 1 f .1/ C . t 1t.229) (4. Multipath delay proﬁle.0:1/.2/2 C . . then we speak of a frequency selective fading channel.
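The numbers of Example 4.4 can be reproduced directly from the four rays of the profile of Figure 4.34 (linear powers 0.01, 0.1, 0.1, 1 at delays 0, 1, 2, 5 µs):

```python
def delay_stats(powers, delays_us):
    """Mean delay and rms delay spread of a discrete power delay profile
    (delays in microseconds), as in (4.212)-(4.213)."""
    ptot = sum(powers)
    m1 = sum(p * t for p, t in zip(powers, delays_us)) / ptot
    m2 = sum(p * t * t for p, t in zip(powers, delays_us)) / ptot
    return m1, (m2 - m1 * m1) ** 0.5

mean_us, trms_us = delay_stats([0.01, 0.1, 0.1, 1.0], [0.0, 1.0, 2.0, 5.0])
Bc_kHz = 1.0 / (5.0 * trms_us * 1e-6) / 1e3  # coherence bandwidth Bc = 1/(5*t_rms)
print(round(mean_us, 2), round(trms_us, 2), round(Bc_kHz))  # prints 4.38 1.37 146
```

The mean delay of 4.38 µs, rms delay spread of 1.37 µs and coherence bandwidth of about 146 kHz match the values computed in the example.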
(4.½.0. We recall that the Doppler shift is caused by the motion of terminals or surrounding objects.1 f /e j2³ ½.− / e j2³.1t.1t. which represents the power of the Doppler shift for different values of the frequency ½. − / e j2³ f − d− (4.t.1 f / is the Fourier transform of rg . rg . so that Z C1 M. The Fourier transform of rG is given by Z 1 PG .234) represents a normalization factor such that Z C1 D. f / D Z C1 1 g. .1t/ D F 1 [D.½/. evaluated at two different instants.− / The term rg .230) the relation G. of the channel.1t/ M.234) implies that rg .t.− / where d. or gain g.1t/ (4.1 f /− d− 1 (4.239) 10 In very general terms.½/ d½ D 1 1 (4.236) with d.237) and M.½.− / is the power delay proﬁle.− /.− / d− D 1 1 (4.1t.1t.232) that is rG .0/ (4.0.1t/ (4.1 f / D rg .½/].0. 1 f / D rG .312 Chapter 4.½/ as the Fourier transform of the autocorrelation function of the impulse response. Now we introduce the Doppler spectrum D. we could have a different Doppler spectrum for each path.1t.− / j2³ ½1t e D.½/ D PG .− / D d. moreover it holds Z C1 rG . that is:10 Z C1 rg .1t/ Ð rg .½.− / in (4.t. − /.231) we ﬁnd that rG depends only on 1t and 1 f .1t/ d. in correspondence of the same delay − .238) With the above assumptions the following equality holds: D.½/ D d.235) We note that (4.234) 1 rg .1t. Transmission media Substituting in (4.− / D d. We deﬁne D.0/ D 1 (4. 0/.1t.− / is a separable function.233) 1 The time variation of the frequency response is measured by PG .1t.
known as the Jakes model or classical Doppler spectrum.242) D. The maximum frequency f d of the Doppler spectrum support is called the Doppler spread of the channel and gives a measure of the fading rate of the channel. thanks to the study conducted by a special commission (JTC). we usually say that the channel is fast fading if f d T > 10 2 . For indoor radio channels. D. Let T be the symbol period in a digital transmission system. Shadowing The simplest relation between average transmitted power and average received power is P Rc D P0 dÞ (4. f / D 2 f d : 0 elsewhere with a corresponding autocorrelation function given by rg .− / D Nc X i D1 sinc. If f d denotes the Doppler spread. Doppler spectrum models A widely used model.− −i / (4.206). The inverse of the Doppler spread is called coherence time: it gives a measure of the time interval within which a channel can be assumed to be time invariant or static. f / D (4.240) : 0 otherwise For the channel model (4. The model of the Doppler spectrum described above agrees with the experimental results obtained for mobile radio channels.244) .4.½/ can be obtained through the rms Doppler spread or second order central moment of the Doppler spectrum.2 f d 1t/ M.1t.2³ f d 1t/ M. which in turn can be determined by transmitting a sinusoidal signal (hence 1 f D 0) and estimating the autocorrelation function of the amplitude of the received signal. Radio links 313 Therefore.241) where J0 is the Bessel function of the ﬁrst type and order zero. then 8 1 < 1 p j f j Ä fd ³ f d 1 .6. f = f d /2 D. it was demonstrated that the Doppler spectrum can be modelled as 8 < 1 j f j Ä fd (4. to represent the Doppler spectrum is due to Clarke. 
the corresponding autocorrelation function of the channel impulse response is given by rg .243) A further model assumes that the Doppler spectrum is described by a second or thirdorder Butterworth ﬁlter with the 3 dB cutoff frequency equal to f d .½/ can also be obtained as the Fourier transform of rG .−i / Ž. and slow fading if f d T < 10 3 .− −i / (4.1t. Another measure of the support of D.0/.1t.−i / Ž.− / D Nc X i D1 J0 .
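The correlation J0(2π fd Δt) of the classical Doppler spectrum decays on the scale of the coherence time 1/fd. A minimal numerical check, with J0 computed from its integral representation (the 100 Hz Doppler spread is an illustrative value of ours):

```python
import math

def bessel_j0(x, n=2000):
    """J0(x) = (1/pi) * integral_0^pi cos(x*sin(a)) da, via the midpoint rule."""
    d = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * d)) for k in range(n)) * d / math.pi

def jakes_autocorrelation(dt, fd):
    """Normalized autocorrelation J0(2*pi*fd*dt) of a tap gain with the
    classical (Jakes/Clarke) Doppler spectrum."""
    return bessel_j0(2.0 * math.pi * fd * dt)

fd = 100.0  # Doppler spread in Hz (illustrative)
r_now = jakes_autocorrelation(0.0, fd)          # = 1: full correlation
r_later = jakes_autocorrelation(1.0 / fd, fd)   # one coherence time later: small
```

After a separation Δt = 1/fd the correlation has dropped to about 0.22, which is the usual justification for taking the inverse of the Doppler spread as the coherence time.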
the inverse of the transmitted signal bandwidth is much larger than the delay spread of the channel and g. has a shadowing with ¦. These ﬂuctuations are modelled as a lognormal random variable. In the ﬁrst case the channel has a constant gain. for example. where ¾ is a Gaussian random variable with zero mean and variance ¦¾2 . In a slow fading channel. the impulse response of the channel changes within a symbol period. however. The received signal consists of several attenuated and delayed versions of the transmitted signal. . For indoor and urban outdoor radio channels the relation depends on the environment. In the second case instead the channel has a timevarying frequency response within the passband of the transmitted signal and consequently the signal undergoes frequency selective fading. The relation between ¦¾ and ¦. the impulse response changes much more slowly with respect to the symbol period.¾ /d B is ¦¾ D 0:23¦. Hence.¾ /d B . Final remarks A signal that propagates in a radio channel for mobile communications undergoes a type of fading that depends on the signal as well as on the channel characteristics. on the other hand. The ﬁrst type of fading can be divided into ﬂat fading and frequency selective fading. by using more details regarding the environmental conﬁguration. Transmission media where Þ is equal to 2 for propagation in free space and to 4 for the simple 2ray model described before. variations of the average received power are lower in outdoor environments than in indoor environments. this condition leads to signal distortion.314 Chapter 4. A propagation model that completely ignores any information on land conﬁguration. we would have a model with ¦¾ D 0. and also the material used for their construction.¾ /d B D 12 dB. which increases with increasing Doppler spread. in practice shadowing provides a measure of the adequacy of the adopted deterministic model. that is the coherence time of the channel is smaller than the symbol period. 
in case we had an enormous amount of topographic data and the means to elaborate them. In general. Improving the accuracy of the propagation model. In a fast fading channel. their dimensions. whereas for the correct design of a network it is good practice to make use of the largest possible quantity of topographic data. the Doppler spread causes dispersion in the domain of the variable ½ and therefore time selective fading. − / can be approximated by a delta function. In particular. Shadowing takes into account the fact that the average received power may present ﬂuctuations around the value obtained by deterministic models. according to the number of buildings. in general. Usually there are no remedies to compensate for such distortion unless the symbol period is decreased. in the presence of shadowing it becomes e¾ P Rc . and therefore is based only on the distance between transmitter and receiver. the channel can be assumed as time invariant for a time interval that is proportional to the inverse of the Doppler spread. the shadowing can be reduced. If P Rc is the average received power obtained by deterministic rules. A channel can be fast fading or slow fading. this choice leads to larger intersymbol interference.t. centered at − D 0. whereas the delay spread due to multipath leads to dispersion in the time domain and therefore frequency selective fading. that is e¾ . shadowing should be considered in the performance evaluation of mobile radio systems. with random amplitude and phase. these conditions occur when the inverse of the transmitted signal bandwidth is of the same order or smaller than the delay spread of the channel. in other words.
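The factor 0.23 in σξ = 0.23 σ(ξ)dB is simply ln(10)/10 ≈ 0.2303: it converts a standard deviation expressed in dB into the standard deviation of the natural-log variable ξ. A minimal sketch of lognormal shadowing applied to a deterministic power prediction (the numeric inputs are illustrative):

```python
import math, random

LN10_OVER_10 = math.log(10.0) / 10.0  # = 0.2303..., the "0.23" of the text

def shadowed_power(p_det, sigma_dB, rng):
    """Apply lognormal shadowing: P = p_det * e^xi, where xi is Gaussian
    with zero mean and standard deviation sigma_xi = 0.23 * sigma_dB."""
    xi = rng.gauss(0.0, LN10_OVER_10 * sigma_dB)
    return p_det * math.exp(xi)

# e.g. a deterministic prediction of 1 nW with 8 dB of shadowing (illustrative):
p = shadowed_power(1e-9, 8.0, random.Random(0))
```

Expressed in dB, 10 log10(e^ξ) has standard deviation (10 log10 e)·σξ = σ(ξ)dB, which is exactly the intended shadowing spread.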
4.6.6 Discrete-time model for fading channels

Our aim is to approximate a transmission channel defined in the continuous-time domain by a channel in the discrete-time domain characterized by a sampling period TQ. The discrete-time model of the radio channel is represented, as illustrated in Figure 4.35, by a time-varying linear filter where the coefficient gi corresponds to the complex gain of the ray with delay iTQ; in the case of flat fading we choose Nc = 1. We immediately notice that the various delays in (4.206) must be multiples of TQ, and consequently we need to approximate the delays of the power delay profile (see, e.g., Table 4.5).

In general, the {gi} are random processes; if the channel is time invariant (fd = 0), all coefficients {gi}, i = 0, 1, ..., Nc − 1, are constant, and are obtained as realizations of Nc random variables. To generate each process gi, the scheme of Figure 4.36 is used, where wi(ℓTP) is complex-valued Gaussian white noise with zero mean and unit variance, h_ds is a narrowband filter that produces a signal g′i with the desired Doppler spectrum, and h_int is an interpolator filter (see Section 1.A.7). Usually we choose fd TQ ≪ 1 and 1/10 ≤ fd TP ≤ 1/5. Starting from a continuous-time model of the power delay profile (see, e.g., (4.224)), we need to obtain a sampled version of M(τ); the interpolator output signal is then multiplied by a constant σi = √M(iTQ), which imposes the desired power delay profile.

[Figure 4.35. Discrete-time model of a radio channel.]

[Figure 4.36. Model to generate the ith coefficient of a time-varying channel.]
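The tapped-delay-line structure of Figure 4.35 computes y(kTQ) = Σᵢ gᵢ(kTQ) x((k−i)TQ). A minimal sketch with time-invariant taps (the gain values are illustrative):

```python
def tdl_channel(x, g):
    """Tapped delay line of Figure 4.35 with time-invariant complex tap
    gains g[i] at delays i*TQ: y[k] = sum_i g[i] * x[k-i]."""
    return [sum(g[i] * x[k - i] for i in range(len(g)) if k - i >= 0)
            for k in range(len(x))]

# an impulse at k = 0 returns the sampled channel impulse response:
y = tdl_channel([1.0, 0.0, 0.0, 0.0], [0.8, 0.0, 0.4j])
```

For a fading channel each gᵢ becomes a sequence gᵢ(kTQ), generated as in Figure 4.36, and each product above uses the gain value at time kTQ.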
Furthermore.z/ D 2 X n an z 1C nD1 where. 1. deﬁning !0 D tan.a) Secondorder Butterworth ﬁlter. a constant Ci must be added to the random component gi . N f .249) (4. so that Nc 1 X i D0 E[jgi .1 (4. we have [18] a1 D 2 !0 / p 2 1 C !0 C 2 !0 2.10 on page 72. Observing (4.245) For example.1 C !0 C 2 !0 /2 1 4 (4. Given !d D 2³ f d . 1) Using a ﬁlter. has energy equal to the interpolation factor T P =TQ . the coefﬁcients fgi g.1 C a1 C a2 / 11 Based on the Example 1. to avoid modifying the average transmitted power. if the channel model N N includes a Doppler shift f si for the ith branch. are scaled.¦i2 C Ci2 / D 1 (4. m D 1. where f d is the Doppler spread. : : : . the equivalent interpolator ﬁlter. .kTQ /j2 ] D 1 (4.316 Chapter 4. f /. We analyze the two methods. then we need to multiply the term Ci C gi by the exponential function exp.9. the above condition is satisﬁed if each signal gi0 has unit statistical power11 and f¦i g satisfy the condition Nc 1 X i D0 . Nc 1.242) in two ways: 1) implement a ﬁlter h ds such that jHds .246) Generation of a process with a preassigned spectrum The procedure can be generalized for a signal gi0 with a generic Doppler spectrum of the type (4. We give the description of h ds for two cases 1.!d TP =2/ where TP is the sampling period.240) or (4. : : : . f /j2 D D. j2³ f si kTQ /.1 C z 1 /2 ! (4.247) Hds . i D 0. in the range from f d to f d .248) a2 D c0 D 4 1 C !0 p 2 . the transfer function of the discretetime ﬁlter is c0 . Transmission media If the channel model includes a deterministic component for the ray with delay −i D i TQ .211).250) . 2) generate a set of N f (at least 10) complex sinusoids with frequencies fš f m g. it is M 0 D 1 if M wi D 1. given by N gi the cascade of h ds and h int .
z/.wi .`T P / D Nf X mD1 Ai.4. n D 0. 21 : 1:3651 e 4 8:1905 e 2:0476 e 3 9:0939 e 1:8067 e 3 1:3550 e 7:1294 e 5 9:5058 e 1:3321 e 5 4:5186 e 1:8074 e 5 3:0124 e 4 4 3 5 5 6 2:0476 e 6:7852 e 5:3726 e 7:1294 e 6:0248 e 3 4 4 5 5 2:7302 e 1:3550 e 6:1818 e 2:5505 e 4:5186 e 3 3 5 5 5 . : : : .253) D1=3 . f /d f . c 1997 IEEE. Let gi0 .254) Table 4. Hds 1 .240)..` Q Q 1/T P / C wi .` Q 1. Hds 2 . N f (4.6 reports values of the overall ﬁlter parameters for f d TP D 0:1 [19].m / e j8i.252) The spacing between the different frequencies is 1 f m . [From Anastasopoulos and Chugg (1997). Table 4.2³ f m `TP C'i.` 1/T P / a2 gi0 .I C e j .b) IIR ﬁlter with classical Doppler spectrum..z/ f d TP D 0:1 8:6283 e C 0 3:3622 e C 0 1:8401 e C 0 9:4592 e C 0 7:2390 e C 0 2:8706 e 1 fan g. Now h ds is implemented as the cascade of two ﬁlters. as 1 fm D Kd N f D1=3 . The ﬁrst. 2) Using sinusoidal signals. f m / m D 1. letting f 1 D 1 f 1 =2. 11 : 1:0000 e C 0 4:4153 e C 0 6:1051 e C 0 1:3542 e C 0 7:9361 e C 0 5:1221 e C 0 fbn g.251) C c0 .`T P / C 2wi . Radio links 317 The ﬁlter output gives gi0 . with cutoff frequency f d . : : : .z/ D B.Q ] (4.z/=A.6 Parameters of an IIR ﬁlter which implements a classical Doppler spectrum. The second.6. : : : .2³ f m `TP C'i..`T P / D a1 gi0 .m [e j . deﬁning K d D 0 fd fd Nf (4.m / e j8i. 1 fm D Z or. Each 1 f m can be chosen as a constant.z/. for m > 1 we have f m D f m 1 C 1 f m . n D 0.. is a Chebychev lowpass ﬁlter. is an FIR shaping ﬁlter with amplitude characteristic of the frequency response given by the square root of the function in (4.] Hds .` 2/T P / 2/T P // (4.
(bb) Suppose f 0 D 0 and f m D f m to minimizing the error 1 C 1 f m .I and 8i. Transmission media Figure 4. f m /1 f m .1 Telephone channel Characteristics Telephone channels. The Doppler frequency f d was assumed to be zero.Q are uniformly distributed in [0. If D.254) corresponds fm fm 1 Nf XZ mD1 . which can scatter for a duration equal to 45 times − r ms .37 are represented nine realizations of the amplitude of the impulse response of a Rayleigh channel obtained by the simulation model of Figure 4.I and 8i.255) The phases 'i. p The amplitude is given by Ai.35. the choice (4. f / is ﬂat. 2³ / and statistically independent. − /j for a Rayleigh channel with an exponential power delay proﬁle having − rms D 0:5 T. 4. Transmission of a signal over a telephone .Q ensures that the real and imaginary parts of gi0 are statistically independent. : : : . f / d f (4.m D D.m . for an exponential power delay proﬁle with − r ms D 0:5 T .m must be generated as a Gaussian random variable with zero mean and variance D. originally conceived for the transmission of voice. m D 1. fm f /2 D. We point out that the parameter − r ms provides scarce information on the actual behavior .37.7. Nine realizations of jgCh . N f .318 Chapter 4. This choice for 8i. if instead D. f / presents some frequencies with large amplitude.7 4. In Figure 4. f m /1 f m . Ai.t. today are extensively used also for the transmission of data. 8i.bb/ of gCh . by the central limit theorem we can claim that gi0 is a Gaussian process.
4. such as symmetrical transmission lines. Therefore channel characteristics depend on the particular connection established. and satellite links. f f off / f >0 Y. Noise sources Impulse noise. Thermal noise. Frequency offset It is caused by the use of carriers for frequency up and downconversion. respectively.t/ and output y. Quantization noise. For a single quantizer.149)) −. f/ D (4. coaxial cables. Linear distortion The frequency response GCh . radio. As a statistical analysis made in 1983 indicated [2].t/ is given by ( X.7. Telephone channel 319 channel is achieved by utilizing several transmission media. optical ﬁbers. It is caused by electromechanical switching devices and is measured by the number of times the noise level exceeds a certain threshold per unit of time.39. f C f off / f <0 Usually f off Ä 5 Hz. It is described in Section 4. The plots of the attenuation a. f / of a telephone channel can be approximated by a passband ﬁlter with band in the range of frequencies from 300 to 3400 Hz. The attenuation and envelope delay distortion are normalized by the values obtained for f D 1004 Hz and f D 1704 Hz.257) are illustrated in Figure 4. the signaltoquantization noise ratio 3q has the behavior illustrated in Figure 4. . It is introduced by the digital representation of voice signals and is the dominant noise in telephone channels (see Chapter 5). a telephone channel is characterized by the following disturbances and distortions.38 for two typical channels.2 and is present at a level of 20 ł 30 dB below the desired signal. Nonlinear distortion It is caused by ampliﬁers and by nonlinear Alaw and ¼law converters (see Chapter 5). f / D 1 d arg GCh . f /j (4. f / 2³ d f (4.258) X . The relation between the channel input x. f / D 20 log10 jGCh .256) and of the group delay or envelope delay (see (1.
[Figure 4.38. Attenuation and envelope delay distortion for two typical telephone channels.]
5. As illustrated in Figure 4. Phase jitter It is a generalization of the frequency offset (see (4. .4. then it is practically indistinguishable from the original voice signal. talker speech path talker echo listener echo Figure 4. If the echo is not very delayed.6. Talker echo: part of the signal is reﬂected and input to the receiver at the transmit side. Three of the many signal paths in a simpliﬁed telephone channel with a single twotofour wire conversion at each end.40. there are two types of echoes: 1. Echo As discussed in Section 3.40. Signal to quantization noise ratio as a function of the input signal power for three different inputs.39. it is caused by the mismatched impedances of the hybrid.270)). Telephone channel 321 Figure 4.7.
t/ ½ 0 is the signal envelope. We will now analyze the various blocks of the baseband equivalent model illustrated in Figure 4. it returns to the listener and disturbs the original signal. We note that the effect of echo is similar to multipath fading in radio systems. On terrestrial channels the roundtrip delay of echoes is of the order of 10ł60 ms.t/ is the instantaneous phase deviation. expressed as s. Power ampliﬁer (HPA) The ﬁnal transmitter stage in a communication system usually consists of a high power ampliﬁer (HPA). The HPA is a nonlinear device with saturation in the sense that. To mitigate the effect of echo there are two strategies: ž use echo suppressors that attenuate the unused connection of a fourwire transmission line.36. The nonlinearity of a HPA can be described by a memoryless envelope model.t/] (4. 4.41.41. Let s. it introduces nonlinear distortion of the signal itself. and '.t/ D A.t/ cos[2³ f 0 t C '. as illustrated in the scheme of Figure 3.t/ be the input signal of the HPA.259) where A.322 Chapter 4. Transmission media 2. ž use echo cancellers that cancel the echo at the source. Listener echo: if the echo is reﬂected a second time. whereas on satellite links it may be as large as 600 ms. Baseband equivalent model of a transmission channel including a nonlinear device. Figure 4. .8 Transmission channel: general model In this section we will describe a transmission channel model that takes into account the nonlinear effects due to the transmitter and the disturbance introduced by the receiver and by the channel. in addition to not amplifying the input signal above a certain value.
The envelope and the phase of the output signal depend, without memory, on instantaneous transformations of the input:

    sTx(t) = G[A(t)] cos( 2π f0 t + φ(t) + Φ[A(t)] )                  (4.260)

It is usually more convenient to refer to the baseband equivalent signals:

    s(bb)(t) = A(t) e^{j φ(t)}                                        (4.261)

    sTx(bb)(t) = G[A(t)] e^{j (φ(t) + Φ[A(t)])}                       (4.262)

The functions G[A] and Φ[A], called envelope transfer functions, represent respectively the amplitude/amplitude (AM/AM) conversion and the amplitude/phase (AM/PM) conversion of the amplifier.

First, we need to introduce some normalizations. As a rule, the point at which the amplifier operates is identified by the backoff. We adopt here the following definitions for the input backoff (IBO) and the output backoff (OBO):

    IBO = 20 log10 ( S / √Ms )        (dB)                            (4.263)

    OBO = 20 log10 ( STx / √MsTx )    (dB)                            (4.264)

where Ms is the statistical power of the input signal s(t), MsTx is the statistical power of the output signal sTx(t), and S and STx are the amplitudes of the input and output signals, respectively, that lead to saturation of the amplifier. Here we assume S = 1 and STx = G[1] for all the amplifiers considered.

In practice, the HPAs are of two types, the TWT and the SSPA; for each type we give the AM/AM and AM/PM functions commonly adopted for the analysis.

TWT. The travelling wave tube (TWT) is a device characterized by a strong AM/PM conversion. The conversion functions are

    G[A] = αA A / (1 + βA A²)                                         (4.265)

    Φ[A] = αΦ A² / (1 + βΦ A²)                                        (4.266)

where αA, βA, αΦ and βΦ are suitable parameters. The functions (4.265) and (4.266) are illustrated in Figure 4.42 for αA = 1, βA = 0.25, αΦ = 0.26 and βΦ = 0.25.

SSPA. The solid state power amplifier (SSPA) has a more linear behavior in the region of small signals as compared to the TWT, and its AM/PM conversion is usually negligible.
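Going back to the TWT model, (4.265)–(4.266) act on a baseband sample A e^{jφ} by compressing the envelope and adding the phase rotation Φ[A]. A minimal sketch, using the parameter values of Figure 4.42 as defaults:

```python
import cmath

def twt(a_in, alpha_A=1.0, beta_A=0.25, alpha_P=0.26, beta_P=0.25):
    """Memoryless TWT envelope model: for input A*e^{j*phi} the output is
    G[A]*e^{j*(phi + Phi[A])}, with G and Phi as in (4.265)-(4.266)."""
    A = abs(a_in)
    G = alpha_A * A / (1.0 + beta_A * A * A)        # AM/AM conversion
    Phi = alpha_P * A * A / (1.0 + beta_P * A * A)  # AM/PM conversion, radians
    return G * cmath.exp(1j * (cmath.phase(a_in) + Phi))

# with these parameters G[A] peaks (saturates) at A = 1/sqrt(beta_A) = 2,
# where G = alpha_A / (2*sqrt(beta_A)) = 1:
peak = abs(twt(2.0))
```

Driving the amplifier closer to the peak (smaller IBO) increases the nonlinear distortion, which is why the operating point is specified via the backoff.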
Therefore the conversion functions of the SSPA are

    G[A] = A / (1 + A^{2p})^{1/(2p)}                                  (4.267)

    Φ[A] = 0                                                          (4.268)

where p is a suitable parameter. In Figure 4.43 the function G[A] is plotted for three values of p; the superimposed dashed line is an ideal limiter curve given by

    G[A] = A   for 0 < A < 1,      G[A] = 1   for A ≥ 1               (4.269)
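The SSPA characteristic (4.267) interpolates between a linear amplifier and the ideal limiter (4.269), the knee sharpening as p grows. A minimal sketch:

```python
def sspa_gain(A, p):
    """SSPA AM/AM conversion (4.267): G[A] = A / (1 + A^(2p))^(1/(2p));
    the AM/PM conversion is taken as zero, as in (4.268)."""
    return A / (1.0 + A ** (2 * p)) ** (1.0 / (2 * p))

# nearly linear for small A, saturated at 1 for large A; a large p
# approaches the ideal limiter curve of (4.269):
small = sspa_gain(0.01, 2)   # ~ 0.01 (linear region)
large = sspa_gain(100.0, 2)  # ~ 1 (saturation)
knee = sspa_gain(1.0, 50)    # already close to 1 at A = 1 when p is large
```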
[Figure 4.43. AM/AM characteristic of a SSPA for p = 1, 2, 3.]

[Figure 4.44. AM/AM experimental characteristic of two amplifiers operating at 38 GHz and 40 GHz.]
It is interesting to compare the above analytical models with the behavior of a practical HPA: Figure 4.44 illustrates the AM/AM characteristics of two waveguide HPAs operating at frequencies of 38 GHz and 40 GHz.

Transmission medium

The transmission medium is typically modelled as a filter. For transmission lines and radio links the models are given respectively in Sections 4.4 and 4.6. At the receiver, degradation is also caused by noise, such as the noise introduced by a receive antenna or the thermal noise and shot noise generated by the preamplifier stage of a receiver; all these noise signals are modelled as an effective additive white Gaussian noise (AWGN) signal, statistically independent of the desired signal. The power spectral density of the AWGN can be obtained by the analysis of the system devices, or by experimental measurements.

Phase noise

The demodulators used at the receivers are classified as "coherent" or "non-coherent", depending on whether or not they use a carrier signal to demodulate the received signal; this carrier should ideally have the same phase and frequency as the carrier at the transmitter. Typically both phase and frequency are recovered from the received signal by a phase-locked loop (PLL) system, which employs a local oscillator. The recovered carrier may differ from the transmitted carrier because of the phase noise and frequency drift of the oscillator, and because of the dynamics and transient behavior of the PLL. The recovered carrier is expressed as

v(t) = V_0 [1 + a(t)] cos( ω_0 t + (d/2) t² + φ_j(t) )   (4.270)

where d (long-term drift) represents the effect due to ageing of the oscillator, a(t) is the amplitude noise, and φ_j(t) denotes the phase noise12 due to short-term stability. Both a(t) and φ_j(t) consist of deterministic components and random noise; the effects of temperature changes, supply voltage, and the output impedance of the oscillator are included among the deterministic components. The phase noise is usually represented in a transmission system model as in Figure 4.41.
Often the amplitude noise a(t) is negligible.

Additive noise

Several noise sources that cause a degradation of the received signal may be present in a transmission system: consider, for example, the noise present at the receiver input.

12 Sometimes also called phase jitter.
Ignoring the deterministic effects, with the exception of the frequency drift, a PSD model of φ_j(t), often used, comprises five terms:

P_φj(f) = k_{−4} f_0²/f⁴ + k_{−3} f_0²/f³ + k_{−2} f_0²/f² + k_{−1} f_0²/f + k_0 f_0²   for f_ℓ ≤ f ≤ f_h   (4.271)

where the five terms represent, in order: random frequency walk, flicker frequency noise, random phase walk (or white frequency noise), flicker phase noise, and white phase noise. A simplified model, expressed in dB, is given by

P_φj(f) = a  for |f| ≤ f_1,   b (1/f²)  for f_1 ≤ |f| < f_2,   c  otherwise   (4.272)

where the parameters a and c are typically of the order of −65 dBc/Hz and −125 dBc/Hz, respectively, and b is a scaling factor that depends on f_1 and f_2 and assures continuity of the PSD. dBc means dB with respect to the statistical power of the desired signal received in the passband; that is, it represents the statistical power of the phase noise relative to the carrier. The plot of (4.272) is shown in Figure 4.45 for f_1 = 0.1 MHz, f_2 = 2 MHz, a = −65 dBc/Hz, and c = −125 dBc/Hz. Depending on the values of a, b, c, f_1 and f_2, typical values of the statistical power of φ_j(t) are in the range from 10⁻² to 10⁻⁴.

Figure 4.45. Simplified model of the phase-noise power spectral density, P_φ(f) in dBc/Hz, showing the ~ −20 dB/decade slope between f_1 and f_2.
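A sketch of the simplified model (4.272), in which the scaling factor b is computed to make the 1/f² segment continuous with the flat level a at f_1 (the function name and return convention are illustrative, not from the text):

```python
import math

def phase_noise_psd_dbc(f, f1=0.1e6, f2=2e6, a_dbc=-65.0, c_dbc=-125.0):
    # Simplified phase-noise PSD (4.272), returned in dBc/Hz
    f = abs(f)
    a_lin = 10.0 ** (a_dbc / 10.0)
    b = a_lin * f1 ** 2          # continuity of the PSD at f = f1
    if f <= f1:
        p_lin = a_lin            # flat region
    elif f < f2:
        p_lin = b / f ** 2       # -20 dB/decade region
    else:
        p_lin = 10.0 ** (c_dbc / 10.0)
    return 10.0 * math.log10(p_lin)

# one decade above f1 the PSD has dropped by 20 dB
print(phase_noise_psd_dbc(0.1e6), phase_noise_psd_dbc(1e6), phase_noise_psd_dbc(5e6))
```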
pp. Boston. London: Chapman & Hall. IEEE. p. 106–108. MA: Kluwer Academic Publishers. 35. pp.. IEEE Communications Magazine. S. [5] R. NJ: PrenticeHall. G. [12] K. Proc. and T. pp. Wireless digital communications. Spilker. Transmission systems for communications. New York: IEEE Press. J. Whinnery.55 µm”. NJ: PrenticeHall. Fields and Waves in Communication Electronics. L. 1997. [3] S. MA: Kluwer Academic Publishers. Englewood Cliffs. Englewood Cliffs. 1990. 1998. [14] K.). NC: Bell Telephone Laboratories. [2] M. [8] T. [13] T. Miya. 1980. Feher. Wireless information networks. Microwave mobile communications. R. 1977. Winston. Nov. IEE Electronics Letters. Hosaka. 1979. Palais. Englewood Cliffs. Oct. 1995. [16] G. Terunuma. Stuber. of the Technical Staff. 15. Rappaport. C. New York: John Wiley & Sons. G. G. [15] W. Boca Raton: CRC Press. vol. Van Duzer. J.. ¨ ¸ [4] G. H. Messerschmitt and E. NJ: PrenticeHall. [9] T. C.328 Chapter 4. Miyashita. Cherubini. ed. D. Ramo. Feb. Transmission media Bibliography [1] C. K. vol. Englewood Cliffs. NJ: PrenticeHall. S. Gibson. Li. 5th ed. Electromagnetic waves. 2nd ed. Pahlavan and A. NJ: PrenticeHall. New York: Marcel Dekker.. Fiber optic communications. in The Communications Handbook (J. Creigh. 1996. 1996. Jakes. Wireless communications: principles and practice. New York: John Wiley & Sons. 2nd ed. Someda. Norwell. 1965. J. “100BASET2: a new standard for 100 Mb/s ethernet transmission over voicegrade cables”. J. 3rd ed. 1997. Digital communications by satellite. 1992. and T. [6] L. Upper Saddle River. “Fiber optic communications systems”. 115–122. and S. [7] J. Ungerboeck. C. Principles of mobile communication. .. B. Jeunhomme. 1995. Digital communication. and transmission properties of optical ﬁbers”. 1994. [11] D. [10] J. 1990. Rao. Y. Singlemode ﬁber optics. A. Olcer. 54. parameters. “Structures. [17] J. Hoss. Levesque. ch. “Ultimate lowloss singlemode ﬁbre at 1. Palais. Lee. 731–739. 1982. 1175. Fiber optic communications. 
T. 1993.
[18] A. V. Oppenheim and R. W. Schafer, Discrete-time signal processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[19] A. Anastasopoulos and K. Chugg, "An efficient method for simulation of frequency selective isotropic Rayleigh fading", in Proc. 1997 IEEE Vehicular Technology Conference, pp. 2084–2088, May 1997.
Chapter 5
Digital representation of waveforms
Figure 5.1a illustrates the conventional transmission of an analog signal, for example speech or video, over an analog channel; in this scheme the transmitter usually consists of an amplifier and possibly a modulator, the analog transmission channel is of the type discussed in Chapter 4, and the receiver consists of an amplifier and possibly a demodulator. Alternatively, the transmission may take place by first encoding1 the information contained in the analog signal into a sequence of bits, using for example an analog-to-digital converter (ADC), as illustrated in Figure 5.1b. If T_b is the time interval between two consecutive bits of the sequence, the bit rate of the ADC is R_b = 1/T_b (bit/s). The binary message is converted by a digital modulator into a waveform that is suitable for transmission over an analog channel. At the receiver, the reverse process occurs: a digital demodulator restores the binary message, whereas the conversion of the sequence of bits into an analog signal is performed by a digital-to-analog converter (DAC). The system that has as input the sequence of bits produced by the ADC, and as output the sequence of bits produced by the digital demodulator, is called a binary channel (see Chapter 7). In this chapter the principles and methods for the conversion of analog signals into binary messages and vice versa will be discussed; as a practical example we will use speech, but the principles may be extended to any analog signal. To compute system performance, a fundamental parameter is the signal-to-noise ratio. Let s(t) be the original signal, s̃(t) the reconstructed signal, and e_q(t) = s̃(t) − s(t); then the signal-to-noise ratio is defined as

Λ_q = E[s²(t)] / E[e_q²(t)]   (5.1)
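The ratio (5.1) can be estimated from samples of the original and reconstructed signals; a minimal sketch (the function name is illustrative):

```python
def snr_lambda_q(s, s_rec):
    # Estimate of Lambda_q = E[s^2] / E[e_q^2] from sample averages, cf. (5.1)
    n = len(s)
    power_s = sum(x * x for x in s) / n
    power_e = sum((y - x) ** 2 for x, y in zip(s, s_rec)) / n
    return power_s / power_e
```

For example, a reconstruction error whose power is 1% of the signal power gives Λ_q = 100, i.e. 20 dB.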
5.1
Analog and digital access
Analog access over a telephone channel in the public switched telephone network (PSTN) is illustrated in Figure 5.2. With reference to the figure, the word "modem" is the contraction of modulator-demodulator. Its function is to convert a binary message, or data signal, into
1
We bring to the attention of the reader that the terms “encoder” and “decoder” are commonly used to indicate various devices in a communication system. In this chapter we will deal with encoders and decoders for the digital representation of analog waveforms.
Figure 5.1. Analog vs. digital transmission.
an analog passband signal that can be transmitted over the telephone channel. In Figure 5.2, the source generates a speech signal or a data file; in the latter case, a modem is required to transmit the signal. The analog signal s(t), which has a band of approximately 300–3400 Hz, is sent over a local loop to the central office (see Chapter 4): here it is usually converted into a binary digital message via PCM at 64 kbit/s; in turn this message is modulated before being transmitted over an analog channel. After having crossed several central offices where switching (routing) of the signal takes place, the PCM encoded message arrives at the destination central office: here it is converted into an analog signal and sent over a local loop to the end user. It is here that the signal must be identified as a speech signal or a digitally modulated signal; in the latter case a modem will demodulate it to reproduce the data message. Figure 5.3 illustrates the concept of direct digital access at the user's premises. An analog signal is converted into a digital message via an ADC. The user digital message is then sent over the analog channel by a modulator. At the receiver the inverse process takes place, where the digital message obtained at the output of the demodulator may be used to restore an analog signal via a DAC. In comparing the two systems, we note the waste of capacity of the system in Figure 5.2. For example, for a 9600 bit/s modem, the modulated PCM encoded signal requires a standard capacity of R_b = 64 kbit/s. By directly accessing the PCM link at the user's home, we could transmit 64000/9600 ≈ 6 data signals at 9600 bit/s.
5.1.1
Digital representation of speech
Some waveforms
Some examples of speech waveforms for an interval of 0.25 s are given in Figure 5.4. From these plots, we can obtain a speech model as a succession of voiced speech spurts (see Figure 5.5a), or unvoiced speech spurts (see Figure 5.5b). In the ﬁrst case, the signal
Figure 5.2. User line with analog access.
Figure 5.3. User line with digital access.
Figure 5.4. Speech waveforms.
Figure 5.5. Voiced and unvoiced speech spurts.
is strongly correlated and almost periodic, with a period that is called pitch, and exhibits large amplitudes; conversely, in an unvoiced speech spurt the signal is weakly correlated and has small amplitudes. We note moreover that the average level of speech changes in time: indeed speech is a non-stationary signal. In Figure 5.6 it is interesting to observe the instantaneous spectrum of some voiced and unvoiced sounds; we also note that the latter may have a bandwidth larger than 10 kHz. Concerning the amplitude distribution of speech signals, we observe that over short time intervals, of the order of a few tens of milliseconds (or of a few hundreds of samples at a sampling frequency of 8 kHz), the amplitude statistic is Gaussian with good approximation; over long time intervals, because of the numerous pauses in speech, it tends to exhibit a gamma or Laplacian distribution. We give here the probability density functions of the amplitude that are usually adopted. Let σ_s be the standard deviation of the signal s(t); then we have

gamma:      p_s(a) = √( √3 / (8π σ_s |a|) ) e^{ −√3 |a| / (2σ_s) }

Laplacian:  p_s(a) = (1 / (√2 σ_s)) e^{ −√2 |a| / σ_s }

Gaussian:   p_s(a) = (1 / (√(2π) σ_s)) e^{ −(1/2)(a/σ_s)² }   (5.2)
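The densities in (5.2) can be compared numerically. The sketch below (assuming σ_s = 1; names are illustrative) shows that the Laplacian is much more peaked at the origin than the Gaussian, consistent with the many low-amplitude samples of speech:

```python
import math

def laplacian_pdf(a, sigma_s=1.0):
    # Laplacian amplitude density from (5.2)
    return math.exp(-math.sqrt(2.0) * abs(a) / sigma_s) / (math.sqrt(2.0) * sigma_s)

def gaussian_pdf(a, sigma_s=1.0):
    # Gaussian amplitude density from (5.2)
    return math.exp(-0.5 * (a / sigma_s) ** 2) / (math.sqrt(2.0 * math.pi) * sigma_s)

print(laplacian_pdf(0.0), gaussian_pdf(0.0))   # the Laplacian peak is higher
```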
As mentioned above, analog modulated signals generated by modems, often called voiceband data signals, are also transmitted over telephone channels. Figure 5.7 illustrates a
Figure 5.6. Spectrum of voiced and unvoiced sounds for a sampling frequency of 20 kHz.
Figure 5.7. Signal generated by a modem employing FSK modulation for the transmission of 1200 bit/s.
Figure 5.8. Signal generated by modems employing PSK modulation for the transmission of: (a) 2400 bit/s; (b) 4800 bit/s.
signal produced by the 202S modem, which employs FSK modulation for the transmission of 1200 bit/s, whereas Figure 5.8a and Figure 5.8b illustrate signals generated by the 201C and 208B modems, which employ PSK modulation for the transmission of 2400 and 4800 bit/s, respectively. For the deﬁnition of FSK and PSK modulation we refer the reader to Chapter 6. In general, we note that the average level of signals generated by modems is stationary; moreover, if the bit rate is low, signals are strongly correlated.
Speech coding
Speech coding addresses person-to-person communications and is strictly related to the transmission, for example over the public network, and storage of speech signals. The aim is to represent, using an encoder, speech as a digital signal that requires the lowest possible bit rate to recreate, by an appropriate decoder, the speech signal at the receiver [1]. Depicted in Figure 5.9 is a basic scheme, denoted as ADC, that provides the analog-to-digital conversion (encoding) of the signal, consisting of:

1. an anti-aliasing filter followed by a sampler at sampling frequency 1/T_c;
2. a quantizer;
3. an inverse bit mapper (IBMAP) followed by a parallel-to-serial (P/S) converter.

As indicated by the sampling theorem, the choice of the sampling frequency F_c = 1/T_c is related to the bandwidth of the signal s(t) (see (1.142)). In practice, there is a trade-off between the complexity of the anti-aliasing filter and the choice of the sampling frequency, which must be greater than twice the signal bandwidth. For audio signals, F_c depends on the signal quality that we wish to maintain and therefore it depends on the application, see Table 5.1 [2].
Figure 5.9. Basic scheme for the digital transmission of an analog signal.
Table 5.1 Sampling frequency of the audio signal in three applications.
Application            Passband (Hz)                    Fc (Hz)
telephone              300–3400 (narrow-band speech)    8000
broadcasting           50–7000 (wide-band speech)       16000
audio, compact disc    10–20000                         44100
digital audio tape     10–20000                         48000
The choice of the quantizer parameters is somewhat more complicated and will be dealt with in detail in the following sections. For now we will consider the quantizer as an instantaneous non-linear transformation that maps the real values of s into a finite number of values of s_q. To illustrate the principle of an ADC, let us assume that s_q takes values from a set of 8 elements:2

Q[s(kT_c)] = s_q(kT_c) ∈ {Q_{−4}, Q_{−3}, Q_{−2}, Q_{−1}, Q_1, Q_2, Q_3, Q_4}   (5.3)

Therefore s_q(kT_c) may assume only a finite number of values, which can be represented as binary values, for example using the inverse bit mapper of Table 5.2. It is convenient to consider the sequence of bits that gives the binary representation of {s_q} instead of the sequence of values itself. In our example, with a representation using three bits per sample, the bit rate of the system is equal to

R_b = 3 F_c  (bit/s)   (5.4)
The inverse process (decoding) takes place at the receiver: the bitmapper (BMAP) restores the quantized levels, and an interpolator ﬁlter yields an estimate of the analog signal.
The interpolator ﬁlter as a holder
Always referring to the sampling theorem, an ideal interpolator filter3 has the following frequency response:

G_I(f) = rect( f / F_c )   (5.5)
Table 5.2 Encoder inverse bit mapper.

Values s_q(kT_c)   Integer representation   Binary representation c(k) = (c_2, c_1, c_0)
Q_{−4}             0                        000
Q_{−3}             1                        001
Q_{−2}             2                        010
Q_{−1}             3                        011
Q_1                4                        100
Q_2                5                        101
Q_3                6                        110
Q_4                7                        111
2 The notation adopted in (5.3) to define the set reflects the fact that in most cases the set of values assumed by s_q is symmetrical around the origin.
3 From Observation 1.7 on page 71, if s̃_q(kT_c) is WSS, then the interpolated random process s̃_q(t) is WSS and E[s̃_q²(t)] = E[s̃_q²(kT_c)], whenever the gain of the interpolator filter is equal to one. As a result the signal-to-noise ratio in (5.1) becomes independent of t and can be computed using the samples of the processes, Λ_q = E[s²(kT_c)] / E[e_q²(kT_c)].
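The inverse bit mapper of Table 5.2 can be sketched as follows, identifying each level by its index i ∈ {−4, ..., −1, 1, ..., 4} (the function name is illustrative):

```python
def ibmap(i):
    # Inverse bit mapper of Table 5.2: level index -> 3-bit code word.
    # Negative indices map to integers 0..3, positive indices to 4..7.
    assert i != 0 and -4 <= i <= 4
    c = i + 4 if i < 0 else i + 3
    return format(c, '03b')

print(ibmap(-4), ibmap(-1), ibmap(1), ibmap(4))
```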
Figure 5.10. DAC interpolator as a holder.
Typically, however, the DAC employs a simple holder that holds the input values, as illustrated in Figure 5.10. In this case

g_I(t) = rect( (t − T_c/2) / T_c ) = w_{T_c}(t)   (5.6)

and

G_I(f) = T_c sinc( f / F_c ) e^{−j2π f T_c/2}   (5.7)
Unless the sampling frequency has been chosen sufficiently higher than twice the bandwidth of s(t), we see that the filter (5.7), besides not attenuating enough the images of s̃_q(kT_c), introduces distortion in the passband of the desired signal.4 A solution to this problem consists in introducing, before interpolation, a digital equalizer filter with a frequency response equal to 1/sinc(f T_c) in the passband of s(t). Figure 5.11 illustrates the solution. A simple digital equalizer filter is given by

G_comp(z) = (9/8) z^{−1} − (1/16)(1 + z^{−2})   (5.8)

i.e. a three-coefficient FIR filter with coefficients [−1/16, 9/8, −1/16], whose frequency response is given in Figure 5.12.
Figure 5.11. Holder ﬁlter preceded by a digital equalizer.
4 In many applications, to simplify the analog interpolator filter, the signal before interpolation is oversampled: for example, by digital interpolation of the signal s̃_q(kT_c) by at least a factor of 4.
Figure 5.12. Frequency responses of a three-coefficient equalizer filter g_comp and of the overall filter g_I = g_comp * w_{T_c}.
An alternative solution is represented by an IIR filter with

G_comp(z) = (9/8) / (1 + (1/8) z^{−1})   (5.9)

whose frequency response is given in Figure 5.13. In the following sections, by the term DAC we mean a digital-to-analog converter with the aforementioned variations.
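The compensation provided by the two equalizers can be verified numerically. Assuming the three-coefficient form [−1/16, 9/8, −1/16] for (5.8), both equalizers have unit gain at DC and boost the signal towards f = 1/(2T_c), partially undoing the sinc roll-off of the holder (5.7). A sketch with T_c = 1 and illustrative names:

```python
import cmath, math

def holder_gain(f):
    # |G_I(f)| of the holder (5.7) with Tc = 1
    return 1.0 if f == 0 else abs(math.sin(math.pi * f) / (math.pi * f))

def fir_comp_gain(f):
    # |G_comp| of the three-coefficient FIR equalizer [-1/16, 9/8, -1/16], cf. (5.8)
    z = cmath.exp(2j * math.pi * f)
    return abs(-1 / 16 + (9 / 8) / z - (1 / 16) / z ** 2)

def iir_comp_gain(f):
    # |G_comp| of the IIR equalizer (5.9)
    z = cmath.exp(2j * math.pi * f)
    return abs((9 / 8) / (1 + (1 / 8) / z))

for f in (0.0, 0.125, 0.25):
    print(f, holder_gain(f), holder_gain(f) * fir_comp_gain(f))
```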
Sizing of the binary channel parameters
As will be further discussed in Section 6.2, in Figure 5.9 the binary channel is characterized by the encoder bit rate R_b. If B is the bandwidth of s(t), the sampling frequency F_c is such that

F_c = 1/T_c ≥ 2B   (5.10)

If L = 2^b is the number of levels of the quantizer, then the encoder bit rate is equal to R_b = b F_c bit/s. Another important parameter of the binary channel is the bit error probability

P_bit = P[ b̂_ℓ ≠ b_ℓ ]   (5.11)

If an error occurs, the reconstructed binary representation c̃(k) is different from c(k); consequently the reconstructed level is s̃_q(kT_c) ≠ s_q(kT_c). In the case of a speech signal,
Figure 5.13. Frequency responses of an IIR equalizer filter g_comp and of the overall filter g_I = g_comp * w_{T_c}.
such an event is perceived by the ear as an annoying impulse disturbance. For speech signals to have an acceptable quality at the receiver, it must be P_bit ≤ 10⁻³.
5.1.2
Coding techniques and applications
At the output of an ADC, the PCM encoded samples, after suitable transformations, can be further quantized in order to reduce the bit rate. From [3], we list in Figure 5.14 various coding techniques, divided into three groups, that essentially exploit two elements:

• redundancy of speech;
• sensitivity of the ear as a function of frequency.

Waveform coding. Waveform encoders attempt to reproduce the waveform as closely as possible. This type of coding is applicable to any type of signal; two examples are the PCM and ADPCM schemes.

Coding by modeling. In this case coding is not related to signal samples, but to the parameters of the source that generates them. Assuming the voiced/unvoiced speech model, an example of a classical encoder (vocoder) is given in Figure 5.15, where a periodic excitation, or a white noise segment, filtered by a suitable filter, yields a synthesized speech segment. A more sophisticated model uses a more articulated multipulse excitation.

Frequency-domain coding. In this case coding occurs after transformation of the signal to a domain different from time, usually frequency: examples are subband coding and transform coding.




Figure 5.14. Characteristics exploited by the different coding techniques.
Figure 5.15. Vocoder and multipulse models for speech synthesis.
Table 5.3 Voice coding techniques.
Bit rate (kbit/s): 1.2, 2.4, 4.8, 8.0, 9.6, 16, 32, 64

Algorithm                         Acronym   Year
Codebook excited LP               CELP      1984
Multipulse excited LP             MELP      1982
Vector quantization               VQ        1980
Time domain harmonic scaling      TDHS      1979
Adaptive transform coding         ATC       1977
Subband coding                    SBC       1976
Residual excited LP               RELP      1975
Adaptive predictive coding        APC       1968
Formant vocoder                   FORV      1971
Cepstral vocoder                  CEPV      1969
Channel vocoder                   CHAV      1967
Phase vocoder                     PHAV      1966
Linear prediction vocoder         LPCV      1966
Adaptive differential PCM         ADPCM
Differential PCM                  DPCM
Adaptive delta modulation         ADM
Delta modulation                  DM
Pulse code modulation             PCM
Various coding techniques are listed in Table 5.3. Table 5.4 illustrates the characteristics of a few systems, highlighting that for more sophisticated encoders the implementation complexity, expressed in millions of instructions per second (MIPS), as well as the delay introduced by the encoder (latency), can be considerable. The various coding techniques differ in quality and cost of implementation. With respect to perceived quality, on a scale from poor to excellent, three categories of encoders perform as illustrated in Figure 5.16: obviously a higher implementation complexity is expected for encoders with low bit rate and good quality. We go from a bit rate in the range from 4.4 to 9.6 kbit/s for cellular radio systems, to a bit rate in the range from 16 to 64 kbit/s for transmission over the public network. Generally a coding technique is strictly related to the application and depends on various factors:

• signal type (for example speech, music, voiceband data, signalling, etc.);
• maximum tolerable latency;
• implementation complexity.

In particular, speech encoder applications for bit rates in the range 4–16 kbit/s are:

• long distance and satellite transmission;
• digital mobile radio (cellular radio);
Table 5.4 Parameters of a few speech coders.
Coder    Bit rate (kbit/s)   Computational complexity (MIPS)   Latency (ms)
PCM      64                  0.0                               0
ADPCM    32                  0.1                               0
ASBC     16                  1                                 25
MELP     8                   10                                35
CELP     4                   100                               35
LPC      2                   1                                 35
Figure 5.16. Audio quality vs. bit rate for three categories of encoders.
• modem transmission over the telephone channel (voice mail);
• speech storage for telephone services and speech encryption;
• packet networks with integrated speech and data.
5.2  Instantaneous quantization

5.2.1  Parameters of a quantizer
We consider a sample of a discrete-time random process s(kT_c), obtained by sampling the continuous-time process s(t) with rate F_c. To simplify the notation we choose T_c = 1, unless otherwise stated. With reference to the scheme of Figure 5.17, for a quantizer with L output values we have:

• input signal s(k) ∈ ℝ;
• quantized signal s_q(k) ∈ A_q = {Q_{−L/2}, ..., Q_{−1}, Q_1, ..., Q_{L/2}}; the L values of the alphabet A_q are called output levels;
Figure 5.17. Quantization and mapping scheme: (a) encoder, (b) decoder.
• code word c(k) ∈ {0, 1, ..., L − 1}, which represents the value of s_q(k).

The system with input s(k) and output c(k) constitutes a PCM encoder. The quantizer can be described by the function

Q : ℝ → A_q   (5.12)

For a given partition of the real axis into the intervals {R_i}, i = −L/2, ..., −1, 1, ..., L/2, such that ℝ = ∪_{i=−L/2, i≠0}^{L/2} R_i, with R_i ∩ R_j = ∅ for i ≠ j, (5.12) implies the following rule:

Q[s(k)] = s_q(k) = Q_i   if s(k) ∈ R_i   (5.13)

A common choice for the decision intervals R_i is given by:

R_i = (τ_i, τ_{i+1}]   for i = −L/2, ..., −1
R_i = (τ_{i−1}, τ_i]   for i = 1, ..., L/2   (5.14)

where τ_{−L/2} = −∞ and τ_{L/2} = ∞. We note that the decision thresholds {τ_i} are L − 1, being τ_{−L/2} and τ_{L/2} assigned. The mapping rule (5.12) is called the quantizer characteristic and is illustrated in Figure 5.18 for L = 8 and τ_0 = 0. The L values of s_q(k) can be represented by integers c(k) ∈ {0, 1, ..., L − 1} or by a binary representation with ⌈log2 L⌉ bits. For the quantizer characteristic of Figure 5.18, a binary representation is adopted that goes from 000 (the minimum level) to 111 (the maximum level); in this example the bit rate of the system is equal to R_b = 3F_c bit/s. Let

e_q(k) = s_q(k) − s(k)   (5.15)

be the quantization error. From the relation s_q(k) = s(k) + e_q(k) we have that the quantized signal is affected by a certain error e_q(k). We can formulate the problem as that of representing s(k) with the minimum number of bits b, to minimize the system bit rate, while at the same time constraining the quantization error, so that a certain level of quality of the quantized signal is maintained.

Observation 5.1  In this chapter the notation c(k) is used to indicate both an integer number and its vectorial binary representation (see (5.18)). Furthermore, in the context of vector quantization the elements of the set A_q are called code words.
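Rules (5.13)–(5.14) can be sketched as a table-driven quantizer, where a binary search locates the decision interval of the input sample (names are illustrative; thresholds are the L − 1 finite values τ_i):

```python
import bisect

def make_quantizer(thresholds, levels):
    # thresholds: sorted finite decision thresholds (length L-1)
    # levels:     output values Q_i (length L)
    # Interval convention follows (5.14): open on the left, closed on the right.
    assert len(levels) == len(thresholds) + 1
    def Q(s):
        return levels[bisect.bisect_left(thresholds, s)]
    return Q

# uniform mid-riser example with L = 4 and step 1
Q = make_quantizer([-1.0, 0.0, 1.0], [-1.5, -0.5, 0.5, 1.5])
print(Q(0.3), Q(0.0), Q(-5.0), Q(2.0))
```

Inputs beyond the extreme thresholds are mapped to the outermost levels, which is exactly the saturation behavior discussed later in this section.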
Figure 5.18. Three-bit quantizer characteristic.
5.2.2
Uniform quantizers
A quantizer with L = 2^b equally spaced output levels and decision thresholds is called uniform. For a uniform quantizer:

τ_{i+1} − τ_i = Δ   for i = −L/2 + 1, ..., −1
τ_i − τ_{i−1} = Δ   for i = 1, 2, ..., L/2 − 1   (5.16)

Q_{i+1} − Q_i = Δ   for i = −L/2, ..., −2
Q_1 − Q_{−1} = Δ
Q_i − Q_{i−1} = Δ   for i = 2, ..., L/2   (5.17)

where Δ is the quantization step size. Two types of characteristics are distinguished, mid-tread and mid-riser, depending on whether the zero output level belongs or not to A_q.

Mid-riser characteristic. The quantizer characteristic is given in Figure 5.19 for L = 8: in this case the smallest value, in magnitude, assumed by s_q(k) is Δ/2, even for a very small input value s. Let the binary representation of c(k) be defined according to the following rule: the most significant bit of the binary representation of c(k) denotes the sign (±1) of the input value, whereas the remaining bits denote the amplitude. Therefore, adopting the binary vector representation

c(k) = [c_{b−1}(k), ..., c_0(k)],   c_j(k) ∈ {0, 1}   (5.18)
Figure 5.19. Uniform quantizer with mid-riser characteristic (b = 3).
the relation between s_q(k) and c(k) is given by

s_q(k) = Δ (1 − 2c_{b−1}(k)) Σ_{j=0}^{b−2} c_j(k) 2^j + (Δ/2)(1 − 2c_{b−1}(k))   (5.19)
Mid-tread characteristic. The quantizer characteristic is shown in Figure 5.20. Zero is a value assumed by s_q. Let the binary representation of c(k) be the two's complement representation of the level number; then we have s_q(k) = Δ c(k). Note that the characteristic is asymmetric around zero, hence we may use L − 1 levels (giving up the minimum output level), or choose an implementation that can be slightly more complicated than in the case of a symmetric characteristic (see page 357).
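The mid-riser mapping (5.19) can be sketched directly from the bit vector (5.18); the function name is illustrative:

```python
def midriser_level(c, delta):
    # Output level from the code word c = [c_{b-1}, ..., c_0], cf. (5.19):
    # the MSB carries the sign, the remaining bits the amplitude.
    sign = 1 - 2 * c[0]
    amplitude = 0
    for bit in c[1:]:            # c_{b-2} ... c_0, most significant first
        amplitude = 2 * amplitude + bit
    return delta * sign * amplitude + (delta / 2) * sign

# with b = 3 and Delta = 1 this reproduces the levels of Figure 5.19
print([midriser_level(list(c), 1.0) for c in
       ([0, 0, 0], [0, 1, 1], [1, 0, 0], [1, 1, 1])])
```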
Quantization error
We will refer to symmetrical quantizers with mid-riser characteristic. An example with L = 2³ = 8 levels is given in Figure 5.21: in this case the decision thresholds are τ_i = iΔ, i = −L/2 + 1, ..., −1, 0, 1, ..., L/2 − 1, with, as usual, τ_{−L/2} = −∞ and τ_{L/2} = ∞. The output values are given by

Q_i = (i + 1/2) Δ   for i = −L/2, ..., −1
Q_i = (i − 1/2) Δ   for i = 1, ..., L/2   (5.20)

Correspondingly, the decision intervals are given by (5.14).
Figure 5.20. Uniform quantizer with mid-tread characteristic (b = 3).
We note that if s_q(k) = Q_i, then the b − 1 least significant bits of c(k) are given by the binary representation of (|i| − 1), and c(k) assumes amplitude values that go from 0 to L/2 − 1 = 2^{b−1} − 1. If for each value of s we compute the corresponding error e_q = Q(s) − s, we obtain the quantization error characteristic of Figure 5.21. We define the quantizer saturation value as

τ_sat = τ_{(L/2)−1} + Δ   (5.21)

that is, shifted by Δ with respect to the last finite threshold value. Then we have

|e_q| ≤ Δ/2   for |s| < τ_sat   (5.22)

and

e_q = Q_{L/2} − s   for s > τ_sat
e_q = Q_{−L/2} − s  for s < −τ_sat   (5.23)

Consequently, e_q may assume large values if |s| > τ_sat. This observation suggests that the real axis be divided into two parts:

1. the region s ∈ (−∞, −τ_sat) ∪ (τ_sat, +∞), where e_q is called saturation or overload error (e_sat);
Figure 5.21. Uniform quantizer (b = 3).
2. the region s ∈ [−τ_sat, τ_sat], where e_q is called granular error (e_gr); the interval [−τ_sat, τ_sat] is also called the quantizer range.

It is often useful to compactly represent the quantizer characteristic on a single axis, as illustrated in Figure 5.21c, where the values of the decision thresholds are indicated by dashed lines, and the quantizer output values by dots.
Relation between Δ, b and τ_sat

The quantization step size Δ is chosen so that

2 τ_sat = L Δ   (5.24)

Therefore, for L = 2^b,

Δ = 2 τ_sat / 2^b   (5.25)

If |s(k)| < τ_sat, observing (5.22), this choice guarantees that e_q is granular with amplitude in the range

−Δ/2 ≤ e_q(k) ≤ Δ/2   (5.26)

If |s(k)| > τ_sat, the saturation error can assume large values: therefore τ_sat must be chosen so that the probability of the event |s(k)| > τ_sat is small. For a fixed number of bits b, and consequently for a fixed number of levels L, it is important to note that, increasing τ_sat, Δ also increases and hence so does the granular error; on the other hand, choosing a small Δ leads to a considerable saturation error. As a result, for each value of b there will be an optimum choice of τ_sat and hence of Δ. In any case, to decrease both errors we must increase b, with a consequent increase of the encoder bit rate.
Statistical description of the quantization noise
In Figure 5.22 we give an equivalent model of a quantizer where the quantization error is modeled as additive noise. Assuming (5.26) holds, that is for granular e_q, we make the following assumptions.

1. The quantization error is white:

E[e_q(k) e_q(k − n)] = M_eq  for n = 0,   0  for n ≠ 0   (5.27)

2. It is uncorrelated with the input signal:

E[s(k) e_q(k − n)] = 0   for all n   (5.28)

3. It has a uniform distribution (see Figure 5.23):

p_eq(a) = 1/Δ,   −Δ/2 ≤ a ≤ Δ/2   (5.29)
Figure 5.22. Equivalent model of a quantizer.
Figure 5.23. Probability density function of e_q.
We note that if s(k) is a constant signal the above assumptions are not true; they hold in practice if {s(k)} is described by a function that significantly deviates from a constant and Δ is adequately small, that is, b is large. Figure 5.24 illustrates the quantization error for a 16-level quantized signal. The signal e_q(t) is quite different from s(t) and the above assumptions are plausible. If the probability density function of the signal to be quantized is known, letting g denote the function that relates s and e_q, that is e_q = g(s), also called the quantization error characteristic, the probability density function of the noise is obtained as an application of the theory of functions of a random variable, which yields

p_eq(a) = Σ_{b ∈ g^{−1}(a)} p_s(b) / |g′(b)|,   −Δ/2 < a < Δ/2   (5.30)

where g^{−1} is the inverse of the error function, or equivalently the set of values of s corresponding to a given value of e_q. We note that in this case the slope of the function g is always equal to one, hence g′(b) = 1, and from (5.15) for −Δ/2 < a < Δ/2 we get

g^{−1}(a) = { Q_i − a,  i = −L/2, ..., −1, 1, ..., L/2 }   (5.31)

Finally,

p_eq(a) = Σ_{i=−L/2, i≠0}^{L/2} p_s(Q_i − a),   −Δ/2 < a < Δ/2   (5.32)

It can be shown that, if Δ is small enough, the sum in (5.32) gives origin to a uniform function p_eq, independently of the form of p_s.
Chapter 5. Digital representation of waveforms
Figure 5.24. Quantization error, L = 16 levels.
Statistical power of the quantization error

With reference to the model of Figure 5.22, a measure of the quality of a quantizer is the signal-to-quantization error ratio:

   Λ_q = E[s²(k)] / E[e_q²(k)]    (5.33)

Choosing τ_sat so that e_q is granular, from (5.29) we get

   M_eq ≈ M_egr ≈ Δ²/12    (5.34)

For an exact computation that also includes the saturation error we need to know the probability density function of s. The statistical power of e_q is given by

   M_eq = E[e_q²(k)] = ∫_{−∞}^{+∞} [Q(a) − a]² p_s(a) da
        = ∫_{−τ_sat}^{τ_sat} [Q(a) − a]² p_s(a) da + ∫_{−∞}^{−τ_sat} [Q(a) − a]² p_s(a) da + ∫_{τ_sat}^{+∞} [Q(a) − a]² p_s(a) da    (5.35)

In (5.35) the first term is the statistical power of the granular error, M_egr, and the other two terms express the statistical power of the saturation error, M_esat. Let us assume that p_s(a)
is even and that the characteristic is symmetrical, i.e. τ_{−i} = −τ_i and Q_{−i} = −Q_i; then we get

   M_egr = 2 { Σ_{i=1}^{L/2 − 1} ∫_{τ_{i−1}}^{τ_i} (Q_i − a)² p_s(a) da + ∫_{τ_{L/2 − 1}}^{τ_sat} (Q_{L/2} − a)² p_s(a) da }    (5.36)

   M_esat = 2 ∫_{τ_sat}^{+∞} (Q_{L/2} − a)² p_s(a) da    (5.37)
If the probability of saturation satisfies the relation P[|s(k)| > τ_sat] ≪ 1, then M_esat ≈ 0; introducing the change of variable b = Q_i − a, as τ_i = Q_i + Δ/2 and τ_{i−1} = Q_i − Δ/2, we have

   M_eq ≈ M_egr = 2 Σ_{i=1}^{L/2} ∫_{−Δ/2}^{Δ/2} b² p_s(Q_i − b) db    (5.38)

If Δ is small enough, then

   p_s(Q_i − b) ≈ p_s(Q_i)  for |b| ≤ Δ/2    (5.39)

and assuming 2 (Σ_{i=1}^{L/2} p_s(Q_i) Δ) ≈ ∫_{−∞}^{+∞} p_s(b) db = 1, we get

   M_egr = 2 (Σ_{i=1}^{L/2} p_s(Q_i) Δ) ∫_{−Δ/2}^{Δ/2} (b²/Δ) db ≈ Δ²/12    (5.40)

In conclusion, as in (5.34), we have

   M_eq ≈ Δ²/12    (5.41)

assuming that τ_sat is large enough, so that the saturation error is negligible, and Δ is sufficiently small to verify (5.39).
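As a quick numerical check of (5.41), the sketch below compares the measured error power with Δ²/12; the midrise characteristic, the Gaussian input, and the choice τ_sat = 4σ_s are illustrative assumptions, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: b-bit midrise uniform quantizer loaded at tau_sat = 4*sigma_s.
b, sigma_s = 8, 1.0
tau_sat = 4 * sigma_s
L = 2 ** b
delta = 2 * tau_sat / L                     # quantization step size

s = rng.normal(0.0, sigma_s, 1_000_000)

# Midrise characteristic Q(a) = (floor(a/delta) + 1/2) * delta, clipped at saturation.
sq = (np.floor(s / delta) + 0.5) * delta
sq = np.clip(sq, -tau_sat + delta / 2, tau_sat - delta / 2)

eq = sq - s
M_eq = np.mean(eq ** 2)
print(M_eq / (delta ** 2 / 12))             # close to 1: granular noise dominates
```

With τ_sat = 4σ_s the saturation term of (5.35) is small but not zero, so the printed ratio comes out slightly above one.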
Design of a uniform quantizer

Assuming the input s(k) has zero mean and variance σ_s², and defining the parameter⁵

   k_f = σ_s/τ_sat    (5.42)

the procedure for designing a uniform quantizer consists of three steps.

1. Determine τ_sat so that the saturation probability is sufficiently small:

   P_sat = P[|s(k)| > τ_sat] ≪ 1    (5.43)

   For example, if s(k) ∈ N(0, σ_s²), then⁶

   P_sat = 2Q(τ_sat/σ_s) = 0.046 for τ_sat = 2σ_s,  0.0027 for τ_sat = 3σ_s,  0.000063 for τ_sat = 4σ_s    (5.44)

2. Choose L so that the signal-to-quantization error ratio assumes a desired value:

   Λ_q = M_s/M_eq ≈ σ_s²/(Δ²/12) = 3 k_f² L²

3. Given L and k_f, obtain

   Δ = 2τ_sat/L = 2σ_s/(k_f L)    (5.45)

⁵ Often the inverse 1/k_f = τ_sat/σ_s is called the loading factor.
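The three steps can be sketched in code for a Gaussian input; the function name and the default saturation probability below are illustrative choices, not from the text.

```python
import math

def design_uniform_quantizer(sigma_s: float, lambda_q_db: float, psat_max: float = 0.0027):
    """Sketch of the three-step uniform-quantizer design for a Gaussian input.

    1) pick tau_sat from the saturation probability via the Q function,
    2) pick L from the target signal-to-quantization error ratio,
    3) compute the step size delta.
    """
    # Step 1: Psat = 2 Q(tau_sat/sigma_s) <= psat_max, with 2*Q(x) = erfc(x/sqrt(2)).
    # Invert numerically by bisection.
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        psat = math.erfc(mid / math.sqrt(2))
        lo, hi = (mid, hi) if psat > psat_max else (lo, mid)
    tau_sat = hi * sigma_s
    k_f = sigma_s / tau_sat

    # Step 2: Lambda_q ~ 3 k_f^2 L^2  =>  L = sqrt(Lambda_q / 3) / k_f, rounded up to 2^b
    lambda_q = 10 ** (lambda_q_db / 10)
    L = math.sqrt(lambda_q / 3) / k_f
    b = math.ceil(math.log2(L))

    # Step 3: delta = 2 tau_sat / L
    delta = 2 * tau_sat / 2 ** b
    return tau_sat, b, delta

print(design_uniform_quantizer(sigma_s=1.0, lambda_q_db=40.0))
```

For a 40 dB target with P_sat ≤ 0.0027 this reproduces the example above: τ_sat ≈ 3σ_s, and b = 8 bits are needed.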
Signal-to-quantization error ratio

For L = 2^b, observing (5.44) we have the following result:

   (Λ_q)_dB ≈ 6.02 b + 4.77 + 20 log₁₀(σ_s/τ_sat)    (5.46)

Recalling that this law considers only the granular error, if we double the number of quantizer levels for a given loading factor, i.e. increase by one the number of bits b, the signal-to-quantization error ratio increases by 6 dB.

Example 5.2.1
Let s(k) ∈ U[−s_max, s_max]. Setting τ_sat = s_max, we get

   τ_sat/σ_s = s_max/σ_s = √3  ⟹  (Λ_q)_dB = 6.02 b    (5.47)

Example 5.2.2
Let s(k) = s_max cos(2π f₀ T_c k + φ). Setting τ_sat = s_max, we get

   τ_sat/σ_s = s_max/σ_s = √2  ⟹  (Λ_q)_dB = 6.02 b + 1.76    (5.48)

Example 5.2.3
For s(k) not limited in amplitude, and assuming P_sat negligible for τ_sat = 4σ_s, we get

   (Λ_q)_dB = 6.02 b − 7.2    (5.49)

⁶ The function Q is defined in Appendix 6.A.
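The 6 dB-per-bit law of the examples can be checked numerically; the sketch below (assuming a midrise characteristic and τ_sat = s_max = 1) reproduces (5.48) for a full-scale sinusoid.

```python
import numpy as np

results = {}
for b in (4, 6, 8):
    delta = 2.0 / 2 ** b
    t = np.arange(200_000)
    s = np.cos(2 * np.pi * 0.1234567 * t)          # full-scale sinusoid, s_max = 1
    # Midrise uniform quantizer with tau_sat = 1
    sq = np.clip((np.floor(s / delta) + 0.5) * delta,
                 -1 + delta / 2, 1 - delta / 2)
    results[b] = 10 * np.log10(np.mean(s ** 2) / np.mean((sq - s) ** 2))
    print(b, round(results[b], 2), "law:", round(6.02 * b + 1.76, 2))
```

Each measured value lands within a fraction of a dB of 6.02 b + 1.76.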
The plot of Λ_q, given by (5.46), versus the statistical power of the input signal is illustrated in Figure 5.25 for various values of b. We note that for values of σ_s near τ_sat the approximation M_eq ≈ M_egr is no longer valid, because M_esat becomes non-negligible; for the computation of M_esat we need to know the probability density function of s and apply (5.35). Assuming a Laplacian signal, we obtain the curves also shown in Figure 5.25, which coincide with the curves given by (5.46) for σ_s ≪ τ_sat.

Figure 5.25. Signal-to-quantization error ratio versus σ_s/τ_sat of a uniform quantizer for granular noise only (dashed lines), for a Laplacian signal (dashed-dotted lines), and of a μ-law (μ = 255) quantizer (continuous lines). The parameter b is the number of bits of the quantizer.

As shown in Figure 5.26, the optimum value of Λ_q is obtained by determining the minimum of (M_egr + M_esat)/M_s as a function of σ_s/τ_sat. The optimum point depends on b: we have τ_sat = ℓσ_s, where ℓ increases with b; in particular, for b = 3 it turns out τ_sat = 2.3 σ_s, whereas for b = 8 we obtain τ_sat = 3.94 σ_s. The optimization of Λ_q for the uniform quantization of a signal with a specified amplitude distribution yields the results given in Table 5.5 [4]. We note that for the more dispersive inputs the optimum value of Δ increases, and consequently the value of Λ_q decreases. We also note that the quantizers obtained by the optimization procedure and by the method on page 353 are in general different.

Example 5.2.4
For s(k) ∈ N(0, σ_s²) and b = 5, observing Table 5.5 we have optimum performance for Δ/σ_s = 0.1881, and consequently τ_sat = 2^{b−1} Δ = 3.05 σ_s.
Table 5.5 Optimal quantization step size Δ_opt/σ_s and maximum corresponding value of Λ_q of a uniform quantizer for different p_s(a) (U: uniform, G: Gaussian, L: Laplacian, Γ: gamma), for b = 1, …, 8 bit/sample. [From Jayant and Noll (1984).]

Figure 5.26. Determination of the optimum value of Λ_q for b = 5 and s(k) ∈ N(0, σ_s²): M_egr/M_s, M_esat/M_s, and their sum M_eq/M_s versus σ_s/τ_sat.

We conclude this section observing that, for a nonstationary signal, for example a voice signal, where σ_s² is computed for a voiced spurt, setting τ_sat = 4σ_s yields Λ_q ≈ 33 dB for b = 7, good enough for telephone communications. However, in an unvoiced spurt σ_s² can be reduced by 20–30 dB, and consequently Λ_q is degraded by an amount equivalent to 3–5 bits.
Implementations of uniform PCM encoders

We now give three possible implementations of PCM encoders.

The first implementation encodes one level at a time and is illustrated in Figure 5.27. Set V = |s(k)|; V is compared with the output signal of a ramp generator with slope Δ/τ, where τ is the clock period of a counter with b − 1 bits. Starting with the counter initialized to zero, when the generator output signal exceeds the level V, the number of clock periods elapsed from the start represents c(k), which gives the PCM encoding of |s(k)|. The sign of s(k) can be encoded as a separate bit. For example, let us consider the case illustrated in Figure 5.28 for b = 3:

   if V < τ_1 ⟹ c(k) = 00, stop
   if V < τ_2 ⟹ c(k) = 01, stop
   if V < τ_3 ⟹ c(k) = 10, stop
   if V > τ_3 ⟹ c(k) = 11, stop

Generally the number of comparisons depends on V and is at most equal to 2^b − 2.

Figure 5.27. Uniform PCM encoder: encoding one level at a time.

Figure 5.28. Example of encoding one level at a time for b = 3.

A second possible implementation, which encodes one bit at a time, is given in Figure 5.29. In this case b − 1 comparisons are made: it is as if we were to explore a complete binary tree whose 2^{b−1} leaves represent the output levels. For example, for b = 3, neglecting the sign bit, the code word length is 2 and c(k) = (c_1, c_0). To determine the bits c_1 and c_0 we can operate as follows:

   if V < τ_2 ⟹ c_1 = 0, otherwise c_1 = 1
   if V < τ_1 + c_1 · 2Δ ⟹ c_0 = 0, otherwise c_0 = 1

Only two comparisons are made, but the decision thresholds now depend on the choice of the previous bits.

Figure 5.29. PCM encoder: encoding one bit at a time.

The last implementation, which encodes one code word of (b − 1) bits at a time, is given in Figure 5.30. In this scheme V is compared simultaneously with the 2^{b−1} quantizer thresholds: the outcome of this comparison is a word of 2^{b−1} bits formed by a sequence of "0"s followed by a sequence of "1"s; through a logic network this word is mapped to a binary word of b − 1 bits that yields the PCM encoding of s(k). These encoders are called flash converters.

Figure 5.30. Flash converter: encoding one word at a time.

We conclude this section explaining that the acronym PCM stands for pulse code modulation. We waited until the end of the section to avoid confusion about the term modulation: in fact, PCM is not a modulation, but rather a coding method.

5.3 Nonuniform quantizers

There are two observations that suggest the choice of a nonuniform quantizer. The first refers to stationary signals with a nonuniform probability density function: for such signals uniform quantizers are suboptimum; moreover, for a nonuniform quantizer the quantization error is large if the signal is large, whereas it is small if the signal is small, so the ratio Λ_q tends to remain constant for a wide dynamic range of the input signal. The second refers to nonstationary signals, e.g. speech, for which the ratio between the instantaneous power (estimated over windows of tenths of milliseconds) and the average power (estimated over the whole signal) can exhibit variations of several dB; moreover, the variation of the average power over different links is also of the order of 40 dB. Under these conditions a quantizer with nonuniform characteristic, such as that depicted for example in Figure 5.31, is more effective, because the signal-to-quantization error ratio Λ_q is almost independent of the instantaneous power.

Three examples of implementation:

1. The characteristic of Figure 5.31 can be implemented directly, for example with the techniques illustrated in Figures 5.29 and 5.30.
2. As shown in Figure 5.32, a compression function may precede a uniform quantizer: at the decoder it is therefore necessary to have an expansion of the quantized signal.
3. The most popular method, depicted in Figure 5.33, employs a uniform quantizer having a large number of levels, with a step size equal to the minimum step size of the desired nonuniform characteristic. Encoding of the nonuniformly quantized signal y_q is obtained by a lookup table whose input is the uniformly quantized value x_q.

In Section 5.3.1 we will analyze in detail the last two methods.
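The bit-at-a-time encoder above is a binary search over the thresholds. A minimal sketch, assuming uniform thresholds τ_i = iΔ (the general nonuniform case would store a threshold table instead):

```python
def sar_encode(V: float, b: int, delta: float) -> int:
    """Encode magnitude V one bit at a time (successive approximation), as in Figure 5.29.

    Sketch for a (b-1)-bit magnitude with assumed thresholds tau_i = i*delta:
    each comparison halves the search interval, and the next threshold index
    is refined according to the bits already decided.
    """
    n = b - 1                      # magnitude bits
    code = 0
    idx = 2 ** (n - 1)             # threshold index, starts in the middle
    step = 2 ** (n - 2) if n >= 2 else 0
    for k in range(n):
        bit = 1 if V >= idx * delta else 0
        code = (code << 1) | bit
        if k < n - 1:              # refine the threshold for the next comparison
            idx += step if bit else -step
            step //= 2
    return code

# b = 3, delta = 1: thresholds tau_1 = 1, tau_2 = 2, tau_3 = 3, as in the example above
assert [sar_encode(v, 3, 1.0) for v in (0.5, 1.5, 2.5, 3.5)] == [0, 1, 2, 3]
```

Exactly b − 1 comparisons are made regardless of V, matching the binary-tree view of the text.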
Figure 5.31. Nonuniform quantizer characteristic with L = 8 levels.

5.3.1 Companding techniques

We consider the two blocks shown in Figure 5.32a. The signal is first compressed through a nonlinear function F, which yields the signal

   y = F(s)    (5.50)

In Figure 5.32 we assume τ_sat = 1; if τ_sat ≠ 1 we need to normalize s to τ_sat. The signal y is uniformly quantized and the code word given by the inverse bit mapper is transmitted. At the receiver the bit mapper gives y_q, which must be expanded to yield a quantized version of s:

   s_q = F⁻¹[Q[y]]    (5.51)

This quantization technique takes the name of companding from the steps of compressing and expanding. Figure 5.32b illustrates in detail the principle of Figure 5.32a. We find that the ideal characteristic of F[·] should be logarithmic:

   F[s] = ln s    (5.52)

Let the magnitude and sign of s(k) be treated separately, that is

   y(k) = ln |s(k)|    (5.53)
   s(k) = e^{y(k)} sgn[s(k)]    (5.54)

Figure 5.32. (a) Use of a compression function F to implement a nonuniform quantizer; (b) nonuniform quantizer characteristic implemented by companding and uniform quantization. Here τ_sat = 1 is assumed.

Figure 5.33. Nonuniform quantization by companding and uniform quantization: (a) PCM encoder; (b) decoder.

Figure 5.34. Nonuniform quantizer implemented digitally using a uniform quantizer with small step size followed by a lookup table.

The quantization of y(k) yields

   y_q(k) = Q[y(k)] = ln |s(k)| + e_q(k)    (5.55)

The value c(k) is given by the inverse bit mapping of y_q(k). Assuming c(k) is correctly received, and that the sign of the quantized signal is equal to that of s(k), the quantized version of s(k) is given by

   s_q(k) = e^{y_q(k)} sgn[s(k)] = |s(k)| e^{e_q(k)} sgn[s(k)] = s(k) e^{e_q(k)}    (5.56)

If e_q ≪ 1, then e^{e_q(k)} ≈ 1 + e_q(k), and, observing (5.55),

   s_q(k) = s(k) + e_q(k) s(k)    (5.57)

where e_q(k) s(k) represents the output error of the system. As e_q(k) is uncorrelated with the signal ln |s(k)|, and hence with s(k) (see (5.28)), we get

   Λ_q = E[s²(k)] / E[e_q²(k) s²(k)] = 1/M_eq = 12/Δ²    (5.58)

where from (5.41) we have that M_eq depends only on the quantization step size Δ. Consequently Λ_q does not depend on M_s.

We note that a logarithmic compression function generates a signal y with unbounded amplitude; thus an approximation of the logarithmic law is usually adopted. Regulatory bodies have defined two compression functions.

1. A-law. For τ_sat = 1,

   y = F[s] = A|s| / (1 + ln A)  for 0 ≤ |s| ≤ 1/A
   y = F[s] = (1 + ln(A|s|)) / (1 + ln A)  for 1/A ≤ |s| ≤ 1    (5.60)

The sign is considered separately: sgn[y] = sgn[s]. This law, illustrated in Figure 5.35 for two values of A, is adopted in Europe.

Figure 5.35. A-law compression function, for A = 1 and A = 87.56.
2. μ-law. For τ_sat = 1,

   y = F[s] = ln(1 + μ|s|) / ln(1 + μ)    (5.62)

This law, illustrated in Figure 5.36 for four values of μ, is adopted in the United States and Canada; the standard value of μ is 255. The compression increases for higher values of μ. We note that, for μ|s| ≫ 1, we have

   F[s] ≈ ln(μ|s|) / ln(1 + μ)    (5.63)

as in the ideal logarithmic case.

Figure 5.36. μ-law compression function, for μ = 0, 5, 50, 255.

Signal-to-quantization error ratio
Assuming the quantization error uniform within each decision interval, which is well verified for a uniform input in the interval [−τ_sat, τ_sat], we can see that for the μ-law, considering only the granular error, we have

   (Λ_q)_dB = 6.02 b + 4.77 − 20 log₁₀[ln(1 + μ)] − 10 log₁₀[1 + √2 (τ_sat/(μσ_s)) + (τ_sat/(μσ_s))²]    (5.64)

Curves of Λ_q versus the statistical power of the input signal are plotted for μ = 255 in Figure 5.25. Note that in the saturation region they coincide with the curves obtained for a
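A numerical sketch of companding with μ = 255, τ_sat = 1, b = 8 and a Laplacian input shows Λ_q staying nearly flat as the input power drops by 20 dB; the input power levels below are illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
MU = 255.0

def mu_compress(s):
    # y = sign(s) * ln(1 + mu|s|) / ln(1 + mu), for |s| <= 1
    return np.sign(s) * np.log1p(MU * np.abs(s)) / np.log1p(MU)

def mu_expand(y):
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

snrs = []
for sigma in (0.1, 0.03, 0.01):
    s = rng.laplace(0.0, sigma / np.sqrt(2), 500_000)   # Laplacian, power sigma^2
    s = np.clip(s, -1, 1)
    delta = 2.0 / 2 ** 8                                # uniform quantizer acting on y
    yq = np.clip((np.floor(mu_compress(s) / delta) + 0.5) * delta,
                 -1 + delta / 2, 1 - delta / 2)
    sq = mu_expand(yq)
    snrs.append(10 * np.log10(np.mean(s ** 2) / np.mean((sq - s) ** 2)))
    print(round(snrs[-1], 1))
```

The three values stay within a few dB of each other, in agreement with (5.64), whereas a uniform quantizer with the same b would lose about 1 dB of Λ_q per dB of input power.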
5.37.3.k/ is obtained through a ﬁrst multibit (5 in ﬁgure) quantization to generate xq . The relation between s.25 is that. Using the standard compression laws. Nonuniform quantizers 365 uniform quantizer with Laplacian input.7. if b D 8. For decoding. including also the sign we have 8 bit/sample. then we have a mapping of the 5 bits of xq to the 3 bits of yq using the mapper (sign omitted) of Table 5. Digital compression An alternative method to the compressionquantization scheme is illustrated by an example in Figure 5. we need to approximate the compression functions by piecewise linear functions. Observation 5. by increasing ¼. For decoding.37. Distribution of quantization levels for a 3bit ¼law quantizer with ¼ D 40.k/ and yq . . but the maximum value decreases. 3q ' 38 dB for a wide range of values of ¦s .39. as shown in Figure 5. For a sampling frequency of Fc D 8 kHz. a mapper with 12bit input and 8bit output is given in Table 5. We emphasize that also in this case 3q increases by 6 dB with the increase of b by one. For encoding.6.2 In the standard nonlinear PCM. which represents the reconstructed value sq . the plot of 3q becomes “ﬂatter”. We also note that. this leads to a bit rate of the system equal to Rb D 64 kbit/s. An effect not shown in Figure 5. we select for each compressed Figure 5. for each code word yq we select only one code word xq . a quantizer with 128 levels (7 bit/sample) is employed after the compression.
65) − L D 1 − L D C1 L 1 0 1 2 1 2 1 2 2 and the quantization levels L L . these masks are useful to verify the quantizer performance. and b D 8 (sign included).3. : : : .67) . 1. (5. as a function of ¦s (dBm) for input signals with Gaussian and sinusoidal distribution. : : : . respectively. illustrated in Figure 5. : : : . : : : . that differ in the compression law or in the accuracy of the codes [4].366 Chapter 5.40. we desire to determine the parameters of the nonuniform quantizer that optimizes 3q .Q[s.sq .k/] s.2 Optimum quantizer in the MSE sense Assuming we know the probability density function of the input signal s. In the literature there are other nonlinear PCM tables.k/. 1. The problem. stationary with variance ¦s2 . 5.k/ s. −sat D 3:14 dBm. consists in choosing the decision thresholds ² ¦ Á. − .66) 2 2 that minimize the statistical power of the error (minimum meansquare error criterion) fQi g iD Meq D E[.k//2 ] D E[. − .39 two masks that indicate the minimum tolerable values of 3q (dB) for an Alaw quantizer (A D 87:6). Signaltoquantization noise ratio mask We conclude this section by giving in Figure 5. − − L (5.k//2 ] (5.6 Example of nonlinear PCM from 4 to 2 bits (sign omitted). − . Coding of xq 0000 0001 0010 0011 0100 0101 1000 1001 1010 1011 1100 1101 1110 1111 Coding of yq 00 01 Coding of sq 0000 0001 10 0100 11 1011 code word a corresponding linear code word. as given in the third column of Table 5. Digital representation of waveforms Table 5.7.
70) i D1 −i 1 .a/ even.68) 2 −0 D 0 L (5. : : : .69) i D 1. : : : . because of the symmetry of the problem we can halve the number of variables to be determined by setting ( L − i D −i 1 i D 1.yq / 111WXYZ 110WXYZ 101WXYZ 100WXYZ 011WXYZ 010WXYZ 001WXYZ 000WXYZ Coding of sq 1WXYZ011111 01WXYZ01111 001WXYZ0111 0001WXYZ011 00001WXYZ01 000001WXYZ0 0000001WXYZ 0000000WXYZ Assuming ps .xq / 1WXYZ01WXYZ001WXYZ0001WXYZ00001WXYZ000001WXYZ0000001WXYZ 0000000WXYZ Compressed code . Nonuniform quantizers 367 Figure 5. Q i D Qi 2 and L=2 X Z −i Meq D 2 . Table 5.7 Non linear PCM from 11 to 7 bits (sign omitted).a/ da (5.3.38.Qi a/2 ps . The 12bit encoded input signals are mapped into 8bit signals. (5. Piecewise linear approximation of the Alaw compression function (A D 87:6). Linear code .5.
Figure 5.39. Λ_q versus σ_s² for an A-law quantizer (A = 87.56) and b = 8: (a) Gaussian test signal; (b) sinusoidal test signal.

Necessary but not sufficient conditions for minimizing (5.67) are

   ∂M_eq/∂τ_i = 0,  i = 1, …, L/2 − 1    (5.71)
   ∂M_eq/∂Q_i = 0,  i = 1, …, L/2    (5.72)

From

   (1/2) ∂M_eq/∂τ_i = (Q_i − τ_i)² p_s(τ_i) − (Q_{i+1} − τ_i)² p_s(τ_i)    (5.73)

that is,

   p_s(τ_i) [Q_i² − 2Q_i τ_i − Q_{i+1}² + 2Q_{i+1} τ_i] = 0    (5.74)

we get

   τ_i = (Q_i + Q_{i+1}) / 2    (5.75)

Conversely, the equation

   (1/2) ∂M_eq/∂Q_i = 2 ∫_{τ_{i−1}}^{τ_i} (Q_i − a) p_s(a) da = 0    (5.76)

yields

   Q_i = ∫_{τ_{i−1}}^{τ_i} a p_s(a) da / ∫_{τ_{i−1}}^{τ_i} p_s(a) da    (5.77)

In other words, (5.75) establishes that the optimal threshold lies in the middle of the interval between two adjacent output values, and (5.77) sets Q_i as the centroid of p_s(·) in the interval [τ_{i−1}, τ_i]. These two rules are illustrated in Figure 5.41.

Figure 5.40. Decision thresholds and output levels for a particular p_s(a) (b = 3).

Max algorithm
We present now the Max algorithm to determine the decision thresholds and the optimum quantization levels. Fixing Q_1 "at random", we use (5.77) to get τ_1 from the integral equation

   Q_1 = ∫_{τ_0}^{τ_1} a p_s(a) da / ∫_{τ_0}^{τ_1} p_s(a) da    (5.78)

From (5.75) we obtain

   Q_{i+1} = 2τ_i − Q_i  for i = 1, 2, 3, …    (5.79)

The procedure is iterated for i = 2, 3, …, (L/2) − 1. For i = (L/2) − 1, (5.79) gives

   Q_{L/2} = 2τ_{L/2 − 1} − Q_{L/2 − 1}    (5.80)

Now, Q_{L/2} must also satisfy (5.77) with τ_{L/2} = +∞:

   Q_{L/2} = ∫_{τ_{L/2 − 1}}^{+∞} a p_s(a) da / ∫_{τ_{L/2 − 1}}^{+∞} p_s(a) da    (5.81)

If τ_{L/2} = +∞ satisfies the last equation (5.81), then the parameters determined are optimum. Otherwise, if (5.81) is not satisfied, we must change our choice of Q_1 in the first step and repeat the procedure.

Figure 5.41. Optimum decision thresholds and output levels for a given p_s(a).

Lloyd algorithm
This algorithm uses (5.75) and (5.77), but in a different order.

1. We choose an initial partition of the positive real axis, P_1 = {τ_0, τ_1, …, τ_{L/2} = +∞}, such that τ_0 = 0 < τ_1 < ··· < τ_{L/2} = +∞. We set a relative error ε > 0 and D_0 = ∞.
2. We set the iteration index j to 1.
3. We obtain the optimum alphabet A_j = {Q_1, …, Q_{L/2}} for the partition P_j using (5.77).
4. We evaluate the distortion associated with the choice of P_j and A_j:

   D_j = E[e_q²] = 2 Σ_{i=1}^{L/2} ∫_{τ_{i−1}}^{τ_i} (Q_i − a)² p_s(a) da    (5.83)

5. If

   (D_{j−1} − D_j)/D_j < ε    (5.84)

   we stop the procedure; otherwise we update the value of j: j ← j + 1.
6. We derive the optimum partition P_j = {τ_0, τ_1, …, τ_{L/2} = +∞} for the alphabet A_{j−1} using (5.75).
7. We obtain the optimum alphabet A_j for the partition P_j using (5.77).
8. We go back to step 4.

We observe that the sequence D_j > 0 is non-increasing: hence the algorithm converges, however, not necessarily to the absolute minimum.

Expression of Λ_q for a very fine quantization
For both algorithms it is important to initialize the various parameters near the optimum values; this is hard to do unless some assumptions are made about p_s(·), at least approximately, for a number of bits sufficiently high. The considerations that follow have this objective, in addition to determining the optimum value of Λ_q for a nonuniform quantizer. From (5.70), assuming that

   p_s(a) ≈ p_s(τ_{i−1})  for τ_{i−1} ≤ a < τ_i    (5.85)

we have

   M_eq = 2 Σ_{i=1}^{L/2} ∫_{τ_{i−1}}^{τ_i} (Q_i − a)² p_s(a) da ≈ 2 Σ_{i=1}^{L/2} p_s(τ_{i−1}) ∫_{τ_{i−1}}^{τ_i} (Q_i − a)² da    (5.86)
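The Lloyd iteration can be sketched on sample data — an empirical variant in which sample averages replace the integrals of (5.77) and (5.83); the function name and the percentile initialization are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def lloyd(samples, L, eps=1e-4, max_iter=200):
    """Empirical Lloyd algorithm: alternates the midpoint rule (5.75) and the
    centroid rule (5.77) until the relative distortion decrease is below eps."""
    # initial alphabet: uniformly spaced percentiles of the data
    Q = np.quantile(samples, (np.arange(L) + 0.5) / L)
    D_prev = np.inf
    for _ in range(max_iter):
        tau = (Q[:-1] + Q[1:]) / 2                       # (5.75): midpoints
        idx = np.searchsorted(tau, samples)              # partition the samples
        Q = np.array([samples[idx == i].mean() for i in range(L)])  # (5.77): centroids
        D = np.mean((Q[idx] - samples) ** 2)             # empirical (5.83)
        if (D_prev - D) / D < eps:                       # stopping rule (5.84)
            break
        D_prev = D
    return Q, D

s = rng.normal(0.0, 1.0, 200_000)
Q, D = lloyd(s, L=4)
print(np.round(Q, 2))                      # near the optimum Gaussian levels
print(round(10 * np.log10(1.0 / D), 1))    # near 9.3 dB for L = 4
```

For a unit-variance Gaussian input and L = 4 the levels converge near ±0.45 and ±1.51, with Λ_q close to the tabulated optimum for b = 2.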
Since p_s is taken as constant within each interval, (5.77) yields

   Q_i = ∫_{τ_{i−1}}^{τ_i} a da / ∫_{τ_{i−1}}^{τ_i} da = (τ_{i−1} + τ_i)/2    (5.88)

Correspondingly, introducing the length of the i-th decision interval

   Δτ_i = τ_i − τ_{i−1},  i = 1, …, L/2    (5.89)

where τ_0 = 0 and τ_{L/2} = +∞, it follows that

   M_eq = 2 Σ_{i=1}^{L/2} p_s(τ_{i−1}) (Δτ_i)³ / 12    (5.90)

It is now a matter of finding the minimum of (5.90) with respect to {Δτ_i}, with the constraint that the decision intervals cover the whole positive axis; this is obtained by imposing that

   2 Σ_{i=1}^{L/2} p_s^{1/3}(τ_{i−1}) Δτ_i ≈ 2 ∫_0^{+∞} p_s^{1/3}(a) da = K    (5.91)

Using the Lagrange multiplier method, the cost function is

   M_eq + λ ( K − 2 Σ_{i=1}^{L/2} p_s^{1/3}(τ_{i−1}) Δτ_i )    (5.92)

with M_eq given by (5.90). By setting to zero the partial derivative of (5.92) with respect to Δτ_i, we obtain

   p_s(τ_{i−1}) (Δτ_i)² / 4 − λ p_s^{1/3}(τ_{i−1}) = 0,  i = 1, …, L/2    (5.93)

which yields

   Δτ_i = 2√λ · p_s^{−1/3}(τ_{i−1})    (5.94)

Substituting (5.94) in (5.91) yields

   √λ = K/(2L)    (5.95)

hence

   Δτ_i = (K/L) p_s^{−1/3}(τ_{i−1}),  i = 1, …, L/2    (5.96)

Equation (5.96) indicates that the optimal thresholds are concentrated around the peak of the probability density. Moreover, the minimum value of M_eq is given by

   M_eq,opt = K³ / (12 L²)    (5.97)

For a quantizer optimized for a certain probability density function, and for a high number of levels L = 2^b (so that (5.85) holds), the optimum value of Λ_q is

   Λ_q = M_s / M_eq,opt = 2^{2b} f_f    (5.98)

where f_f is a form factor related to the amplitude distribution of the normalized signal s̃(k) = s(k)/σ_s:

   f_f = 12/K̃³,  with  K̃ = ∫_{−∞}^{+∞} [p_s̃(a)]^{1/3} da    (5.99)

In the Gaussian case, s̃(k) ∈ N(0, 1), we obtain f_f = 2/(√3 π). The optimum value of Λ_q thus follows the increment law of 6 dB per bit, according to (5.98), as in the case of a quantizer with granular error only.

Observation 5.3
Equation (5.90) can be used to evaluate approximately M_eq for a general quantizer characteristic, even of the A-law and μ-law types, even for a small number of levels. In this case, from Figure 5.32, the quantization step size Δ of the uniform quantizer is related to the compression law according to the relation

   Δ = F(τ_i) − F(τ_{i−1}) ≈ (τ_i − τ_{i−1}) F′(τ_{i−1})    (5.100)

where F′ is the derivative of F. Actually (5.100) assumes that F′ does not vary considerably in the interval (τ_{i−1}, τ_i]. For L sufficiently large, the intervals become small and, substituting (5.100) in (5.90), we get

   M_eq ≈ (Δ²/12) · 2 ∫_0^{τ_sat} p_s(a) / [F′(a)]² da    (5.102)
In Tables 5.8, 5.9, and 5.10 the parameter values of three optimum quantizers obtained by the Max or Lloyd method are given, for Gaussian, Laplacian, and gamma input signals, respectively.

Table 5.8 Optimum quantizers for a signal with Gaussian distribution (m_s = 0, σ_s = 1). [From Jayant and Noll (1984).]

        L = 2           L = 4           L = 8           L = 16
  i     τ_i    Q_i      τ_i    Q_i      τ_i    Q_i      τ_i    Q_i
  1     ∞      0.798    0.982  0.453    0.501  0.245    0.258  0.128
  2                     ∞      1.510    1.050  0.756    0.522  0.388
  3                                     1.748  1.344    0.800  0.657
  4                                     ∞      2.152    1.099  0.942
  5                                                     1.437  1.256
  6                                                     1.844  1.618
  7                                                     2.401  2.069
  8                                                     ∞      2.733
  M_eq  0.363           0.117           0.0345          0.00955
  Λ_q   4.40 dB         9.30 dB         14.62 dB        20.22 dB

Performance of nonuniform quantizers
A quantizer takes the name uniform, Gaussian, Laplacian, or gamma if it is optimized for input signals having the corresponding distribution. In the case of uniform quantizers, the ratio Λ_q = M_s/M_eq has the expression given in (5.46); it is left to the reader to derive the corresponding results for a signal s uniform in [−τ_sat, τ_sat]. The optimum value of Λ_q follows the 6b law only in the case of nonuniform quantizers, for b ≥ 4. In the case of uniform quantizers, with increasing b the maximum of Λ_q occurs for a smaller ratio σ_s/τ_sat (due to the saturation error): this makes the increment

   (ΔΛ_q)_dB = 6.02 b − max{Λ_q}_dB    (5.103)

vary with b, and in fact it increases. We show in Figure 5.42 the deviation (5.103) for both uniform and nonuniform quantizers [4]. Note that a more dispersive distribution, that is, one with longer tails, leads to less closely spaced thresholds and levels, and consequently to a decrease of Λ_q. Finally, we consider what happens if a quantizer, optimized for a specific input distribution, has a different type of input.
Table 5.9 Optimum quantizers for a signal with Laplacian distribution (m_s = 0, σ_s = 1): for L = 2, 4, 8, 16 the minimum distortion is M_eq = 0.500, 0.176, 0.0545, 0.0154, corresponding to Λ_q = 3.01, 7.54, 12.64, 18.12 dB. [From Jayant and Noll (1984).]

Table 5.10 Optimum quantizers for a signal with gamma distribution (m_s = 0, σ_s = 1): for L = 2, 4, 8, 16 the minimum distortion is M_eq = 0.668, 0.233, 0.0712, 0.0196, corresponding to Λ_q = 1.77, 6.33, 11.47, 17.08 dB. [From Jayant and Noll (1984).]
For example, a uniform quantizer, optimized for a specific probability density function of the input signal, will have very low performance for an input signal with a very dispersive distribution; on the contrary, a nonuniform quantizer, optimized for a specific distribution, can have even higher performance for a less dispersive input signal.

Figure 5.42. Deviation (ΔΛ_q)_dB versus b for quantizers optimized for a specific probability density function of the input signal. Input type: uniform (U), Gaussian (G), Laplacian (L), and gamma (Γ) [4].

Figure 5.43. Comparison of the signal-to-quantization error ratio of a uniform quantizer (dashed-dotted line), a μ-law quantizer (continuous line), and an optimum nonuniform quantizer (dotted line), for Laplacian input. All quantizers have 32 levels (b = 5) and are optimized for σ_s = 1.
The Γ quantizers have performance that is almost independent of the type of input, as their characteristic is of logarithmic type (see Section 5.3.1). A comparison between uniform and nonuniform quantizers with Laplacian input is given in Figure 5.43; there all quantizers have 32 levels (b = 5) and are determined using: a) Table 5.5 for the uniform Laplacian-type quantizer; b) Table 5.9 for the nonuniform Laplacian-type quantizer; c) the μ-law (μ = 255) compression function of Figure 5.36 with τ_sat/σ_s = 1. We note that the optimum nonuniform quantizer gives the best performance, even if this happens over a short range of values of σ_s. For a uniform quantizer, for a decrease of the input statistical power, performance decreases according to the law 10 log M_s = 20 log σ_s (dB). Only a logarithmic quantizer is independent of the input signal level, as we can see from (5.58): its performance does not change over a wide range of the signal variance.

5.4 Adaptive quantization

An alternative method to quantize a nonstationary signal consists in using an adaptive quantizer, which has parameters that are adapted (over short periods) to the level of the input signal. With reference to Figure 5.21, the idea is that of varying with time the quantization step size Δ(k), so that the quantizer characteristic adapts to the statistical power of the input signal. The corresponding coding scheme is called adaptive PCM or APCM.

General scheme
The overall scheme is given in Figure 5.44, where c̃(k) ≠ c(k) if errors are introduced by the binary channel.

Figure 5.44. Adaptive quantization and mapping: general scheme.

If Δ(k) is the quantization step size at instant k, the quantizer characteristic is defined as

   thresholds: τ_i(k) = i Δ(k),  i = 0, 1, …, L/2 − 1
   output levels: Q_i(k) = (i − 1/2) Δ(k), i = 1, …, L/2;  Q_i(k) = (i + 1/2) Δ(k), i = −L/2, …, −1    (5.104)

If Δ_opt is the optimum value of Δ for a given amplitude distribution of the input signal assuming σ_s = 1 (see Table 5.5), then we can use the following rule:

   Δ(k) = Δ_opt σ_s(k)    (5.105)

where σ_s(k) is the standard deviation of the signal at instant k. For a nonuniform quantizer, we need to change the levels and thresholds according to the relations

   Q_i(k) = Q_{i,opt} σ_s(k),  τ_i(k) = τ_{i,opt} σ_s(k)    (5.106)

where {Q_{i,opt}} and {τ_{i,opt}} are given in Tables 5.8, 5.9, and 5.10 for various input amplitude distributions. As illustrated in Figure 5.45, an alternative to the scheme of Figure 5.44 is the following: the quantizer is fixed, and the input is scaled by an adaptive gain g(k), so that a signal {y(k)} with constant statistical power, σ_y = 1, is generated. Therefore we let

   g(k) = 1/σ_s(k)    (5.107)

However, both methods require computing the statistical power σ_s² of the input signal.

Figure 5.45. Adaptive gain, fixed quantization and mapping.

The adaptive quantizers are classified as:
• feedforward, if σ_s is estimated by observing the signal {s(k)} itself;
• feedback, if σ_s is estimated by observing {s_q(k) = Q[s(k)]} or {c(k)}, i.e. the signals at the output of the quantizer.

We emphasize that:
1. the system bit rate is now the sum of the bit rate of c(k) and that of (σ_s(k))_q (or g_q(k));
2. we need to determine the update frequency of σ_s(k), that is, at what rate σ_s(k) must be sampled, and how many bits must be used to represent (σ_s(k))_q (or g_q(k)), so that it can be coded and transmitted over a binary channel;
3. because of digital channel errors on both c(k) and (σ_s(k))_q (or g_q(k)), it may happen that s̃_q(k) ≠ s_q(k).

5.4.1 Feedforward adaptive quantizer

The feedforward methods for the two adaptive schemes of Figure 5.44 and Figure 5.45 are shown in Figure 5.46 and Figure 5.47, respectively. The main difficulty in the two methods is that we need to quantize also the value of σ_s(k), so that it can be coded and transmitted over a binary channel.

Figure 5.46. APCM scheme with feedforward adaptive quantizer: (a) encoder; (b) decoder.

Figure 5.47. APCM scheme with feedforward adaptive gain and fixed quantizer: (a) encoder; (b) decoder.
The data sequence that represents {(σ_s(k))_q} or {g_q(k)} is called side information.

Two methods to estimate σ_s²(k) are considered. Using a rectangular window of K samples, from (1.462) we have

σ_s²(k − D) = (1/K) Σ_{n = k−(K−1)}^{k} s²(n)    (5.108)

where D expresses a certain lead of the estimate with respect to the last available sample: typically D = (K − 1)/2 or D = K − 1. To determine the update frequency of σ_s²(k) in (5.108), we observe that windows usually do not overlap; hence σ_s is updated every K samples. This means decimating, quantizing, and coding the values given by (5.108) every K instants. Obviously, K samples need to be stored in a buffer before the average power can be computed: this introduces a latency in the coding system that is not always tolerable.

For an exponential filter instead, from (1.468) we have

σ_s²(k − D) = a σ_s²(k − 1 − D) + (1 − a) s²(k)    (5.109)

Typically in this case we choose D = 0. To select a, we prefer to use the equivalence (1.471) with the length of the rectangular window, which gives a = 1 − 1/K; recall also that, for a > 0.9, the 3 dB bandwidth of the filter (5.109) is approximately B_σ = (1 − a)/(2π T_c). In Table 5.11 we give, for three values of a, the corresponding values of the time constant 1/(1 − a) and of B_σ for speech signals sampled at 1/T_c = 8 kHz.

Table 5.11 Time constant and bandwidth of a discrete-time exponential filter with parameter a and sampling frequency 8 kHz.

a        | time constant 1/(1 − a) (samples) | B_σ = (1 − a)/(2π T_c) (Hz)
0.9688   | 32                                | 40
0.9844   | 64                                | 20
0.9922   | 128                               | 10

Performance. In both cases σ_s is constrained to vary within a specific range, σ_min ≤ σ_s ≤ σ_max. For example, in order to keep Λ_q relatively constant for a change of 40 dB in the input level, it must be

σ_max ≥ 100 σ_min    (5.110)

Actually, σ_min controls the quantization error level for small input values (idle noise), whereas σ_max controls the saturation error level. Table 5.12 shows the performance of different fixed and adaptive 8-level (b = 3) quantizers.
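The two estimators (5.108) and (5.109) above can be sketched as follows; the window length and filter parameter are illustrative.

```python
def rect_window_power(window):
    """Rectangular-window estimate (5.108): the average power of the
    K samples currently stored in `window`."""
    return sum(x * x for x in window) / len(window)

def exp_filter_power(prev_power, s, a):
    """One step of the exponential (recursive) estimate (5.109) with
    D = 0: sigma_s^2(k) = a * sigma_s^2(k-1) + (1 - a) * s^2(k)."""
    return a * prev_power + (1.0 - a) * s * s
```

With a = 1 − 1/K the exponential filter has the same equivalent length as a rectangular window of K samples; e.g. K = 32 gives a ≈ 0.9688, as in Table 5.11.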
Table 5.12 Performance comparison, in terms of Λ_q (dB), of fixed and adaptive quantizers for speech s(k), with b = 3. The quantizers compared are: nonuniform µ-law (µ = 100, τ_sat/σ_s = 8), optimum nonuniform Gaussian (Λ_q,opt = 14.6 dB), optimum nonuniform Laplacian (Λ_q,opt = 12.6 dB), optimum uniform Gaussian (Λ_q,opt = 14.3 dB), and optimum uniform Laplacian (Λ_q,opt = 11.4 dB); each is evaluated in the nonadaptive case and with adaptation over windows of K = 128 samples (16 ms) and K = 1024 samples (128 ms).

Although b = 3 is too small a value to draw general conclusions, we note that by using an adaptive Gaussian quantizer with K = 128 we obtain an 8 dB improvement over a nonadaptive quantizer; conversely, there is a performance loss of about 3 dB for K = 1024. If K ≪ 128, the side information becomes excessive.

5.4.2 Feedback adaptive quantizers

As illustrated in Figure 5.48, the feedback method estimates σ_s from the knowledge of {s_q(k) = Q[s(k)]} or {c(k)}; therefore feedback methods do not require the transmission of side information. We make the following observations:

• there is no need to transmit σ_s(k);
• a transmission error on c(k) affects not only the identification of the quantized level, but also the scaling factor σ_s(k).

Figure 5.48. APCM scheme with feedback adaptive quantizer.

Concerning the estimate of σ_s, a possible method consists in applying (5.108) or (5.109), where {s(n)} is substituted by {s_q(n)}; however, this signal is available only for n ≤ k − 1:
this implies that the estimate (5.108) becomes

σ_s²(k) = (1/K) Σ_{n = k−K}^{k−1} s_q²(n)

and the recursive estimate (5.109) becomes

σ_s²(k) = a σ_s²(k − 1) + (1 − a) s_q²(k − 1)

with a lag of one sample. Because of the lag in estimating the level of the input signal, and the computational complexity of the method itself, we present now an alternative method to estimate σ_s adaptively.

Estimate of σ_s(k). For an input with σ_s = 1, we compute the discrete amplitude distribution of the code words for a quantizer with 2^b levels and |c(k)| ∈ {1, 2, ..., L/2}:

P[|c(k)| = 1] = 2 ∫_{τ_0 = 0}^{τ_{1,opt}} p_s(a) da = p_{c1}
⋮
P[|c(k)| = L/2] = 2 ∫_{τ_{L/2−1,opt}}^{+∞} p_s(a) da = p_{c,L/2}    (5.111)

If σ_s changes suddenly, the distribution of |c(k)| will be very different from (5.111). For example, as illustrated in Figure 5.49 for b = 3, if σ_s < 1 it will be P[|c(k)| = 1] ≫ p_{c1}, while P[|c(k)| = 4] ≪ p_{c4}.

Figure 5.49. Output levels and optimum decision thresholds for Gaussian s(k) with unit variance (b = 3).
The algorithm proposed by Jayant [4], illustrated in Figure 5.50, is given by

σ_sq(k) = p[|c(k − 1)|] σ_sq(k − 1)    (5.112)

in which {p[i]}, i = 1, ..., L/2, are suitable parameters. The objective is that of changing σ_sq(k) so that the optimal distribution (5.111) is obtained for |c(k)|. Intuitively, if |c(k − 1)| = L/2, then σ_sq must increase to extend the quantizer range, thus p[L/2] > 1; if instead |c(k − 1)| = 1, then σ_sq must decrease to reduce the quantizer range, thus p[1] < 1.

Figure 5.50. Adaptive quantizer where σ_s is estimated using the code words.

The problem consists now in choosing the parameters {p[i]}, i = 1, ..., L/2. From (5.112) it follows that

ln σ_sq(k) = ln p[|c(k − 1)|] + ln σ_sq(k − 1)    (5.113)

from which

E[ln σ_sq(k)] = E[ln p[|c(k − 1)|]] + E[ln σ_sq(k − 1)]    (5.114)

where E[ln p[|c(k − 1)|]] = Σ_{i=1}^{L/2} p_ci ln p[i]. In steady state we expect that E[ln σ_sq(k)] = E[ln σ_sq(k − 1)]; therefore it must be

Σ_{i=1}^{L/2} p_ci ln p[i] = 0    (5.115)

or, equivalently,

(p[1])^{p_c1} (p[2])^{p_c2} ⋯ (p[L/2])^{p_c,L/2} = 1    (5.116)

In practice, what we do is vary σ_sq by small steps, imposing bounds on the variations, that is:

σ_min ≤ σ_sq(k) ≤ σ_max    (5.117)
Based on numerous tests on speech signals, Jayant also gave values for the parameters {p[i]}, i = 1, ..., L/2 [4]. Let

q(i) = (2i − 1)/(L − 1),  i = 1, ..., L/2    (5.118)

For example, for L = 8, {q(i)} = {1/7, 3/7, 5/7, 1}. In Figure 5.51 the ranges of the values of {p[i]} are given in correspondence with {q(i)} [4]. We note that there is a large interval of possible values for each p[i]: for example, p[1] lies in the range from 0.8 to 0.9, while p[4] takes values in a considerably wider range above 1.

Figure 5.51. Interval of the multiplier parameters in the quantization of the speech signal as a function of the parameters {q(i)}. [From Jayant and Noll (1984).]

Summarizing, at instant k, from the input sample s(k) the quantizer characteristic (see Figure 5.52) produces c(k) and s_q(k); the index i in (5.106) is determined and, consequently, σ_sq(k + 1) is computed by (5.112) and the thresholds are updated: the quantizer is now ready for the next sample s(k + 1). At the receiver, upon reception of c(k), σ_sq(k) is known; hence by (5.106) the decision thresholds {τ_i(k)} and the possible output values are also known, and the receiver in turn updates the value of σ_sq(k + 1) by (5.112).

An advantage of the algorithm of Jayant is that it is sequential, thus it can adapt very quickly to changes in the mean signal level; on the other hand, it is strongly affected by the errors introduced by the binary channel, especially if the index i is large. Experimental measurements on speech signals indicate that this feedback adaptive scheme offers performance similar to that of a feedforward scheme.
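One step of the Jayant adaptation can be sketched as follows. The multipliers below only respect the qualitative ranges discussed in the text (p[i] < 1 for inner code words, p[i] > 1 for outer ones); they are not Jayant's measured optima, and the scale factor plays the role of Δ(k) with Δ_opt = 1 for brevity.

```python
import math

# Illustrative multipliers for b = 3 (L/2 = 4), not the values of [4].
P_MULT = {1: 0.85, 2: 0.95, 3: 1.2, 4: 1.8}
SIGMA_MIN, SIGMA_MAX = 1e-3, 1e3          # bounds of (5.117)

def jayant_step(s, sigma):
    """One step of the one-word-memory adaptation (5.112): quantize s
    with a mid-rise characteristic scaled by sigma, then multiply sigma
    by p[|c(k)|] for the next sample. Returns (|c(k)|, sq(k), sigma)."""
    i = min(int(abs(s) / sigma) + 1, 4)   # code-word magnitude |c(k)|
    sq = math.copysign((i - 0.5) * sigma, s)
    sigma_next = min(max(P_MULT[i] * sigma, SIGMA_MIN), SIGMA_MAX)
    return i, sq, sigma_next
```

A large sample drives the scale factor up (extending the range); a small one drives it down, exactly the intuition behind (5.115).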
Figure 5.52. Input-output characteristic of a 3-bit adaptive quantizer. For each output level the PCM code word and the corresponding value of p are given.

5.5 Differential coding (DPCM)

The basic idea consists in quantizing the prediction error signal rather than the signal itself.⁷ With reference to Figure 5.53, for a linear predictor with N coefficients,⁸ let ŝ(k) be the prediction signal:

ŝ(k) = Σ_{i=1}^{N} c_i s(k − i)    (5.119)

From (2.81) the prediction error is defined as

f(k) = s(k) − ŝ(k)    (5.120)

Considering the z-transform, let

C(z) = Σ_{i=1}^{N} c_i z^{−i}    (5.121)

7. In the following sections, when processing of the input samples {s(k)} is involved, it is desirable to perform the various operations in the digital domain on a linear PCM binary representation of the various samples, obtained by an ADC. Obviously, the finite number of bits of this preliminary quantization should not affect further processing. To avoid introducing a new signal, the preliminary conversion by an ADC is omitted in all our schemes.
8. The considerations presented in this section are valid for any predictor, even nonlinear predictors.
Then the z-transform of the prediction error is

F(z) = S(z)[1 − C(z)]    (5.122)

where [1 − C(z)] is the prediction error filter.

Figure 5.53. Computation of the prediction error signal f(k) from s(k) and the prediction ŝ(k).
Figure 5.54. (a) Prediction error filter; (b) inverse prediction error filter.

It is interesting to rearrange the scheme of Figure 5.53 in the equivalent scheme of Figure 5.54: Figure 5.54a illustrates how the prediction error is obtained starting from the input s(k), while Figure 5.54b shows how to obtain from f(k) the reconstruction signal s_q(k), given by

s_q(k) = ŝ(k) + f(k)    (5.123)

From (5.120) and (5.123), it is easy to prove that in the scheme of Figure 5.54

s_q(k) = s(k)    (5.124)

that is, the reconstructed signal coincides with the input signal; in terms of transforms, the inverse prediction error filter yields

S_q(z) = F(z)/[1 − C(z)] = S(z)    (5.125)

In the case C(z) = z⁻¹, that is for a predictor with a single coefficient equal to one, f(k) coincides with the difference between two consecutive input samples.

5.5.1 Configuration with feedback quantizer

We will now quantize the signal {f(k)}. With reference to the scheme of Figure 5.55, where the quantizer is inserted in the feedback loop, the following relations hold.
Figure 5.55. DPCM scheme with quantizer inserted in the feedback loop: (a) encoder; (b) decoder.

Encoder:
f(k) = s(k) − ŝ(k)    (5.126)
f_q(k) = Q[f(k)]    (5.127)
s_q(k) = ŝ(k) + f_q(k)    (5.128)
ŝ(k + 1) = Σ_{i=1}^{N} c_i s_q(k + 1 − i)    (5.129)

Decoder:
s_q(k) = ŝ(k) + f_q(k)    (5.130)
ŝ(k + 1) = Σ_{i=1}^{N} c_i s_q(k + 1 − i)    (5.131)

In other words, the quantized prediction error is transmitted over the binary channel. Let

e_q,f(k) = f_q(k) − f(k)    (5.132)

be the quantization error, and

Λ_q,f = E[f²(k)] / E[e²_q,f(k)]    (5.133)

the signal-to-quantization error ratio of the quantizer.
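The encoder/decoder recursions (5.126)-(5.131) can be sketched as below; the one-coefficient predictor c = (0.9,) and the quantizer parameters are illustrative. Because the encoder's predictor is driven by the reconstructed samples, the decoder reproduces s_q(k) exactly, and the reconstruction error equals the quantization error of f(k).

```python
import math

def quantize_midrise(f, delta=0.1, L=16):
    """Mid-rise quantizer for the prediction error; granular error is
    bounded by delta/2 when it does not saturate."""
    i = min(int(abs(f) / delta) + 1, L // 2)
    return (i - 0.5) * delta * (1.0 if f >= 0 else -1.0)

def dpcm_encode(signal, c=(0.9,)):
    """Encoder (5.126)-(5.129): predictor driven by sq(k)."""
    past = [0.0] * len(c)                 # [sq(k-1), ..., sq(k-N)]
    fq_seq = []
    for s in signal:
        s_hat = sum(ci * x for ci, x in zip(c, past))
        fq = quantize_midrise(s - s_hat)  # (5.126)-(5.127)
        past = [s_hat + fq] + past[:-1]   # (5.128)
        fq_seq.append(fq)
    return fq_seq

def dpcm_decode(fq_seq, c=(0.9,)):
    """Decoder (5.130)-(5.131): the same recursion driven by fq(k)."""
    past = [0.0] * len(c)
    out = []
    for fq in fq_seq:
        s_hat = sum(ci * x for ci, x in zip(c, past))
        past = [s_hat + fq] + past[:-1]
        out.append(past[0])
    return out
```

For a slowly varying input (no slope overload), the end-to-end error stays within half a quantization step.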
Recalling (5.132), from (5.128) we have

s_q(k) = ŝ(k) + f(k) + e_q,f(k) = s(k) + e_q,f(k)    (5.134)

To summarize, in a DPCM scheme the reconstruction error (or noise) depends only on the quantization of f(k), not of s(k). Observing (5.134), the signal-to-noise ratio is given by

Λ_q = M_s / M_eq,f    (5.135)

Given

G_p = M_s / M_f    (5.136)

called the prediction gain, and writing M_s/M_eq,f = (M_s/M_f)(M_f/M_eq,f), it follows that

Λ_q = G_p Λ_q,f    (5.137)

From (5.98) we know that, for an optimum quantizer, Λ_q,f is only a function of the number of bits and of the probability density function of {f(k)}, normalized by the standard deviation of {f(k)}; Λ_q,f is maximized by selecting the thresholds and the output values according to the techniques given in Section 5.3. Consequently, if M_f < M_s then also M_eq,f < M_eq, and the DPCM scheme presents an advantage over PCM. Equation (5.137) shows that, to obtain a given Λ_q, we can use a quantizer with few levels, which in turn determines the transmission bit rate, provided the input {s(k)} is highly predictable: Λ_q,f depends on the number of quantizer levels, whereas G_p depends on the predictor complexity and on the correlation sequence of the input {s(k)}. Therefore G_p can be sufficiently high also for a predictor with reduced complexity. In particular, the statistical power of {f(k)}, useful for scaling the quantizer characteristic, can be derived from (5.136), assuming M_s and G_p known:

M_f = M_s / G_p    (5.138)

Regarding the predictor, once the number of coefficients N is fixed, we need to determine the coefficients {c_i}, i = 1, ..., N, that minimize M_f. We observe that the input to the filter that yields ŝ(k) is {s_q(k)} and not {s(k)}; ignoring the dependence of G_p on {e_q,f(k)} in the design causes a deterioration of G_p with respect to the ideal case {s_q(k)} = {s(k)}, a decrease that is more prominent the larger {e_q,f(k)} is with respect to {s(k)}. For example, in the case N = 1, recalling (2.91), the optimum value of c_1 is given by ρ(1), the correlation coefficient of the input signal at lag 1; ignoring the effect of the quantizer, we have

G_p = 1 / (1 − ρ²(1))    (5.139)
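Expressed in dB, (5.139) can be evaluated directly:

```python
import math

def prediction_gain_db(rho1):
    """(5.139): G_p = 1/(1 - rho(1)^2) for a one-coefficient
    predictor, expressed in dB."""
    return 10.0 * math.log10(1.0 / (1.0 - rho1 * rho1))
```

For instance, ρ(1) = 0.85 yields about 5.6 dB, roughly the gain of one quantizer bit.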
1 ² 2 . being Q L=2 =Tc < max js. given the total 3q .13 Prediction gain with N D 1 for three values of ². a simple predictor with one coefﬁcient yields a prediction gain equivalent to one bit of the quantizer: consequently. (a) Reconstruction signal for a DPCM. In the speciﬁc case. We give in Table 5.k/g instead of fs.85 0. with (b) a 6 level quantizer. the output signal cannot follow the rapid changes of the input signal. c1 D ².2 10. the predictor of the scheme of Figure 5. 5.1 Figure 5.90 0.1/. can give poor performance because of the large quantization .56b.56a.k/g in the instants of maximum variation. Differential coding (DPCM) 389 Table 5.1/ D 0 there is no advantage in using the DPCM scheme.6 7.k 1/. the maximum level of the quantizer is instead related to the slope overload distortion in the sense that if Q L=2 is not sufﬁciently large. for an input with ².k/g presents a slope different from that of fs.1/ D 0:85.95 G p D 1=.56. O For a simple predictor with N D 1 and c1 D 1. as shown in Figure 5.55.k/ s.5. the DPCM scheme allows us to use a transmission bit rate lower than that of PCM. We note that. having as input fsq . Evidently.5.k/g. for an input having ².k/ D sq .k/g.1/ 0. Figure 5.k 1/j.1// (dB) 5. We note that the minimum level of the quantizer still determines the statistical power of the granular noise in fsq . fsq .13 the values of G p for three values of ². hence s .2 Alternative conﬁguration If we use few quantization levels.1/.5.56a illustrates the behavior of the reconstruction signal after DPCM with the sixlevel quantizer shown in Figure 5.
noise present in {s_q(k)}. An alternative consists in using the scheme of Figure 5.57, where the quantizer is inserted after the feedback loop and the following relations hold.

Encoder:
f(k) = s(k) − ŝ(k)    (5.140)
f_q(k) = Q[f(k)]    (5.141)
ŝ(k + 1) = Σ_{i=1}^{N} c_i s(k + 1 − i)    (5.142)

Decoder:
s_q,o(k) = ŝ_o(k) + f_q(k)    (5.143)
ŝ_o(k + 1) = Σ_{i=1}^{N} c_i s_q,o(k + 1 − i)    (5.144)

Figure 5.57. DPCM scheme with quantizer inserted after the feedback loop: (a) encoder; (b) decoder.

At the encoder, the prediction ŝ(k) is now obtained from the input signal without errors. However, the prediction signal reconstructed at the decoder is ŝ_o(k) ≠ ŝ(k), because it is computed from the reconstructed samples {s_q,o(k)} rather than from {s(k)}.
In fact, from (5.143) and (5.144), the reconstruction signal can be written as

s_q,o(k) = ŝ_o(k) + f(k) + e_q,f(k) = s(k) + [ŝ_o(k) − ŝ(k)] + e_q,f(k)    (5.146)

where the difference between the prediction signals, ŝ_o(k) − ŝ(k), may be non-negligible: s_q,o(k) can assume values quite different from s(k) + e_q,f(k). Indeed, even if by chance ŝ_o(k − 1) = ŝ(k − 1), as f_q(k − 1) ≠ f(k − 1) it results s_q,o(k − 1) ≠ s(k − 1) and consequently ŝ_o(k) ≠ ŝ(k). As a result, the output {s_q,o(k)} is affected by a larger disturbance than in the scheme with feedback quantizer, for which (5.134), s_q(k) = s(k) + e_q,f(k), holds exactly.

Observation 5.4
A difficulty of both schemes is the propagation of errors introduced by the binary channel: the inverse prediction error filter at the decoder must suppress such errors in a short time interval. This is difficult to achieve if the transfer function 1/[1 − C(z)] has poles near the unit circle, and consequently a very long impulse response. The problem occurs also in the scheme of Figure 5.55, though to a lesser extent as compared to the scheme of Figure 5.57.

5.5.3 Expression of the optimum coefficients

For linear predictors, we choose the coefficients {c_i} that minimize the statistical power of the prediction error,

M_f = E[(s(k) − ŝ(k))²]    (5.148)

where, for the scheme with feedback quantizer, the prediction signal is

ŝ(k) = Σ_{i=1}^{N} c_i s_q(k − i)    (5.149)

and s_q(k) = s(k) + e_q,f(k) is the reconstruction signal. We introduce the following vectors and matrices:

vector of prediction coefficients: c = [c_1, ..., c_N]^T
vector of correlation coefficients of {s(k)}: ρ = [ρ(1), ..., ρ(N)]^T

where ρ(i) is defined in (1.540).
Correlation matrix of {s_q(k)}: assuming⁹ {s(k)} and {e_q,f(k)} uncorrelated, and {e_q,f(k)} white with statistical power M_eq,f, the correlations are

r_sq(n) = r_s(n) + M_eq,f δ_n    (5.151)

Dividing by M_s, and using M_eq,f/M_s = 1/Λ_q, we obtain the normalized correlations

r_sq(n)/M_s = ρ(n) + (1/Λ_q) δ_n    (5.152)

hence the normalized correlation matrix

R = ⎡ 1 + 1/Λ_q   ρ(1)        ...   ρ(N−1) ⎤
    ⎢ ρ(1)        1 + 1/Λ_q   ...   ρ(N−2) ⎥
    ⎢   ⋮            ⋮        ⋱       ⋮    ⎥
    ⎣ ρ(N−1)      ρ(N−2)      ...   1 + 1/Λ_q ⎦    (5.153)

Recalling the analysis of Section 2.2, the optimum prediction coefficients are given by the matrix equation (2.78),

c_opt = R⁻¹ ρ    (5.154)

and the corresponding minimum value of M_f is obtained from (2.79),

M_f = M_s (1 − c_opt^T ρ)    (5.155)

The difficulty of this formulation is that, to determine the solution, we need to know the value of Λ_q (see (5.153)). We may consider the solution with the quantizer omitted, hence Λ_q → ∞; in this case some efficient algorithms to determine c and M_f in (5.154) and (5.155) are given in Sections 2.2.1 and 2.2.2, and the solution depends only on the second-order statistic of {s(k)}.

Effects due to the presence of the quantizer. In general it is very difficult to analyze the effects of {e_q,f(k)} on G_p, except in the case N = 1, for which (5.154) becomes

c_opt,1 = ρ(1) / (1 + 1/Λ_q)    (5.157)

and the prediction gain is given by

G_p = 1 / (1 − c_opt,1 ρ(1))    (5.158)

9. Assuming for {s(k)} and {e_q,f(k)} the correlation model stated above.
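A minimal solver for the normal equations (5.154) is sketched below; the diagonal loading `inv_lambda` = 1/Λ_q models the quantization noise, and plain Gaussian elimination is adequate for the small N used in DPCM.

```python
def optimal_predictor(rho, inv_lambda=0.0):
    """Solve R c = rho, with R[i][j] = rho(|i-j|) off the diagonal and
    1 + inv_lambda on the diagonal; rho = [rho(1), ..., rho(N)]."""
    N = len(rho)
    # augmented matrix [R | rho]
    A = [[(1.0 + inv_lambda) if i == j else rho[abs(i - j) - 1]
          for j in range(N)] + [rho[i]] for i in range(N)]
    for col in range(N):                       # forward elimination
        piv = max(range(col, N), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, N):
            m = A[r][col] / A[col][col]
            for j in range(col, N + 1):
                A[r][j] -= m * A[col][j]
    c = [0.0] * N
    for i in range(N - 1, -1, -1):             # back substitution
        c[i] = (A[i][N] - sum(A[i][j] * c[j]
                              for j in range(i + 1, N))) / A[i][i]
    return c
```

The minimum prediction error power then follows from (5.155) as M_f = M_s (1 − c·ρ). For N = 1 the routine reproduces (5.157), and for an AR(1)-like correlation ρ(n) = ρ(1)ⁿ the second coefficient is zero, as expected.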
Only for Λ_q → ∞ does (5.157) give c_opt,1 = ρ(1); if the system is very noisy, that is if Λ_q is small, c_opt,1 is small and G_p tends to 1.

It may occasionally happen that a suboptimum value is assigned to c_1: we now evaluate the corresponding value of G_p. For N = 1 and any c_1 it is

M_f = E[(s(k) − c_1 s_q(k − 1))²] = M_s (1 − 2c_1 ρ(1) + c_1²) + c_1² M_eq,f    (5.160)

where, from (5.133), M_eq,f = M_f/Λ_q,f, and Λ_q = G_p Λ_q,f as in (5.137). In principle (5.160) allows the computation of the optimum value of c_1 for a predictor with N = 1 in the presence of the quantizer; however, the expression is complicated and will not be given here. Rather, we derive G_p for two values of c_1. For c_1 = ρ(1) we obtain

G_p = (1 − ρ²(1)/Λ_q,f) / (1 − ρ²(1))    (5.162)

where the term ρ²(1)/Λ_q,f is due only to the presence of the quantizer. For c_1 = 1 we have

G_p = (1 − 1/Λ_q,f) / (2(1 − ρ(1)))    (5.163)

We note that the choice c_1 = 1 leads to a simple implementation of the predictor; however, it results in G_p > 1 only if ρ(1) > 1/2.

5.5.4 Adaptive predictors

Speech is a nonstationary signal, and adaptive predictors should be used. Various experiments with speech have demonstrated that, for very long observations of the order of one second, the prediction gain of a fixed predictor is between 5 and 7 dB, and saturates for N ≥ 2. In adaptive differential PCM (ADPCM) the predictor is time-varying; therefore we have

ŝ(k) = Σ_{i=1}^{N} c_i(k) s_q(k − i)    (5.164)
In practice N ≈ 10 is used. The vector c = [c_1, ..., c_N]^T is chosen to minimize M_f over short intervals within which the signal {s(k)} is quasi-stationary: speech signals have slowly varying spectral characteristics and can be assumed stationary over intervals of the order of 5-25 ms. Also for ADPCM two strategies emerge.

Adaptive feedforward predictors

The general scheme is illustrated in Figure 5.58. We consider an observation window of the signal {s(k)} of K samples; for speech sampled at 8 kHz we choose K T_c ≈ 10-20 ms. Based on these samples, the input autocorrelation function is estimated up to lag N using (1.478); then we solve the system of equations (5.154) to obtain the coefficients c and the statistical power of the prediction error. These quantities, after being appropriately quantized for finite-precision representation, give the parameters of the predictor c_q and of the quantizer (σ_f)_q: the system is now ready to encode the samples of the observation window in sequence. For the next K samples of {s(k)} the procedure is repeated. The quantized parameters of the system, i.e. the side information, must be sent to the receiver to reconstruct the signal. We note that, not considering the computation time, this system introduces a minimum delay of K samples from {s(k)} to the decoder output {s̃_q(k)}.

The performance improvement obtained by an adaptive scheme is illustrated in Figure 5.59: the power measured on windows of 128 samples is shown in Figure 5.59a, where we note that the speech level exhibits a dynamic range of 30-40 dB and changes value rapidly. The prediction gain in the absence of the quantizer is shown in Figures 5.59b and 5.59c for a fixed predictor with N = 3 and an adaptive predictor with N = 10, respectively. The fixed predictor is determined by considering the statistic of the whole signal; thus within certain windows the prediction gain is even less than 1. The adaptive predictor is estimated at every window by the feedforward method and yields G_p > 1, even for unvoiced spurts that present small correlation; for some voiced spurts, G_p can reach values of 20-30 dB.

Sequential adaptive feedback predictors

Also for adaptive feedback predictors we could observe {s_q(i)}, i < k: an alternative could be that of estimating at every instant k the correlation of {s_q(i)} over a window of K samples and applying the same procedure as the feedforward method; however, the observation is now available only for instants i < k, and this method requires too many computations, so it is not suitable to track rapid changes of the input statistic. A simple alternative, of the sequential adaptive type, is illustrated in Figure 5.60, where the predictor is adapted by the LMS algorithm (see Section 3.1.2). Defining

s_q(k) = [s_q(k), s_q(k − 1), ..., s_q(k − N + 1)]^T    (5.165)

the coefficient adaptation is given by

c(k + 1) = µ_1 c(k) + µ f_q(k) s_q(k),    0 < µ < 2/(N r_sq(0))    (5.166)
Figure 5.58. ADPCM scheme with feedforward adaptation of both predictor and quantizer: (a) encoder; (b) decoder.

Here µ_1 ≤ 1 controls the stability of the decoder if, because of binary channel errors, it occasionally happens that f̃_q(k) ≠ f_q(k) and therefore s̃_q(k) ≠ s_q(k); in decoding, the same equations are used with f̃_q(k) in place of f_q(k) and s̃_q(k) in place of s_q(k). Table 5.14 gives the algorithm, while Figure 5.61 illustrates the implementation.
Figure 5.59. (a) Speech level measured on windows of 128 samples, and corresponding prediction gain G_p for (b) a fixed predictor (N = 3) and (c) an adaptive predictor (N = 10). For these measurements the quantizer was removed.

Table 5.14 Adaptation equations of the LMS adaptive predictor.

Initialization:
c(0) = 0
s_q(0) = 0 (or s_q(0) = s(0))
ŝ(0) = 0

For k = 0, 1, ...:
f(k) = s(k) − ŝ(k)
f_q(k) = Q[f(k)]
s_q(k) = ŝ(k) + f_q(k)
c(k + 1) = µ_1 c(k) + µ f_q(k) s_q(k)
ŝ(k + 1) = c^T(k + 1) s_q(k)

Observation 5.5
For a stationary input, as for example a modem signal, the LMS adaptive prediction can be used to easily determine the predictor coefficients; however, once convergence is reached, it is better to switch off the adaptation. This observation does not apply to speech, which presents characteristics that may change very rapidly with time.
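The recursion of Table 5.14 can be turned into a short routine; the step sizes below are illustrative, and the identity default for the quantizer lets the prediction behavior be checked in isolation.

```python
import random

def adpcm_lms_predictor(signal, N=1, mu=0.5, mu1=1.0, quant=lambda f: f):
    """Sequential feedback-adaptive predictor of Table 5.14: the update
    c(k+1) = mu1*c(k) + mu*fq(k)*sq(k) uses only reconstructed samples,
    so the decoder can run the same recursion. mu1 slightly below 1
    adds the leakage that limits channel-error propagation."""
    c = [0.0] * N
    past = [0.0] * N                   # [sq(k-1), ..., sq(k-N)]
    pred_err = []
    for s in signal:
        s_hat = sum(ci * x for ci, x in zip(c, past))
        f = s - s_hat                  # prediction error f(k)
        fq = quant(f)                  # fq(k) = Q[f(k)]
        sq = s_hat + fq                # reconstruction sq(k)
        c = [mu1 * ci + mu * fq * x for ci, x in zip(c, past)]
        past = [sq] + past[:-1]
        pred_err.append(f)
    return c, pred_err
```

On a synthetic AR(1) input, the single coefficient converges toward ρ(1) and the prediction error power drops well below the signal power.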
Figure 5.60. ADPCM scheme with feedback adaptation of both predictor and quantizer: (a) encoder; (b) decoder.
Figure 5.61. LMS adaptive predictor.

Performance

Objective and subjective experiments conducted on speech signals sampled at 8 kHz have indicated that adopting ADPCM rather than PCM leads to a saving of 2 to 3 bits per sample in encoding: for example, a 5-bit ADPCM scheme yields the same quality as a 7-bit PCM. Obviously, in both cases the quantizer is nonuniform.

5.5.5 Alternative structures for the predictor

In this section we omit the quantizer and analyze alternative structures for the predictor. For further study of the various signal models (AR, MA, and ARMA) we refer to Section 1.12.

All-pole predictor

The predictor considered in the previous section implies an inverse prediction error filter, or synthesis filter (see Figure 2.9), with transfer function

H(z) = 1 / [1 − C(z)]    (5.167)

which has only poles (neglecting zeros at the origin). The predictor refers to an input model whose samples are given by

s(k) = Σ_{i=1}^{N} a_i s(k − i) + w(k)    (5.168)

We consider two cases.
fw.k/ s .k/g is white noise.170) Correspondingly from (5.k i/ i D 1.k/ D A C1 X nD 1 Žk nP (5. the synthesis ﬁlter has a FIR all zero transfer function H .k/ D O Correspondingly.k/.k/g is a periodic sequence of impulses.5.k/ D w. P. In this case (5. Then (5.k/ C ¼b f q .168) implies that also fs. 2.k C 1/ D bi .172) Polezero predictor The general case refers to an ARMA( p. We observe that the two cases model in a simpliﬁed manner the input to an allpole ﬁlter whose output is unvoiced (case 1) or voiced (case 2).k i/ (5.s. In this case s .k/ D w.z/ D O nD1 an z it yields f .z/ D 1 C q X i D1 bi z i (5.k i/ C bi f .174) i . q) input model.171) Incidentally we note that an approximate LMS adaptation of the coefﬁcients fbi g is given by bi . The optimum predictor is still C. w.k/g is called LPC residual.169) with P × N . The coding scheme that makes use of an allpole predictor is called linear predictive coding (LPC) and the prediction error f f .173) bi z ci z i (5. Allzero predictor For an MA input model.k i/ (5. the prediction signal is given by s .z/ D 1 q X i D1 p X i D1 p X i D1 q X i D1 ci s. The PN n and optimum predictor that minimizes E[.k/.k/ f q . for sq . Differential coding (DPCM) 399 1.k/g is a periodic signal of period PN n and it yields f . fw. : : : .168) implies an AR(N ) model for the input.5.k//2 ] is C.z/ D nD1 an z equal to a periodic sequence of impulses.k/. we have 1C H .k/ D s.k/ D O q X i D1 bi f . q (5.124).
The equations (5.173) and (5.174) are illustrated in Figure 5.62. This configuration was adopted by the ADPCM G.721 standard at 32 kbit/s (see Table 5.16), which uses an LMS adaptive predictor with 2 poles and 6 zeros; the 4-bit quantizer is adapted by the Jayant scheme. For the LMS adaptation of the coefficients, we refer to (5.166) for the coefficients {c_i} and to (5.172) for the coefficients {b_i}.

Figure 5.62. Pole-zero predictor: (a) analysis; (b) synthesis.

Pitch predictor

An alternative structure exploits the quasi-periodic behavior, of period P, of voiced spurts of speech. In this case it is convenient to use the estimate

ŝ(k) = ŝ_ℓ(k) + ŝ_s(k)    (5.175)

where

ŝ_ℓ(k) = β s(k − P)    (5.176)

is the long-term estimate, β is the pitch gain, and P is the pitch period expressed in number of samples. Let {f_ℓ(k)} be the corresponding prediction error,

f_ℓ(k) = s(k) − ŝ_ℓ(k) = s(k) − β s(k − P)    (5.177)

and let

ŝ_s(k) = Σ_{i=1}^{N} c_i f_ℓ(k − i)    (5.178)

be the short-term estimate. The overall prediction error

f(k) = f_ℓ(k) − ŝ_s(k) = f_ℓ(k) − Σ_{i=1}^{N} c_i f_ℓ(k − i)    (5.179)

is related to the input {s(k)} as shown in the scheme of Figure 5.63.
Figure 5.63. Cascade of a long-term predictor with an all-pole (short-term) predictor.

The subdivision into two terms, even if not optimum, has the advantage of allowing a very simple computation of the various parameters.

1. Computation of the long-term predictor, through minimization of the cost function

min E[f_ℓ²(k)] = E[(s(k) − β s(k − P))²]    (5.180)

It follows¹⁰

P = arg max_{n≠0} ρ(n)    (5.181)

and

β = ρ(P)    (5.182)

where ρ(n) represents the correlation coefficient of {s(k)} at lag n.

2. Determination of the short-term predictor, once P and β are determined, through minimization of the cost function

min_c E[f²(k)]    (5.183)

From the estimate of the autocorrelation sequence of {s(k)}, the autocorrelation sequence of {f_ℓ(k)} is easily computed. Then the coefficients {c_i} of the short-term predictor can be obtained by solving a system of equations similar to (5.154), where the matrix and the vector depend on the autocorrelation coefficients of {f_ℓ(k)}.

10. For the adopted notation see Footnote 3 on page 441.

APC

Because P is usually in the range from 40 to 120 samples, for speech signals sampled at 8 kHz the whole predictor in (5.179) is in fact of all-pole type with a very high order N. The encoder and decoder schemes are given in Figure 5.64: they form the adaptive predictive coding (APC) scheme, which differs from DPCM by the inclusion of the long-term predictor. Experimental measurements, initially conducted by Atal [5], have demonstrated that, by adapting the various coefficients with the feedforward method every 5 ms, we obtain high-quality speech reproduction at an overall bit rate of only 10 kbit/s, using only a one-bit quantizer.
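The parameter estimation (5.181)-(5.182) can be sketched as follows; the lag range 40-120 matches the text, and the biased correlation estimator ρ(n) = Σ_k s(k)s(k−n) / Σ_k s²(k) is one common choice, not the book's only option.

```python
def long_term_predictor(s, p_min=40, p_max=120):
    """Estimate the pitch period P and pitch gain beta per
    (5.181)-(5.182): P maximizes rho(n) over the admissible lags,
    and beta = rho(P)."""
    energy = sum(x * x for x in s)
    def rho(n):
        return sum(s[k] * s[k - n] for k in range(n, len(s))) / energy
    P = max(range(p_min, p_max + 1), key=rho)
    return P, rho(P)
```

On a synthetic periodic input, the estimator locks onto the fundamental period rather than its multiples, because the biased estimator penalizes longer lags.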
Figure 5.64. Adaptive predictive coding (APC) scheme.

For voiced speech, the improvement provided by the long-term predictor in lowering the LPC residual is shown in Figure 5.65: without the long-term predictor, the LPC residual presents a peak at every pitch period P (Figure 5.65b); these peaks are removed by the long-term predictor (Figure 5.65c). The frequency-domain representation of the three signals of Figure 5.65 is given in Figure 5.66. We note that the plots in Figures 5.66a and 5.66b exhibit some spectral lines, due to the periodic behavior of the corresponding signals in the time domain, whereas these lines are attenuated in the plot of Figure 5.66c.

Figure 5.65. (a) Voiced speech; (b) LPC residual; (c) LPC residual with long-term predictor.
Figure 5.66. DFT of the signals of Figure 5.65: (a) voiced speech; (b) LPC residual; (c) LPC residual with long-term predictor.
Other long-term predictors, with 2 or 3 coefficients, have been proposed: although they are more effective, the determination of their parameters is much more complicated than the approach (5.180). There are also numerous methods that are more robust and effective than (5.181) for determining the pitch period P. Moreover, to avoid this laborious computation, all-pole predictors with more than 50 coefficients have been proposed, thus partly assimilating the long-term predictor into the overall predictor.

Two improvements with respect to the basic APC scheme are outlined in the following observations [3].

Observation 5.6
In APC, instead of the prediction coefficients {c_i}, i = 1, ..., N, parameters associated with them are normally sent: for example, reflection coefficients (PARCOR), area functions, or line spectrum pairs (LSP).

Observation 5.7
From the standpoint of perception, it is important to have a signal-to-noise ratio that is constant in the frequency domain: this yields the so-called spectral shaping of the error, obtained by filtering the residual error so that it is reduced at frequencies where the signal has low energy and enhanced at frequencies where the signal has high energy.

5.6 Delta modulation

5.6.1 Oversampling and quantization error

For an input signal s(t), t ∈ ℝ, a WSS random process with bandwidth B, let x(k) = s(kT_c) be the sampled version of s(t), with sampling period

T_c = T_0 / F_0    (5.184)

where

T_0 = 1/(2B)    (5.185)

and F_0 is the oversampling factor. Using (1.90) and the definition (1.247), the autocorrelation of {x(k)} is

r_x(n) = r_s(n T_0/F_0)    (5.187)

and its power spectral density is

P_x(f) = (F_0/T_0) Σ_{ℓ=−∞}^{+∞} P_s(f − ℓ F_0/T_0)    (5.188)
Figure 5.67 shows the effect of oversampling on {x(k)} for two values of the oversampling factor F0. With reference to Figure 5.67a, we note that by increasing F0 the samples {x(k)} become more correlated; moreover, from (5.188) we have that the spectrum of {x(k)} presents images that are more spaced apart from each other.

Let us now quantize {x(k)}: let xq(k) be the quantized signal and eq(k) the corresponding quantization error, so that

    xq(k) = x(k) + eq(k)    (5.189)

Figure 5.67. Effects of oversampling for two values of the oversampling factor F0.
With reference to Figure 5.68, by filtering {xq(k)} with an ideal lowpass filter g having bandwidth B and unit gain, at the output we will have

    yq(k) = x(k) + eq,o(k)    (5.190)

where eq,o(k) = (eq * g)(k). From (5.189), under the noncorrelation assumption between x and eq, we have

    Pxq(f) = Px(f) + Peq(f)    (5.191)

Moreover, under the assumption that the quantization noise {eq(k)} is white with statistical power Meq, we have

    Peq(f) = Meq (T0/F0)    (5.192)

Consequently, by increasing F0 the PSD of eq decreases in amplitude. For {eq,o(k)}, the PSD is given by

    Peq,o(f) = Meq (T0/F0) rect(f/(2B))    (5.193)

Then

    Meq,o = Meq / F0    (5.194)

and the effective signal-to-noise ratio is given by

    Λq,o = Λq F0    (5.195)

where Λq = Mx/Meq is the signal-to-noise ratio after the quantizer, depending only on the number of quantizer levels (see (5.44) and (5.98)). In conclusion, oversampling improves performance by a factor F0: for example, F0 = 4 improves Λq,o by 6 dB, but at the expense of quadrupling the bit rate. In fact, the encoder bit rate

    Rb = b (1/Tc) = b (F0/T0)    (5.196)

increases proportionally to F0. Therefore oversampling by large factors is used only locally before PCM, or in compact disc (CD) applications, to simplify the analog interpolation filter at the receiver.

Figure 5.68. General scheme.
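The 6 dB figure follows directly from (5.195), since Λq,o/Λq = F0; a one-line numerical check (the function name is ours):

```python
import math

def oversampling_gain_db(F0):
    """Gain of (5.195) in dB: lowpass filtering the white quantization
    noise after oversampling by F0 keeps only 1/F0 of its power."""
    return 10 * math.log10(F0)

# F0 = 4 improves the output SNR by about 6 dB, at the cost of
# quadrupling the bit rate Rb = b*F0/T0 of (5.196).
print(round(oversampling_gain_db(4), 2))   # -> 6.02
```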
5.6.2 Linear delta modulation (LDM)

Linear delta modulation is a DPCM scheme with oversampled input signal,

    1/Tc >> 2B    (5.197)

and a quantizer with only two levels (b = 1). Then the encoder bit rate is equal to the sampling rate,

    Rb = 1/Tc (bit/s)    (5.198)

We note, moreover, that one-bit code words eliminate the need for framing of the code words at the transmitter and at the receiver, thus simplifying the overall system. The high value of F0 implies a high predictability of the input sequence: therefore a predictor with a few coefficients gives a high prediction gain, and the quantizer can be reduced to the simplest case of b = 1. For a predictor with only one coefficient c1, the coding scheme, which is illustrated in Figure 5.69, is called a linear delta modulator (LDM). The following relations hold.

Figure 5.69. LDM coding scheme.
Encoder:

    f(k) = s(k) - ŝ(k)    (5.199)
    c(k) = 1 if f(k) ≥ 0,  c(k) = 0 if f(k) < 0    (5.200)
    fq(k) = Δ if c(k) = 1,  fq(k) = -Δ if c(k) = 0    (5.201)
    sq(k) = ŝ(k) + fq(k)    (5.202)
    ŝ(k + 1) = c1 sq(k)    (5.203)

Decoder:

    fq(k) = Δ if c(k) = 1,  fq(k) = -Δ if c(k) = 0
    sq(k) = ŝ(k) + fq(k),  ŝ(k + 1) = c1 sq(k)
    s̃(k) = (sq * g)(k)    (5.204)

The system is based on three parameters (1/Tc, Δ, and c1), which must be appropriately selected. For example, the choice c1 = 1 considerably simplifies (5.202) and (5.203), which become simple accumulator expressions.

An implementation of the scheme of Figure 5.69 for c1 = 1 is given in Figure 5.70a, where the DAC is often a simple holder.

Digital implementation. The encoder carries out the operation f(k) = s(k) - ŝ(k) in the digital domain; this implementation involves the accumulation of {b(i)}, i ≤ k, by an up-down counter (ACC). The accumulated value is proportional to ŝ(k).

Mixed analog-digital implementation. An alternative to the previous scheme is obtained by placing a DAC after the accumulator and carrying out the comparison in the analog domain, as illustrated in Figure 5.70b.

Analog implementation. In many applications it is convenient to implement the accumulator by an analog integrator: thus we obtain the implementation of Figure 5.70c.

Note that the decoder consists simply of an accumulator followed by a DAC. At the receiver, the integrator, which performs the function of the filter g, has a bandwidth B equal to that of the input signal, to eliminate the out-of-band noise.

Choice of system parameters
With reference to Figure 5.71, we will now establish a relation between the various parameters of an LDM. As for the DPCM scheme, typically we set c1 ≤ 1 so that random transmission errors do not propagate indefinitely in the reconstruction signal. Also for the LDM, we need to choose a small Δ to obtain low granular noise; however, as illustrated in Figure 5.71, to get a small slope overload distortion a large Δ is needed.

Figure 5.70. LDM implementations.

Figure 5.71. Graphic representation of LDM: granular noise and slope overload distortion.
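The LDM relations (5.199)-(5.203) can be simulated in a few lines; in the following sketch the test signal, the step size Δ = 0.1 and c1 = 1 are illustrative choices of ours, and the output lowpass filter g is omitted. It shows that encoder and decoder run the same accumulator and stay in step:

```python
import math

def ldm_encode(s, delta=0.1, c1=1.0):
    """LDM encoder: one-bit coding of an oversampled signal
    with a one-tap predictor."""
    s_hat, bits, sq = 0.0, [], []
    for sk in s:
        c = 1 if sk - s_hat >= 0 else 0       # prediction error sign
        fq = delta if c == 1 else -delta      # two-level quantizer output
        sqk = s_hat + fq                      # local reconstruction sq(k)
        bits.append(c)
        sq.append(sqk)
        s_hat = c1 * sqk                      # one-tap predictor
    return bits, sq

def ldm_decode(bits, delta=0.1, c1=1.0):
    """LDM decoder: the same accumulator driven by the received bits
    (the final lowpass filter g is omitted here)."""
    s_hat, sq = 0.0, []
    for c in bits:
        fq = delta if c == 1 else -delta
        sqk = s_hat + fq
        sq.append(sqk)
        s_hat = c1 * sqk
    return sq

# A heavily oversampled sinusoid: the staircase sq(k) tracks the input,
# and the decoder reproduces the encoder's local reconstruction exactly.
s = [math.sin(2 * math.pi * k / 200) for k in range(400)]
bits, sq_enc = ldm_encode(s)
assert ldm_decode(bits) == sq_enc
```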
More precisely, Δ must be chosen so that

    Δ/Tc ≥ max_t |(d/dt) s(t)|    (5.205)

and the reconstruction signal {sq(k)} can follow very rapid changes of the input signal. In other words, if we reduce Δ to decrease the granular noise, we must also reduce Tc to limit the slope overload distortion; for a given value of max_t |(d/dt) s(t)|, the only possibility in the LDM is to reduce Tc. As a consequence, the LDM requires a very high oversampling factor F0 to give satisfactory performance. The optimum value of Δ is given approximately by

    Δopt = sqrt(2 Ms (1 - ρx(1)) ln(2F0))    (5.206)

where ρx(1) = rs(Tc)/rs(0). We note that by doubling the sampling rate we obtain an increment in Λq,o of 9 dB: 3 dB are due to the filtering of the out-of-band noise, and 6 dB to the reduction of the granular noise, as Δ can be halved. In speech applications with a bandwidth of about 3 kHz, to have a Λq,o of approximately 35 dB we need F0 ≥ 33, which requires a sampling rate of the order of 200 kHz.

5.6.3 Adaptive delta modulation (ADM)

To reduce both granular noise and slope overload distortion, an alternative is represented by an adaptive scheme for the step size Δ, as shown in Figure 5.72. In particular, the Jayant algorithm uses the following relation:

    Δ(k) = p Δ(k-1)    (5.207)

Figure 5.72. ADM coding scheme: (a) encoder; (b) decoder.
where

    p = po > 1  if c(k) = c(k-1)  (slope overload)
    p = pg < 1  if c(k) ≠ c(k-1)  (granular noise)    (5.208)

Typical values for po and pg are given by 1.25 < po < 2 and po pg ≤ 1. We note that in this scheme Δ(k) also depends on c(k). The following relations between the signals of Figure 5.72 hold.

Encoder:

    f(k) = s(k) - ŝ(k)    (5.209)
    c(k) = sgn f(k)    (5.210)
    Δ(k) = p Δ(k-1)    (5.211)
    fq(k) = c(k) Δ(k)    (5.212)
    sq(k) = ŝ(k) + fq(k)    (5.213)
    ŝ(k + 1) = c1 [ŝ(k) + fq(k)]    (5.214)

Decoder:

    Δ(k) = p Δ(k-1)    (5.215)
    fq(k) = c(k) Δ(k),  sq(k) = ŝ(k) + fq(k),  ŝ(k + 1) = c1 sq(k)    (5.216)
    s̃(k) = (sq * g)(k)    (5.217)

A graphic representation of ADM encoding is shown in Figure 5.73. Experiments on speech signals show that by doubling the sampling rate in the ADM we get an improvement of 10 dB in Λq,o. In some applications, ADM encoding is preferred to PCM because of its simple implementation, in spite of the higher bit rate.

Continuously variable slope delta modulation (CVSDM)
An alternative to the adaptation (5.207) is given by the equation

    Δ(k) = α Δ(k-1) + D2  if c(k) = c(k-1) = c(k-2)  (slope overload)
    Δ(k) = α Δ(k-1) + D1  otherwise  (granular noise)    (5.218)

where 0 < α ≤ 1, and D1 and D2 are suitable positive parameters with D2 >> D1.

Figure 5.73. Graphic representation of ADM.
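The Jayant adaptation (5.207)-(5.208) is easily added to the basic LDM loop; in this sketch the values po = 1.5, pg = 0.66, the initial step size, and the clamping bounds on Δ are illustrative choices of ours:

```python
def adm_encode(s, delta0=0.05, po=1.5, pg=0.66, d_min=1e-3, d_max=1.0):
    """ADM with the Jayant rule: Delta(k) = p*Delta(k-1), where p = po > 1
    if the last two output bits agree (slope overload) and p = pg < 1
    otherwise (granular noise). Delta is clamped to [d_min, d_max]."""
    s_hat, delta, c_prev = 0.0, delta0, None
    bits, sq = [], []
    for sk in s:
        c = 1 if sk - s_hat >= 0 else 0          # one-bit quantization
        if c_prev is not None:
            p = po if c == c_prev else pg
            delta = min(max(delta * p, d_min), d_max)
        fq = delta if c == 1 else -delta
        sqk = s_hat + fq                          # local reconstruction
        bits.append(c)
        sq.append(sqk)
        s_hat, c_prev = sqk, c                    # predictor with c1 = 1
    return bits, sq
```

The decoder repeats the same Δ recursion driven by the received bits, so encoder and decoder step sizes stay synchronized; this is also why transmission errors are critical for adaptive schemes.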
The value α controls the speed of adaptation; moreover, choosing α < 1 mitigates the effects of transmission errors, which are critical especially for α = 1. The problem now is to determine the slope overload condition from the sequence {c(k)}.

ADM with second-order predictors
With the aim of improving system performance, in some cases second-order predictors are used, with

    ŝ(k) = c1 sq(k-1) + c2 sq(k-2)    (5.219)

The transfer function of the synthesis filter is given by

    H(z) = 1 / (1 - c1 z^-1 - c2 z^-2)    (5.220)

The function H(z) can be split into the product of two first-order terms,

    H(z) = 1 / ((1 - p1 z^-1)(1 - p2 z^-1))    (5.221)

If p1 and p2 are real with 0 < p1, p2 ≤ 1, the decoder is equivalent to the cascade of two leaky integrators. The main difficulty of this scheme is the sensitivity of {sq(k)} to transmission errors.

5.6.4 PCM encoder via LDM

We consider an alternative scheme to the three implementations of Section 5.6.2 to generate a linear PCM encoded signal, that employs the LDM implementation of Figure 5.70c, as illustrated in Figure 5.74. It is sufficient to accumulate {b(k)} to obtain a PCM representation of the input s(t). Disregarding the filtering effect on noise, we observe that to generate a PCM signal {cPCM(k)} with an accuracy of b bits, the oversampling factor F0 must be at most equal to 2^b and, in general,

    1 << F0 ≤ 2^b    (5.222)

For example, for b = 8 and 1/T0 = 8 kHz, this means 1/Tc = F0/T0 ≈ 2 MHz. Using a decimator filter, that is a lowpass filter followed by a downsampler, we obtain the PCM output signal sampled at the minimum rate 1/T0.

Figure 5.74. Linear PCM encoder via LDM.
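The idea of Figure 5.74, running the LDM at the oversampled rate, accumulating, and then decimating, can be sketched as follows (the parameters are illustrative choices of ours, and the decimator is reduced to a plain moving average followed by downsampling):

```python
def pcm_via_ldm(s, delta=1.0 / 256, F0=256):
    """PCM encoder via LDM: run a one-bit LDM (c1 = 1) at the oversampled
    rate, then decimate the staircase by F0 (moving average + downsampling)."""
    acc, staircase = 0.0, []
    for sk in s:
        acc += delta if sk >= acc else -delta    # LDM accumulator
        staircase.append(acc)
    # Decimator filter: average over F0 samples, keep one sample per window.
    return [sum(staircase[i:i + F0]) / F0
            for i in range(0, len(staircase) - F0 + 1, F0)]
```

For a constant input, once the staircase has locked, each decimated output approximates the input to within a fraction of Δ.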
5.6.5 Sigma-delta modulation (ΣΔM)

With reference to the scheme of Figure 5.70c, recalling that the spectrum of the quantization noise in PCM and LDM is flat, to enhance the low-frequency components of speech signals we can insert a pre-emphasis integrator before the LDM encoder; a differentiator then has to be inserted at the LDM decoder, which simplifies the LDM integrator: therefore the decoder becomes a simple lowpass filter. Thus we get the general scheme of Figure 5.75a and the simplified scheme of Figure 5.75b: we note that the DACs are simple holders of binary signals. It is interesting to observe the simplicity of the ΣΔM implementation: with respect to the scheme of Figure 5.70c, the accumulator has been removed. Moreover, ΣΔM presents the advantage that the noise is colored and for the most part is found outside the passband of the desired signal; therefore it can be removed to a large extent by a simple lowpass filter. One of the most frequent applications of ΣΔM is in linear PCM encoders where, similarly to the scheme of Figure 5.74, it is sufficient to employ a ΣΔM followed by a digital decimator filter with input the binary signal {b(k)}.

Figure 5.75. ΣΔM coding scheme.

5.7 Coding by modeling

In the coding schemes investigated so far (PCM, DPCM, and their variations), given the general coding scheme of Figure 5.76, the objective is to reproduce at the decoder a waveform that is as close as possible to the input signal. We now take a different approach.
Figure 5.76. Basic scheme of coding by modeling.

The source {s(k)}, for example speech, is modeled by an AR(N) linear system

    H(z) = σ / (1 - Σ_{i=1}^{N} ci z^-i)    (5.223)

with input {f(k)}. In (5.223) the coefficients {ci} and σ are obtained by the prediction algorithms of Chapter 2. In particular, the standard deviation σ in (5.223) is given by

    σ = sqrt(JN)    (5.224)

where JN is the statistical power of the prediction error, as we will assume that {f(k)} has unit statistical power. The difference among the various coding schemes consists in the form of the excitation. Three examples follow; for further study we refer the reader to [3, 6].

Multipulse LP (MELP). The excitation signal consists of a certain number of impulses with suitable amplitude and lag.

Regular pulse excited (RPE). The excitation signal consists of a train of undersampled impulses.

Codebook excited linear prediction (CELP). The excitation signal is selected from a collection of possible waveforms stored in a table (codebook), derived from the residual signal.

We will now analyze in detail some coding schemes.

Vocoder or LPC
The general scheme for the conventional LPC, known also as the LPC vocoder, is illustrated in Figure 5.77. At the encoder, the signal is classified as voiced or unvoiced and the LPC parameters are extracted, together with the pitch period P for the voiced case.
77. as shown in Figure 5. the input signal sampled at 8 kHz is segmented into blocks of 180 samples. the excitation sequence is then quantized using a 3bit adaptive nonuniform quantizer. At the decoder. for the voiced case a train of impulses with period P is produced. The standard ETSI for GSM (06. For the analysis of the LPC parameters the covariance method is used. The excitation is then ﬁltered by the AR ﬁlter to generate the reconstruction signal. Coding by modeling 415 Figure 5. overall 54 bits per block are needed with a bit rate of 2400 bit/s. the prediction residual error is not transmitted. operating with blocks of 160 samples. Vocoder or LPC scheme.5. is a particular case of residual excited LP (RELP) coding in which the excitation is obtained by downsampling the prediction residual error by a factor of 3. where all the excitations are tried: the best is that which produces the output “closest” to the original signal.64.7. The choice of the best of the three subsequences (actually four are used in practice) is made by the analysisbysynthesis (ABS) approach. In an early LPC scheme for military radio applications (LPC10). The bit rate is 13 kbit/s.78a. with a latency lower than 80 ms.78b. .10) includes also a longterm predictor as the one of Figure 5. illustrated in Figure 5. whereas for the unvoiced case white noise is produced. RPE coding The RPE coding scheme.
Figure 5.78. RPE coding scheme.

CELP coding
As shown in Figure 5.79, the excitations belong to a codebook obtained in a "random" way, or by vector quantization (see Section 5.8) of the residual signal. The choice of the excitation (index of the codebook) is made by the ABS approach, trying to minimize the output of the weighting filter; also in this case the predictor includes a long-term component.
Figure 5.79. CELP coding scheme.

Multipulse coding
It is similar to CELP coding, with the difference that the minimization procedure is used to determine the position and amplitude of a specific number of impulses. The analysis procedure is less complex than that of the CELP scheme.

5.8 Vector quantization (VQ)

Vector quantization (VQ) is introduced as a natural extension of the scalar quantization (SQ) concept. However, using multidimensional signals opens the way to many techniques and applications that are not found in the scalar case [7, 8]. The basic concept is that of associating with an input vector s = [s1, ..., sN]^T, generic sample of a vector random process s(k), a reproduction vector sq = Q[s], chosen from a finite set of L elements (code vectors) called codebook, A = {Q1, ..., QL}, so that a given distortion measure d(s, Q[s]) is minimized. Figure 5.80 exemplifies the encoder and decoder functions of a VQ scheme. The encoder computes the distortion associated with the representation of the input vector s by each reproduction vector of the codebook A, and decides for the vector Qi that minimizes it.

Figure 5.80. Block diagram of a vector quantizer.
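The encoder and decoder functions of Figure 5.80 reduce to a distance search and a table lookup; a minimal sketch, assuming the squared distortion and a toy real-valued codebook (function names are ours):

```python
def vq_encode(s, codebook):
    """Encoder: index of the code vector minimizing ||s - Q_i||^2
    (full search over the L code vectors)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: d2(s, codebook[i]))

def vq_decode(i, codebook):
    """Decoder: simple table lookup of the received index."""
    return codebook[i]

# Toy codebook in two dimensions (real-valued for simplicity), L = 4:
A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
i = vq_encode((0.9, 0.2), A)
print(i, vq_decode(i, A))   # -> 1 (1.0, 0.0)
```

Note that only the index i travels over the channel, consistent with the observation below that the transmitted information depends on L and not on N.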
5.8.1 Characterization of VQ

Considering the general case of complex-valued signals, a vector quantizer is characterized by:

- Source or input vector s = [s1, s2, ..., sN]^T in C^N, associated with an observation window of a signal. An example of input vector s is obtained by considering N samples at a time of a speech signal, s(k) = [s(kNTc), ..., s((kN - N + 1)Tc)]^T, or the N LPC coefficients, s(k) = [c1(k), ..., cN(k)]^T.

- Codebook A = {Qi}, i = 1, ..., L, where Qi in C^N is a code vector.

- Distortion measure d(s, Qi).

- Quantization rule (minimum distortion):

    Q: C^N → A,  with Qi = Q[s] if i = arg min_{ℓ=1,...,L} d(s, Qℓ)    (5.225)

Definition 5.1 (Partition of the source space)
The equivalence relation

    Q = {(s1, s2): Q[s1] = Q[s2]}    (5.226)

which associates input vector pairs having the same reproduction vector, identifies a partition R = {R1, ..., RL} of the source space C^N, whose elements are the sets

    Rℓ = {s in C^N: Q[s] = Qℓ},  ℓ = 1, ..., L    (5.227)

In other words, as indicated by (5.227), every subset Rℓ contains all input vectors associated by the quantization rule with the code vector Qℓ. The sets {Rℓ} are called Voronoi regions. It can be easily demonstrated that the sets {Rℓ} are nonoverlapping and cover the entire space C^N:

    ∪_{ℓ=1}^{L} Rℓ = C^N,  Ri ∩ Rj = ∅, for all i ≠ j    (5.228)

An example of partition for N = 2 and L = 4 is illustrated in Figure 5.81. We note that the information transmitted over the digital channel identifies the code vector Qi: therefore it depends only on the codebook size L and not on N. At the decoder, the vector Qi is associated with the index i received.

Parameters determining VQ performance
We define the following parameters.

- Quantizer rate:

    Rq = log2 L (bit/vector) or (bit/symbol)    (5.229)
Figure 5.81. Partition of the source space C^2 in four subsets or Voronoi regions.

- Rate per dimension:

    RI = Rq/N = (log2 L)/N (bit/sample)    (5.230)

- Rate in bit/s:

    Rb = RI/Tc = (log2 L)/(N Tc) (bit/s)    (5.231)

where Tc denotes the time interval between two consecutive samples of a vector; hence N Tc is the sampling period of the vector sequence {s(k)}.

- Distortion d(s, Qi). The distortion is a nonnegative scalar function of a vector variable,

    d: C^N × A → R+    (5.233)

If the input process s(k) is stationary and the probability density function ps(a) is known, we can compute the mean distortion as

    D(R, A) = E[d(s, Q[s])]    (5.234)
            = Σ_{ℓ=1}^{L} E[d(s, Qℓ) | s in Rℓ] P[s in Rℓ]    (5.235)
            = Σ_{ℓ=1}^{L} d̄ℓ P[s in Rℓ]    (5.236)

where

    d̄ℓ = ∫_{Rℓ} d(a, Qℓ) ps(a) da / ∫_{Rℓ} ps(a) da    (5.237)

If the source is also ergodic, we obtain

    D(R, A) = lim_{K→∞} (1/K) Σ_{k=1}^{K} d(s(k), Q[s(k)])    (5.238)
In practice we always assume that the process {s} is stationary and ergodic, and we use the average distortion (5.238) as an estimate of the expectation (5.234).

Defining Qi = [Qi,1, ..., Qi,N]^T, we give below two measures of distortion of particular interest.

1. Distortion as the ℓν norm to the μ-th power:

    d(s, Qi) = ||s - Qi||ν^μ    (5.239)
             = [Σ_{n=1}^{N} |sn - Qi,n|^ν]^{μ/ν}    (5.240)

The most common version is the squared distortion:¹¹

    d(s, Qi) = ||s - Qi||² = Σ_{n=1}^{N} |sn - Qi,n|²    (5.241)

2. Itakura-Saito distortion:

    d(s, Qi) = (s - Qi)^H Rs (s - Qi) = Σ_{n=1}^{N} Σ_{m=1}^{N} (sn - Qi,n)* [Rs]n,m (sm - Qi,m)    (5.242)

where Rs is the autocorrelation matrix of the vector s*(k), with elements [Rs]n,m, defined in (1.346).

Comparison between VQ and scalar quantization
Defining the mean distortion per dimension as D̃ = D/N, for a given rate RI we find (see [9] and references therein)

    D̃SQ = F(N) M(N) S(N) D̃VQ    (5.243)

¹¹ Although the same symbol is used, the metric defined by (5.241) is the square of the Euclidean distance (1.38).
where:

- F(N) is the space-filling gain. In an N-dimensional space, Ri can be "shaped" very closely to a sphere, whereas in the scalar case the partition regions must necessarily be intervals. F(N) does not depend on ps(a), but only on the norm order N/(N + 2). The asymptotic value for N → ∞ equals F(∞) = 2πe/12 = 1.4233 = 1.53 dB.

- M(N) is the memory gain, defined as¹²

    M(N) = ||ps(a)||_{N/(N+2)} / ||p̃s(a)||_{N/(N+2)}    (5.245)

where p̃s(a) is the probability density function of the input s considered with uncorrelated components. The expression of M(N) depends on the two functions ps(a) and p̃s(a), which differ for the correlation among the various vector components: obviously, if the components of s are statistically independent, M(N) = 1; otherwise M(N) increases as the correlation increases.

- S(N) is the gain related to the shape of ps(a),

    S(N) = ||p̃s(a)||_{1/3} / ||p̃s(a)||_{N/(N+2)}    (5.246)

S(N) does not depend on the variance of the random variables of s. For N → ∞, we obtain

    S(∞) = ||p̃s(a)||_{1/3} / ||p̃s(a)||_{1}    (5.247)

5.8.2 Optimum quantization

Our objective is to design a vector quantizer, choosing the code vectors of the codebook A and the partition R so that the mean distortion given by (5.234) is minimized. Two necessary conditions arise.

Rule A (Optimum partition). Assuming the codebook A = {Q1, ..., QL} fixed, we want to find the optimum partition R that minimizes D(R, A). Observing (5.236), the solution is given by

    Ri = {s: d(s, Qi) = min_{Qℓ in A} d(s, Qℓ)},  i = 1, ..., L    (5.248)

that is, Ri contains all the points s "nearest" to Qi, as illustrated in Figure 5.82.

¹² Extending (5.240) to the continuous case, we obtain

    ||ps(a)||_{μ/ν} = [∫ ··· ∫ ps^ν(a1, ..., aN) da1 ··· daN]^{μ/ν}    (5.244)
Figure 5.82. Example of partition for K = 2 and N = 8.

Rule B (Optimum codebook). Assuming the partition R is given, we want to find the optimum codebook A. By minimizing (5.236), we obtain the solution

    Qi:  E[d(s, Qi) | s in Ri] = min_{Qj in C^N} E[d(s, Qj) | s in Ri]    (5.249)

As a particular case, choosing the squared distortion (5.241), (5.237) becomes

    d̄i = ∫_{Ri} ||a - Qi||² ps(a) da / ∫_{Ri} ps(a) da    (5.250)

and the minimization of (5.250) yields

    Qi = ∫_{Ri} a ps(a) da / ∫_{Ri} ps(a) da    (5.251)

In other words, Qi coincides with the centroid of the region Ri.

Generalized Lloyd algorithm
The generalized Lloyd algorithm, given in Figure 5.83, generates a sequence of suboptimum quantizers specified by {Ri} and {Qi} using the previous two rules. The iteration index is denoted by j.

1. Initialization. We choose an initial codebook A0 and a termination criterion based on a relative error ε between two successive iterations.
2. Using rule A, we determine the optimum partition R[Aj] using the codebook Aj.

3. We evaluate the distortion Dj associated with the choice of Aj and R[Aj], using (5.236).

4. If

    (D_{j-1} - D_j) / D_j < ε    (5.252)

we stop the procedure; otherwise we update the value of j.

5. Using rule B, we evaluate the optimum codebook associated with the partition R[A_{j-1}].

6. We go back to step 2.

Figure 5.83. Generalized Lloyd algorithm for designing a vector quantizer.
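For the squared distortion, the two rules take a particularly simple computational form when the expectations are replaced by averages over a finite set of vectors (the training-sequence viewpoint adopted below for the LBG algorithm). A minimal sketch, with illustrative data and function names of our choosing:

```python
def lloyd(ts, codebook, eps=1e-3, max_iter=100):
    """Generalized Lloyd algorithm for the squared distortion, with
    expectations replaced by averages over the vectors in ts. Alternates
    rule A (optimum partition) and rule B (centroids) until the relative
    decrease of the average distortion falls below eps."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    prev = float("inf")
    dist = 0.0
    for _ in range(max_iter):
        # Rule A: assign each vector to its nearest code vector.
        cells = [[] for _ in codebook]
        dist = 0.0
        for s in ts:
            i = min(range(len(codebook)), key=lambda j: d2(s, codebook[j]))
            cells[i].append(s)
            dist += d2(s, codebook[i])
        dist /= len(ts)
        # Rule B: replace each code vector by the centroid of its cell
        # (empty cells keep their old code vector).
        codebook = [
            tuple(sum(c) / len(cell) for c in zip(*cell)) if cell else q
            for q, cell in zip(codebook, cells)
        ]
        if prev - dist < eps * dist:
            break
        prev = dist
    return codebook, dist
```

On two well-separated clusters, the code vectors converge to the cluster means in a couple of iterations.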
The solution found is at least locally optimum. However, the implementation of this algorithm is difficult for the following reasons.

- The algorithm assumes that ps(a) is known. In the scalar quantization it is possible, in many applications, to develop an appropriate model of ps(a) through the identification of the distribution type, for example Gaussian or Laplacian; but this becomes a more difficult problem with the increase of the number of dimensions N: the identification of the distribution type is no longer sufficient, as we also need to characterize the statistical dependence among the elements of the source vector.

- Also in the particular case (5.251), the calculation of the centroid is difficult for the VQ, because it requires evaluating a multiple integral on the region Ri.

- The computation of the input space partition is much harder for the VQ: whereas in the scalar quantization the partition of the real axis is completely specified by a set of (L - 1) points, in the two-dimensional case the partition is specified by a set of straight lines, and for the multidimensional case finding the optimum solution becomes very hard.

5.8.3 LBG algorithm

An alternative approach led Linde, Buzo, and Gray [10] to consider some very long realizations of the input signal and to substitute (5.234) with (5.238), for K sufficiently large. The algorithm is clearly a generalization of the Lloyd algorithm given in Section 5.2: the only difference is that the vector version begins with a codebook (alphabet) rather than with an initial partition of the input space. Given that the number of locally optimum codes can be rather large, and that some of the locally optimum codes may give rather poor performance, it is often advantageous to provide a good codebook to the algorithm to start with, as well as trying different initial codebooks.
The sequence used to design the VQ is called training sequence (TS) and is composed of K vectors

    {s(m)},  m = 1, ..., K    (5.253)

The average distortion is now given by

    D = (1/K) Σ_{k=1}^{K} d(s(k), Q[s(k)])    (5.254)

and the two rules to minimize D become:

Rule A

    Ri = {s(k): d(s(k), Qi) = min_{Qℓ in A} d(s(k), Qℓ)},  i = 1, ..., L    (5.255)

that is, Ri is given by all the elements {s(k)} of the TS nearest to Qi.
Rule B

    Qi = arg min_{Qj in C^N} (1/mi) Σ_{s(k) in Ri} d(s(k), Qj)    (5.256)

where mi is the number of elements of the TS that are inside Ri. However, for the squared distortion (5.241) we have

    d(s(k), Qi) = Σ_{n=1}^{N} |sn(k) - Qi,n|²    (5.257)

and (5.256) simply becomes

    Qi = (1/mi) Σ_{s(k) in Ri} s(k)    (5.258)

that is, Qi coincides with the arithmetic mean of the TS vectors that are inside Ri.

Before discussing the details, it is worthwhile pointing out some aspects of this new algorithm.

- It does not require any stationarity assumption.
- The partition is determined without requiring the computation of expectations over C^N.
- The computation of Qi in (5.256) is still burdensome; however, using the structure of the Lloyd algorithm with the new cost function (5.254) and the two new rules (5.255) and (5.256), we arrive at the LBG algorithm.
- It converges to a minimum, which is not guaranteed to be a global minimum and generally depends on the choice of the TS.

Choice of the initial codebook
With respect to the choice of the initial codebook, the first L vectors of the TS can be used; however, if the data are highly correlated, it is necessary to use L vectors that are sufficiently spaced in time from each other. A more effective alternative is that of taking as initial value the centroid of the TS, starting with a codebook with a number of elements L = 1. At convergence, slightly changing the components of this code vector (splitting procedure), we derive two code vectors and an initial alphabet with L = 2; at this point, using the LBG algorithm, we determine the optimum VQ for L = 2. At convergence, each optimum code vector is changed to obtain two code vectors, and the LBG algorithm is used for L = 4. Iteratively, the splitting procedure and the optimization are repeated until the desired number of elements for the codebook is obtained.
Let Aj = {Q1, ..., QL} be the codebook at the j-th iteration. The splitting procedure generates 2L N-dimensional vectors, yielding the new codebook

    A_{j+1} = {A_j^-} ∪ {A_j^+}    (5.259)

where

    A_j^- = {Qi - ε^-},  i = 1, ..., L    (5.260)
    A_j^+ = {Qi + ε^+},  i = 1, ..., L    (5.261)

Typically,

    ε^- = 0    (5.262)

and

    ε^+ = (1/10) sqrt(Ms/N) 1    (5.263)

so that

    ||ε^+||² ≤ 0.01 Ms    (5.264)

where Ms is the power of the TS and 1 = [1, ..., 1]^T.

Description of the LBG algorithm with splitting procedure
Choosing ε > 0 (typically ε = 10^-3) and an initial alphabet given by the splitting procedure applied to the average of the TS, we obtain the LBG algorithm, whose block diagram is shown in Figure 5.84, whereas its operations are depicted in Figure 5.85.

Selection of the training sequence
A rather important problem associated with the use of a TS is that of empty cells: it is in fact possible that some regions Ri contain few or no elements of the TS; in this case, the code vectors associated with these regions contribute little or nothing at all to the reduction of the total distortion. Possible causes of this phenomenon are:

- TS too short: the training sequence must be sufficiently long, so that every region Ri contains at least 30-40 vectors;

- poor choice of the initial alphabet: in this case, in addition to the obvious solution of modifying this choice, we can limit the problem through the following splitting procedure. Let Ri be a region that contains mi < m_min elements: we eliminate the code vector Qi from the codebook, apply the splitting procedure limited only to the region that gives the largest contribution to the distortion, then compute the new partition and proceed in the usual way.
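The splitting step (5.259)-(5.263) amounts to a few lines of code; in this sketch (function name ours, plain tuples for vectors) ε^- = 0 and ε^+ = (1/10) sqrt(Ms/N) [1, ..., 1], as above:

```python
import math

def split_codebook(codebook, ms, n):
    """Splitting procedure: each code vector Q_i generates the pair
    {Q_i + eps_minus, Q_i + eps_plus}, doubling the codebook size.
    Here eps_minus = 0 and eps_plus = (1/10)*sqrt(ms/n)*[1,...,1],
    so that ||eps_plus||^2 = 0.01*ms (ms: power of the TS)."""
    e = 0.1 * math.sqrt(ms / n)                  # per-component perturbation
    doubled = []
    for q in codebook:
        doubled.append(tuple(q))                 # Q_i + eps_minus
        doubled.append(tuple(x + e for x in q))  # Q_i + eps_plus
    return doubled

# Starting from L = 1 (the centroid of the TS), alternating this splitting
# with LBG optimization yields codebooks with L = 2, 4, 8, ...
```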
Figure 5.84. LBG algorithm with splitting procedure.

We give some practical rules,¹³ taken from [3] for LPC applications, that can be useful in the design of a vector quantizer; they can be considered valid in the case of strongly correlated vectors. We recall that K is the number of vectors of the TS, and L is the number of code vectors.

- If K/L ≤ 30, an appreciable difference between the distortion calculated with the TS and that calculated with a new sequence may exist: it may in fact happen that, for a very short TS, the distortion computed for vectors of the TS is very small; the extreme case is obtained by setting K = L, hence D = 0. In this situation, for a sequence different from the TS (outside TS) the distortion is in general very high.

- If K/L ≤ 600, there is still a possibility of empty regions.

¹³ These rules were derived in the VQ of LPC vectors.
As a matter of fact, only if K is large enough does the TS adequately represent the input process, so that no substantial difference appears between the distortion measured with vectors inside or outside the TS [10], as illustrated in Figure 5.86.¹⁴

We consider, for example, as vector source the LPC coefficients with N = 10, computed over windows of duration equal to 20 ms of a speech signal sampled at 8 kHz. Taking L = 256, we have a rate Rb = 8 bit/20 ms, equal to 400 bit/s. Finally, we find that the LBG algorithm, even though very simple, requires numerous computations.

Figure 5.85. Operations of the LBG algorithm with splitting procedure.

Figure 5.86. Values of the distortion as a function of the number of vectors K in the inside and outside training sequences.

¹⁴ This situation is similar to that obtained by the LS method (see Section 3.2).
For the codebook of this example, the LBG algorithm requires a minimum K = 600 × 256, that is about 155000 vectors for the TS, which roughly corresponds to three minutes of speech.

5.8.4 Variants of VQ

Tree search VQ
A random VQ, whose code vectors are determined according to the LBG algorithm, requires:

- a large memory to store the codebook;
- a large computational complexity, to evaluate the L distances for encoding.

A variant of VQ that requires a lower computational complexity, at the expense of a larger memory, is the tree search VQ. Whereas in the memoryless VQ case the comparison of the input vector s must occur with all the elements of the codebook, thus determining a full search, in the tree search VQ we proceed by levels: first, we compare s with Q_A1 and Q_A2; then we proceed along the branch whose node has a representative vector "closest" to s, as illustrated in Figure 5.87. To determine the code vectors at the different nodes, for a binary tree the procedure consists of the following steps.

1. Calculate the optimum quantizer for the first level by the LBG algorithm; in other words, the codebook contains 2 code vectors.

2. Divide the training sequence into subsequences relative to every node of level n (n = 2, ..., N_LEV, with N_LEV = log2 L); in other words, collect all vectors that are associated with the same code vector.

3. Apply the LBG algorithm to every subsequence to calculate the codebook of level n.

Figure 5.87. Comparison between full search and tree search.
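The complexity figures for full search versus binary tree search follow directly from L = 2^Rq: the full search evaluates all L distances, whereas the tree evaluates 2 distances per level over N_LEV = Rq levels, at the cost of storing roughly twice as many vectors. A quick check (function name ours):

```python
def search_cost(rq):
    """Distance computations and stored vectors for a codebook of
    L = 2**rq code vectors: full search vs. binary tree search."""
    full = (2 ** rq, 2 ** rq)                            # (computations, memory)
    tree = (2 * rq, sum(2 ** i for i in range(1, rq + 1)))
    return full, tree

print(search_cost(10))   # -> ((1024, 1024), (20, 2046))
```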
As an example, for Rq = 10 bit/vector, the memory requirements and the number of computations of d(.,.) are shown in Table 5.15 for the cases of full search and tree search. Although the performance is slightly lower, the computational complexity of the encoding scheme for a tree search is considerably reduced.

Table 5.15 Comparison between full search and tree search.

                          Computations of d(.,.)    Number of vectors to memorize
full search               2^Rq                      2^Rq
tree search               2 Rq                      sum_{i=1}^{Rq} 2^i ~ 2^(Rq+1)

for Rq = 10 (bit/vector)
full search               1024                      1024
tree search               20                        2046

Multistage VQ
The multistage VQ technique presents the advantage of reducing both the encoder computational complexity and the memory required. The idea consists in dividing the encoding procedure into successive stages, as illustrated in Figure 5.88 for two stages: the first stage performs quantization with a codebook with a reduced number of elements; successively, the second stage performs quantization of the error vector e = s - Q[s], and the quantized error gives a more accurate representation of the input vector. A third stage could be used to quantize the error of the second stage, and so on.

We compare the complexity of a one-stage scheme with that of a two-stage scheme. Let Rq = log2 L be the rate in bit/vector for both systems, and assume that all the code vectors have the same dimension N = N1 = N2, hence L1 L2 = L.
- One-stage: Rq = log2 L. Computations of d(.,.) for encoding: L. Memory: L locations.
- Two-stage: Rq = log2 L1 + log2 L2. Computations of d(.,.) for encoding: L1 + L2. Memory: L1 + L2 locations.

The advantage of a multistage approach in terms of cost of implementation is evident; however, it has lower performance than a one-stage VQ.

Product code VQ
The input vector is split into subvectors that are quantized independently, as illustrated in Figure 5.89.
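The entries of Table 5.15 can be checked numerically. A small sketch (function name and dictionary layout are mine, not the text's):

```python
def vq_complexity(Rq):
    """Distance computations and stored vectors for full search vs
    binary tree search, following Table 5.15."""
    full = {"distances": 2 ** Rq, "memory": 2 ** Rq}
    tree = {
        "distances": 2 * Rq,
        # codebooks of all levels: 2 + 4 + ... + 2^Rq = 2^(Rq+1) - 2
        "memory": sum(2 ** i for i in range(1, Rq + 1)),
    }
    return full, tree

full, tree = vq_complexity(10)
# full search: 1024 distances and 1024 stored vectors;
# tree search: 20 distances and 2046 stored vectors.
```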
Figure 5.88. Multistage (two-stage) VQ.
Figure 5.89. Product code VQ.
This technique is useful if (a) there are input vector components that can be encoded separately because of their different effects, e.g., gain, prediction gain and LPC coefficients, or (b) the input vector has too large a dimension to be encoded directly. It presents the disadvantage that it does not consider the correlation that may exist between the various subvectors, which could bring about a greater coding efficiency. A more general approach is the sequential search product code, where the quantization of subvector n depends also on the quantization of previous subvectors. For further details we refer the reader to [3, 6, 7].

With reference to Figure 5.89, assuming L = L1 L2 and N = N1 + N2, we note that the rate per dimension for the VQ is given by

   Rq = (log2 L)/N = (log2 L1)/N + (log2 L2)/N                (5.265)

whereas for the product code VQ it is given by

   Rq = (log2 L1)/N1 + (log2 L2)/N2                           (5.266)

5.9 Other coding techniques
We briefly discuss two other coding techniques along with the perceptive aspects related to the hearing apparatus.

Figure 5.90. Block diagram of the ATC.
Adaptive transform coding (ATC)
The ATC takes advantage of the nonuniform energy distribution of a signal in some transformed domain, using for example the DFT or the DCT (see Sections 3.5.3 and 3.5.4). The basic scheme is illustrated in Figure 5.90 and includes a quantizer that adapts to the different inputs {S(m)}.

Subband coding (SBC)
The SBC exploits the same principle as the ATC, but it operates in the time domain by using a filter bank (see Figure 5.91).

Figure 5.91. Block diagram of the SBC.

5.10 Source coding
We briefly mention an important topic, namely source coding, which is used to "compress" digital information messages. In fact, a discrete-time, discrete-valued source signal can be encoded with a lower average bit rate by means of entropy coding [4], which assigns code words of variable lengths to possible input patterns: to highly probable input patterns are assigned shorter code words, and vice versa. We cite the Lempel-Ziv algorithm as one of the most common source coding algorithms.
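As an illustration of the Lempel-Ziv idea, the following is a minimal LZ78-style incremental parser, one member of the Lempel-Ziv family; it is a sketch for intuition, not the specific variant or implementation the text refers to:

```python
def lz78_parse(msg):
    """LZ78-style parsing: each output pair is (index of the longest
    previously seen phrase, next symbol). Recurring patterns yield
    longer and longer phrases, hence compression."""
    dictionary = {"": 0}
    phrase, out = "", []
    for ch in msg:
        if phrase + ch in dictionary:
            phrase += ch          # extend the current phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                    # flush a trailing partial phrase
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

# "ABABABA" parses into 4 phrases: A, B, AB, ABA
# -> [(0, 'A'), (0, 'B'), (1, 'B'), (3, 'A')]
```

Note how no probability model is needed: the dictionary adapts to whatever patterns actually occur, which is what makes this family of algorithms universal.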
5.11 Speech and audio standards
We conclude this chapter by giving in Table 5.16 a partial list of the various standards to code audio and speech signals [11]. The first nine are for narrow-band speech applications (see Table 5.1). It is interesting to observe the various standards that adopt CELP coding, listed in Table 5.17: we notice that most of them are for cellular radio applications. It is also interesting to compare the various standards to code video signals, given in Table 5.18, with those of Table 5.16 for speech and audio.

Table 5.16 Main standards for audio and speech coding.

 1  G.711            PCM at 64 kbit/s
 2  G.721            ADPCM at 32 kbit/s
 3  G.723            ADPCM at 24 and 40 kbit/s
 4  G.726            ADPCM at 40, 32, 24 and 16 kbit/s
 5  G.727            embedded ADPCM at 40, 32, 24 and 16 kbit/s ("embedded" means that a code also includes those of lower rate)
 6  G.728            LD-CELP at 16 kbit/s (LD stands for low delay)
 7  G.729            CS-ACELP at 8 kbit/s
 8  G.729 Annex A    CS-ACELP at 8 kbit/s with reduced complexity
 9  G.723.1          MP-CMLQ at 6.3 kbit/s; there is also a 5.3 kbit/s version
10  G.722            SBC+ADPCM for wide-band speech at 64, 56 and 48 kbit/s
11  IS54 (TIA)       VSELP at 7.95 kbit/s (VSELP stands for vector sum excited linear prediction)
12  FS1015 (LPC10E)  LPC at 2.4 kbit/s
13  FS1016           CELP at 4.8 kbit/s
14  GSM-FR           RPE-LTP at 13 kbit/s (LTP stands for long-term prediction)
15  MPEG1, Layer I   SBC at 192 kbit/s per audio channel (stereo) [generally 32-448 kbit/s total]
16  MPEG1, Layer II  SBC at 128 kbit/s per audio channel [generally 32-384 kbit/s total]
17  MPEG1, Layer III SBC+MDCT+Huffman coding at 96 kbit/s per audio channel [generally 32-320 kbit/s total]
18  MPEG2, AAC       SBC+MDCT coding at 64 kbit/s per audio channel

In G.722, for example, a SBC scheme having two bands, 0-4 kHz and 4-8 kHz, is used; in each band there is a G.721 encoder, and bit allocation in the two bands is dynamic (5+3 or 6+2).
Table 5.17 Main standards based on CELP.

Standard   Body         Bit rate                       Application
G.728      ITU          16 kbit/s                      coding of audio in multimedia systems
G.729      ITU          8 kbit/s                       coding of audio in multimedia systems
G.723.1    ITU          5.3 and 6.3 kbit/s             encoding of speech and data
IS54       TIA          7.95 kbit/s                    full rate for North America cellular systems based on D-AMPS
IS95       TIA          1.2, 2.4, 4.8 and 9.6 kbit/s   coding for North America cellular systems based on CDMA
US1        TIA/ETSI     12.2 kbit/s                    enhanced full rate for GSM
GSM-HR     ETSI         5.6 kbit/s                     half rate for GSM
FS1016     U.S. (DoD)   4.8 kbit/s                     secure voice communications
MELP       U.S. (DoD)   2.4 kbit/s                     secure voice communications

Table 5.18 Bit rates for video standards.

Application                           Target bit rate
ISDN Video Telephone                  64-128 kbit/s
ISDN Video Conferencing               128 kbit/s
MPEG1 CD-Rom Video                    1.5 Mbit/s
MPEG2 TV (Broadcast Quality)          6 Mbit/s
HDTV (Broadcast Quality)              24 Mbit/s
TV (Studio Quality, Uncompressed)     216 Mbit/s
TV (Studio Quality, Compressed)       34 Mbit/s
HDTV (Studio Quality, Uncompressed)   1 Gbit/s
HDTV (Studio Quality, Compressed)     140 Mbit/s

Bibliography

[1] L. R. Rabiner and R. W. Schafer, Digital processing of speech signals. Englewood Cliffs, NJ: Prentice-Hall, 1978.
[2] IEEE Signal Processing Magazine, vol. 14, Sept. 1997.
[3] D. Sereno and P. Valocchi, Codifica numerica del segnale audio. L'Aquila: Scuola Superiore G. Reiss Romoli, 1996.
[4] N. S. Jayant and P. Noll, Digital coding of waveforms. Englewood Cliffs, NJ: Prentice-Hall, 1984.
[5] B. S. Atal and J. R. Remde, "A new model of LPC excitation for producing natural-sounding speech at low bit rates", in Proc. ICASSP, 1982, pp. 614–617.
[6] B. S. Atal, V. Cuperman, and A. Gersho, eds., Advances in speech coding. Boston, MA: Kluwer Academic Publishers, 1991.
[7] A. Gersho and R. M. Gray, Vector quantization and signal compression. Boston, MA: Kluwer Academic Publishers, 1992.
[8] R. M. Gray, "Vector quantization", IEEE ASSP Magazine, vol. 1, pp. 4–29, Apr. 1984.
[9] T. Lookabaugh and R. M. Gray, "High-resolution quantization theory and the vector quantizer advantage", IEEE Trans. on Information Theory, vol. 35, pp. 1020–1033, Sept. 1989.
[10] Y. Linde, A. Buzo, and R. M. Gray, "An algorithm for vector quantizer design", IEEE Trans. on Communications, vol. 28, pp. 84–95, Jan. 1980.
[11] IEEE Communication Magazine, vol. 35, Sept. 1997.
Chapter 6

Modulation theory

The term modulation indicates the process of translating the information generated by a source into a signal that is suitable for transmission over a physical channel. In the case of digital transmission, the information is represented by a sequence of binary data (bits) generated by the source, or by a digital encoder of analog signals (see Chapter 5). The mapping of a digital sequence to a signal is called digital modulation, and the device that performs the mapping is called a digital modulator. A modulator may employ a set of M = 2 waveforms to generate a signal (binary modulation) or, in general, M >= 2 waveforms (M-ary modulation).

The transmission medium determines the channel characteristics; it may consist, for example, of a twisted-pair cable, a coaxial cable, an optical fiber, a radio link, an infrared link, or a combination of them. In any case the channel modifies the transmitted waveform by introducing, for example, distortion, interference, and noise. In this chapter we assume that the channel introduces only additive white Gaussian noise (AWGN), as discussed in Chapter 4; we postpone the study of other effects to the next chapters; only in Section 6.12 will we give some simple results for a channel affected by flat fading.

The task of the receiver is to detect, based on the received signal, which signal was transmitted. Referring to detection theory, and using the vector representation of signals discussed in Section 1.2, in this chapter we will introduce the optimum receiver and present a survey of the main modulation techniques: PAM, PSK, QAM, orthogonal, and biorthogonal. The performance of each modulation-demodulation method is evaluated with reference to the bit error probability, and a comparison of the various methods is given in terms of spectral efficiency and required transmission bandwidth. The transmission rates achievable by the various modulation methods over a specific channel for a given target error probability are then compared with the Shannon bound, which indicates the maximum rate that can be achieved for reliable transmission.

6.1 Theory of optimum detection

We consider first the transmission of an isolated pulse. With reference to the system illustrated in Figure 6.1, the transmitter generates randomly one of M real-valued waveforms sn(t), n = 1, ...,
M, and sends it over the channel; the waveform is corrupted by real-valued additive white Gaussian noise w having zero mean and spectral density N0/2. The variable a0 in Figure 6.1 is modeled as a discrete r.v. with values in {1, ..., M}, and represents the index, or symbol, of the transmitted signal.

Figure 6.1. Model of the transmission system.

The receiver, based on r(t), must decide which among the M hypotheses

   Hn :  r(t) = sn(t) + w(t)     t in R,   n = 1, ..., M                  (6.1)

is the most probable, and correspondingly must select the detected value a0^ [1]. We represent time-continuous signals using the vector notation introduced in Section 1.2; the theory exposed in this section can be immediately extended to the case of complex-valued signals.

Let {phi_i(t)}, i = 1, ..., I, be a complete basis for the M signals {sm(t)}, m = 1, ..., M, and let sm be the vector representation of sm(t):

   sm = [sm1, ..., smI]^T                                                 (6.2)

where

   smi = <sm, phi_i> = integral of sm(t) phi_i*(t) dt,   i = 1, ..., I    (6.3)

Recall that the set {sm}, m = 1, ..., M, is also called the system constellation. Assuming that the waveform with index m is transmitted, that is a0 = m, the received, or observed, signal is given by

   r(t) = sm(t) + w(t)                                                    (6.4)

The basis of I functions may be incomplete for the representation of the noise signal w(t); in any case, we express the noise as

   w(t) = w_par(t) + w_perp(t)                                            (6.5)

where

   w_par(t) = sum_{i=1}^{I} wi phi_i(t)                                   (6.6)

with

   wi = <w, phi_i> = integral of w(t) phi_i*(t) dt,   i = 1, ..., I       (6.7)

so that r(t) = sm(t) + w_par(t) + w_perp(t). In other words, w_par is the component of w that lies in the space spanned by the basis {phi_i(t)}, whereas w_perp is the error due to this representation.
Since

   w_perp(t) = w(t) - w_par(t)                                           (6.8)

is orthogonal to {phi_i(t)}, i = 1, ..., I, we can say that w_perp lies outside of the desired signal space and, as we will state later using the theorem of irrelevance, it can be ignored because it is irrelevant for the detection. As {wi}, i = 1, ..., I, are jointly Gaussian uncorrelated random variables with zero mean (as verified below), they are statistically independent with equal variance given by

   sigma_I^2 = N0/2                                                      (6.9)

Statistics of the random variables {wi}
1. Mean:

   E[wi] = integral of E[w(t)] phi_i*(t) dt = 0,   i = 1, ..., I         (6.10)

as w has zero mean.
2. {wi}, i = 1, ..., I, are jointly Gaussian random variables, as they are linear transformations of a Gaussian process (see (6.7)).
3. Correlation: as w(t) is white noise,

   E[w(t1) w*(t2)] = (N0/2) delta(t1 - t2)                               (6.11)

and

   E[wi wj*] = double integral of E[w(t1) w*(t2)] phi_i*(t1) phi_j(t2) dt1 dt2
             = (N0/2) integral of phi_i*(t) phi_j(t) dt
             = (N0/2) delta_ij,    i, j = 1, ..., I                      (6.12)

because of the orthogonality of the basis {phi_i(t)}. Hence the components {wi} are uncorrelated.

The vector representation of the component of the noise signal that lies in the span of {phi_i(t)}, i = 1, ..., I, is given by

   w = [w1, ..., wI]^T                                                   (6.13)
Sufficient statistics
Defining

   r = [r1, ..., rI]^T    with   ri = <r, phi_i>                         (6.14)

the components of the vector r are called sufficient statistics(1) to decide among the M hypotheses. Therefore we get the formulation, equivalent to (6.1),

   Hn :  r = sn + w     n = 1, ..., M                                    (6.15)

From the above results, under the hypothesis that waveform n is transmitted, the probability density function of r is given by(2)

   p_{r|a0}(rho | n) = (1/sqrt(2 pi (N0/2)))^I exp( -(1/N0) ||rho - sn||^2 ),   rho in R^I    (6.16)

Decision criterion
We subdivide the space R^I of the received signal r into M non-overlapping regions Rn (the union of the Rn, n = 1, ..., M, is R^I, and Rn and Rm are disjoint for n different from m). Then we adopt the following decision rule:

   choose Hn (and a0^ = n)   if  r in Rn                                 (6.17)

The choice of the M regions is made so that the probability of a correct decision is maximum. Let pn = P[a0 = n] be the transmission probability, or a priori probability, of the waveform n. Recalling the total probability theorem, the probability of correct decision is given by

   P[C] = P[a0^ = a0] = sum_{n=1}^{M} P[a0^ = n | a0 = n] P[a0 = n]
        = sum_{n=1}^{M} pn P[r in Rn | a0 = n]
        = sum_{n=1}^{M} integral over Rn of pn p_{r|a0}(rho | n) drho    (6.18)

(1) In general the notion of sufficient statistics applies to any signal, or sequence of samples, that allows the optimum detection of the desired signal: given a desired signal corrupted by noise, no information is lost in considering a set of sufficient statistics instead of the received signal. A particular case is represented by transformations that allow reconstruction of a signal using the basis identified by the desired signal. For example, considering the basis {e^{+-j 2 pi f t}, t in R, f in B} to represent a real-valued signal with passband B in the presence of additive noise, we are able to reconstruct the noisy signal within the passband of the desired signal, that is the Fourier transform of the noisy signal filtered by an ideal filter with passband B; therefore the noisy signal filtered by a filter with passband B is a sufficient statistic.
(2) Here we use the formulation (1.377) for real-valued signals; we would get the same results using the formulation (1.380) for complex-valued signals.
We define the indicator function of the set Rn as

   In = 1 if rho in Rn,   In = 0 elsewhere                               (6.19)

Then (6.18) becomes

   P[C] = integral over R^I of sum_{n=1}^{M} In pn p_{r|a0}(rho | n) drho    (6.20)

The integrand function consists of M terms but, the M regions being non-overlapping, for each value of rho only one of the terms is different from zero. Therefore the maximum value of the integrand function for each value of rho, and hence of the integral, is achieved if for each value of rho we select among the M terms the term that yields the maximum value of pn p_{r|a0}(rho | n). Thus we have the following decision criterion.

Maximum a posteriori probability (MAP) criterion:

   rho in Rm    if  m = arg max_n pn p_{r|a0}(rho | n)                   (6.21)

where arg max_n f(x, n)(3) denotes the value of m that coincides with the value of n for which the function f(x, n) is maximum for a given x; if two or more values of n that maximize f(x, n) exist, a random choice is made to determine m. Using Bayes' rule,

   pn p_{r|a0}(rho | n) = P[a0 = n | r = rho] p_r(rho)                   (6.22)

the decision criterion becomes

   a0^ = arg max_n P[a0 = n | r = rho]                                   (6.23)

The probabilities P[a0 = n | r = rho], n = 1, ..., M, are the a posteriori probabilities. In other words, the signal detected by

   a0^ = m    if  m = arg max_n pn p_{r|a0}(rho | n)                     (6.24)

has, given that we observe rho, the largest probability of having been transmitted.

We give a simple example of application of the MAP criterion for I = 1 and M = 3. Let the functions pn p_{r|a0}(rho | n), n = 1, 2, 3, be given as shown in Figure 6.2. If we indicate with tau1, tau2, and tau3 the intersection points of the various functions as illustrated in Figure 6.2, it is easy to verify that

   R1 = (-inf, tau1]     R2 = (tau1, tau2] U (tau3, +inf)     R3 = (tau2, tau3]    (6.25)

(3) arg means argument. For a complex number c, arg(c) denotes the phase of c.
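The MAP rule (6.21) is easy to evaluate numerically. The sketch below applies it in one dimension (I = 1) with Gaussian likelihoods of variance N0/2, as in (6.16); the function name and the illustrative means and priors are mine, not taken from the text:

```python
import math

def map_detect(rho, means, priors, N0=1.0):
    """MAP decision for I = 1: pick the n maximizing
    p_n * p_{r|a0}(rho | n), with Gaussian likelihoods of variance N0/2.
    Symbols are numbered 1..M as in the text."""
    def metric(n):
        # Common factor 1/sqrt(pi*N0) omitted: it does not affect argmax.
        return priors[n] * math.exp(-((rho - means[n]) ** 2) / N0)
    return max(range(len(means)), key=metric) + 1

# With equal priors the rule reduces to ML, i.e. the nearest mean:
assert map_detect(0.9, [-1.0, 0.0, 1.0], [1/3, 1/3, 1/3]) == 3
```

With unequal priors the thresholds shift toward the less probable signals, which is exactly how the intersection points tau1, tau2, tau3 of Figure 6.2 arise.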
Figure 6.2. Illustration of the MAP criterion: functions pn p_{r|a0}(rho | n), n = 1, 2, 3, with intersection points tau1, tau2, tau3.

If the signals are equally likely a priori, pn = 1/M for all n, the criterion (6.21) becomes the following.

Maximum likelihood (ML) criterion:

   rho in Rm    if  m = arg max_n p_{r|a0}(rho | n)                      (6.26)

The ML criterion leads to choosing that value of n for which the conditional probability density that r = rho is observed, given a0 = n, is maximum. In some texts the ML criterion is formulated via the definition of the likelihood ratios

   Ln(rho) = p_{r|a0}(rho | n) / p_{r|a0}(rho | 1),   n = 1, ..., M      (6.27)

In this case the ML criterion becomes

   a0^ = m    if  m = arg max_n Ln(rho)                                  (6.28)

From (6.26), observing (6.16), we get

   a0^ = arg max_n exp( -(1/N0) ||rho - sn||^2 )                         (6.29)

Taking the logarithm, which is a monotonic function, we obtain

   a0^ = arg min_n ||rho - sn||^2                                        (6.30)

Hence the ML criterion coincides with the minimum distance criterion: "decide for the signal vector sm which is closest to the received signal vector rho". Moreover, observing (6.26), the decision regions {Rn} are easily determined. Considering a pair of vectors si, sj, we draw the straight line of points that are equidistant from si and sj: this straight line defines the boundary between Ri and Rj. The procedure then is repeated for every pair of vectors. An example is given in Figure 6.3 for the three signals of Example 1.2 on page 10: the decision region associated with each vector sn is given by the intersection of two half-planes.

Figure 6.3. Construction of decision regions for the constellation of Example 1.2.
Theorem of irrelevance
With regard to the decision process, we introduce a theorem that formalizes the distinction previously mentioned between relevant and irrelevant components of the received signal. Let us assume that the signal vector r can be split into two parts, r = [r1, r2].

Theorem 6.1
If p_{r2|r1,a0}(rho2 | rho1, n) does not depend on the particular value n assumed by a0, that is if

   p_{r2|r1,a0}(rho2 | rho1, n) = p_{r2|r1}(rho2 | rho1)                 (6.31)

then the optimum receiver can disregard the component r2 and base its decision only on the component r1.
In fact, recalling the definition of conditional probability, under the hypothesis a0 = n,

   p_{r|a0}(rho | n) = p_{r1,r2|a0}(rho1, rho2 | n)                      (6.32)

which can be rewritten as

   p_{r1,r2|a0}(rho1, rho2 | n) = p_{r2|r1,a0}(rho2 | rho1, n) p_{r1|a0}(rho1 | n)    (6.33)

Substitution of (6.31) into (6.33), and then into (6.22), shows that the factor p_{r2|r1}(rho2 | rho1) does not affect the maximization over n, which leads to the following result.

Corollary 6.1.1
A sufficient condition to disregard r2 is that

   p_{r2|r1,a0}(rho2 | rho1, n) = p_{r2}(rho2)

that is, that r2 be statistically independent of r1 and of a0. We illustrate the theorem by the following examples.

Example 6.1.1
The system (6.2) is represented using a larger basis, as illustrated in Figure 6.4, where the noise (6.5) has the two components

   w1 = w_par                                                            (6.34)
   w2 = w_perp                                                           (6.35)

Under this condition, the component r2 = w_perp can be disregarded by the optimum receiver.

Figure 6.4. Example 6.1.1: the vector r2 is irrelevant.
Example 6.1.2
In the system shown in Figure 6.5, r1 = w1 + sn and r2 = w2, where the noise vectors w1 and w2 are statistically independent. We note that the received signal vector r2 coincides with the noise vector w2, which is statistically independent of w1 and of sn. Therefore we have

   p_{r2|r1,a0}(rho2 | rho1, n) = p_{w2}(rho2)                           (6.36)

hence, by Corollary 6.1.1, the component r2 can be disregarded by the optimum receiver.

Figure 6.5. Example 6.1.2: the vector r2 is irrelevant.

Example 6.1.3
As in the previous example, the noise vectors w1 and w2 are statistically independent; however, now r2 = w2 + w1. In this case r2 cannot be disregarded by the optimum receiver: in fact, from r2 = w2 + w1 and w1 = r1 - sn, we get

   p_{r2|r1,a0}(rho2 | rho1, n) = p_{w2}(rho2 - rho1 + sn)               (6.37)

which depends explicitly on n.

Figure 6.6. Example 6.1.3: the vector r2 is relevant.
We note that Example 6.1.1 is a particular case of Example 6.1.2.

Example 6.1.4
In Figure 6.7, r2 = r1 + w2, where the noise vectors w1 and w2 are statistically independent. If r1 is known, then r2 depends only on the noise w2, which is independent of the particular sn transmitted:

   p_{r2|r1,a0}(rho2 | rho1, n) = p_{w2}(rho2 - rho1)                    (6.39)

which does not depend on n: therefore (6.31) is verified and the signal r2 can be neglected by the optimum receiver.

Figure 6.7. Example 6.1.4: the vector r2 is irrelevant.

Implementations of the maximum likelihood criterion
We give now two implementations of the ML criterion, assuming that the signals {sm(t)} and {phi_i(t)} have finite duration in the interval (0, t0).

Implementation type 1. As illustrated in Figure 6.8, there are two fundamental blocks: the first determines the I components of the vector r, and the second computes the M distances

   Dn = ||r - sn||^2,   n = 1, ..., M                                    (6.40)

and selects the minimum.
The components ri can be obtained by a correlation demodulator. An equivalent implementation is based on the equivalence illustrated in Figure 6.9, where a correlation demodulator is substituted by a matched filter with impulse response phi_i*(t0 - t). In fact, filtering r(t) with this filter yields the output

   yi(t) = integral from t - t0 to t of r(tau) phi_i*(t0 - t + tau) dtau    (6.41)

which, sampled at t = t0, yields

   yi(t0) = integral from 0 to t0 of r(tau) phi_i*(tau) dtau = ri

We note that the filter on branch i of the correlation demodulator has impulse response given by rect((t - t0/2)/t0).

Figure 6.8. Implementation type 1 of the ML criterion.
Figure 6.9. (a) Correlation demodulator and equivalent (b) matched filter (MF) demodulator.
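The equivalence between the correlator and the sampled matched filter in (6.41) can be checked numerically. A small sketch, with an arbitrary unit-energy basis pulse and noise level chosen only for illustration:

```python
import numpy as np

# Discrete-time check that the correlation demodulator and the matched
# filter sampled at t0 = T produce the same statistic r_i.
T = 1.0
t = np.linspace(0, T, 1000, endpoint=False)
dt = t[1] - t[0]
phi = np.sqrt(2 / T) * np.cos(2 * np.pi * 2 * t / T)  # unit-energy pulse

rng = np.random.default_rng(0)
r = 0.7 * phi + 0.1 * rng.standard_normal(t.size)     # received signal

# Correlator: inner product of r with phi.
r_corr = np.sum(r * phi) * dt

# Matched filter h(t) = phi(T - t); its output at t = T is the same
# inner product (same products, summed by the convolution).
h = phi[::-1]
r_mf = np.convolve(r, h)[t.size - 1] * dt
assert abs(r_corr - r_mf) < 1e-9
```

Both statistics recover the transmitted coefficient 0.7 plus a small Gaussian term, which is exactly the model ri = smi + wi of (6.15).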
Implementation type 2. Using (1.18), from

   ||rho - sn||^2 = ||rho||^2 + ||sn||^2 - 2 Re[<rho, sn>]               (6.42)

the ML criterion becomes

   a0^ = arg max_n { Re[<rho, sn>] - En/2 }                              (6.43)

where

   En = integral of |sn(t)|^2 dt                                         (6.44)

is the energy of sn. The equivalent criterion

   a0^ = arg max_n { Re[ integral of rho*(t) sn(t) dt ] - En/2 }         (6.45)

is illustrated in Figure 6.10: on branch n, the received signal is correlated with sn (or filtered by the filter matched to sn and sampled at t0), En/2 is subtracted, and the branch with the largest metric Un determines a0^. Typically, in the applications we have I < M; hence implementation type 1 is more convenient.

Figure 6.10. Implementation type 2 of the ML criterion.

Error probability
In general, the error probability of the system is defined as

   Pe = P[E] = P[a0^ different from a0] = 1 - P[C]                       (6.46)

where P[C] is given by (6.18). Using the total probability theorem, we express the error probability as

   Pe = sum_{n=1}^{M} pn P[r not in Rn | a0 = n]                         (6.47)
We examine the case of two signals, whose vector representation is illustrated in Figure 6.11. Independently of the basis system, from (1.41) the squared distance between the two signals is given by

   d^2 = E1 + E2 - 2 rho sqrt(E1 E2)                                     (6.48)

where

   Ei = integral of |si(t)|^2 dt,   i = 1, 2                             (6.49)

and

   rho = ( integral of s1*(t) s2(t) dt ) / sqrt(E1 E2)                   (6.50)

is the correlation coefficient of the two signals. In the case of two equally likely signals, (6.47) becomes

   Pe = (1/2) { P[r not in R1 | a0 = 1] + P[r not in R2 | a0 = 2] }      (6.51)

For convenience we choose phi_2 parallel to the line joining s1 and s2. We assume that s2 is transmitted. Given the received signal vector r as in Figure 6.11, and assuming rho real, we get a decision error if the noise w = r - s2 has a projection on the line joining s1 and s2 that is smaller than -d/2, which means a0^ = 1. As for all projections on an orthonormal basis, the noise component w2 is Gaussian with zero mean and variance

   sigma_I^2 = N0/2                                                      (6.52)

Figure 6.11. Binary constellation and corresponding decision regions.
Then the conditional error probability is given by

   P[r not in R2 | a0 = 2] = P[w2 < -d/2] = Q(d/(2 sigma_I))             (6.53)

where

   Q(a) = integral from a to +inf of (1/sqrt(2 pi)) e^{-b^2/2} db        (6.54)

is the Gaussian complementary distribution function, whose values are reported in Appendix 6.A. Likewise, it is

   P[r not in R1 | a0 = 1] = P[w2 > d/2] = Q(d/(2 sigma_I))              (6.55)

From (6.51) we obtain

   Pe = Q( d/(2 sigma_I) )                                               (6.56)

Observation 6.1
The error probability depends only on the ratio between the distance of the signals of the constellation at the decision point and the standard deviation per dimension of the noise. For this reason it is useful to define the following signal-to-noise ratio at the decision point:

   ( d/(2 sigma_I) )^2                                                   (6.57)

We will now derive an alternative expression for Pe as a function of the modulator parameters. Substitution of (6.48) and (6.52) in (6.56) yields

   Pe = Q( sqrt( (E1 + E2 - 2 rho sqrt(E1 E2)) / (2 N0) ) )              (6.58)

If E1 = E2 = Es, we get

   Pe = Q( sqrt( Es (1 - rho) / N0 ) )                                   (6.59)

6.1.1 Examples of binary signalling
We let M = 2 and E1 = E2 = Es.
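Equations (6.54) and (6.59) translate directly into a few lines of code, using the standard identity Q(a) = erfc(a/sqrt(2))/2; the operating point of 9.6 dB below is an arbitrary illustrative choice:

```python
import math

def Q(a):
    """Gaussian complementary distribution function (6.54)."""
    return 0.5 * math.erfc(a / math.sqrt(2))

def Pe_binary(Es_over_N0, rho):
    """Pe = Q(sqrt(Es*(1-rho)/N0)) for equal-energy binary signalling (6.59)."""
    return Q(math.sqrt(Es_over_N0 * (1 - rho)))

EsN0 = 10 ** (9.6 / 10)                 # Es/N0 = 9.6 dB, illustrative
pe_antipodal = Pe_binary(EsN0, -1.0)    # rho = -1
pe_orthogonal = Pe_binary(EsN0, 0.0)    # rho = 0
# Orthogonal signalling needs twice the Es/N0 (3 dB more) to match
# antipodal performance:
assert abs(Pe_binary(2 * EsN0, 0.0) - pe_antipodal) < 1e-15
```

The final assertion is the 3 dB gap between the two schemes that will be visible in the curves of Figure 6.17.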
Antipodal signals (rho = -1)
The signal set is composed of two antipodal signals:

   s2(t) = -s1(t) = -s(t)                                                (6.61)

with s(t) defined in (0, T) and Es = integral from 0 to T of s^2(t) dt. The basis has only one element, I = 1:

   phi(t) = s(t)/sqrt(Es)                                                (6.62)

The vector representation of Figure 6.12 follows, where

   s1 = sqrt(Es)       s2 = -sqrt(Es)                                    (6.63)

As rho = -1, (6.59) becomes

   Pe = Q( sqrt(2 Es / N0) )                                             (6.64)

A modulation technique with antipodal signals is binary phase shift keying (2PSK or BPSK):

   s1(t) = A cos(2 pi f0 t + phi0)
   s2(t) = A cos(2 pi f0 t + phi0 + pi) = -s1(t),   0 < t < T            (6.65)

In this case s1(t) and s2(t) are "windowed" versions of sinusoidal signals (see Figure 6.14); if f0 = k/(2T), k integer, or else f0 >> 1/T, then

   Es = E1 = E2 = A^2 T / 2                                              (6.66)

The implementation type 1 of the optimum ML receiver is depicted in Figure 6.13: a0^ = 1 if r > 0, a0^ = 2 if r < 0.

Figure 6.12. Vector representation of antipodal signals.
Figure 6.13. ML receiver for binary antipodal signalling.
Figure 6.14. Plot of s(t) = A cos(2 pi f0 t + phi0), 0 < t < T.

Orthogonal signals (rho = 0)
We consider the two signals

   s1(t) = A cos(2 pi f0 t)      s2(t) = A sin(2 pi f0 t),   0 < t < T   (6.68)

A basis is composed of the signals themselves:

   phi_1(t) = sqrt(2/T) cos(2 pi f0 t)                                   (6.69)
   phi_2(t) = sqrt(2/T) sin(2 pi f0 t),   0 < t < T                      (6.70)

If f0 = k/(2T), k integer, or else f0 >> 1/T, then Es = E1 = E2 = A^2 T / 2 and rho ~ 0.
The vector representation is given in Figure 6.15. As the two vectors s1 and s2 are orthogonal, their distance is sqrt(2 Es). This distance is reduced by a factor sqrt(2) as compared to the case of antipodal signals with the same value of Es. As rho = 0, (6.59) becomes

   Pe = Q( sqrt(Es / N0) )                                               (6.71)

The optimum ML receiver is depicted in Figure 6.16: on branch i the correlation Ui of r(t) with si(t) is computed (equivalently, the output of the filter matched to si is sampled at t = T), and a0^ = 1 if U1 > U2, a0^ = 2 otherwise.

Figure 6.15. Vector representation of orthogonal signals.
Figure 6. given by s1 . From the curves of probability of error versus E s =N0 plotted in Figure 6. Binary FSK We consider the two signals of Figure 6.t/ D A cos. ML receiver for binary orthogonal signalling.18.72) where '0 is an arbitrary phase.17. ^ =2 a 1 2 0 Figure 6.16.17. ^ =1 a 1 2 0 ^ a0 T s* (Tt) 2 U 2 U <U . f 0 C f d /t C '0 / 0<t <T (6. f 0 f d /t C '0 / s2 . Modulation theory T s* (Tt) r(t) 1 U 1 U >U . .t/ D A cos. We examine in detail another binary orthogonal signalling scheme.452 Chapter 6.2³. Error probability as a function of Es =N0 for binary antipodal and orthogonal signalling. we note that for a given Pe we have a loss of 3 dB in E s =N0 for the orthogonal signalling scheme as compared to the antipodal scheme.2³.
Introducing the modulation index h as the ratio between the frequency deviation fd and the Nyquist frequency of the transmission system, equal to 1/(2T), we have

   h = fd / (1/(2T)) = 2 fd T                                            (6.75)

Therefore rho = sinc(2h). As FSK is a binary modulation with equal-energy signals, from (6.59) we have

   Pe = Q( sqrt( Es (1 - rho) / N0 ) )                                   (6.76)

Figure 6.18. Binary FSK signals with f0 = 2/T, fd = 0.3/T and phi0 = 0.
From (6.74) we have rho = 0 for h = 1/2, that is for fd = 1/(4T): in this case we speak of minimum shift keying (MSK).(4) There are other values of h (1, 1.5, ...) that yield rho = 0; however, they imply larger fd, with the consequent requirement of a larger channel bandwidth. From the plot of rho as a function of h illustrated in Figure 6.19, we get the minimum value of rho, rho_min = -0.22, for h = 0.715; in this case

   Pe = Q( sqrt(1.22 Es / N0) )

with a gain of about 0.9 dB in Es/N0 as compared to the case rho = 0, that is a loss of about 2.1 dB with respect to antipodal signalling.

(4) In fact, MSK also requires that the phase of the modulated signal be continuous (see Section 18.5).

6.1.2 Bounds on the probability of error
Let {sn(t)}, n = 1, ..., M, be M equally likely signals, with squared distances

   d^2_{nm} = d^2(sn, sm) = integral of |sn(t) - sm(t)|^2 dt             (6.77)

and

   d^2_min = min over n, m, n different from m, of d^2_{nm}              (6.78)

Upper bound
We assume sm is transmitted. An error event that leads to choosing sn is expressed as

   Enm = { r : d(r, sn) < d(r, sm) }                                     (6.79)
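The behavior of rho = sinc(2h) from (6.74)-(6.75) is easy to verify numerically; the function name below is mine:

```python
import math

def rho_fsk(h):
    """Correlation coefficient of binary FSK: rho = sinc(2h),
    with sinc(x) = sin(pi x)/(pi x)."""
    x = 2 * h
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# h = 1/2 (MSK): the two signals are orthogonal.
assert abs(rho_fsk(0.5)) < 1e-12
# h = 0.715: most negative correlation, about -0.22, which improves
# Pe over orthogonal signalling at the cost of a larger fd.
assert rho_fsk(0.715) < -0.21
```

Scanning rho_fsk over a grid of h reproduces the curve of Figure 6.19, with the first zero at h = 0.5 and the minimum near h = 0.715.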
we have Â Ã dmin Nmin Pe ½ Q (6.86) M 2¦ I In other words. an error occurs if sn is chosen. Therefore. let dmin. then Q.80) In general. an error event E is reduced to considering only a signal at minimum distance.84) 2¦ I Lower bound Given sn . Theory of optimum detection 455 Using (6.83) Since dmin Ä dnm . the probability of the event Enm is given by Ã Â dnm P[Enm ] D Q 2¦ I M X mD1 (6. from (6.6.1.81) On the other hand. given sm .dnm =.n D dmin . For example. the error probability in the case of equally likely signals is Pe D P[E j sm ] 1 M (6. as sm is transmitted.85) Q Pe ½ M nD1 2¦ I We get a looser bound by introducing Nmin . we obtain the following lower bound Ã Â M 1 X dmin. the number of signals fsn . we have Nmin D M.n be the distance of sn from the nearest signal. and Pe ½ Q.n (6.82) P[E j sm ] D P nD1.n6Dm Then an upper bound on Pe is given by Ã Â M M 1 X X dnm Q Pe Ä M mD1 nD1. . we have " # M M [ X Enm Ä P[Enm ] (6.2¦ I //.3 with M D 3 we have r T Ap Ap d23 D T d13 D A T (6. Limiting the evaluation of the error probability to error events that are associated to the nearest signals. n D 1. for the constellation of Figure 6. M. Limiting the equation in (6.n6Dm nD1.2¦ I //. if there is one.n6Dm 2¦ I (6.dmin =.87) d12 D 2 2 2 p Then dmin D A T =2 and Nmin D 3.M 1/Q (6.83) is given by Â Ã dmin Pe Ä .47).t/g whose distance is dmin from the nearest signal: 2 Ä Nmin Ä M. In the particular case of dmin. n 6D m. applying the union bound.dmin =. and a looser bound than (6.2¦ I // Ä Q.53).85) to such signals. : : : .
6.2 Simplified model of a transmission system and definition of binary channel

The theory of Section 6.1 has been investigated assuming that an isolated waveform is transmitted. With reference to Figure 6.20, we discuss now some aspects of a communication system, where the information message consists of a sequence of information bits {b_ℓ} generated at instants ℓT_b. Some bits may be inserted in the message {b_ℓ} to generate an encoded message {c_m}, according to rules that will be investigated in Chapters 11 and 12.⁵ The bits of {c_m} are then mapped to the symbols {a_k}, which assume values in an M-ary alphabet and are generated at instants kT. The value of the generic symbol a_k is then modulated, that is, it is associated with a waveform, according to the scheme of Figure 6.1:

a_k → s_{a_k}(t − kT)    (6.88)

Note that in the system of Figure 6.20 the transmission of a waveform is repeated every symbol period T; therefore the transmitter generates the signal

s(t) = Σ_{k=−∞}^{+∞} s_{a_k}(t − kT)    (6.89)

which is sent over the transmission channel. Let s_Ch(t) be the signal at the output of the transmission channel, which is assumed to introduce additive white Gaussian noise w(t) with PSD N₀/2.

Figure 6.20. Simplified model of a transmission system.

⁵ Note that there are systems in which encoder and modulator are jointly considered; see, for example, Chapter 12. In that case the notion of binary channel cannot be referred to the transmission of the sequence {c_m}.

However, assuming that the transmitted waveforms do not give rise to intersymbol interference (ISI) at the demodulator
output,⁶ we can still study the system assuming that an isolated symbol is transmitted, for example, the symbol a₀ transmitted at instant t = 0.

At the receiver, the bits {c̃_ℓ} are obtained by inverse bit mapping from the detected message {â_k}.

Definition 6.1. The transformation that maps c_m into c̃_m is called a binary channel. It is characterized by the bit rate 1/T_cod, which is the transmission rate of the bits of the sequence {c_m}, and by the bit error probability

P_bit = P_BC = P[c̃_m ≠ c_m]    c_m ∈ {0, 1}    (6.90)

In the case of a binary symmetric channel (BSC), it is assumed that P[c̃_m ≠ c_m | c_m = 0] = P[c̃_m ≠ c_m | c_m = 1]. We say that the BSC is memoryless if, for every choice of N distinct instants m₁, m₂, …, m_N, the following relation holds:

P[c̃_{m₁} ≠ c_{m₁}, c̃_{m₂} ≠ c_{m₂}, …, c̃_{m_N} ≠ c_{m_N}] = P[c̃_{m₁} ≠ c_{m₁}] P[c̃_{m₂} ≠ c_{m₂}] ⋯ P[c̃_{m_N} ≠ c_{m_N}]    (6.91)

In a memoryless binary symmetric channel the probability distribution of {c̃_m} is obtained from that of {c_m} and P_BC according to the statistical model shown in Figure 6.21.

Figure 6.21. Memoryless binary symmetric channel.

The information bits {b̂_ℓ} are then recovered by a decoding process. We note that the aim of the channel encoder is to introduce redundancy in the sequence {c_m}, which is exploited by the decoder to detect and/or correct errors introduced by the binary channel. The overall objective of the transmission system is to reproduce the sequence of information bits {b_ℓ} with a high degree of reliability, measured by the bit error probability

P_bit^(dec) = P[b̂_ℓ ≠ b_ℓ]    (6.92)

⁶ Absence of ISI in this context means that the optimum reception of the waveform transmitted at instant kT, s_{a_k}(t − kT), is not influenced by the presence of the waveforms associated with symbols transmitted at other instants; for example, all signalling schemes that employ pulses with finite duration in the interval (0, T) do not give rise to ISI. This is a particular case of the Nyquist criterion for the absence of ISI that will be discussed in Section 7.3.
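The memoryless BSC model of Figure 6.21 is straightforward to simulate; a minimal Python sketch (the flip probability and message length are arbitrary choices):

```python
import random

def bsc(bits, p, rng):
    # memoryless binary symmetric channel: each bit flips
    # independently with probability p = P_BC
    return [b ^ (1 if rng.random() < p else 0) for b in bits]

rng = random.Random(0)          # fixed seed for reproducibility
n = 200_000
tx = [rng.randint(0, 1) for _ in range(n)]
rx = bsc(tx, 0.1, rng)
p_hat = sum(a != b for a, b in zip(tx, rx)) / n
```

The empirical error rate `p_hat` converges to P_BC as the message length grows, illustrating (6.90).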
Typically, it is required P_bit^(dec) ≈ 10⁻²–10⁻³ for PCM or ADPCM coded speech (see Chapter 5) and P_bit^(dec) ≈ 10⁻⁷–10⁻¹¹ for data messages.

Parameters of a transmission system

We give several general definitions widely used to describe the various modulation systems that will be treated in the following sections. We assume the message {b_ℓ} is composed of binary i.i.d. symbols. As in practical systems the transmitted signal s is distorted by the transmission channel, we consider the desired signal at the receiver input; in particular, for an ideal AWGN channel s_Ch(t) = s(t).

• T: modulation interval or symbol period (s).
• 1/T: modulation rate or symbol rate (Baud).
• M: cardinality of the alphabet of symbols at the transmitter.
• T_b: bit period (s). It is equal to the time interval between two consecutive bits of the information message.
• R_b = 1/T_b: bit rate of the system (bit/s).
• L_b: number of information message bits per modulation interval.
• I: number of dimensions of the signal space or of the signal constellation.
• R_I: rate of the encoder-modulator (bit/dim).
• E_sCh: average energy of an isolated pulse (V²s).
• E_b: average energy per information bit (V²s/bit).
• E_I: average energy per dimension (V²s/dim).
• M_sCh: statistical power of the desired signal at the receiver input (V²).
• P_sCh: available power of the desired signal at the receiver input (W).
• N₀/2: spectral density of the additive white noise introduced by the channel (V²/Hz).
• B_min: conventional minimum bandwidth of the modulated signal (Hz).
• ν: spectral efficiency of the system (bit/s/Hz).
• Γ: conventional signal-to-noise ratio at the receiver input.
• Γ_I: signal-to-noise ratio per dimension.
• T_wi: effective receiver noise temperature (K).
• S: sensitivity (W). It expresses the minimum value of P_sCh such that the system achieves a given performance in terms of bit error probability.
Relations among parameters

1. Rate of the encoder-modulator:

R_I = L_b/I    (6.93)

2. Number of information bits per modulation interval: via the channel encoder (COD) and the bit-mapper (BMAP), L_b information bits of the message {b_ℓ} are mapped in an M-ary symbol. In general we have

L_b ≤ log₂M    (6.94)

where the equality holds for a system without coding. In this case we also have

R_I = log₂M / I    (6.95)

3. Symbol period:

T = T_b L_b    (6.96)

4. Statistical power of the desired signal at the receiver input:

M_sCh = E_sCh/T    (6.97)

We note that, for continuous transmission (see Chapter 7), M_sCh is finite and consequently we define E_sCh = M_sCh T; on the other hand, for transmission of an isolated pulse E_sCh is finite and we define M_sCh through (6.97).

5. Average energy per information bit:

E_b = E_sCh/L_b    (6.98)

For an uncoded system, (6.98) becomes

E_b = E_sCh/log₂M    (6.99)

6. Average energy per dimension:

E_I = E_sCh/I    (6.100)

Moreover, since L_b = R_I I,

E_b = E_I/R_I = E_sCh/L_b    (6.101)

7. Conventional minimum bandwidth of the modulated signal:

B_min = 1/(2T) for baseband signals,    B_min = 1/T for passband signals    (6.102)
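Relations (6.93)–(6.102) chain together mechanically; a Python sketch for an uncoded one-dimensional system (the helper name is illustrative):

```python
import math

def system_parameters(M, T_b, I=1, L_b=None):
    # derive the quantities of relations (6.93)-(6.102); for an
    # uncoded system L_b = log2 M, i.e. (6.94) with equality
    if L_b is None:
        L_b = math.log2(M)
    R_I = L_b / I                      # (6.93)
    T = T_b * L_b                      # (6.96)
    return {"L_b": L_b,
            "R_I": R_I,
            "T": T,
            "B_min_baseband": 1.0 / (2.0 * T),   # (6.102)
            "B_min_passband": 1.0 / T}

p = system_parameters(M=8, T_b=1e-6)
```

For example, an uncoded 8-ary system with T_b = 1 μs has L_b = 3 and symbol period T = 3 μs.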
For the orthogonal and biorthogonal signals of Section 6.7, the definition of B_min will be different and will include the factor 1/M.

8. Spectral efficiency:

ν = (1/T_b)/B_min = L_b/(B_min T)    (6.103)

In practice ν measures how many bits per unit of time are sent over a channel with the conventional bandwidth B_min. In terms of R_I, since L_b = R_I I, we have

ν = R_I I/(B_min T)    (6.104)

Later we will see that, for most uncoded systems, R_I = ν/2.

9. Conventional signal-to-noise ratio at the receiver input:

Γ = M_sCh/((N₀/2) 2B_min) = E_sCh/(N₀ B_min T)    (6.105)

In general Γ expresses the ratio between the statistical power of the desired signal at the receiver input and the statistical power of the noise measured with respect to the conventional bandwidth B_min. We note that, for the same value of N₀/2, if B_min doubles, the statistical power must also double to maintain a given ratio Γ.

10. Signal-to-noise ratio per dimension:

Γ_I = E_I/(N₀/2) = 2E_sCh/(N₀ I)    (6.106)

which is the ratio between the energy per dimension of an isolated pulse, E_I, and the noise variance per dimension σ_I² given by (6.13). It is interesting to observe that in most modulation systems it turns out Γ_I = Γ. Using (6.93) and (6.99), the general relation becomes

Γ_I = 2R_I E_b/N₀    (6.107)

11. Link budget: if the receiver is matched to the transmission medium for the maximum transfer of power, we have

Γ = P_sCh/(k T_wi B_min)    (6.108)

We observe that (6.108) is usually employed to evaluate the link budget, whereas (6.105) is useful to analyze the system.

In the next sections some examples of modulation systems without channel coding are illustrated.
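The two expressions of Γ can be sketched as follows in Python; the numerical values in the link-budget example are assumed for illustration, not taken from the text:

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant (J/K)

def gamma_system(E_sCh, N0, B_min, T):
    # system-analysis form (6.105): Gamma = E_sCh / (N0 B_min T)
    return E_sCh / (N0 * B_min * T)

def gamma_link_budget(P_sCh, T_wi, B_min):
    # link-budget form (6.108): Gamma = P_sCh / (k T_wi B_min)
    return P_sCh / (K_BOLTZMANN * T_wi * B_min)

# hypothetical link: 1 pW received over 1 MHz at T_wi = 300 K
g = gamma_link_budget(P_sCh=1e-12, T_wi=300.0, B_min=1e6)
g_db = 10.0 * math.log10(g)
```

With these assumed numbers the link operates at roughly 24 dB of signal-to-noise ratio.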
6.3 Pulse amplitude modulation (PAM)

Pulse amplitude modulation, also called amplitude shift keying (ASK), is the first example of multilevel baseband signalling, i.e. M may take values larger than 2. PAM signals are obtained by modulating in amplitude the pulse shape h_Tx. Let h_Tx be a real-valued finite-energy pulse with support (0, t₀). In other words, a transmitted isolated pulse is expressed as

s_n(t) = α_n h_Tx(t)    t ∈ ℝ, n = 1, …, M    (6.109)

where

α_n = 2n − 1 − M    (6.110)

Energy of s_n:

E_n = α_n² E_h    with    E_h = ∫_{−∞}^{+∞} |h_Tx(t)|² dt    (6.111)

Vector representation:

s_n = α_n √E_h    n = 1, …, M    (6.112)

Average energy of the system:⁷

E_s = (1/M) Σ_{n=1}^{M} E_n = (M² − 1)/3 E_h    (6.113)

Basis function:

φ(t) = h_Tx(t)/√E_h    (6.114)

The minimum distance is equal to

d_min = 2√E_h = d    (6.115)

as illustrated in Figure 6.22 for M = 8. The transmitter is shown in Figure 6.23. The bit mapper is composed of a serial-to-parallel (S/P) converter followed by a map that translates a sequence of log₂M bits into the corresponding value of a₀; the map is a Gray encoder (see Appendix 6.B), and an example for M = 8 is illustrated in Table 6.1. The symbol α_n is input to an interpolator filter with impulse response h_Tx; the filter output yields the transmitted signal s_{a₀}(t).

⁷ A few useful formulae are:

Σ_{i=1}^{M} i = M(M + 1)/2    Σ_{i=1}^{M} i² = M(M + 1)(2M + 1)/6    Σ_{i=1}^{M} i³ = [M(M + 1)/2]²
Table 6.1 Bit map for an 8-PAM system (Gray coding of symbols, M = 8)

Three-bit sequence | α_n | a₀
000 | −7 | 1
001 | −5 | 2
011 | −3 | 3
010 | −1 | 4
110 |  1 | 5
111 |  3 | 6
101 |  5 | 7
100 |  7 | 8

Figure 6.22. Vector representation, or signal constellation, of an 8-PAM system: the symbols s₁, …, s₈ lie at −7√E_h, −5√E_h, −3√E_h, −√E_h, √E_h, 3√E_h, 5√E_h, 7√E_h, with d = 2√E_h, and carry the bit labels 000, 001, 011, 010, 110, 111, 101, 100.

Figure 6.23. Transmitter of a PAM system for an isolated pulse.

The type 1 implementation of the ML receiver is shown in Figure 6.24 and consists of a filter matched to h_Tx followed by a sampler. From (6.15), the observed sample is given by

r = s_n + w    (6.116)

where s_n = α_n(d/2), n = 1, …, M, and w is a real-valued Gaussian r.v. with zero mean and variance N₀/2. From the observation of r, a threshold detector yields the detected symbol â₀. The transmitted bits are then recovered by an inverse bit mapper.

Minimum bandwidth of the modulated signal, equal to the Nyquist frequency (see Definition 7.1 on page 559):

B_min = 1/(2T)    (6.117)
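The bit mapper of Table 6.1 can be sketched in Python, together with a check that the map is indeed a Gray code, i.e. that adjacent amplitudes differ in exactly one bit:

```python
# Gray bit map of Table 6.1: three bits -> amplitude alpha_n = 2n - 1 - M
GRAY_8PAM = {
    "000": -7, "001": -5, "011": -3, "010": -1,
    "110":  1, "111":  3, "101":  5, "100":  7,
}

def bits_to_symbols(bits):
    # serial-to-parallel conversion followed by the Gray map
    assert len(bits) % 3 == 0
    return [GRAY_8PAM[bits[i:i + 3]] for i in range(0, len(bits), 3)]

def is_gray(mapping):
    # adjacent amplitudes must differ in exactly one bit position
    inv = {v: k for k, v in mapping.items()}
    amps = sorted(inv)
    return all(sum(x != y for x, y in zip(inv[a], inv[b])) == 1
               for a, b in zip(amps, amps[1:]))
```

The Gray property is what makes the approximation P_bit ≈ P_e/log₂M accurate at high SNR: a nearest-neighbour symbol error corrupts a single bit.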
Figure 6.24. ML receiver, implementation type 1, of a PAM system for an isolated pulse. The thresholds are set at (n − M/2)d, n = 1, …, M − 1, that is at 0, ±d, …, ±(M/2 − 1)d.

Spectral efficiency:

ν = (1/T) log₂M / (1/(2T)) = 2 log₂M (bit/s/Hz)    (6.118)

Signal-to-noise ratio: from (6.105) it follows

Γ = E_s/(N₀/2)    (6.119)

Note that (6.119) expresses Γ as the ratio between the signal energy and the variance of the noise component; therefore, as I = 1, it follows that Γ_I = Γ.

Symbol error probability: from the total probability theorem, for the outer symbols of the constellation we have

P[E | s_M] = P[E | s₁] = P[â₀ ≠ 1 | a₀ = 1]
 = P[(1 − M)(d/2) + w > (2 − M)(d/2)]
 = P[w > d/2]
 = Q(d/(2σ_I))    (6.120)

and, for n = 2, …, M − 1,
P[E | s_n] = P[â₀ ≠ n | a₀ = n]
 = P[{r < (α_n − 1)(d/2)} ∪ {r > (α_n + 1)(d/2)} | a₀ = n]
 = P[w < −d/2] + P[w > d/2]
 = 2Q(d/(2σ_I))    (6.121)

where σ_I² = N₀/2. Then, for equally likely symbols we have

P_e = (1/M)[2Q(d/(2σ_I)) + (M − 2) 2Q(d/(2σ_I))] = 2(1 − 1/M) Q(d/(2σ_I))    (6.122)

In terms of E_s we get⁸

P_e = 2(1 − 1/M) Q(√(6/(M² − 1) · E_s/N₀))    (6.123)

In terms of Γ, substitution of (6.119) into (6.122) yields

P_e = 2(1 − 1/M) Q(√(3Γ/(M² − 1)))    (6.124)

Assuming that Gray coding is adopted at the transmitter, the bit error probability is given by

P_bit ≈ P_e/log₂M    valid for Γ ≫ 1    (6.125)

Equation (6.125) expresses the fact that, if an error event occurs, for Γ sufficiently large it is very likely that one of the symbols at the minimum distance from the transmitted symbol is detected; thus with high probability only one bit of the log₂M bits associated with the transmitted symbol is incorrectly recovered.

Curves of P_bit as a function of Γ are shown in Figure 6.25 for different values of M.

In Appendix 6.C, two other baseband modulation schemes are described: pulse position modulation (PPM) and pulse duration modulation (PDM).

⁸ These results are valid also for continuous transmission with modulation period T, assuming absence of ISI at the decision point; in other words, the autocorrelation function of h_Tx(t) must be a Nyquist pulse (see Section 7.3).
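Equations (6.124) and (6.125) can be sketched as:

```python
import math

def Qf(x):
    # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_pam(M, gamma):
    # symbol error probability of M-PAM, eq. (6.124):
    # P_e = 2 (1 - 1/M) Q( sqrt(3 Gamma / (M^2 - 1)) )
    return 2.0 * (1.0 - 1.0 / M) * Qf(math.sqrt(3.0 * gamma / (M * M - 1)))

def pbit_pam(M, gamma):
    # Gray coding approximation (6.125), valid for large Gamma
    return pe_pam(M, gamma) / math.log2(M)
```

For M = 2 the expression collapses to Q(√Γ), the antipodal result, and each doubling of M costs roughly 6 dB in Γ at fixed P_e.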
Figure 6.25. Bit error probability as a function of Γ = 2E_s/N₀ for M-PAM transmission (M = 2, 4, 8, 16).

6.4 Phase-shift keying (PSK)

Phase-shift keying is an example of passband modulation: signals are obtained by choosing one of the M possible values of the phase of a sinusoidal function with frequency f₀. Let h_Tx be a real-valued finite-energy baseband pulse with support (0, t₀), and let⁹

φ_n = (π/M)(2n − 1)    n = 1, …, M    (6.126)

then the generic transmitted pulse is given by

s_n(t) = h_Tx(t) cos(2πf₀t + φ_n)    t ∈ ℝ, n = 1, …, M    (6.127)

In this section we consider the case of an isolated pulse transmitted at instant k = 0.¹⁰ In the following sections we will denote by θ_k the r.v., with values given by φ_n, that determines the transmitted signal phase at instant kT. An alternative expression of (6.127) is given by

s_n(t) = Re[e^{jφ_n} h_Tx(t) e^{j2πf₀t}]    (6.128)

⁹ A more general definition of φ_n is given by φ_n = (π/M)(2n − 1) + φ₀, where φ₀ is a constant phase.
¹⁰ We obtain (6.127) from (6.66) by assuming h_Tx(t) = w_T(t), where w_T is the rectangular window of duration T defined in (1.473).

Moreover, setting

α_n = e^{jφ_n} = e^{j(π/M)(2n−1)}    (6.129)

we have

s_n(t) = α_{n,I} h_Tx(t) cos(2πf₀t) − α_{n,Q} h_Tx(t) sin(2πf₀t)    (6.130)

where

α_{n,I} = Re[α_n] = cos((π/M)(2n − 1))    (6.131)
α_{n,Q} = Im[α_n] = sin((π/M)(2n − 1))    (6.132)

Energy of s_n: if f₀ is greater than the bandwidth of h_Tx, using the Parseval theorem we get

E_n = E_h/2    (6.133)

Average energy of the system:

E_s = (1/M) Σ_{n=1}^{M} E_n = E_h/2    (6.134)

Basis functions:

φ₁(t) = √(2/E_h) h_Tx(t) cos(2πf₀t)    (6.135)
φ₂(t) = −√(2/E_h) h_Tx(t) sin(2πf₀t)    (6.136)

Vector representation:

s_n = √(E_h/2) [cos((π/M)(2n − 1)), sin((π/M)(2n − 1))]ᵀ    n = 1, …, M    (6.137)

as illustrated in Figure 6.26 for M = 8. We note that the desired signal at the decision point, aside from the factor √(E_h/2), coincides with α_n. The minimum distance is equal to

d_min = 2√E_s sin(π/M) = √(2E_h) sin(π/M)    (6.138)
Figure 6.26. Signal constellation of an 8-PSK system. The projections of s_n on the axis φ₁ (in phase) and on the axis φ₂ (quadrature) are also represented, together with the Gray coding of the various symbols by the bits b₁, b₂, b₃.

Note that the signal constellation lies on a circle and the various vectors differ in the phase φ_n.

A PSK transmitter for M = 8 is shown in Figure 6.27. The bit mapper maps a sequence of log₂M bits to a constellation point with value α_n. The quadrature components α_{n,I} and α_{n,Q} are input to interpolator filters with impulse response h_Tx. The filter output signals are multiplied by the carrier signal, cos(2πf₀t), and by the carrier signal phase-shifted by π/2, −sin(2πf₀t), respectively; the transmitted signal (6.130) is obtained by adding the two components.

The type 1 implementation of the ML receiver is illustrated in Figure 6.28. From the general scheme of Figure 6.8, we note that the basis functions (6.135) and (6.136) are implemented partially by a correlator with a sinusoidal signal, and partially by a filter matched to h_Tx; the carrier phase-shifted by π/2 may be obtained, for example, by a Hilbert filter.

From Figure 6.26 we note that the decision regions are angular sectors with phase 2π/M. For M = 2, 4, 8, simple decision rules can be defined; for M > 8 detection can be made by observing the phase ∠r of the received vector¹¹ r = [r_I, r_Q]ᵀ.

¹¹ For the sake of notation uniformity with the following chapters, the components r₁ and r₂ of r will be indicated by r_I and r_Q, respectively.
Figure 6.27. Transmitter of an 8-PSK system for an isolated pulse.

Figure 6.28. ML receiver, implementation type 1, of an M-PSK system for an isolated pulse. Thresholds are set at (2π/M)n, n = 1, …, M.

Minimum bandwidth of the modulated signal (passband signal):

B_min = 1/T    (6.139)

Spectral efficiency:

ν = (1/T) log₂M / (1/T) = log₂M (bit/s/Hz)    (6.140)

We note that for M = L² we have the same spectral efficiency as L-ary PAM.
Signal-to-noise ratio: from (6.105) we have

Γ = E_s/N₀ = (E_s/2)/σ_I²    (6.141)

We note that Γ also expresses the ratio between the energy per dimension, E_s/2, and the variance of the noise components; moreover Γ_I = Γ if M > 2.

Symbol error probability: with equally likely signals, exploiting the symmetry of the signalling scheme we get

P_e = P[E | s_n] = 1 − P[C | s_n] = 1 − P[r ∈ R_n | a₀ = n] = 1 − ∫∫_{R_n} p_r(ρ_I, ρ_Q | a₀ = n) dρ_I dρ_Q    (6.142)

where the angular sector R_n is illustrated in Figure 6.29. For a₀ = n we get r = w + s_n, where s_n is given by (6.137); then, observing (6.16), (6.142) becomes

P_e = 1 − ∫∫_{R_n} (1/(πN₀)) exp{−[(ρ_I − √E_s cos φ_n)² + (ρ_Q − √E_s sin φ_n)²]/N₀} dρ_I dρ_Q    (6.143)

Using polar coordinates, we get

P_e = 1 − ∫_{−π/M}^{π/M} p_θ(z) dz    (6.144)

Figure 6.29. Decision region for an M-PSK signal: the received vector r = s_m + w falls in an angular sector of width 2π/M with phase ∠r.
where

p_θ(z) = (e^{−E_s/N₀}/(2π)) {1 + √(4πE_s/N₀) cos z e^{(E_s/N₀) cos²z} [1 − Q(√(2E_s/N₀) cos z)]}    (6.145)

for −π ≤ z ≤ π. The integral (6.144) cannot be solved in closed form. If E_s/N₀ ≫ 1, for M ≥ 4 we can use the approximation (6.363) in (6.145) to obtain

p_θ(z) ≈ √(E_s/(πN₀)) cos z e^{−(E_s/N₀) sin²z}    (6.146)

In turn, substituting (6.146) in (6.144), and observing (6.141), we get

P_e ≈ 2Q(√(2E_s/N₀) sin(π/M)) = 2Q(√(2Γ) sin(π/M))    (6.147)

Assuming that Gray coding is adopted at the transmitter, the bit error probability is given by

P_bit = P_e/log₂M    valid for E_s/N₀ ≫ 1    (6.148)

Curves of P_bit as a function of Γ = E_s/N₀ are shown in Figure 6.31. We consider in detail two particular cases.

Binary PSK (BPSK)

For M = 2 we get φ₁ = φ₀ and φ₂ = π + φ₀, where φ₀ is an arbitrary phase, and a basis is given by the function

φ₁(t) = √(2/E_h) h_Tx(t) cos(2πf₀t + φ₀)    (6.149)

The signal constellation, illustrated in Figure 6.30, comprises the vectors

s₁ = √E_s    s₂ = −√E_s    (6.150)

hence d_min = 2√E_s. Then I = 1, and it is ν = 1; moreover Γ_I = 2Γ. This result is due to the fact that a BPSK system does not efficiently use the available bandwidth 1/T: in fact only half of the band carries information; the information in the other half can be deduced by symmetry and is therefore redundant.
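The high-SNR approximations (6.147) and (6.148) can be sketched as:

```python
import math

def Qf(x):
    # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_psk(M, gamma):
    # approximation (6.147): P_e ~= 2 Q( sqrt(2 Gamma) sin(pi/M) )
    return 2.0 * Qf(math.sqrt(2.0 * gamma) * math.sin(math.pi / M))

def pbit_psk(M, gamma):
    # Gray coding, eq. (6.148)
    return pe_psk(M, gamma) / math.log2(M)
```

For M = 4 the argument √(2Γ) sin(π/4) reduces to √Γ, consistent with the QPSK result discussed below.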
From (6.141), and using (6.64), obtained for antipodal signals, the evaluation of P_e yields

P_e = P_bit = Q(√(2E_s/N₀)) = Q(√(2Γ))    (6.151)

Figure 6.30. Signal constellation of a BPSK system.

Figure 6.31. Bit error probability as a function of Γ = E_s/N₀ for M-PSK transmission (M = 2, 4, 8, 16, 32).

The transmitter and the receiver for a BPSK system are shown in Figure 6.32 and have a very simple implementation. The bit mapper of the transmitter maps ‘0’ in ‘−1’ and ‘1’ in ‘+1’ to generate NRZ binary data (see Appendix 7.A). At the receiver, the decision element implements the “sign” function to detect NRZ binary data. The inverse bit mapping to recover the bits of the information message is straightforward.
Figure 6.32. Schemes of transmitter and receiver for a BPSK system with φ₀ = 0.

Quadrature PSK (QPSK)

PSK for M = 4 is usually called quadrature PSK (QPSK). With reference to the vector representation of Figure 6.33, as w_I and w_Q are statistically independent, the probability of correct decision is given by

P[C | s₁] = P[w_I > −√E_h/2, w_Q > −√E_h/2] = (1 − Q(√(E_h/(2N₀))))²    (6.152)

As from (6.134) E_s = E_h/2 and Γ = E_s/N₀, we get

P_e = 1 − P[C] = 1 − P[C | s₁] = 2Q(√Γ)[1 − (1/2)Q(√Γ)]    (6.153)

For Γ ≫ 1, the following approximations are valid:

P_e ≈ 2Q(√Γ)    (6.154)

and

P_bit ≈ Q(√Γ)    (6.155)

The QPSK transmitter is obtained by simplification of the general scheme of Figure 6.27, as illustrated in Figure 6.34. The binary bit maps are given in Table 6.2. The ML receiver for QPSK is illustrated in Figure 6.35. As the decision thresholds are set at the phases (0, π/2, π, 3π/2), decisions can be made independently on r_I and r_Q, using a simple threshold detector with threshold set at zero.
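A quick numerical check that the exact QPSK expression and its high-SNR approximation agree for large Γ:

```python
import math

def Qf(x):
    # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_qpsk_exact(gamma):
    # eq. (6.153): P_e = 1 - (1 - Q(sqrt(Gamma)))^2
    return 1.0 - (1.0 - Qf(math.sqrt(gamma))) ** 2

def pe_qpsk_approx(gamma):
    # eq. (6.154), valid for Gamma >> 1
    return 2.0 * Qf(math.sqrt(gamma))
```

The exact value is always slightly smaller, since the approximation drops the (negative) quadratic term −Q²(√Γ).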
Figure 6.33. Signal constellation of a QPSK system: the four vectors s₁, …, s₄, at distance √E_s = √(E_h/2) from the origin, carry the bit pairs b₁b₂ = 11, 01, 00, 10.

Table 6.2 Binary bit map for a QPSK system

b₁ (b₂) | α_{n,I} (α_{n,Q})
0 | −1/√2
1 |  1/√2

Figure 6.34. QPSK transmitter for an isolated pulse.
Figure 6.35. ML receiver for a QPSK system.

We observe that, for h_Tx(t) = K w_T(t), the transmitter filter is a simple holder; with reference to the scheme of Figure 6.28, at the receiver the matched filter plus sampler becomes an integrator that is cleared before each integration over a symbol period of duration T: in other words, it consists of an integrate-and-dump.

6.5 Differential PSK (DPSK)

We assume now that the receiver recovers the carrier signal, except for a phase offset of φ_a: the reconstructed carrier is cos(2πf₀t − φ_a). Consequently, it is as if the constellation at the receiver were rotated by φ_a; in this case s̄_n coincides with √E_s e^{jφ_a} α_n, where α_n is given by (6.129). To prevent this problem there are two strategies.

By the coherent method, a receiver estimates φ_a from the received signal and considers the original constellation (see Figure 6.33) for detection, using the signal r e^{−jφ̂_a}, where φ̂_a is the estimate of φ_a.

By the differential non-coherent method, a receiver detects the data using the difference between the phases of signals at successive sampling instants. In particular:

• for M-PSK, the phase of the transmitted signal at instant kT is given by (6.126), with

θ_k ∈ {π/M, 3π/M, …, (2M − 1)π/M}    (6.156)
• for M-DPSK,¹² the transmitted phase at instant kT is given by

ψ_k = ψ_{k−1} + θ_k    with θ_k ∈ {0, 2π/M, …, 2π(M − 1)/M}    (6.157)

that is, the phase associated with the transmitted signal at instant kT is equal to that transmitted at the previous instant (k − 1)T plus the increment θ_k, which can assume one of M values; in particular, ψ_k − ψ_{k−1} = θ_k.

For a phase offset equal to φ_a introduced by the channel, the phase of the signal at the detection point becomes

ψ′_k = ψ_k + φ_a    (6.158)

In any case,

ψ′_k − ψ′_{k−1} = ψ_k − ψ_{k−1} = θ_k    (6.159)

and the ambiguity of φ_a is removed. We note that the decision thresholds for θ_k are now placed at (π/M)(2n − 1), n = 1, …, M. However, if M is large, a differential encoder and a coherent receiver can be used, as we will see in the next section. Three differential non-coherent receivers that determine an estimate of (6.159) are discussed in Chapter 18.

¹² Note that we consider a differential non-coherent receiver with which is associated a differential symbol encoder at the transmitter (see (6.157)).

6.5.1 Error probability for an M-DPSK system

For E_s/N₀ ≫ 1, using the definition of the Marcum function Q₁(·,·) (see Appendix 6.A), it can be shown that the error probability of an isolated symbol is approximated by the following bound [2, 3]:

P_e ≲ 1 + Q₁(a, b) − Q₁(b, a)    (6.160)

with

a = √((E_s/N₀)(1 − sin(π/M)))    b = √((E_s/N₀)(1 + sin(π/M)))

Moreover, the approximation (6.369) can be used and we get

P_e ≈ 2Q(√(E_s/N₀) [√(1 + sin(π/M)) − √(1 − sin(π/M))]) ≈ 2Q(√(E_s/N₀) sin(π/M))    (6.161)
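The bound (6.160) can be evaluated with a simple numerical implementation of the Marcum function; a Python sketch, using the standard integral definition Q₁(a, b) = ∫_b^∞ x e^{−(x²+a²)/2} I₀(ax) dx (the series truncation and step count are arbitrary choices):

```python
import math

def _besseli0(x):
    # modified Bessel function I_0 via its power series
    term, total, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (x / (2.0 * k)) ** 2
        total += term
    return total

def marcum_q1(a, b, steps=4000):
    # trapezoidal integration of the Marcum Q_1 integrand on [b, b+10]
    hi = b + 10.0
    h = (hi - b) / steps
    def f(x):
        return x * math.exp(-(x * x + a * a) / 2.0) * _besseli0(a * x)
    s = 0.5 * (f(b) + f(hi)) + sum(f(b + i * h) for i in range(1, steps))
    return s * h

def pe_mdpsk_bound(M, EsN0):
    # upper bound (6.160) on the M-DPSK symbol error probability
    s = math.sin(math.pi / M)
    a = math.sqrt(EsN0 * (1.0 - s))
    b = math.sqrt(EsN0 * (1.0 + s))
    return 1.0 + marcum_q1(a, b) - marcum_q1(b, a)
```

For M = 2 the bound reduces to e^{−E_s/N₀}, since Q₁(0, b) = e^{−b²/2} and Q₁(b, 0) = 1; this is loose by a factor 2 with respect to the exact binary result (6.163) below.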
For Gray coding of the values of θ_k in (6.157), the bit error probability is given by

P_bit = P_e/log₂M    (6.162)

where P_e is given by (6.161). For M = 2, the exact formula is [2, 3]

P_bit = P_e = (1/2) e^{−E_s/N₀}    (6.163)

For M = 4, the exact formula of the error probability is [2, 3]

P_e = 2Q₁(a, b) − I₀(ab) e^{−0.5(a² + b²)}    (6.164)

where

a = √((E_s/N₀)(1 − √(1/2)))    b = √((E_s/N₀)(1 + √(1/2)))    (6.165)

and where the function I₀ is defined in (4.216).

Using the previous results, a comparison in terms of P_bit between DPSK (6.161) and PSK (6.147) is given in Figure 6.36: we note that, for P_bit = 10⁻³, DPSK presents a loss of only 1.2 dB in Γ for M = 2, that increases to 2.3 dB for M = 4, and to 3 dB for M > 4. As a DPSK receiver is simpler as compared to a coherent PSK receiver, in that it does not require recovery of the carrier phase, for M = 2 DPSK is usually preferred to PSK.

Figure 6.36. Comparison between PSK and DPSK: P_bit as a function of Γ (M = 2, 4, 8, 16, 32).
Note that DPSK gives lower performance with respect to PSK, especially for M ≥ 4, because both the current sample and the reference sample are corrupted by noise. This drawback can be mitigated if the reference sample is constructed by using more than one previously received sample [4]: for example, the reference sample may be constructed using the samples received in the two previous modulation intervals, instead of the previously received sample only. In this way we establish a gradual transition between differential phase demodulation and coherent demodulation; asymptotically, DPSK and PSK yield similar performance [4].

6.5.2 Differential encoding and coherent demodulation

If φ_a is a multiple of 2π/M, symbols can be differentially encoded before modulation; at the receiver, the phase difference can then be formed between the phases of two consecutive coherently detected symbols, instead of between the phases of two consecutive samples.

Binary case (M = 2, differentially encoded BPSK)

Let b_k be the value of the information bit at instant kT. In a BPSK system without differential encoding, the phase θ_k ∈ {0, π} is associated with b_k by the bit map of Table 6.3.

Table 6.3 Bit map for a BPSK system

b_k | transmitted phase θ_k (rad)
0 | 0
1 | π

With differential encoding, we encode the information bits as

c_k = c_{k−1} ⊕ b_k    c_k ∈ {0, 1}, k ≥ 0    (6.166)

Differential encoder.

where ⊕ denotes the modulo 2 sum; therefore c_k = c_{k−1} if b_k = 0, and¹³ c_k = c̄_{k−1} if b_k = 1. For the bit map of Table 6.4 we have that b_k = 1 causes a phase transition, and b_k = 0 causes a phase repetition. If {ĉ_k} are the detected coded bits at the receiver, the information bits are recovered by

b̂_k = ĉ_k ⊕ ĉ_{k−1}    (6.167)

Decoder.

¹³ c̄ denotes the one's complement of c: 1̄ = 0 and 0̄ = 1.

Table 6.4 Bit map for a differentially encoded BPSK system

c_k | transmitted phase ψ_k (rad)
0 | 0
1 | π

We note that a phase ambiguity φ_a = π does not alter the recovered sequence {b̂_k}: in fact, in this case {ĉ_k} becomes {ĉ_k ⊕ 1} and we have

(ĉ_k ⊕ 1) ⊕ (ĉ_{k−1} ⊕ 1) = ĉ_k ⊕ ĉ_{k−1} = b̂_k    (6.168)

Multilevel case

Let {d_k} be a multilevel information sequence, with d_k ∈ {0, 1, …, M − 1}. In this case (6.166) becomes

c_k = c_{k−1} ⊞ d_k    (6.169)

where ⊞ denotes the modulo M sum. Because c_k ∈ {0, 1, …, M − 1}, the phase associated with the bit map is ψ_k ∈ {π/M, 3π/M, …, (2M − 1)π/M}. At the receiver the information sequence is recovered by

d̂_k = ĉ_k ⊟ ĉ_{k−1}    (6.170)

where ⊟ denotes the modulo M difference. It is easy to see that an offset equal to j ∈ {0, 1, …, M − 1} in the sequence {ĉ_k}, corresponding to a phase offset equal to {0, 2π/M, …, (M − 1)2π/M} in {ψ_k}, does not cause errors in {d̂_k}. In fact,

(ĉ_k ⊞ j) ⊟ (ĉ_{k−1} ⊞ j) = ĉ_k ⊟ ĉ_{k−1} = d̂_k    (6.171)

Performance of a PSK system with differential encoding and coherent demodulation by the scheme of Figure 6.28 is worse as compared to a system with absolute phase encoding. In fact, we observe that an error in {ĉ_k} causes two errors in {d̂_k}: for small P_e, up to values of the order of 0.1, P_e approximately increases by a factor 2,¹⁴ which causes a negligible loss in terms of Γ.

¹⁴ If we indicate with P_e,Ch the channel error probability, the error probability after decoding is given by [2]:

Binary case: P_bit = 2P_bit,Ch [1 − P_bit,Ch]    (6.172)
Quaternary case: P_e = 4P_e,Ch − 8P²_e,Ch + 8P³_e,Ch − 4P⁴_e,Ch    (6.173)

To combine Gray encoding of the values of c_k with the differential encoding (6.169), a two-step procedure is adopted:
1. represent the values of d_k with a Gray encoder using a combinatorial table, as illustrated for example in Table 6.5 for M = 8;
2. determine the differentially encoded symbols according to (6.169).

Table 6.5 Gray coding for M = 8

Three information bits | value of d_k
000 | 0
001 | 1
011 | 2
010 | 3
110 | 4
111 | 5
101 | 6
100 | 7

Example 6.5.1 (Differential encoding 2B1Q)

We consider a differential encoding scheme for a four-level system that makes the reception insensitive to a possible change of sign of the transmitted sequence. For M = 4 this implies insensitivity to a phase rotation equal to π in a 4-PSK signal, or to a change of sign in a 4-PAM signal.

For M = 4, we give the law between the binary representation of d_k = (d_k^(1), d_k^(0)) and that of c_k = (c_k^(1), c_k^(0)), with d_k^(i), c_k^(i) ∈ {0, 1}. The equations of the differential encoder are

c_k^(1) = d_k^(1) ⊕ c_{k−1}^(1)    c_k^(0) = d_k^(0) ⊕ c_{k−1}^(0)    (6.174)

The equations of the differential decoder are

d̂_k^(1) = ĉ_k^(1) ⊕ ĉ_{k−1}^(1)    d̂_k^(0) = ĉ_k^(0) ⊕ ĉ_{k−1}^(0)    (6.175)

The bit map is given in Table 6.6.

Table 6.6 Bit map for the differential encoder 2B1Q

c_k^(1) c_k^(0) | transmitted symbol a_k
0 0 | −3
0 1 | −1
1 0 |  1
1 1 |  3
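A sketch of the 2B1Q example, checking that a change of sign of the transmitted sequence, which complements both bits of every symbol under the map of Table 6.6, is absorbed by the differential decoder apart from the first symbol:

```python
# four-level map of Table 6.6 and its inverse
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 0): 1, (1, 1): 3}
INV = {v: k for k, v in LEVELS.items()}

def encode_2b1q(dibits, c0=(0, 0)):
    # bitwise differential encoder (6.174) followed by the level map
    c, out = c0, []
    for d1, d0 in dibits:
        c = (c[0] ^ d1, c[1] ^ d0)
        out.append(LEVELS[c])
    return out

def decode_2b1q(symbols, c0=(0, 0)):
    # inverse level map followed by the bitwise decoder (6.175)
    prev, out = c0, []
    for a in symbols:
        c = INV[a]
        out.append((c[0] ^ prev[0], c[1] ^ prev[1]))
        prev = c
    return out

dib = [(1, 0), (0, 1), (1, 1), (0, 0)]
tx = encode_2b1q(dib)
tx_neg = [-a for a in tx]   # change of sign of the whole sequence
```

Negating every level maps −3 ↔ 3 and −1 ↔ 1, i.e. it complements both bits of each pair; the modulo-2 differences in (6.175) cancel the complement.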
6.6 AM-PM or quadrature amplitude modulation (QAM)

Quadrature amplitude modulation is another example of passband modulation. In fact QAM may be regarded as an extension of PSK: if the amplitudes |α_n|, n = 1, …, M, are not all equal, the transmitted signals are obtained by varying not only the phase of the carrier but also the amplitude, hence the name amplitude modulation-phase modulation (AM-PM).

Consider choosing a bit mapper that associates to a sequence of log₂M bits a symbol from a constellation of cardinality M and elements given by the complex numbers

α_n    n = 1, 2, …, M    (6.176)

If we modulate a symbol of the constellation by a real baseband pulse h_Tx with finite energy E_h and support (0, t₀), we obtain the isolated generic transmitted pulse given by

s_n(t) = Re[α_n h_Tx(t) e^{j2πf₀t}]    (6.177)

From (6.177) we also have

s_n(t) = α_{n,I} h_Tx(t) cos(2πf₀t) − α_{n,Q} h_Tx(t) sin(2πf₀t)    t ∈ ℝ, n = 1, …, M    (6.178)

where α_{n,I} and α_{n,Q} denote the real and imaginary part of α_n, respectively. The expression (6.178) indicates that the transmitted signal is obtained by modulating in amplitude two carriers in quadrature.

Energy of s_n: if f₀ is larger than the bandwidth of h_Tx, we have

E_n = |α_n|² E_h/2    (6.179)

Average energy of the system:

E_s = (1/M) Σ_{n=1}^{M} E_n    (6.180)

For a rectangular constellation, M = L², with

α_{n,I}, α_{n,Q} ∈ {−(L − 1), …, −3, −1, 1, 3, …, (L − 1)}    (6.181)

we have

E_s = 2 (L² − 1)/3 · E_h/2 = (M − 1)/3 E_h    (6.182)

hence

E_h = 3E_s/(M − 1)    (6.183)
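The average-energy relation (6.182) can be verified numerically:

```python
def square_qam_constellation(M):
    # rectangular constellation (6.181): components in {+-1, +-3, ..., +-(L-1)}
    L = round(M ** 0.5)
    assert L * L == M
    levels = range(-(L - 1), L, 2)
    return [complex(i, q) for i in levels for q in levels]

def avg_energy(points, E_h=2.0):
    # average energy (6.180) with E_n = |alpha_n|^2 E_h / 2, eq. (6.179)
    return sum(abs(a) ** 2 for a in points) * (E_h / 2.0) / len(points)

pts16 = square_qam_constellation(16)
```

For any square constellation the computed average matches E_s = (M − 1)/3 · E_h, e.g. 15 E_h/3 for 16-QAM.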
Basis functions: basis functions for the signals defined in (6.177) are given by

    φ_1(t) = sqrt(2/E_h) h_Tx(t) cos(2π f_0 t)
    φ_2(t) = -sqrt(2/E_h) h_Tx(t) sin(2π f_0 t)                    (6.184)

Vector representation:

    s_n = sqrt(E_h/2) [α_(n,I), α_(n,Q)]^T,   n = 1, ..., M        (6.185)

as illustrated in Figure 6.37 for various values of M. If the term sqrt(E_h/2) in (6.185) is normalized to one, then, except for the factor E_h, in a QAM system s_n coincides with α_n.

Figure 6.37. Signal constellations of M-QAM on the φ_1 (via I) - φ_2 (via Q) plane, for M = 4, 16, 32, 64, 128, 256.

It is important to observe that for the signals in (6.185) the minimum distance between two symbols is equal to

    d_min = sqrt(2 E_h)                                            (6.186)

Consequently, using (6.183),

    d_min = sqrt(6 E_s / (M - 1))                                  (6.187)

We note that, for every additional bit of information, that is doubling M, to maintain a given d_min we need to increase the average energy of the system by about 3 dB.
The bit map and the signal constellation of a 16-QAM system are shown in Figure 6.38.

Figure 6.38. Bit map and signal constellation of a 16-QAM system: symbols s_1, ..., s_16 on the grid {-3, -1, +1, +3}^2, with the bit pairs (b_1 b_2) and (b_3 b_4) Gray-mapped (00, 01, 11, 10) onto the two axes.

The transmitter of an M-QAM system is illustrated in Figure 6.39. We note that the signals that are multiplied by the two carriers are PAM signals: in this example they are 4-PAM signals.

Figure 6.39. Transmitter of a 16-QAM system for an isolated pulse.

The ML receiver for a 16-QAM system is illustrated in Figure 6.40. In general, given r = [r_I, r_Q]^T, we need to compute the M distances from the points s_n, n = 1, ..., M, and choose the nearest to r. We note, however, that as the 16-QAM constellation is rectangular, the decision regions are also rectangular and detection on the I and Q branches can be made independently by observing r_I and r_Q.

The following parameters of QAM systems are equal to those of PSK:

    B_min = 1/T                                                    (6.188)
    ν = log2 M                                                     (6.189)
Figure 6.40. ML receiver for a 16-QAM system.

Moreover, we have

    Γ_I = Γ                                                        (6.190)

and

    Γ = E_s / N_0                                                  (6.191)

Symbol error probability for M = L^2, rectangular constellation. We first evaluate the probability of correct decision for a 16-QAM signal. We need to consider the following cases (d = sqrt(2 E_h)):

    P[C | s_n] = [1 - Q(d/(2σ_I))]^2,                      n = 1, 4, 13, 16 (corner points)        (6.192)
    P[C | s_n] = [1 - Q(d/(2σ_I))][1 - 2Q(d/(2σ_I))],      n = 2, 3, 5, 8, 9, 12, 14, 15 (edge points)   (6.193)
    P[C | s_n] = [1 - 2Q(d/(2σ_I))]^2,                     n = 6, 7, 10, 11 (inner points)        (6.194)

The probability of error is then given by

    Pe = 1 - P[C] = 3 Q(d/(2σ_I)) - 2.25 [Q(d/(2σ_I))]^2 ≈ 3 Q(d/(2σ_I))        (6.195)

where the last approximation is valid for large values of d/(2σ_I).
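The averaging over the 4 corner, 8 edge, and 4 inner points can be checked numerically. The sketch below implements Q via the complementary error function, an implementation choice of this example rather than anything prescribed by the text.

```python
import math

def Q(x):
    # Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_16qam(d_over_2sigma):
    # average P[C | s_n] over 4 corner, 8 edge and 4 inner points (6.192)-(6.194)
    q = Q(d_over_2sigma)
    p_corner = (1 - q) ** 2
    p_edge = (1 - q) * (1 - 2 * q)
    p_inner = (1 - 2 * q) ** 2
    p_correct = (4 * p_corner + 8 * p_edge + 4 * p_inner) / 16
    return 1 - p_correct

# closed form (6.195): Pe = 3Q - 2.25 Q^2
x = 3.0
q = Q(x)
assert abs(pe_16qam(x) - (3 * q - 2.25 * q * q)) < 1e-12
```

Expanding the average indeed gives P[C] = 1 - 3Q + 2.25Q^2, which is how (6.195) follows.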
In general, for a rectangular constellation with M elements, we get

    Pe ≈ 4 (1 - 1/sqrt(M)) Q(d/(2σ_I))                             (6.196)

Another expression can be found in terms of Γ using (6.187), (6.190), and (6.191):

    Pe ≈ 4 (1 - 1/sqrt(M)) Q(sqrt(3Γ/(M - 1)))                     (6.197)

The bit error probability is approximated as

    Pbit ≈ Pe / log2 M                                             (6.198)

Curves of Pbit as a function of Γ are shown in Figure 6.41. We note that, if M is increased by a factor 4, to achieve a given Pbit we need to increase Γ by 6 dB: in other words, if we increase by one the number of bits per symbol, on average we need an increase of the energy of the system of 3 dB. We arrived at the same result using the notion of d_min in (6.186) and (6.187).

Figure 6.41. Bit error probability as a function of Γ = E_s/N_0 for M-QAM transmission with rectangular constellation, M = 4, 16, 64, 256.
Comparison between PSK and QAM

A comparison between the performance of the two modulation systems is shown in Figure 6.42. For given Pbit and M, QAM requires a lower Γ than PSK; equivalently, for given M ≥ 4 and Γ, QAM yields a lower Pbit, while having the same spectral efficiency as PSK. In general, for given M, the gain of QAM with respect to PSK is given in terms of Γ in Table 6.7, where only the argument of the Q function in the expression of Pbit is considered.

Figure 6.42. Comparison between PSK and QAM systems in terms of Pbit as a function of Γ = E_s/N_0, for M = 4, 8, 16, 32, 64, 256.

Table 6.7 Gain of QAM with respect to PSK in terms of Γ.

    M      10 log10 [ (3/(M-1)) / (2 sin^2(π/M)) ]  (dB)
    4       0.00
    8       1.65
    16      4.20
    32      7.02
    64      9.95
    128    12.92
    256    15.92
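The entries of Table 6.7 follow directly from the ratio between the squared Q-function arguments of the two systems; a short check (the function name is illustrative):

```python
import math

def qam_vs_psk_gain_db(M):
    # Gain of M-QAM over M-PSK: ratio between the squared arguments of the
    # Q function, 3*Gamma/(M-1) for QAM vs 2*Gamma*sin^2(pi/M) for PSK.
    return 10 * math.log10((3 / (M - 1)) / (2 * math.sin(math.pi / M) ** 2))

for M, gain in [(4, 0.00), (8, 1.65), (16, 4.20), (32, 7.02), (64, 9.95)]:
    assert abs(qam_vs_psk_gain_db(M) - gain) < 0.05
```

For M = 4 the two constellations coincide (4-QAM is 4-PSK), so the gain is 0 dB, and it grows by roughly 3 dB per doubling of M for large M.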
6.7 Modulation methods using orthogonal and biorthogonal signals

6.7.1 Modulation with orthogonal signals

The isolated generic transmitted pulse, s_n(t), belongs to a set of M orthogonal signals with support (0, t_0) and energy E_s, hence

    <s_i, s_j> = ∫_0^{t_0} s_i(t) s_j*(t) dt = E_s δ_ij,   i, j = 1, ..., M        (6.199)

A basis for these signals is simply given by the functions

    φ_n(t) = s_n(t) / sqrt(E_s),   n = 1, ..., M                   (6.200)

so that

    s_n = sqrt(E_s) [0, ..., 0, 1, 0, ..., 0]^T  (1 in the n-th position),   n = 1, ..., M        (6.201)

We note that the distance between any two signals is equal to d_min = sqrt(2 E_s). The vector representations of sets of orthogonal signals for M = 2 and M = 3 are illustrated in Figure 6.43.

Figure 6.43. Vector representations of sets of orthogonal signals for (a) M = 2 and (b) M = 3.

We will now consider a few examples of orthogonal signalling schemes.

Example 6.7.1 (Multilevel FSK)

1. Coherent:

    s_n(t) = A sin(2π f_n t + φ),   0 < t < T,   n = 1, ..., M     (6.202)

where the conditions

    f_n - f_{n-1} = 1/(2T),   f_n + f_ℓ = k/T (k integer), or else f_1 ≫ 1/T        (6.203)

guarantee orthogonality among the signals.
2. Non-coherent:

    s_n(t) = A sin(2π f_n t + φ_n),   0 < t < T,   n = 1, ..., M   (6.204)

where the conditions

    f_n - f_{n-1} = 1/T,   f_n + f_ℓ = k/T (k integer), or else f_1 ≫ 1/T        (6.205)-(6.206)

guarantee orthogonality among the signals. In (6.204) the uniform r.v. φ_n is introduced as each signal has an arbitrary phase, determined as discussed in Appendix 6.D.

We note that in both cases the bandwidth required by the passband modulation system is proportional to M. In the case of coherent demodulation, we use the definition

    B_min = M/(2T)                                                 (6.207)

Correspondingly, from (6.103) we have

    ν = 2 log2 M / M                                               (6.208)

and from (6.105)

    Γ = 2 E_s / (N_0 M)                                            (6.209)

As I = M, we also have Γ_I = Γ. For non-coherent demodulation, we use the definition B_min = M/T; correspondingly ν = (log2 M)/M, Γ = E_s/(N_0 M), and Γ_I = 2Γ.

Example 6.7.2 (Binary modulation with orthogonal signals)
In case we use only two orthogonal signals out of L available signals to realize a binary modulation, for B_min = L/(2T) the spectral efficiency is ν = 2/L. The system is not efficient in exploiting the available bandwidth.

Example 6.7.3 (Code division modulation)
Let

    {p_{n,j}},   n = 1, ..., M,   j = 0, ..., L - 1,   p_{n,j} ∈ {-1, 1}        (6.210)

be the Walsh code of length L = 2^m.
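A Walsh code of length L = 2^m can be generated with the standard Sylvester (Hadamard) recursion; the book does not prescribe this particular construction, it is one common way to obtain the mutually orthogonal rows used above.

```python
# Walsh (Hadamard) code of length L = 2^m via the Sylvester recursion
# H_{2L} = [[H_L, H_L], [H_L, -H_L]]; the L rows are mutually orthogonal.
def walsh(L):
    assert L > 0 and L & (L - 1) == 0, "L must be a power of 2"
    H = [[1]]
    while len(H) < L:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = walsh(8)
for n in range(8):
    for m in range(8):
        dot = sum(a * b for a, b in zip(H[n], H[m]))
        assert dot == (8 if n == m else 0)   # <s_i, s_j> = E_s * delta_ij
```

Each row, used as the chip sequence p_{n,j} in (6.212), yields one of the M orthogonal code-division signals.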
Choosing M ≤ L, the signal s_n is given by

    s_n(t) = Σ_{j=0}^{L-1} p_{n,j} w_{T_c}(t - j T_c),   0 < t < L T_c = T,   n = 1, ..., M        (6.212)

as the modulation is of the baseband type. Choosing M = L, from (6.93) we have

    R_I = log2 M / M                                               (6.213)

and, with B_min = M/(2T), the same parameters of coherent orthogonal modulation are obtained. Moreover, we note that also in this case the required bandwidth is proportional to 1/T_c = M/T.

Example 6.7.4 (Binary code division modulation)
We analyze in detail the previous example for M = 2 and L > M. In this case {s_n(t)}, n = 1, 2, coincide with two Walsh signals of length L and duration L T_c = T. Setting

    B_min = L/(2T)                                                 (6.215)

from (6.93), (6.103), (6.105), and (6.106) we obtain, respectively,

    R_I = 1/2 = 0.5                                                (6.216)
    ν = 1/((L/(2T)) T) = 2/L                                       (6.217)
    Γ = 2 E_s / (N_0 L)                                            (6.218)
    Γ_I = E_s / N_0                                                (6.219)

Example 6.7.5 (Binary orthogonal modulation with coding)
We have M = 2 as in the previous example; now, however, the elements of the set

    {s_n(t)},   n = 1, 2                                           (6.220)

are chosen as two Walsh signals of length L and duration L T_c = 2T. Redundancy is introduced because the behavior of s_n(t) in the interval (T, 2T) depends on the behavior in (0, T).
We note that this case can be regarded as an example of a repetition code where the same symbol is repeated twice: as T is the modulation interval, the number of information bits per interval is L_b = 1/2, since log2 M = 1 bit is carried over 2T. Consequently, with respect to the binary code division modulation presented above, bandwidth and rate are halved. With reference to the interval (0, 2T), note that we also have I = 2. We adopt the definition

    B_min = L/(4T)                                                 (6.221)

The other parameters are given by

    R_I = 0.5/2 = 0.25                                             (6.222)
    ν = 1/((L/(4T)) 2T) = 2/L                                      (6.223)
    Γ = 4 E_s / (N_0 L)                                            (6.224)

and

    Γ_I = E_s / N_0                                                (6.225)

Probability of error

The ML receiver is given by the general scheme of Figure 6.8, where I = M and φ_i is proportional to s_i according to (6.200). As the various signals have equal energy, the decision variables are given by

    U_n = Re[<r, s_n>] = Re[ ∫_0^{t_0} r(t) s_n*(t) dt ],   n = 1, ..., M        (6.226)

Assuming the signal s_m is transmitted, we have

    U_n = E_s δ_nm + Re[ ∫_0^{t_0} w(t) s_n*(t) dt ] = E_s δ_nm + sqrt(E_s) w_n        (6.227)

where w_n = Re[<w, φ_n>] is the n-th noise component. Then {U_n}, n = 1, ..., M, are Gaussian r.v.s with mean

    m_Un = E[U_n] = E_s δ_nm                                       (6.228)

and cross-covariance

    E[(U_n - m_Un)(U_ℓ - m_Uℓ)] = (N_0 E_s / 2) δ_ℓn               (6.229)

Hence the r.v.s {U_n} are statistically independent with variance E_s N_0 / 2.
The probability of correct decision, conditioned on s_m, is equal to

    P[C | s_m] = P[U_m > U_1, ..., U_m > U_{m-1}, U_m > U_{m+1}, ..., U_m > U_M]
               = ∫_{-∞}^{+∞} p_Um(a) [ ∫_{-∞}^{a} (1/sqrt(2π E_s (N_0/2))) e^{-b^2/(2 E_s (N_0/2))} db ]^{M-1} da        (6.230)

with p_Um(a) = (1/sqrt(2π E_s (N_0/2))) e^{-(a - E_s)^2/(2 E_s (N_0/2))}. With the change of variables

    α = (a - E_s)/sqrt(E_s N_0/2),   β = b/sqrt(E_s N_0/2)         (6.231)

it follows

    P[C | s_m] = ∫_{-∞}^{+∞} (1/sqrt(2π)) e^{-(1/2)(α - sqrt(2E_s/N_0))^2} [1 - Q(α)]^{M-1} dα        (6.232)

We note that (6.232) is independent of s_m: consequently P[C | s_m] is the same for each s_m. Therefore for equally likely signals we get

    P[C] = P[C | s_m]                                              (6.233)

The error probability is given by

    Pe = 1 - P[C] = 1 - ∫_{-∞}^{+∞} (1/sqrt(2π)) e^{-(1/2)(α - sqrt(2E_s/N_0))^2} [1 - Q(α)]^{M-1} dα        (6.234)

(Footnote 15: The computation of the integral (6.234) was carried out using the Hermite polynomial series expansion, as indicated in [5, page 294].)

Let M be a power of 2: with each signal s_m we associate a binary representation, also called character, with log2 M bits. Then a signal error occurs if a character different from the transmitted character is detected; this error event happens with probability Pe. For each bit of the transmitted character, among the possible (M - 1) wrong characters only M/2 yield a wrong bit. Therefore we have

    Pbit = Pe (M/2)/(M - 1) ≈ Pe/2                                 (6.235)

for M sufficiently large.

Curves of Pbit as a function of Γ = 2E_s/(N_0 M) and E_b/N_0 are given in Figure 6.44 and Figure 6.45, respectively. We note that, in contrast with QAM modulation, for a given Pbit, Γ decreases as M increases. The drawback is an increase of the required bandwidth with increasing M.
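The integral (6.234) can be evaluated numerically; the sketch below uses a plain trapezoidal rule on a wide interval rather than the Hermite series expansion mentioned in the footnote, and checks the M = 2 case against the known closed form Pe = Q(sqrt(E_s/N_0)).

```python
import math

def Q(x):
    # Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_orthogonal(M, es_over_n0, steps=8000):
    """Trapezoidal evaluation of (6.234):
    Pe = 1 - int phi(a - sqrt(2Es/N0)) * [1 - Q(a)]^(M-1) da."""
    mu = math.sqrt(2 * es_over_n0)
    lo, hi = mu - 10.0, mu + 10.0          # +-10 standard deviations
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        a = lo + k * h
        w = 0.5 if k in (0, steps) else 1.0
        phi = math.exp(-0.5 * (a - mu) ** 2) / math.sqrt(2 * math.pi)
        total += w * phi * (1.0 - Q(a)) ** (M - 1)
    return 1.0 - total * h

# consistency check: for M = 2 the integral reduces to Pe = Q(sqrt(Es/N0))
assert abs(pe_orthogonal(2, 4.0) - Q(2.0)) < 1e-5
```

For fixed E_s/N_0 the error probability grows with M, while for fixed E_b/N_0 it decreases, which is the behavior shown by the curves of Figures 6.44 and 6.45.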
Figure 6.44. Bit error probability as a function of Γ = 2E_s/(N_0 M) for transmission with M orthogonal signals, M = 2, 4, 8, 16, 32, 128.

Figure 6.45. Bit error probability as a function of E_b/N_0 for transmission with M orthogonal signals, M = 2, 4, 8, 16, 32, 128.
Limit of the probability of error for M increasing to infinity

We give in Table 6.8 the values of E_b/N_0 needed to achieve Pbit = 10^-6 for various values of M. Exploiting the bound (6.84), a useful approximation of Pbit is given by

    Pbit ≤ (M/2) Q(sqrt(E_s/N_0))                                  (6.236)

Figure 6.46 shows a comparison between the error probability obtained by exact computation and the bound (6.236) for two values of M.

Figure 6.46. Comparison between the exact error probability and the bound (6.236) for transmission with M orthogonal signals.

In fact we can show that Pbit → 0 for M → ∞ only if the following condition is satisfied:

    E_b/N_0 > ln 2 ≅ -1.59 dB                                      (6.237)-(6.238)

otherwise Pbit → 1. Therefore -1.59 dB is the minimum value of E_b/N_0 that is necessary to reach an error probability that can be made arbitrarily small for M → ∞ (see Section 6.10).
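Both the union bound (6.236) and the -1.59 dB limit are easy to reproduce; the helper name is illustrative.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pbit_union_bound(M, es_over_n0):
    # bound (6.236): Pbit <= (M/2) Q(sqrt(Es/N0))
    return (M / 2) * Q(math.sqrt(es_over_n0))

# the asymptotic condition (6.237)-(6.238): Eb/N0 > ln 2, i.e. about -1.59 dB
limit_db = 10 * math.log10(math.log(2))
assert abs(limit_db - (-1.59)) < 0.01
```

For M = 2 the bound is tight by construction, since (M/2) Q(sqrt(E_s/N_0)) then reduces to the single pairwise error term.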
Table 6.8 lists the values of E_b/N_0 required to obtain Pbit = 10^-6 for M = 2^3, 2^4, 2^5, 2^6, 2^10, 2^15, 2^20, ..., ∞; the required value decreases monotonically with M, down to the limit -1.59 dB for M → ∞.

6.7.2 Modulation with biorthogonal signals

The elements of a set of M biorthogonal signals are M/2 orthogonal signals and their antipodal signals: for example, 4-PSK is a biorthogonal signalling scheme. A further example of biorthogonal signalling with 2M signals is given by a signalling scheme using the M orthogonal signals in (6.212) and their antipodal signals.

For biorthogonal signalling with M signals, we give the parameters of the system in the two cases of non-coherent and coherent demodulation.

Passband signalling with non-coherent demodulation:

    B_min = (M/2)(1/T)                                             (6.239)
    ν = 2 log2 M / M                                               (6.240)
    Γ = 2 E_s / (N_0 M)                                            (6.241)
    Γ_I = 2Γ                                                       (6.242)

Baseband signalling, or passband signalling with coherent demodulation:

    B_min = (M/2)(1/(2T))                                          (6.243)
.244) (6.246) Probability of error The receiver consists of M=2 correlators.249) The bit error probability can be approximated as Pbit ' 1 Pe 2 (6. jUm j > jU M=2 j] Z D 0 C1 1 j.248) [1 2Q. To compute the probability of correct decision. A bound to (6.48. Figure 6.250) Curves of Pbit as a function of 0 D 4E s =. Assuming that sm is taken as one of the signals of the basis.M 2/Q N0 N0 where the ﬁrst term arises from the comparison with . Modulation theory ¹D4 0D and log2 M M (6. : : : . jUm j > jUm > jUmC1 j.Þ/] M=2 1 dÞ The symbol error probability is given by Pe D 1 P[C j sm ] (6.251) Pe Ä .N0 M/ and E b =N0 are plotted. subsequently it selects si or si depending on the sign of Ui . respectively.247) The optimum receiver selects the output with the largest absolute value. for various values of M.245) 4E s N0 M 0I D 0 (6. in Figure 6.494 Chapter 6. M 2 (6.251) for two values of M. or matched ﬁlters. : : : .249) for transmission with M biorthogonal signals is given by s s ! ! Es 2E s CQ (6.M 2/ orthogonal signals. and the second arises from the comparison with an antipodal signal.47 and in Figure 6. then P[C j sm ] D P[Um > 0. we proceed as in the previous case. jUi j. which provide the decision variables fUn g n D 1. jUm j 1 p e 2³ 1 2 Þ r 2E s N0 !2 (6. jUm j > jU1 j.49 shows a comparison between the error probability obtained by exact computation and the bound (6. : : : .
Figure 6.47. Bit error probability as a function of Γ = 4E_s/(N_0 M) for transmission with M biorthogonal signals, M = 2, 4, 8, 16, 32, 128.

Figure 6.48. Bit error probability as a function of E_b/N_0 for transmission with M biorthogonal signals, M = 2, 4, 8, 16, 32, 128.
Figure 6.49. Comparison between the exact error probability and the bound (6.251) for transmission with M biorthogonal signals.

6.8 Binary sequences and coding

We consider a baseband signalling scheme where the transmitted signal is given by

    s_n(t) = sqrt(E_w) Σ_{j=0}^{n_0 - 1} c_{n,j} w̃_T(t - jT),   0 < t < n_0 T = T_s,   n = 1, ..., M        (6.252)

where c_{n,j} ∈ {-1, 1}, and

    w̃_T(t) = (1/sqrt(T)) rect((t - T/2)/T)                        (6.253)

is the normalized rectangular window of duration T (see (1.456)) with unit energy. Then E_w is the energy of the pulse s_n evaluated on a generic subperiod T. We now derive the parameters of the system and the structure of the optimum receiver.

Uncoded sequences. Every sequence of n_0 binary coefficients c_n = [c_{n,0}, ..., c_{n,n_0-1}]^T, c_{n,j} ∈ {-1, 1}, is allowed; it follows M = 2^{n_0}. Interpreting the n_0 pulses w̃_T(t - jT), j = 0, ..., n_0 - 1, as elements of an orthonormal basis, we have I = n_0. Moreover, for a modulation interval T_s, we have L_b = log2 M = n_0, hence

    R_I = log2 M / I = 1                                           (6.254)

and

    B_min = 1/(2T)                                                 (6.255)
For uncoded sequences we have

    E_s = n_0 E_w                                                  (6.256)
    E_I = E_s/I = E_w                                              (6.257)
    E_b = E_I/R_I = E_w                                            (6.258)
    Γ_I = E_I/(N_0/2) = 2E_w/N_0 = 2E_b/N_0                        (6.259)
    Γ = E_s/(T_s N_0 (1/(2T))) = Γ_I                               (6.260)

Moreover, the minimum distance between two elements of the set of signals (6.252) is equal to d_min = sqrt(4E_w) = 2 sqrt(E_w). The error probability is determined by the ratio (see (6.57))

    (d_min/(2σ_I))^2 = d_min^2/(2N_0) = 2E_w/N_0 = 2E_b/N_0        (6.261)

where in the last step equation (6.258) is used.

Coded sequences. We consider a set of signals (6.252) corresponding to M = 2^{k_0} binary sequences c_n with n_0 components, assuming that only k_0 components in (6.252), as for example those with index j = 0, 1, ..., k_0 - 1, can assume values in {-1, 1} arbitrarily: these components determine, through appropriate binary functions, also the remaining n_0 - k_0 components. Because the number of elements of the basis is always I = n_0, we have

    R_I = log2 M / I = k_0/n_0                                     (6.262)
    E_s = n_0 E_w                                                  (6.263)
    E_I = E_s/I = E_w                                              (6.264)
    E_b = E_I/R_I = (n_0/k_0) E_w                                  (6.265)
    Γ_I = 2E_w/N_0 = (k_0/n_0)(2E_b/N_0)                           (6.266)
    Γ = Γ_I                                                        (6.267)

Indicating with d_min^H the minimum number of positions in which two vectors c_n differ, we find

    d_min^2 = 4 E_w d_min^H                                        (6.268)
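As an illustration of (6.268), consider the (n_0, k_0) = (4, 2) code of (6.269), in which each information bit is repeated twice, c = (b_1, b_1, b_2, b_2). A few lines confirm its minimum Hamming distance and show that, being a simple repetition-style code, it satisfies d_min^H R_I = 1 and hence brings no asymptotic gain.

```python
# Minimum Hamming distance of the (4, 2) example code: c = (b1, b1, b2, b2).
from itertools import combinations

codewords = [(-1, -1, -1, -1), (-1, -1, +1, +1),
             (+1, +1, -1, -1), (+1, +1, +1, +1)]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

d_min_H = min(hamming(u, v) for u, v in combinations(codewords, 2))
R_I = 2 / 4
assert d_min_H == 2
assert d_min_H * R_I == 1.0   # no coding gain: same ratio as the uncoded system
```

By (6.268) this gives d_min^2 = 8 E_w; a code is useful, for a given E_b/N_0, only when d_min^H R_I exceeds 1.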
Using (6.265), the signal-to-noise ratio at the decision point is given by

    γ = (d_min/(2σ_I))^2 = 4 d_min^H E_w/(2N_0) = d_min^H R_I (2E_b/N_0)        (6.270)

We note that for a given value of E_b/N_0 the coded system presents a larger γ, and consequently a lower bit error probability, if d_min^H R_I > 1. We will discuss in Chapter 11 the design of codes that yield a large d_min^H for given values of the parameters n_0 and k_0; alternative coding methods will be examined in Chapter 12. A drawback of these systems is represented by the reduction of the transmission bit rate R_b for a given modulation interval T_s.

An example of coding is given by the choice of the following vectors (code sequences or code words) for n_0 = 4 and k_0 = 2:

    c_0 = [-1, -1, -1, -1]^T    c_1 = [-1, -1, +1, +1]^T
    c_2 = [+1, +1, -1, -1]^T    c_3 = [+1, +1, +1, +1]^T           (6.269)

For this signalling system we have d_min^H = 2, and therefore d_min^2 = 8 E_w.

Optimum receiver

As the elements of the orthonormal basis (6.253) are obtained by shifting the pulse w̃_T(t), the optimum receiver can be simplified as illustrated in Figure 6.50.

Figure 6.50. ML receiver for the signal set (6.252).

With reference to the implementation of Figure 6.50, the projections of the received signal r(t) onto the components of the basis (6.253) are obtained sequentially.
The vector r = [r_0, r_1, ..., r_{n_0-1}]^T is then formed, and its components are used to compute the Euclidean distances from each of the 2^{k_0} possible code sequences; according to the ML criterion, the detected code sequence ĉ = [ĉ_0, ..., ĉ_{n_0-1}]^T is the one at minimum distance from r. This procedure is usually called soft-input decoding. In the binary case under examination, the receiver can be simplified by computing the Euclidean distance component by component; under the assumption of uncoded sequences, the scheme of Figure 6.51 yields the detected signal with components ĉ_i = sgn(r_i).

Figure 6.51. ML receiver for the signal set (6.252) under the assumption of uncoded sequences.

In some receivers for coded systems, a simplification of the scheme of Figure 6.50 is obtained by first detecting the single components č_i ∈ {-1, 1} according to the scheme of Figure 6.51,

    č_i = sgn(r_i),   i = 0, ..., n_0 - 1                          (6.271)

The resulting channel model (memoryless binary symmetric) is that of Figure 6.21. The binary vector č = [č_0, ..., č_{n_0-1}]^T is formed; then we choose among the possible code sequences c_n, n = 1, ..., 2^{k_0}, the one that differs in the smallest number of positions with respect to the sequence č. This scheme is usually called hard-input decoding and is clearly suboptimum as compared to the scheme with soft input.
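The two decoding strategies can be contrasted on the (4, 2) example code; the function names are illustrative. For this code, any sliced vector that is not itself a codeword lies at Hamming distance 1 from two different codewords, so the hard decoder faces a tie, while the soft decoder resolves it using the amplitudes of the received components.

```python
# Soft- vs hard-input decoding for the (4, 2) example code of (6.269).
codewords = [(-1, -1, -1, -1), (-1, -1, 1, 1), (1, 1, -1, -1), (1, 1, 1, 1)]

def soft_decode(r):
    # minimum Euclidean distance over the codewords (ML, soft input)
    return min(codewords, key=lambda c: sum((ri - ci) ** 2 for ri, ci in zip(r, c)))

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

r = (0.9, -0.2, -0.2, -0.2)
sliced = tuple(1 if x >= 0 else -1 for x in r)       # hard decisions, (6.271)
dists = [hamming(sliced, c) for c in codewords]
assert dists.count(min(dists)) == 2                  # hard decoding is ambiguous
assert soft_decode(r) == (1, 1, -1, -1)              # soft decoding is not
```

Slicing discards the reliability information carried by each r_i, which is why hard-input decoding is suboptimum.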
6.9 Comparison between coherent modulation methods

Table 6.9 summarizes some important results derived in the previous sections. For a given value of the symbol error probability, we now derive Γ_I as a function of R_I for some multilevel modulations. The result will be compared with the Shannon limit given by Γ_I = 2^{2R_I} - 1, that represents the minimum theoretical value of Γ_I, in correspondence of a given R_I, for which Pbit can be made arbitrarily small by using channel coding without constraints in complexity and latency (see Section 6.10).

Passband PAM is considered as single sideband (SSB) modulation or double sideband (DSB) modulation (see Appendix 7.C). For a given value of the symbol error probability, PAM, PAM+SSB and PAM+DSB methods require the same statistical power; however, the PAM+DSB technique has a B_min that is double as compared to PAM or PAM+SSB methods. In the latter case B_min is equal to 1/T, hence Γ = E_s/N_0.
Table 6.9 Comparison of various modulation methods in terms of performance, bandwidth, and spectral efficiency.

    Modulation              Approximate Pe                              R_I (bit/dim)   Γ               B_min    ν = (1/T_b)/B_min
    binary antipodal (BB)   Q(sqrt(Γ))                                  1               2E_s/N_0        1/(2T)   2
    M-PAM (BB)              2(1 - 1/M) Q(sqrt(3Γ/(M^2-1)))              log2 M          2E_s/N_0        1/(2T)   2 log2 M
    M-PAM + SSB             2(1 - 1/M) Q(sqrt(6Γ/(M^2-1)))              log2 M          E_s/N_0         1/(2T)   2 log2 M
    M-PAM + DSB             2(1 - 1/M) Q(sqrt(3Γ/(M^2-1)))              (log2 M)/2      E_s/N_0         1/T      log2 M
    M-QAM (M = L^2)         4(1 - 1/sqrt(M)) Q(sqrt(3Γ/(M-1)))          (log2 M)/2      E_s/N_0         1/T      log2 M
    BPSK or 2-PSK           Q(sqrt(2Γ))                                 1/2             E_s/N_0         1/T      1
    QPSK or 4-PSK           2 Q(sqrt(Γ))                                1               E_s/N_0         1/T      2
    M-PSK (M > 4)           2 Q(sqrt(2Γ) sin(π/M))                      (log2 M)/2      E_s/N_0         1/T      log2 M
    orthogonal (BB)         (M - 1) Q(sqrt(MΓ/2))                       (log2 M)/M      2E_s/(N_0 M)    M/(2T)   2 log2 M/M
    biorthogonal (BB)       (M - 2) Q(sqrt(MΓ/4)) + Q(sqrt(MΓ/2))       (2 log2 M)/M    4E_s/(N_0 M)    M/(4T)   4 log2 M/M
    (M > 2)

We note that an equivalent approach often adopted in the literature is to give E_b/N_0, related to Γ_I through (6.107), as a function of ν, related to R_I through (6.104). A first comparison is made by assuming the same symbol error probability, Pe = 10^-6, for all systems, considering only the argument of the Q function in Table 6.9; we have the following results.

1. M-PAM. As Q(z_0) = 10^-6 implies z_0^2 ≈ 22, from 3Γ/(M^2 - 1) = z_0^2 (6.272) and Γ_I = Γ we obtain the following relation:

    Γ_I = (z_0^2/3)(2^{2R_I} - 1)                                  (6.273)

with

    R_I = log2 M                                                   (6.274)
We note that, for a given value of R_I, the required Γ_I is much larger than the minimum value obtained by the Shannon limit; as will be discussed in Section 6.10, the gap can be reduced by channel coding.

2. M-QAM. From 3Γ/(M - 1) = z_0^2 and Γ_I = Γ we obtain

    Γ_I = (z_0^2/3)(2^{2R_I} - 1)                                  (6.275)

with

    R_I = (1/2) log2 M                                             (6.276)

We note that for QAM a certain R_I is obtained with a number of symbols equal to M_QAM = 2^{2R_I}, whereas for PAM the same efficiency is reached for M_PAM = 2^{R_I} = sqrt(M_QAM). For a given value of R_I, PAM and QAM require the same value of Γ_I.

3. M-PSK. With

    R_I = (1/2) log2 M                                             (6.277)

it turns out

    Γ_I = (z_0^2/20) 2^{4R_I}                                      (6.278)

Equation (6.278) holds for M ≥ 4, and is obtained by approximating sin(π/M) with π/M, and π^2 with 10. We also note that PAM and QAM allow a lower Γ_I with respect to PSK; moreover, for large R_I, PSK requires a much larger value of Γ_I.

4. Orthogonal modulation. Using the approximation

    Pe ≈ (M - 1) Q(sqrt(MΓ/2))                                     (6.279)

we note that the multiplicative constant in front of the Q function cannot be ignored: therefore a closed-form analytical expression for Γ_I as a function of R_I for a given Pe cannot be found. Orthogonal modulation operates with R_I < 1, and correspondingly very small values of Γ_I.

5. Biorthogonal modulation. The symbol error probability is approximately the same as that of orthogonal modulation for half the number of signals, but requires half the bandwidth; both R_I and ν are doubled.

Using the Pbit curves previously obtained, the behavior of R_I as a function of Γ_I for Pbit = 10^-6 is illustrated in Figure 6.52.
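The gap between the uncoded requirement (6.273) and the Shannon limit is a constant factor z_0^2/3, independent of R_I; a quick check (z_0 taken as the value for which Q(z_0) ≈ 10^-6, an assumption of this sketch):

```python
import math

z0 = 4.7534   # Q(z0) is approximately 1e-6, so z0^2 is about 22.6

def gamma_pam_db(RI):
    # uncoded M-PAM / M-QAM requirement (6.273): Gamma_I = (z0^2/3)(2^(2 R_I) - 1)
    return 10 * math.log10(z0 ** 2 / 3 * (2 ** (2 * RI) - 1))

def shannon_db(RI):
    # Shannon limit: Gamma_I = 2^(2 R_I) - 1
    return 10 * math.log10(2 ** (2 * RI) - 1)

gap = gamma_pam_db(3) - shannon_db(3)
assert abs(gap - 10 * math.log10(z0 ** 2 / 3)) < 1e-9   # about 8.8 dB
```

At Pe = 10^-6 the gap is about 8.8 dB for every rate, which quantifies the margin that channel coding can recover.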
Figure 6.52. Γ_I required for a given rate R_I, for different modulation methods and bit error probability Pbit = 10^-6; the parameter on the curves denotes the number of symbols M of the constellation. The Shannon limit Γ_I = 2^{2R_I} - 1 is also shown.

Comparison of modulation methods

PAM, QAM, and PSK are bandwidth-efficient modulation methods, as they cover the region R_I > 1, or equivalently ν > 2, or B_min < 1/(2T_b), in Figure 6.52; in this region, higher values of Γ are required to increase ν. Orthogonal and biorthogonal modulation are not very efficient in bandwidth (R_I < 1), but require much lower values of Γ; in this region, by increasing the bandwidth it is possible to decrease Γ. The Pbit of orthogonal or biorthogonal modulation is almost independent of M; biorthogonal modulation (see (6.249)) has approximately the same performance as orthogonal modulation (see (6.234)), but requires half the bandwidth.

Tradeoffs for QAM systems

There are various tradeoffs that are possible among the parameters of a modulation method. We assume that 1/T_b is fixed, and consider for example Figure 6.41 for M-QAM, where the parameter is ν = log2 M = (1/T_b)/B_min, from which the required bandwidth is also obtained. For a given Pbit, we obtain Γ as a function of ν; for fixed Γ, we get ν as a function of Pbit. The bandwidth is thus traded off with the power, that is Γ: by increasing the number of levels the bandwidth is reduced at the cost of a higher Γ; conversely, a slight decrease in Γ may determine a large increase of the bandwidth. We note that to modify ν a modulator with a different constellation must be adopted.
The redundancy of the alphabet can be used to encode sequences of information bits: in this case we speak of coded systems (see Example 6.7.5). For example, the encoder-modulator for the 8-PAM system with bit map defined in Table 6.1 has rate R_I = 3 (bit/dim): as L_b = 3 and I = 1, the cardinality of the alphabet A is equal to M = 2^{R_I} = 8. Let us take instead a PAM system with R_I = 3 and M = 16: redundancy may be introduced in the sequence of transmitted symbols, such that L_b < log2 M, that is M > 2^{R_I} from (6.95).

6.10 Limits imposed by information theory

We consider the transmission of signals with a given power over an AWGN channel having noise power spectral density equal to N_0/2. Channel capacity is defined as the maximum of the average mutual information between the input and output signals of the channel [6, 7]. For transmission over an ideal AWGN channel, the channel capacity is given in bits per second by

    C[b/s] = B log2(1 + Γ)   (bit/s)                               (6.280)

where Γ is obtained from (6.105) by choosing B_min = B. Equation (6.280) is a limit derived by Shannon assuming the transmitted signal s(t) is a Gaussian random process with zero mean and constant power spectral density in the passband B. We define the maximum spectral efficiency as

    ν_max = C[b/s]/B = log2(1 + Γ)   (bit/s/Hz)                    (6.281)

With reference to a message composed of a sequence of symbols, which belong to an I-dimensional space, using (6.280) and (6.103) the capacity can be expressed in bits per dimension as

    C = (1/2) log2(1 + Γ_I)   (bit/dim)                            (6.282)
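The two capacity expressions above are one-liners; a small numeric sanity check (function names are illustrative):

```python
import math

def max_spectral_efficiency(gamma):
    # nu_max = log2(1 + Gamma)   (6.281)
    return math.log2(1 + gamma)

def capacity_bits_per_dim(gamma_I):
    # C = 0.5 * log2(1 + Gamma_I)   (6.282)
    return 0.5 * math.log2(1 + gamma_I)

# Gamma = 3 (about 4.8 dB) supports at most 2 bit/s/Hz
assert abs(max_spectral_efficiency(3) - 2.0) < 1e-12
assert abs(capacity_bits_per_dim(3) - 1.0) < 1e-12
```

Inverting (6.282) gives the Shannon limit Γ_I = 2^{2C} - 1 used in the comparisons of Section 6.9.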
The capacity depends mainly on the energy E_s of the signal and on the spectral density N_0/2 of the noise. We recall the definition (1.135) of the passband B associated with the frequency response of a channel, with bandwidth given by (1.140), B = ∫_B df, and the definition (6.93) of the encoder-modulator rate, R_I = L_b/I, where I is the number of signal space dimensions. The mapping of sequences of information bits into sequences of coded output symbols may be described by a finite-state sequential machine; some specific examples will be illustrated in Chapter 12.

In addition to the required power and bandwidth, the choice of a modulation scheme is based on the channel characteristics and on the cost of the implementation: until recently, non-coherent receivers were preferred in radio mobile systems because of their simplicity, even though the performance is inferior to that of coherent receivers (see Chapter 18) [2].
The capacity C, as well as the signal-to-noise ratio given by (6.106), can be upper limited and approximated for small values of Γ_I by a linear function, and lower limited and approximated for large values of Γ_I by a logarithmic function, as follows:

    Γ_I ≪ 1 :   C ≤ (1/2) log2(e) Γ_I                              (6.283)
    Γ_I ≫ 1 :   C ≥ (1/2) log2(Γ_I)                                (6.284)

We give without proof the following fundamental theorem [8, 6].

Theorem 6.2 (Shannon's theorem)
For any rate R_I < C, there exists channel coding that allows transmission of information with an arbitrarily small probability of error; such coding does not exist if R_I > C.

We note that Shannon's theorem indicates the limits, in terms of encoder-modulator rate or, equivalently, in terms of transmission bit rate (see (6.280)), within which we can develop systems that allow reliable transmission of information, but it does not give any indication about the practical realization of channel coding. Extension of the capacity formula for an AWGN channel to multi-input multi-output (MIMO) systems can be found in [9, 10].

Capacity of a system using amplitude modulation

Let us consider an M-PAM system with M ≥ 2. The capacity of a real-valued AWGN channel having as input an M-PAM signal is given in bits per dimension by [11]

    C = max_{p_n} Σ_{n=1}^{M} p_n ∫_{-∞}^{+∞} p_{r|a_0}(η | α_n) log2 [ p_{r|a_0}(η | α_n) / Σ_{i=1}^{M} p_i p_{r|a_0}(η | α_i) ] dη        (6.285)

where p_n indicates the probability of transmission of the symbol a_0 = α_n. By the hypothesis of white Gaussian noise, we have

    p_{r|a_0}(η | α_n) ∝ exp( -(η - α_n)^2 / (2σ_I^2) )            (6.286)

With the further hypothesis that only codes with equally likely symbols are of practical interest, the computation of the maximum of C with respect to the probability distribution of the input signal can be omitted. The channel capacity is therefore given by

    C = log2 M - (1/M) Σ_{n=1}^{M} ∫_{-∞}^{+∞} (e^{-ξ^2/(2σ_I^2)} / sqrt(2πσ_I^2)) log2 [ Σ_{i=1}^{M} exp( -((α_n + ξ - α_i)^2 - ξ^2)/(2σ_I^2) ) ] dξ        (6.287)
The capacity (6.282) can be upper bounded, and approximated for small values of Γ_I, by a linear function, and lower bounded, and approximated for large values of Γ_I, by a logarithmic function, as follows:

    Γ_I ≪ 1 :   C ≤ (1/2) (log2 e) Γ_I
    Γ_I ≫ 1 :   C ≥ (1/2) log2 Γ_I

where Γ_I denotes the signal-to-noise ratio per dimension.
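These two bounds are easily checked numerically. A minimal sketch (Python; the function names are ours):

```python
import math

def capacity_awgn(gamma):
    """Capacity per dimension of the ideal AWGN channel, C = (1/2) log2(1 + Gamma)."""
    return 0.5 * math.log2(1.0 + gamma)

# Small Gamma: C is upper-bounded (and well approximated) by (Gamma/2) log2(e).
gamma = 0.01
linear = 0.5 * gamma * math.log2(math.e)
assert capacity_awgn(gamma) <= linear
assert abs(capacity_awgn(gamma) - linear) / linear < 0.01   # gap below 1%

# Large Gamma: C is lower-bounded (and well approximated) by (1/2) log2(Gamma).
gamma = 1000.0
logarithmic = 0.5 * math.log2(gamma)
assert capacity_awgn(gamma) >= logarithmic
assert abs(capacity_awgn(gamma) - logarithmic) < 0.001      # gap below 0.001 bit
```

The logarithmic regime is the origin of the familiar rule that, at high SNR, each additional bit per dimension costs about 6 dB of signal-to-noise ratio.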
Figure 6.53. Capacity of an ideal AWGN channel for Gaussian and M-PAM input signals. The values of Γ_I at which a symbol error probability of 10⁻⁶ is obtained for uncoded transmission are also indicated [12]. [From Forney and Ungerboeck (1998). © 1998 IEEE.]

We see from Figure 6.53 that for small values of Γ_I the choice of a binary alphabet is almost optimum: in fact, for Γ_I < 1 (0 dB) the capacity given by (6.287) with a binary alphabet of input symbols is essentially equivalent to the capacity given by (6.282). For large values of Γ_I, the capacity of multilevel systems asymptotically approximates a straight line that is parallel to the capacity of the AWGN channel; the asymptotic loss of πe/6 (1.53 dB) is due to the choice of a uniform rather than a Gaussian distribution for the set of input symbols. We note that the curves saturate, as information cannot be transmitted at a rate larger than R_I = log2 M. In general, by doubling the number of symbols with respect to an uncoded system, we obtain in practice the entire gain that can be expected from the expansion of the input alphabet; if the number of symbols is further increased, the additional achievable gain is negligible.

Let us consider, for example, the uncoded transmission of 1 bit of information per modulation interval by a 2-PAM system, for which a symbol error probability equal to 10⁻⁶ is obtained for Γ_I = 13.5 dB. Choosing 4-PAM modulation, we see that the coded transmission of 1 bit of information per modulation interval with rate R_I = 1 is possible, and an arbitrarily small error probability can be obtained for Γ_I = 5 dB.
This indicates that a coded 4-PAM system may achieve a gain of about 8.5 dB in signal-to-noise ratio over an uncoded 2-PAM system.
To achieve the Shannon limit it is not sufficient to use coding techniques with equally likely input symbols, no matter how sophisticated they are: shaping techniques are required [13] that produce a distribution of the input symbols similar to a Gaussian distribution.

Coding strategies depending on the signal-to-noise ratio

We note from Figure 6.53 that for high values of Γ_I it is possible to find coding methods that allow the reliable transmission of several bits per dimension. Coding techniques for small Γ_I and large Γ_I are therefore quite different: for low Γ_I, binary codes are almost optimum and shaping of the constellation is not necessary; for high Γ_I, instead, constellations with more than two elements must be used and, to reach capacity, coding must be extended with shaping. We now consider the two cases.

High signal-to-noise ratios. The capacity formula (6.282) can be expressed as Γ_I = 2^{2C} − 1; this suggests the definition of the normalized signal-to-noise ratio

    Γ̄_I = Γ_I / (2^{2R_I} − 1)                                               (6.288)

for a given rate R_I. For a scheme that achieves capacity, R_I is equal to the capacity C of the channel and Γ̄_I = 1 (0 dB); if R_I < C, as it must be in practice, then Γ̄_I > 1. Therefore the value of Γ̄_I indicates how far from the Shannon limit a system operates. For an uncoded M-PAM system,

    R_I = log2 M                                                              (6.289)

and, using (6.288), we obtain

    Γ̄_I = Γ_I / (M² − 1)                                                     (6.290)

The average symbol error probability is given by (6.124); in terms of Γ̄_I it becomes

    P_e = 2 (1 − 1/M) Q( √(3 Γ̄_I) )                                          (6.291)

We note that P_e is a function only of M and Γ̄_I; moreover, for large M,

    P_e ≈ 2 Q( √(3 Γ̄_I) )                                                    (6.292)

In other words, the relation between P_e and Γ̄_I is almost independent of M. This relation is used in the comparison, illustrated in Figure 6.54, between uncoded systems and the Shannon limit given by Γ̄_I = 1.
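The near-independence of (6.291) from M, at a fixed normalized SNR, can be verified directly; the only residual dependence is the prefactor 2(1 − 1/M), which moves between 1 and 2. A short sketch (Python; function names are ours):

```python
import math

def qfunc(x):
    # Q(a) = (1/2) erfc(a / sqrt(2)); see the appendix on the Q function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_mpam(M, gamma_norm):
    """Symbol error probability (6.291) of uncoded M-PAM versus the
    normalized SNR Gamma_bar = Gamma / (M^2 - 1)."""
    return 2.0 * (1.0 - 1.0 / M) * qfunc(math.sqrt(3.0 * gamma_norm))

gamma_norm = 10 ** (8.0 / 10.0)          # Gamma_bar = 8 dB
pes = [pe_mpam(M, gamma_norm) for M in (2, 4, 8, 16)]
# All error probabilities lie within a factor 2 of each other.
assert max(pes) / min(pes) < 2.0
```

On a logarithmic error-probability axis, such a factor is barely visible, which is why a single curve versus Γ̄_I represents all uncoded M-PAM systems in Figure 6.54.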
Such shaping bridges the gap that separates the system from capacity. Moreover, as we will see in Chapter 13, to reach capacity over channels with limited bandwidth, techniques are required that combine coding, shaping, and equalization.
Figure 6.54. Bit error probability as a function of E_b/N_0 for an uncoded 2-PAM system, and symbol error probability as a function of Γ̄_I for an uncoded M-PAM system. [From Forney and Ungerboeck (1998). © 1998 IEEE.]

Low signal-to-noise ratios. For systems with limited power and unlimited bandwidth, E_b/N_0 is usually adopted as a figure of merit. For low values of Γ_I the capacity is less than 1 bit per dimension and can be approached by binary transmission systems: consequently we refer to coding methods that employ several binary symbols to obtain the reliable transmission of 1 bit of information (see Section 6.7). Indeed, if the bandwidth, or equivalently the number of dimensions of the input signals, can be extended without limit for a given power, for example by using an orthogonal modulation with T → 0 (see Example 6.3), then by increasing the bandwidth both Γ_I and R_I tend to zero. For low values of Γ_I it is customary to introduce the following ratio (see (6.107)):

    E_b/N_0 = Γ̄_I (2^{2R_I} − 1) / (2 R_I)                                   (6.293)

We note the following particular cases:
• if R_I ≪ 1, then E_b/N_0 ≈ (ln 2) Γ̄_I;
• if R_I = 1/2, then E_b/N_0 = Γ̄_I;
• if R_I = 1, then E_b/N_0 = (3/2) Γ̄_I.
From (6.293) and the Shannon limit Γ̄_I > 1, we obtain the Shannon limit in terms of E_b/N_0 for a given rate R_I as

    E_b/N_0 > (2^{2R_I} − 1) / (2 R_I)                                        (6.294)

This lower limit decreases monotonically as R_I decreases; letting R_I → 0, we obtain the ultimate Shannon limit

    E_b/N_0 > ln 2   (−1.59 dB)                                               (6.295)

In other words, (6.295) affirms that, even though an infinitely large bandwidth is used, reliable transmission can be achieved only if E_b/N_0 > −1.59 dB. We examine again the three cases:
• if R_I tends to zero, that is if the bandwidth can be sufficiently extended to allow the use of binary codes with R_I ≪ 1, the limit is E_b/N_0 > ln 2;
• if R_I = 1/2, the limit becomes E_b/N_0 > 1 (0 dB); in this case the bandwidth needs to be extended only by a factor of 2 with respect to an uncoded system;
• if instead the bandwidth is limited, for example with R_I = 1, from (6.294) we find that the Shannon limit in terms of E_b/N_0 is higher.

Coding gain

Definition 6.2
The coding gain of a coded modulation scheme is equal to the reduction in the value of E_b/N_0, or in the value of Γ or Γ̄_I (see (11.9)), that is required to obtain a given probability of error relative to a reference uncoded system.

Let us consider as reference systems a 2-PAM system and an M-PAM system with M ≫ 1, for small and large values of Γ_I, respectively. The bit error probability of an uncoded 2-PAM system can be expressed in the two equivalent ways

    P_bit ≈ Q( √(3 Γ̄_I) ) = Q( √(2 E_b/N_0) )                                (6.296)

since, for R_I = 1, E_b/N_0 = (3/2) Γ̄_I. Figure 6.54 illustrates the bit error probability for an uncoded 2-PAM system as a function of both E_b/N_0 and Γ̄_I; it also shows the symbol error probability for an uncoded M-PAM system as a function of Γ̄_I for large M. For P_bit = 10⁻⁶, the reference uncoded 2-PAM system operates at about 12.5 dB from the ultimate Shannon limit; thus a coding gain up to 12.5 dB is, in principle, possible. If the modulation rate of the coded system remains unchanged, so that the bandwidth can be extended only by a factor of 2 with respect to the uncoded system,
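The behaviour of the limit (6.294) as a function of the rate, including the special cases listed above, can be sketched as follows (Python; function names are ours):

```python
import math

def ebn0_limit(rate):
    """Shannon limit (6.294) on Eb/N0 for a rate R_I in bit/dimension."""
    return (2.0 ** (2.0 * rate) - 1.0) / (2.0 * rate)

def to_db(x):
    return 10.0 * math.log10(x)

assert ebn0_limit(1.0) == 1.5                       # R_I = 1   -> 1.76 dB
assert ebn0_limit(0.5) == 1.0                       # R_I = 1/2 -> 0 dB
# The limit decreases with the rate, towards ln 2 = -1.59 dB.
assert abs(ebn0_limit(1e-6) - math.log(2)) < 1e-5   # R_I -> 0
assert abs(to_db(math.log(2)) + 1.59) < 0.01        # ultimate Shannon limit
```

Note how little is lost by stopping at R_I = 1/2: the limit is 0 dB instead of −1.59 dB, at the cost of only a doubling of the bandwidth.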
then a binary code with rate R_I = 1/2 can yield a coding gain up to about 10.8 dB. Assuming instead a limited-bandwidth system, at P_e = 10⁻⁶ a reference uncoded M-PAM system operates at about 9 dB from the Shannon limit: in other words, at this probability of error the Shannon limit can be achieved by a code having a gain of about 9 dB.
Cutoff rate

It is useful to introduce the notion of cutoff rate R_0 associated with a channel, assuming a given modulation and a certain class of coding and decoding techniques [2]. For a given channel we can then determine the minimum signal-to-noise ratio (E_b/N_0)_0 below which reliable transmission is not possible; we sometimes refer to R_0 as a practical upper bound on the transmission bit rate. Typically, for codes with rate R_c = 1/2 (see Chapter 11), (E_b/N_0)_0 is about 2 dB above the signal-to-noise ratio at which capacity is achieved.

6.11 Optimum receivers for signals with random phase

Let us consider transmission over an AWGN channel of one of the signals

    s_n(t, φ) = Re[ s_n^{(bb)}(t) e^{jφ} e^{j2πf_0 t} ]   n = 1, …, M         (6.297)

where s_n^{(bb)} is the complex envelope of s_n relative to the carrier frequency f_0. If in (6.297) every signal s_n has a bandwidth smaller than f_0, the energy of s_n, with support (t, t_0), is given by

    E_n = ∫_0^{t_0} s_n²(t, φ) dt = (1/2) ∫_0^{t_0} |s_n^{(bb)}(t)|² dt        (6.298)

At the receiver we observe the signal r(t) = s_n(t, φ) + w(t), for a phase φ that we assume to be a uniform random variable in [−π, π); the carrier frequency is instead assumed known at the receiver. Receivers that do not rely on the knowledge of the carrier phase are called noncoherent receivers. We give three examples of signalling schemes that employ noncoherent receivers.

Example 6.11.1 (Noncoherent binary FSK)
The received signals are expressed as (see also (6.204))

    s_1(t, φ_1) = A cos(2πf_1 t + φ_1)   0 < t < T
    s_2(t, φ_2) = A cos(2πf_2 t + φ_2)   0 < t < T                             (6.299)

where

    A = √(2E_s/T)   and   φ_1, φ_2 ∈ U[−π, π)                                  (6.300)
and

    f_1 = f_0 − f_d,   f_2 = f_0 + f_d                                         (6.303)

where f_d is the frequency deviation with respect to the carrier f_0. We recall that if f_1 + f_2 = k_1/T (k_1 integer), or else f_0 ≫ 1/T, then s_1(t, φ_1) and s_2(t, φ_2) are orthogonal for any pair of phases, provided that

    2 f_d T = k   (k integer)                                                  (6.305)

The minimum value of f_d is therefore given by

    (f_d)_min = 1/(2T)                                                         (6.306)

which is twice the value we find for the coherent demodulation case (6.26).

Example 6.11.2 (On-off keying)
On-off keying (OOK) is a binary modulation scheme where, for example,

    s_1(t, φ) = A cos(2πf_0 t + φ)   0 < t < T
    s_2(t, φ) = 0                    0 < t < T                                  (6.308)

where A = √(4E_s/T), and E_s is the average energy of a pulse.

Example 6.11.3 (DSB modulated signalling with random phase)
We consider an M-ary baseband signalling scheme {s_n^{(bb)}(t)}, n = 1, …, M, that is modulated in passband by the double-sideband technique (see Example 1.7.3 on page 58).

ML criterion

Given φ = p, that is for a known phase, the ML criterion to detect the transmitted signal has been previously developed starting from (6.203). The conditional probability density function of the vector r is given by

    p_{r|a0}(ρ | n, p) = (1/(πN_0)^{I/2}) exp( −‖ρ − s_n‖²/N_0 )               (6.310)
Using the result

    ∫_0^{t_0} s_n²(t, p) dt = E_n                                              (6.311)

we define the following likelihood function, which is equivalent, but not equal, to that defined in (6.27):

    L_n[p] = exp( (2/N_0) ∫_0^{t_0} r(t) s_n(t, p) dt ) exp( −E_n/N_0 )        (6.312)

Given φ = p, the maximum likelihood criterion yields the decision rule

    â_0 = arg max_n L_n[p]                                                     (6.313)

The dependency on the random variable φ is removed by taking the expectation of L_n[p] with respect to φ:16

    L̄_n = ∫_{−π}^{π} L_n[p] p_φ(p) dp
        = e^{−E_n/N_0} (1/2π) ∫_{−π}^{π} exp( (2/N_0) Re[ ∫_0^{t_0} r(t) s_n^{(bb)*}(t) e^{−j(p + 2πf_0 t)} dt ] ) dp        (6.314)

We define

    L_n = (1/√E_n) ∫_0^{t_0} r(t) s_n^{(bb)*}(t) e^{−j2πf_0 t} dt              (6.315)

Introducing the polar notation L_n = |L_n| e^{j arg L_n}, (6.314) becomes

    L̄_n = e^{−E_n/N_0} (1/2π) ∫_{−π}^{π} exp( (2√E_n/N_0) |L_n| cos(p − arg L_n) ) dp        (6.316)

We recall the following properties of the Bessel functions (4.216):
1.  I_0(x) = (1/2π) ∫_{−π}^{π} e^{x cos p} dp                                  (6.317)
2.  I_0(x) is a monotonically increasing function for x > 0.

16 Averaging with respect to the phase φ cannot be considered for PSK and QAM systems, where information is also carried by the phase of the signal.
Then (6.316) becomes

    L̄_n = e^{−E_n/N_0} I_0( (2√E_n/N_0) |L_n| )   n = 1, …, M                 (6.318)

Taking the logarithm we obtain the log-likelihood function

    ℓ_n = −E_n/N_0 + ln I_0( (2√E_n/N_0) |L_n| )                               (6.319)

If the signals all have the same energy, recalling that both ln and I_0 are monotonic functions, the ML decision criterion can be expressed as

    â_0 = arg max_n |L_n|                                                      (6.320)

Implementation of a noncoherent ML receiver

The scheme that implements the criterion (6.320) is illustrated in Figure 6.55 for the case of all E_n equal; a bold line denotes a complex-valued signal. The generic branch of the scheme, composed of the I branch and the Q branch, first determines the real and the imaginary parts of L_n starting from s_n^{(bb)}, and then determines the squared magnitude. From (6.315), and recalling (1.196) and (1.199), if y_n^{(bb)} denotes the complex envelope of the output y_n of the filter matched to s_n^{(bb)}, the desired value |L_n| coincides with the magnitude of the matched filter output at instant t_0,

    |L_n| = |y_n^{(bb)}(t)|_{t=t_0}                                            (6.321)

Alternatively, the matched filter can be implemented as a complex-valued passband filter; the matched filter can also be real-valued if it is followed by a phase splitter: in this case the receiver is illustrated in Figure 6.56, and |L_n| coincides with the absolute value of the output signal of the phase splitter at instant t_0.
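The decision rule (6.320) is simple to simulate at the level of the correlator outputs. The sketch below (Python; a discrete two-dimensional model of our own, not from the text) applies â_0 = arg max |L_n| to two orthogonal signals with an unknown uniform phase, and compares the simulated error rate with the closed-form result derived later in this section for noncoherent orthogonal signalling:

```python
import math
import random

def fsk_noncoherent_ber(es_over_n0, trials=200000, seed=7):
    """Monte Carlo error rate of the noncoherent ML rule (6.320) for two
    orthogonal signals: L_1 carries the useful term sqrt(Es) e^{j phi} plus
    complex noise of variance N0; L_2 contains noise only."""
    rng = random.Random(seed)
    n0 = 1.0
    amp = math.sqrt(es_over_n0 * n0)
    sigma = math.sqrt(n0 / 2.0)                # std dev per real component
    errors = 0
    for _ in range(trials):
        phi = rng.uniform(-math.pi, math.pi)   # unknown carrier phase
        l1 = complex(amp * math.cos(phi) + rng.gauss(0.0, sigma),
                     amp * math.sin(phi) + rng.gauss(0.0, sigma))
        l2 = complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
        if abs(l2) > abs(l1):                  # criterion (6.320)
            errors += 1
    return errors / trials

gamma = 7.0
sim = fsk_noncoherent_ber(gamma)
theory = 0.5 * math.exp(-gamma / 2.0)          # closed form for this scheme
assert abs(sim - theory) < 0.002
```

Note that the decision uses only the magnitudes of the two correlator outputs: the random phase φ never needs to be estimated.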
A simplification arises if the signals s_n^{(bb)} have a bandwidth B much lower than f_0. Polar notation is adopted for the complex envelope,

    y_n^{(bb)}(t) = |y_n^{(bb)}(t)| e^{jΦ_n(t)}

so that, from (1.196) and (1.202),

    y_n(t) = Re[ y_n^{(bb)}(t) e^{j2πf_0 t} ] = |y_n^{(bb)}(t)| cos( 2πf_0 t + Φ_n(t) )

As shown in Figure 6.56, the cascade of the phase splitter and the transformation that extracts the magnitude is called the envelope detector of the signal y_n. Note that the available signal is s_n^{(bb)}(t) e^{jφ_0}, where φ_0 is a constant, rather than s_n^{(bb)}(t): this, however, does not modify the magnitude of L_n.
Figure 6.55. Noncoherent ML receiver of the square-law detector type.

Figure 6.56. Implementation of a branch of the scheme of Figure 6.55 by a complex-valued passband matched filter.

Now, if f_0 ≫ B, to determine the amplitude |y_n^{(bb)}(t)| we can use one of the envelope detector schemes of Figure 6.58.

Example 6.11.4 (Noncoherent binary FSK)
We show in Figure 6.59 two alternative schemes of the ML receiver for the modulation system considered in Example 6.11.1.
Figure 6.57. Noncoherent ML receiver of the envelope detector type.

Figure 6.58. (a) Ideal implementation of an envelope detector, and (b) two simpler approximate implementations.

Example 6.11.5 (On-off keying)
We illustrate in Figure 6.60 the receiver for the modulation system of Example 6.11.2, using passband matched filters, where, recalling (6.318), the decision threshold is

    U_Th = ( N_0 / (2√E_1) ) I_0^{−1}( e^{E_1/N_0} )                           (6.325)

with

    E_1 = A²T/2                                                                (6.326)
Figure 6.59. Two ML receivers for a noncoherent 2-FSK system, using passband matched filters.

Figure 6.60. Envelope detector receiver for an on-off keying system: if U > U_Th, then â_0 = 1; if U < U_Th, then â_0 = 2.
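Before deriving the error probability, the orthogonality condition recalled in Example 6.11.1 can be checked numerically: for noncoherent detection both the in-phase and the quadrature correlations must vanish, which requires 2 f_d T to be an integer, whereas half-integer values suffice for coherent detection. A small sketch (Python; the function name and the discretization are ours):

```python
import math

def complex_corr(delta_f_times_t, samples=100000):
    """|(1/T) \\int_0^T exp(j 2 pi (f2 - f1) t) dt| for T = 1: the envelope
    correlation of two tones as seen by a noncoherent receiver."""
    dt = 1.0 / samples
    re = im = 0.0
    for k in range(samples):
        t = (k + 0.5) * dt                      # midpoint rule
        re += math.cos(2 * math.pi * delta_f_times_t * t) * dt
        im += math.sin(2 * math.pi * delta_f_times_t * t) * dt
    return math.hypot(re, im)

# 2 fd T = 1, i.e. fd = 1/(2T): orthogonal for every pair of phases.
assert complex_corr(1.0) < 1e-3
# 2 fd T = 1/2, i.e. fd = 1/(4T): the in-phase correlation is zero (enough
# for coherent detection), but the envelope correlation equals 2/pi.
assert complex_corr(0.5) > 0.5
```

This is the origin of the factor of two between the minimum noncoherent spacing 1/(2T) and the minimum coherent spacing.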
Example 6.11.6 (DSB modulated signalling with random phase)
With reference to Example 6.11.3, we show in Figure 6.61 two receivers for a baseband M-ary signalling scheme that is DSB modulated with random phase. Depending upon the signalling type, further simplifications can be made by extracting functions that are common to the different branches.

Error probability for a noncoherent binary FSK system

We now derive the error probability of the system of Example 6.11.4, recalling assumption (6.305), that is s_1(t, φ_1) ⊥ s_2(t, φ_2). We assume that s_1 is transmitted: then

    r(t) = s_1(t, φ_1) + w(t)

and

    P_bit = P[ U_1 < U_2 | s_1 ]                                               (6.329)

Equivalently, if we define

    V_1 = √U_1   and   V_2 = √U_2                                              (6.330)

we have

    P_bit = P[ V_1 < V_2 | s_1 ]                                               (6.331)

Now, we have

    U_2 = ( ∫_0^T r(t) cos(2πf_2 t + φ_0) dt )² + ( ∫_0^T r(t) sin(2πf_2 t + φ_0) dt )²
        = w_{2,c}² + w_{2,s}²                                                  (6.332)

where

    w_{2,c} = ∫_0^T w(t) cos(2πf_2 t + φ_0) dt
    w_{2,s} = ∫_0^T w(t) sin(2πf_2 t + φ_0) dt                                 (6.333)
Figure 6.61. Two receivers for a DSB modulation system with M-ary signalling and random phase.
Similarly, we have

    U_1 = ( ∫_0^T r(t) cos(2πf_1 t + φ_0) dt )² + ( ∫_0^T r(t) sin(2πf_1 t + φ_0) dt )²
        = ( (AT/2) cos(φ_0 − φ_1) + w_{1,c} )² + ( (AT/2) sin(φ_0 − φ_1) + w_{1,s} )²        (6.334)

where, from (6.300),

    AT/2 = √(E_s T/2)                                                          (6.335)

As w(t) is a white Gaussian random process with zero mean, w_{2,c} and w_{2,s} are two jointly Gaussian random variables with

    E[w_{2,c}] = E[w_{2,s}] = 0
    E[w_{2,c}²] = E[w_{2,s}²] = ∫_0^T ∫_0^T (N_0/2) δ(t_1 − t_2) cos(2πf_2 t_1 + φ_0) cos(2πf_2 t_2 + φ_0) dt_1 dt_2 = N_0 T/4
    E[w_{2,c} w_{2,s}] = 0                                                     (6.336)

Similar considerations hold for w_{1,c} and w_{1,s}. Therefore V_2 has a Rayleigh probability density function,

    p_{V_2}(v_2) = ( v_2/(N_0 T/4) ) e^{−v_2²/(2(N_0 T/4))} · 1(v_2)           (6.337)

whereas V_1 has a Rice probability density function,

    p_{V_1}(v_1) = ( v_1/(N_0 T/4) ) e^{−(v_1² + (AT/2)²)/(2(N_0 T/4))} I_0( v_1 (AT/2)/(N_0 T/4) ) · 1(v_1)        (6.338)

Consequently, equation (6.331) assumes the expression17

    P_bit = ∫_0^{+∞} P[ V_1 < v_2 | V_2 = v_2 ] p_{V_2}(v_2) dv_2
          = ∫_0^{+∞} ( ∫_0^{v_2} p_{V_1}(v_1) dv_1 ) p_{V_2}(v_2) dv_2         (6.339)
          = (1/2) e^{−(1/2)(E_s/N_0)}                                           (6.340)

17 To compute the above integrals we recall the Weber–Sonine formula:

    ∫_0^{+∞} x e^{−α²x²/2} I_0(βx) dx = (1/α²) e^{β²/(2α²)}
It can be shown that this result is not limited to FSK systems and is valid for any pair of noncoherent orthogonal signals with energy E_s. Thus, from (6.340), we have

    FSK (NC):   P_bit = (1/2) e^{−(1/2)(E_s/N_0)}                              (6.341)

A comparison with a noncoherent binary system with differentially encoded bits, such as DBPSK, is of interest; the differential receiver for DBPSK directly gives the original uncoded bits, and from (6.163) we have

    DBPSK:   P_bit = (1/2) e^{−E_s/N_0}                                        (6.342)

A comparison between (6.341) and (6.342) indicates that DBPSK is better than FSK by about 3 dB in Γ. The performance of a binary FSK system and that of a differentially encoded BPSK system, with coherent (CO) and noncoherent (NC) detection, are compared in Figure 6.62. For FSK the correlation coefficient between the two signals is equal to zero, with f_d satisfying the constraint (6.305); thus, from (6.75), it follows that

    FSK (CO):   P_bit = Q( √(E_s/N_0) )                                        (6.343)

Figure 6.62. Bit error probability as a function of Γ = E_s/N_0 for BPSK and binary FSK systems: FSK (ρ = 0) with coherent and noncoherent detection, (d.e.) BPSK (ρ = −1), and DBPSK.
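The 3 dB advantage of DBPSK over noncoherent FSK follows directly from the two closed forms, since halving the exponent argument is equivalent to doubling the SNR. A minimal check (Python; function names are ours):

```python
import math

def ber_fsk_nc(gamma):
    """(6.341): noncoherent orthogonal binary FSK."""
    return 0.5 * math.exp(-gamma / 2.0)

def ber_dbpsk(gamma):
    """(6.342): DBPSK with differential (noncoherent) detection."""
    return 0.5 * math.exp(-gamma)

# DBPSK reaches the same Pbit with half the SNR, i.e. it is 3 dB better.
for gamma in (4.0, 8.0, 12.0):
    assert ber_dbpsk(gamma) == ber_fsk_nc(2.0 * gamma)
assert abs(10.0 * math.log10(2.0) - 3.01) < 0.01
```

Both curves decay exponentially with Γ, which is why the two waterfall curves of Figure 6.62 appear as a rigid horizontal shift of one another.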
Taking into account differential decoding, we have18

    (d.e.) BPSK:   P_bit ≈ 2 Q( √(2E_s/N_0) )                                  (6.344)

We observe that the difference between coherent FSK and noncoherent FSK is less than 2 dB for P_e ≤ 10⁻³, and becomes less than 1 dB for P_e ≤ 10⁻⁵. We also note that, because of the large bandwidth required, FSK systems with M > 2 are not widely used.

6.12 Binary modulation systems in the presence of flat fading

We assume now that the channel introduces, in addition to a random phase, also a random attenuation (see Chapter 4). The received signal is expressed as

    r(t) = Re[ s_n^{(bb)}(t) g_1 e^{j2πf_0 t} ] + w(t)   n ∈ {1, …, M},  0 < t < T        (6.345)

where g_1 = g_{1,I} + j g_{1,Q}, and g_{1,I} and g_{1,Q} are uncorrelated Gaussian random variables with zero mean and equal variance. In polar notation g_1 = |g_1| e^{jφ}, where φ ∈ U(−π, π) and |g_1| is a Rayleigh random variable whose probability density p_{|g_1|} is given by (4.215). As |g_1| determines the signal level at the input of the receiver, the signal-to-noise ratio at the receiver input,

    Γ = |g_1|² E_s/N_0                                                         (6.346)

where E_s is the average energy of the transmitted signal, is a function of |g_1| and is therefore a random variable. We define the average signal-to-noise ratio

    Γ_avg = (E_s/N_0) E[|g_1|²]                                                (6.347)

Then the probability density of Γ is that of a chi-square random variable with two degrees of freedom,

    p_Γ(a) = (1/Γ_avg) e^{−a/Γ_avg} · 1(a)                                     (6.348)

To compute the performance of a signalling scheme in the presence of flat fading, we regard the expressions of P_e obtained in the previous sections as functions of Γ.

18 For a more accurate evaluation of the probability of error see footnote 14 on page 478.
Therefore we consider P_e(a) as the conditional error probability for a given value Γ = a, that is for a given value of |g_1|. To evaluate the mean error probability we apply the total probability theorem, which yields

    P_e = ∫_0^{+∞} P_e(a) p_Γ(a) da                                            (6.349)

Limiting ourselves to binary signalling schemes and substituting in (6.349) the expressions of P_e given by (6.341)–(6.344), we obtain:19

19 For the computation of the integral in (6.349) we recall the following result:

    ∫_0^{+∞} Q( √(αx) ) (1/β) e^{−x/β} dx = (1/2) [ 1 − √( αβ/(2 + αβ) ) ]    (6.350)
1. Differentially encoded BPSK with coherent detection:

    P_bit = (1/2) [ 1 − √( Γ_avg/(1 + Γ_avg) ) ]                               (6.351)

2. Orthogonal binary FSK with coherent detection:

    P_bit = (1/2) [ 1 − √( Γ_avg/(2 + Γ_avg) ) ]                               (6.352)

3. DBPSK:

    P_bit = 1/( 2 (1 + Γ_avg) )                                                (6.353)

4. Orthogonal binary FSK with noncoherent detection:

    P_bit = 1/( 2 + Γ_avg )                                                    (6.354)

We note that (6.351) and (6.352) are in practice a lower limit to P_bit, as it is assumed that an estimate of the phase φ is available, which is very hard to obtain under fading conditions. In case the uncertainty on the phase is relevant, noncoherent DPSK and FSK systems are valid alternatives.

The various expressions of P_bit as functions of Γ_avg are plotted in Figure 6.63 and compared with the case of transmission over an AWGN channel. We note that, to achieve a given P_bit, a substantially larger E_s is required, for the same N_0, as compared to the case of transmission over an AWGN channel. For a systematic method to determine the performance of systems in the presence of a channel affected by multipath fading, we refer the reader to [14] and to the references therein.

Diversity

In the previous section it became apparent that the probability of error for transmission over channels with Rayleigh fading varies inversely proportionally with the signal-to-noise ratio, rather than exponentially as in the AWGN channel case: therefore a large transmission power is needed to obtain good system performance. To mitigate this problem it is useful to resort to the concept of diversity, that is, to exploit for communication channels that are independent, or at least highly uncorrelated. The basic idea consists in providing the receiver with several replicas of the signal via independent channels, so that the probability is small that the attenuation due to fading is high for all the channels.
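The slow 1/Γ_avg decay of (6.351)–(6.354) is easy to quantify. A short sketch (Python; function names are ours):

```python
import math

def ber_fading_bpsk(g):    # (6.351), d.e. BPSK, coherent
    return 0.5 * (1.0 - math.sqrt(g / (1.0 + g)))

def ber_fading_fsk_co(g):  # (6.352), orthogonal FSK, coherent
    return 0.5 * (1.0 - math.sqrt(g / (2.0 + g)))

def ber_fading_dbpsk(g):   # (6.353)
    return 1.0 / (2.0 * (1.0 + g))

def ber_fading_fsk_nc(g):  # (6.354), orthogonal FSK, noncoherent
    return 1.0 / (2.0 + g)

# On the AWGN channel, coherent BPSK reaches Pbit = 1e-4 at about 8.4 dB;
# on the Rayleigh channel, about 34 dB of average SNR is needed instead.
g = 10 ** (34.0 / 10.0)
assert ber_fading_bpsk(g) < 1e-4
# Sanity: at Gamma_avg = 10, coherent BPSK outperforms the other schemes.
assert ber_fading_bpsk(10.0) < ber_fading_fsk_co(10.0)
assert ber_fading_fsk_nc(10.0) > ber_fading_dbpsk(10.0)
```

The roughly 25 dB penalty in this example is what motivates the diversity techniques discussed next.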
There are various diversity techniques.
1. Frequency diversity: the same signal is transmitted using several carriers, separated from each other in frequency by an interval that is larger than the coherence bandwidth of the channel.
2. Time diversity: the same signal is transmitted over different time slots, spaced by an interval that is larger than the coherence time of the channel.
3. Space diversity: multiple reflections from the ground and surrounding buildings can make the power of the received signal change rapidly; by setting two or more antennas close to each other, we can select the antenna that provides the signal with the highest power.
4. Polarization diversity: several channels are obtained for transmission by orthogonal polarization.
5. Combinations of the previous techniques: for the many combining techniques available, both linear (selection, equal gain, maximal ratio) and nonlinear (square law), we refer to Section 8.18 and to the bibliography [15, 16, 17].

Figure 6.63. Bit error probability as a function of Γ_avg for BPSK, DBPSK, and binary FSK systems, for a flat Rayleigh fading channel.

6.13 Transmission methods

6.13.1 Transmission methods between two users

A transmission link between two users of a communication network may be classified as:
a) Full duplex, when two users A and B can send information to each other simultaneously, not necessarily by using the same transmission channels in the two directions.
b) Half duplex, when two users A and B can send information in only one direction at a time, from A to B or from B to A.
c) Simplex, when only A can send information to B, that is, the link is unidirectional.

Full-duplex transmission may be obtained by one of the following methods.
a) Frequency-division duplexing (FDD): in this case the two users are assigned different transmission bands using the same transmission medium. Examples of FDD systems are the GSM system, which uses a radio channel, and the VDSL system, which uses a twisted-pair cable (see Chapter 17).
b) Time-division duplexing (TDD): in this case the two users are assigned different slots in a time frame. If the duration of one slot is small with respect to that of the message, we speak of full-duplex TDD systems. Examples of TDD systems are the DECT system, which uses a radio channel, and the ping-pong BRI-ISDN system, which uses a twisted-pair cable (see page 1146).
c) Full-duplex systems over a single band: in this case the two users transmit simultaneously in the two directions using the same transmission band.
The two directions of transmission are separated by a hybrid; the receiver eliminates echo signals by echo cancellation techniques, thus allowing full-duplex transmission. Examples are the HDSL system, which uses a twisted-pair cable, and in general high-speed transmission systems over twisted-pair cables for LAN applications (see Chapter 17). We note that full-duplex transmission over a single band is possible also over radio channels, but in practice alternative methods are still preferred because of the complexity required by echo cancellation.

6.13.2 Channel sharing: deterministic access methods

We distinguish three cases for channel access by N users, who may share the channel using one of the following methods.20
1. Subdivision of the channel passband into N_B separate subbands that may be used for transmission (see Figure 6.64a).
2. Subdivision of a sequence of modulation intervals into adjacent subsets called frames, each in turn subdivided into N_S adjacent subsets called slots; within a frame, each slot is identified by an index i, i = 0, …, N_S − 1 (see Figure 6.64b).
3. Signalling by N_0 orthogonal signals (see for example Figure 6.71).

Three methods
In the following we give three examples of transmission methods, corresponding to the three cases above, which are used in practice.
1. Frequency-division multiple access (FDMA): to each user is assigned one of the N_B subbands.

20 The access methods discussed in this section are deterministic, as each user knows exactly at which point in time the channel resources are reserved for transmission; an alternative approach is represented by random access techniques, e.g. ALOHA, CSMA/CD, and collision resolution protocols [18] (see also Chapter 17).
Figure 6.64. Illustration of (a) FDMA, and (b) TDMA.

2. Time-division multiple access (TDMA): to each user is assigned one of the N_S time sequences of slots, whose elements identify the modulation intervals.
3. Code-division multiple access (CDMA): to each user is assigned a modulation scheme that employs one of the N_0 orthogonal signals; for the case N_0 = 8, for example, to each user may be assigned one of the orthogonal signals given in Figure 6.71. Within a modulation interval, each user then transmits the assigned orthogonal signal or, for binary modulation, the antipodal signal, preserving the orthogonality between the modulated signals of the various users.

We give an example of implementation of the TDMA principle.

Example 6.13.1 (Time-division multiplexing)
Time-division multiplexing (TDM) is the interleaving of several digital messages into one digital message with a higher bit rate. The European base group, called E1, is obtained by multiplexing 30 PCM coded speech signals (or channels) at 64 kbit/s: each 8-bit sample of each channel is inserted into a preassigned slot of a frame composed of 32 slots, equivalent to 32 · 8 = 256 bits. The frame structure must contain information bits to identify the beginning of a frame, namely 8 known framing bits (channel ch0); moreover, 8 bits are employed for signalling between central offices (channel ch16). The remaining 30 channels are used for the transmission of the speech signals; we note that, of the 256 bits of the frame, only 30 · 8 = 240 bits carry information related to the signals. As the duration of a frame is 125 µs, equal to the interval between two PCM samples of the same channel, the overall digital message has a bit rate of 256 bit/125 µs = 2.048 Mbit/s.

In the United States, Canada, and Japan the base group, analogous to E1, is called the T1 carrier system and has a bit rate of 1.544 Mbit/s; it is obtained by multiplexing 24 PCM speech coded signals at 64 kbit/s. In this case the frame is such that one bit per channel is employed for signalling: this bit is "robbed" from the least important bit of the 8-bit PCM sample, thus making it a 7-bit code word per sample. The entire frame is formed of 24 · 8 + 1 = 193 bits; there is then only one bit for the synchronization of the whole frame.
As an example, we illustrate in Figure 6.65 the generation of the European base group.
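The frame arithmetic behind the two base-group rates can be summarized as follows (a trivial sketch in Python; the variable names are ours):

```python
# One frame every 125 us -> 8000 frames per second for both systems.
FRAMES_PER_SECOND = 8000

# E1: 32 slots of 8 bits; 30 slots carry speech, ch0 carries framing,
# ch16 carries inter-office signalling.
e1_frame_bits = 32 * 8
e1_payload_bits = 30 * 8
assert e1_frame_bits == 256 and e1_payload_bits == 240
e1_rate = e1_frame_bits * FRAMES_PER_SECOND
assert e1_rate == 2_048_000            # 2.048 Mbit/s

# T1: 24 channels of 8 bits plus a single framing bit.
t1_frame_bits = 24 * 8 + 1
assert t1_frame_bits == 193
t1_rate = t1_frame_bits * FRAMES_PER_SECOND
assert t1_rate == 1_544_000            # 1.544 Mbit/s
```

The 8000 frames per second follow from the 8 kHz PCM sampling rate: one sample per channel must be delivered in every frame.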
Figure 6.65. TDM in the European base group at 2.048 Mbit/s.

Bibliography

[1] J. M. Wozencraft and I. M. Jacobs, Principles of communication engineering. New York: John Wiley & Sons, 1965.
[2] S. Benedetto and E. Biglieri, Principles of digital transmission with wireless applications. New York: Kluwer Academic Publishers, 1999.
[3] J. Proakis, Digital communications. 3rd ed. New York: McGraw-Hill, 1995.
[4] D. Divsalar, M. K. Simon, and M. Shahshahani, "The performance of trellis-coded MDPSK with multiple symbol detection", IEEE Trans. on Communications, vol. 38, pp. 1391–1403, Sept. 1990.
[5] M. Abramowitz and I. A. Stegun, Handbook of mathematical functions. New York: Dover Publications, 1965.
[6] R. G. Gallager, Information theory and reliable communication. New York: John Wiley & Sons, 1968.
[7] T. M. Cover and J. A. Thomas, Elements of information theory. New York: John Wiley & Sons, 1991.
[8] C. E. Shannon, "A mathematical theory of communication", Bell System Technical Journal, vol. 27, pp. 379–427 (Part I) and 623–656 (Part II), 1948.
[9] G. J. Foschini and M. J. Gans, "On limits of wireless communications in a fading environment when using multiple antennas", Wireless Personal Communications, vol. 6, pp. 311–335, Mar. 1998.
[10] E. Telatar, "Capacity of multi-antenna Gaussian channels", Europ. Trans. on Telecomm., vol. 10, pp. 585–595, Nov.–Dec. 1999.
[11] G. Ungerboeck, "Channel coding with multilevel/phase signals", IEEE Trans. on Information Theory, vol. 28, pp. 55–67, Jan. 1982.
[12] G. D. Forney, Jr. and G. Ungerboeck, "Modulation and coding for linear Gaussian channels", IEEE Trans. on Information Theory, vol. 44, pp. 2384–2415, Oct. 1998.
[13] G. D. Forney, Jr., "Trellis shaping", IEEE Trans. on Information Theory, vol. 38, pp. 281–300, Mar. 1992.
[14] M. K. Simon and M.-S. Alouini, "Exponential-type bounds on the generalized Marcum Q-function with application to error probability analysis over fading channels", IEEE Trans. on Communications, vol. 48, pp. 359–366, Mar. 2000.
[15] M. Schwartz, W. R. Bennett, and S. Stein, Communication systems and techniques. New York: McGraw-Hill, 1966.
[16] T. S. Rappaport, Wireless communications: principles and practice. Englewood Cliffs, NJ: Prentice-Hall, 1996.
[17] G. L. Stuber, Principles of mobile communication. Norwell, MA: Kluwer Academic Publishers, 1996.
[18] Multiple access communications: foundations for emerging technologies. Bennett. New York: John Wiley & Sons. [19] W. Mar. on Communications. 1996.
Appendix 6.A Gaussian distribution function and Marcum function

6.A.1 The Q function

The probability density function of a Gaussian variable w with mean m and variance σ² is given by

    p_w(b) = (1/(√(2π) σ)) e^{−(b−m)²/(2σ²)}     (6.355)

We define the normalized Gaussian distribution (m = 0 and σ² = 1) as the function

    Φ(a) = ∫_{−∞}^{a} p_w(b) db = ∫_{−∞}^{a} (1/√(2π)) e^{−b²/2} db     (6.356)

It is often convenient to use the complementary Gaussian distribution function, defined as

    Q(a) = ∫_{a}^{+∞} (1/√(2π)) e^{−b²/2} db     (6.357)

so that

    Q(a) = 1 − Φ(a)     (6.358)

Two other functions that are widely used are the error function

    erf(a) = (2/√π) ∫_{0}^{a} e^{−b²} db     (6.359)

and the complementary error function erfc(a) = 1 − erf(a), which are related to Φ and Q by the following equations:

    Φ(a) = (1/2) [1 + erf(a/√2)]     (6.360)

    Q(a) = (1/2) erfc(a/√2)     (6.361)
In Table 6.10 the values assumed by the complementary Gaussian distribution are given for values of the argument between 0 and 8.

Table 6.10 Complementary Gaussian distribution. Writing 5.0000(−01) means 5.0000 × 10⁻¹.

    a     Q(a)          a     Q(a)          a     Q(a)
    0.0   5.0000(−01)   2.7   3.4670(−03)   5.4   3.3320(−08)
    0.1   4.6017(−01)   2.8   2.5551(−03)   5.5   1.8990(−08)
    0.2   4.2074(−01)   2.9   1.8658(−03)   5.6   1.0718(−08)
    0.3   3.8209(−01)   3.0   1.3499(−03)   5.7   5.9904(−09)
    0.4   3.4458(−01)   3.1   9.6760(−04)   5.8   3.3157(−09)
    0.5   3.0854(−01)   3.2   6.8714(−04)   5.9   1.8175(−09)
    0.6   2.7425(−01)   3.3   4.8342(−04)   6.0   9.8659(−10)
    0.7   2.4196(−01)   3.4   3.3693(−04)   6.1   5.3034(−10)
    0.8   2.1186(−01)   3.5   2.3263(−04)   6.2   2.8232(−10)
    0.9   1.8406(−01)   3.6   1.5911(−04)   6.3   1.4882(−10)
    1.0   1.5866(−01)   3.7   1.0780(−04)   6.4   7.7688(−11)
    1.1   1.3567(−01)   3.8   7.2348(−05)   6.5   4.0160(−11)
    1.2   1.1507(−01)   3.9   4.8096(−05)   6.6   2.0558(−11)
    1.3   9.6800(−02)   4.0   3.1671(−05)   6.7   1.0421(−11)
    1.4   8.0757(−02)   4.1   2.0658(−05)   6.8   5.2310(−12)
    1.5   6.6807(−02)   4.2   1.3346(−05)   6.9   2.6001(−12)
    1.6   5.4799(−02)   4.3   8.5399(−06)   7.0   1.2798(−12)
    1.7   4.4565(−02)   4.4   5.4125(−06)   7.1   6.2378(−13)
    1.8   3.5930(−02)   4.5   3.3977(−06)   7.2   3.0106(−13)
    1.9   2.8717(−02)   4.6   2.1125(−06)   7.3   1.4388(−13)
    2.0   2.2750(−02)   4.7   1.3008(−06)   7.4   6.8092(−14)
    2.1   1.7864(−02)   4.8   7.9333(−07)   7.5   3.1909(−14)
    2.2   1.3903(−02)   4.9   4.7918(−07)   7.6   1.4807(−14)
    2.3   1.0724(−02)   5.0   2.8665(−07)   7.7   6.8033(−15)
    2.4   8.1975(−03)   5.1   1.6983(−07)   7.8   3.0954(−15)
    2.5   6.2097(−03)   5.2   9.9644(−08)   7.9   1.3945(−15)
    2.6   4.6612(−03)   5.3   5.7901(−08)   8.0   6.2210(−16)

We present below some bounds of the Q function:

    bound1:  Q₁(a) = (1/(√(2π) a)) (1 − 1/a²) exp(−a²/2)     (6.362)

    bound2:  Q₂(a) = (1/(√(2π) a)) exp(−a²/2)     (6.363)

    bound3:  Q₃(a) = (1/2) exp(−a²/2)     (6.364)

The Q function and the above bounds are illustrated in Figure 6.66.

Figure 6.66. The Q function and relative bounds.
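As a numerical sanity check of Table 6.10 and of the bounds (6.362)–(6.364), the sketch below evaluates Q through the identity Q(a) = (1/2) erfc(a/√2) of (6.361) and compares it with the three bounds (plain Python, standard library only):

```python
import math

def Q(a):
    # complementary Gaussian distribution via (6.361): Q(a) = erfc(a / sqrt(2)) / 2
    return 0.5 * math.erfc(a / math.sqrt(2.0))

def bound1(a):
    # (6.362): lower bound (1/(a*sqrt(2*pi))) * (1 - 1/a^2) * exp(-a^2/2)
    return (1.0 - 1.0 / a**2) * math.exp(-a**2 / 2) / (a * math.sqrt(2 * math.pi))

def bound2(a):
    # (6.363): upper bound (1/(a*sqrt(2*pi))) * exp(-a^2/2)
    return math.exp(-a**2 / 2) / (a * math.sqrt(2 * math.pi))

def bound3(a):
    # (6.364): upper bound (1/2) * exp(-a^2/2)
    return 0.5 * math.exp(-a**2 / 2)
```

For example, Q(3.0) evaluates to about 1.3499 × 10⁻³, matching the table entry, and for a ≥ 1 the values satisfy bound1(a) ≤ Q(a) ≤ bound2(a) and Q(a) ≤ bound3(a).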
6.A.2 The Marcum function

We define the first-order Marcum function as

    Q₁(a, b) = ∫_{b}^{+∞} x e^{−(x² + a²)/2} I₀(ax) dx     (6.365)

where I₀ is the modified Bessel function of the first type and order zero, defined in (4.216). From (6.365), two particular cases follow:

    Q₁(a, 0) = 1     (6.366)

    Q₁(0, b) = e^{−b²/2}     (6.367)

Moreover, for b ≫ 1 and b ≫ b − a the following approximation holds:

    Q₁(a, b) ≃ Q(b − a)     (6.368)

where the Q function is given by (6.357). A useful approximation, valid for b ≫ 1 and a ≫ 1, is given by

    1 + Q₁(a, b) − Q₁(b, a) ≃ 2 Q(b − a)     (6.369)

We also give the Simon bound [14]:

    e^{−(b+a)²/2} ≤ Q₁(a, b) ≤ e^{−(b−a)²/2},   b > a > 0     (6.370)

and

    1 − (1/2) [e^{−(a−b)²/2} − e^{−(a+b)²/2}] ≤ Q₁(a, b),   a > b ≥ 0     (6.371)

We observe that in (6.370) the upper bound is very tight, and the lower bound for a given value of b becomes looser as a increases; in (6.371) the lower bound is very tight. A recursive method for computing the Marcum function is given in [19].
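The definition (6.365), the special cases (6.366)–(6.367), and the Simon bound (6.370) can be checked by direct numerical integration. The sketch below is only an illustration: I₀ is implemented by its power series, and the truncation point and step of the trapezoidal rule are arbitrary choices, not part of the text:

```python
import math

def bessel_i0(z):
    # modified Bessel function of the first kind, order zero (power series)
    term, total, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (z * z / 4.0) / (k * k)
        total += term
    return total

def marcum_q1(a, b, n=8000):
    # Q_1(a,b) = int_b^inf x exp(-(x^2+a^2)/2) I_0(a x) dx,
    # truncated where the Gaussian factor makes the integrand negligible
    hi = b + a + 12.0
    h = (hi - b) / n
    def f(x):
        return x * math.exp(-(x * x + a * a) / 2.0) * bessel_i0(a * x)
    s = 0.5 * (f(b) + f(hi)) + sum(f(b + i * h) for i in range(1, n))
    return s * h

q_a0 = marcum_q1(1.0, 0.0)   # should be close to 1, by (6.366)
q_0b = marcum_q1(0.0, 2.0)   # should be close to exp(-2), by (6.367)
q_ab = marcum_q1(1.0, 3.0)   # should lie within the bounds (6.370)
```

The computed values respect (6.366), (6.367) and fall inside the exponential bounds of (6.370) for b > a > 0.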
Appendix 6.B Gray coding

In this appendix we give the procedure to construct a list of 2ⁿ binary words of n bits, where adjacent words differ in only one bit. The case for n = 1 is immediate. We have two words with two possible values:

    0
    1     (6.372)

The list for n = 2 is constructed by considering first the list of (1/2)2² = 2 words that are obtained by appending a 0 in front of the words of the list (6.372):

    0 0
    0 1     (6.373)

The remaining two words are obtained by inverting the order of the words in (6.372) and appending a 1 in front:

    1 1
    1 0     (6.374)

The final result is the following list of words:

    0 0
    0 1
    1 1
    1 0     (6.375)

Iterating the procedure for n = 3, the first 4 words are obtained by repeating the list (6.375) and appending a 0 in front of the words of the list. Inverting then the order of the list (6.375) and appending a 1 in front, the final result is the list of 8 words:

    0 0 0
    0 0 1
    0 1 1
    0 1 0
    1 1 0
    1 1 1
    1 0 1
    1 0 0     (6.376)

It is easy to extend this procedure to any value of n. By induction it is just as easy to prove that two adjacent words in each list differ in only one bit.
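The reflect-and-prefix procedure above translates directly into a few lines of code; a minimal sketch:

```python
def gray_list(n):
    # start from the n = 1 list (6.372), then repeatedly prefix '0' to the
    # current list and '1' to the reversed list, as described in the appendix
    words = ['0', '1']
    for _ in range(n - 1):
        words = ['0' + w for w in words] + ['1' + w for w in reversed(words)]
    return words

codes = gray_list(3)
# codes reproduces the list (6.376): 000, 001, 011, 010, 110, 111, 101, 100
```

Adjacent words in the returned list differ in exactly one bit, which is the defining property of the Gray code.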
Appendix 6.C Baseband PPM and PDM

In addition to the widely known PAM, two other baseband pulse modulation techniques are pulse position modulation (PPM) and pulse duration modulation (PDM). We consider the fundamental pulse shape g₀ of Figure 6.67, with amplitude A and duration T/M.

Figure 6.67. Fundamental pulse shape of PPM.

PPM consists of a set of pulses whose shift, with respect to a given time reference, is a multiple of a minimum time duration equal to T/M. For an alphabet A = {0, 1, ..., M − 1} the transmitted isolated pulse is

    sₙ(t) = g₀(t − (n/M) T),   n ∈ A     (6.377)

For M = 4 the set of waveforms is represented in Figure 6.68.

In PDM, instead, the input symbol determines the duration of the transmitted pulse. For an alphabet A = {1, 2, ..., M} the transmitted isolated pulse is given by

    sₙ(t) = g₀(t/n),   n ∈ A     (6.378)

where g₀ is given in Figure 6.67. The set of PDM waveforms for M = 4 is illustrated in Figure 6.69.

Figure 6.68. PPM waveforms for M = 4.

Figure 6.69. PDM waveforms for M = 4.

Signal-to-noise ratio

In both PPM and PDM the information lies in the position of the fronts of the transmitted pulses: therefore demodulation consists in finding the fronts of the pulses, which are disturbed by noise. If the channel bandwidth were infinite, one could receive perfectly rectangular pulses, with amplitude equal to A. In practice, as the channel has a finite bandwidth B, the received pulse has a rise time t_R different from zero, and noise disturbs the reception of the pulse.
The detection of the front of a pulse is obtained by establishing the instant tᵢ in which the received signal, pulse plus noise, crosses a given threshold, as illustrated in Figure 6.70.

Figure 6.70. PPM and PDM demodulation in the presence of noise.

The error ε(tᵢ) is related to the noise w(tᵢ), the amplitude A, and the rise time t_R of the received pulse:

    ε(tᵢ) = (t_R / A) w(tᵢ)     (6.381)

Assuming the noise is stationary with PSD N₀/2 over the channel passband with bandwidth B, the mean-square error is given by

    E[ε²] = (t_R/A)² E[w²] = (t_R/A)² N₀ B     (6.382)

We consider the following approximated expression that links the rise time to the bandwidth of the pulse:

    t_R ≃ 1/(2B)     (6.383)

Substitution of the above result in (6.382) yields

    E[ε²] ≃ N₀ / (4A²B)     (6.384)

On the other hand, the signal-to-noise ratio is given by (6.105) with B_min = 1/(2T), that is

    Γ = 2 E_sCh / N₀     (6.385)

where, assuming the average duration of the pulses is τ₀,

    E_sCh = τ₀ A²     (6.386)

Finally, substitution of (6.385) in (6.384) yields

    E[ε²] = (τ₀ / (2B)) (1/Γ)     (6.387)

Equation (6.387) illustrates a trade-off between the channel bandwidth and the signal-to-noise ratio at the decision point. For a more in-depth analysis we refer to [20].
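The chain of substitutions leading from (6.382) to (6.387) can be exercised with illustrative numbers; all values below are hypothetical, chosen only to show that the two expressions of the mean-square timing error agree:

```python
import math

# hypothetical illustrative parameters
A = 1.0        # pulse amplitude
B = 1.0e6      # channel bandwidth (Hz)
tau0 = 0.5e-6  # average pulse duration (s)
N0 = 1.0e-8    # noise PSD is N0/2 (V^2/Hz)

# (6.383)-(6.384): mean-square timing error via the rise time t_R ~ 1/(2B)
t_R = 1.0 / (2 * B)
mse_direct = (t_R / A) ** 2 * N0 * B    # (6.382) with E[w^2] = N0 * B

# (6.385)-(6.387): the same quantity via the signal-to-noise ratio Gamma
E_sCh = tau0 * A ** 2                   # (6.386)
Gamma = 2 * E_sCh / N0                  # (6.385)
mse_via_gamma = tau0 / (2 * B) / Gamma  # (6.387)
```

Both routes give the same value, N₀/(4A²B), confirming the algebra: doubling the bandwidth halves the timing jitter for a fixed Γ.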
Appendix 6.D Walsh codes

We illustrate a procedure to obtain orthogonal binary sequences. We consider 2ᵐ × 2ᵐ Hadamard matrices Aₘ, with binary elements from the set {0, 1}. For the first orders, we have

    A₀ = [0]     (6.388)

    A₁ = [0 0
          0 1]     (6.389)

    A₂ = [0 0 0 0
          0 1 0 1
          0 0 1 1
          0 1 1 0]     (6.390)

    A₃ = [0 0 0 0 0 0 0 0
          0 1 0 1 0 1 0 1
          0 0 1 1 0 0 1 1
          0 1 1 0 0 1 1 0
          0 0 0 0 1 1 1 1
          0 1 0 1 1 0 1 0
          0 0 1 1 1 1 0 0
          0 1 1 0 1 0 0 1]     (6.391)

In general the construction is recursive:

    A_{m+1} = [Aₘ  Aₘ
               Aₘ  Āₘ]     (6.392)

where Āₘ denotes the matrix that is obtained by taking the 1's complement of the elements of Aₘ.

A Walsh code of length 2ᵐ, with values {−1, 1}, is obtained by taking the rows (or columns) of the Hadamard matrix Aₘ and by mapping 0 into 1 and 1 into −1. From the construction of Hadamard matrices, it is easily seen that two words of a Walsh code are orthogonal. Figure 6.71 shows the 8 signals obtained with the Walsh code of length 8: the signals are obtained by interpolating the Walsh code sequences by a filter having impulse response

    w_{T_c}(t) = rect((t − T_c/2) / T_c)     (6.393)

Figure 6.71. Eight orthogonal signals obtained from the Walsh code of length 8.
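The recursion (6.392) and the orthogonality of the resulting code words can be checked with a short sketch (0/1 Hadamard matrices first, then the 0 → +1, 1 → −1 map):

```python
def hadamard(m):
    # recursion (6.392): A_{m+1} = [[A_m, A_m], [A_m, complement(A_m)]]
    A = [[0]]
    for _ in range(m):
        A = [row + row for row in A] + [row + [1 - b for b in row] for row in A]
    return A

def walsh(m):
    # map 0 -> +1 and 1 -> -1 in the rows of the Hadamard matrix
    return [[1 - 2 * b for b in row] for row in hadamard(m)]

W = walsh(3)  # Walsh code of length 8, as in Figure 6.71
```

Any two distinct rows of W have zero inner product, while each row has energy equal to the code length.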
Chapter 7

Transmission over dispersive channels

In this chapter we will reconsider amplitude modulation (PAM and QAM, see Chapter 6) for continuous transmission, taking into account the possibility that the transmission channel may distort the transmitted signal [1, 2]. We will also consider the effects of errors introduced by the digital transmission on a PCM encoded message (see Chapter 5) [3].

7.1 Baseband digital transmission (PAM systems)

Let us briefly examine the fundamental blocks of the baseband digital transmission system illustrated in Figure 7.1. The signals at various points of a ternary PAM system are shown in Figure 7.2.

Figure 7.1. Block diagram of a baseband digital transmission system.

Figure 7.2. Signals at various points of a ternary PAM transmission system with alphabet A = {−1, 0, 1}.

Source. We assume that a source, not explicitly indicated in Figure 7.1, generates a message {b_ℓ} composed of a sequence of binary symbols b_ℓ ∈ {0, 1}, that are emitted every T_b seconds:

    {b_ℓ} = {..., b_{−1}, b₀, b₁, b₂, ...}     (7.1)

Usually {b_ℓ} is a sequence of i.i.d. symbols. The system bit rate, which is associated with the message {b_ℓ}, is equal to (see Section 6.2)

    R_b = 1/T_b   (bit/s)     (7.2)

Transmitter

Bit mapper. The bit mapper uses a one-to-one map to match a multilevel symbol to an input bit pattern. Let us consider, for example, symbols {a_k} from a quaternary alphabet, a_k ∈ A = {−3, −1, 1, 3}. To select the values of a_k we consider pairs of input bits and map them into quaternary symbols as indicated in Table 7.1. Note that bits are mapped into symbols without introducing redundancy; in other words, the sequence of symbols {a_k} is not obtained by applying channel coding, therefore we speak of uncoded transmission.¹ This situation will be maintained throughout the chapter.

For uncoded quaternary transmission the symbol period or modulation interval T is given by T = 2T_b. In general, if the values of a_k belong to an alphabet A with M elements, then

    1/T = 1/(T_b log₂ M)   (Baud)     (7.3)

1/T is the modulation rate or symbol rate of the system and is measured in Baud: it indicates the number of symbols per second that are transmitted.

¹ We distinguish three types of coding: 1) source or entropy coding, 2) channel coding, and 3) line coding. Their objectives are, respectively: 1) "compress" the digital message by lowering the bit rate without losing the original signal information (see Chapter 5); 2) increase the "reliability" of the transmission by inserting redundancy in the transmitted message, so that errors can be detected and/or corrected at the receiver (see Chapters 11 and 12); 3) "shape" the spectrum of the transmitted signal (see Appendix 7.A).
Table 7.1 Example of quaternary bit map.

    b_{2k}   b_{2k+1}   a_k
    0        0          −3
    1        0          −1
    1        1           1
    0        1           3

We note that in Section 6.3 we considered an alphabet whose elements were indices, that is a_k ∈ {1, 2, ..., M} (see (6.109)). Now the values of a_k are associated with the symbols {αₙ}, n = 1, ..., M, that is a_k ∈ A = {−(M−1), ..., −1, 1, ..., (M−1)}.

Modulator. The modulator associates the symbol a_k with the amplitude of a given pulse h_Tx:

    a_k → a_k h_Tx(t − kT)     (7.4)

Therefore the modulated signal s(t) that is input to the transmission channel is given by

    s(t) = Σ_{k=−∞}^{+∞} a_k h_Tx(t − kT)     (7.5)

Transmission channel

The transmission channel is assumed to be linear and time invariant, with impulse response g_Ch, that is

    s_Ch(t) = g_Ch * s(t)     (7.6)

If we define

    q_Ch(t) = h_Tx * g_Ch(t)     (7.7)

then we have

    s_Ch(t) = Σ_{k=−∞}^{+∞} a_k q_Ch(t − kT)     (7.8)

Therefore the desired signal at the output of the transmission channel still has a PAM structure. The transmission channel also introduces an effective noise w(t); the signal at the input of the receive filter is then given by

    r(t) = s_Ch(t) + w(t)     (7.9)
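As an illustration, the map of Table 7.1 and its inverse (used later by the inverse bit mapper at the receiver) can be written as:

```python
# quaternary bit map of Table 7.1: pairs of bits -> levels in {-3, -1, 1, 3}
BITMAP = {(0, 0): -3, (1, 0): -1, (1, 1): 1, (0, 1): 3}
INV = {v: k for k, v in BITMAP.items()}

def bits_to_symbols(bits):
    # group the bit stream in pairs (b_{2k}, b_{2k+1}) and map each pair
    return [BITMAP[(bits[2 * k], bits[2 * k + 1])] for k in range(len(bits) // 2)]

def symbols_to_bits(symbols):
    # inverse bit mapper
    out = []
    for a in symbols:
        out.extend(INV[a])
    return out

syms = bits_to_symbols([0, 0, 1, 1, 0, 1])
```

The map is one-to-one, so demapping the symbol sequence recovers the original bit stream exactly.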
Receiver

The receiver consists of three functional blocks:

1. Amplifier-equalizer filter. This block is assumed linear and time invariant, with impulse response g_Rc. Let the overall impulse response of the system be

    q_R(t) = h_Tx * g_Ch * g_Rc(t) = q_Ch * g_Rc(t)     (7.10)

Then the desired signal at the filter output is given by

    s_R(t) = g_Rc * s_Ch(t) = Σ_{k=−∞}^{+∞} a_k q_R(t − kT)     (7.11)

In the presence of noise the observed signal is

    r_R(t) = s_R(t) + w_R(t)     (7.12)

where

    w_R(t) = w * g_Rc(t)     (7.13)

2. Sampler. Sampling at the instants t₀ + kT yields²

    r_k = r_R(t₀ + kT)     (7.14)

If the duration of q_R(t) is confined to a modulation interval, then the various pulses do not overlap and, in the absence of noise, sampling at the instants t₀ + kT yields

    r_k = a_k q_R(t₀) = a_k h₀

where h₀ = q_R(t₀) is the amplitude of the overall impulse response at the sampling instant t₀. The parameter t₀ is called timing phase, and its choice is fundamental for system performance.

3. Threshold detector. From the sequence {r_k} we detect the transmitted sequence {a_k}. The simplest structure is the instantaneous nonlinear threshold detector:

    â_k = Q[r_k]     (7.15)

where Q[·] is the quantizer characteristic, with r_k ∈ ℝ and â_k ∈ A. An example of quantizer characteristic for A = {−1, 0, 1} is given in Figure 7.3. From the sequence {â_k}, using an inverse bit mapper, the detected binary information message {b̂_ℓ} is obtained.

² To simplify the notation, the sample index k, associated with the instant t₀ + kT, here appears as a subscript.
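For an alphabet of equally spaced symbols and h₀ = 1, the threshold detector of (7.15) reduces to choosing the symbol of A closest to the received sample; a minimal sketch:

```python
def threshold_detect(r, alphabet=(-1, 0, 1)):
    # instantaneous nonlinear detector: quantize the sample r to the
    # nearest symbol of the alphabet (assumes h0 = 1)
    return min(alphabet, key=lambda a: abs(r - a))

# noisy samples around the transmitted levels
detected = [threshold_detect(r) for r in (0.9, -0.2, 0.55, -1.4)]
```

For the ternary alphabet of Figure 7.3 this places the decision thresholds at ±h₀/2, so a sample of 0.55 is detected as 1 and a sample of −0.2 as 0.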
Figure 7.3. Characteristic of a threshold detector for ternary symbols with alphabet A = {−1, 0, 1}.

We recall that the receiver structure described above was optimized in Chapter 6 for an ideal AWGN channel.

Power spectral density of a PAM signal

The PAM signals found at various points of the system, s(t), s_Ch(t), s_R(t), have the following structure:

    s(t) = Σ_{k=−∞}^{+∞} a_k q(t − kT)     (7.16)

where q(t) is the impulse response of a suitable filter. In other words, a PAM signal s(t) may be regarded as a signal generated by an interpolator filter with impulse response q(t), as shown in Figure 7.4.

Figure 7.4. The PAM signal as output of an interpolator filter.

From the spectral analysis (see Example 1.9 on page 69) we know that s is a cyclostationary process with average power spectral density given by (see (1.398)):

    P̄_s(f) = |(1/T) Q(f)|² P_a(f)     (7.17)

where P_a is the spectral density of the message and Q is the Fourier transform of q. As P_a(f) is periodic of period 1/T, the bandwidth B of the transmitted signal is equal to that of h_Tx. From Figure 7.4 it is important to verify that by filtering s(t) we obtain a signal that is still PAM, with a pulse given by the convolution of the filter impulse responses. The spectral density of a filtered PAM signal is obtained by multiplying P̄_s(f) in (7.17) by the squared magnitude of the filter frequency response.

Example 7.1.1 (PSD of an i.i.d. symbol sequence)
Let {a_k} be a sequence of i.i.d. symbols with values from the alphabet A = {α₁, α₂, ..., α_M}, and let p(α), α ∈ A, be the probability distribution of each symbol. The mean value and the statistical power of the sequence are given by

    m_a = Σ_{α∈A} α p(α)     (7.18)

    M_a = Σ_{α∈A} |α|² p(α)     (7.19)

Consequently, σ_a² = M_a − |m_a|². Following Example 1.1 on page 51, that describes the decomposition of the PSD of a message into ordinary and impulse functions, the decomposition of the PSD of s is given by

    P̄_s^(c)(f) = |(1/T) Q(f)|² σ_a² T = |Q(f)|² σ_a² / T     (7.20)

and

    P̄_s^(d)(f) = |(1/T) Q(f)|² |m_a|² Σ_{ℓ=−∞}^{+∞} δ(f − ℓ/T)     (7.21)

              = |m_a / T|² Σ_{ℓ=−∞}^{+∞} |Q(ℓ/T)|² δ(f − ℓ/T)     (7.22)

We note that spectral lines occur in the PSD of s if m_a ≠ 0 and Q(f) is nonzero at frequencies multiple of 1/T. Typically the presence of spectral lines is not desirable for a transmission scheme. We obtain m_a = 0 by choosing an alphabet with symmetric values with respect to zero, and by assigning equal probabilities to antipodal values.

Occasionally the passband version of PAM that is obtained by DSB modulation is also proposed. Passband PAM, together with the associated parameters, is discussed in Appendix 7.C. In some applications, the spectrum of the transmitted signal is shaped by introducing correlation among transmitted symbols by a line encoder: in this case the sequence has memory, see (7.17). We refer the reader to Appendix 7.A for a description of the more common line codes.

7.2 Passband digital transmission (QAM systems)

We consider the QAM transmission system illustrated in Figure 7.5.

Transmitter

Bit mapper. The bit mapper uses a map to associate a complex-valued symbol a_k with an input bit pattern. Two examples of constellations with the corresponding binary representations are given in Figure 7.6.
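The symbol statistics (7.18) and (7.19) are easy to check numerically; for the quaternary alphabet of Table 7.1 with equally likely symbols, m_a = 0, so by (7.21) no spectral lines appear in the PSD of the transmitted signal:

```python
# statistical parameters of the quaternary alphabet of Table 7.1,
# assuming equally likely symbols
alphabet = [-3, -1, 1, 3]
p = [0.25] * 4

m_a = sum(a * pa for a, pa in zip(alphabet, p))           # mean value, (7.18)
M_a = sum(abs(a) ** 2 * pa for a, pa in zip(alphabet, p)) # statistical power, (7.19)
var_a = M_a - abs(m_a) ** 2                               # sigma_a^2
```

The alphabet is symmetric with antipodal values taken with equal probability, hence m_a = 0 and σ_a² = M_a = 5.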
Figure 7.5. Block diagram of a passband digital transmission system.

Figure 7.6. Two constellations and corresponding bit map: (a) QPSK; (b) 16-QAM.

4-PSK (or QPSK) symbols are taken from an alphabet with four elements, each identified by two bits. Similarly, each element in a 16-QAM constellation is uniquely identified by four bits.

Modulator. Let a_k = a_{k,I} + j a_{k,Q}, where a_{k,I} = Re[a_k] and a_{k,Q} = Im[a_k]. Typically the pulse h_Tx is real-valued; the baseband modulated signal, however, is complex-valued:

    s^(bb)(t) = Σ_{k=−∞}^{+∞} a_k h_Tx(t − kT)     (7.23)

             = Σ_{k=−∞}^{+∞} a_{k,I} h_Tx(t − kT) + j Σ_{k=−∞}^{+∞} a_{k,Q} h_Tx(t − kT)     (7.24)
Let f₀ (ω₀ = 2π f₀) and φ₀ be, respectively, the carrier frequency (radian frequency) and phase. We define

    s^(+)(t) = (1/2) s^(bb)(t) e^{j(ω₀t + φ₀)}   ←F→   S^(+)(f) = (1/2) S^(bb)(f − f₀) e^{jφ₀}     (7.25)

then the real-valued transmitted signal is given by

    s(t) = 2 Re{s^(+)(t)}   ←F→   S(f) = S^(+)(f) + S^(+)*(−f)     (7.26)

The transformation in the frequency domain from s^(bb) to s is illustrated in Figure 7.7.

Figure 7.7. Fourier transforms of baseband signal and modulated signal [B = (1 + ρ)/(2T)].

Power spectral density of a QAM signal

From the analysis leading to (1.398), s^(bb) is a cyclostationary process of period T with an average PSD (see (1.398)):

    P̄_s^(bb)(f) = |(1/T) H_Tx(f)|² P_a(f)     (7.27)
Moreover, using (7.25) and (7.26), we get that s is a cyclostationary random process with an average PSD given by³

    P̄_s(f) = (1/4) [P̄_s^(bb)(f − f₀) + P̄_s^(bb)(−f − f₀)]     (7.28)

We note that the bandwidth B of the transmitted signal is equal to twice the bandwidth of h_Tx.

³ The result (7.28) needs clarification. We first consider the situation where the condition r_{s^(bb) s^(bb)*}(t, t − τ) = 0 is satisfied, as for example in the case of QAM with i.i.d. circularly symmetric symbols (see (1.407)). From the equation (similar to (1.304)) that relates r_s to r_{s^(bb)} and r_{s^(bb) s^(bb)*}, we find that the process s is cyclostationary in t of period T, as the cross-correlations are zero. Taking the average correlation over the period T, and expanding r_{s^(bb)}(t, t − τ) in Fourier series (in the variable t), the results (7.27) and (7.28) follow. We now consider the situation where r_{s^(bb) s^(bb)*}(t, t − τ) ≠ 0, as for example in the case of PAM-DSB (see Appendix 7.C). If a real value T_p exists, such that T_p is an integer multiple of both T and 1/f₀, then s is cyclostationary in t of period T_p. Taking the average correlation over the period T_p, the cross-correlation terms in the equation similar to (1.304) do not vanish; however, for 1/T ≪ f₀ the autocorrelation terms approximate the same terms found in the previous case, and the cross-correlation terms become negligible.

Three equivalent representations of the modulator

1. As s^(bb) is in general a complex-valued signal, an implementation based on this representation requires a processor capable of complex arithmetic. From (7.23) and (7.26),

    s(t) = Re[s^(bb)(t) e^{j(ω₀t + φ₀)}] = Re[e^{j(ω₀t + φ₀)} Σ_{k=−∞}^{+∞} a_k h_Tx(t − kT)]     (7.29)

The block-diagram representation of (7.29) is shown in Figure 7.8.

Figure 7.8. QAM transmitter: complex-valued representation.

2. As e^{j(ω₀t + φ₀)} = cos(ω₀t + φ₀) + j sin(ω₀t + φ₀), from (7.24), (7.29) becomes

    s(t) = cos(ω₀t + φ₀) Σ_{k=−∞}^{+∞} a_{k,I} h_Tx(t − kT) − sin(ω₀t + φ₀) Σ_{k=−∞}^{+∞} a_{k,Q} h_Tx(t − kT)     (7.30)

3. Using the polar notation a_k = |a_k| e^{jθ_k}, (7.29) takes the form

    s(t) = Re[Σ_{k=−∞}^{+∞} |a_k| e^{j(ω₀t + φ₀ + θ_k)} h_Tx(t − kT)]     (7.31)

         = Σ_{k=−∞}^{+∞} |a_k| cos(ω₀t + φ₀ + θ_k) h_Tx(t − kT)     (7.32)

If |a_k| is a constant, we obtain a PSK signal (see Chapter 6), where the information bits select only the value of the carrier phase. The implementation of a QAM transmitter based on (7.31) is discussed in Appendix 7.D.

Coherent receiver

In the absence of noise the general scheme of a coherent receiver, which follows the scheme of Figure 6.40, is shown in Figure 7.9.

Figure 7.9. QAM receiver: complex-valued representation.

The received signal is given by

    s_Ch(t) = s * g_Ch(t)   ←F→   S_Ch(f) = S(f) G_Ch(f)     (7.33)

First, the received signal is translated to baseband by a frequency shift:

    s_M0(t) = s_Ch(t) e^{−j(ω₀t + φ₁)}   ←F→   S_M0(f) = S_Ch(f + f₀) e^{−jφ₁}     (7.34)

then it is filtered by a lowpass filter (LPF) g_Rc:

    s_R(t) = s_M0 * g_Rc(t)   ←F→   S_R(f) = S_M0(f) G_Rc(f)     (7.35)
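The equivalence between the complex-valued representation (7.29) and the in-phase/quadrature representation (7.30) can be verified numerically; the rectangular pulse, carrier frequency, phase, and QPSK symbols below are arbitrary illustrative choices:

```python
import cmath
import math

T, f0, phi0 = 1.0, 10.0, 0.3                   # hypothetical parameters
symbols = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]   # QPSK symbols

def h_tx(t):
    # rectangular transmit pulse of duration T
    return 1.0 if 0.0 <= t < T else 0.0

def s_complex(t):
    # representation 1, (7.29): Re[ e^{j(w0 t + phi0)} * s_bb(t) ]
    bb = sum(a * h_tx(t - k * T) for k, a in enumerate(symbols))
    return (bb * cmath.exp(1j * (2 * math.pi * f0 * t + phi0))).real

def s_iq(t):
    # representation 2, (7.30): cosine branch minus sine branch
    c = math.cos(2 * math.pi * f0 * t + phi0)
    s = math.sin(2 * math.pi * f0 * t + phi0)
    a_i = sum(a.real * h_tx(t - k * T) for k, a in enumerate(symbols))
    a_q = sum(a.imag * h_tx(t - k * T) for k, a in enumerate(symbols))
    return a_i * c - a_q * s
```

The two functions produce the same waveform at every instant, which is the point of the identity Re[(x + jy) e^{jθ}] = x cos θ − y sin θ used to pass from (7.29) to (7.30).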
7.3 Baseband equivalent model of a QAM system

Recalling the relations of Figure 1.30, we illustrate the baseband equivalent scheme with reference to Figure 7.11,⁴ obtained by assuming that the transmit and receive carriers have the same frequency. Figure 7.10 illustrates the transformations operated by the receiver.

Figure 7.10. Frequency responses of the channel and of signals at various points of the receiver.

Figure 7.11. Baseband equivalent model of a QAM transmission system.

We note that in the above analysis the receive carrier phase φ₁ may be different from the transmit carrier phase φ₀, as the channel may introduce a phase offset. If g_Rc is a non-distorting ideal filter with unit gain, then s_R(t) = (1/2) s_Ch^(bb)(t). In the particular case where g_Rc is a real-valued filter, the receiver of Figure 7.9 is simplified into that of Figure 7.10. This is the same as assuming e^{j(2π f₀ t + φ₀)} as reference carrier; therefore the signals s^(bb) and r^(bb) of Figure 7.11 are defined apart from the term e^{jφ₀}.

⁴ We note that the term e^{jφ₀} has been moved to the receiver.
The analysis developed in Section 7.1 holds also for QAM in the presence of additive noise: the only difference is that in the case of a QAM system the noise is complex-valued, with orthogonal in-phase and quadrature components. Hence the scheme of Figure 7.12 is a reference scheme for both PAM and QAM, and we can study QAM systems by the same method that we have developed for PAM systems, assuming that all signals and filters are in general complex-valued.

Figure 7.12. Baseband equivalent model of PAM and QAM transmission systems.

7.3.1 Signal analysis

We refer to Section 1.4 for an analysis of passband signals. We recall here that if for f > 0 the spectral density of w(t) is P_w(f), then the noise w^(bb)(t) = w_I(t) + j w_Q(t) has a power spectral density given by

    P_w^(bb)(f) = 2 P_w(f + f₀)   for f ≥ −f₀,   0 elsewhere     (7.36)

Moreover, if P_w(f) is an even function around the frequency f₀, the real and imaginary parts of w^(bb) are orthogonal, that is w_I ⊥ w_Q, with P_{w_I}(f) = P_{w_Q}(f) = (1/2) P_w^(bb)(f) and P_{w_I w_Q}(f) = 0; hence

    σ²_{w_I} = σ²_{w_Q} = (1/2) σ²_{w^(bb)} = σ²_w     (7.37)

To simplify the analysis, for the study of a QAM system we will adopt the PAM model of Figure 7.12, where

    G_C(f) = G_Ch(f)   for PAM
    G_C(f) = (1/√2) e^{j(φ₁ − φ₀)} G_Ch(f + f₀) 1(f + f₀)   for QAM     (7.38)

We note that for QAM the factor e^{j(φ₁ − φ₀)} appears in the impulse response of the transmission channel, and the factor 1/√2 in the impulse response of the receive filter.

We summarize the definitions of the various signals in QAM systems.

1. Sequence of input symbols, {a_k}, with values from a complex-valued alphabet A. In PAM systems, the symbols of the sequence {a_k} assume real values.

2. Modulated signal:

    s(t) = Σ_{k=−∞}^{+∞} a_k h_Tx(t − kT)     (7.41)

3. Signal at the channel output:

    s_C(t) = Σ_{k=−∞}^{+∞} a_k q_C(t − kT),   q_C(t) = h_Tx * g_C(t)     (7.42)

4. Circularly-symmetric, complex-valued additive Gaussian noise, w_C(t) = w'_I(t) + j w'_Q(t). In the case of white noise it is⁶

    P_{w'_I}(f) = P_{w'_Q}(f) = N₀/2 (V²/Hz),   P_{w_C}(f) = N₀ (V²/Hz)     (7.43)

5. Received or observed signal:

    r_C(t) = s_C(t) + w_C(t)     (7.44)

6. Signal at the output of the complex-valued amplifier-equalizer filter g_Rc:⁵

    r_R(t) = s_R(t) + w_R(t)     (7.45)

where

    s_R(t) = Σ_{k=−∞}^{+∞} a_k q_R(t − kT),   q_R(t) = q_C * g_Rc(t)     (7.46)

and

    w_R(t) = w_C * g_Rc(t)     (7.47)

7. Signal at the decision point at the instant t₀ + kT:

    y_k = r_R(t₀ + kT)

8. Sequence of detected symbols, {â_k}.

⁵ With reference to the scheme of Figure 7.9, the relation between the impulse responses of the receive filters is given by g_Rc(t) = (1/√2) g_Rc^(bb)(t); to simplify the notation, the filter will be indicated in many passband schemes simply as g_Rc.

⁶ In fact (7.43) should include the condition f > −f₀; because the bandwidth of g_Rc is smaller than f₀, this condition can be omitted.
Signal-to-noise ratio

The performance of a system is expressed as a function of the signal-to-noise ratio

    Γ = M_{s_Ch} / ((N₀/2) 2B_min) = M_{s_Ch} / (N₀ B_min) = E_{s_Ch} / (N₀ (B_min T))     (7.48)

where M_{s_Ch} = E[|s_Ch(t)|²] is the statistical power of the channel-output signal, E_{s_Ch} = T M_{s_Ch}, the noise w is assumed white with P_w(f) = N₀/2, and B_min is the minimum bandwidth, equal to 1/(2T) and 1/T in the cases of PAM and QAM systems, respectively.

PAM systems. For an i.i.d. input symbol sequence, observing (7.8) and using (1.399), we have E_{s_Ch} = M_a E_{q_Ch}, where M_a is the statistical power of the data and E_{q_Ch} is the energy of the pulse h_Tx * g_Ch. Then, for B_min = 1/(2T),

    Γ = M_a E_{q_Ch} / (N₀/2)     (7.53)

Because for PAM, observing (7.38), q_Ch = q_C, (7.53) can be expressed as

    Γ = M_a E_{q_C} / (N₀/2)     (7.54)

QAM systems. From (1.295), assuming that s_C(t) is circularly symmetric (see (1.407)), we obtain

    E[|s_Ch(t)|²] = (1/2) E[|s_Ch^(bb)(t)|²]     (7.55)

Hence, as B_min = 1/T, using (7.48) and (7.55) the signal-to-noise ratio can be expressed as⁷

    Γ = (M_a E_{q_C}/2) / (N₀/2) = M_a E_{q_C} / N₀     (7.56)

We note that (7.56) coincides with (7.53) of PAM systems.

⁷ The term M_a E_{q_C}/2 represents the energy per component of s_C(t), that is the energy of both Re[s_C(t)] and Im[s_C(t)], given by T E[(Re[s_C(t)])²] and T E[(Im[s_C(t)])²].

7.3.2 Characterization of system elements

We consider some characteristics of the signals in the scheme of Figure 7.12.

Transmitter

The choice of the transmit pulse h_Tx is quite important, because it determines the bandwidth of the system (see (7.17) and (7.28)). Two choices are shown in Figure 7.13:

    h_Tx(t) = w_T(t) = rect((t − T/2)/T)

with duration equal to one modulation interval, or a pulse h_Tx(t) with longer duration and smaller bandwidth.

Figure 7.13. Two examples of transmit pulse h_Tx.

Transmission channel

The transmission channel is modelled as a time-invariant linear system; therefore it is represented by a filter having impulse response g_Ch(t). As described in Chapter 4, the channel is bandlimited, with a finite bandwidth f₂ − f₁; the shape of the frequency response G_Ch(f) is as represented in Figure 7.14, where the passband goes from f₁ to f₂.

Figure 7.14. Frequency response of the transmission channel.

For transmission over cables, f₁ may be of the order of a few hundred Hertz, whereas for radio links f₁ may be in the range of MHz or GHz. In any case, the majority of channels encountered in practice are characterized by frequency responses having a null at DC. PAM (possibly using a line code) as well as QAM transmission systems may be considered over cables; for transmission over radio, instead, a PAM signal, with wide spectrum, needs to be translated in frequency (PAM-DSB or PAM-SSB), or a QAM system may be used, assuming as carrier frequency f₀ the center frequency of the passband (f₁, f₂).

With reference to the general model of Figure 7.12, we adopt the polar notation for G_C:

    G_C(f) = |G_C(f)| e^{j arg G_C(f)}     (7.58)

Let B be the bandwidth of s(t). A channel presents ideal characteristics if the following two properties are satisfied:

1. the magnitude response is a constant for |f| < B,

    |G_C(f)| = G₀   for |f| < B     (7.59)

2. the phase response is proportional to f for |f| < B,

    arg G_C(f) = −2π f t₀   for |f| < B     (7.60)

Under these conditions, known as Heaviside conditions for the absence of distortion, s is reproduced at the output of the channel without distortion, that is

    s_C(t) = G₀ s(t − t₀)     (7.61)

In practice, for channels encountered in practice conditions (7.59) and (7.60) are too stringent; for PAM and QAM transmission systems we will refer instead to the Nyquist criterion. In short, channels introduce both "amplitude distortion" and "phase distortion". An example of frequency response of a radio channel is given in Figure 4.32: the overall effect is that the signal s_C(t) may be very different from s(t).

Receiver

We return to the receiver structure of Figure 7.12, consisting of a filter g_Rc followed by a sampler with sampling rate 1/T, and a data detector. In general, if the frequency response of the receive filter G_Rc(f) contains a factor C(e^{j2πfT}), periodic of period 1/T, such that the following factorization holds:

    G_Rc(f) = G_M(f) C(e^{j2πfT})     (7.62)

where G_M(f) is a generic function, then the filter-sampler block before the data detector of Figure 7.12 can be represented as in Figure 7.15, where the sampler is followed by a discrete-time filter. It is easy to prove that in the two systems the relation between r_C(t) and y_k is the same.

Figure 7.15. Receiver structure with analog and discrete-time filters.

The last element in the receiver is the data detector, which associates with each value of y_k a possible value of a_k in the constellation. One of the simplest data detectors is the instantaneous nonlinear threshold detector. Using the rule of deciding for the symbol closest to the sample y_k, the decision regions for a QPSK system and a 16-QAM system are illustrated in Figure 7.16. Ideally, y_k should be equal to a_k; in practice, linear distortion and additive noise, the only disturbances considered here, may determine a significant deviation of y_k from the desired symbol a_k.

Figure 7.16. Values of y_k = y_{k,I} + j y_{k,Q} at the decision point and decision regions: (a) QPSK; (b) 16-QAM.

7.3.3 Intersymbol interference

Discrete-time equivalent system

From (7.45), in the presence of noise and linear distortion, at the decision point the generic sample is expressed as

    y_k = s_{R,k} + w_{R,k}     (7.63)

where, from (7.46),

    s_{R,k} = s_R(t₀ + kT) = Σ_{i=−∞}^{+∞} a_i q_R(t₀ + (k − i)T)     (7.64)

and

    w_{R,k} = w_R(t₀ + kT)     (7.65)
t/ D q R . Decision regions for a QPSK system and a 16QAM system.70) We observe that. the more valid . timeshifted by t0 .3. Baseband equivalent model of a QAM system 557 yk.Q 3 1 yk.66) (7. which behaves as a disturbance with respect to the desired term ak h 0 .7. (b) 16QAM. For the analysis. The coefﬁcients fh i gi 6D0 are called interferers. it is often convenient to approximate ik as noise with a Gaussian distribution: the more numerous and similar in amplitude are the interferers. Moreover (7.k D where ik D C1 X i D 1. Figure 7. i 6Dk C1 X iD 1 (7. the detection of ak from yk by a threshold detector takes place in the presence of the term ik .t0 C t/ and deﬁning h i D h.k (7.65) becomes yk D ak h 0 C ik C w R. Introducing the version of q R .Q 3 1 1 yk.68) ai h k i D ÐÐÐ C h 1 akC1 C h 1 ak 1 C h 2 ak 2 C ÐÐÐ (7.i T / D q R .I (a) QPSK. as h.I 3 1 3 yk. even in the absence of noise.67) ai h k i D ak h 0 C ik (7.69) represents the intersymbol interference (ISI).t0 C i T / it follows that s R.17.
6 on page 62.18. f / D PwC . with period T.75) In particular.k g given by (7.Re[w R. that relates the signal at the decision point to the data transmitted over a discretetime channel with impulse response given by the sequence fh i g.65). Discretetime equivalent scheme. where PwC .k D ¦w R D PwC .d.77) In the case of PAM (QAM) transmission over a channel with white noise. the variance of w R. the variance per dimension of the noise is given by PAM QAM 2 ¦ I2 D E[w 2 ] D ¦w R R.I m[w R. of a QAM system.18). Transmission over dispersive channels Figure 7. is this approximation. we derive the discretetime equivalent scheme.k ]/2 ] D 1 ¦w R 2 (7. (7. symbols.74) Pw R Â f ` 1 T In any case. called overall discretetime equivalent impulse response of the system.71) i D 1.k is equal to that of w R and is given by Z C1 2 2 ¦w R. i 6D0 Variance of ik : jh i j2 (7. C1 X Mean value of ik : m i D ma hi (7. the ﬁrst two moments of ik are easily determined.i. f /jG Rc .73) Ã (7. In the case of i. Concerning the additive noise fw R. f /j2 d f 1 (7.8 being Pw R . f / D N0 =2 (N0 ). i 6D0 2 2 ¦i D ¦a C1 X i D 1.k 2 ¦ I2 D E[.68).k g.78) 8 See Observation 1.k ]/2 ] D E[. f /jG Rc . .72) From (7.k g is given by Pw R.76) (7. with period T (see Figure 7. f /j2 the PSD of fw R. f / D C1 X `D 1 (7. with fs R.75) yields a variance per dimension equal to ¦ I2 D N0 E g Rc 2 (7.558 Chapter 7.k .
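The two moments just given, namely the mean m_a Σ_{i≠0} h_i and the variance σ_a² Σ_{i≠0} |h_i|² for i.i.d. symbols, are easy to verify numerically. The following Python sketch uses a made-up set of taps {h_i} and i.i.d. 4-PAM symbols (both are assumptions for the example only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical overall discrete-time impulse response {h_i}; index `main` is i = 0.
h = np.array([0.05, -0.1, 1.0, 0.2, -0.05])
main = 2
interferers = np.delete(h, main)

# i.i.d. 4-PAM symbols a_k in {-3, -1, 1, 3}: m_a = 0, sigma_a^2 = 5
alphabet = np.array([-3.0, -1.0, 1.0, 3.0])
m_a = alphabet.mean()
var_a = alphabet.var()

# Analytical first two moments of the ISI i_k, per (7.71)-(7.72)
mean_isi = m_a * interferers.sum()
var_isi = var_a * np.sum(interferers ** 2)

# Monte Carlo estimate of the same moments
a = rng.choice(alphabet, size=(200_000, interferers.size))
isi = a @ interferers
print(mean_isi, var_isi, isi.mean(), isi.var())
```

With many interferers of comparable amplitude the histogram of `isi` also approaches the Gaussian shape invoked in the text.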
79) the lefthand side of (7.t/ that satisﬁes the conditions (7. are illustrated in Figure 7. yk is a replica of ak . We observe that (7. From the conditions (7. to obtain yk D ak it must be: Nyquist criterion in the time domain ( h0 D 1 hi D 0 i 6D 0 (7. hence the condition for the absence of ISI is formulated in the frequency domain for the generic pulse h as: Nyquist criterion in the frequency domain C1 X `D 1 H Â f ` T Ã DT (7.1 The frequency 1=.81) we deduce an important fact: the Nyquist pulse with minimum bandwidth is given by: h. C1 X iD 1 hi e j2³ f i T D Â C1 1 X H f T `D 1 ` T Ã (7. so that.3.80) where H.90).t/. Baseband equivalent model of a QAM system 559 where E g Rc is the energy of the receive ﬁlter.68).t/ D h 0 sinc t T F ! H.19a. The solution is the Nyquist criterion for the absence of distortion in digital transmission.78) holds for PAM as well as for QAM. in the absence of noise. From (7. is called Nyquist frequency. .7. which coincides with half of the modulation frequency. for three values of the parameter ². A pulse h.82) Deﬁnition 7.2T /. A family of Nyquist pulses widely used in telecommunications is composed of the raised cosine pulses whose time and frequency plots.79) is said to be a Nyquist pulse with modulation interval T .79) have their equivalent in the frequency domain. The conditions (7.81) From (7. They can be derived using the Fourier transform of the sequence fh i g (1. f / is the Fourier transform of h.80) is equal to 1. f / D Th 0 rect f 1=T (7.79) and ISI vanishes. Nyquist pulses The problem we wish to address consists in ﬁnding the conditions on the various ﬁlters of the system.
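The frequency-domain criterion states that the folded spectrum Σ_ℓ H(f − ℓ/T) must be flat and equal to T. A minimal numerical check, assuming a raised cosine H(f) (defined in the next subsection) with an arbitrarily chosen roll-off:

```python
import numpy as np

T = 1.0
rho = 0.35  # roll-off factor, arbitrary choice for this check

def H(f):
    """Raised cosine frequency response, bandwidth (1 + rho)/(2T)."""
    f = np.abs(np.asarray(f, dtype=float))
    out = np.zeros_like(f)
    flat = f <= (1 - rho) / (2 * T)
    roll = (f > (1 - rho) / (2 * T)) & (f <= (1 + rho) / (2 * T))
    out[flat] = T
    out[roll] = T * np.cos(np.pi * T / (2 * rho)
                           * (f[roll] - (1 - rho) / (2 * T))) ** 2
    return out

f = np.linspace(-0.5 / T, 0.5 / T, 1001)
# Fold the spectrum with period 1/T; two neighbouring replicas suffice here,
# since H(f) vanishes for |f| > (1 + rho)/(2T).
folded = H(f) + H(f - 1 / T) + H(f + 1 / T)
print(np.max(np.abs(folded - T)))  # ~0: the Nyquist criterion is satisfied
```

Repeating the check with a pulse whose bandwidth is below 1/(2T) (e.g. `rho` formally negative is not allowed, but a rect narrower than 1/T) leaves gaps in the folded spectrum, illustrating point 1 of Observation 7.1.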
We define

    rcos(x, ρ) = 1                                        0 ≤ |x| ≤ (1 − ρ)/2
               = cos²[ (π/(2ρ)) (|x| − (1 − ρ)/2) ]       (1 − ρ)/2 < |x| ≤ (1 + ρ)/2          (7.83)
               = 0                                        |x| > (1 + ρ)/2

then

    H(f) = T rcos( f/(1/T), ρ )          (7.84)
f / D T rcos 1=T and inverse Fourier transform ½ Ä Â Ã½ Â Ã Ä t 1 t 1 t C ² cos ³ C sinc ² C h.t/ D . f / in (7.88) .t/ D sinc T 4 T Â Ã Â Ã t t D sinc cos ³² T T 1 1 2 Ã Â t C sinc ² T 1 2 Ã½ 1 Ã Â t 2 2² T (7. Baseband equivalent model of a QAM system 561 with inverse Fourier transform Â Â Ã Ä t t ³ sinc ² C h.² df D T 1 T rcos 1=T 1 ²Á D H. for a QAM system the required bandwidth is . the bandwidth of the baseband equivalent system is equal to . Consequently. Later we will also refer to square root raised cosine pulses.² df D 1 T rcos h.89) Ã f .1 C ²/=. the area of H.0/ D 1=T 1 and the energy is Â Ã Z C1 f 2 2 Eh D .2T /.1 ²/ T T 4 T 4 Ä Â Ã½ Â Ã t 1 1 t C ² cos ³ sinc ² T 4 T 4 ½ Ä ½ Ä t t t C 4² cos ³.0/ D 1 C1 (7.1 ²/ sinc .² 1=T df D T (7.² H.85) It is easily proven that.3.86) .0/ 1 4 ²Á 4 (7.1 ²/ T T T " D Â Ã2 # t t ³ 1 4² T T In this case Z h.0/h. that is Ã Â Z C1 f (7.87) We note that. from (7.84) is equal to one.7.84).91) . with frequency response given by s Ã Â f (7.90) and the pulse energy is given by Z Eh D C1 1 T 2 rcos f .1 C ²/ sin ³ .1 C ²/=T . from (7.83).² df D 1 T rcos 1=T Â Â Ã s Â ² 1 4 ³ Ã (7.
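The closed-form raised cosine pulse h(t) above can be coded directly; the only care needed is the removable singularity at t = ±T/(2ρ), where the limit equals (π/4) sinc(1/(2ρ)). A sketch (the roll-off value is arbitrary):

```python
import numpy as np

def raised_cosine(t, T=1.0, rho=0.5):
    """Raised cosine pulse h(t) = sinc(t/T) cos(pi rho t/T) / (1 - (2 rho t/T)^2)."""
    x = np.asarray(t, dtype=float) / T
    denom = 1.0 - (2.0 * rho * x) ** 2
    sing = np.isclose(denom, 0.0)          # removable 0/0 at t = +/- T/(2 rho)
    h = np.sinc(x) * np.cos(np.pi * rho * x) / np.where(sing, 1.0, denom)
    return np.where(sing, (np.pi / 4.0) * np.sinc(1.0 / (2.0 * rho)), h)

t = np.arange(-5, 6)        # integer multiples of T
print(raised_cosine(t))     # 1 at t = 0, ~0 elsewhere: the Nyquist zero crossings
```

Note that `np.sinc` implements sin(πx)/(πx), matching the sinc used in the text, and that for ρ = 0.5 the singular points fall exactly on t = ±T, where the limit value (π/4) sinc(1) = 0 preserves the zero crossing.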
89) and (7. We consider a PAM transmission system where y. as a function of t0 . Observation 7. a data sequence can be transmitted with modulation rate 1=T without errors if H. Plots of h.2T /. On the other hand. must have a bandwidth equal to at least 1=.t0 / C i0 . are further apart. f / satisﬁes the Nyquist criterion and there is no noise. is in the range between 0 and 1.t0 /] need to be represented. We consider quaternary transmission with ak D Þn 2 A D f 3.19b. for i 6D 0.t0 / D C1 X iD 1 ai q R . f / in (7. 1.93) D ÐÐÐ Ca C T / C a1 q R . from (7.t0 iT/ (7. We note that ² determines how fast the pulse decays in time.t0 / D C1 X i D 1. is given by y0 D y. We now illustrate. the ISI may result a dominant disturbance with respect to noise and impair the performance of the system. for various values of ² are shown in Figure 7.t0 /.88) is not the frequency response of a Nyquist pulse.68) we observe that if the samples fh i g. for Þn 2 A. the pattern of y0 as a function of t0 is shown in Figure 7. a graphic method to represent the effect of the choice of t0 for a given pulse q R .t0 1 q R . Transmission over dispersive channels We note that H. i0 . 1.21: it is seen that the possible values of y0 . with frequency response GC . 3g (7.1 From the Nyquist conditions we deduce that: 1. In the absence of noise. are not sufﬁciently small with respect to h 0 .t0 / D 0 and y0 D a0 q R .88). f /. 2.t0 /] and Im[y.94) and pulse q R as shown in Figure 7.67) the discretetime impulse response fh i g depends on the choice of the timing phase t0 (see Chapter 14) and on the pulse shape q R .t0 2T / C Ð Ð Ð (7. otherwise intersymbol interference cannot be avoided. called excess bandwidth parameter or rolloff factor.t0 is the ISI.t0 / is real: for a QAM system.t/ and H.92) D a0 q R . given respectively by (7.t0 / where i0 .66) and (7.t0 iT/ T / C a2 q R . i 6D0 ai q R . through an example. Eye diagram From (7. both Re[y.562 Chapter 7. at the decision point the sample y0 . 
therefore they offer a greater . In the absence of ISI. The parameter ².20. In relation to each possible value Þn of a0 . the channel.
Figure 7.20. Pulse shape qR(t) for the computation of the eye diagram.

Figure 7.21. Desired component αn qR(t0) as a function of t0, for αn ∈ {−3, −1, 1, 3}.
564 Chapter 7. We also note that.t0 / around the desired sample Þn q R .t0 / is determined by the values imax .t0 / (7. The height a is an indicator of the noise immunity of the system. i 6D0 jq R .t0 . a1 . In the general case. a0 DÞn fak g. Consequently the eye may be wider as compared to the case of i. we show in Figure 7. Þn / Ä iabs .t0 / D Þmax we have that imax . a2 .t0 / C Þn 2 A imin . which in this example occurs at instant t0 D 1:5T . it may result in i0 .98) and iabs .23 the eye diagram obtained with a raised cosine pulse q R . symbols. In fact.t0 / (7. in general. Þn / ½ iabs .96) The eye diagram is characterized by the 2M proﬁles ( imax .i. The price we pay is a larger bandwidth of the transmission channel. In this example. In general. however. deﬁning Þmax D max Þn n (7. Þn / Þn q R . Þn / D fak g. The width b indicates the immunity with respect to deviations from the optimum timing phase.t0 / imin . the M 1 “pupils” of the eye diagram have a shape as illustrated in Figure 7.t0 /.t0 / i0 .t0 . a0 DÞn max min i0 .d.24. the choice t0 D 1:5T guarantees the largest margin against noise.100) (7.22.t0 / D iabs . where there exists correlation between the symbols of the sequence fak g. The range of variations of y0 .97) If the symbols fak g are statistically independent with balanced values. We observe that as a result of the presence of ISI. and this value is added to the desired sample a0 q R . a 1 . for a given t0 and for a given message : : : .101) C1 X i D 1. .t0 i T /j (7.99) We note that both functions do not depend on a0 D Þn . the values of y0 may be very close to each other.t0 .t0 . For the considered pulse. Þn / (7. : : : .t0 / 6D 0. for two values of the rolloff factor. the eye diagram is given in Figure 7. and therefore reduce considerably the margin against noise. For example a raised cosine pulse with ² D 1 offers greater immunity against errors in the choice of t0 as compared to the case ² D 0:125. 
it is easy to show that imax .t0 / and imin .t0 .t0 /. where two parameters are identiﬁed: the height a and the width b. the timing phase that offers the largest margin against noise is not necessarily found in relation to the peak of q R . For quaternary transmission.t0 . Þn / D imin .95) (7. Transmission over dispersive channels margin against noise in relation to the peak of q R .t0 / D iabs . that is both Þn and Þn belong to A.
7. we must omit plotting the values of y.αn) 1 y0(t0) 0 −1 −2 −3 −T 0 T t0 2T 3T 4T Figure 7. are mapped on the same interval. for example. as they would be affected by the transient behavior of the system. Baseband equivalent model of a QAM system 565 3 αnqR(t0)+imax(t0. [t1 C 2T.k ak 2 A (7. To plot the eye diagram we need in principle to generate all the Mary symbol sequences of length Nh : in this manner we will reproduce the values of y.t/ for the ﬁrst and last Nh 1 modulation intervals. A long random sequence of symbols fak g is transmitted over the channel. We note that.3. for transmission with i.22.20. t1 C 3T /. t1 C T /. t1 C T /. and the portions of the curve y. from (7. Then the contours of the obtained eye diagram correspond to the different proﬁles (7. We now illustrate an alternative method to obtain the eye diagram.65) the samples of the received signal at the decision point are given by yk D ak C w R. and deﬁne Nh D dth =T e.t/ is at most equal to Nh .d. Eye diagram for quaternary transmission and pulse qR of Figure 7.αn) 2 αnqR(t0)+imin(t0.t/ in correspondence of the various proﬁles.t/. symbols.i. if the pulse q R .4 Performance analysis Symbol error probability in the absence of ISI If the Nyquist conditions (7.t/ D s R . has a duration equal to th .102) . [t1 C T. we select t1 so that the center of the eye falls in the center of the interval [t1 . 7. on [t1 . Moreover. t1 C T /. at every instant t 2 < the number of symbols fak g that contribute to y. Typically. it means that for all values of t0 the worst case ISI is larger than the desired component and the eye is shut. t 2 <. t1 C 2T /.79) are veriﬁed.97). : : : ]. If the contours of the eye do not appear.3.t/ relative to the various intervals [t1 .
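The "long random sequence" method just described can be sketched as follows, assuming quaternary PAM and a truncated raised cosine overall pulse q_R (both choices are illustrative only). The oversampled waveform is folded onto one modulation interval after discarding the transients:

```python
import numpy as np

rng = np.random.default_rng(1)
T, Q = 1.0, 8                    # modulation interval and oversampling factor

def raised_cosine(t, rho=0.5):
    x = np.asarray(t, dtype=float) / T
    denom = 1.0 - (2 * rho * x) ** 2
    sing = np.isclose(denom, 0.0)
    h = np.sinc(x) * np.cos(np.pi * rho * x) / np.where(sing, 1.0, denom)
    return np.where(sing, (np.pi / 4) * np.sinc(1 / (2 * rho)), h)

# Long random 4-PAM sequence through the overall pulse q_R
a = rng.choice([-3.0, -1.0, 1.0, 3.0], size=400)
span = 8                                         # pulse truncated to +/- span*T
t = np.arange(-span * T, span * T + T / Q, T / Q)
qR = raised_cosine(t)
x = np.zeros(a.size * Q)
x[::Q] = a
y = np.convolve(x, qR)                           # noiseless signal at the decision point

# Fold into one-symbol traces, discarding the edges affected by the transient
start = (span + 2) * Q
traces = y[start:-start]
traces = traces[: (traces.size // Q) * Q].reshape(-1, Q)
# Column 0 is the optimum sampling instant: values cluster on the four levels
print(np.unique(np.round(traces[:, 0], 6)))
```

Plotting the rows of `traces` against one interval of t reproduces diagrams like Figure 7.23; with a Nyquist pulse the traces pass exactly through ±1, ±3 at the sampling instant, and the eye opening shrinks as the sampling phase moves away from it.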
Figure 7.23. Eye diagram for quaternary transmission and raised cosine pulse qR with roll-off factor: (a) ρ = 0.125 and (b) ρ = 1.

Figure 7.24. Height a and width b of the "pupil" of an eye diagram.
9 We observe that this memoryless decision criterion is optimum only if the noise samples fw R. deﬁning Ã Â dm 2 (7.106) D 2¦ I and using (6. p / (7. and on the variance per dimension of the noise w R. ¦ I2 is given by (7. we have the following correspondences: r R.57). r2 ]T ! s D [s1 . Im[w R.e.6. We also note that (7.106) it follows that Â Pe ' 4 1 D 2 N0 E g Rc (7.107) and (7.109) Apparently. With reference to Table 7.7. then. as is the case if equation (7. the error probability depends on the distance dm between adjacent signals.9 Moreover.k g are statistically independent.Q ] w R. ak.1. regarding yk as an isolated sample. Hence.Q ] ak D [ak. we consider dm D 2h 0 D 2.B. that is the minimum value of Pe . given the observation yk . only the variance of the noise w R. but it must be chosen such that the condition (7.1 and to Figure 7. and the values assumed by ak are equally likely.122) and (6.k D yk D [yk. here g Rc is not arbitrary. and still considering the ML detection criterion described in Section 6.103) (7.3.78).105) If w R.I .104) (7. Baseband equivalent model of a QAM system 567 For a memoryless decision rule on yk .k ]] ! r D [r1 . s2 ]T ! w D [w1 . the above equation could lead to choosing a ﬁlter g Rc with very low energy. However.k is needed and not its PSD.k ]. Now. i.108) p M We note that. we have MPAM Â Pe D 2 1 1 M Ã Q.k has a circularly symmetric Gaussian probability density function. ¦ I2 .108) imply that the best performance. w2 ]T (7. the detection criterion leads to choosing the value of Þn 2 A that is closest to yk .79) for the absence of ISI is satisﬁed.196). We will see in Chapter 8 a criterion to design the ﬁlter g Rc .k .107) Ã 1 p Q. hence from (7. so that × 1. is obtained when the ratio is maximum. for a channel with white noise. yk.106) coincides with (6. (7. in this speciﬁc case between adjacent values Þn 2 A. MQAM / (7.I .k D [Re[w R. for the purpose of computing Pe . 
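As a numerical check of the M-PAM expression above, the following sketch compares 2(1 − 1/M) Q(d_m/(2σ_I)) with a Monte Carlo estimate obtained with a threshold (minimum-distance) detector; the constellation size and noise level are made up for the example:

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

rng = np.random.default_rng(2)
M, sigma = 4, 0.4                       # 4-PAM, noise std per dimension (arbitrary)
levels = np.arange(-(M - 1), M, 2.0)    # -3, -1, 1, 3; adjacent distance d_m = 2

Pe_formula = 2 * (1 - 1 / M) * Q(1.0 / sigma)   # d_m / (2 sigma) = 1 / sigma

# Monte Carlo with a minimum-distance (threshold) detector
a = rng.choice(levels, size=1_000_000)
y = a + sigma * rng.standard_normal(a.size)
a_hat = levels[np.argmin(np.abs(y[:, None] - levels[None, :]), axis=1)]
Pe_mc = np.mean(a_hat != a)
print(Pe_formula, Pe_mc)
```

For uniformly spaced levels with equally likely symbols the formula is exact, so the two numbers agree within the Monte Carlo accuracy.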
The general case of the computation of Pe in the presence of ISI and non-Gaussian noise is given in Appendix 7.B.

Matched filter receiver

Assuming absence of ISI, that is assuming condition (7.43) holds,
Using (7.112) 1 [G Rc .109) assumes the form D MF D 2E qC N0 (7.25. Therefore (7. We further observe that the matched ﬁlter receiver corresponds to the optimum receiver developed in Chapter 6. f / D K Q C . In CAP systems. we have Ł G Rc .10 on page 73) is provided by the receive ﬁlter g Rc matched to qC : hence the name matched ﬁlter (MF). f /] jtDt0 D K rqC . However.56) it is possible to determine the relation between the signaltonoise ratios at the decision point and 0 at the receiver input: for a QAM system it turns out MF D 0 1 2 Ma (7.0/ D 1 (7.568 Chapter 7. The only difference is that in that case the energy of the matched ﬁlter is equal to one. The scheme of a QAM system is repeated for convenience in Figure 7. Transmission over dispersive channels Assuming that the pulse qC that determines the signal at the channel output is given.E we describe a Monte Carlo method for simulations of a discretetime QAM system.4 Carrierless AM/PM (CAP) modulation The carrierless AM/PM (CAP) modulation is a passband modulation technique that is closely related to QAM (see Section 7. it is not possible by varying the ﬁlter g Rc to obtain a higher at the decision point than (7. with reference to the scheme of Figure 7. for a certain modulation system with a given pulse qC and a given 0. . from the condition h0 D F we obtain K D 1 E qC (7. and consequently a better Pe . 7.2).114) is often used as an upper bound of the system performance.114) where Ma =2 is the statistical power per dimension of the symbol sequence.113) The matched ﬁlter receiver is of interest also for another reason. the solution (see Section 1. In particular.110) where K is a constant. In Appendix 7. In this case.112) in (7. f /Q C .110) might imply. f / e j2³ f t0 (7.114). The equation (7.110) yields E g Rc D 1=E qC .12 and for white noise wC . We stress the point that. we note that we have ignored the possible presence of ISI at the decision point that the choice of (7. 
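In discrete time, the matched-filter result γ_MF = 2E_qC/N0 is a direct consequence of the Cauchy–Schwarz inequality and can be illustrated in a few lines (the pulse samples and N0 below are made-up values):

```python
import numpy as np

# Sketch: received pulse q_C plus white noise with variance (N0/2) per sample.
qC = np.array([0.2, 0.7, 1.0, 0.6, 0.1])   # hypothetical channel output pulse
N0 = 0.5
E_qC = np.sum(qC ** 2)

def snr(g):
    """Decision-point SNR for a receive filter g aligned with qC."""
    peak = np.dot(g, qC)                    # desired sample at the timing phase t0
    noise_var = (N0 / 2) * np.sum(g ** 2)   # variance of the filtered white noise
    return peak ** 2 / noise_var

gamma_mf = snr(qC)                          # matched filter: g proportional to qC
gamma_mismatched = snr(np.ones_like(qC))    # an arbitrary alternative filter
print(gamma_mf, 2 * E_qC / N0, gamma_mismatched)
```

By Cauchy–Schwarz, (Σ g q)² ≤ (Σ g²)(Σ q²) with equality only for g ∝ q_C, so any mismatched filter yields a lower SNR than 2E_qC/N0.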
the carrier is omitted by using passband ﬁlters and exploiting the periodicity of the PSD of the sequence fak g.111) Substitution of (7.
I and h T x. pb/ (7.4. Carrierless AM/PM (CAP) modulation 569 Figure 7.7. the QAM scheme of Figure 7. pb/ (7.26.I .I and g Rc.I h T x. Figure 7.I .t Q .t/ D g Rc . pb/ ak.I . QAM implementation using passband ﬁlters.t/ sin.117) (7.115) (7. where the impulse responses of the transmit ﬁlters are given by h T x.116) (7. where for simplicity we set '0 D 0.119) kT / . then the pulses h T x. if f 0 is larger than the bandwidth of h T x .t/ D g Rc .25.2³ f 0 t/ h T x.t/ cos. pb/ ak.116) are orthogonal. Using passband ﬁlters. QAM implementation using baseband ﬁlters.t kT / e j2³ f 0 t kD 1 1 X kD 1 . pb/ .t/ D h T x . pb/ .Q .Q h T x. the .26. pb/ . in a QAM system the transmitted signal can be expressed as " # 1 X s Q AM . Note that this property holds through the transmission channel.Q are related by the Hilbert transform (1. the same relation exists between the pulses g Rc.t/ cos.Q .1 we observe that.t/ sin. pb/ .2³ f 0 t/ .163). Consequently the two pulses (7. pb/ .2³ f 0 t/ and the impulse responses of the receive ﬁlters are given by g Rc.2³ f 0 t/ g Rc.Q .t/ D h T x .Q .115) and (7.25 is modiﬁed into the scheme of Figure 7.29). From (7.118) Applying Theorem 1.t/ D Re ak h T x .t Q D kT / . pb/ hypothesis always veriﬁed in practice.
the receiver uses a passband matched ﬁlter of the phasesplitter type. if the transfer function of the transmission medium is unknown a priori. pb/ D .26. ﬁltered by the transmission channel.t/e j2³ f 0 t D h T x.t/ (7. In the case where f 0 is not much larger than 1=. Modulator and demodulator for a CAP system. pb/ kT / Because the pulses h T x. On the other hand.t/ C j h T x. In general. Therefore in the scheme of Q Q Figure 7. the input to the modulation ﬁlters at instant k is given by ak D ak e j2³ f 0 kT . pb/ .Q .I h T x.Q .Q .27. We note that CAP modulation is equivalent to QAM.2T /.t .27).t .t/ D h T x .121) kT / ak. pb/ Q h T x . are still related by the Hilbert transform. pb/ kT / (7. CAP modulation is obtained by leaving out the additional phase. implemented by two realvalued ﬁlters with impulse responses given by (7.I D Re[ak e j2³ f 0 kT ] and ak. an acquisition mechanism of the carrier must be introduced and typically QAM is adopted.570 Chapter 7. the receive ﬁlters are adaptive (see Chapter 8).117) and (7. If f 0 is an integer multiple of 1=T .I .Q h T x.t/ and h T x.I . where ak.t/ D Re 1 X kD 1 1 X kD 1 # Q ak h T x . as shown in Figure 7.27. with the difference that in a QAM Q system the input symbols fak g are substituted by the rotated symbols fak g. QAM is usually selected if f 0 × 1=.120) the transmitted signal is then given by " sC A P . pb/ ak. Q which is equal to the symbol ak with an additional deterministic phase that must be removed at the receiver. for transmission channels that introduce frequency offset. then there is no difference between CAP and QAM.118) (see Figure 7. Transmission over dispersive channels Figure 7.Q D Im[ak e j2³ f 0 kT ].2T /. as usually occurs in data transmission systems over metallic cables.t .I . . CAP modulation may be preferred to QAM because it does not need carrier recovery. If we deﬁne .t/.
. the transformation that O maps input bits fb` g in output bits fb` g is called binary channel and is represented in Figure 7.1 PCM signals over a binary channel With reference to Figure 7.5.1 for PAM and to Figure 7.5 Regenerative PCM repeaters This section is divided into two parts: the ﬁrst considers a PCM encoded signal (see Chapter 5) and evaluates the effects of digital channel errors on the reproduced analog signal.124) assuming errors are i.21 on page 457).122) Correspondingly we give in Figure 7.P bit P bit 1. In the following it is useful to evaluate the error probability of words c composed of b bits: Pe.28. 1 bl 0 P bit 1.29 the statistical model associated with a memoryless binary symmetric channel. .c D 1 . 7.d. therefore the various distributions are obtainable starting from: O Pbit D P[b` 6D b` ] (7.123) Pbit /b ' (7.5. On the other hand.i. if Pbit − 1 it follows that .1 1 bPbit and Pe.d.28 (see also Figure 6.i. A binary channel is typically characterized by the bit rate Rb D 1=Tb (bit/s) and by the bit error probability.5 for QAM. the second compares the performance of an analog transmission system with that of a digital system for the transmission of analog signals represented by the PCM method.c ' bPbit Figure 7.. Binary channel associated with digital transmission.1 Pbit /b (7. The simplest model considers errors to be i.P bit 1 ^ bl 0 Figure 7.7.29. Regenerative PCM repeaters 571 7. Memoryless binary symmetric channel.
126) In (7.30. Q The operations that transform s. which represents the information contained in the instantaneous samples of an analog signal by words of b bits.k/] (7. Figure 7. b 1.k/.k/2 j j jD0 > : Q i D −sat C . The signal s . .25).572 Chapter 7.t/ is then reconstructed from the received bits by a DAC.k/ is the word of b bits transmitted over the binary channel. with a quantization stepsize 1 given by (5. PCM is essentially performed by an analogtodigital converter.k/ D [cb 1 . We assume that each word is composed of b bits: hence there are L D 2b possible words corresponding to L quantizer levels.125) c. 2) quantization. Transmission over dispersive channels Linear PCM coding of waveforms As seen in Chapter 5. with components c j .k/ 2 f0. 1.i C 1 /1 2 (7. −sat ]. we assume the rule 8 b 1 > < i D X c . Figure 7. Composite transmission scheme of an analog signal via a binary channel. j D 0. to simplify the analysis. 1g. The quantizer is assumed uniform in the range [ −sat .125) where. The inverse bit mapper of the ADC performs the following function: sq .t/ into fb` g are summarized as 1) sampling. c0 . and 3) coding. : : : . which in turn is transmitted over a binary channel.kTc / D Q i ! c.30 gives the composite scheme where an input analog signal is converted by an ADC into a binary sequence fb` g. : : : .
41) it follows Meq D −2 12 D sat 12 3 Ð 22b (7.kTc / D s.k/] the received word. using (7.kTc / D s.k/] D Pbit Q (7. sq .kTc / Q Therefore the overall relation is sq . Hence.134) . assuming the quantization noise uniform.k/ 6D c j .129) Given the onetoone map between words and quantizer levels.kTc / D Q ıQ Q Q where 8 b 1 X > <ıD Q c j . the bit mapper of the Q Q DAC performs the inverse operation of (7.kTc / and the binary channel reconstructs sq with a certain error eCh . the presence of the quantization noise in the ADC.kTc / C eCh . : : : .kTc / C eCh . the errors on the detection of the binary sequence at the output of the binary channel.127) as illustrated in Figure 7.131) where the two error terms are assumed uncorrelated.124) it follows that P[Qq .kTc / C eq . The quantizer introduces an error eq such that sq .125): c.k/2 j Q jD0 > : Q ıQ D −sat C .k/ ! sq . Regenerative PCM repeaters 573 We assume the binary channel symmetric and memoryless. Q If we denote with c. if we express the generic detected bit at the output of the binary channel as c j . the error probability Q is given by: P[c j .132) (7. as they are to be ascribed to different phenomena.133) (7.kTc / 6D sq . In particular.Q C 1 /1 ı 2 (7.kTc / D sq .k/ D [cb 1 .30 the reconstructed analog signal s is different from the transmitted signal s Q for two reasons: 1.k/.7.130) Overall system performance In Figure 7.29.kTc / C eq . from (5.k/ 2 f0.128) (7.5.c ' bPbit s (7.kTc / Q (7. c0 . 2.kTc /] D Pe. 1g.
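A minimal sketch of the uniform ADC/DAC pair described by the mapping rules above, verifying that the quantization error is bounded by Δ/2 and that its power is close to M_eq = Δ²/12 (the input is assumed uniform in [−τ_sat, τ_sat], and b, τ_sat are arbitrary):

```python
import numpy as np

def pcm_encode(s, b, tau_sat):
    """Uniform b-bit quantizer on [-tau_sat, tau_sat]: sample -> level index i."""
    delta = 2 * tau_sat / 2 ** b
    i = np.floor((s + tau_sat) / delta).astype(int)
    return np.clip(i, 0, 2 ** b - 1)

def pcm_decode(i, b, tau_sat):
    """Inverse bit mapper: index i -> level -tau_sat + (i + 1/2) * delta."""
    delta = 2 * tau_sat / 2 ** b
    return -tau_sat + (i + 0.5) * delta

rng = np.random.default_rng(3)
b, tau_sat = 8, 1.0
delta = 2 * tau_sat / 2 ** b

s = rng.uniform(-tau_sat, tau_sat, size=100_000)
sq = pcm_decode(pcm_encode(s, b, tau_sat), b, tau_sat)
eq = sq - s                       # quantization error e_q
print(np.max(np.abs(eq)), np.mean(eq ** 2), delta ** 2 / 12)
```

The measured error power matches Δ²/12, i.e. the signal-to-quantization-noise ratio is Λq = 2^{2b} for a uniform input.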
574 Chapter 7.143) . 0.k/ 6D c j .k/] D 1P[" j .k/ (7.k/ D c j .139) (7.k/ D 0] j D P[c j .137) We note that " j .140) ( E[" 21 ] D Pbit j 0 for j1 D j2 for j1 6D j2 22b 1 3 (7.136) " j .126) and (7. from (7.kTc /] D 12 Pbit b 1 X jD0 P[" j D 1] D 1 2 Pbit (7. Consequently.k/ 2 f 1.kTc /j2 ] s Meq Ms C MeCh (7. we get E[" j . from (7.c j .135) becomes eCh .kTc /] E[jQq .k//2 j Q (7.132) we have b 1 X .k/ c j .kTc / D 1 b 1 X jD0 c j . observing (7.135) eCh .k/" j2 . recalling footnote 3 on page 338. Transmission over dispersive channels The computation of MeCh is somewhat more difﬁcult.kTc / D 1 jD0 Let the error on the jth transmitted bit be Q " j .133) the output signaltonoise ratio is given by 3PCM D D D E[s 2 .141) 22 j D 12 Pbit (7.t/j2 ] s E[s 2 . the statistical power of the output signal of an interpolator ﬁlter in a DAC is equal to the statistical power of the input samples. with probabilities given by 8 > P[" j D 1] D 1 Pbit > 2 < > > : Then.138). First.k/ then (7. 1g.138) P[" j D 0] D 1 Pbit (7.k/2 j (7.k/] D hence from (7.137) 2 E[eCh .k/] D Pbit Q For a memoryless binary channel E[" j1 .t/] E[jQ .k/ 6D 0] C 0P[" j .k/] D 0 and E[" 2 .t/ s.kTc / s.142) We note that.
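The channel-error power just derived, M_eCh = Δ² Pbit (2^{2b} − 1)/3, can be confirmed by flipping each bit of random b-bit words independently with probability Pbit (the parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
b, Pbit = 8, 1e-2
delta = 2.0 / 2 ** b              # quantization step for tau_sat = 1
n_words = 500_000

# Random b-bit words; flip each bit independently with probability Pbit
c = rng.integers(0, 2, size=(n_words, b))
flips = (rng.random((n_words, b)) < Pbit).astype(c.dtype)
c_rx = c ^ flips

weights = 2 ** np.arange(b)       # bit j carries weight 2^j in the level index
e_ch = delta * (c_rx - c) @ weights

var_mc = np.mean(e_ch ** 2)
var_th = delta ** 2 * Pbit * (2 ** (2 * b) - 1) / 3
print(var_mc, var_th)
```

The error power is dominated by flips of the most significant bits, which is why even a small Pbit can overwhelm the quantization noise when 4 Pbit 2^{2b} exceeds one.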
7.142) and (7. In particular.2 Regenerative repeaters The signal sent over a transmission line is attenuated and corrupted by noise.4 Ð 22b / the output is affected mainly by errors introduced by the binary channel.31 for various values of b.7.22b 1/ (7. whereas for Pbit > 1=. and for a signaltoquantization noise ratio 3q D Ms =. . We observe that in the general case of nonuniform quantization there are no simple expressions similar to (7. Using (7. To cover long distances it is therefore necessary to place repeaters along the transmission line to restore the signal.31. going from b D 6 to b D 8 bits per sample yields an increment of 3PCM of only 2 dB. For example for Pbit D 10 4 . that is the output error is mainly due to the quantization error. the above observations remain valid.144) is represented in Figure 7.144) We note that usually Pbit is such that Pbit 22b − 1: thus it results 3PCM ' 3q . however. equation (7. Signaltonoise ratio of a PCM system as a function of Pbit .142).5. −sat . we get 3PCM D 3q 1 C 4Pbit . For Pbit < 1=. −sat ] whereby 3q D 22b .144).5.4 Ð 22b / the output signal is corrupted mainly by the quantization noise. Regenerative PCM repeaters 575 55 50 b=8 45 40 b=6 35 (dB) 30 PCM Λ 25 b=4 20 15 b=2 10 5 0 −8 10 10 −7 10 −6 10 −5 10 P bit −4 10 −3 10 −2 10 −1 10 0 Figure 7.12 =12/ (see (5. for a signal s 2 U.33)).134) and (7.
if ac is the attenuation of the generic section i.145) In this example both the transmission channel and the various ampliﬁers do not introduce distortion.146) Analogously for N analog repeater sections. however. the signaltonoise ratio at the ampliﬁer output of a single section is given by (4. overall signal at the ampliﬁer input of repeater i. transmitted signal with bandwidth B and available power Ps . as the overall noise ﬁgure is equal to F D N Fsr (see (4. contributing to an increase of the disturbance in s . desired signal at the output of transmission section i. signal at the output of a system with N repeaters. the only disturbance in s . then PsCh D 1 Ps ac (7. Transmission over dispersive channels Analog transmission The only solution possible in an analog transmission system is to place analog repeaters consisting of ampliﬁers with suitable ﬁlters to restore the level of the signal and eliminate the noise outside the passband of the desired signal.92): 3D PsCh kT0 F A B (7.148) it is assumed that (4. expressed as 3a D is given by 3a D 3 Ps D kT0 FB N (7.t/ is due to additive noise introduced by the various Q devices.t/. ž r. the overall signaltonoise ratio. if F A is the noise ﬁgure of a single ampliﬁer. For a source at noise temperature T0 . in a system with analog repeaters.2.t/j2 ] s (7.83) holds. Q . possible distortion experienced by the desired signal through the various transmission channels and ampliﬁers also accumulates. the noise builds up repeater after repeater and the overall signaltonoise ratio worsens as the number of repeaters increases.t/. We consider the simpliﬁed scheme of Example 4.77)).2 on page 271 with ž s. with available power PsCh . it must be remembered that in practical systems. ž w.t/] E[jQ .t/. effective noise at the input of repeater i.t/. Q We note that. ž s .t/.t/ C w.148) E[s 2 . as a statistical power ratio is equated with an effective power ratio. ž sCh .t/ D sCh . Moreover.t/ s. Hence. 
The cascade of ampliﬁers along a transmission line. deteriorates the signaltonoise ratio.576 Chapter 7.t/.147) Obviously in the derivation of (7.
N D M log M (7.5.M 1/ Q Pbit D 0 (7.153) M2 1 2 0D Note that in (7.N ' 1 .32. and then retransmitted by a modulator.t/.125) we get ! r 3 2. Regenerative PCM repeaters 577 Digital transmission In a digital transmission system.32.t/. . then from (6. as an alternative to the simple ampliﬁcation of the received signal r.148). Figure 7. for a given overall Pbit . that in the ﬁrst case sCh depends linearly on s. however.1 Pbit / N ' N Pbit (7. regeneration allows a signiﬁcant saving in the power of the transmitted signal.108)11 PsCh (7.150) M log2 M M2 1 where from (6. Modeling each regenerative repeater by a memoryless binary symmetric channel (see Deﬁnition 6.M M Q M log2 M2 1 N Ã Âr 2. To obtain an expression of Pbit . we can resort to the regeneration of the signal. Basic scheme of digital regeneration.1 on page 457) with error probability Pbit . it is necessary to specify the type of modulator. whereas in the second it represents the modulated signal that does not depend linearly on s.152) Analog repeaters: Pbit.N D 2. 10 We note that a more accurate study shows that the errors have a Bernoulli distribution [4]. the digital message fb` g is ﬁrst reconstructed. Let us consider an MPAM system. the bit error probability at the output of N regenerative repeaters is equal to10 Pbit. With reference to the scheme of O Figure 7. Ã Âr 1/ 3 0 (7. Note. Even if a regenerative repeater is much more complex than an analog repeater.M 1/ N Q 3 0 Regenerative repeaters: Pbit. given the signal r. 11 To simplify the notation. and errors of the different repeaters statistically independent.151) kT0 F A Bmin It is interesting to compare the bit error probability at the output of N repeaters in the two cases.7. and ignoring the probability that a bit undergoes more errors along the various repeaters.149) assuming Pbit − 1.152) we used (7. we have indicated with the same symbol s Ch the desired signal at the ampliﬁer input for both analog transmission and digital transmission.
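The two scalings — error probabilities that accumulate linearly with regeneration, versus a Q-function argument reduced by a factor N without it — can be made concrete with a small sketch; 2-PAM (so that Pbit = Q(√Γ)) and the value of Γ are assumed for illustration:

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

Gamma_dB, N = 13.0, 20
Gamma = 10 ** (Gamma_dB / 10)

# Regenerative repeaters: errors of the N sections accumulate
Pbit_hop = Q(sqrt(Gamma))
Pbit_regen_exact = 1 - (1 - Pbit_hop) ** N
Pbit_regen_approx = N * Pbit_hop          # the approximation used in the text

# Analog repeaters: the noise accumulates, one detection with Gamma / N
Pbit_analog = Q(sqrt(Gamma / N))
print(Pbit_regen_exact, Pbit_regen_approx, Pbit_analog)
```

Because Q decreases much faster than linearly in its argument, multiplying a small per-hop probability by N is far less damaging than dividing Γ by N, which is the essence of the advantage of regeneration.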
Comparison between analog and digital transmission

We now compare the analog transmission of a signal s(t) with the digital transmission of s(t), which includes PCM coding of s(t) and modulation of the message. To simplify the notation, initially we will consider a 2-PAM modulator.

For PCM coding of s(t), with b bits per sample, the bit rate of the message is given by

    R_b = b 2B    (7.154)

For an M-PAM modulator, the modulation interval T is equal to log₂M/R_b, and the minimum bandwidth of the transmission channel is equal to

    B_min = 1/(2T) = (b/log₂M) B    (7.155)

We note that the digital transmission of an analog signal may require a considerable expansion of the required bandwidth, for example, if M is small. Obviously, B_min may result very close to B, or even smaller, using a more efficient digital representation of waveforms, for example by CELP, and/or a modulator with higher spectral efficiency, for example by resorting to multilevel transmission.

The comparison between the two systems is based on the overall signal-to-noise ratio for the same transmitted power and transmission channel characteristics. From (7.151) and (7.155) we have

    Γ = (log₂M/b) Λ    (7.156)

Substituting the value of Γ given by (7.156) for M = 2 in (7.152) and (7.153), and recalling (5.44), valid for a uniform quantizer with Λ_q = 2^(2b), that is assuming a uniformly distributed signal, we get

    Λ_PCM = 2^(2b) / [1 + 4(2^(2b) − 1) Q(√(Λ/(bN)))]       N analog repeaters
    Λ_PCM = 2^(2b) / [1 + 4(2^(2b) − 1) N Q(√(Λ/b))]        N regenerative repeaters    (7.157)

Or else, in terms of Λ_a = Λ/N,

    Λ_PCM = 2^(2b) / [1 + 4(2^(2b) − 1) Q(√(Λ_a/b))]        N analog repeaters
    Λ_PCM = 2^(2b) / [1 + 4(2^(2b) − 1) N Q(√(Λ_a N/b))]    N regenerative repeaters    (7.158)
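A numeric sketch of (7.158) for the regenerative case (assuming the expressions as reconstructed above; Q via erfc, parameter values illustrative):

```python
import math

def Q(x):
    """Gaussian tail function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def lambda_pcm(lam_a, b, N, regenerative):
    """Overall SNR of PCM + 2-PAM transmission over N repeaters, as in (7.158).
    lam_a: overall SNR (linear) of the equivalent analog link; b: bits/sample."""
    if regenerative:
        p = N * Q(math.sqrt(N * lam_a / b))
    else:
        p = Q(math.sqrt(lam_a / b))
    return 2**(2 * b) / (1 + 4 * (2**(2 * b) - 1) * p)

# Above a certain lam_a, Pbit becomes negligible and Lambda_PCM saturates
# at the quantizer SNR Lambda_q = 2^(2b) (about 42 dB for b = 7).
for snr_db in (5, 8, 12):
    lam_a = 10**(snr_db / 10)
    print(snr_db, round(10 * math.log10(lambda_pcm(lam_a, 7, 20, True)), 1))
```

The printed values rise steeply and then saturate, which is the threshold effect discussed in the text.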
Consequently, digital transmission is more efficient than analog transmission if the number of repeaters is large. In the case of analog repeaters, Λ_PCM is typically higher than Λ_a as long as a sufficiently large number of bits and a Λ_a larger than, say, 17 dB are considered; the plot of Λ_PCM as a function of Λ_a is given in Figure 7.33. Using regenerative repeaters, for example N = 20 in Figure 7.34, Λ_PCM is always much higher than Λ_a, assuming an adequate number of bits for PCM coding is used. We note the threshold effect of P_bit as a function of Γ in a digital transmission system: if the ratio Γ is higher than a certain threshold, then P_bit is very small and the quantization error becomes predominant at the receiver.

Figure 7.33. Λ_PCM as a function of Λ_a for analog repeaters and 2-PAM. The parameter b denotes the number of bits for linear PCM representation.

However, the PCM system is penalized by the increment of the bandwidth of the transmission channel; we therefore show also a comparison for the same required bandwidth, which implies a modulator with M = 2^b levels. In this case, with respect to 2-PAM, for the same P_bit the modulator requires an increment of about 6(b − 1) dB in terms of Γ; since from (7.156) Λ = (b/log₂M) Γ, the increment in terms of Λ is equal to 6(b − 1) − 10 log₁₀ b dB. For b = 7, the curve of Λ_PCM as a function of Λ for 128-PAM, plotted with a dashed line in Figure 7.35, is therefore shifted to the right by about 28 dB with respect to 2-PAM.

While the previous graphs relate Λ_PCM directly to Λ_a, in practice it is interesting to determine the minimum value of Λ (or Γ) so that Λ_PCM and Λ_a reach a certain value, of the order of 20–40 dB, depending on the application. We illustrate in Figure 7.35 these relations by varying the number N of repeaters and using a PCM encoder with b = 7.
Figure 7.34. Λ_PCM as a function of Λ_a for 2-PAM transmission and N = 20 regenerative repeaters. The parameter b is the number of bits for linear PCM representation.

Figure 7.35. Λ_PCM for digital transmission with 2-PAM and b = 7, obtained by varying the number N of regenerative repeaters, and Λ_a for analog transmission, obtained by varying the number N of analog repeaters, as a function of Λ (signal-to-noise ratio of each repeater section). The dashed line represents Λ_PCM for 128-PAM and b = 7.
Finally, for a given objective, for example

    Λ_PCM = Λ_a = 36 dB    (7.159)

we illustrate in Figure 7.36 the minimum value of Λ as a function of the number of regenerative repeaters, for three different modulators.

Figure 7.36. Minimum value of Λ as a function of the number N of regenerative repeaters required to guarantee an overall signal-to-noise ratio of 36 dB, for analog transmission and digital transmission with three different modulators. The number of bits for PCM coding is b = 7.

Bibliography

[1] L. W. Couch, Digital and analog communication systems. Upper Saddle River, NJ: Prentice-Hall, 1997.
[2] J. G. Proakis and M. Salehi, Communication system engineering. Upper Saddle River, NJ: Prentice-Hall, 1994.
[3] M. S. Roden, Analog and digital communication systems. Englewood Cliffs, NJ: Prentice-Hall, 1996.
[4] A. Papoulis, Probability, random variables and stochastic processes. 3rd ed. New York: McGraw-Hill, 1991.
[5] S. Benedetto and E. Biglieri, Principles of digital transmission with wireless applications. New York: Kluwer Academic Publishers, 1999.
[6] P. Kabal and S. Pasupathy, "Partial-response signaling", IEEE Trans. on Communications, vol. 23, pp. 921–934, Sept. 1975.
[7] D. L. Duttweiler, J. E. Mazo, and D. G. Messerschmitt, "An upper bound on the error probability in decision-feedback equalization", IEEE Trans. on Information Theory, vol. 20, pp. 490–497, July 1974.
[8] G. Birkhoff and S. MacLane, A survey of modern algebra. 3rd ed. New York: Macmillan Publishing Company, 1965.
[9] E. A. Lee and D. G. Messerschmitt, Digital communication. 2nd ed. Boston, MA: Kluwer Academic Publishers, 1994.
[10] B. R. Saltzberg, "Intersymbol interference error bounds with application to ideal bandlimited signaling", IEEE Trans. on Information Theory, vol. 14, pp. 563–568, July 1968.
[11] R. D. Gitlin, J. F. Hayes, and S. B. Weinstein, Data communication principles. New York: Plenum Press, 1992.
[12] M. C. Jeruchim, P. Balaban, and K. S. Shanmugan, Simulation of communication systems. New York: Plenum Press, 1992.
Appendix 7.A — Line codes for PAM systems

This appendix is divided in two parts: in the first, several representations of binary symbols are listed; in the second, partial response systems are introduced.

7.A.1 Line codes

With reference to Figure 7.37, the binary sequence {b_ℓ}, b_ℓ ∈ {0, 1}, could be directly generated by a source, or be the output of a channel encoder. The sequence {a_k} is produced by a line encoder; the channel input is a PAM signal s(t), obtained by modulating a rectangular pulse h_Tx.

The functions of line codes are:
1. to shape the spectrum of the transmitted signals and match it to the characteristics of the channel (see (7.17)); this task may be performed also by the transmit filter;
2. to facilitate synchronization at the receiver, especially in case the information message contains long sequences of ones or zeros;
3. to improve system performance in terms of P_e, for transmission over AWGN channels in the absence of ISI.

For in-depth study and analysis of the spectral properties of line codes we refer to the bibliography, in particular [1, 5].

Figure 7.37. PAM transmitter with line encoder.

Non-return-to-zero (NRZ) format

The main feature of the NRZ family is that NRZ signals are antipodal signals: therefore NRZ line codes are characterized by the lowest error probability. Four formats are illustrated in Figure 7.38.

1. NRZ level (NRZ-L) or, simply, NRZ: "1" and "0" are represented by two different levels.
2. NRZ mark (NRZ-M): "1" is represented by a level transition, "0" by no level transition.
3. NRZ space (NRZ-S): "1" is represented by no level transition, "0" by a level transition.
4. Dicode NRZ: a change of polarity in the sequence {b_ℓ}, "10" or "01", is represented by a level transition; every other case is represented by the zero level.
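The NRZ mappings above can be sketched as follows (the initial level of NRZ-M and the initial reference bit of dicode NRZ are assumptions):

```python
def nrz_l(bits):
    """NRZ-L: '1' -> +1, '0' -> -1."""
    return [1 if b else -1 for b in bits]

def nrz_m(bits):
    """NRZ-M: '1' -> level transition, '0' -> no transition (start level -1)."""
    level, out = -1, []
    for b in bits:
        if b:
            level = -level
        out.append(level)
    return out

def dicode_nrz(bits):
    """Dicode NRZ: a change of polarity of {b_l} ('01' or '10') gives a
    transition (+1 or -1); every other case gives the zero level."""
    out, prev = [], 0
    for b in bits:
        out.append(b - prev)   # +1 on 0->1, -1 on 1->0, else 0
        prev = b
    return out

bits = [1, 0, 1, 1, 0, 0, 0, 1]
print(nrz_l(bits))
print(nrz_m(bits))
print(dicode_nrz(bits))
```

Note that the dicode rule a_k = b_k − b_{k−1} is exactly the differential encoder (7.161) used later for AMI.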
Figure 7.38. NRZ line codes.

Return-to-zero (RZ) format

1. Unipolar RZ: "1" is represented by a pulse having duration equal to half a bit interval, "0" by a zero pulse. In this case we observe that the signal does not have zero mean; this property is usually not desirable, for example, for transmission over coaxial cables.
2. Polar RZ: "1" and "0" are represented by opposite pulses with duration equal to half a bit interval.
3. Bipolar RZ or alternate mark inversion (AMI): bits equal to "1" are represented by rectangular pulses having duration equal to half a bit interval, sequentially alternating in sign; bits equal to "0" by the zero level.
4. Dicode RZ: a change of polarity in the sequence {b_ℓ}, "10" or "01", is represented by a level transition, using a pulse having duration equal to half a bit interval; every other case is represented by the zero level.

RZ line codes are illustrated in Figure 7.39.

Figure 7.39. RZ line codes.

Biphase (Bφ) format

1. Biphase level (Bφ-L) or Manchester NRZ: "1" is represented by a transition from high level to low level, "0" by a transition from low level to high level. Long sequences of ones or zeros in the sequence {b_ℓ} do not create synchronization problems. It is easy to see, however, that this line code leads to a doubling of the transmission bandwidth.
2. Biphase mark (Bφ-M) or Manchester 1: a transition occurs at the beginning of every bit interval; "1" is represented by a second transition within the bit interval, "0" is represented by a constant level.
3. Biphase space (Bφ-S): a transition occurs at the beginning of every bit interval; "0" is represented by a second transition within the bit interval, "1" is represented by a constant level.

Biphase line codes are illustrated in Figure 7.40.

Delay modulation or Miller code

"1" is represented by a transition at midpoint of the bit interval; "0" is represented by a constant level, however, if "0" is followed by another "0", a transition occurs at the end of the bit interval. This code shapes the spectrum similarly to the Manchester code, but requires a lower bandwidth. The delay modulation line code is also illustrated in Figure 7.40.

Figure 7.40. Biphase and delay modulation line codes.

Block line codes

The input sequence {b_ℓ} is divided into blocks of K bits. Each block of K bits is then mapped into a block of N symbols belonging to an alphabet of cardinality M, with the constraint

    2^K ≤ M^N    (7.160)

The KBNT codes are an example of block line codes where the output symbol alphabet is ternary, {−1, 0, +1}.

Alternate mark inversion (AMI)

We consider a differential binary encoder, that is

    a_k = b_k − b_{k−1},  with b_k ∈ {0, 1}    (7.161)

in particular,

    a_k = ±1 if b_k ≠ b_{k−1};  a_k = 0 if b_k = b_{k−1}    (7.162)

At the decoder, the bits of the information sequence may be recovered by

    b̂_k = â_k + b̂_{k−1}    (7.163)

Note that a_k ∈ {−1, 0, 1} and, from (7.161), m_a = 0. From (7.161), the relation between the PSDs of the sequences {a_k} and {b_k} is given by

    P_a(f) = P_b(f) |1 − e^(−j2πfT)|² = 4 P_b(f) sin²(πfT)    (7.164)
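The differential encoder (7.161) and decoder (7.163) can be sketched as follows (the initial reference bit b_{−1} = 0 is an assumption):

```python
def ami_encode(b):
    """Differential binary encoder (7.161): a_k = b_k - b_{k-1}, b_k in {0,1}."""
    prev, a = 0, []
    for bk in b:
        a.append(bk - prev)    # values in {-1, 0, +1}
        prev = bk
    return a

def ami_decode(a):
    """Decoder (7.163): b_k = a_k + b_{k-1}. A single symbol error makes all
    following bits wrong until the next symbol error occurs."""
    prev, b = 0, []
    for ak in a:
        prev = ak + prev
        b.append(prev)
    return b

bits = [0, 1, 1, 1, 0, 0, 1, 0]
symbols = ami_encode(bits)
assert ami_decode(symbols) == bits
print(symbols)
```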
Note that the PSD presents a zero at f = 0. However, long sequences of information bits {b_k} that are all equal generate sequences of symbols {a_k} that are all equal to zero: this is not desirable for synchronization. Moreover, a disadvantage of the encoding method (7.161) is a reduced noise immunity with respect to antipodal transmission, because a detector at the receiver must now decide among three levels. In any case, the biggest problem is the error propagation at the decoder: an error in {â_k}, from (7.163), generates a sequence of bits {b̂_k} that are in error until another error occurs in {â_k}.

This problem can be solved by precoding: from the sequence of bits {b_k} we first generate the sequence of bits {c_k}, with c_k ∈ {0, 1}, by

    c_k = b_k ⊕ c_{k−1}    (7.165)

where ⊕ denotes the modulo-2 sum. Next, we let a_k = c_k − c_{k−1}, with a_k ∈ {−1, 0, 1}; it results in

    a_k = ±1 if b_k = 1;  a_k = 0 if b_k = 0    (7.166)

In other words, a bit b_k = 0 is mapped into the symbol a_k = 0, and a bit b_k = 1 is mapped alternately into a_k = +1 or a_k = −1. Also in this case m_a = 0, independently of the distribution of {b_k}. Consequently, observing (7.166), decoding may be performed simply by taking the magnitude of the detected symbol:

    b̂_k = |â_k|    (7.167)

It is easy to prove that, for a message {b_k} with statistically independent symbols and p = P[b_k = 1],

    P_a(e^(j2πfT)) = 2p(1 − p) sin²(πfT) / [p² + (1 − 2p) sin²(πfT)]    (7.168)

The plot of P_a(e^(j2πfT)) is shown in Figure 7.41 for different values of p. P_a(f) exhibits zeros at frequencies that are integer multiples of 1/T, in particular at f = 0.

7.A.2 Partial response systems

We recall in Figure 7.42 the block diagram of a baseband transmission system, where w(t) is an additive white Gaussian noise, and the symbols {a_k} belong to the following alphabet(12) of cardinality M:

    a_k ∈ A = {−(M − 1), −(M − 3), ..., M − 3, M − 1}    (7.169)

We observe that the AMI line code is a particular case of the partial response system named dicode [6].

(12) In the present analysis only M-PAM systems are considered; for M-QAM systems the results can be extended to the signals on the I and Q branches.
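The precoding (7.165)–(7.167) can be sketched as follows (initial state c_{−1} = 0 assumed); since the decoding is memoryless, an isolated symbol error causes a single bit error:

```python
def ami_precode(b):
    """AMI with precoding: c_k = b_k xor c_{k-1} (7.165), a_k = c_k - c_{k-1} (7.166)."""
    c_prev, a = 0, []
    for bk in b:
        c = bk ^ c_prev
        a.append(c - c_prev)   # b_k = 0 -> 0; b_k = 1 -> alternately +1 / -1
        c_prev = c
    return a

def ami_decode_magnitude(a_hat):
    """Memoryless decoding (7.167): b_k = |a_k|; no error propagation."""
    return [abs(ak) for ak in a_hat]

bits = [1, 1, 0, 1, 0, 0, 1, 1]
a = ami_precode(bits)
assert ami_decode_magnitude(a) == bits
# an isolated detection error affects exactly one decoded bit
a_err = list(a); a_err[1] = 0
errors = sum(x != y for x, y in zip(ami_decode_magnitude(a_err), bits))
print(a, errors)
```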
Figure 7.41. Power spectral density P_a(e^(j2πfT)) of an AMI encoded message.

Figure 7.42. Block diagram of a baseband transmission system.

We assume that the transmission channel is ideal: the overall system can then be represented as an interpolator filter having impulse response

    q(t) = h_Tx * g_Rc(t)    (7.170)

A noise signal w_R(t), obtained by filtering w(t) by the receive filter, is added to the desired signal. Sampling the received signal at instants t₀ + kT yields the sequence {y_k}, with w_{R,k} = w_R(t₀ + kT), as illustrated in Figure 7.43a. The discrete-time equivalent of the system is shown in Figure 7.43b, where {h_i = q(t₀ + iT)}. We assume that {h_i} is equal to zero for i < 0 and i ≥ N. The partial response (PR) polynomial of the system is defined as

    l(D) = Σ_{i=0}^{N−1} l_i D^i    (7.171)

where the coefficients {l_i} are equal to the samples {h_i}, and D is the unit delay operator.

Figure 7.43. Equivalent schemes of the system of Figure 7.42.

The symbols at the output of the filter l(D) are given by

    a_k^(t) = Σ_{i=0}^{N−1} l_i a_{k−i}    (7.172)

A PR system is illustrated in Figure 7.44: the scheme of Figure 7.42 is decomposed into two parts:

- a filter with frequency response l(e^(j2πfT)), periodic of period 1/T, that forces the system to have an overall discrete-time impulse response equal to {h_i};
- an analog filter g that does not modify the overall filter h(D) and limits the system bandwidth; g satisfies the Nyquist criterion for the absence of ISI,

    Σ_{m=−∞}^{+∞} G(f − m/T) = T    (7.174)

Observing (7.174), the system of Figure 7.44 is equivalent to that of Figure 7.43a with

    q(t) = Σ_{i=0}^{N−1} l_i g(t − iT)    (7.173)

i.e., the equivalent discrete-time model is obtained for h_i = l_i. As will be clear from the analysis, the decomposition of Figure 7.44 allows, on one hand, simplification of the study of the properties of the filter h and, on the other, the design of an efficient receiver.

Figure 7.44. PR version of the system of Figure 7.42.

The decomposition of Figure 7.44 suggests two possible ways to implement the system of Figure 7.42:

1. Digital: the filter l(D) is implemented as a component of the transmitter by a digital filter, as shown in Figure 7.45; the transmit filter h_Tx^(PR) and the receive filter g_Rc^(PR) must then satisfy the relation

    H_Tx^(PR)(f) G_Rc^(PR)(f) = G(f)    (7.175)

2. Analog: the system is implemented in analog form; the transmit filter h_Tx and the receive filter g_Rc must satisfy the relation

    H_Tx(f) G_Rc(f) = Q(f) = l(e^(j2πfT)) G(f)    (7.176)

Note that in both relations (7.175) and (7.176) g is a Nyquist filter.

Figure 7.45. Implementation of a PR system using a digital filter.

With the aim of maximizing the transmission bit rate, many PR systems are designed for minimum bandwidth, that is

    G(f) = T for |f| ≤ 1/(2T), G(f) = 0 for |f| > 1/(2T)    (7.177)

so that

    g(t) = sinc(t/T)    (7.178)

Correspondingly, from (7.173) the filter q assumes the expression

    q(t) = Σ_{i=0}^{N−1} l_i sinc((t − iT)/T)    (7.179)

The choice of the PR polynomial

Several considerations lead to the selection of the polynomial l(D).

a) System bandwidth. From the theory of signals, it is known that if Q(f) and its first (n − 1) derivatives are continuous and the nth derivative is discontinuous, then |q(t)| asymptotically decays as 1/|t|^(n+1). The continuity of Q(f) and of its derivatives helps to reduce the portion of energy contained in the tails of q(t), thus simplifying the design of the analog filters.

b) Spectral zeros at f = 1/(2T). It is easily proven that in a minimum bandwidth system Q(f) is continuous if and only if l(D) has (1 + D) as a factor; if l(D) has a zero of multiplicity greater than one in D = −1, the higher-order derivatives of Q(f) are also continuous. Moreover, if l(D) contains (1 + D)^n as a factor, the transition band of G(f) around f = 1/(2T) can be widened, thus simplifying the design of the analog filters.
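The two building blocks of Figure 7.44 — the discrete filter l(D) of (7.172) and the minimum-bandwidth pulse of (7.179) — can be sketched as follows (T = 1 and zero past symbols are assumptions):

```python
import math

def pr_output(a, l):
    """Output symbols of the PR filter (7.172): a_k^(t) = sum_i l_i a_{k-i}."""
    return [sum(l[i] * a[k - i] for i in range(len(l)) if k - i >= 0)
            for k in range(len(a))]

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def q_min_bw(t, l, T=1.0):
    """Overall pulse of a minimum-bandwidth PR system (7.179)."""
    return sum(li * sinc((t - i * T) / T) for i, li in enumerate(l))

# Duobinary l(D) = 1 + D: q(iT) reproduces the coefficients l_i
l = [1, 1]
print([round(q_min_bw(i, l), 12) for i in range(-2, 4)])
print(pr_output([1, -1, -1, 1], l))   # 2-PAM in, 3-level sequence out
```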
c) Spectral zeros at f = 0. A transmitted signal with attenuated spectral components at low frequencies is desirable in many cases, for example for the implementation of SSB modulators (see Example 1.4 on page 58), or for transmission over channels with frequency responses that exhibit a spectral null at the frequency f = 0. Note that a zero of l(D) in D = 1 corresponds to a zero of l(e^(j2πfT)) at f = 0.

d) Number of output levels. Let the symbols a_k belong to an alphabet A of cardinality M; the symbols a_k^(t) at the output of the filter l(D) have an alphabet A^(t) of cardinality M^(t). If we indicate with n_l the number of coefficients of l(D) different from zero, then the following inequality for M^(t) holds:

    n_l (M − 1) + 1 ≤ M^(t) ≤ M^(n_l)    (7.180)

In particular, if the coefficients {l_i} are all equal, then M^(t) = n_l (M − 1) + 1. We note that, if l(D) contains more than one factor (1 ± D), then n_l increases and also the number of output levels increases. If the power of the transmitted signal is constrained, as the number of output levels increases, detection of the sequence {a_k} by a threshold detector will cause a loss in system performance.

e) Some examples of minimum bandwidth systems. Once the polynomial l(D) has been selected, it is possible to evaluate the expression of Q(f) and q(t). As the coefficients {l_i} are generally symmetric or antisymmetric around i = (N − 1)/2, it is convenient to consider the time-shifted pulse

    q̃(t) = q(t + (N − 1)T/2)    (7.181)

and correspondingly

    Q̃(f) = e^(jπf(N−1)T) Q(f)    (7.182)

In Table 7.2 the more common polynomials l(D) are described, as well as the corresponding expressions of Q̃(f) and q̃(t).

Table 7.2 Properties of several minimum bandwidth systems.

    l(D)               Q̃(f), |f| ≤ 1/(2T)                  q̃(t)                                            M^(t)
    1 + D              2T cos(πfT)                          4T² cos(πt/T) / [π(T² − 4t²)]                   2M − 1
    1 − D              2jT sin(πfT)                         8T t cos(πt/T) / [π(4t² − T²)]                  2M − 1
    1 − D²             2jT sin(2πfT)                        2T² sin(πt/T) / [π(t² − T²)]                    2M − 1
    1 + 2D + D²        4T cos²(πfT)                         2T³ sin(πt/T) / [π t(T² − t²)]                  4M − 3
    2 + D − D²         T + T cos(2πfT) + 3jT sin(2πfT)      T²(3t − T) sin(πt/T) / [π t(t² − T²)]           4M − 3
    1 − 2D² + D⁴       2T cos(4πfT) − 2T                    8T³ sin(πt/T) / [π t(t² − 4T²)]                 4M − 3
    1 + D − D² − D³    4jT sin(2πfT) cos(πfT)               −64T³ t cos(πt/T) / [π(4t² − T²)(4t² − 9T²)]    4M − 3

Example 7.A.1 (Dicode filter)
The dicode filter introduces a zero at frequency f = 0 and has the following expression:

    l(D) = 1 − D    (7.183)

The frequency response, obtained by setting D = e^(−j2πfT), is given by

    l(e^(j2πfT)) = 1 − e^(−j2πfT) = 2j e^(−jπfT) sin(πfT)    (7.184)

Example 7.A.2 (Duobinary filter)
The duobinary filter introduces a zero at frequency f = 1/(2T) and has the following expression:

    l(D) = 1 + D    (7.185)

The frequency response is given by

    l(e^(j2πfT)) = 1 + e^(−j2πfT) = 2 e^(−jπfT) cos(πfT)    (7.186)

Observing (7.179) we have

    q(t) = sinc(t/T) + sinc((t − T)/T)    (7.187)

The plot of the impulse response of a duobinary filter is shown in Figure 7.46 with a continuous line. We notice that the tails of the two sinc functions cancel each other, in line with what was stated at point b) regarding the asymptotic decay of the pulse of a PR system with a zero in D = −1.

Example 7.A.3 (Modified duobinary filter)
The modified duobinary filter combines the characteristics of duobinary and dicode filters, l(D) = (1 − D)(1 + D) = 1 − D². The frequency response becomes

    l(e^(j2πfT)) = 1 − e^(−j4πfT) = 2j e^(−j2πfT) sin(2πfT)    (7.188)

and, observing (7.179), it results in

    q(t) = sinc(t/T) − sinc((t − 2T)/T)    (7.189)

The plot of the impulse response of a modified duobinary filter is shown in Figure 7.46 with a dashed line.

Figure 7.46. Plot of q(t) for duobinary (continuous line) and modified duobinary (dashed line) filters.

f) Transmitted signal spectrum. With reference to the PR system of Figure 7.45, the spectrum of the transmitted signal is given by (see (7.17))

    P_s(f) = (1/T) |l(e^(j2πfT)) H_Tx^(PR)(f)|² P_a(f)    (7.190)

For a minimum bandwidth system, (7.190) simplifies into

    P_s(f) = (1/T) |l(e^(j2πfT))|² P_a(f) for |f| ≤ 1/(2T), and P_s(f) = 0 for |f| > 1/(2T)    (7.191)

In Figure 7.47 the PSD of a minimum bandwidth PR system is compared with that of a PAM system; the spectrum of the sequence of symbols {a_k} is assumed white, and for the PR system a modified duobinary filter is considered.
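The closed-form entries of Table 7.2 (as reconstructed above) can be checked against (7.179) and (7.181); a sketch for the duobinary row, with T = 1 assumed:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def q_shifted(t, l, T=1.0):
    """Time-shifted pulse q~(t) = q(t + (N-1)T/2) of a minimum-bandwidth
    PR system, from (7.179) and (7.181)."""
    ts = t + (len(l) - 1) * T / 2
    return sum(li * sinc((ts - i * T) / T) for i, li in enumerate(l))

def q_duobinary(t, T=1.0):
    """Table 7.2 entry for l(D) = 1 + D:
    q~(t) = 4 T^2 cos(pi t/T) / (pi (T^2 - 4 t^2))."""
    return 4 * T**2 * math.cos(math.pi * t / T) / (math.pi * (T**2 - 4 * t**2))

# agreement at a few points (avoiding t = +-T/2, where the closed form
# has a removable singularity)
for t in (-1.3, -0.2, 0.1, 0.9, 2.4):
    assert abs(q_shifted(t, [1, 1]) - q_duobinary(t)) < 1e-12
print("duobinary row of Table 7.2 verified")
```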
Figure 7.47. PSD of a modified duobinary PR system and of a PAM system.

For the PR system of Figure 7.47, the spectrum is obtained as the product of the functions |l(e^(j2πfT))|² = |2 sin(2πfT)|² and |H_Tx^(PR)(f)|² = T² rect(fT); for the PAM system, the transmit filter h_Tx is a square root raised cosine with roll-off factor ρ = 0.5, and the spectrum is plotted with a dashed line.

Symbol detection and error probability

We consider the discrete-time equivalent scheme of Figure 7.43b. The receiver detects the symbols {a_k} using the sequence of samples {y_k = a_k^(t) + w_{R,k}}; the signal a_k^(t) can be expressed as a function of the symbols {a_k} and of the coefficients {l_i} of the filter l(D):

    a_k^(t) = l₀ a_k + Σ_{i=1}^{N−1} l_i a_{k−i}    (7.192)

The term l₀ a_k is the desired part of the signal, whereas the summation represents the ISI term that is often designated as "controlled ISI", as it is deliberately introduced. We discuss four possible solutions.(13)

1. LE-ZF. A zero-forcing linear equalizer (LE-ZF) having D-transform equal to 1/l(D) is used, followed by an M-level threshold detector, as shown in Figure 7.48a. At the equalizer output, at instant k the symbol a_k plus a noise term is obtained. We note, however, that the amplification of the noise by the filter 1/l(D) is infinite at frequencies f such that l(e^(j2πfT)) = 0.

2. DFE. A second solution resorts to a decision-feedback equalizer (DFE), with feedback filter having D-transform equal to 1 − l(D)/l₀, as shown in Figure 7.48b; an M-level threshold detector is also employed by the DFE. There is no noise amplification, as the ISI is removed by the feedback filter. We observe that at the decision point the signal ỹ_k has the expression

    ỹ_k = (1/l₀) (y_k − Σ_{i=1}^{N−1} l_i â_{k−i})    (7.193)

If we indicate with e_k = a_k − â_k a detection error, then substituting (7.192) in (7.193) we obtain

    ỹ_k = a_k + (1/l₀) (w_{R,k} + Σ_{i=1}^{N−1} l_i e_{k−i})    (7.194)

Equation (7.194) shows that a wrong decision negatively influences successive decisions: this phenomenon is known as error propagation.

3. Threshold detector with M^(t) levels. This solution, shown in Figure 7.48c, exploits the M^(t)-ary nature of the symbols a_k^(t) and makes use of a threshold detector with M^(t) levels, followed by a LE-ZF. This structure does not lead to noise amplification as solution 1, because the noise is eliminated by the threshold detector; however, there is still the problem of error propagation.

4. Viterbi algorithm. This solution, shown in Figure 7.48d, corresponds to maximum-likelihood sequence detection (MLSD) of {a_k}. It yields the best performance.

Figure 7.48. Four possible solutions to the detection problem in the presence of controlled ISI.

(13) For a first reading it is suggested that only solution 3 is considered. The study of the other solutions should be postponed until the equalization methods of Chapter 8 are examined.
Solution 2 using the DFE is often adopted in practice: in fact it avoids noise amplification and is simpler to implement than the Viterbi algorithm. However, the problem of error propagation remains. In this case, using (7.194), the error probability can be written as

    P_e = (1 − 1/M) P[ |w_{R,k} + Σ_{i=1}^{N−1} l_i e_{k−i}| > l₀ ]    (7.195)

A lower bound P_{e,L} can be computed for P_e by assuming that error propagation is absent, or setting {e_k} = 0, ∀k, in (7.195). If we denote by σ_{w_R} the standard deviation of the noise w_{R,k}, we obtain

    P_{e,L} = 2 (1 − 1/M) Q(l₀/σ_{w_R})    (7.196)

Assuming w_{R,k} white noise, an upper bound P_{e,U} is given in [7] in terms of P_{e,L}:

    P_{e,U} = M^(N−1) P_{e,L} / [ (M/(M − 1)) P_{e,L} (M^(N−1) − 1) + 1 ]    (7.197)

From (7.197) we observe that the effect of the error propagation is that of increasing the error probability by a factor M^(N−1) with respect to P_{e,L}. A solution to the problem of error propagation is represented by precoding, which will be investigated in depth in Chapter 13.
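The bounds (7.196) and (7.197) are easy to evaluate; a sketch with illustrative parameter values:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_lower(M, l0, sigma):
    """Lower bound (7.196): error propagation ignored."""
    return 2 * (1 - 1 / M) * Q(l0 / sigma)

def pe_upper(M, N, pe_l):
    """Upper bound (7.197) from [7], for white noise w_{R,k}."""
    f = M**(N - 1)
    return f * pe_l / ((M / (M - 1)) * pe_l * (f - 1) + 1)

pe_l = pe_lower(M=2, l0=1.0, sigma=0.25)
print(pe_l, pe_upper(M=2, N=2, pe_l=pe_l))
```

For small P_{e,L} the upper bound tends to M^(N−1) P_{e,L}, consistent with the remark above.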
Precoding
We make use here of the following two simplifications:
1. the coefficients {l_i} are integer numbers;
2. the symbols {ā_k} belong to the alphabet Ā = {0, 1, ..., M − 1}; this choice is made because arithmetic modulo M is employed.

We define the sequence of precoded symbols {ā_k^(p)} as

    ā_k^(p) l₀ = (ā_k − Σ_{i=1}^{N−1} l_i ā_{k−i}^(p)) mod M    (7.198)

We note that (7.198) has only one solution if and only if l₀ and M are relatively prime [8]. In case l₀ = ··· = l_{j−1} = 0 mod M, and l_j and M are relatively prime, (7.198) becomes

    ā_{k−j}^(p) l_j = (ā_k − Σ_{i=j+1}^{N−1} l_i ā_{k−i}^(p)) mod M    (7.199)

For example, if l(D) = 2 + D − D² and M = 2, (7.198) is not applicable as l₀ mod M = 0; therefore (7.199) is used.
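The precoding rule (7.198) can be sketched by solving for ā_k^(p) with the modular inverse of l₀ (the degenerate case handled by (7.199) is not covered here; zero initial state is an assumption):

```python
def precode(abar, l, M):
    """Precoder (7.198): abar_p[k] * l0 = (abar[k] - sum_{i>=1} l_i abar_p[k-i]) mod M,
    solved via the inverse of l0 modulo M (l0 and M must be relatively prime [8])."""
    l0_inv = pow(l[0], -1, M)          # modular inverse, Python 3.8+
    ap = []
    for k, ak in enumerate(abar):
        s = sum(l[i] * ap[k - i] for i in range(1, len(l)) if k - i >= 0)
        ap.append((l0_inv * (ak - s)) % M)
    return ap

# Duobinary, M = 2: precoding reduces to c_k = b_k xor c_{k-1}
print(precode([1, 0, 1, 1, 0], [1, 1], 2))
```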
Applying the PR filter to {ā_k^(p)} we obtain the sequence

    ā_k^(t) = Σ_{i=0}^{N−1} l_i ā_{k−i}^(p)    (7.200)

From the comparison between (7.198) and (7.200), or in general (7.199), we have the fundamental relation

    ā_k^(t) mod M = ā_k    (7.201)
Equation (7.201) shows that, as in the absence of noise we have y_k = ā_k^(t), the symbol ā_k can be detected by considering the received signal y_k modulo M; this operation is memoryless, therefore the detection of ā_k is independent of the previous detections {â_{k−i}}, i = 1, ..., N − 1. Therefore the problem of error propagation is solved. Moreover, the desired signal is not affected by ISI.

If the instantaneous transformation

    a_k^(p) = 2 ā_k^(p) − (M − 1)    (7.202)

is applied to the symbols {ā_k^(p)}, then we obtain a sequence of symbols that belong to the alphabet A in (7.169). The sequence {a_k^(p)} is then input to the filter l(D). Precoding consists of the operation (7.198) followed by the transformation (7.202). However, we note that (7.201) is no longer valid; from (7.202), (7.200), and (7.198), we obtain the new decoding operation, given by

    â_k = (â_k^(t)/2 + K) mod M    (7.203)

where

    K = [(M − 1)/2] Σ_{i=0}^{N−1} l_i    (7.204)

A PR system with precoding is illustrated in Figure 7.49. The receiver is constituted by a threshold detector with M^(t) levels that provides the symbols {â_k^(t)}, followed by a block that realizes (7.203) and yields the detected data {â_k}.
Error probability with precoding
To evaluate the error probability of a system with precoding, the statistics of the symbols {a_k^(t)} must be known; it is easy to prove that if the symbols {a_k} are i.i.d., the symbols {a_k^(t)} are also i.i.d.
Figure 7.49. PR system with precoding.
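The chain of Figure 7.49 can be sketched end-to-end; in the absence of noise the decoder output reproduces the data, confirming (7.201)–(7.203). Integer l_i with l₀ coprime to M, and zero initial precoder state, are assumptions:

```python
def pr_precoded_chain(abar, l, M):
    """Precoder (7.198) -> map (7.202) -> filter l(D) -> noiseless detection
    -> memoryless decoder (7.203)/(7.204)."""
    l0_inv = pow(l[0], -1, M)
    ap_bar, ap, at = [], [], []
    for k, ak in enumerate(abar):
        s = sum(l[i] * ap_bar[k - i] for i in range(1, len(l)) if k - i >= 0)
        ap_bar.append((l0_inv * (ak - s)) % M)          # (7.198)
        ap.append(2 * ap_bar[-1] - (M - 1))             # (7.202)
    pad = -(M - 1)                                      # state before k = 0
    for k in range(len(abar)):
        at.append(sum(l[i] * (ap[k - i] if k - i >= 0 else pad)
                      for i in range(len(l))))
    K = (M - 1) * sum(l) // 2                           # (7.204)
    decoded = [(a // 2 + K) % M for a in at]            # (7.203)
    return at, decoded

at, dec = pr_precoded_chain([3, 0, 2, 1, 3, 2], l=[1, 1], M=4)
print(at)    # multilevel PR sequence
print(dec)   # equals the data: decoding is free of error propagation
```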
If we assume that the cardinality of the set A^(t) is maximum, i.e. M^(t) = M^(n_l), then the output levels are equally spaced and the symbols a_k^(t) result equally likely with probability

    P[a_k^(t) = α] = 1/M^(n_l),    α ∈ A^(t)    (7.205)

In general, however, the symbols {a_k^(t)} are not equiprobable, because several output levels are redundant, as can be deduced from the following example.
Example 7.A.4 (Dicode filter)
We assume M = 2, therefore ā_k ∈ {0, 1}; the precoding law (7.198) is simply an exclusive or,

    ā_k^(p) = ā_k ⊕ ā_{k−1}^(p)    (7.206)

The symbols {a_k^(p)} are obtained from (7.202),

    a_k^(p) = 2 ā_k^(p) − 1    (7.207)

hence they are antipodal, as a_k^(p) ∈ {−1, +1}. Finally, the symbols at the output of the filter l(D) are given by

    a_k^(t) = a_k^(p) − a_{k−1}^(p) = 2 (ā_k^(p) − ā_{k−1}^(p))    (7.208)
The values of ā_{k−1}^(p), ā_k, ā_k^(p) and a_k^(t) are given in Table 7.3. We observe that both output levels ±2 correspond to the symbol ā_k = 1 and therefore are redundant; the three levels are not equally likely. The symbol probabilities are given by

    P[a_k^(t) = +2] = P[a_k^(t) = −2] = 1/4,    P[a_k^(t) = 0] = 1/2    (7.209)
Figure 7.50a shows the precoder that realizes equations (7.206) and (7.207). The decoder, realized as a map that associates the symbol â_k = 1 to ±2, and the symbol â_k = 0 to 0, is illustrated in Figure 7.50b.

Table 7.3 Precoding for the dicode filter.

    ā_{k−1}^(p)    ā_k    ā_k^(p)    a_k^(t)
    0              0      0           0
    0              1      1          +2
    1              0      1           0
    1              1      0          −2
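Table 7.3 can be reproduced directly from (7.206)–(7.208):

```python
def dicode_row(ap_bar_prev, abar_k):
    """One row of Table 7.3: precoder (7.206) and dicode output (7.208)."""
    ap_bar = abar_k ^ ap_bar_prev          # (7.206)
    a_t = 2 * (ap_bar - ap_bar_prev)       # (7.208)
    return ap_bar, a_t

for prev in (0, 1):
    for b in (0, 1):
        print(prev, b, *dicode_row(prev, b))
```

Both input pairs with ā_k = 1 map to ±2, showing the redundancy of the output levels noted above.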
Figure 7.50. Precoder and decoder for a dicode filter l(D) with M = 2.
Alternative interpretation of PR systems
Up to now we have considered a general transmission system and looked for an efficient design method. We now assume that the system is given, i.e. that the transmit filter as well as the receive filter are assigned. The scheme of Figure 7.44 can be regarded as a tool for the optimization of a given system where $l(D)$ includes the characteristics of the transmit and receive filters: as a result, the symbols $\{a_k^{(t)}\}$ are no longer the transmitted symbols, but are to be interpreted as the symbols that are ideally received. In the light of these considerations, the assumption of an ideal channel can also be removed. In this case the filter $l(D)$ will also include the ISI introduced by the channel. We observe that the precoding/decoding technique is an equalization method alternative to the DFE, with the advantage of eliminating error propagation, which can considerably deteriorate system performance.

In the following two examples [9], additive white Gaussian noise $\tilde w_{R,k} = \tilde w_k$ is assumed, and various systems are studied for the same signal-to-noise ratio at the receiver.

Example 7.A.5 (Ideal channel $g$)

a) Antipodal signals. We transmit a sequence of symbols from a binary alphabet, $a_k \in \{-1, 1\}$. The received signal is
$$y_k = a_k + \tilde w_{A,k} \qquad (7.210)$$
where the variance of the noise is given by $\sigma_{\tilde w_A}^2 = \sigma_I^2$. At the receiver, using a threshold detector with threshold set to zero, we obtain
$$P_{bit} = Q\!\left(\frac{1}{\sigma_I}\right) \qquad (7.211)$$
b) Duobinary signal with precoding. The transmitted signal is now given by $a_k^{(t)} = \bar a_k^{(p)} + \bar a_{k-1}^{(p)} \in \{-2, 0, 2\}$, where $\bar a_k^{(p)} \in \{-1, 1\}$ is given by (7.202) and (7.198). The received signal is given by
$$y_k = a_k^{(t)} + \tilde w_{B,k} \qquad (7.212)$$
where the variance of the noise is $\sigma_{\tilde w_B}^2 = 2\sigma_I^2$, as $\sigma_{a_k^{(t)}}^2 = 2$.

At the receiver, using a threshold detector with thresholds set at $\pm 1$, we have the following conditional error probabilities:
$$P[E \mid a_k^{(t)} = 0] = 2\, Q\!\left(\frac{1}{\sigma_{\tilde w_B}}\right)$$
$$P[E \mid a_k^{(t)} = 2] = P[E \mid a_k^{(t)} = -2] = Q\!\left(\frac{1}{\sigma_{\tilde w_B}}\right)$$
Consequently, at the detector output we have
$$P_{bit} = P[\hat a_k \ne a_k] = P[E \mid a_k^{(t)} = 0]\,\frac{1}{2} + P[E \mid a_k^{(t)} = \pm 2]\,\frac{1}{2} = \frac{3}{2}\, Q\!\left(\frac{1}{\sqrt{2}\,\sigma_I}\right)$$
We observe a worsening of about 3 dB in terms of the signal-to-noise ratio with respect to case a).

c) Duobinary signal. The transmitted signal is $a_k^{(t)} = a_k + a_{k-1}$. The received signal is given by
$$y_k = a_k + a_{k-1} + \tilde w_{C,k} \qquad (7.213)$$
where $\sigma_{\tilde w_C}^2 = 2\sigma_I^2$. We consider a receiver that applies MLSD to recover the data; from Example 8.12.1 on page 687 it results that
$$P_{bit} = K\, Q\!\left(\frac{\sqrt{8}}{2\,\sigma_{\tilde w_C}}\right) = K\, Q\!\left(\frac{1}{\sigma_I}\right) \qquad (7.214)$$
where $K$ is a constant.

We note that the PR system employing MLSD at the receiver achieves a performance similar to that of a system transmitting antipodal signals, as MLSD exploits the correlation between symbols of the sequence $\{a_k^{(t)}\}$.

Example 7.A.6 (Equivalent channel $g$ of the type $1 + D$)
In this example it is the channel itself that forms a duobinary signal.
d) Antipodal signals. Transmitting $a_k \in \{-1, 1\}$, the received signal is given by
$$y_k = a_k + a_{k-1} + \tilde w_{D,k} \qquad (7.215)$$
where $\sigma_{\tilde w_D}^2 = 2\sigma_I^2$. An attempt at pre-equalizing the signal at the transmitter by inserting a filter $l(D) = 1/(1+D) = 1 - D + D^2 - \cdots$ would yield symbols $a_k^{(t)}$ with unlimited amplitude; therefore such a configuration cannot be used. Equalization at the receiver using the scheme of Figure 7.48a would require a filter of the type $1/(1+D)$, which would lead to unlimited noise enhancement.

Therefore we resort to the scheme of Figure 7.48c, where the threshold detector has thresholds set at $\pm 1$. To avoid error propagation, we precode the message and transmit the sequence $\{\bar a_k^{(p)}\}$ instead of $\{a_k\}$. At the receiver we have
$$y_k = \bar a_k^{(p)} + \bar a_{k-1}^{(p)} + \tilde w_{D,k} \qquad (7.216)$$
We are therefore in the same conditions as in case b), and
$$P_{bit} = \frac{3}{2}\, Q\!\left(\frac{1}{\sqrt{2}\,\sigma_I}\right) \qquad (7.217)$$

e) MLSD receiver. To detect the sequence of information bits from the received signal (7.215), MLSD can be adopted. $P_{bit}$ is in this case given by (7.214).
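The comparison among cases a)–e) can be made concrete with a short numerical sketch (illustrative: the value of $\sigma_I$ is assumed, $K = 1$ is taken in (7.214), and the factor 3/2 follows from averaging the conditional error probabilities over the three output levels):

```python
import math

def Q(x):
    """Gaussian tail function, expressed via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

sigma_I = 0.5                                        # assumed noise standard deviation
P_antipodal = Q(1 / sigma_I)                         # case a), (7.211)
P_duobinary = 1.5 * Q(1 / (math.sqrt(2) * sigma_I))  # cases b)/d), (7.217)
P_mlsd = Q(1 / sigma_I)                              # cases c)/e), (7.214) with K = 1
```

The argument of Q is smaller by a factor $\sqrt{2}$ in the precoded duobinary case, i.e. about 3 dB in SNR, a loss that MLSD essentially recovers.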
Appendix 7.B
Computation of Pe for some cases of interest
7.B.1 Pe in the absence of ISI

In the absence of ISI, the signal at the decision point is of the type (7.102),
$$y_k = h_0 a_k + w_{R,k} \qquad a_k \in \mathcal{A} \qquad (7.218)$$
where $w_{R,k}$ is the sample of an additive noise signal. Assuming $\{w_{R,k}\}$ stationary with probability density function $p_w(\xi)$, from (7.218) for $a_k = \alpha_n \in \mathcal{A}$ we have
$$p_{y_k | a_k}(\rho \mid \alpha_n) = p_w(\rho - h_0 \alpha_n) \qquad (7.219)$$
Therefore the MAP criterion (6.26) becomes
$$\rho \in R_m \;\Longrightarrow\; \hat a_k = \alpha_m \quad \text{if} \quad \alpha_m = \arg\max_{\alpha_n}\, p_n\, p_w(\rho - h_0 \alpha_n) \qquad (7.220)$$
We now consider the application of the MAP criterion to an M-PAM system, where
$$\alpha_n = 2n - 1 - M \qquad n = 1, \dots, M \qquad (7.221)$$
The decision regions $\{R_n\}$, $n = 1, \dots, M$, are formed by intervals, or, in general, by the union of intervals, whose boundary points are called decision thresholds $\{\tau_i\}$, $i = 1, \dots, M-1$.

Example 7.B.1 (Determination of the optimum decision thresholds)
We consider a 4-PAM system with the following symbol probabilities:
$$\{p_1, p_2, p_3, p_4\} = \left\{\frac{3}{20}, \frac{3}{20}, \frac{1}{2}, \frac{1}{5}\right\} \qquad (7.222)$$
The noise is assumed to have an exponential probability density function
$$p_w(\xi) = \frac{\beta}{2}\, e^{-|\xi|\beta} \qquad (7.223)$$
where $\beta$ is a constant; the variance of the noise is given by $\sigma_w^2 = 2/\beta^2$. The curves
$$p_n\, p_w(\rho - h_0 \alpha_n) \qquad n = 1, \dots, 4 \qquad (7.224)$$
are illustrated in Figure 7.51. We note that, for the choice in (7.222) of the symbol probabilities, the decision thresholds, also shown in Figure 7.51, are obtained from the intersections between the curves in (7.224) relative to two adjacent symbols; therefore they are given by the solutions of the $M-1$ equations
$$p_i\, p_w(\tau_i - h_0\alpha_i) = p_{i+1}\, p_w(\tau_i - h_0\alpha_{i+1}) \qquad i = 1, \dots, M-1 \qquad (7.225)$$
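For the exponential density (7.223), the intersections (7.225) can be computed in closed form by equating the exponents of the two adjacent weighted densities; a sketch (illustrative, with $h_0 = 1$ and $\beta = 1$ assumed):

```python
import math

p = [3/20, 3/20, 1/2, 1/5]          # symbol probabilities (7.222)
alpha = [-3, -1, 1, 3]              # 4-PAM levels (7.221)
h0, beta = 1.0, 1.0                 # assumed values

def pw(x):                          # noise pdf (7.223)
    return 0.5 * beta * math.exp(-abs(x) * beta)

# For h0*alpha_i < tau < h0*alpha_{i+1} the absolute values in (7.223) open
# with opposite signs, giving tau_i = midpoint + log(p_i / p_{i+1}) / (2 beta).
taus = [h0 * (alpha[i] + alpha[i + 1]) / 2 + math.log(p[i] / p[i + 1]) / (2 * beta)
        for i in range(3)]
```

Since $p_1 = p_2$, the first threshold falls exactly halfway between the first two levels; the other two are shifted toward the less likely symbol.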
Figure 7.51. Curves $p_n\, p_w(\rho - h_0\alpha_n)$, $n = 1, \dots, 4$, and optimum thresholds $\tau_1, \tau_2, \tau_3$ for a 4-PAM system with non-equally likely symbols.
We point out that, if the probability that the symbol $\ell$ is sent is very small, $p_\ell \ll 1$, the measure of the corresponding decision interval could be equal to zero, and consequently this symbol would never be detected. In this case the decision thresholds will be fewer than $M - 1$.

Example 7.B.2 (Computation of Pe for a 4-PAM system)
We denote by $F_w(x)$ the probability distribution function of $w_{R,k}$:
$$F_w(x) = \int_{-\infty}^{x} p_w(\xi)\, d\xi \qquad (7.226)$$
For a 4-PAM system with thresholds $\tau_1$, $\tau_2$, and $\tau_3$, the probability of correct decision is given by (6.18):
$$P[C] = \sum_{n=1}^{4} p_n \int_{R_n} p_w(\rho - h_0\alpha_n)\, d\rho$$
$$= p_1 \int_{-\infty}^{\tau_1} p_w(\rho - h_0\alpha_1)\, d\rho + p_2 \int_{\tau_1}^{\tau_2} p_w(\rho - h_0\alpha_2)\, d\rho + p_3 \int_{\tau_2}^{\tau_3} p_w(\rho - h_0\alpha_3)\, d\rho + p_4 \int_{\tau_3}^{+\infty} p_w(\rho - h_0\alpha_4)\, d\rho$$
$$= p_1 F_w(\tau_1 - h_0\alpha_1) + p_2 \left[F_w(\tau_2 - h_0\alpha_2) - F_w(\tau_1 - h_0\alpha_2)\right] + p_3 \left[F_w(\tau_3 - h_0\alpha_3) - F_w(\tau_2 - h_0\alpha_3)\right] + p_4 \left[1 - F_w(\tau_3 - h_0\alpha_4)\right] \qquad (7.227)$$
We note that, if $F_w$ is a continuous function, optimum thresholds can be obtained by equating to zero the derivative of the expression in (7.227) with respect to $\tau_1$, $\tau_2$, and $\tau_3$. In the case of equally likely symbols and equidistant thresholds, i.e.
$$\tau_i = h_0 (2i - M) \qquad i = 1, \dots, M-1 \qquad (7.228)$$
equation (7.227) yields
$$P[C] = 1 - 2\left(1 - \frac{1}{M}\right) F_w(-h_0) \qquad (7.229)$$
We note that (7.229) is in agreement with (6.122) obtained for Gaussian noise.
7.B.2 Pe in the presence of ISI

We consider M-PAM transmission in the presence of ISI. We assume the symbols in (7.221) are equally likely and the decision thresholds are of the type given by (7.228). With reference to (7.65), the received signal at the decision point assumes the following expression:
$$y_k = h_0 a_k + i_k + w_{R,k} \qquad (7.230)$$
where $i_k$ represents the ISI and is given by
$$i_k = \sum_{i \ne 0} h_i\, a_{k-i} \qquad (7.231)$$
and $w_{R,k}$ is Gaussian noise with statistical power $\sigma^2$, statistically independent of the i.i.d. symbols of the message $\{a_k\}$. We examine various methods to compute the symbol error probability in the presence of ISI.
Exhaustive method
We refer to the case of 4-PAM transmission with $N_i = 2$ interferers, due to one non-zero precursor and one non-zero postcursor. Therefore we have
$$i_k = a_{k-1}\, h_1 + a_{k+1}\, h_{-1} \qquad (7.232)$$
We define the vector of symbols that contribute to the ISI as
$$\mathbf{a}'_k = [a_{k-1}, a_{k+1}] \qquad (7.233)$$
Then $i_k$ can be written as a function of $\mathbf{a}'_k$ as
$$i_k = i(\mathbf{a}'_k) \qquad (7.234)$$
Therefore, $i_k$ is a random variable that assumes values in an alphabet with cardinality $L = M^{N_i} = 16$.
Starting from (7.230), the error probability can be computed by conditioning with respect to the values assumed by $\mathbf{a}'_k = [\alpha^{(1)}, \alpha^{(2)}] = \boldsymbol\alpha \in \mathcal{A}^2$. For equally likely symbols and thresholds given by (7.228) we have
$$P_e = 2\left(1 - \frac{1}{M}\right) \sum_{\boldsymbol\alpha \in \mathcal{A}^2} Q\!\left(\frac{h_0 - i(\boldsymbol\alpha)}{\sigma}\right) P[\mathbf{a}'_k = \boldsymbol\alpha] = 2\left(1 - \frac{1}{M}\right) \frac{1}{L} \sum_{\boldsymbol\alpha \in \mathcal{A}^2} Q\!\left(\frac{h_0 - i(\boldsymbol\alpha)}{\sigma}\right) \qquad (7.235)$$
This method gives the exact value of the error probability in the presence of interferers, but requires the computation of $L$ terms; it can be costly, especially if the number of interferers is large. It is therefore convenient to consider approximations of the error probability obtained by simpler computational methods.
Gaussian approximation
If the interferers have similar amplitudes and their number is large, we can invoke the central limit theorem and approximate $i_k$ as a Gaussian random variable. As the process $w_{R,k}$ is Gaussian, the process
$$z_k = i_k + w_{R,k} \qquad (7.236)$$
is also Gaussian, with variance
$$\sigma_z^2 = \sigma_i^2 + \sigma^2 \qquad (7.237)$$
where $\sigma_i^2$ is given by (7.72). Then
$$P_e = 2\left(1 - \frac{1}{M}\right) Q\!\left(\frac{h_0}{\sigma_z}\right) \qquad (7.238)$$
This method, although very convenient, is rather pessimistic, especially for large values of $\Gamma$. As a matter of fact, we observe that the amplitude of $i_k$ is limited by the value
$$i_{max} = (M-1) \sum_{i \ne 0} |h_i| \qquad (7.239)$$
whereas the Gaussian approximation implies that the values of $i_k$ are unlimited.
Worst-case bound

This method substitutes $i_k$ with the constant $i_{max}$ defined in (7.239). In this case $P_e$ is equal to
$$P_e = 2\left(1 - \frac{1}{M}\right) Q\!\left(\frac{h_0 - i_{max}}{\sigma}\right) \qquad (7.240)$$
This bound is typically too pessimistic; however, it yields a good approximation if $i_k$ is mainly due to one dominant interferer.
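A short sketch may help in comparing the exhaustive method with the two approximations above (the coefficients $h_0$, $h_1$, $h_{-1}$ and $\sigma$ are assumed purely for illustration):

```python
import math, itertools

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

M, A = 4, [-3, -1, 1, 3]
h0, h1, hm1, sigma = 1.0, 0.1, 0.05, 0.2     # assumed channel coefficients
L = M ** 2                                   # M^Ni with Ni = 2 interferers

# Exhaustive method (7.235)
Pe_exact = 2 * (1 - 1 / M) / L * sum(
    Q((h0 - (ap * h1 + an * hm1)) / sigma)   # i(alpha), see (7.232)
    for ap, an in itertools.product(A, repeat=2))

# Gaussian approximation (7.237)-(7.238)
sigma_a2 = sum(a * a for a in A) / M         # symbol variance (= 5 for 4-PAM)
sigma_i2 = sigma_a2 * (h1 ** 2 + hm1 ** 2)
Pe_gauss = 2 * (1 - 1 / M) * Q(h0 / math.sqrt(sigma_i2 + sigma ** 2))

# Worst-case bound (7.239)-(7.240)
i_max = (M - 1) * (abs(h1) + abs(hm1))
Pe_wc = 2 * (1 - 1 / M) * Q((h0 - i_max) / sigma)
```

For these mild interferers the exact value is the smallest, the Gaussian approximation is pessimistic, and the worst-case bound is the loosest, as expected.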
Saltzberg bound
With reference to (7.230), defining $z_k$ as the total disturbance given by (7.236), in general we have
$$P_e = 2\left(1 - \frac{1}{M}\right) P[z_k > h_0] \qquad (7.241)$$
Let
$$\alpha_{max} = \max_n \{\alpha_n\} = M - 1 \qquad (7.242)$$
in the specific case, and let $\mathcal{I}$ be any subset of the non-zero integers $\mathcal{Z}_0$ such that
$$\sum_{i \in \mathcal{I}} |h_i| < \frac{h_0}{\alpha_{max}} \qquad (7.243)$$
Moreover, let $\mathcal{I}^C$ be the complementary set of $\mathcal{I}$ with respect to $\mathcal{Z}_0$. Saltzberg applied a Chernoff bound to the probability $P[z_k > h_0]$ [10], obtaining
$$P[z_k > h_0] < \exp\left( - \frac{\left( h_0 - \alpha_{max} \displaystyle\sum_{i \in \mathcal{I}} |h_i| \right)^2}{2\left( \sigma^2 + \sigma_a^2 \displaystyle\sum_{i \in \mathcal{I}^C} |h_i|^2 \right)} \right) \qquad (7.244)$$
The bound is particularly simple in the case of binary signaling, where $a_k \in \{-1, 1\}$,
$$P_e < \exp\left( - \frac{\left( h_0 - \displaystyle\sum_{i \in \mathcal{I}} |h_i| \right)^2}{2\left( \sigma^2 + \displaystyle\sum_{i \in \mathcal{I}^C} |h_i|^2 \right)} \right) \qquad (7.245)$$
where $\mathcal{I}$ is such that $\sum_{i \in \mathcal{I}} |h_i| < h_0$. In this case it is rather simple to choose the set $\mathcal{I}$ so that the bound is tighter. We begin with $\mathcal{I} = \mathcal{Z}_0$; then we remove from $\mathcal{I}$, one by one, the indices $i$ that correspond to the largest values of $|h_i|$, and we stop when the exponent of (7.245) has reached its minimum. Considering the limit of the function Q given by (6.364), we observe that for $\mathcal{I} = \mathcal{Z}_0$ and $\mathcal{I}^C = \emptyset$, the bound in (7.244) practically coincides with the worst-case bound in (7.240). Taking instead $\mathcal{I} = \emptyset$ and $\mathcal{I}^C = \mathcal{Z}_0$, we obtain again the bound given by the Gaussian approximation for $z_k$ that yields (7.238). For the mathematical details we refer to [10]; for a comparison between the Saltzberg bound and other bounds we refer to [5, 11].
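The greedy choice of the set $\mathcal{I}$ described above can be sketched as follows (illustrative code; the channel coefficients are assumed, they satisfy (7.243) for every subset considered, and the exponent follows (7.245) with $\sigma_a^2 = 1$):

```python
import math

h = {1: 0.20, 2: 0.10, -1: 0.15, 3: 0.05}   # assumed h_i, i != 0
h0, sigma = 1.0, 0.25

def exponent(I):
    """Exponent of (7.245) for index set I (binary signaling)."""
    num = (h0 - sum(abs(h[i]) for i in I)) ** 2
    den = 2 * (sigma ** 2 + sum(h[i] ** 2 for i in set(h) - set(I)))
    return -num / den

# Start from I containing all indices, then move the largest |h_i| to I^C
# one at a time, keeping the set that minimizes the exponent (tightest bound).
order = sorted(h, key=lambda i: abs(h[i]), reverse=True)
I = list(h)
best_I, best_exp = list(I), exponent(I)
for i in order:
    I.remove(i)
    e = exponent(I)
    if e < best_exp:
        best_I, best_exp = list(I), e
bound = math.exp(best_exp)       # Saltzberg bound on Pe
```

For these coefficients the procedure ends with $\mathcal{I} = \emptyset$, i.e. at the Gaussian-approximation end of the trade-off; with one dominant interferer it would stop earlier.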
GQR method
The GQR method is based on a technique for the approximate computation of integrals called the Gauss quadrature rule (GQR). It offers a good compromise between computational complexity and approximation accuracy. If we assume a very large number of interferers, in the limit infinite, $i_k$ can be modelled as a continuous random variable. Then $P_e$ assumes the expression
$$P_e = 2\left(1 - \frac{1}{M}\right) \int_{-\infty}^{+\infty} Q\!\left(\frac{h_0 - \xi}{\sigma}\right) p_{i_k}(\xi)\, d\xi = 2\left(1 - \frac{1}{M}\right) I \qquad (7.246)$$
By the GQR method we obtain an approximation of the integral, given by
$$I = \sum_{j=1}^{N_w} w_j\, Q\!\left(\frac{h_0 - \xi_j}{\sigma}\right) \qquad (7.247)$$
In this expression the parameters $\{\xi_j\}$ and $\{w_j\}$ are called, respectively, the abscissas and the weights of the quadrature rule, and are obtained by a numerical algorithm based on the first $2N_w$ moments of $i_k$. The quality of the approximation depends on the choice of $N_w$ [5].
Appendix 7.C
Coherent PAM-DSB transmission

General scheme

For transmission over a passband channel, a PAM signal must be suitably shifted in frequency by a sinusoidal carrier at frequency $f_0$. This task is achieved by DSB modulation (see Example 1.6.3 on page 41) of the signal $s(t)$ at the output of the baseband PAM modulator filter. In the case of a coherent receiver, the passband scheme is given in Figure 7.52. For the baseband equivalent model, we refer to Figure 7.53a.

We now study the PAM-DSB transmission system in the unified framework of Figure 7.12. Assuming the receive filter $g_{Rc}$ real-valued, we apply the operator $\mathrm{Re}[\,\cdot\,]$ to the channel filter impulse response and to the noise signal, and we split the factor 1/2 evenly between the channel filter and the receive filter responses; setting $g_{Rc_1}(t) = g_{Rc}(t)\,\frac{1}{\sqrt{2}}$, we thus obtain the simplified scheme of Figure 7.53b, where the noise signal contains only the in-phase component $w_I'(t)$ with PSD
$$\mathcal{P}_{w_I'}(f) = \frac{N_0}{2} \quad (\mathrm{V}^2/\mathrm{Hz}) \qquad (7.248)$$
and
$$g_C(t) = \mathrm{Re}\left[ \frac{e^{j(\varphi_1 - \varphi_0)}\, g_{Ch}^{(bb)}(t)}{2\sqrt{2}} \right] \qquad (7.249)$$
or, in the frequency domain,
$$G_C(f) = \frac{e^{j(\varphi_1 - \varphi_0)}\, G_{Ch}(f + f_0)\, 1(f + f_0) + e^{-j(\varphi_1 - \varphi_0)}\, G_{Ch}^*(-f + f_0)\, 1(-f + f_0)}{4\sqrt{2}} \qquad (7.250)$$
For a non-coherent receiver we refer to the scheme developed in Example 6.11.6 on page 516.
Figure 7.52. PAM-DSB passband transmission system.
Figure 7.53. PAM-DSB system.
Transmit signal PSD
Considering the PSD of the message sequence, the average PSD of the modulated signal $s(t)$ is given by (7.28),
$$\bar{\mathcal{P}}_s(f) = \frac{1}{4T} \left[ \mathcal{P}_a(f - f_0)\, |H_{Tx}(f - f_0)|^2 + \mathcal{P}_a(f + f_0)\, |H_{Tx}(f + f_0)|^2 \right] \qquad (7.251)$$
Consequently the transmitted signal bandwidth is equal to twice the bandwidth of $h_{Tx}$. The minimum bandwidth is given by
$$B_{min} = \frac{1}{T} \qquad (7.252)$$
Recalling the definition (6.103), the spectral efficiency of the transmission system is given by
$$\nu = \log_2 M \quad \text{(bit/s/Hz)} \qquad (7.253)$$
which is halved with respect to M-PAM (see Table 6.9).
Signal-to-noise ratio

We assume the function
$$\frac{e^{j(\varphi_1 - \varphi_0)}\, g_{Ch}^{(bb)}(t)}{2\sqrt{2}} \qquad (7.254)$$
is real-valued; then from Figure 7.53a, using (1.295), we have the following relation:
$$E[|s_C(t)|^2] = \frac{E[|s_{Ch}^{(bb)}(t)|^2]}{2} = E[|s_{Ch}(t)|^2] \qquad (7.255)$$
Setting
$$q_C(t) = h_{Tx} * g_C(t) \qquad (7.256)$$
from (6.105) and (7.252) we have
$$\Gamma = \frac{M_a\, E_{q_C}}{N_0} \qquad (7.257)$$
where, for an M-PAM system (6.110),
$$M_a = \frac{M^2 - 1}{3} \qquad (7.258)$$
In the absence of ISI, for $\gamma$ defined in (7.106), (7.107) still holds; moreover, using (7.257), for a matched filter receiver, (7.113) yields
$$\gamma_{MF} = \frac{E_{q_C}}{N_0/2} = \frac{2\Gamma}{M_a} \qquad (7.259)$$
Then the error probability is given by
$$P_e = 2\left(1 - \frac{1}{M}\right) Q\!\left(\sqrt{\frac{6\Gamma}{M^2 - 1}}\right) \qquad (7.260)$$
We observe that the performance of an M-PAM-DSB system and that of an M-PAM system are the same, in terms of $P_e$ as a function of the received power. However, because of DSB modulation, the required bandwidth is doubled with respect to both baseband PAM transmission and PAM-SSB modulation.14 This explains the limited usage of PAM-DSB for digital transmission.
14 The PAM-SSB scheme presents in practice considerable difficulties because the filter for modulation is non-ideal: in fact, this causes distortion of the signal $s(t)$ at low frequencies that may be compensated for only by resorting to line coding (see Appendix 7.A).
Appendix 7.D
Implementation of a QAM transmitter
Three structures, which differ in the position of the digital-to-analog converter (DAC), may be considered for the implementation of a QAM transmitter. In Figure 7.54 the modulator employs, for both in-phase and quadrature signals, a DAC after the interpolator filter $h_{Tx}$, followed by an analog mixer that shifts the signal to passband. This scheme works if the sampling frequency $1/T_c$ is much greater than twice the bandwidth $B$ of $h_{Tx}$. For applications where the symbol rate is very high, the DAC is placed right after the bit mapper and the various filters are analog (see Chapter 19). In the implementation illustrated in Figure 7.55, the DAC is instead placed at an intermediate stage with respect to the case of Figure 7.54. Samples are pre-modulated by a digital mixer to an intermediate frequency $f_1$, interpolated by the DAC, and subsequently remodulated by a second analog mixer that shifts the signal to the desired band. The intermediate frequency $f_1$ must be greater than the bandwidth $B$ and smaller than $1/(2T_c) - B$, thus avoiding overlap among spectral components. We observe that this scheme requires only one DAC, but the sampling frequency must be at least twice that of the previous scheme.
Figure 7.54. QAM with analog mixer.
Figure 7.55. QAM with digital and analog mixers.
Figure 7.56. Polyphase implementation of the ﬁlter hTx for Tc D T=8.
For the first implementation, as the system is typically oversampled with a sampling interval $T_c = T/4$ or $T_c = T/8$, the frequency response of the DAC, $G_I(f)$, may be considered constant in the passband of both the in-phase and quadrature signals. For the second implementation, unless $f_1 \ll 1/T_c$, the distortion introduced by the DAC should be considered and equalized by one of the following methods (see page 338): including the compensation for $G_I(f)$ in the frequency response of the filter $h_{Tx}$; inserting a digital filter before the DAC; or inserting an analog filter after the DAC.

We recall that an efficient implementation of the interpolator filter $h_{Tx}$ is obtained by the polyphase representation, as shown in Figure 7.56 for $T_c = T/8$, where
$$h^{(\ell)}(m) = h_{Tx}\!\left(mT + \ell\,\frac{T}{8}\right) \qquad \ell = 0, 1, \dots, 7 \quad m = -\infty, \dots, +\infty \qquad (7.261)$$
To implement the scheme of Figure 7.56, once the impulse response is known, it may be convenient to precompute the possible values of the filter output and store them in a table or RAM. The symbols $\{a_{k,I}\}$ are then used as pointers to the table itself. The same approach may be followed to generate the values of the signals $\cos(2\pi f_1 n T_c)$ and $\sin(2\pi f_1 n T_c)$ in Figure 7.55, using an additional table and the index $n$ as a cyclic pointer.
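The polyphase structure (7.261) can be sketched as follows (illustrative code with a short assumed prototype FIR and $Q_0 = 8$ branches, i.e. $T_c = T/8$):

```python
def polyphase_interpolate(symbols, h, Q0=8):
    """Interpolate 1 sample/symbol up to Q0 samples/symbol via Q0 sub-filters."""
    branches = [h[l::Q0] for l in range(Q0)]      # h^(l)(m) = h(m Q0 + l), cf. (7.261)
    out = []
    for k in range(len(symbols)):
        for branch in branches:                   # output index n = k*Q0 + l
            y = sum(branch[m] * symbols[k - m]
                    for m in range(len(branch)) if k - m >= 0)
            out.append(y)
    return out

# Assumed symmetric prototype of length 16 (two taps per branch).
h = [0.0, 0.1, 0.2, 0.4, 0.7, 0.9, 1.0, 0.9, 0.7, 0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0]
y = polyphase_interpolate([1.0, -1.0, 1.0], h)
```

Each output sample uses only one branch (here two multiplications) instead of running the full 16-tap filter at the high rate.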
Appendix 7.E
Simulation of a QAM system
In Figure 7.12 we consider the baseband equivalent scheme of a QAM system. The aim is to simulate the various transformations in the discrete-time domain and to estimate the bit error probability. This simulation method, also called Monte Carlo, is simple and general because it does not require any special assumption on the processes involved; however, it is computationally intensive. For alternative methods to estimate the error probability, for example semi-analytical ones, we refer to specific texts on the subject [12]. We describe the various transformations in the overall discrete-time system depicted in Figure 7.57, where the only difference with respect to the scheme of Figure 7.12 is that the
(a) Transmitter and channel block diagram.
(b) Receiver block diagram.
Figure 7.57. Baseband equivalent model of a QAM system with discretetime ﬁlters and sampling period TQ D T=Q0 . At the receiver, in addition to the general scheme, a multirate structure to obtain samples of the received signal at the timing phase t0 is also shown.
filters are discrete-time with quantum $T_Q = T/Q_0$, which is chosen to accurately represent the various signals.

Binary sequence $\{b_\ell\}$. The sequence $\{b_\ell\}$ is generated as a random sequence or as a PN sequence (see Appendix 3.A), and has length K.

Bit mapper. The bit mapper maps patterns of information bits to symbols; the symbol constellation depends on the modulator (see Figure 7.6 for two constellations).

Interpolator filter $h_{Tx}$ from period $T$ to $T_Q$. The interpolator filter is efficiently implemented by using the polyphase representation (see Appendix 1.A). For a band-limited pulse of the raised cosine or square root raised cosine type, the maximum value of $T_Q$, submultiple of $T$, is $T/2$. In any case, the implementation of filters, for example the filter representing the channel, and of non-linear transformations, for example that due to a power amplifier operating near saturation (not considered in Figure 7.57), typically requires a larger bandwidth, leading, for example, to the choice $T_Q = T/4$ or $T/8$. In the following examples we choose $T_Q = T/4$. For the design of $h_{Tx}$ the window method can be used ($N_h$ odd):
$$h_{Tx}(q T_Q) = h_{id}\!\left(\left(q - \frac{N_h - 1}{2}\right) T_Q\right) w_{N_h}(q) \qquad q = 0, 1, \dots, N_h - 1 \qquad (7.262)$$
where typically $w_{N_h}$ is the discrete-time rectangular window or the Hamming window, and $h_{id}$ is the ideal impulse response. Frequency responses of $h_{Tx}$ are illustrated in Figure 7.58 for $h_{id}$ a square root raised cosine pulse with roll-off factor $\rho = 0.3$, and $w_{N_h}$ a rectangular window of length $N_h$, for various values of $N_h$ ($T_Q = T/4$). The corresponding impulse responses are shown in Figure 7.59.

Transmission channel. For a radio channel the discrete-time model of Figure 4.35 can be used, where, in the case of a channel affected by fading, the coefficients of the FIR filter that models the channel impulse response are random variables with a given power delay profile. For a transmission line the discrete-time model of (4.150) can be adopted.
We assume the statistical power of the signal at the output of the transmission channel is given by $M_{s_{Ch}} = M_{s_C}$.

Additive white Gaussian noise. Let $\bar w_I(qT_Q)$ and $\bar w_Q(qT_Q)$ be two statistically independent Gaussian r.v.s, each with zero mean and variance 1/2, generated according to (1.655). To generate the complex-valued noise signal $\{w_C(qT_Q)\}$ with spectrum $N_0$, it is sufficient to use the relation
$$w_C(qT_Q) = \sigma_{w_C}\left[\bar w_I(qT_Q) + j\,\bar w_Q(qT_Q)\right] \qquad (7.263)$$
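The window design (7.262) used for Figures 7.58 and 7.59 can be sketched as follows (illustrative code: square root raised cosine $h_{id}$, rectangular window, $\rho = 0.3$, $T_Q = T/4$, $N_h = 33$, with $T = 1$ assumed):

```python
import math

def srrc(t, T=1.0, rho=0.3):
    """Square root raised cosine impulse response (continuous time)."""
    if abs(t) < 1e-12:                               # value at t = 0
        return (1 - rho + 4 * rho / math.pi) / T
    if abs(abs(t) - T / (4 * rho)) < 1e-9:           # removable singularity
        return (rho / (T * math.sqrt(2))) * ((1 + 2/math.pi) * math.sin(math.pi/(4*rho))
                                             + (1 - 2/math.pi) * math.cos(math.pi/(4*rho)))
    x = t / T
    num = math.sin(math.pi*x*(1-rho)) + 4*rho*x*math.cos(math.pi*x*(1+rho))
    return num / (math.pi * x * (1 - (4*rho*x)**2) * T)

Nh, Q0 = 33, 4
TQ = 1.0 / Q0
# (7.262) with a rectangular window: sample h_id centered at (Nh-1)/2.
h = [srrc((q - (Nh - 1) / 2) * TQ) for q in range(Nh)]
```

The resulting FIR is symmetric (linear phase) with its peak at the central tap, as in the bottom panel of Figure 7.59.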
Figure 7.58. Magnitude of the transmit filter frequency response, $|H_{Tx}(f)|$ (dB) vs $fT$, for a windowed square root raised cosine pulse with roll-off factor $\rho = 0.3$, for three values of $N_h$ (17, 25, 33) ($T_Q = T/4$).
Figure 7.59. Transmit filter impulse response, $\{h_{Tx}(qT_Q)\}$, $q = 0, \dots, N_h - 1$, for a windowed square root raised cosine pulse with roll-off factor $\rho = 0.3$, for three values of $N_h$ (17, 25, 33) ($T_Q = T/4$).
where
$$\sigma_{w_C}^2 = N_0\, \frac{1}{T_Q} \qquad (7.264)$$
Usually the signal-to-noise ratio $\Gamma$ given by (6.105) is assigned. For a QAM system, from (7.51) and (7.55) we have
$$\Gamma = \frac{M_{s_C}}{N_0 (1/T)} = \frac{M_{s_C}}{\sigma_{w_C}^2 (T_Q/T)} \qquad (7.265)$$
The standard deviation of the noise to be inserted in (7.263) is given by
$$\sigma_{w_C} = \sqrt{\frac{M_{s_C}\, Q_0}{\Gamma}} \qquad (7.266)$$
We note that $\sigma_{w_C}$ is a function of $M_{s_C}$, of the oversampling ratio $Q_0 = T/T_Q$, and of the given ratio $\Gamma$. In place of $\Gamma$, the ratio $E_b/N_0 = \Gamma/\log_2 M$ may be assigned.

Receive filter. As will be discussed in Chapter 8, there are several possible solutions for the receive filter. The most common choice is a matched filter $g_M$, matched to $h_{Tx}$, of the square root raised cosine type. Alternatively, the receive filter may be a simple anti-aliasing FIR filter $g_{AA}$, with passband at least equal to that of the desired signal. The filter attenuation in the stopband must be such that the statistical power of the noise evaluated in the passband is larger by a factor of 5–10 than the power of the noise evaluated in the stopband, so that we can ignore the contribution of the noise in the stopband at the output of the filter $g_{AA}$. If we adopt as bandwidth of $g_{AA}$ the Nyquist frequency $1/(2T)$, the stopband of an ideal filter with unit gain goes from $1/(2T)$ to $1/(2T_Q)$; therefore the ripple $\delta_s$ in the stopband must satisfy the constraint
$$\frac{N_0\, \dfrac{1}{2T}}{\delta_s\, N_0 \left( \dfrac{1}{2T_Q} - \dfrac{1}{2T} \right)} > 10 \qquad (7.267)$$
from which we get the condition
$$\delta_s < \frac{10^{-1}}{Q_0 - 1} \qquad (7.268)$$
Usually the presence of other interfering signals forces the selection of a value of $\delta_s$ smaller than that obtained in (7.268).

Interpolator filter. The interpolator filter is used to increase the sampling rate from $1/T_Q$ to $1/T_Q'$: this is useful when $T_Q$ is insufficient to obtain the accuracy needed to represent the timing phase $t_0$. This filter can be part of $g_M$ or $g_{AA}$. From Appendix 1.A, the efficient implementation of $\{g_M(pT_Q')\}$ is obtained by the polyphase representation with $T_Q/T_Q'$ branches. To improve the accuracy of the desired timing phase, further interpolation, for example linear, may be employed.
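The noise generation (7.263)–(7.266) can be sketched as follows (illustrative values for $M_{s_C}$, $Q_0$ and $\Gamma$):

```python
import math, random

random.seed(0)
M_sC, Q0, Gamma_dB = 2.0, 4, 10.0            # assumed signal power, oversampling, SNR
Gamma = 10 ** (Gamma_dB / 10)
sigma_wC = math.sqrt(M_sC * Q0 / Gamma)      # (7.266)

# Complex noise samples (7.263): independent Gaussian parts, variance 1/2 each.
N = 50000
w = [complex(random.gauss(0, math.sqrt(0.5)), random.gauss(0, math.sqrt(0.5))) * sigma_wC
     for _ in range(N)]
power = sum(abs(x) ** 2 for x in w) / N      # should approximate sigma_wC^2
```

Note that the noise variance scales with the oversampling ratio $Q_0$: at a higher sampling rate the same $N_0$ spreads over a wider simulated bandwidth.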
Timing phase. Assuming a training sequence is available, for example of the PN type $\{a_0 = p(0), a_1 = p(1), \dots, a_{L-1} = p(L-1)\}$, a simple method to determine $t_0$ is to choose the timing phase in correspondence with the peak of the overall impulse response. Let $\{x(pT_Q')\}$ be the signal before downsampling. If we evaluate
$$m_{opt} = \arg\max_m |r_{xa}(mT_Q')| = \arg\max_m \left| \frac{1}{L} \sum_{\ell=0}^{L-1} x(\ell T + mT_Q')\, p^*(\ell) \right| \qquad m_{min}T_Q' < mT_Q' < m_{max}T_Q' \qquad (7.269)$$
then
$$t_0 = m_{opt}\, T_Q' \qquad (7.270)$$
In (7.269), $m_{min}T_Q'$ and $m_{max}T_Q'$ are estimates of the minimum and maximum system delay, respectively. Moreover, we note that the accuracy of $t_0$ is equal to $T_Q'$ and that the amplitude of the desired signal is $h_0 = r_{xa}(m_{opt}T_Q')/r_a(0)$.

Downsampler. The sampling period after downsampling is usually $T$ or $T_c = T/2$, with timing phase $t_0$. The interpolator filter and the downsampler can be jointly implemented, according to the scheme of Figure 1.81. For example, for $T_Q = T/4$, $T_Q' = T/8$, and $T_c = T/2$, the polyphase representation of the interpolator filter with output $\{x(pT_Q')\}$ requires two branches; the polyphase representation of the interpolator-decimator also requires two branches.

Equalizer. After downsampling, the signal is usually input to an equalizer (LE, FSE or DFE, see Chapter 8). The output signal of the equalizer always has a sampling period equal to $T$. As observed several times, to decimate simply means to evaluate the output at the desired instants.

Data detection. The simplest method resorts to a threshold detector, with thresholds determined by the constellation and the amplitude of the pulse at the decision point.

Viterbi algorithm. An alternative to the threshold detector, which operates on a symbol-by-symbol basis, is represented by maximum likelihood sequence detection by the Viterbi algorithm (see Chapter 8).

Inverse bit mapper. The inverse bit mapper performs the inverse function of the bit mapper: it translates the detected symbols into bits that represent the recovered information bits.

Simulations are typically used to estimate the bit error probability of the system for a certain set of values of $\Gamma$. We recall that caution must be taken at the beginning and at the end of a simulation to account for transients of the system. Let $\bar K$ be the number of recovered bits. The estimate of the bit error probability $P_{bit}$ is given by
$$\hat P_{bit} = \frac{\text{number of bits received with errors}}{\text{number of received bits, } \bar K} \qquad (7.271)$$
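The timing estimate (7.269)–(7.270) can be sketched as follows (illustrative code with an ideal overall pulse and an assumed delay; the search window is taken as $0 \le m < 16$):

```python
import random

L, Q = 32, 4                          # training length, oversampling factor
rnd = random.Random(3)
p = [rnd.choice((-1, 1)) for _ in range(L)]   # binary PN-like training sequence

delay = 7                             # true system delay in T'_Q units (assumed)
x = [0.0] * (L * Q + 16)
for k in range(L):
    x[k * Q + delay] += p[k]          # ideal pulse: one sample per symbol, shifted

def r_xa(m):
    """|correlation| of (7.269) at lag m (in T'_Q units)."""
    return abs(sum(x[l * Q + m] * p[l] for l in range(L)) / L)

m_opt = max(range(16), key=r_xa)      # (7.269); t0 = m_opt * T'_Q, (7.270)
```

With an ideal pulse the correlation peaks at exactly the true delay, with value $r_{xa}(m_{opt}) = 1$; with a dispersive channel the peak broadens but its location still gives a usable $t_0$.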
It is known that as $\bar K \to \infty$, the estimate $\hat P_{bit}$ has a Gaussian probability distribution with mean $P_{bit}$ and variance $P_{bit}(1 - P_{bit})/\bar K$. From this we can deduce, by varying $\bar K$, the confidence interval $[P_-, P_+]$ within which the estimate $\hat P_{bit}$ approximates $P_{bit}$ with an assigned probability, that is
$$P[P_- \le \hat P_{bit} \le P_+] = P_{conf} \qquad (7.272)$$
For example, we find that with $P_{bit} = 10^{-\ell}$ and $\bar K = 10^{\ell+1}$ we have a confidence interval of about a factor 2 with a probability of 95%, that is $P[\frac{1}{2}P_{bit} \le \hat P_{bit} \le 2P_{bit}] \simeq 0.95$. This is in good agreement with the experimental rule of selecting
$$\bar K = 3 \cdot 10^{\ell+1} \qquad (7.273)$$
For a channel affected by fading, the average $P_{bit}$ is not very significant: in this case it is meaningful to compute the distribution of $P_{bit}$ for various channel realizations. In practice we assume the transmission of a sequence of $N_p$ packets, each one with $\bar K_p$ information bits to be recovered: typically $\bar K_p = 1000$–$10000$ bits and $N_p = 100$–$1000$ packets. Moreover, the channel realization changes at every packet. For a given average signal-to-noise ratio $\bar\Gamma$ (see (6.347)), the probability $\hat P_{bit}(n_p)$, $n_p = 1, \dots, N_p$, is computed for each packet. As a performance measure we use the percentage of packets with $\hat P_{bit}(n_p) < P_{bit}$, also called the bit error probability cumulative distribution function (cdf), where $P_{bit}$ assumes values in a certain set. This performance measure is more significant than the average $P_{bit}$ evaluated for a very long, continuous transmission of $N_p \bar K_p$ information bits. In fact the average $P_{bit}$ does not show that, in the presence of fading, the system may occasionally exhibit a very large $P_{bit}$, and consequently an outage.
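A minimal Monte Carlo sketch illustrating (7.271) and the factor-2 confidence behaviour (binary antipodal signaling over AWGN with assumed parameters):

```python
import math, random

random.seed(1)
sigma = 0.7                                   # assumed noise standard deviation
K_bar = 30000                                 # number of simulated bits
errors = 0
for _ in range(K_bar):
    a = random.choice((-1, 1))
    y = a + random.gauss(0, sigma)            # AWGN channel
    if (1 if y > 0 else -1) != a:             # threshold detector at zero
        errors += 1
P_hat = errors / K_bar                        # estimate (7.271)

P_theory = 0.5 * math.erfc((1 / sigma) / math.sqrt(2))   # Q(1/sigma)
```

With well over 10 expected errors per run, the estimate falls well within a factor 2 of the true value, in agreement with the rule (7.273).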
Chapter 8
Channel equalization and symbol detection
With reference to PAM and QAM systems, in this chapter we discuss several methods to compensate for the linear distortion introduced by the transmission channel. Next, as an alternative to a memoryless threshold detector, we analyze detection methods that operate on sequences of samples. Recalling the analysis of Section 7.3, we first review three techniques, relying on the zero-forcing filter, the linear equalizer, and the DFE, respectively, that attempt to reduce the ISI in addition to maximizing the ratio $\gamma$ defined in (7.106).
8.1 Zero-forcing equalizer (LE-ZF)

From the relation (7.66), assuming that $H_{Tx}(f)$ and $G_C(f)$ are known, and that $H(f)$ is given, for example, by (7.84), the equation
$$H(f) = Q_R(f)\, e^{j2\pi f t_0} = H_{Tx}(f)\, G_C(f)\, G_{Rc}(f)\, e^{j2\pi f t_0} \qquad (8.1)$$
can be solved with respect to the receive filter, yielding
$$G_{Rc}(f) = \frac{T\, \mathrm{rcos}\!\left(\dfrac{f}{1/T},\, \rho\right)}{H_{Tx}(f)\, G_C(f)}\, e^{-j2\pi f t_0} \qquad (8.2)$$
From (8.2), the magnitude and phase responses of $G_{Rc}$ can be obtained. In practice, however, although the condition (8.2) leads to the suppression of the ISI, whence the filter $g_{Rc}$ is called a linear equalizer zero-forcing (LE-ZF), it may also lead to the enhancement of the noise power at the decision point, as expressed by (7.75). In fact, if the frequency response $G_C(f)$ exhibits strong attenuation at certain frequencies in the range $[-(1+\rho)/(2T),\ (1+\rho)/(2T)]$, then $G_{Rc}(f)$ presents peaks that determine a large value of $\sigma_{w_R}^2$. In any event, the choice (8.2) guarantees the absence of ISI at the decision point, and from (7.109) we get
$$\gamma_{LE\text{-}ZF} = \frac{2}{N_0\, E_{g_{Rc}}} \qquad (8.3)$$
Obviously, based on the considerations of Section 7.3, it is
$$\gamma_{LE\text{-}ZF} \le \gamma_{MF} \qquad (8.4)$$
where $\gamma_{MF}$ is defined in (7.113). In the particular case of an ideal channel, that is $G_{Ch}(f) = G_0$ in the passband of the system, and assuming $h_{Tx}$ is given by
$$H_{Tx}(f) = \sqrt{T\, \mathrm{rcos}\!\left(\frac{f}{1/T},\, \rho\right)} \qquad (8.5)$$
then from (7.42)
$$Q_C(f) = H_{Tx}(f)\, G_C(f) = k_1\, H_{Tx}(f) \qquad (8.6)$$
where, from (7.38), $k_1 = G_0$ for a PAM system, whereas $k_1 = (G_0/\sqrt{2})\, e^{j(\varphi_1 - \varphi_0)}$ for a QAM system. Moreover, from (8.2), neglecting a constant delay, i.e. for $t_0 = 0$, it results that
$$G_{Rc}(f) = \frac{1}{k_1} \sqrt{T\, \mathrm{rcos}\!\left(\frac{f}{1/T},\, \rho\right)} \qquad (8.7)$$
In other words, $g_{Rc}(t)$ is matched to $q_C(t) = k_1 h_{Tx}(t)$, and
$$\gamma_{LE\text{-}ZF} = \gamma_{MF} \qquad (8.8)$$
Methods for the design of an LE-ZF filter with a finite number of coefficients are given in Section 8.7.
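The noise enhancement of the LE-ZF can be illustrated numerically (a sketch under simplifying assumptions: $H_{Tx}(f) = 1$ in band, an assumed Gaussian notch in $G_C(f)$ near $f = 0.4/T$, and the output noise power taken proportional to the integral of $|G_{Rc}(f)|^2$ from (8.2)):

```python
import math

T, rho, N = 1.0, 0.3, 400

def rcos(fT, rho):
    """Raised cosine frequency response (unit gain in band)."""
    a = abs(fT)
    if a <= (1 - rho) / 2:
        return 1.0
    if a >= (1 + rho) / 2:
        return 0.0
    return 0.5 * (1 - math.sin(math.pi * (a - 0.5) / rho))

def G_C(f, depth):
    """Assumed channel: flat with a Gaussian notch of given depth near f = 0.4."""
    return 1.0 - depth * math.exp(-((f - 0.4) ** 2) / 0.002)

def noise_power(depth):
    """Numerical integral of |G_Rc|^2 = |T rcos / G_C|^2 over the band."""
    fs = [-0.65 + 1.3 * n / N for n in range(N + 1)]
    vals = [(T * rcos(f * T, rho) / G_C(f, depth)) ** 2 for f in fs]
    return sum(vals) * 1.3 / (N + 1)

mild, deep = noise_power(0.5), noise_power(0.95)
```

The deeper the in-band notch, the larger the peak of $|G_{Rc}(f)|$ and hence the output noise power, which is the weakness of zero forcing that motivates the linear equalizer of the next section.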
8.2 Linear equalizer (LE)

We introduce an optimization criterion for $G_{Rc}$ that takes into account the ISI as well as the statistical power of the noise at the decision point.

8.2.1 Optimum receiver in the presence of noise and ISI

With reference to the scheme of Figure 7.12 for a QAM system, the criterion of choosing the receive filter such that the signal $y_k$ is as close as possible to $a_k$ in the mean-square sense is widely used.¹ Let $h_{Tx}$ and $g_C$ be known. Defining the error
$$e_k = a_k - y_k \qquad (8.9)$$
the receive filter $g_{Rc}$ is chosen such that the mean-square error
$$J = E[|e_k|^2] = E[|a_k - y_k|^2] \qquad (8.10)$$
is minimized.

¹ It would be desirable to find the filter such that $P[\hat a_k \ne a_k]$ is minimum. This problem, however, is usually very difficult to solve. Therefore we resort to the criterion of minimizing $E[|y_k - a_k|^2]$ instead.
The following assumptions are made:

1. the sequence $\{a_k\}$ is wide sense stationary (WSS) with spectral density $\mathcal{P}_a(f)$;
2. the noise $w_C$ is complex-valued and WSS; in particular we assume it is white with spectral density $\mathcal{P}_{w_C}(f) = N_0$;
3. the sequence $\{a_k\}$ and the noise $w_C$ are statistically independent.

The minimization of J in this situation differs from the classical problem of the optimum Wiener filter because $h_{Tx}$ and $g_C$ are continuous-time pulses. By resorting to the calculus of variations (see Appendix 8.A), we obtain the general solution

$$G_{Rc}(f) = \frac{\mathcal{P}_a(f)}{T}\;\frac{\mathcal{Q}^*_C(f)\, e^{-j2\pi f t_0}}{N_0 + \dfrac{\mathcal{P}_a(f)}{T}\,\dfrac{1}{T}\displaystyle\sum_{\ell=-\infty}^{+\infty}\left|\mathcal{Q}_C\!\left(f-\frac{\ell}{T}\right)\right|^2} \qquad (8.11)$$
where $\mathcal{Q}_C(f) = H_{Tx}(f)\, G_C(f)$. Considerations on the joint optimization of the transmit and receive filters are discussed in Appendix 8.A. If the symbols are statistically independent and have zero mean, then $\mathcal{P}_a(f) = T\sigma_a^2$, and (8.11) becomes:
$$G_{Rc}(f) = \frac{\sigma_a^2\;\mathcal{Q}^*_C(f)\, e^{-j2\pi f t_0}}{N_0 + \sigma_a^2\,\dfrac{1}{T}\displaystyle\sum_{\ell=-\infty}^{+\infty}\left|\mathcal{Q}_C\!\left(f-\frac{\ell}{T}\right)\right|^2} \qquad (8.12)$$
The expression of the cost function J for the optimum filter (8.12) is given in (8.40). From the decomposition (7.62) of $G_{Rc}(f)$, in (8.12) we have the following correspondences:
$$G_M(f) = \mathcal{Q}^*_C(f)\, e^{-j2\pi f t_0} \qquad (8.13)$$
and

$$C(e^{j2\pi fT}) = \frac{\sigma_a^2}{N_0 + \sigma_a^2\,\dfrac{1}{T}\displaystyle\sum_{\ell=-\infty}^{+\infty}\left|\mathcal{Q}_C\!\left(f-\frac{\ell}{T}\right)\right|^2} \qquad (8.14)$$
The optimum receiver thus assumes the structure of Figure 8.1. We note that $g_M$ is the filter matched to the impulse response of the QAM system at the receiver input.² The filter c is called linear equalizer (LE); it seeks the optimum trade-off between removing the ISI and limiting the enhancement of the noise at the decision point.
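The trade-off performed by the LE, compared with the zero-forcing solution, can be sketched numerically. In the example below the folded spectrum $(1/T)\sum_\ell |\mathcal{Q}_C(f-\ell/T)|^2$ is that of an assumed two-tap pulse with taps $[1, a]$; the tap value, symbol variance, and noise level are illustrative assumptions.

```python
import numpy as np

T, a = 1.0, 0.95                 # two-tap pulse [1, a]; a near 1 gives a deep folded-spectrum dip
sigma_a2, N0 = 1.0, 0.01
f = np.linspace(-0.5 / T, 0.5 / T, 1001)
folded = 1 + a**2 + 2 * a * np.cos(2 * np.pi * f * T)   # (1/T) sum_l |Q_C(f - l/T)|^2

C_zf = 1.0 / folded                                     # zero-forcing equalizer (no-noise case)
C_le = sigma_a2 / (N0 + sigma_a2 * folded)              # MMSE linear equalizer, cf. (8.14)

print("max C_ZF =", C_zf.max())      # grows without bound as the dip deepens
print("max C_LE =", C_le.max())      # always bounded by sigma_a2 / N0
```

Near a spectral dip the zero-forcing response peaks freely, while (8.14) caps the gain at $\sigma_a^2/N_0$, which is exactly the ISI-versus-noise compromise described above.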
² As derived later in the text (see Observation 8.13 on page 681), the output signal of the matched filter, sampled at the modulation rate $1/T$, forms a "sufficient statistic" if all the channel parameters are known.
Figure 8.1. Optimum receiver structure for a channel with additive white noise.

We assume that the filter c may have an infinite number of coefficients, i.e. it may be IIR. We analyze two particular cases of the solution (8.12).

1. In the absence of noise, i.e. $w_C(t) \simeq 0$, we have

$$C(e^{j2\pi fT}) = \frac{1}{\dfrac{1}{T}\displaystyle\sum_{\ell=-\infty}^{+\infty}\left|\mathcal{Q}_C\!\left(f-\frac{\ell}{T}\right)\right|^2} \qquad (8.15)$$

Note that the system is perfectly equalized, i.e. there is no ISI. In this case the filter (8.15) is the linear equalizer zero-forcing, as it completely eliminates the ISI.

2. In the absence of ISI at the output of $g_M$, that is if $|\mathcal{Q}_C(f)|^2$ is a Nyquist pulse, $C(e^{j2\pi fT})$ is constant and the equalizer can be removed.

However, the particular case of a matched filter, for which $g_M(t) = q_C^*(t_0 - t)$, is very interesting from a theoretical point of view.

Alternative derivation of the IIR equalizer

Starting from the receiver of Figure 8.2a, and for any type of filter $g_M$, not necessarily matched, it is possible to determine the coefficients of the FIR equalizer filter c using the Wiener formulation, with the following definitions:

- filter input signal, $x_k$;
- filter output signal, $y_k$;
- desired output signal, $d_k = a_{k-D}$;
- estimation error, $e_k = d_k - y_k$.

We notice the presence of the parameter D that denotes the lag of the desired signal: this parameter, which must be suitably estimated, expresses in number of symbol intervals the delay introduced by the equalizer. The overall delay from the emission of $a_k$ to the generation of the detected symbol $\hat a_{k-D}$ is equal to $t_0 + DT$ seconds. With reference to the scheme of Figure 8.2a, q is the overall impulse response of the system at the sampler input:

$$q(t) = h_{Tx} * g_C * g_M(t) = q_C * g_M(t) \qquad (8.16)$$

Figure 8.2. Linear equalizer as a Wiener filter.

In Figure 8.2a the desired sample $q(t_0)$ is taken in relation to the maximum value of $|q(t)|$. For the matched filter, $g_M(t) = q_C^*(t_0 - t)$, so that

$$q(t) = r_{q_C}(t - t_0) \qquad (8.17)$$

where $r_{q_C}$, as it is a correlation function, is the autocorrelation of the deterministic pulse $q_C$, given by

$$r_{q_C}(t) = [q_C(t') * q_C^*(-t')](t) \qquad (8.18)$$

We note that if $q_C$ has a finite support $(0, t_{q_C})$, then $g_M(t) = q_C^*(t_0 - t)$ has support $(t_0 - t_{q_C},\, t_0)$; hence, to obtain a causal filter $g_M$, the minimum value of $t_0$ is $t_{q_C}$. The Fourier transform of $r_{q_C}(t)$ is given by

$$\mathcal{P}_{q_C}(f) = |\mathcal{Q}_C(f)|^2 \qquad (8.19)$$

Assuming $w_C$ is white noise, we have $w_R(t) = w_C * g_M(t)$, with autocorrelation function given by

$$r_{w_R}(\tau) = r_{w_C} * r_{q_C}(\tau) = N_0\, r_{q_C}(\tau) \qquad (8.20)$$

Then the spectrum of $w_R$ is given by:

$$\mathcal{P}_{w_R}(f) = N_0\, \mathcal{P}_{q_C}(f) = N_0\, |\mathcal{Q}_C(f)|^2 \qquad (8.21)$$

In Figure 8.2a, sampling at instants $t_k = t_0 + kT$ yields the sampled QAM signal

$$x_k = \sum_{i=-\infty}^{+\infty} a_i\, h_{k-i} + \tilde w_k \qquad (8.22)$$

The discrete-time equivalent model is illustrated in Figure 8.2b. The discrete-time overall impulse response is given by

$$h_n = q(t_0 + nT) = r_{q_C}(nT) \qquad (8.23)$$

In particular, it results in

$$h_0 = r_{q_C}(0) = E_{q_C} \qquad (8.24)$$

The sequence $\{h_n\}$ has z-transform given by

$$\Phi(z) = Z[h_n] = Z[r_{q_C}(nT)] \qquad (8.25)$$

which, by the Hermitian symmetry of an autocorrelation sequence, satisfies the relation:

$$\Phi(z) = \Phi^*\!\left(\frac{1}{z^*}\right) \qquad (8.26)$$

Moreover, from (1.90), the Fourier transform of (8.23) is given by

$$\Phi(e^{j2\pi fT}) = \mathcal{F}[h_n] = \frac{1}{T}\sum_{\ell=-\infty}^{+\infty}\left|\mathcal{Q}_C\!\left(f-\frac{\ell}{T}\right)\right|^2 \qquad (8.27)$$

Also, using the properties of Table 1.3, the correlation sequence of $\{h_n\}$ has z-transform equal to

$$Z[r_h(n)] = \Phi(z)\,\Phi^*\!\left(\frac{1}{z^*}\right) \qquad (8.28)$$

and, from (8.21), the z-transform of the autocorrelation of the noise samples $\tilde w_k = w_R(t_0 + kT)$ is given by:

$$Z[r_{\tilde w}(n)] = Z[r_{w_R}(nT)] = N_0\,\Phi(z) \qquad (8.29)$$

The Wiener solution that gives the optimum coefficients is given in the z-transform domain by (2.50):

$$C_{opt}(z) = Z[c_n] = \frac{\mathcal{P}_{dx}(z)}{\mathcal{P}_x(z)} \qquad (8.30)$$

where

$$\mathcal{P}_{dx}(z) = Z[r_{dx}(n)] \qquad \text{and} \qquad \mathcal{P}_x(z) = Z[r_x(n)] \qquad (8.31)$$

We assume the following assumptions hold:

1. the sequence $\{a_k\}$ is WSS, with symbols that are statistically independent and with zero mean, so that

$$r_a(n) = \sigma_a^2\,\delta_n \qquad \text{and} \qquad \mathcal{P}_a(f) = T\sigma_a^2 \qquad (8.32)$$

2. $\{a_k\}$ and $\{\tilde w_k\}$ are statistically independent and hence uncorrelated.

Then the crosscorrelation between $\{d_k\}$ and $\{x_k\}$ is given by

$$r_{dx}(n) = E[d_k\, x^*_{k-n}] = E\!\left[a_{k-D}\left(\sum_{i=-\infty}^{+\infty} a_i\, h_{k-n-i} + \tilde w_{k-n}\right)^{\!*}\right] = \sigma_a^2\, h^*_{D-n} \qquad (8.33)$$

using assumption 2. Hence

$$\mathcal{P}_{dx}(z) = \sigma_a^2\,\Phi^*\!\left(\frac{1}{z^*}\right) z^{-D} \qquad (8.34)$$

Under the same assumptions 1 and 2, the computation of the autocorrelation of the process $\{x_k\}$ yields (see also Table 1.6):

$$r_x(n) = \sigma_a^2\, r_h(n) + r_{\tilde w}(n) \qquad (8.35)$$

Thus, using (8.28) and (8.29), we obtain

$$\mathcal{P}_x(z) = \sigma_a^2\,\Phi(z)\,\Phi^*\!\left(\frac{1}{z^*}\right) + N_0\,\Phi(z) \qquad (8.36)$$

Therefore

$$C_{opt}(z) = \frac{\sigma_a^2\,\Phi^*\!\left(\dfrac{1}{z^*}\right) z^{-D}}{\sigma_a^2\,\Phi(z)\,\Phi^*\!\left(\dfrac{1}{z^*}\right) + N_0\,\Phi(z)} \qquad (8.37)$$

Taking into account the property (8.26), (8.37) is simplified as

$$C_{opt}(z) = \frac{\sigma_a^2\, z^{-D}}{N_0 + \sigma_a^2\,\Phi(z)} \qquad (8.38)$$

It can be observed that, for $z = e^{j2\pi fT}$, (8.38) corresponds to (8.14), apart from the term $z^{-D}$, which accounts for a possible delay introduced by the equalizer.

In relation to the optimum filter $C_{opt}(z)$, we determine the minimum value of the cost function. We recall the general expression for the Wiener filter (2.53):

$$J_{min} = \sigma_d^2 - \sum_{i=0}^{N-1} c_{opt,i}\, r^*_{dx}(i) \qquad (8.39)$$

Finally, substitution of the relations (8.34) and (8.36) in (8.39) yields

$$J_{min} = \sigma_d^2 - T\int_{-\frac{1}{2T}}^{\frac{1}{2T}} \mathcal{P}^*_{dx}(e^{j2\pi fT})\, C_{opt}(e^{j2\pi fT})\, df = \sigma_a^2\, T\int_{-\frac{1}{2T}}^{\frac{1}{2T}} \frac{N_0}{N_0 + \sigma_a^2\,\Phi(e^{j2\pi fT})}\, df \qquad (8.40)$$

If $\Phi(z)$ is a rational function of z, the integral (8.40) may be computed by evaluating the coefficient of the term $z^0$ of the function $\sigma_a^2 N_0/(N_0 + \sigma_a^2\,\Phi(z))$, which can be obtained by series expansion of the integrand, or by using the partial fraction expansion method (see (1.131)). We note that in the absence of ISI, at the output of the MF we get $\Phi(z) = h_0 = E_{q_C}$, and

$$J_{min} = \frac{\sigma_a^2\, N_0}{N_0 + \sigma_a^2\, E_{q_C}} \qquad (8.41)$$

Signal-to-noise ratio γ

We define the overall impulse response at the equalizer output, sampled with a sampling rate equal to the modulation rate $1/T$, as

$$\psi_i = (h_n * c_{opt,n})_i \qquad (8.42)$$

where $\{h_n\}$ is given by (8.23) and $c_{opt,n}$ is the impulse response of the optimum filter (8.38). At the decision point we have

$$y_k = \psi_D\, a_{k-D} + \sum_{\substack{i=-\infty \\ i \ne D}}^{+\infty} \psi_i\, a_{k-i} + (\tilde w_n * c_{opt,n})_k \qquad (8.43)$$

We assume that in (8.43) the total disturbance given by ISI plus noise is modeled as Gaussian noise with variance $2\sigma_I^2$; hence, for a minimum distance among symbols of the constellation equal to 2, (7.106) yields

$$\gamma_{LE} = \left(\frac{\psi_D}{\sigma_I}\right)^2 \qquad (8.44)$$
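The integral expression (8.40) for the minimum MSE lends itself to direct numerical evaluation. The sketch below assumes an illustrative two-tap pulse with taps $[1, a]$, so that the folded spectrum takes a simple closed form; all parameter values are assumptions for the example.

```python
import numpy as np

T, sigma_a2, N0, a = 1.0, 1.0, 0.1, 0.5
f = np.linspace(-0.5 / T, 0.5 / T, 100001)
df = f[1] - f[0]
Phi = 1 + a**2 + 2 * a * np.cos(2 * np.pi * f * T)   # folded spectrum for taps [1, a]

# numeric version of the J_min integral (8.40)
J_min = sigma_a2 * T * np.sum(N0 / (N0 + sigma_a2 * Phi)) * df

# no-ISI sanity check: with Phi constant and equal to E_qC, (8.41) gives the closed form
E_qC = 1.0
J_min_no_isi = sigma_a2 * N0 / (N0 + sigma_a2 * E_qC)
print(J_min, J_min_no_isi)
```

The computed value always lies strictly between 0 and $\sigma_a^2$, and in the no-ISI case the integral collapses to the closed form (8.41).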
In case the approximation $\psi_D \simeq 1$ holds, the total disturbance in (8.43) coincides with $e_k$, hence $2\sigma_I^2 \simeq J_{min}$, and (8.44) becomes

$$\gamma_{LE} \simeq \frac{2}{J_{min}} \qquad (8.45)$$

where $J_{min}$ is given by (8.40).

8.3 LE with a finite number of coefficients

In practice, if it is possible to rely on some a priori knowledge of the channel, the filter $g_M$ may be designed according to the average characteristics of the channel; otherwise, if the channel is either unknown a priori or it is time variant, it is necessary to design a receiver that tries to identify the channel characteristics and at the same time to equalize it through suitable adaptive algorithms. Two alternative approaches are usually considered.

First solution. The matched filter $g_M$ is designed assuming an ideal channel. Therefore the equalization task is left to the filter c, which is an adaptive transversal filter that attempts, in real time, to equalize the channel by adapting its coefficients to the channel variations. The receiver is represented in Figure 8.3.

Figure 8.3. Receiver implementation by an analog matched filter followed by a sampler and a discrete-time linear equalizer.

Second solution. The classical block diagram of an adaptive receiver is shown in Figure 8.4. The anti-aliasing filter $g_{AA}$ is designed according to specifications imposed by the sampling theorem. In particular, if the desired signal $s_C$ has a bandwidth B and x is sampled with period $T_c = T/F_0$, with $F_0 \ge 2$, where $F_0$ is the oversampling index, then the passband of $g_{AA}$ should extend at least up to frequency B; because the noise $w_C$ is considered as a wideband signal, the cutoff frequency of $g_{AA}$ is between B and $F_0/(2T)$. In practice, to simplify the implementation of the filter $g_{AA}$, it is convenient to consider a wide transition band; moreover, $g_{AA}$ should also attenuate the noise components outside the passband of the desired signal $s_C$.

Figure 8.4. Receiver implementation by discrete-time filters.

Thus the discrete-time filter c needs to accomplish the following tasks:

1. to equalize the channel;
2. to act as a matched filter;
3. to filter the residual noise outside the passband of the desired signal $s_C$.

Note that the filter c of Figure 8.4 is implemented as a decimator filter (see Appendix 1.A), where the input signal $x_n = x(t_0 + nT_c)$ is defined over a discrete-time domain with period $T_c = T/F_0$, and the output signal $y_k$ is defined over a discrete-time domain with period T.

Adaptive LE

We analyze now the solution illustrated in Figure 8.3; the discrete-time equivalent scheme is illustrated in Figure 8.5, where $\{h_n\}$ is the discrete-time impulse response of the overall system, given by

$$h_n = q(t_0 + nT), \qquad q(t) = h_{Tx} * g_C * g_M(t) \qquad (8.46)$$

Figure 8.5. Discrete-time equivalent scheme associated with the implementation of Figure 8.3.

Two strategies may be used to determine an equalizer filter c with N coefficients:

1. the direct method, which employs the Wiener formulation and requires the computation of the matrix R and the vector p; its description is postponed to Section 8.5 (see Observation 8.2 on page 641);
2. the adaptive method, which we will describe next (see Chapter 3).
and

$$\tilde w_k = w_R(t_0 + kT), \qquad w_R(t) = w_C * g_M(t) \qquad (8.47)$$

The design strategy consists of the following steps.

1. Define the performance measure of the system. The MSE criterion is typically adopted:

$$J(k) = E[|e_k|^2] \qquad (8.48)$$

2. Select the law of coefficient update. For example, for an FIR filter c with N coefficients using the LMS algorithm (see Section 3.1.2) we have

$$\mathbf{c}_{k+1} = \mathbf{c}_k + \mu\, e_k\, \mathbf{x}^*_k \qquad (8.49)$$

where

a) input vector

$$\mathbf{x}_k = [x_k,\, x_{k-1},\, \ldots,\, x_{k-N+1}]^T \qquad (8.50)$$

b) coefficient vector

$$\mathbf{c}_k = [c_{0,k},\, c_{1,k},\, \ldots,\, c_{N-1,k}]^T \qquad (8.51)$$

c) adaptation gain

$$0 < \mu < \frac{2}{N\, r_x(0)} \qquad (8.52)$$

3. To evaluate the signal error $e_k$ to be used in the adaptive algorithm we distinguish two modes.

a) Training mode

$$e_k = a_{k-D} - y_k \qquad k = D, \ldots, L_{TS} + D - 1 \qquad (8.53)$$

Evaluation of the error in training mode is possible if a sufficiently long sequence of $L_{TS}$ symbols known at the receiver, called training sequence (TS), is transmitted: $\{a_k\}$, $k = 0, 1, \ldots, L_{TS} - 1$. As the spectrum of the training sequence must be wide, typically a PN sequence is used (see Appendix 3.A). The duration of the transmission of the TS is equal to $L_{TS}T$. During this time interval, the automatic identification of the channel characteristics, and consequently channel equalization, takes place, allowing the computation of the optimum coefficients of the equalizer filter c. We note that even the direct method requires a training sequence to determine the vector p and the matrix R (see Observation 8.3 on page 641).
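The training-mode procedure above can be sketched in a few lines. The channel taps, step size, equalizer length, and delay below are illustrative assumptions; a binary pseudo-random sequence stands in for the PN training sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.1, 1.0, 0.3])            # assumed overall discrete-time response {h_n}
N, D, mu = 11, 5, 0.03                   # equalizer taps, decision delay, LMS step size
L_TS = 4000                              # training-sequence length

a = rng.choice([-1.0, 1.0], size=L_TS)   # binary (PN-like) training symbols
x = np.convolve(a, h)[:L_TS] + 0.05 * rng.standard_normal(L_TS)

c = np.zeros(N)
J_hist = []
for k in range(N - 1, L_TS):
    xk = x[k - N + 1:k + 1][::-1]        # input vector [x_k, ..., x_{k-N+1}]
    yk = c @ xk
    ek = a[k - D] - yk                   # training-mode error, cf. (8.53)
    c = c + mu * ek * xk                 # LMS update, cf. (8.49), real-valued case
    J_hist.append(ek ** 2)

print("MSE early:", np.mean(J_hist[:200]), "MSE late:", np.mean(J_hist[-200:]))
```

The squared error, averaged over windows, decreases toward the Wiener minimum as the coefficients converge during the training interval.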
b) Decision directed mode

$$e_k = \hat a_{k-D} - y_k \qquad k \ge L_{TS} + D \qquad (8.54)$$

Once the transmission of the TS is completed, we assume that the equalizer has reached convergence; therefore $\hat a_k \simeq a_k$, and the transmission of information symbols may start. In (8.53) we then substitute the known transmitted symbol with the detected symbol to obtain (8.54). The implementation of the above equations is illustrated in Figure 8.6.

Figure 8.6. Linear adaptive equalizer implemented as a transversal filter with N coefficients.

8.4 Fractionally spaced equalizer (FSE)

We consider the receiver structure with oversampling illustrated in Figure 8.7. The discrete-time overall system has impulse response given by

$$h_i = q(t_0 + iT_c) \qquad (8.55)$$

where

$$q(t) = h_{Tx} * g_C * g_{AA}(t) \qquad (8.56)$$

The noise is given by

$$\tilde w_n = w_R(t_0 + nT_c), \qquad w_R(t) = w_C * g_{AA}(t) \qquad (8.57)$$
t/: s R .A).t/. where Tc D T =F0 . deﬁned on the discretetime domain with sampling period Tc .`F0 Tc / D 0. the discretetime pulse satisﬁes the Nyquist criterion if h.0/ 6D 0. t 2 <.61) 0 As mentioned earlier. Fractionally spaced linear equalizer (FSE).3 we considered continuoustime Nyquist pulses h.7. and the Nyquist conditions . Before analyzing this system. n integer. In the particular case F0 D 1 we have Tc D T . introducing the downsampler helps to illustrate the advantages of operating with an oversampling index F0 > 1. the nth sample of the sequence is given by xn D The output of the ﬁlter c is given by 0 yn D N 1 X i D0 C1 X kD 1 hn k F0 ak C wn Q (8. The input signal to the ﬁlter c is the sequence fxn g with sampling period Tc D T =F0 . only the sequence fyk g is produced (see Appendix 1.t nT / t 2< (8. the ﬁlter c is decomposed into a discretetime ﬁlter with sampling period Tc that is cascaded with a downsampler.59) We note that the overall impulse response at the ﬁlter output. in a practical implementation of the ﬁlter the sequence fyn g is not explicitly generated. Let us consider a QAM system with pulse h.58) ci xn i (8.8. Let h. for all integers ` 6D 0. Fractionally spaced equalizer (FSE) 631 {hi =q(t0 +iTc )} ak T h xn Tc ~ wn Figure 8. we have 0 yk D yk F0 (8. However. is given by i D h Ł ci (8. c y’n Tc F0 yk T ^ akD T For the analysis. and h.t/ D C1 X nD 1 an h.62) In Section 7. If F0 is an integer.t/ be deﬁned now on a discretetime domain fnTc g.4. we recall the Nyquist problem. and by fyk g the downsampled sequence.60) 0 If we denote by fyn g the sequence of samples at the ﬁlter output.
t/ is deﬁned in (8.e j2³ f Tc /H .56).55).e j2³ f Tc / Â C1 X j2³ f Tc 1 D C.nT / D 0.e.60).55).8. where q. and h.7. it is easy to deduce the behavior of a discretetime Nyquist pulse in the frequency domain: two examples are given in Figure 8. the QAM pulse deﬁned on the discretetime domain with period Tc is given by (8.e j2³ f Tc / D C. let us assume q. 1 2T 0 1 2T 1 T f h(t) H(e j2π fTc ) 0 T 2T t=nT (b) F0 D 1. Discretetime Nyquist pulses and relative Fourier transforms.t/ is real with a bandwidth smaller than 1=T .63) The task of the equalizer is to yield a pulse f i g that approximates a Nyquist pulse. Using the 1 polar notation for Q.64) . in the frequency domain a discretetime Nyquist pulse is equal to a constant.8. it may happen that H . sampling the equalizer input signal with period equal to T . In fact. Channel equalization and symbol detection h(t) H(e j2π fTc ) 0 T 2T t=nTc (a) F0 D 2.e.2T /. Recalling the input–output downsampler relations in the frequency domain. using (8.e j2³ f Tc / assumes very small values at frequencies near f D 1=. 1 2T 0 1 2T 1 T f Figure 8. impose that h.8. 2T / we have Q Â 1 2T Ã D Ae j' and Q Â 1 2T Ã D Ae j' (8.e / Q f T `D 1 F0 ` T Ã e Â Ã F j2³ f ` T0 t0 (8. the pulse f i g at the equalizer output before the downsampler has the following Fourier transform: 9. With reference to the scheme of Figure 8. We see that choosing F0 D 1. From (8.632 Chapter 8. because of an incorrect choice of the timing phase t0 . i. for F0 D 1. We note that. i.0/ 6D 0. a pulse of the type shown in Figure 8. for n 6D 0. for F0 D 2 and F0 D 1.
Therefore, as $\mathcal{Q}(f)$ is Hermitian, for $F_0 = 1$, from (8.63) and (8.64),

$$H(e^{j2\pi fT})\Big|_{f=\frac{1}{2T}} = \mathcal{Q}\!\left(\frac{1}{2T}\right) e^{\,j2\pi\frac{1}{2T}t_0} + \mathcal{Q}\!\left(-\frac{1}{2T}\right) e^{-j2\pi\frac{1}{2T}t_0} = 2A\cos\!\left(\varphi + \pi\frac{t_0}{T}\right) \qquad (8.65)$$

If $t_0$ is such that

$$\varphi + \pi\frac{t_0}{T} = \frac{2i+1}{2}\,\pi \qquad i \ \text{integer} \qquad (8.66)$$

then

$$H(e^{j2\pi\frac{1}{2T}T}) = 0 \qquad (8.67)$$

In this situation the equalizer will enhance the noise around $f = 1/(2T)$. If $F_0 \ge 2$ is chosen, this problem is avoided, because aliasing between replicas of $\mathcal{Q}(f)$ does not occur.

In conclusion, the FSE receiver presents two advantages over T-spaced equalizers:

1. It is an optimum structure according to the MSE criterion, in the sense that it carries out the task of both matched filter (better rejection of the noise) and equalizer (reduction of ISI).
2. It is less sensitive to the choice of $t_0$. Therefore the choice of $t_0$ may be less accurate; the timing phase can be refined, as we will see in Chapter 14, by a timing estimator whose output can be used to determine the optimum timing phase.

The choice of the oversampling index $F_0 = 2$ is very common. For this choice, the correlation method (7.269) with accuracy $T_Q = T/2$ is usually sufficient to determine the timing phase.

Adaptive FSE

The direct method to compute the coefficients of a FSE is described in Section 8.5 (see Observation 8.7 on page 644); we consider now the adaptive method as depicted in Figure 8.9. With respect to the basic scheme of Figure 8.6, the input samples of the filter c have sampling period T/2, and the output samples have sampling period T; in fact, if c has an input signal sampled with sampling period T/2 it also acts as an interpolator filter. Note that coefficient update takes place every T seconds; in a practical implementation the equalizer output is not generated at every sampling instant multiple of T/2, but only at alternate sampling instants. The LMS adaptation equation is given by:

$$\mathbf{c}_{k+1} = \mathbf{c}_k + \mu\, e_k\, \mathbf{x}^*_{2k} \qquad (8.68)$$

The adaptive FSE may incur a difficulty in the presence of noise with variance that is small with respect to the level of the desired signal: in this case some eigenvalues of the autocorrelation matrix of $\mathbf{x}^*_{2k}$ may assume values that are almost zero and consequently
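The band-edge null caused by a bad timing phase with T-spaced sampling, and its absence with T/2 sampling, can be checked numerically. The sketch below assumes a real raised-cosine pulse spectrum for $\mathcal{Q}(f)$ (so $\varphi = 0$ and $A > 0$); all parameter values are illustrative.

```python
import numpy as np

def Q_rcos(f, T=1.0, rho=0.5):
    """Raised-cosine spectrum (real and even, so phi = 0 at the band edge)."""
    f = np.abs(np.atleast_1d(f))
    f1, f2 = (1 - rho) / (2 * T), (1 + rho) / (2 * T)
    out = np.where(f <= f1, T, 0.0)
    band = (f > f1) & (f <= f2)
    out[band] = T / 2 * (1 + np.cos(np.pi * T / rho * (f[band] - f1)))
    return out

def H_folded(f, t0, Tc, T=1.0):
    """H(e^{j2pi f Tc}) = (1/Tc) sum_l Q(f - l/Tc) exp(j2pi (f - l/Tc) t0)."""
    ells = np.arange(-4, 5)
    fr = f - ells / Tc
    return np.sum(Q_rcos(fr, T) * np.exp(2j * np.pi * fr * t0)) / Tc

T = 1.0
fe = 1 / (2 * T)                          # band-edge frequency
bad_t0 = T / 2                            # makes phi + pi t0/T = pi/2, cf. (8.66)
print(abs(H_folded(fe, 0.0, T)))          # T-spaced, good timing: no null
print(abs(H_folded(fe, bad_t0, T)))       # T-spaced, bad timing: ~0 (band-edge null)
print(abs(H_folded(fe, bad_t0, T / 2)))   # T/2-spaced (F0 = 2): no null, any t0
```

With $T_c = T$ the two spectral replicas at $\pm 1/(2T)$ alias and can cancel; with $T_c = T/2$ the replicas are $2/T$ apart and never overlap, which is exactly the robustness to $t_0$ claimed for the FSE.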
the coefficients of the filter c may vary in time and also assume very large values. This effect can be illustrated also in the frequency domain: outside the passband of the input signal the filter c may assume arbitrary values; as a result, in the limit case of absence of noise, the problem of finding the optimum coefficients becomes ill-conditioned, with numerous solutions that present the same minimum value of the cost function.

Figure 8.9. Adaptive FSE ($F_0 = 2$).

To mitigate this problem, we consider two methods that slightly modify the cost function, recalling the leaky LMS algorithm (see page 187); in both cases we attempt to impose a constraint on the amplitude that the coefficients may assume, with $0 < \alpha \ll r_x(0)$.

1. Let

$$J_1 = J + \alpha\, E\!\left[\sum_{i=0}^{N-1} |c_i|^2\right] \qquad (8.69)$$

then

$$\mathbf{c}_{k+1} = \mathbf{c}_k + \mu\,(e_k\,\mathbf{x}^*_{2k} - \alpha\,\mathbf{c}_k) = (1 - \mu\alpha)\,\mathbf{c}_k + \mu\, e_k\,\mathbf{x}^*_{2k} \qquad (8.70)$$

2. Let

$$J_2 = J + \alpha\, E\!\left[\sum_{i=0}^{N-1} |c_i|\right] \qquad (8.71)$$

then

$$\mathbf{c}_{k+1} = \mathbf{c}_k + \mu\,(e_k\,\mathbf{x}^*_{2k} - \alpha\,\mathrm{sgn}\,\mathbf{c}_k) = \mathbf{c}_k - \mu\alpha\,\mathrm{sgn}\,\mathbf{c}_k + \mu\, e_k\,\mathbf{x}^*_{2k} \qquad (8.72)$$

For an analysis of the performance of the FSE, including convergence properties, we refer to [1].

8.5 Decision feedback equalizer (DFE)

We consider the sampled signal at the output of the analog receive filter (see Figure 8.5 or Figure 8.7); the desired signal is given by:

$$s_k = s_R(t_0 + kT) = \sum_{i=-\infty}^{+\infty} a_i\, h_{k-i} \qquad (8.73)$$

where the sampled pulse $\{h_n\}$ is defined in (8.46). In the presence of noise we have

$$x_k = s_k + \tilde w_k \qquad (8.74)$$

where $\tilde w_k$ is the noise, given by (8.47). We assume, in general, that $\{h_n\}$ has finite duration and support $[-N_1, N_2]$. Explicitly writing terms that include precursors and postcursors, (8.74) becomes:

$$x_k = h_0\, a_k + (h_{-N_1}\, a_{k+N_1} + \cdots + h_{-1}\, a_{k+1}) + (h_1\, a_{k-1} + \cdots + h_{N_2}\, a_{k-N_2}) + \tilde w_k \qquad (8.75)$$

The samples with positive time indices are called postcursors, and those with negative time indices precursors. In addition to the actual symbol $a_k$ that we desire to detect from the observation of $x_k$, in (8.75) two terms are identified in parentheses: one that depends only on future symbols $a_{k+1}, \ldots, a_{k+N_1}$, and another that depends only on past symbols $a_{k-1}, \ldots, a_{k-N_2}$. If the past symbols and the impulse response $\{h_n\}$ were perfectly known, we could use an ISI cancellation scheme limited only to postcursors. Substituting the past symbols with their detected versions $\{\hat a_{k-1}, \ldots, \hat a_{k-N_2}\}$, we obtain a scheme to cancel in part the ISI, as illustrated in Figure 8.11, where the feedback filter has impulse response $\{b_n\}$, with $b_n = -h_n$, for $n = 1, \ldots, N_2$, and $b_n = 0$, for $n = N_2 + 1, \ldots, M_2$, and output given by

$$x_{FB,k} = b_1\,\hat a_{k-1} + \cdots + b_{M_2}\,\hat a_{k-M_2} \qquad (8.76)$$

If $M_2 \ge N_2$, and $\hat a_{k-i} = a_{k-i}$, for $i = 1, \ldots, N_2$, then the DFE cancels the ISI due to postcursors. We note that this is done without changing the noise $\tilde w_k$ that is present in $x_k$.
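The postcursor-cancellation idea can be verified with a minimal simulation. The channel taps and noise level below are illustrative assumptions; the channel has postcursors only ($N_1 = 0$), so no feedforward filter is needed and the feedback filter alone removes the ISI.

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.5, 0.2])            # assumed h_0 = 1 plus postcursors h_1, h_2 (N2 = 2)
b = -h[1:]                                # feedback taps b_n = -h_n, n = 1, ..., N2
L = 1000

a = rng.choice([-1.0, 1.0], size=L)       # binary symbols
x = np.convolve(a, h)[:L] + 0.02 * rng.standard_normal(L)

a_hat = np.zeros(L)
for k in range(L):
    # feedback output x_FB,k built from past decisions, cf. (8.76)
    fb = sum(b[j - 1] * a_hat[k - j] for j in range(1, 3) if k - j >= 0)
    yk = x[k] + fb                        # postcursor ISI cancelled, noise untouched
    a_hat[k] = 1.0 if yk >= 0 else -1.0  # symbol decision

print("symbol errors:", int(np.sum(a_hat != a)))
```

As long as the past decisions are correct, the decision-point sample reduces to $a_k + \tilde w_k$, and with this low noise level no errors occur.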
Simulation results also demonstrate better performance of the FSE with respect to an equalizer that operates at the symbol rate [2].
Figure 8.10. Discrete-time pulses in a DFE: (a) before the FF filter; (b) after the FF filter.

The general structure of a DFE is shown in Figure 8.12, where two filters and the detection delay are outlined:

1. Feedforward (FF) filter c, with $M_1$ coefficients:

$$x_{FF,k} = \sum_{i=0}^{M_1-1} c_i\, x_{k-i} \qquad (8.77)$$
Figure 8.11. Simplified scheme of a DFE, where only the feedback filter is included.

2. Feedback (FB) filter b, with $M_2$ coefficients:

$$x_{FB,k} = \sum_{i=1}^{M_2} b_i\,\hat a_{k-D-i} \qquad (8.78)$$

Moreover,

$$y_k = x_{FF,k} + x_{FB,k} \qquad (8.79)$$

We note that the FF filter may be implemented as a FSE, whereas the feedback filter operates with sampling period equal to T.

Figure 8.12. General structure of the DFE.

We recall that for a LE the goal is to obtain a pulse $\{\psi_n\}$ free of ISI; now (see Figure 8.10) ideally the task of the feedforward filter is to obtain an overall impulse response $\{\psi_n = (h * c)_n\}$ with very small precursors, and a transfer function $Z[\psi_n]$ that is minimum phase (see Example 1.3) with respect to the desired sample $\psi_D$. In this manner almost all the ISI is cancelled by the FB filter. The choice of the various parameters depends on $\{h_n\}$. The following guidelines, however, are usually observed.

1. $M_1 T/F_0$ (time span of the FF filter) at least equal to $(N_1 + N_2 + 1)T/F_0$ (time span of h), so the FF filter can effectively equalize.
2. DT (detection delay) equal to or smaller than $(M_1 - 1)T/F_0$ (time span of the FF filter minus one); in practice, DT is approximately equal to $(M_1 - 1)T/F_0$. The choice of DT is obtained by initially choosing a large delay; if the precursors are not negligible, to reduce the constraints on the coefficients of the FF filter the value of D is lowered and the system is iteratively designed. For a LE, instead, the criterion is that the center of gravity of the coefficients of the filter c is approximately equal to $(N-1)/2$.
3. $M_2 T$ (time span of the FB filter), which determines the number of postcursors that are cancelled, depends also on the delay D; for very dispersive channels, for which $N_1 + N_2 \gg M_1$, it results $M_2 T \gg M_1 T/F_0$.
4. The detection delays discussed above are referred to a pulse $\{h_n\}$ "centered" at the origin.

Adaptive DFE

We consider the scheme implemented in Figure 8.13, where the output signal is given by

$$y_k = \sum_{i=0}^{M_1-1} c_i\, x_{k-i} + \sum_{j=1}^{M_2} b_j\,\hat a_{k-D-j} \qquad (8.80)$$

Defining the coefficient vector

$$\boldsymbol{\zeta} = [c_0,\, c_1,\, \ldots,\, c_{M_1-1},\, b_1,\, b_2,\, \ldots,\, b_{M_2}]^T \qquad (8.81)$$

and the input vector

$$\boldsymbol{\xi}_k = [x_k,\, x_{k-1},\, \ldots,\, x_{k-M_1+1},\, \hat a_{k-D-1},\, \hat a_{k-D-2},\, \ldots,\, \hat a_{k-D-M_2}]^T \qquad (8.82)$$

Figure 8.13. Implementation of a DFE.
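The joint adaptation of the FF and FB coefficients by the LMS rule (8.85) can be sketched as follows, in training mode with known past symbols. The channel taps, filter lengths, delay, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([0.2, 1.0, 0.6, 0.3])       # assumed channel with precursor and postcursors
M1, M2, D, mu = 7, 3, 4, 0.02            # FF taps, FB taps, detection delay, step size
L = 5000

a = rng.choice([-1.0, 1.0], size=L)
x = np.convolve(a, h)[:L] + 0.05 * rng.standard_normal(L)

c = np.zeros(M1)                          # feedforward filter
b = np.zeros(M2)                          # feedback filter
err = []
for k in range(M1 - 1, L - 1):
    if k - D - M2 < 0:
        continue
    xi = np.concatenate([x[k - M1 + 1:k + 1][::-1],       # [x_k, ..., x_{k-M1+1}]
                         a[k - D - M2:k - D][::-1]])       # past (training) symbols
    zeta = np.concatenate([c, b])
    yk = zeta @ xi                        # vector form of the DFE output
    ek = a[k - D] - yk                    # training-mode error
    zeta = zeta + mu * ek * xi            # joint LMS update, cf. (8.85), real-valued case
    c, b = zeta[:M1], zeta[M1:]
    err.append(ek ** 2)

print("MSE early:", np.mean(err[:300]), "MSE late:", np.mean(err[-300:]))
```

Both filters converge together; the FF part shapes the precursors, while the FB taps settle near the negated residual postcursors.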
n/ Q (8.n/ D Deﬁning p N2 X jD N1 h j hŁ j n rw . with the usual assumptions of symbols that are i. for the MSE criterion with J D E[jak D yk j2 ] (8.85) Design of a DFE with a ﬁnite number of coefﬁcients If the channel impulse response fh i g and the autocorrelation function of the noise rw . we recall the following results: 1. Decision feedback equalizer (DFE) 639 we express (8. during the transmission of a training sequence.90) equation (8.n/ D ¦a rh .d.91) .80) becomes yk D N2 CM1 1 X pD N1 p ak p C M1 1 X i D0 ci wk Q i C M2 X jD1 b j ak D j (8.n/ D ¦a h Ł n (8.d.88) where rh .86) the Wiener ﬁlter theory may be applied to determine the optimum coefﬁcients of the DFE ﬁlter in the case ak D ak .nT / Q (8. ak O The LMS adaptation is given by ζ kC1 D ζ k C ¼ek ξ Ł k (8.73).80) in vector form: yk D ζ T ξ k The error is given by O ek D a k D (8. autocorrelation of x k 2 rx .n/ D N0 rg M .83) yk D (8.n/ Q are known.84) D ak D. and O statistically independent of the noise. For a generic sequence fh i g in (8. We recall that.89) D h Ł cp D M1 1 X `D0 c` h p ` (8.n/ C rw .8.5.87) 2. crosscorrelation between ak and x k 2 rax .
` h D ` (8. M1 ÄÂ [R] p. the optimum choice of the feedback ﬁlter coefﬁcients is given by bi D Substitution of (8. M2 (8.91).` [p]Ł ` ! copt. p Q q/ 1 p ak D j2 (8.` h i CD ` i D 1. : : : . M2 (8.98) 2 D ¦a 1 M1 1 X `D0 .94) 1 p D 0.95) h j hŁ j . 2.97) Moreover. : : : . q D 0. the following correlations are needed: " [p] p D E ak D xk p M2 X jD1 !# Ł h jCD p ak D j 2 D ¦a h Ł D p (8. M1 Therefore the optimum feedforward ﬁlter coefﬁcients are given by copt D R 1 p (8. 1. Channel equalization and symbol detection Observing (8. : : : .92) in (8. using (8. we get 2 Jmin D ¦a M1 1 X `D0 copt.96) and. 1.94).q D E xk q M2 X j1 D1 Ã h j1 CD q ak D j1 Â xk 2 D ¦a N2 X jD N1 p M2 X j2 D1 ÃŁ ½ h j2 CD ! C rw . from (8. the optimum feedback ﬁlter coefﬁcients are given by bi D M1 1 X `D0 copt.92) ci xk i M2 X jD1 ! h jCD i ak j D (8. p q/ M2 X jD1 h jCD Ł q h jCD p p.80) yields yk D M1 1 X i D0 i CD i D 1.92).93) To obtain the Wiener–Hopf solution. : : : .640 Chapter 8.
while the expression of the elements of the matrix R in (8. . among the four polyphase components (see Section 1. Observation 8. a larger sampling period of the signal at the MF input is considered.99) hj Ł qh j p C rw . in any case it is (semi)deﬁnite positive. In particular. while for a DFE it is only Hermitian.16)).98).g. p Q q/ (8. A particularly useful method to determine the impulse response fh n g in wireless systems (see Chapter 18) resorts to a short training sequence to achieve fast synchronization. by estimation. This method is similar to the timing estimator (14. the autocorrelation of wk is proportional to the autocorrelation of the Q receive ﬁlter impulse response: consequently.A.3.2 The equations to determine copt for a LE are identical to (8. To reduce implementation complexity. for example T =8. We recall that a ﬁne estimate of t0.4 For a LE. the Q autocorrelation rw . for example. The overall discretetime system impulse response obtained by sampling the output signal of the antialiasing ﬁlter gAA (see Figure 8. is in principle determined by the accuracy with which we desire to estimate the timing phase t0.B. Observation 8. which is determined by methods described in Chapter 14.2. the vector p in (8. however.n/ is easily determined.94)–(8.1 In the particular case in which all the postcursors are cancelled by the feedback ﬁlter.M F at the MF output.5 The deﬁnition of fh n g depends on the value of t0 . with M2 D 0.95) is simpliﬁed as 2 [R] p. e.4) is assumed to be known.5. This is equivalent to selecting among the four possible components with sampling period T =2 of the sampled output signal of the ﬁlter gAA the component with largest statistical power. thus realizing the MF criterion (see also (8. Decision feedback equalizer (DFE) 641 Observation 8. the coefﬁcients of the channel impulse Q Q response fh n g and the statistical power of wk used in R and p can be determined by the methods given in Appendix 3.9 on page 119) of the impulse response. 
Efﬁcient methods to determine the inverse of the matrix are described in Section 2. the matrix R is Hermitian and Toeplitz. the component with largest energy.M F is needed if the sampling period of the signal at the MF output is equal to T .3 For white noise wC .100) Observation 8. T =2. Observation 8.94) is not modiﬁed.q D ¦a D X jD N1 1 (8.8. We then implement the MF g M by choosing.95) is modiﬁed by the terms including the detected symbols. Finally. The sampling period. that is for M2 C D D N 2 C M1 (8.117). if the statistical power of wk is known.
6 In systems where the training sequence is placed at the end of a block of data (see the GSM frame in Appendix 17.A).642 Chapter 8. Channel equalization and symbol detection The timing phase t0. It is usually chosen either as the time at which the ﬁrst useful sample of the overall impulse response occurs. and statistically independent of the noise signal.269). 1. thus exploiting the knowledge of the training sequence. 1.102) . Now if f n D h Ł cn g and fbn g are the optimum impulse responses if the signal is processed in the forward mode. Note that.A A for the signal at the input of g M is determined during the estimation of the channel impulse response. where the FF ﬁlter c is now a fractionally spaced ﬁlter.M F D t0. facilitates the optimum choice of t0 (see Chapter 14). now f n g is maximum phase and anticausal with respect to the new instant of optimum sampling.A A C t M F . it is easy to BŁ BŁ verify that f n g and fbn g. it is convenient to process the observed signal fx k g starting from the end of the block. 0. In fact. : : : . besides reducing the complexity of c (see task 2 on page 628). i. K 1. The overall receiver structure is shown in Figure 8. We recall that the MF.12.14. as illustrated in Figure 8. The criterion (7. let’s say from k D K 1 to 0. and an FB ﬁlter with M2 coefﬁcients. apart from a constant delay.d. Otherwise the function of the MF may be performed by a discretetime ﬁlter placed in front of the ﬁlter c. the symbols are assumed i. In the particular case fh n g is a BŁ correlation sequence.7. for k D 0. where B is the backward operator deﬁned on page 27. according to which t0 is chosen in correspondence of the correlation peak. Also the FB ﬁlter will be anticausal. As usual. then the timing phase at the output of g M is given by t0. If fxn g is the input signal of the FF ﬁlter.e. 
or as the time at which the peak of the impulse response occurs, shifted by a number of modulation intervals corresponding to a given number of precursors.

If {x_n} is the input signal of the FF filter, we have

    x_n = Σ_{k=−∞}^{+∞} h_{n−2k} a_k + w̃_n        (8.101)

The signal {y_k} at the DFE output is given by

    y_k = Σ_{i=0}^{M1−1} c_i x_{2k−i} + Σ_{j=1}^{M2} b_j â_{k−D−j}        (8.102)
From (8.101), let

    ψ_p = (h * c)_p = Σ_{ℓ=0}^{M1−1} c_ℓ h_{p−ℓ}        (8.103)

Then the optimum choice of the coefficients {b_j} is given by

    b_j = −ψ_{2(j+D)}    j = 1, ..., M2        (8.104)

With this choice, (8.102) becomes

    y_k = Σ_{i=0}^{M1−1} c_i x_{2k−i} − Σ_{j=1}^{M2} ( Σ_{ℓ=0}^{M1−1} c_ℓ h_{2(j+D)−ℓ} ) â_{k−D−j}        (8.105)

where, from (8.57),

    r_w̃(m T/2) = N0 r_gAA(m T/2)        (8.106)

Using the following relations (see also Example 1.10 on page 72),

    E[a*_{k−D} x_{2k−p}] = σ_a² h*_{2D−p}        (8.107)
    E[x_{2k−q} x*_{2k−p}] = σ_a² Σ_{n=−∞}^{+∞} h_{2n−q} h*_{2n−p} + r_w̃((p−q) T/2)        (8.108)

the components of the vector p and of the matrix R of the Wiener problem associated with (8.105) are given by

    [p]_p = σ_a² h*_{2D−p}    p = 0, 1, ..., M1 − 1        (8.109)
    [R]_{p,q} = σ_a² ( Σ_{n=−∞}^{+∞} h_{2n−q} h*_{2n−p} − Σ_{j=1}^{M2} h_{2(j+D)−q} h*_{2(j+D)−p} ) + r_w̃((p−q) T/2)    p, q = 0, 1, ..., M1 − 1        (8.110)

Decision feedback equalizer (DFE) 643

Figure 8.14. FSDFE structure.
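As a concrete illustration of the direct method, the sketch below builds the Wiener system R c_opt = p and derives the FB taps from the resulting overall response. It is only a minimal sketch under simplifying assumptions of ours (symbol-spaced taps rather than T/2-spaced, white noise of power N0 at the FF input, real illustrative channels); the function name `dfe_design` and the numerical values are not from the book.

```python
import numpy as np

def dfe_design(h, M1, M2, D, sigma_a2=1.0, N0=1e-3):
    # Build the Wiener system R c = p for the FF filter, then derive the
    # FB taps from the overall response psi = h * c (cf. (8.103)-(8.104)).
    # Symbol-spaced analogue with white noise of power N0 at the FF input.
    h = np.asarray(h, dtype=complex)
    def hv(i):                        # h_i, zero outside the support of h
        return h[i] if 0 <= i < len(h) else 0.0
    p = np.array([sigma_a2 * np.conj(hv(D - q)) for q in range(M1)])
    R = np.zeros((M1, M1), dtype=complex)
    for r in range(M1):
        for q in range(M1):
            s = sum(hv(n - q) * np.conj(hv(n - r)) for n in range(len(h) + M1))
            s -= sum(hv(j + D - q) * np.conj(hv(j + D - r)) for j in range(1, M2 + 1))
            R[r, q] = sigma_a2 * s + (N0 if r == q else 0.0)
    c = np.linalg.solve(R, p)         # FF coefficients
    # FB taps cancel the postcursors of the overall response
    b = np.array([-sum(c[l] * hv(j + D - l) for l in range(M1))
                  for j in range(1, M2 + 1)])
    return c, b

c, b = dfe_design([1.0, 0.5], M1=4, M2=2, D=0)
c0, b0 = dfe_design([1.0], M1=3, M2=2, D=0)   # ideal channel: FF ~ delta, FB ~ 0
```

For an ideal channel the FF filter reduces (up to the regularization by N0) to a unit tap and the FB taps vanish, as expected.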
644 Chapter 8. Channel equalization and symbol detection

The feedforward filter is obtained by solving the system of equations

    R c_opt = p        (8.111)

and the feedback filter is determined from (8.104). The minimum value of the cost function is given by

    J_min = σ_a² ( 1 − Σ_{ℓ=0}^{M1−1} c_opt,ℓ h_{2D−ℓ} ) = σ_a² − Σ_{ℓ=0}^{M1−1} c_opt,ℓ [p]*_ℓ        (8.112)

A problem encountered with this method is the inversion of the matrix R in (8.111), because it may be ill-conditioned; a solution consists in adding a positive constant to the elements on the diagonal of R, so that R becomes invertible. Obviously the value of this constant must be rather small, so that the performance of the optimum solution does not change significantly.

Observation 8.7
Observations similar to Observations 8.1–8.3 hold, with appropriate changes, for a FSDFE. In particular, for an FSE, or FSLE, the equations to determine c_opt are given by (8.109)–(8.111) with M2 = 0. In this case the timing phase t0 after the filter g_AA can be determined with accuracy T/2.

Observation 8.8
Two matrix formulations of the direct method to determine the coefficients of a DFE and an FSDFE are given in Appendix 8.B. Similarly to the procedure outlined on page 187, a formulation uses the correlation of the equalizer input signal {x_n}, the correlation of the sequence {a_k}, and the cross-correlation of the two signals. Using suitable estimates of the various correlations (see the correlation method and the covariance method considered in Section 2.3), for example by the correlation method (7.269), this approach avoids the need for the estimate of the overall channel impulse response; however, it requires a greater computational complexity with respect to the method described in this section.

Signal-to-noise ratio γ

Using FF and FB filters with an infinite number of coefficients, it is possible to achieve the minimum value of J_min, given by [3]

    J_min = σ_a² exp( T ∫_{−1/(2T)}^{1/(2T)} ln[ N0 / ( N0 + σ_a² Φ(e^{j2πfT}) ) ] df )        (8.113)

where Φ denotes the spectrum of the overall discrete-time system at the equalizer input.
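The diagonal-loading remedy for an ill-conditioned R mentioned above can be sketched in a few lines; the rank-deficient matrix and the size of the loading constant below are illustrative choices of ours, not values from the book.

```python
import numpy as np

# Rank-deficient estimate of the correlation matrix: R c = p has no unique
# solution until a small positive constant is added to the diagonal.
v = np.array([1.0, 1.0, 1.0])
R = np.outer(v, v)                       # rank 1, hence singular
p = np.array([1.0, 0.5, 0.25])

delta = 1e-3 * np.trace(R) / R.shape[0]  # small w.r.t. the diagonal entries
R_loaded = R + delta * np.eye(3)         # now invertible
c = np.linalg.solve(R_loaded, p)
```

The loading constant is kept small relative to the average diagonal power so that the optimum solution is only slightly perturbed.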
Note that the matrix R is Hermitian but, in general, it is no longer Toeplitz. Salz derived the expression (8.113) of J_min for this case; an analogous relation holds for a FSDFE.
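The infinite-length bounds can be evaluated numerically: the sketch below computes J_min of the infinite-length MMSE linear equalizer as the frequency average of N0/(N0 + σ_a²Φ), and J_min of the DFE by the exponential (Salz-type) formula (8.113). The two-tap channel spectrum is an arbitrary example of ours.

```python
import numpy as np

T, N0, sigma_a2 = 1.0, 0.1, 1.0
f = np.linspace(-0.5 / T, 0.5 / T, 4000, endpoint=False)
H = 1.0 + 0.5 * np.exp(-2j * np.pi * f * T)   # arbitrary two-tap channel
Phi = np.abs(H) ** 2                          # Phi(e^{j 2 pi f T}), non-constant

ratio = N0 / (N0 + sigma_a2 * Phi)
# T * integral over one period of width 1/T -> average over the grid
J_le = sigma_a2 * np.mean(ratio)              # infinite-length MMSE-LE
J_dfe = sigma_a2 * np.exp(np.mean(np.log(ratio)))   # Salz-type formula (8.113)
```

Since Φ is not constant, the geometric-type mean is strictly smaller than the arithmetic mean, so J_dfe < J_le, in agreement with the comparison discussed next.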
Applying Jensen's inequality

    exp( ∫ ln f(a) da ) ≤ ∫ f(a) da        (8.114)

to (8.113), we can compare the performance of a linear equalizer, given by (8.40), with that of a DFE, given by (8.113): the result is that, assuming Φ(e^{j2πfT}) ≠ constant and the absence of detection errors in the DFE,

    J_min^DFE ≤ J_min^LE        (8.115)

with J_min^DFE given by (8.113); in other words, for infinite-order filters the value of J_min of a DFE is always smaller than that of a LE.

Remarks
1. If FF and FB filters with a finite number of coefficients are employed, the performance is a function of the channel and of the choice of M1, M2 and D. In analogy with (8.44), for a given finite number of coefficients M1 + M2, the DFE has better asymptotic (M1, M2 → ∞) performance than the linear equalizer.
2. The DFE is definitely superior to the linear equalizer for channels that exhibit large variations of the attenuation in the passband, as in such cases the linear equalizer tends to enhance the noise.
3. Detection errors tend to spread, because they produce incorrect cancellations; error propagation thus leads to an increase of the error probability. For channels with impulse response {h_i} such that detection errors may spread catastrophically, instead of the DFE structure it is better to implement the linear FF equalizer at the receiver, and the FB filter at the transmitter as a precoder, using the precoding method discussed in Appendix 7.A and Chapter 13.
4. In the absence of errors of the data detector, simulations indicate that for typical channels and symbol error probability smaller than 5·10⁻²,
error propagation is not catastrophic.

8.6 Convergence behavior of adaptive equalizers

We consider a digital transmission model and observe the performance of adaptive LE and DFE by resorting to a specific channel realization. In particular, we analyze two cases in which the discrete-time overall impulse response of the system, {h_n}, is either minimum phase, h_min, or nonminimum phase, h_nom. The additive channel noise w_k is AWGN with statistical power σ_w².
646 Chapter 8. Channel equalization and symbol detection

Adaptive LE
The signal-to-noise ratio Γ = σ_a² r_h(0)/σ_w² at the equalizer input is equal to 20 dB. The sequence of symbols a_k ∈ {−1, 1} is a PN training sequence of length L = 63. We consider a LE with N = 15 coefficients, employing the standard LMS algorithm with μ = 0.062 or the RLS algorithm (see Section 3.2). The impulse response {c_opt,n} of the optimum LE and the overall system impulse response {ψ_n = (h * c_opt)_n} are depicted in Figures 8.15a, b and 8.16a, b for the two channels. As the overall impulse response h_nom is not centered at the origin, the delay D is the sum of the delays introduced by {h_n} and {c_n}; in terms of mean-square error at convergence, the best results are obtained for a delay D = 0 in the case of h_min and D = 8 in the case of h_nom.

Figures 8.15c, d and 8.16c, d show curves of mean-square error convergence for the standard LMS and RLS algorithms, estimated over 500 realizations; in the plots, J_min represents the minimum value of J achieved with optimum coefficients computed by the direct method. The curves of convergence of J(k) indicate that the RLS algorithm succeeds in achieving convergence by the end of the training sequence, whereas the LMS algorithm still presents a considerable offset from the optimum conditions, even though a large adaptation gain μ is chosen. We note that J_min is 4 dB higher than the value given by σ_w², because of the noise and residual ISI at the decision point.

Figure 8.15. System impulse responses and curves of mean-square error convergence, estimated over 500 realizations, for a channel with minimum phase impulse response and a LE employing the LMS algorithm with μ = 0.062 or the RLS algorithm.
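A minimal simulation of an adaptive LE trained by the LMS algorithm is sketched below; the channel, step size, and training length are illustrative values of ours and do not reproduce the exact setting of Figures 8.15–8.16.

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.4])                  # illustrative minimum-phase channel
N, mu, L, D = 15, 0.05, 3000, 0           # taps, step size, symbols, delay
a = rng.choice([-1.0, 1.0], size=L)       # binary (PN-like) training symbols
x = np.convolve(a, h)[:L] + 0.05 * rng.standard_normal(L)

c = np.zeros(N)
J = []
for k in range(N - 1, L):
    xk = x[k - N + 1:k + 1][::-1]         # [x_k, x_{k-1}, ..., x_{k-N+1}]
    e = a[k - D] - c @ xk                 # error against the known symbol
    c += mu * e * xk                      # LMS coefficient update
    J.append(e * e)
J = np.array(J)
```

The squared error drops from the initial value toward a floor set by the noise, the residual ISI, and the LMS excess mean-square error proportional to μ.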
8.6. Convergence behavior of adaptive equalizers 647

Figure 8.16. System impulse responses and curves of mean-square error convergence, estimated over 500 realizations, for a channel with nonminimum phase impulse response and a LE employing the LMS algorithm with μ = 0.343 or the RLS algorithm.

Figure 8.17. System impulse responses and curves of mean-square error convergence, estimated over 500 realizations, for a channel with minimum phase impulse response and a DFE employing the LMS algorithm with μ = 0.063 or the RLS algorithm.
648 Chapter 8. Channel equalization and symbol detection

Adaptive DFE
We consider now the performance of a DFE as illustrated in Figure 8.13, with parameters M1 = 10, M2 = 5, and D = 9, for both h_min and h_nom. The impulse response {c_opt,n} of the optimum FF filter and the overall system impulse response {ψ_n = (h * c_opt)_n} are depicted in Figures 8.17a, b and 8.18a, b for the two channels. Figures 8.17c, d and 8.18c, d show curves of mean-square error convergence for the standard LMS and RLS algorithms, for minimum and nonminimum phase channels, respectively. Also in this case the chosen value of D gives the best results in terms of the value of J at convergence.

Figure 8.18. System impulse responses and curves of mean-square error convergence, estimated over 500 realizations, for a channel with nonminimum phase impulse response and a DFE employing the LMS algorithm with μ = 0.143 or the RLS algorithm.

8.7 LE-ZF with a finite number of coefficients

Ignoring the noise, the signal at the output of a LE with N coefficients (see (8.91)) is given by

    y_k = Σ_{i=0}^{N−1} c_i x_{k−i} = Σ_{p=−N1}^{N2+N−1} ψ_p a_{k−p}        (8.116)
where, to simplify the notation, we assume t0 = 0 and D = 0, and

    ψ_p = Σ_{ℓ=0}^{N−1} c_ℓ h_{p−ℓ} = c_0 h_p + c_1 h_{p−1} + ··· + c_{N−1} h_{p−(N−1)}    p = −N1, ..., N2 + N − 1        (8.117)

For a LE-ZF it must be

    ψ_p = δ_{p−D}    p = −N1, ..., 0, ..., N2 + N − 1        (8.118)

where D is a suitable delay. If the overall impulse response {h_n}, n = −N1, ..., N2, as defined in (8.90), is known, a method to determine the coefficients of the LE-ZF consists in considering the system (8.117), with Nt = N1 + N2 + N equations and N unknowns, which can be solved by the method of the pseudoinverse (see (2.185)). An approximate solution is obtained by forcing the condition (8.118) only for N values of p in (8.117), centered around D: then the matrix of the system (8.117) is square and, if the determinant is different from zero, it can be inverted. Alternatively, the solution can be found in the frequency domain by taking the Nt-point DFT of the various signals (see (1.25)), and windowing the result in the time domain so that the filter coefficients are given by the N consecutive coefficients that maximize the energy of the filter impulse response. Note that all these methods require an accurate estimate of the overall impulse response; otherwise the equalizer coefficients may deviate considerably from the desired values. An adaptive ZF equalization method is discussed in Appendix 8.C. An alternative robust method, which does not require the knowledge of {h_n} and can be extended to FSE-ZF systems, will be presented later in this chapter.

8.8 DFE: alternative configurations

We determine the expressions of the FF and FB filters of a DFE in the case of an IIR filter structure.

DFE-ZF
We consider a receiver with a matched filter g_M followed by the DFE, as illustrated in Figure 8.19. The matched filter output x_k is input to a linear equalizer zero forcing (LE-ZF) with transfer function 1/Φ(z), which removes the ISI: therefore the LE-ZF output is given by

    x_E,k = a_k + w_E,k        (8.119)

where Φ(z) is the z-transform of the QAM system impulse response at the MF output.
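Before studying the IIR configurations, we note that the pseudoinverse design of the finite-length LE-ZF of Section 8.7 reduces to a linear least-squares problem. A minimal sketch, with an illustrative minimum-phase channel of ours:

```python
import numpy as np

h = np.array([1.0, 0.5, 0.2])        # illustrative overall impulse response
N, D = 11, 5                         # equalizer taps and target delay
Nt = len(h) + N - 1                  # number of equations, cf. (8.117)

# Convolution matrix: (H @ c)[p] = sum_l c_l h_{p-l} = psi_p
H = np.zeros((Nt, N))
for p in range(Nt):
    for l in range(N):
        if 0 <= p - l < len(h):
            H[p, l] = h[p - l]

d = np.zeros(Nt)
d[D] = 1.0                           # force psi_p = delta_{p-D}, cf. (8.118)
c = np.linalg.lstsq(H, d, rcond=None)[0]   # pseudoinverse solution
psi = H @ c                          # resulting overall response
```

For this well-behaved channel the residual ISI of the least-squares solution is small; for channels with near spectral zeros it can remain large, which is the motivation for the DFE structures discussed next.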
From (8.26), using the property (8.29), we obtain that the spectrum P_wE(z) of w_E,k is given by

    P_wE(z) = N0 Φ(z) · 1/( Φ(z) Φ*(1/z*) ) = N0 / Φ(z)        (8.120)

As Φ(z) is the z-transform of a correlation sequence, it can be factorized as (see page 53)

    Φ(z) = F(z) F*(1/z*)        (8.121)

where

    F(z) = Σ_{n=0}^{+∞} f_n z^{−n}        (8.122)

is a minimum-phase function, that is with poles and zeros inside the unit circle, associated with a causal sequence {f_n}; on the other hand, F*(1/z*), associated with the anticausal sequence {f*_{−n}}, is a function with zeros and poles outside the unit circle.

Observation 8.9
A useful method to determine the filter F(z) is obtained by considering a minimum-phase prediction error filter A(z) = 1 + a'_1 z^{−1} + ··· + a'_N z^{−N} (see page 147), designed using the ACS {r_qC(nT)}: in the design equations, the sequence {r_x(n)} is now substituted by {r_qC(nT)}. The final result is F(z) = f_0/A(z). The equations to determine the coefficients of A(z) can be solved with a computational complexity that is proportional to the square of the number of filter coefficients.

Figure 8.19. DFE zero forcing.
We choose as transfer function of the filter w in Figure 8.19 the function

    W(z) = F(z)/f_0        (8.123)

so that the relation between x_k and z_k is given by

    (1/Φ(z)) W(z) = 1 / ( f_0 F*(1/z*) )

With this filter the noise in z_k is white, with statistical power N0/|f_0|². The relation between a_k and the desired signal in z_k is governed by Ψ(z) = F(z)/f_0: the overall discrete-time system is causal and minimum phase, hence there are no precursors, that is, the energy of the impulse response is mostly concentrated at the beginning of the pulse. If â_k = a_k then, for

    B(z) = 1 − Ψ(z) = 1 − F(z)/f_0        (8.124)

the FB filter removes the ISI present in z_k and leaves the white noise unchanged. As y_k is not affected by ISI and the noise is white, for the signal-to-noise ratio at the decision point we obtain

    γ_DFE-ZF = 2|f_0|²/N0        (8.125)

Summarizing, the filter composed of the cascade of the LE-ZF and w is a whitening filter (WF); the block including the matched filter, sampler, and whitening filter is called whitened matched filter (WMF). The overall receiver structure is illustrated in Figure 8.20; this structure is called DFE-ZF. Note that the impulse response at the WF output has no precursors. In principle, the WF of Figure 8.20 is non-realizable, because it is anticausal. In practice we can implement it in two ways:

Figure 8.20. DFE-ZF as whitened matched filter followed by a canceller of ISI.
a) by introducing an appropriate delay in the impulse response of an FIR WF, and processing the output samples in the forward mode, for k = 0, 1, ..., K − 1;
b) by processing the output samples of the IIR WF in the backward mode, for k = K − 1, K − 2, ..., 0, starting from the end of the block of samples.

We observe that the choice F(z) = f_0/A(z), discussed in Observation 8.9 on page 650, leads to an FIR WF with transfer function A*(1/z*)/|f_0|².

Observation 8.10
With reference to the scheme of Figure 8.20, using a LE-ZF instead of a DFE structure means that a filter with transfer function f_0/F(z) is placed after the WF to produce the signal x_E,k given by (8.119). For a data detector based on x_E,k, the signal-to-noise ratio is

    γ_LE-ZF = 2 / r_wE(0)        (8.127)

where r_wE(0) is determined as the coefficient of z^0 in N0/Φ(z).

Example 8.1 (WF for a channel with exponential impulse response)
Let q_C(t) be the overall system impulse response at the MF input, given by

    q_C(t) = √(2β E_qC) e^{−βt} 1(t)        (8.128)

The autocorrelation of q_C(t), sampled at the instants nT, is given by

    r_qC(nT) = E_qC a^{|n|},    a = e^{−βT} < 1        (8.129)

Then

    Φ(z) = Z[r_qC(nT)] = E_qC Σ_{n=−∞}^{+∞} a^{|n|} z^{−n} = E_qC (1 − a²) / ( (1 − a z^{−1})(1 − a z) )        (8.130)

We note that the frequency response of (8.130) is

    Φ(e^{j2πfT}) = E_qC (1 − a²) / ( 1 + a² − 2a cos(2πfT) )        (8.131)

and presents a minimum for f = 1/(2T).
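The statements of Example 8.1 can be checked numerically: the sketch below builds the sampled autocorrelation E_qC a^{|n|}, verifies that its spectrum matches |F(e^{j2πfT})|² for a first-order minimum-phase factor consistent with the factorization (8.121), and that the minimum of Φ occurs at half the symbol rate. The values E_qC = 1 and a = 0.6 are illustrative choices of ours.

```python
import numpy as np

E, a = 1.0, 0.6
n = np.arange(-60, 61)
r = E * a ** np.abs(n)                    # r_qC(nT) = E_qC a^{|n|}

w = np.linspace(-np.pi, np.pi, 257)       # w = 2 pi f T over one period
Phi = np.array([(r * np.exp(-1j * wi * n)).sum().real for wi in w])
F = np.sqrt(E * (1 - a**2)) / (1 - a * np.exp(-1j * w))  # minimum-phase factor
f0 = np.sqrt(E * (1 - a**2))              # coefficient of z^0 in F(z)
```

The truncation of the autocorrelation at |n| = 60 introduces an error of order a^61, which is negligible here.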
With reference to the factorization (8.121), it is easy to identify the poles and zeros of Φ(z) inside the unit circle, hence

    F(z) = √(E_qC (1 − a²)) · 1/(1 − a z^{−1}) = √(E_qC (1 − a²)) Σ_{n=0}^{+∞} a^n z^{−n}        (8.132)

In particular, the coefficient of z^0 is given by

    f_0 = √(E_qC (1 − a²))        (8.133)

The WF of Figure 8.20 is expressed as

    1 / ( f_0 F*(1/z*) ) = (1 − a z) / ( E_qC (1 − a²) )        (8.134)

In this case the WF, apart from a delay of one sample (D = 1), can be implemented by a simple FIR filter with two coefficients, whose values are equal to 1/(E_qC(1 − a²)) and −a/(E_qC(1 − a²)). The FB filter is a first-order IIR filter with transfer function

    B(z) = 1 − F(z)/f_0 = 1 − 1/(1 − a z^{−1}) = − Σ_{n=1}^{+∞} a^n z^{−n}        (8.135)

Example 8.2 (WF for a two-ray channel)
In this case we directly specify the autocorrelation sequence at the matched filter output:

    Φ(z) = Q_CC(z) Q*_CC(1/z*)        (8.136)

where

    Q_CC(z) = √(E_qC) ( q_0 + q_1 z^{−1} )        (8.137)
with q_0 and q_1 such that

    |q_0|² + |q_1|² = 1        (8.138)
    |q_0| > |q_1|        (8.139)

In this way E_qC is the energy of {q_CC(nT)}. Equation (8.137) represents the discrete-time model of a wireless system with a two-ray channel. The impulse response is given by

    q_CC(nT) = √(E_qC) ( q_0 δ_n + q_1 δ_{n−1} )        (8.140)

The frequency response is given by

    Q_CC(f) = √(E_qC) ( q_0 + q_1 e^{−j2πfT} )        (8.141)

We note that if q_0 = q_1, the frequency response has a zero for f = 1/(2T). From (8.136) and (8.137), recalling assumption (8.139), F(z) is minimum phase and we get

    F(z) = √(E_qC) ( q_0 + q_1 z^{−1} )        (8.142)

hence

    f_0 = √(E_qC) q_0        (8.143)

The WF is given by

    1 / ( f_0 F*(1/z*) ) = (1 / (E_qC |q_0|²)) · 1/(1 − b z)        (8.144)

where

    b = −(q_1/q_0)*        (8.145)

We note that the WF has a pole for z = b^{−1}, which, recalling (8.139), lies outside the unit circle. In this case, in order to have a stable filter, it is convenient to associate with the z-transform 1/(1 − bz) an anticausal sequence:

    1/(1 − b z) = Σ_{n=0}^{+∞} (b z)^n        (8.146)
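Truncating the anticausal expansion (8.146) after N + 1 terms gives an FIR approximation of the WF whose error decays as |b|^{N+1}. A numerical check, with illustrative values of q_0 and q_1 of ours and assuming the pole parameter b = −(q_1/q_0)*:

```python
import numpy as np

E, q0, q1 = 1.0, 0.9, np.sqrt(1 - 0.81)   # |q0|^2 + |q1|^2 = 1, |q0| > |q1|
b = -np.conj(q1 / q0)                     # assumed pole parameter, |b| < 1

w = np.linspace(-np.pi, np.pi, 512)
z = np.exp(1j * w)
wf_exact = 1.0 / (E * abs(q0)**2 * (1 - b * z))   # 1/(f0 F*(1/z*)) on |z| = 1

N = 20                                    # keep the first N + 1 terms (cf. (8.147))
wf_fir = sum((b * z)**k for k in range(N + 1)) / (E * abs(q0)**2)
err = np.max(np.abs(wf_exact - wf_fir))
```

With |b| ≈ 0.48 and N = 20 the residual error is already far below any practical noise floor.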
149) W . plus the desired symbol ak .bz/ n E qC jq0 j2 nD0 1 zŁ (8.21.N 1/ terms.z//.21 consists in using as FF ﬁlter. where the FB ﬁlter acts as a noise predictor.1 W .z//A. a minimumMSE linear equalizer.z/ D f0 q1 z q0 1 (8.21. for ak D ak .19 is redrawn as in Figure 8.z/ D X E . as jbj < 1.z/ the scheme of Figure 8. DFE: alternative conﬁgurations 655 On the other hand.z/ X E .148) DFEZF as a noise predictor Let A. apart from a delay D D N . we can approximate the series by considering only the ﬁrst . obtaining 1 Â f0 F Ł Ã' N X 1 . where the ﬁlter c is given Figure 8.147) D 1 z N [b N C Ð Ð Ð C bz E qC jq0 j2 . DFE as ISI and noise predictor A variation of the scheme of Figure 8. In fact. Predictive DFE: the FF ﬁlter is a linear equalizer zero forcing. From the identity Y .22. .k .A. the FB ﬁlter input is colored noise.8.z// (8.1 D X E .z/ C . The FB ﬁlter in this case is a simple FIR ﬁlter with one coefﬁcient 1 1 F. we obtain yk that is composed of white noise.N 1/ Cz N ] Consequently the WF. with minimum variance. can be implemented by an FIR ﬁlter with N C 1 coefﬁcients.z/ D Z[a k ]. We refer to the scheme of Figure 8. By removing the O correlated noise from x E.8.z/W .z/ C .
by (8.38). The z-transform of the overall impulse response at the FF filter output is given by

    Φ(z) C(z) = σ_a² Φ(z) / ( N0 + σ_a² Φ(z) )        (8.150)

As t0 = 0, the ISI in z_k is given by

    Φ(z) C(z) − 1 = − N0 / ( N0 + σ_a² Φ(z) )        (8.151)

Hence, using (8.26) and the fact that the symbols are uncorrelated, with P_a(z) = σ_a², the spectral density of the ISI has the following expression:

    P_ISI(z) = σ_a² N0² / ( N0 + σ_a² Φ(z) )²        (8.152)

The spectrum of the noise in z_k is given by

    P_noise(z) = N0 Φ(z) C(z) C*(1/z*) = σ_a⁴ N0 Φ(z) / ( N0 + σ_a² Φ(z) )²        (8.153)

Therefore the spectrum of the disturbance v_k in z_k, composed of ISI and noise, is given by

    P_v(z) = P_ISI(z) + P_noise(z) = σ_a² N0 / ( N0 + σ_a² Φ(z) )        (8.154)

We note that the FF filter could be an FSE, and the result (8.154) would not change.

Figure 8.22. Predictive DFE with the FF filter as a minimum-MSE linear equalizer.
To minimize the power of the disturbance in y_k, the FB filter, with input z_k − â_k = v_k (assuming â_k = a_k), needs to remove the predictable components of z_k. For a predictor of infinite length, we set

    B(z) = 1 − A(z)        (8.155)

where

    A(z) = Σ_{n=0}^{+∞} a'_n z^{−n},    a'_0 = 1        (8.156)

is the forward prediction error filter; an alternative form is

    B(z) = − Σ_{n=1}^{+∞} a'_n z^{−n}        (8.157)

To determine B(z) we use the spectral factorization (1.526):

    P_v(z) = σ_y² / ( A(z) A*(1/z*) )        (8.158)

For a predictor of infinite length, it results that the prediction error signal y_k is a white noise process with statistical power equal to σ_y². An adaptive version of the basic scheme of Figure 8.22 suggests that the two filters, c and b, are separately adapted through the error signals {e_F,k} and {e_B,k}, respectively. This configuration, although suboptimum with respect to the DFE, is used in conjunction with trellis-coded modulation (see Chapter 12) [4].

8.9 Benchmark performance for two equalizers

We compare limits on the performance of the two equalizers, LE-ZF and DFE-ZF, in terms of the signal-to-noise ratio at the decision point. From (8.120), the noise sequence {w_E,k} can be modeled as the output of a filter having transfer function
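The role of the FB filter as a noise predictor can be illustrated on a synthetic colored disturbance: the sketch below fits a short one-step forward predictor to AR(1) noise by solving the normal equations, and verifies that the prediction error is of reduced power, close to the innovation variance. The AR model and the order P = 3 are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
a1 = 0.7
e = rng.standard_normal(20000)       # unit-variance innovations
v = np.empty_like(e)                 # colored disturbance: AR(1) noise
v[0] = e[0]
for k in range(1, len(e)):
    v[k] = a1 * v[k - 1] + e[k]

P = 3                                # predictor order (illustrative)
r = np.array([np.mean(v[:len(v) - m] * v[m:]) for m in range(P + 1)])
R = np.array([[r[abs(i - j)] for j in range(P)] for i in range(P)])
w_pred = np.linalg.solve(R, r[1:])   # normal equations for the predictor

# Prediction error y_k = v_k - sum_i w_i v_{k-1-i}: lower power, nearly white
y = v[P:] - sum(w_pred[i] * v[P - 1 - i:len(v) - 1 - i] for i in range(P))
```

For an AR(1) disturbance the fitted predictor essentially recovers the single coefficient a1, and the error power drops from var(v) = 1/(1 − a1²) toward 1.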
    C_F(z) = 1/F(z) = Σ_{n=0}^{+∞} c_F,n z^{−n}        (8.160)

and input given by white noise with PSD N0. Because F(z) is causal and minimum phase, C_F(z) is also causal, with impulse response {c_F,n}, n ≥ 0. Then the statistical power of w_E,k can be expressed as

    r_wE(0) = N0 Σ_{n=0}^{+∞} |c_F,n|²        (8.161)

with

    c_F,0 = 1/f_0        (8.162)

Using the inequality

    Σ_{n=0}^{+∞} |c_F,n|² ≥ |c_F,0|² = 1/|f_0|²        (8.163)

the comparison between (8.125) and (8.127) yields

    γ_LE-ZF ≤ γ_DFE-ZF        (8.164)

Equalizer performance for two channel models
We now analyze the values of γ_LE-ZF and γ_DFE-ZF for the two simple systems introduced in Examples 8.1 and 8.2.

LE-ZF. Channel with exponential impulse response. From (8.130), r_wE(0), the coefficient of z^0 in N0/Φ(z), is

    r_wE(0) = (N0/E_qC) (1 + a²)/(1 − a²)        (8.165)

Using the expression of the signal-to-noise ratio γ_MF = 2 E_qC/N0, obtained for a MF receiver in the absence of ISI, we get

    γ_LE-ZF = γ_MF (1 − a²)/(1 + a²)        (8.167)

Therefore the loss due to the ISI, given by the factor (1 + a²)/(1 − a²), can be very large if a is close to 1: in this case, in fact, the frequency response (8.131) assumes a minimum value close to zero.
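The loss just derived can be verified numerically: the frequency average of 1/Φ (the coefficient of z^0 in 1/Φ(z)) equals (1 + a²)/(E_qC(1 − a²)), and its ratio to the DFE-ZF noise power N0/|f_0|² is exactly (1 + a²). The values of E_qC, a and N0 below are illustrative choices of ours.

```python
import numpy as np

E, a, N0 = 1.0, 0.9, 0.1
w = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
Phi = E * (1 - a**2) / np.abs(1 - a * np.exp(-1j * w))**2  # cf. (8.130)

r_wE = N0 * np.mean(1.0 / Phi)                 # coefficient of z^0 in N0/Phi(z)
closed = N0 * (1 + a**2) / (E * (1 - a**2))    # closed form (8.165)
r_dfe = N0 / (E * (1 - a**2))                  # DFE-ZF noise power N0/|f0|^2
```

The ratio r_wE/r_dfe gives the SNR advantage of the DFE-ZF over the LE-ZF for this channel.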
Two-ray channel. From (8.142) we have

    1/Φ(z) = 1 / ( E_qC (q_0 + q_1 z^{−1})(q_0* + q_1* z) )        (8.168)

By a partial fraction expansion, observing that the only pole inside the unit circle is z = −q_1/q_0, we find

    r_wE(0) = N0 / ( E_qC (|q_0|² − |q_1|²) )        (8.169)

Then

    γ_LE-ZF = γ_MF (|q_0|² − |q_1|²)        (8.170)

Also in this case we find that the LE is unable to equalize channels with a spectral zero.

DFE-ZF. Channel with exponential impulse response. Substituting the expression of f_0 given by (8.133) in (8.125), we get

    γ_DFE-ZF = 2 E_qC (1 − a²)/N0 = γ_MF (1 − a²)        (8.171)

The advantage with respect to the LE-ZF is given by the factor (1 + a²).

Two-ray channel. Substitution of (8.143) in (8.125) yields

    γ_DFE-ZF = 2 E_qC |q_0|²/N0 = γ_MF |q_0|²        (8.172)

In this case the advantage with respect to the LE-ZF is given by the factor |q_0|²/(|q_0|² − |q_1|²), which may be substantial if |q_1| ≃ |q_0|. We recall that in case E_qC ≫ N0, that is for low noise levels, the performance of the LE and of the DFE are similar to the performance of the LE-ZF and of the DFE-ZF, respectively; for the two systems of Examples 8.1 and 8.2, the values of γ in terms of J_min are given in [4].

8.10 Optimum methods for data detection

Adopting an MSE criterion at the decision point, we have derived the configurations of the LE and of the DFE. In both cases, the decision on a transmitted symbol a_k is based only on y_k, through a memoryless threshold detector.
In this section, a general derivation of optimum detection methods is considered; these methods also apply, in practice, to the decoding of convolutional codes (see Chapter 11). The information message is modeled as a sequence of r.v.s

    a = [a_0, a_1, ..., a_{K−1}],    a_i ∈ A

We assume the coefficients {η_n} of the overall system impulse response are known. The observation is given by the sequence

    z = [z_0, z_1, ..., z_{K−1}]

and the sequence of detected symbols is denoted by â. The decision