
# ADAPTIVE SOFT-INPUT SOFT-OUTPUT ALGORITHMS FOR ITERATIVE DETECTION

by

Achilleas Anastasopoulos

A Dissertation Presented to the
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Electrical Engineering)

August 1999

Dedication

This dissertation is dedicated to my parents Ioanna and Ioannis.

Contents

Dedication
List Of Figures
Abstract

1 Introduction
  1.1 Adaptive Processing
  1.2 Iterative Detection
  1.3 Adaptive Iterative Detection
  1.4 Organization

2 Optimal Adaptive SISO Algorithms
  2.1 The FSM and parameter model
  2.2 Classification of soft metrics
  2.3 Exact evaluation of the soft metrics
    2.3.1 Parameter-first Combining
      2.3.1.1 Probabilistic Parameter Model
      2.3.1.2 Deterministic Parameter Model
    2.3.2 Sequence-first Combining
      2.3.2.1 Probabilistic Parameter Model
      2.3.2.2 Deterministic Parameter Model

3 Sub-Optimal (Practical) Adaptive SISO Algorithms
  3.1 Parameter-first Combining
    3.1.1 Tree-search techniques
    3.1.2 Parameter estimate and binding term simplification
  3.2 Sequence-first Combining
    3.2.1 Metric simplification
    3.2.2 Further parameter estimator simplifications
  3.3 Interpretation of existing algorithms

4 TCM in Interleaved Frequency-Selective Fading Channels
  4.1 Receiver Structures
    4.1.1 Multiple-estimator adaptive SISO algorithm
    4.1.2 Single-estimator adaptive SISO algorithm
  4.2 Numerical Results and Discussion

5 Concatenated Convolutional Codes with Carrier Phase Tracking
  5.1 SCCC with Carrier Phase Tracking
    5.1.1 Receivers
    5.1.2 Numerical Results
  5.2 PCCC with Carrier Phase Tracking
    5.2.1 Receivers
    5.2.2 Numerical Results
  5.3 Factors Impacting Performance

6 Conclusions

Appendix A
  A.1 Proof of equation (2.8)
  A.2 Proof of equation (2.13)
  A.3 Channel update equations and binding term under the Gaussian assumption for (2.18)
  A.4 Channel update equations and binding term under the Gaussian assumption and a single estimator for (2.18)

Appendix B

List Of Figures

1.1 Parallel concatenation of FSMs and the associated iterative detection network
1.2 Serial concatenation of FSMs and the associated iterative detection network
2.1 Observation model
2.2 Modeling options and reasonable soft outputs
2.3 Likelihood evaluation using a non-recursive ECC
2.4 Likelihood evaluation using a forward-only recursive ECC
2.5 Likelihood evaluation using a forward/backward recursive ECC
2.6 Soft-metric evaluation in the case of sequence-first combining
3.1 Trellis-based practical SISO algorithm with multiple estimators
3.2 Trellis-based practical SISO algorithm with a single estimator
4.1 BER vs. Eb/N0 for system S1 and various configurations for the adaptive inner SISO
4.2 Comparison between Forward/Backward and Forward-only inner adaptive SISOs for system S1, for various values of the decision lag D; curves for three Eb/N0 values are presented
4.3 BER vs. Eb/N0 for system S2 and various configurations for the adaptive inner SISO
4.4 BER vs. Eb/N0 for the receiver employing adaptive and non-adaptive (using interpolated channel estimates) inner SISOs for different payload sizes
4.5 BER vs. Eb/N0 for systems S2 and S3 employing hard-decision and soft-decision decoding
5.1 Adaptive iterative detector for SCCC with single external PLL and standard, non-adaptive iterative decoder
5.2 BER vs. loop bandwidth for the SCCC with static phase; in each case, receivers employing an external PLL as well as inner adaptive SISOs are considered, and curves corresponding to Trained and Non-trained systems are shown
5.3 BER vs. Eb/N0 for SCCC with phase dynamics and various inner adaptive SISO configurations (the optimal performance for SING receivers was achieved for d = 0); for comparison, the performance of CC with adaptive hard-decision detection is presented
5.4 Activation schedule of the adaptive iterative receiver for PCCC
5.5 BER vs. Eb/N0 for PCCC with phase dynamics and various adaptive SISO configurations (the optimal performance for SING receivers was achieved for d = 0); for comparison, the performance of CC with adaptive hard-decision detection is presented

Abstract

Iterative detection, originally introduced for the decoding of turbo codes, has proven to be a valuable tool for approximately optimal data detection for systems consisting of multiple Finite State Machines (FSMs). The effectiveness of iterative detection in the presence of perfect Channel State Information (CSI) is attributed to the exchange of soft information between the Soft-Input Soft-Output (SISO) modules, the basic building blocks of the iterative receiver. Effective SISOs are trellis-based algorithms that provide soft information on the input and output of an FSM, with linear complexity in the observation length N (or the smoothing lag D). The problem of performing iterative detection for systems having parametric uncertainty has received relatively little attention in the open literature. In this work, we introduce the concept of adaptive iterative detection, in which parameter estimates, as well as soft information on the data, may be exchanged and updated, and a subclass of adaptive iterative receivers, based on adaptive SISO algorithms, is presented. Previously proposed adaptive SISO algorithms are either based on an oversimplified parameter model, or have complexity that grows exponentially with N (or D).

The investigation begins with a conceptual sorting of SISO algorithms for the Gauss-Markov (GM) and deterministic parameter models. The exact expressions for the soft metrics in the presence of parametric uncertainty are then rederived in a novel way that allows a unification of the theory of SISO algorithms with that of adaptive hard-decision algorithms, and enables the decoupling of complexity and observation length (or smoothing lag). Starting from these expressions, a family of suboptimal (practical) algorithms is motivated, based on forward/backward adaptive processing with linear complexity in N (or D). Recently proposed adaptive SISO algorithms, as well as existing adaptive hard-decision algorithms, are interpreted as special cases within this framework.

Using three representative applications, several design options are compared and the impact of parametric uncertainty on previously established results for iterative detection with perfect CSI is assessed. Specifically, joint iterative equalization-decoding for trellis-based codes over frequency-selective channels is considered, as is carrier phase tracking in the iterative decoding of both serial and parallel concatenated (turbo) codes.

Chapter 1

Introduction

The goal of this work is to bring together two separate areas in communications, namely adaptive processing and iterative detection. In the following, these two concepts are briefly presented, and the major advances in each of these areas are reviewed. Adaptive iterative detection is then introduced as a natural extension of iterative detection when parametric uncertainty is present.

1.1 Adaptive Processing

Maximum Likelihood Sequence Detection (MLSD) has been an active research topic over the past decade, with leading applications being the decoding of convolutional codes [ViOm79] (or Trellis Coded Modulation (TCM) [Unge82] in general), and data detection in Inter-Symbol Interference (ISI) channels [Forn72]. The Viterbi Algorithm (VA) [Forn73], the optimal solution for this set of problems, is a fixed-complexity, forward-recursive scheme, optimal in the MLSD sense, which outputs a single hard sequence estimate.

A necessary condition for the optimality of the VA is the exact knowledge of all the auxiliary parameters required for the evaluation of the transition metrics (e.g., channel taps in the case of equalization in ISI channels). In many practical scenarios, however, such channel-related parameters are unknown, and may also be time varying, rendering the VA inapplicable. This problem has also been treated extensively in the literature, with proposed solutions ranging from simple modifications of the VA [Koba71, MaPr73, Unge74] to conceptually optimal schemes for a specific parameter model.

Among the latter, the theoretical framework for MLSD in the presence of a continuous-time, probabilistically modeled unknown parameter process was established in [Kail69], and the resulting optimal receiver was shown to have an Estimator-Correlator (EC) structure. Optimal receivers for the special case of the Gaussian, Gauss-Markov (GM) discrete-time parameter model were derived in [Kail60, LoMo90, Ilti92, DaSh94, KuMuFu94, AnMo79], while in [ChPo95] the framework for MLSD in the presence of a deterministic unknown parameter model was set. The common result of the above works is that, due to the parameter-induced memory in the observation, no fixed-complexity receiver (like the VA) is optimal [Chug98]. The optimal receiver can be visualized as a tree-search algorithm, as opposed to a trellis search for known parameters. It is implied that a per-path parameter estimator, in the form of a Kalman Filter (KF) or a Recursive Least Squares (RLS) for the GM and deterministic parameter models, respectively, that depends on the entire path history is stored and updated as the tree is expanded. Furthermore, whenever the ML metric can be computed recursively, the optimal structure, at least for the special cases mentioned above, is a recursive version of the EC receiver, a fact that was explicitly stated in [Mamm95]. Previously suggested, efficient receivers (e.g., Per-Survivor Processing (PSP) [LiLiPr92, Sesh94, RaPoTz95]) can be derived by forced folding of the recursive EC tree structure into a fixed-complexity trellis.
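The fixed-complexity, forward-recursive trellis processing described above can be made concrete with a minimal sketch of the VA for a toy two-tap ISI channel (BPSK inputs, channel assumed perfectly known; the taps, sequences, and function names below are illustrative, not taken from this dissertation):

```python
import math

# Toy ISI channel: z_k = h0*x_k + h1*x_{k-1} + n_k, with BPSK inputs x_k in {-1,+1}.
# Trellis state s_k = x_{k-1}; the VA keeps one survivor path and metric per state.
H = (1.0, 0.5)  # channel taps, assumed perfectly known (illustrative values)

def viterbi(z, x_init=-1):
    states = (-1, +1)
    INF = float("inf")
    metric = {s: (0.0 if s == x_init else INF) for s in states}
    paths = {s: [] for s in states}
    for zk in z:
        new_metric, new_paths = {}, {}
        for x in states:                     # hypothesized input; next state is x
            best, best_prev = INF, states[0]
            for s in states:                 # Add: branch metric on transition (s, x)
                m = metric[s] + (zk - (H[0] * x + H[1] * s)) ** 2
                if m < best:                 # Compare
                    best, best_prev = m, s
            new_metric[x] = best             # Select: keep the surviving path
            new_paths[x] = paths[best_prev] + [x]
        metric, paths = new_metric, new_paths
    return paths[min(states, key=lambda s: metric[s])]

x = [+1, -1, -1, +1, +1, -1]
z = [H[0] * x[k] + H[1] * (x[k - 1] if k else -1) for k in range(len(x))]
print(viterbi(z) == x)  # noiseless observations: the hard estimate recovers x -> True
```

Each trellis step performs exactly one Add-Compare-Select operation per state; a per-survivor scheme (PSP) would additionally attach a KF or RLS parameter estimator to each survivor path inside the same loop.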

1.2 Iterative Detection

Recently, there has been great interest in iterative detection techniques for systems consisting of networks of Finite State Machines (FSMs). Iterative detection for a system consisting of an interconnection of multiple FSMs can be loosely defined as the set of rules to exchange, combine, and iterate some sort of soft information related to the FSM input/output symbols, with the purpose of providing reliable decisions about the input sequence. Applications that utilize this scheme include turbo decoding of Parallel and Serial Concatenated Convolutional Codes (PCCC and SCCC) [BeGlTh93, BeMoDiPo98], and decoding of TCM in interleaved frequency-selective fading channels [PiDiGl97, AnCh97]. Furthermore, the utility of iterative detection as a general approach to complexity reduction is becoming apparent and has recently been demonstrated for various multidimensional detection problems [ChCh98a, ChChOrCh98, Mohe98]. Two such simple networks, consisting of parallel and serial concatenation of two FSMs, are shown in Figs. 1.1 and 1.2, respectively, where the notation introduced in [BeDiMoPo98] is used.

Figure 1.1: Parallel concatenation of FSMs and the associated iterative detection network

The core building block in these iterative schemes is the Soft-Input Soft-Output (SISO) module [BeDiMoPo98], an algorithm that accepts a-priori information on the input and output symbols of an FSM and outputs the corresponding a-posteriori information. Based on the sort of information exchanged, SISO algorithms can be classified as A-Posteriori Probability (APP) or Minimum Sequence Metric (MSM) [ChCh98b] (referred to as Additive SISO in [BeDiMoPo98]). An additional classification stems from the way the SISO algorithm processes the input record, resulting in Fixed Interval (FI) or Fixed Lag (FL) [ChCh98b] (referred to as Sliding Window SISO in [BeDiMoPo98]) modes. Without going into the details of these configurations, we note that the SISO algorithm consists of Add-Compare-Select (ACS) or

Figure 1.2: Serial concatenation of FSMs and the associated iterative detection network

Product-Sum (PS) operations on the FSM trellis for MSM and APP configurations, respectively, followed by a completion operation. Contrary to the VA, however, both forward and backward recursions are required in order for the final soft information to be generated. Consequently, in both FL and FI modes the complexity grows linearly with the smoothing lag D or the record length N, respectively.

1.3 Adaptive Iterative Detection

In most practical situations, perfect Channel State Information (CSI) is not available at the receiver. Nevertheless, an iterative receiver should be able to deal with the unknown, and possibly time varying, parameters. An adaptive iterative detection scheme, in its most general form, is an algorithm that generates, exchanges, and iterates both parameter estimates, as well as reliability information for the data. Approaches suggested in the literature are: (i) iterative detection with an external estimator which is run only before the

first iteration, in which the parameter estimates are not exchanged as part of the iterative procedure, as in [LuWi98] for non-iterative soft information transfer in concatenated systems, and (ii) detection using an adaptive SISO algorithm to provide both soft information on the data symbols, as well as parameter estimates, as in [ZhFiGe97, AnPo97, AnPo98, HaAu98].

In this work a subclass of adaptive iterative receivers is investigated, in which the parameter estimates are not exchanged; rather, they are generated and are confined inside the adaptive SISO. Nevertheless, the exchange of soft information on the FSM symbols provides an implicit mechanism for the re-estimation of the unknown parameters as well. The basic building blocks of such adaptive iterative receivers are the adaptive SISO algorithms, which are the natural extension of the classical SISO algorithms when parametric uncertainty is present.

In the simplest case of the unknown parameter being modeled as a Markov Chain with a finite number of states, the optimal algorithm is a modified SISO that runs on the augmented FSM [FrVi98]. Of more interest is the case of the parameter being continuous in nature (e.g., phase offset or channel taps). Early attempts to solve this more general problem were based on the Baum-Welch method (or, equivalently, the Expectation-Maximization (EM) algorithm [DeLaRu77]). The iterative nature of this algorithm limits its applicability, though, since time-varying parameters cannot be easily incorporated into the model, and it is not clear if and how such methods can be extended to incorporate a stochastic description for the unknown parameters. In [ZhFiGe97] a GM model is assumed for the unknown parameter and the optimal scheme is derived. Not surprisingly, the optimal procedure is almost identical to the one performed in [Ilti92] for MLSD: a forward tree is built and the metrics are updated with the aid of per-path KFs. The procedure is concluded with appropriate combining of the metrics to get the required soft information. Starting from a different viewpoint, structurally similar algorithms are derived in [IlShGi94] and [AnPo97, AnPo98] for GM and deterministic parameter models, respectively, thus resulting in a forward-only recursive Estimator-Correlator-Combiner (ECC) optimal structure. Sequential versions of these algorithms have also been developed [KrMo93] for the deterministic parameter model. Finally, an adaptive SISO with a single parameter estimator was developed in [BaCu98].

The inherent limitation of all the above approaches is that they are all FL, and two major conflicting goals in designing a practical algorithm are coupled through a single parameter, the smoothing depth D. Indeed, in a FL algorithm, a large decision delay (smoothing depth) D is required to deliver reliable soft information. On the other hand, the same parameter determines the amount of pruning of the sequence tree and needs to be kept as small as possible, especially since it results in exponential complexity growth¹. Additional simplifications are then summoned up to decouple D and complexity (e.g., thresholding is used in [ZhFiGe97], while reduced-state sequence estimation and suboptimal filtering is used in [IlShGi94] to further reduce the processing burden). A direct consequence is that no FI algorithms can be obtained utilizing these existing approaches.

Applications that can potentially benefit from these adaptive SISOs include TCM in fast ISI fading channels, and PCCCs and SCCCs with carrier phase tracking (or in the presence of flat fading). A major goal related to all three applications is to assess the impact of parametric uncertainty on previously established conclusions for iterative detection in systems consisting of concatenated FSMs. In particular, regarding the

first application, it is a well demonstrated result that soft decisions greatly enhance the performance of TCM systems in ISI channels with perfect CSI [MeWiMe92, LiVuSa95, HaAu96]. Nevertheless, iterative detection has been proven to provide a minor incremental improvement² after the first iteration for a fading channel when efficient SISOs are utilized [AnCh97]. The natural question arising is whether these conclusions are valid when imperfect CSI is available.

On the coding front, the introduction of turbo codes [BeGlTh93] was arguably one of the most significant advances in coding theory. These codes have been shown to achieve near-capacity performance with reasonable complexity. As an example, comparing the industry-standard constraint length 7 convolutional code with an equal-complexity PCCC or SCCC³, an additional 2.5 dB of coding gain can be observed at a BER of 10^-4. The ability to maintain this coding gain in more practical scenarios (e.g., mobile platforms), and tolerance of, or robustness to, imperfect reference oscillators (e.g., when phase jitter is present), is essential for utilization in many environments.

¹ The exponential complexity occurs due to the fact that, in the process of evaluating APPs for a generic quantity u_{k-D}, recursive expressions for the APPs of u_{k-D}^{k} are built. This is the exact reason for the exponential complexity of the Abend & Fritchman algorithm [AbFr70] in the known-parameter case.

² The iteration gain is actually a complex function of the effectiveness of the particular SISO module used [PiDiGl97], the channel dynamics, and the interleaver structure [AnCh99].

³ For an interleaver size of 16K and 4-state constituent codes for both SCCC and PCCC.

1.4 Organization

In Chapter 2, the FSM and parameter models are introduced. Several meaningful soft metrics for the GM and deterministic parameter models are presented and sorted in a way that directly implies the method of evaluation. The exact expression for each of the soft metrics is then rederived in a novel way that allows the unification of the known-channel SISO with the hard-decision adaptive theory, leading directly to both FL and FI schemes. One of the main results from Chapter 2 is the multiplicity of methods for evaluating various soft metrics, as is the case for SISOs when no parametric uncertainty is present. This fact, coupled with the plethora of alternatives for deriving suboptimal schemes, leads to a proliferation of available design options. Starting from these expressions, a family of suboptimal practical algorithms is motivated in Chapter 3, based on forward/backward adaptive processing, the unique characteristic of which is the decoupling of complexity and smoothing depth. In particular, the complexity of these algorithms grows linearly with the smoothing lag D (for FL), or record length N (for FI). All existing adaptive SISO algorithms for continuous-valued parameter models can be viewed as special cases within this framework.

In the first application examined in Chapter 4, TCM in interleaved frequency-selective fading channels, this design space is partially explored and the effectiveness of the various adaptive SISO options is assessed. In this application the observation model is indeed linear, and the parameter (i.e., the channel) can be adequately modeled as a GM process or a deterministic constant [CaRa98]. As a consequence, an adaptive iterative receiver for this application can be directly devised from the non-adaptive one [PiDiGl97, AnCh97], by substituting the inner SISO with its adaptive equivalent.

Many applications of interest do not adhere to the exact modeling assumptions required to derive the adaptive SISO algorithms in Chapters 2 and 3. One such example, examined in Chapter 5, is the decoding of SCCCs in the presence of carrier-phase uncertainty, where the discrepancy emerges from the non-linear dependency of the observation on the carrier phase. Available options for mitigating those problems are discussed in detail, including modifications to the proposed adaptive SISOs, as well as additional enhancements (e.g., utilization of pilot symbols). Finally, means of utilizing the forward-only adaptive SISOs for iterative decoding of PCCCs with carrier-phase uncertainty are considered. In addition to the non-linear observation model, this application is challenging because the transmitted signal is an explicit function of both FSM outputs.

To clarify the presentation, many of the detailed algorithm recursions and the associated derivations are relegated to the Appendices.

Chapter 2

Optimal Adaptive SISO Algorithms

2.1 The FSM and parameter model

The output y_k of a generic FSM can be defined as a function of its input x_k and state s_k, together constituting the transition t_k = (s_k, x_k), through the equations

y_k = out(x_k, s_k)       (Output)        (2.1a)
s_{k+1} = ns(x_k, s_k)    (Next State)    (2.1b)

where each quantity u_k (i.e., x_k, y_k, s_k, or t_k) is assumed to take values in the set A_u = {0, 1, ..., N_u - 1}. As shown in Fig. 2.1, the output y_k of the FSM is observed indirectly, through a function, non-linear in general, which also involves an unknown parameter θ.

Figure 2.1: Observation model

Two modeling options for the unknown parameter are of main interest: (i) a stochastic description, in particular a GM random process (θ = {g_k}), and (ii) a deterministic unknown constant (θ = g). Regarding the former option, a first-order GM process is considered, since it can model any higher-order GM, as well as any ARMA process [AnMo79]. Under these assumptions, the vector process {g_k} evolves in time according to the equation¹

g_k = G g_{k-1} + w_k    (2.2)

where w_k is a zero-mean Gaussian vector with covariance K_w(m) = Q δ(m), δ(·) representing the Kronecker delta and (·)^+ denoting complex conjugate transpose. A necessary and sufficient condition for stationarity is that the covariance matrix of g_k, K_g, satisfies the equation

K_g = G K_g G^+ + Q    (2.3)

Under this condition, the time-reversed process {g_{-k}} is also first-order GM and stationary, with representation

g_k = G_b g_{k+1} + v_k                   (2.4a)
G_b = K_g G^+ K_g^{-1}                    (2.4b)
K_v(m) = Q_b δ(m)                         (2.4c)
Q_b = K_g - K_g G^+ K_g^{-1} G K_g        (2.4d)

where K_g is assumed non-singular, since a singular K_g would imply that the state model dimension could be reduced.

In the following we assume the observation z_k to be a linear function of g_k:

z_k = f(y_k)^T g_k + n_k  →  z_k = y_k^T g_k + n_k    (2.5)

where n_k is a complex-valued AWGN with variance N_0, and f(·) is, in general, a complex vector depending on the modulation format. For economy of symbols, y_k is used in place of f(y_k).

¹ We assume a time-invariant model for notational and expositional simplicity. All results generalize to the time-variant case.

2.2 Classification of soft metrics
The objective of a SISO algorithm is to provide soft information about the input and output symbols of the FSM based on the observation record. This reliability information can either be in the form of an a-posteriori probability or any other related quantity. It would be advantageous at this point to generalize the notion of the state s_k and transition t_k to longer sequence portions (e.g., a super-state and a super-transition can be defined as ss_k = (t_{k-d}, ..., t_{k-1}, s_k) and ts_k = (t_{k-d}, ..., t_k) for arbitrary d). This foreshadows the result that the optimal algorithms do not "fold" [Chug98] onto a trellis as in the case of known channel, and that the size of the trellis eventually used is a design parameter. For a generic quantity u_k (i.e., x_k, y_k, s_k, t_k, ss_k, ts_k, etc.), and whenever θ has a probabilistic description, we are interested in the following two soft outputs
APP_p(u_k) = P(u_k | z_0^n) = c Σ_{x_0^n : u_k} P(z_0^n, x_0^n) = c Σ_{x_0^n : u_k} E_θ{P(z_0^n, x_0^n | θ)}    (2.6a)

MSM_p(u_k) = -log[max_{x_0^n : u_k} P(x_0^n | z_0^n)] = c' - log[max_{x_0^n : u_k} P(z_0^n, x_0^n)]
           = c' - log[max_{x_0^n : u_k} E_θ{P(z_0^n, x_0^n | θ)}]    (2.6b)

where x_0^n : u_k denotes all input sequences consistent with u_k, and c and c' are normalizing constants. In the case when the unknown parameter is modeled as a deterministic constant, and expectation over the unknown θ is not feasible, a reasonable soft output choice is

APP_d(u_k) = c Σ_{x_0^n : u_k} max_θ P(z_0^n, x_0^n | θ)    (2.6c)

MSM_d(u_k) = c' - log[max_{x_0^n : u_k} max_θ P(z_0^n, x_0^n | θ)]    (2.6d)
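On a toy example, the four soft outputs above can be evaluated by direct enumeration over all input sequences. The sketch below assumes a memoryless binary mapper (bit b mapped to ±1), real AWGN in place of the chapter's complex model, and a scalar θ restricted to two equally likely values, so that E_θ is a two-term average and max_θ a two-term maximum; all numerical values and names are illustrative:

```python
import math
from itertools import product

SIGMA2 = 0.25                   # real AWGN variance (illustrative)
THETAS = (0.8, 1.2)             # scalar parameter, two equally likely values
bpsk = lambda b: 1.0 - 2.0 * b  # bit 0 -> +1, bit 1 -> -1

def seq_likelihood(z, x, theta):
    """P(z, x | theta) for i.i.d. uniform bits and z_k = theta*bpsk(x_k) + n_k."""
    p = 1.0
    for zk, xk in zip(z, x):
        m = theta * bpsk(xk)
        p *= math.exp(-(zk - m) ** 2 / (2 * SIGMA2)) / math.sqrt(2 * math.pi * SIGMA2)
        p *= 0.5                # a-priori P(x_k)
    return p

def soft_outputs(z, k):
    """APP_p, MSM_p, APP_d, MSM_d for u_k = x_k (normalizing constants omitted)."""
    n, out = len(z), {}
    for u in (0, 1):
        seqs = [x for x in product((0, 1), repeat=n) if x[k] == u]   # x_0^n : u_k
        avg = [sum(seq_likelihood(z, x, t) for t in THETAS) / len(THETAS) for x in seqs]
        gen = [max(seq_likelihood(z, x, t) for t in THETAS) for x in seqs]
        out[u] = dict(APPp=sum(avg), MSMp=-math.log(max(avg)),
                      APPd=sum(gen), MSMd=-math.log(max(gen)))
    return out

z = [0.9, -1.1, -0.7]           # noisy observation of bits (0, 1, 1) at theta near 1
res = soft_outputs(z, 0)
print(res[0]["APPp"] > res[1]["APPp"])   # bit 0 is the more likely value at k = 0 -> True
```

Note that APP_d ≥ APP_p term by term, since max_θ dominates the average over θ; this is exactly the average versus generalized likelihood distinction drawn below.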

When the SISO module is part of an iterative re eiver, the soft output is usually
normalized to the a-priori information resulting in the so- alled extrinsi information (e.g., APP(uk )=P (uk ), or MSM(uk ) ( log P (uk )) is used in pla e of APP() or
MSM() respe tively). We observe that in all ases, the soft outputs an be derived
from the quantities EfP (z0n; xn0 j)g and max P (z0n; xn0 j) by either averaging or
maximizing { for APP() or MSM(), respe tively { over the nuisan e parameters
xn0 : uk . In this ontext, the soft output APP() an be thought of as an average
likelihood, while MSM() as a generalized likelihood (the notion of average or generalized likelihood does not refer to the averaging or maximization over the parameter 
; subs ripts p and d { for probabilisti and deterministi des ription respe tively
{ are used to distinguish the latter). This interpretation is useful for two reasons:
(i) a uni

all maximizations an be eÆ iently performed in the log domain.ed treatment of the two types of soft-outputs is possible in terms of the ECC stru ture. Sin e the max and log operators ommute. and (ii) a relationship between APP() and MSM() is established other than the latter being a suboptimal version of the former. We .

2. Maintaining the onditioning over the entire input sequen e. n = k + D. expe tation or maximization an be performed on the unknown parameter depending on the underlying model. while for a FL algorithm.nally note that for a FI algorithm. n represents the last symbol in the re eived blo k. xn0 j) to obtain the proposed soft metri s.3 Exa t evaluation of the soft metri s Equation (2. Combining of the resulting metri s over the nuisan e parameters xn0 : uk is performed as a .6) learly suggests several options in manipulating P (z0n.

nal step. leading to the .

2.nal four soft metri s for uk . 2. This set of options is represented by the left subtree of Fig. In the same .

Di erent. as well as maxx :u and max ommute. the subse tions.6d). two additional hoi es are available for the evaluation the metri s in (2. Sin e operators Px :u and E. where the exa t evaluation of the orresponding metri s is developed. These two extra options are represented by the right bran h of the design options tree. are indi ated as well. Here. but meaningful soft metri s an also be de.6a) and (2. the sequen e ombining is done initially.gure. followed by the parameter elimination.

ned by inter hanging the Px :u operator with the E n 0 n 0 k k n 0 k 12 .

Nuisance Parameter (data & channel) Combining Parameter-First Sequence-first Combining Combining Parameter Model Probabilistic (2.1 Parameter-.2.1) Deterministic Probabilistic (2.2: Modeling options and reasonable soft outputs or max operator in (2. 2.3.3.2) Metric Combining APP p ( ) MSM p ( ) APP d ( ) MSM d( ) APP p( ) MSM d ( ) Figure 2.1. respe tively.1) Deterministic (2.6b) and (2. mainly be ause they don't appear to lead to rigorously expressed optimal stru tures.3.3.3.1.2) (2.2.6 ). These options will not be pursued in this work.

rst Combining 2.1 Probabilisti Parameter Model We begin by deriving optimal algorithms for the evaluation of the soft outputs de.1.3.

xk0 )P (xk )P (z0k 1 . The obvious approa h is a straightforward evaluation of this likelihood for ea h of the (Nx)n+1 input sequen es. The pro edure is on luded with the appropriate ombining of these quantities (summation or maximization for APPp(uk ) or MSMp(uk ). xn0 ). sin e no re ursive pro essing is taking pla e. xn0 ) an be omputed re ursively as in [Ilti92.3.ned in equations (2.6b) and more pre isely the quantity P (z0n. su ers from extreme omplexity.6a) and (2. xk0 ) = P (zk jz0k 1 . DaSh94℄ P (z0k . This type of pro essing. A more attra tive alternative is based on the fa t that the likelihood P (z0n. 2. respe tively). whi h is the ECC and is shown in Fig. xk0 1 ) 13 .

    P(z_0^k, x_0^k) = N(z_k; y_k^T g̃_{k|k-1}, N_0 + y_k^T G̃_{k|k-1} y_k) P(x_k) P(z_0^{k-1}, x_0^{k-1})    (2.7)

where g̃_{k|k-1} and G̃_{k|k-1} are the channel one-step prediction and corresponding covariance generated by a KF. It is implied from this equation that a KF that depends on the entire path history is required to complete the recursion.

[Figure 2.3: Likelihood evaluation using a non-recursive ECC]

The method suggested by this equation, which is a forward recursive ECC, is illustrated in Fig. 2.4 and can be described as follows. Starting at time 0, a forward N_x-ary tree is built, each node of which represents a sequence path. The likelihood P(z_0^{k-1}, x_0^{k-1}), together with the g̃_{k|k-1} and G̃_{k|k-1} of that path, is stored in each node. At each time k, the tree is expanded forward and the probabilities corresponding to the newly generated branches are calculated using (2.7). After n+1 steps, the (N_x)^{n+1} path likelihoods corresponding to the same u_k are combined (averaged for APP_p(u_k) or maximized for MSM_p(u_k)) to produce the final soft output. Although this technique is more efficient than the straightforward evaluation of the likelihoods, it results, as mentioned in Chapter 1, in suboptimal algorithms where complexity and smoothing depth are exponentially coupled. An alternative optimal procedure for the likelihood calculation, based on which several useful suboptimal algorithms will be developed in the next chapter, is now described.

[Figure 2.4: Likelihood evaluation using a forward-only recursive ECC]
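As a sanity check on (2.7), the sketch below runs the per-path recursion in a real-valued, scalar (single-tap) analogue of the model; the Gauss-Markov parameters a, Q, Kg and the data are arbitrary illustrative choices, not taken from the text. The product of the one-step innovation likelihoods must equal the joint Gaussian density of the observations along the hypothesized path.

```python
import math

def npdf(z, m, v):
    # scalar Gaussian density N(z; m, v)
    return math.exp(-(z - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

# Gauss-Markov channel g_{k+1} = a*g_k + w_k, w_k ~ N(0, Q), g_0 ~ N(0, Kg)
a, Q, Kg, N0 = 0.9, 0.19, 1.0, 0.5
y = [1.0, -1.0]   # hypothesized symbols along one path of the tree
z = [0.7, -0.3]   # observations z_k = y_k * g_k + n_k

# forward recursion (2.7): a per-path KF supplies the one-step prediction
g_pred, G_pred, like = 0.0, Kg, 1.0
for k in range(len(z)):
    like *= npdf(z[k], y[k] * g_pred, N0 + y[k] ** 2 * G_pred)  # innovation term
    gain = G_pred * y[k] / (N0 + y[k] ** 2 * G_pred)            # measurement update
    g_filt = g_pred + gain * (z[k] - y[k] * g_pred)
    G_filt = (1.0 - gain * y[k]) * G_pred
    g_pred, G_pred = a * g_filt, a * a * G_filt + Q             # time update

# direct joint Gaussian density of (z_0, z_1) for the same path
C00 = y[0] ** 2 * Kg + N0
C11 = y[1] ** 2 * (a * a * Kg + Q) + N0
C01 = y[0] * y[1] * a * Kg
det = C00 * C11 - C01 ** 2
quad = (C11 * z[0] ** 2 - 2 * C01 * z[0] * z[1] + C00 * z[1] ** 2) / det
joint = math.exp(-quad / 2.0) / (2.0 * math.pi * math.sqrt(det))
```

The same prediction-error decomposition is what allows each tree node to carry only its running likelihood and its own KF statistics.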

We observe that, due to the presence of the parameter process {g_k}, future observations depend on past observations conditioned on the state of the FSM. On the other hand, by conditioning on the parameter g_k as well, separation of the future and past observations occurs, yielding (see Appendix for the details of the derivation)

    P(z_0^n, x_0^n) = P(z_0^k, x_0^k) · P(z_{k+1}^n, x_{k+1}^n | s_{k+1}) · ∫ [ P(g_k | x_0^k, z_0^k) P(g_k | s_{k+1}, x_{k+1}^n, z_{k+1}^n) / P(g_k) ] dg_k    (2.8)

where the three factors are the past/present term, the future term, and the binding term b_k^p, respectively. The relation in (2.8), and subsequent analogous expressions, are the basis for the practical algorithms proposed in Chapter 3 and is a key contribution of this work. It indicates that the likelihood can be split into three terms, of which the first two depend each on the past/present and future, while the third can be viewed as a weighting factor that binds them together. Indeed, the third term quantifies the dependence of the future, present and past that is introduced due to the parameter process {g_k}, and in the absence of parametric uncertainty it would be eliminated².

The binding term admits a closed-form solution, since it involves an integral of Gaussian densities, and results in

    b_p(g̃_{k|k}, G̃_{k|k}, g̃^b_{k|k+1}, G̃^b_{k|k+1}) = √( |K_g| |P| / ( |G̃_{k|k}| |G̃^b_{k|k+1}| ) ) exp(−μ)    (2.9a)

with

    P^{−1} = G̃_{k|k}^{−1} + (G̃^b_{k|k+1})^{−1} − K_g^{−1}    (2.9b)
    p = G̃_{k|k}^{−1} g̃_{k|k} + (G̃^b_{k|k+1})^{−1} g̃^b_{k|k+1}    (2.9c)
    μ = g̃_{k|k}^+ G̃_{k|k}^{−1} g̃_{k|k} + (g̃^b_{k|k+1})^+ (G̃^b_{k|k+1})^{−1} g̃^b_{k|k+1} − p^+ P p    (2.9d)

where g̃_{k|k} and g̃^b_{k|k+1} are the sequence-conditioned forward channel estimate and the one-step backward channel predictor, and G̃_{k|k} and G̃^b_{k|k+1} are the corresponding covariances. Although the expression for b_p(·) is fairly complicated (it involves inverse matrices and matrix determinants), we emphasize that it does not require any reprocessing of the observation record: all quantities are available from the KFs on the forward and backward trees.

The first term in (2.8) is recursively evaluated using (2.7), while the second is calculated through a similar backward recursion

    P(z_{k+1}^n, x_{k+1}^n | s_{k+1}) = P(z_{k+1} | z_{k+2}^n, s_{k+1}, x_{k+1}^n) P(x_{k+1}) P(z_{k+2}^n, x_{k+2}^n | s_{k+2})
                                     = N(z_{k+1}; y_{k+1}^T g̃^b_{k+1|k+2}, N_0 + y_{k+1}^T G̃^b_{k+1|k+2} y_{k+1}) P(x_{k+1}) P(z_{k+2}^n, x_{k+2}^n | s_{k+2})    (2.10)

The relevant channel estimates are provided by a per-path backward-running KF. The likelihood of each sequence x_0^n can now be evaluated as indicated by (2.8), with the likelihoods calculated using (2.7) and (2.10). The scheme suggested by (2.8)-(2.10) is illustrated in Fig. 2.5 and can be described as follows. Starting at time 0, a forward tree is built, in the same way described earlier. In addition, starting at time n, a backward tree is expanded according to the recursion (2.10). After k forward and n−k backward steps, the two trees meet each other. The (N_x)^{k+1} likelihoods corresponding to the nodes of the forward tree are combined with the (N_x)^{n−k} likelihoods corresponding to the nodes of the backward tree (future) and weighted by the binding term in (2.9).

²This is also true for the case of the parameter being independent for each time instant.

We refer to this structure as the forward/backward recursive ECC. The final soft output for a generic quantity u_m is the summation (or maximization) over all terms with the same u_m. Note that the choice of k, the particular point in time when the past and future metrics are combined, is completely arbitrary (i.e., it is not related to m). The two extreme values k = n and k = 0 correspond to a single forward or a single backward tree, respectively. Thus, while it may seem redundant to store and update both a forward and a backward tree (i.e., it has been shown already how to accomplish the same result with a single forward tree), it is this form, however, that decouples complexity and observation length (or smoothing depth), leading to practical algorithms. In a practical algorithm, the reference point k is chosen to be in the neighborhood of m, in order to maximize the number of relevant sequences combined to produce the soft information on u_m.
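The closed-form character of the binding term can be illustrated in the scalar, real-valued case. The sketch below (all numbers illustrative) evaluates b = ∫ N(g; m_f, P_f) N(g; m_b, P_b) / N(g; 0, K_g) dg, mirroring the structure of (2.9), both by completing the square and by brute-force quadrature.

```python
import math

def npdf(g, m, v):
    return math.exp(-(g - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

# forward posterior N(m_f, P_f), backward predictor N(m_b, P_b), prior N(0, Kg)
m_f, P_f = 0.3, 0.4
m_b, P_b = 0.5, 0.6
Kg = 2.0

# closed form, by completing the square inside the Gaussian integral
A = 1.0 / P_f + 1.0 / P_b - 1.0 / Kg   # inverse of the combined covariance
B = m_f / P_f + m_b / P_b
C = m_f ** 2 / P_f + m_b ** 2 / P_b
b_closed = math.sqrt(Kg / (P_f * P_b * A)) * math.exp(-(C - B * B / A) / 2.0)

# brute-force quadrature of the same integral
h, lo, hi = 1e-3, -10.0, 10.0
n = int(round((hi - lo) / h))
total = 0.0
for i in range(n + 1):
    g = lo + i * h
    wt = h if 0 < i < n else h / 2.0
    total += wt * npdf(g, m_f, P_f) * npdf(g, m_b, P_b) / npdf(g, 0.0, Kg)
```

As in the matrix case, no reprocessing of observations is involved: only the two estimates, their variances, and the prior enter the formula.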

[Figure 2.5: Likelihood evaluation using a forward/backward recursive ECC]

2.3.1.2 Deterministic Parameter Model

We now proceed with the second modeling option, where the unknown parameter is modeled as a deterministic constant. The development, and hence the resulting algorithms, is similar to that associated with the GM channel; therefore, we concentrate on the unique features of this subcase. We choose to work in the log domain and develop expressions for the sequence metric Λ(z_0^n, x_0^n) ≜ −log max_g P(z_0^n, x_0^n | g) [ChPo95]

    Λ(z_0^n, x_0^n) = min_g Σ_{m=0}^{n} ( |z_m − y_m^T g|² − N_0 log P(x_m) ) = Σ_{m=0}^{n} ( |z_m − y_m^T g̃|² − N_0 log P(x_m) )    (2.11)

As in the case of random parameters, a forward-only recursion can be derived and visualized as a forward-growing tree with a per-path RLS channel estimator providing the required quantities for the evaluation of the likelihoods Λ(z_0^n, x_0^n). Once again, an algorithm with both forward and backward recursions is sought in order to formulate practical algorithms, resulting in a forward/backward recursive ECC. We seek an expression similar to (2.8) that splits the likelihood calculation into a past/present and a future term (relative to z_k), together with a possible third term that binds the former ones. The key to this derivation is the fact that the minimizing parameter g̃, the solution to the least-squares problem, can be written in terms of a forward g̃_k and a backward g̃^b_{k+1} estimate, depending on the observations z_0^k and z_{k+1}^n, respectively. We will slightly generalize the expression in (2.11) by introducing an exponentially decaying bilateral window relative to time k with forgetting factor λ. This weighting provides increased numerical stability as well as the ability to track slow parameter variations:

    Λ(z_0^n, x_0^n) = min_g [ Σ_{m=0}^{k−1} ( |z_m − y_m^T g|² − N_0 log P(x_m) ) λ^{k−m} + |z_k − y_k^T g|² − N_0 log P(x_k) + Σ_{m=k+1}^{n} ( |z_m − y_m^T g|² − N_0 log P(x_m) ) λ^{m−k} ]    (2.12)
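Since the windowed cost in (2.12) is quadratic in g, it splits exactly into a forward minimum, a backward minimum, and a binding penalty proportional to the squared distance between the forward and backward least-squares estimates. A scalar numerical check (real-valued, illustrative data; a sketch of the structure, not the Appendix proof):

```python
# scalar exponentially windowed least squares: cost(g) = sum_m w_m (z_m - y_m g)^2,
# with bilateral weights w_m = lam^|k-m| as in (2.12)
lam = 0.9
y = [1.0, -1.0, 1.0, 1.0, -1.0, 1.0]
z = [0.82, -0.75, 0.90, 0.78, -0.85, 0.80]
k = 2  # split point: past/present = 0..k, future = k+1..n

def ls_pieces(idx, ref, zz):
    """Weighted LS over indices idx: curvature, minimizer, minimum cost."""
    aa = sum(lam ** abs(ref - m) * y[m] ** 2 for m in idx)
    bb = sum(lam ** abs(ref - m) * y[m] * zz[m] for m in idx)
    cc = sum(lam ** abs(ref - m) * zz[m] ** 2 for m in idx) - bb * bb / aa
    return aa, bb / aa, cc

a_f, g_f, c_f = ls_pieces(range(0, k + 1), k, z)        # forward estimate / cost
a_b, g_b, c_b = ls_pieces(range(k + 1, len(y)), k, z)   # backward estimate / cost
a_t, g_t, c_t = ls_pieces(range(0, len(y)), k, z)       # joint minimization

# binding penalty: the minimum of a sum of two quadratics exceeds the sum of minima
b_d = a_f * a_b / (a_f + a_b) * (g_f - g_b) ** 2

# perfectly consistent (noiseless) data: the two estimates agree, penalty is zero
zc = [0.8 * ym for ym in y]
_, gf_c, _ = ls_pieces(range(0, k + 1), k, zc)
_, gb_c, _ = ls_pieces(range(k + 1, len(y)), k, zc)
```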

Leaving the proof for the Appendix, the final result is summarized in the following equation

    Λ(z_0^n, x_0^n) = Λ(z_0^k, x_0^k) + Λ(z_{k+1}^n, x_{k+1}^n) + b_d(g̃_k, P̃_k, g̃^b_{k+1}, P̃^b_{k+1})    (2.13)

The resemblance of the above expression to (2.8) is obvious. This parallel is completed by providing the recursions for the forward and backward metrics:

    Λ(z_0^k, x_0^k) = λ Λ(z_0^{k−1}, x_0^{k−1}) + |z_k − y_k^T g̃_{k−1}|² / (λ + y_k^T P̃_{k−1} y_k) − N_0 log P(x_k)    (2.14a)
    Λ(z_{k+1}^n, x_{k+1}^n) = λ Λ(z_{k+2}^n, x_{k+2}^n) + |z_{k+1} − y_{k+1}^T g̃^b_{k+2}|² / (λ + y_{k+1}^T P̃^b_{k+2} y_{k+1}) − N_0 log P(x_{k+1})    (2.14b)

where g̃_{k−1}, g̃^b_{k+2} and P̃_{k−1}, P̃^b_{k+2} are the forward and backward RLS channel estimates and the corresponding information matrices [Hayk96]. The forward and backward RLS updates are summarized in the Appendix, together with the explicit expression for the binding term b_d(·). Note that when g̃_k = g̃^b_{k+1}, the binding term b_d(·) = 0. This result is intuitively satisfying, since it suggests that, when the forward and backward channel estimates agree, no penalty is paid by means of increasing the corresponding sequence metric. This is not precisely true for the GM case in (2.9). Due to the presence of an a-priori model for the parameter g (reflected in K_g), the required conditions for the binding term to vanish are (i) the equality of the forward and the backward estimates g̃_{k|k} = g̃^b_{k|k+1}, (ii) the equality of the forward and the backward covariance matrices G̃_{k|k} = G̃^b_{k|k+1}, and (iii) the reliability of the forward and the backward channel estimates compared to the a-priori statistics (e.g., the term K_g^{−1} is insignificant in evaluating (2.9)).

2.3.2 Sequence-first Combining

The special form of APP_p(u_k) and MSM_d(u_k) allows us to obtain alternative expressions for the optimal soft outputs, by realizing that we can interchange the expectation (or maximization) operators in (2.6a) and (2.6d), respectively, to obtain

    APP_p(u_k) = Σ_{x_0^n : u_k} E{ P(z_0^n, x_0^n | g) } = P(z_0^n, u_k)    (2.15a)
    MSM_d(u_k) = log [ max_{x_0^n : u_k} max_g P(z_0^n, x_0^n | g) ]    (2.15b)

2.3.2.1 Probabilistic Parameter Model

We begin with the derivation for the soft output APP_p(u_k), and in particular APP_p(t_k), for the GM channel. A straightforward expression can be derived by utilizing the fact that the process {(t_k, g_k)} is a mixed-state Markov chain:

    P(z_0^n, t_k) = ∫_{g_k} P(z_0^n, t_k, g_k) dg_k = ∫_{g_k} P(z_0^{k−1}, s_k, g_k) P(z_k | t_k, g_k) P(x_k) P(z_{k+1}^n | s_{k+1}, g_k) dg_k    (2.16)

where P(z_0^{k−1}, s_k, g_k) and P(z_{k+1}^n | s_{k+1}, g_k) can be updated by a forward and a backward recursion, respectively:

    P(z_0^k, s_{k+1}, g_{k+1}) = Σ_{t_k : s_{k+1}} ∫_{g_k} P(z_0^{k−1}, s_k, g_k) P(z_k | t_k, g_k) P(x_k) P(g_{k+1} | g_k) dg_k    (2.17a)
    P(z_{k+1}^n | s_{k+1}, g_k) = Σ_{t_{k+1} : s_{k+1}} ∫_{g_{k+1}} P(z_{k+1} | t_{k+1}, g_{k+1}) P(x_{k+1}) P(g_{k+1} | g_k) P(z_{k+2}^n | s_{k+2}, g_{k+1}) dg_{k+1}    (2.17b)

Unfortunately, the storage requirement for the above equations is infinite, due to the fact that g_k takes values in a continuous space, making it of primarily conceptual

value³. Although it is conceivable to quantize the channel values, we will follow another approach. A derivation similar to (2.8) leads to

    P(z_0^n, t_k) = P(z_0^{k−1}, s_k) P(x_k) P(z_{k+1}^n | s_{k+1}) ∫_{g_k} [ P(g_k | s_k, z_0^{k−1}) P(z_k | t_k, g_k) P(g_k | s_{k+1}, z_{k+1}^n) / P(g_k) ] dg_k    (2.18)

where the integral is the binding term b'_p(·). The forward and backward recursions for the

first two quantities are as follows:

    P(z_0^k, s_{k+1}) = Σ_{t_k : s_{k+1}} P(z_0^{k−1}, s_k) P(z_k | t_k, z_0^{k−1}) P(x_k)    (2.19a)
    P(z_{k+1}^n | s_{k+1}) = Σ_{t_{k+1} : s_{k+1}} P(z_{k+1} | t_{k+1}, z_{k+2}^n) P(x_{k+1}) P(z_{k+2}^n | s_{k+2})    (2.19b)

Aside from the evident similarity of (2.18), (2.19) with (2.7)-(2.10), there are two important differences: (i) the recursions described here do not depend (at least explicitly) on the entire path history, and (ii) the off-line evaluation of the third term of (2.18), as well as of the innovation terms in (2.19), is complicated due to the fact that they are mixed-Gaussian densities. Nevertheless, assuming that the latter difficulty can be overcome, the algorithm suggested by (2.18), (2.19) is much simpler: only a forward and a backward recursion is performed over a state trellis, followed by a combining (multiplication) of the updated quantities with an appropriate weight (third term). Once more we emphasize that the generalized states ss_k and transitions ts_k can be used, with the corresponding updating equations unchanged. This procedure is depicted in Fig. 2.6.
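A minimal sketch of the forward/backward trellis recursions of (2.19) in the perfect-CSI limit, where the innovation terms are exactly Gaussian rather than mixed-Gaussian (2-state toy ISI channel, illustrative data). The product of forward and backward quantities, summed over the states, must be independent of the combining time k:

```python
import math

N0 = 0.4

def lik(zk, sk, xk):
    # innovation term for a toy 2-state ISI model: output = bit + 0.5 * previous bit
    m = (2 * xk - 1) + 0.5 * (2 * sk - 1)
    return math.exp(-(zk - m) ** 2 / (2 * N0)) / math.sqrt(2 * math.pi * N0)

z = [1.3, -0.2, -1.1, 0.6]
n = len(z)

# forward recursion (2.19a): alpha[k][s] ~ P(z_0^{k-1}, s_k), known initial state 0
alpha = [{0: 1.0, 1: 0.0}]
for k in range(n):
    nxt = {0: 0.0, 1: 0.0}
    for s in (0, 1):
        for x in (0, 1):  # transition t_k = (s_k, x_k); next state is x_k
            nxt[x] += alpha[k][s] * lik(z[k], s, x) * 0.5
    alpha.append(nxt)

# backward recursion (2.19b): beta[k][s] ~ P(z_k^{n-1} | s_k)
beta = [None] * (n + 1)
beta[n] = {0: 1.0, 1: 1.0}
for k in range(n - 1, -1, -1):
    beta[k] = {s: sum(lik(z[k], s, x) * 0.5 * beta[k + 1][x] for x in (0, 1))
               for s in (0, 1)}

# combining at any time k gives the same total sequence likelihood
totals = [sum(alpha[k][s] * beta[k][s] for s in (0, 1)) for k in range(n + 1)]
```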

As a final remark, note that (2.18) is not the only way of splitting P(z_0^n, t_k) into past, future and binding terms; other configurations are also possible, resulting in slightly different forward and backward recursions, as in the following

    P(z_0^n, t_k) = P(z_0^k, t_k) P(z_{k+1}^n | s_{k+1}) ∫_{g_k} [ P(g_k | t_k, z_0^k) P(g_k | s_{k+1}, z_{k+1}^n) / ( P(g_k) P(t_k) ) ] dg_k    (2.20)

³The recursions in (2.17) are basically the well-known BCJR [BaCoJeRa74] recursions for a mixed-state Markov process.

[Figure 2.6: Soft-metric evaluation in the case of sequence-first combining]

Though such an expression is useful in deriving links with existing algorithms, our preference for (2.18) is due to the symmetry of the past and future terms.

2.3.2.2 Deterministic Parameter Model

This last case is not pursued further, the reason being that the exact metric evaluation is cumbersome to express explicitly and does not offer any significant insight. Nevertheless, by utilizing the correspondence between the expectation and the maximization operator, meaningful suboptimal algorithms will be developed in the next chapter based on this subcase.
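The correspondence between the expectation and the maximization operator invoked above is, in the log domain, the relation between an exact log-sum and its dominant term: −log(e^{−x} + e^{−y}) = min(x, y) − log(1 + e^{−|x−y|}), so a min-based (MSM-style) combining is a sum-based (APP-style) combining with the bounded correction term dropped. A quick arithmetic check (values arbitrary):

```python
import math

def min_star(x, y):
    # exact completion turning MSM-style min combining into APP-style sum combining
    return min(x, y) - math.log1p(math.exp(-abs(x - y)))

pairs = [(1.0, 5.0), (-2.0, -1.9), (3.3, 3.3), (10.0, 0.5)]
exact = [-math.log(math.exp(-x) + math.exp(-y)) for x, y in pairs]
approx = [min_star(x, y) for x, y in pairs]
gap = [min(x, y) - m for (x, y), m in zip(pairs, approx)]  # the dropped correction
```

The correction is bounded by log 2, which is why plain min is a serviceable approximation when one hypothesis dominates.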

Chapter 3

Sub-Optimal (Practical) Adaptive SISO Algorithms

The exact evaluation of the soft metrics developed in Subsections 2.3.1 and 2.3.2, under either modeling assumption for the unknown parameter, involves likelihood updates on a forward and a backward tree and trellis, respectively, assisted by per-path filters, followed by binding of the past and future metrics. In view of this fact, any suboptimal algorithm for the case of parameter-first combining can be interpreted as the result of applying one or more of the following simplifications: (i) non-exhaustive tree search, (ii) non-Kalman (or non-RLS) parameter estimators, and (iii) suboptimal binding of the past and future metrics. Similarly, for the case of sequence-first combining, any suboptimal algorithm is the result of a simplifying assumption for the innovation terms, as well as of a simpler form for the parameter estimators and binding term in (2.18). In the following, this design space is partially explored.

3.1 Parameter-first Combining

3.1.1 Tree-search techniques

Regarding the tree search, many options are available to prune the sequence tree (e.g., from the hard-decision literature [AnMo84]). Breadth-first schemes seem to be the most appropriate for soft decisions, because completion of the sequence metrics is required. Indeed, the fact that breadth-first algorithms maintain a common front in the search process facilitates the combining task. One such algorithm is the VA, which maintains and updates, through the familiar ACS operations, a fixed number of paths in such a way that they are forced to have different recent paths.

The formulation of a practical algorithm for calculating APP(·) involves summation of the sequence metrics as well as tree pruning. An algorithm that combines these two tasks can be derived employing either the PSP principle [RaPoTz95] or, equivalently, the Decision Feedback (DF) assumption introduced in [SeFi95]. Given that a set of paths at the same depth is available, PS or ACS operations are performed for the metric updates, for APP or MSM soft metrics respectively. Similarly, an algorithm for evaluating the MSM(·) metric proceeds by extending and eliminating paths in the same way as in the hard-decision case [RaPoTz95]. The resulting algorithms, shown in Fig. 3.1, consist of forward and backward recursions similar to the ones performed in the classical SISO (or A-SISO in [BeDiMoPo98]). A KF (or RLS for deterministic modeling) parameter estimate is kept for every trellis state and updated in a PSP [RaPoTz95] fashion. The soft outputs for x_k and y_k are derived from the soft output of the transition t_k. The latter is computed as the product (sum) of the forward metric of the starting state s_k, the backward metric of the ending state s_{k+1}, and the binding term corresponding to t_k, while the completion is performed by summing (or minimizing) the corresponding transition metrics.

[Figure 3.1: Trellis-based practical SISO algorithm with multiple estimators. The observation drives trellis-based forward and backward PS/ACS units with per-state channel estimators (g̃_{k|k−1} and g̃^b_{k|k+1}), whose outputs enter a buffer and combiner producing APP(t_k)/MSM(t_k).]

At this point we emphasize once more that the trellis on which this algorithm operates is not tightly related to the FSM trellis. Its size is a design parameter that determines the amount of pruning in the forward and backward trees, and eventually, the complexity of the algorithm. Similarly, the decision delay D can be chosen independently. For a FL algorithm, a backward recursion of depth D is performed at each step, while in a FI algorithm a single forward and a single backward recursion over the entire observation record suffice.
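A sketch of breadth-first pruning in the spirit described above: all paths on the common front are extended, and only the best M survive (an M-algorithm-style list; the binary antipodal metric model and the list size are illustrative, not a specific receiver from the text):

```python
N0, M = 0.5, 4                 # noise variance and survivor-list size
z = [0.9, -1.1, 0.8, 1.2]      # observations for a binary antipodal toy model

def branch_metric(zk, xk):
    # negative log-likelihood up to constants
    return (zk - (2 * xk - 1)) ** 2 / N0

# each path is (metric, bit sequence); extend every survivor, keep the best M
paths = [(0.0, [])]
for zk in z:
    extended = [(m + branch_metric(zk, x), seq + [x])
                for m, seq in paths for x in (0, 1)]
    extended.sort(key=lambda p: p[0])
    paths = extended[:M]

best_metric, best_seq = paths[0]
```

Because all survivors sit at the same depth, their completed metrics can be combined (summed or minimized) directly, which is the property that makes breadth-first search attractive for soft outputs.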

3.1.2 Parameter estimate and binding term simplification

Any near-optimal receiver has to search over as many paths as possible for a given amount of resources, so it is desirable to reduce the complexity associated with the metric updates, and in particular the parameter estimates. One such simplification, for the case of deterministic parameters, can be achieved by approximating the information matrices used in the RLS with P̃_k = (1−λ) I_L and P̃^b_{k+1} = (1−λ) I_L, where L is the parameter vector size. The benefit

, as seen in (3.1), is threefold: (i) the RLS parameter update reduces to the LMS algorithm, so no matrix storage and update is required; (ii) the forward/backward likelihood recursions of (2.14a) and (2.14b) simplify considerably; and (iii) a simple and meaningful expression for the binding term b_d(·) in (2.13) is obtained:

    g̃_k = g̃_{k−1} + [ (1−λ) / (λ + L(1−λ)) ] y_k (z_k − y_k^T g̃_{k−1})    (3.1a)
    Λ(z_0^k, x_0^k) = λ Λ(z_0^{k−1}, x_0^{k−1}) + |z_k − y_k^T g̃_{k−1}|² / (λ + L(1−λ)) − N_0 log P(x_k)    (3.1b)
    Λ(z_{k+1}^n, x_{k+1}^n) = λ Λ(z_{k+2}^n, x_{k+2}^n) + |z_{k+1} − y_{k+1}^T g̃^b_{k+2}|² / (λ + L(1−λ)) − N_0 log P(x_{k+1})    (3.1c)
    b_d(g̃_k, g̃^b_{k+1}) = [ λ / (1−λ²) ] ||g̃_k − g̃^b_{k+1}||²    (3.1d)

The above equations provide additional insight on the role of the third term: if the forward and backward parameter estimates corresponding to a particular sequence are not consistent, a penalty is paid by means of increasing the sequence metric. Furthermore, this penalty is amplified when tracking slowly changing parameters (λ close to 1). Regarding complexity reduction in the case of probabilistic modeling, a reduced-complexity KF is conceivable, although such solutions are application specific [RoSi].
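The simplified update (3.1a) is an LMS recursion with step size (1−λ)/(λ + L(1−λ)). A real-valued two-tap sketch (the channel taps and data generation are illustrative) showing that it tracks the channel with no matrix storage:

```python
import random

random.seed(1)
lam, L = 0.95, 2
mu = (1 - lam) / (lam + L * (1 - lam))  # effective step size from (3.1a)
g_true = [0.8, -0.4]                    # hypothetical channel taps
g_hat = [0.0, 0.0]

for _ in range(400):
    yk = [random.choice((-1.0, 1.0)) for _ in range(L)]   # antipodal regressor
    zk = sum(gi * yi for gi, yi in zip(g_true, yk))       # noiseless observation
    err = zk - sum(gi * yi for gi, yi in zip(g_hat, yk))  # innovation
    g_hat = [gi + mu * yi * err for gi, yi in zip(g_hat, yk)]

max_err = max(abs(a - b) for a, b in zip(g_hat, g_true))
```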

3.1.3 Interpretation of existing algorithms

By dropping the backward recursion in (2.8) and (2.13), the forward-only algorithms proposed in the literature can be derived. To achieve the desired smoothing depth D, the forward algorithm is developed based on the super state ss_k = (t_{k−d}, …, t_{k−1}, s_k), where d is selected such that x_{k−D} is included in ss_{k+1}. This is exactly the approach followed in [ZhFiGe97]. More precisely, the algorithm in [ZhFiGe97] calculates APP_p(x_k) soft outputs in a FL configuration, using the T-algorithm [AnMo84] for path pruning and employing a KF for parameter estimation. A similar approach is followed in [AnPo97, AnPo98]: a forward-only recursion is considered to produce APP_d(x_k) and MSM_d(x_k) soft outputs for the special FL case of the delay being equal to the parameter vector size, with the VA used to prune the tree, and RLS parameter estimation.

3.2 Sequence-first Combining

3.2.1 Metric simplification

Starting from equations (2.19), suboptimal algorithms can be derived by employing a simplifying assumption for the innovation terms P(z_k | t_k, z_0^{k−1}) and P(z_{k+1} | t_{k+1}, z_{k+2}^n), which are in reality mixed-Gaussian density functions. The Gaussian approximation leads to an attractive algorithm, since only the state-conditioned/sequence-averaged forward (i.e., g̃_{k|k−1}(s_k) = E(g_k | s_k, z_0^{k−1})) and backward parameter one-step predictions, together with the corresponding covariances, need to be maintained and updated. Note that these estimates are only partially conditioned on the data sequence, through the state s_k (or, more generally, the super-state ss_k). Recursive update equations for these Partially Conditioned (PC) parameter estimates, first derived in [IlShGi94], are very similar to the KF recursions (see Appendix); thus we use the name PCKF. Furthermore, in the limiting case when the super-state represents the entire sequence, the innovation terms become precisely Gaussian and the PCKFs become the sequence-conditioned KFs; this is the exact scenario of the parameter-first combining in the GM case. Under the Gaussian assumption, the binding term in (2.18) can be evaluated off-line as well, resulting in a function similar to b_p(·) (see Appendix for details).
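The Gaussian assumption above replaces a mixed-Gaussian innovation density by a single Gaussian; the natural choice matches the mixture's first two moments. A small check that the moment-matched mean and variance agree with brute-force integration of a two-component mixture (weights and components illustrative):

```python
import math

# two-component Gaussian mixture: weights w, means m, variances v
w = [0.3, 0.7]
m = [-1.0, 0.5]
v = [0.2, 0.4]

# moment-matched single Gaussian
mean = sum(wi * mi for wi, mi in zip(w, m))
var = sum(wi * (vi + mi * mi) for wi, vi, mi in zip(w, v, m)) - mean ** 2

def mix_pdf(x):
    return sum(wi * math.exp(-(x - mi) ** 2 / (2 * vi)) / math.sqrt(2 * math.pi * vi)
               for wi, mi, vi in zip(w, m, v))

# brute-force moments by trapezoidal integration
h, lo, hi = 1e-3, -12.0, 12.0
n = int(round((hi - lo) / h))
m0 = m1 = m2 = 0.0
for i in range(n + 1):
    x = lo + i * h
    wt = h if 0 < i < n else h / 2.0
    p = mix_pdf(x) * wt
    m0 += p
    m1 += p * x
    m2 += p * x * x
num_mean = m1 / m0
num_var = m2 / m0 - num_mean ** 2
```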

3.2.2 Further parameter estimator simplification

In addition to the Gaussian approximation, a further simplification occurs under the assumption that the conditional means and covariances of the parameter are not functions of the states:

    E(g_k | s_k, z_0^{k−1}) ≈ E(g_k | z_0^{k−1}) = ĝ_{k|k−1}    (3.2)

This approximation, if valid, results in a desirable solution, since only a single forward and a single backward global estimator (averaged over the sequence) needs to be maintained and updated. Assuming that a probabilistic description is available for the transitions t_k (P'(t_k)), a recursion can be derived for ĝ_{k|k−1}. The recursion equations, summarized in the Appendix, closely resemble those of the KF. The intuitive justification

of this algorithm is that, since a probabilistic description of t_k, and consequently of y_k, exists, an average ŷ_k = Σ_{t_k} y_k P'(t_k) can be used in place of y_k in the KF recursions, thus resulting in what we refer to as an Average KF (AKF). The application of the AKF single-estimator idea is inhibited, since (i) the independence assumption is not valid, and (ii) an accurate P'(t_k) can only be derived from the observation z_0^k and is therefore tightly coupled with the estimation process. Both (i) and (ii) are alleviated by introducing a delayed (advanced) by d parameter estimate to evaluate the forward (backward) transition metric at time k, since by increasing the decision delay d, the accuracy of the approximation

    E(g_{k−d} | s_k, z_0^{k−d−1}) ≈ E(g_{k−d} | z_0^{k−d−1}) = ĝ_{k−d|k−d−1}    (3.3)

is improved. The resulting algorithm, which utilizes a d-lag (d-advanced) soft-decision-directed forward (backward) AKF, is depicted in Fig. 3.2 and proceeds as follows. The forward metrics at time k are updated as in (2.19), using the d-delayed parameter estimate ĝ_{k−d|k−d−1}. Starting at time k, a d-step non-adaptive backward recursion is performed, at the end of which a smoothed soft metric P(t_{k−d} | z_0^k) is obtained. The latter is now used in the AKF to update ĝ_{k−d|k−d−1}. A similar one-step backward/d-step forward recursion is required for the update of the backward quantities. Finally, by exploiting the correspondence between the expectation and maximization operators, a suboptimal algorithm can be derived for the rightmost leaf of the tree in Fig. 2.2.

[Figure 3.2: Trellis-based practical SISO algorithm with a single estimator. Forward and backward trellis-based PS/ACS units are aided by d-delayed/d-advanced channel estimators (ĝ_{k−d|k−d−1} and ĝ^b_{k+d|k+d+1}) driven by tentative hard/soft decisions, feeding a buffer and combiner that outputs APP_p(t_k)/MSM_d(t_k).]

3.2.3 Interpretation of existing algorithms

Starting from the adaptive SISO employing per-state PCKFs, operating on the super-trellis ss_k = t_{k−1} with d = 0, and by dropping the backward recursion, the algorithm described in [IlShGi94] is produced as a special case; the latter is a FL, forward-only version. Although the latter was not intended to provide soft decisions, the metric updates and parameter recursions (in the form of the PCKF) are precisely those developed therein. The SISO algorithm described in [BaCu98] can be regarded as a special case of the single-estimator (AKF) adaptive SISO presented earlier. This algorithm has a similar structure to the one described in the previous paragraph, aided by a single d-lag (d-advanced) hard-decision-directed forward (backward) RLS or LMS parameter estimator. Indeed, although the zero tentative decision delay eliminates the need for additional backward recursions, it seriously compromises the accuracy of the approximation in (3.3), motivating the non-zero delay d proposed herein. Similarly, the fixed-complexity algorithm resulting from the rightmost leaf of the design tree in Fig. 2.2 can be regarded as a forward/backward extension of the CA-MLSD receiver [Koba71, MaPr73, Unge74]. The latter is a modification of the VA that uses a d-delayed, hard-decision-directed, single external parameter estimate to update the metrics.

Chapter 4

TCM in Interleaved Frequency-Selective Fading Channels

In this chapter we consider a typical TDMA cellular transmission system, consisting of a memoryless source that feeds a rate-R convolutional code. The trellis-coded symbols u_n are interleaved using a size J × K block interleaver, pulse-shaped, transmitted through a frequency-selective fading channel, and observed in white noise. At the receiver the waveform is matched-filtered with the known pulse shape and sampled at the symbol rate. Although it is understood that optimal pre-processing involves fractionally spaced sampling, this issue will not be addressed (refer to [ChPo95] for a treatment of this topic); rather, we use a simplified symbol-spaced model to illustrate the concepts with a manageable amount of simulation effort. The equivalent discrete-time model for the above scenario consists of the convolution of the coded symbols with an (L+1)-tap FIR, time-varying channel:

    z_k = √E_s Σ_{n=0}^{L} g_{k,n} u_{k−n} + n_k    (4.1)

where E_s is the symbol energy, g_{k,n} is the nth tap of the channel at time k, u_k is the coded symbol, and n_k is white complex Gaussian noise with E{|n_k|²} = N_0. The source symbols and the channel taps are normalized to unit energy. Under the wide-sense stationary

the se ond-order statisti s of the symbol-sampled. experien e approximately independent fading. the depth is hosen su h that su essive oded symbols. lowpass equivalent fading pro ess an be expressed as  g = J (2 l)Æ (m) E fgk+l. Regarding the interleaver design. whi h are a tually transmitted J symbols apart.2) where d = fd Ts is the normalized Doppler spread of the hannel and Æ() is the Krone ker delta. while the width of the interleaver K is hosen to separate any LD +1 su essive symbols as far as possible.n 0 d (4. These design onstraints are met with J > 1=(2d) and > K 7L [Stub96℄.[Clar68℄ for the hannel dynami s.1 Re eiver Stru tures The s heme des ribed above an be modeled as a serial on atenation of two FSMs { the outer TCM en oder and the inner ISI hannel { through the interleaver.n+mgk. In [AnCh97℄. where LD is the de oding depth of the ode. 4. three re eiver types were identi.

ed for the case of perfect CSI. They included the traditional hard-decision Viterbi Equalizer¹ (VE) followed by a Viterbi Decoder (VD), as well as the more sophisticated iterative structure shown in Fig. 1.2. An adaptive receiver can be derived in a straightforward way from the non-adaptive version, by replacing the inner detector (i.e., the equalizer) with its adaptive equivalent, while leaving the outer detector (i.e., the decoder) intact. In the more traditional hard-decision scheme, the VE is replaced by either a CA-MLSD VE [Koba71, MaPr73, Unge74], or a PSP-based VE [LiLiPr92, Sesh94, KuMuFu94, RaPoTz95], while in a soft-decision iterative receiver, one of the adaptive SISOs proposed in Chapter 3 is used in place of the inner SISO. An additional distinction of the adaptive iterative receiver from the non-adaptive version proposed in [BeDiMoPo98] and depicted in Fig. 1.2 is that the concept of having a separate demodulator, i.e., a SOMAP (which is the soft inverse of the mapping from u_k to the noiseless channel output),

¹The term "equalizer" is only used to signify that the particular VA is associated with the inner FSM. We emphasize that this does not imply that linear or decision feedback equalization is taking place.

and SISO is not applicable. This is true because the a-priori soft information on the output symbols of the inner FSM cannot be evaluated, unless a parameter estimate is available. The channel estimate is only generated internally in the inner adaptive SISO, while the a-priori information about the output symbols is directly computed from the observation, thus making the merging of the above three blocks necessary: all three blocks need to be incorporated within the inner adaptive SISO. For example, in the case of the deterministic parameter model, the transition metric for the forward recursion corresponding to the inner FSM transition t_k, assuming no a-priori information about the corresponding input symbols u_k, is given by (2.14a) as

    |z_k − y_k^T h̃_{k−1}|² / (λ + y_k^T P̃_{k−1} y_k) − N_0 log P(u_k)    (4.3)

where y_k = √E_s [u_k, …, u_{k−L}]^T is the signal corresponding to the transition t_k, and h̃_{k−1} = [h̃_0, …, h̃_L]^T is the channel estimate corresponding to the starting state of t_k. The iterative procedure is initiated by activating the inner adaptive SISO. After processing the observation record, soft-output metrics, which result from normalizing the soft metrics defined

in (2.6) by the corresponding a-priori information, are produced for the symbols u_k. The soft metrics exchanged thus have the form of extrinsic information. These are deinterleaved and passed to the outer SISO, which generates soft outputs for both the source symbols x_k and the coded symbols u_k. The latter can now be fed back to the inner adaptive SISO and utilized as new a-priori information, resulting in an iterative equalization/decoding scheme [AnCh97, PiDiGl97], which terminates as soon as mature decisions are available for the source symbols x_k. Note that the iterative procedure results in an implicit parameter re-estimation as well. Indeed, the new a-priori information on the input symbols results in different transition metrics in the inner adaptive SISO, thus the corresponding parameter update equations are affected. This is clearly suggested in (2.7), (2.14a) and (2.19a) for the forward recursions. Although there are many possible adaptive SISOs arising from the previously derived framework, we utilize trellis-based algorithms and describe multiple- and single-estimator adaptive SISOs separately; we summarize the detailed operation of the algorithms in the Appendix. Within these classes, algorithms are defi

ned by the type of soft information (i.e., APP or MSM), the type of parameter estimation (e.g., KF, PCKF, AKF, LMS, RLS), and the type of binding. More specifically, several notes on the details of the implementation follow:

- APP algorithms operating in the log domain result, in general, in a small complexity increase compared to MSM, as reported in [BeDiMoPo98]. Indeed, all APP algorithms can be constructed from their MSM counterparts by replacing the min(x, y) function in the ACS operation by min(x, y) − log(1 + exp(−|x − y|)). The latter can be efficiently implemented with a single-entry lookup table and 3-bit quantization, resulting in no notable performance degradation [BeDiMoPo98].

- Regarding the particular channel estimator used, the complexity increases in the order LMS, AKF, RLS, KF, PCKF, with the KF and the AKF having almost equal complexity.

- Optimal binding is, in general, a costly operation, as shown in (2.9), (A.8) and (A.12), while the suboptimal binding proposed in (3.1d) results in a small increase in the adaptive SISO complexity.

- Trellis-based multiple-estimator structures store and update one estimator per state with zero delay, while single-estimator schemes require d backward steps, for every forward step, to provide reliable tentative soft or hard data estimates to update their single estimator.

- Forward-only algorithms have significantly lower requirements in computation and memory than forward/backward algorithms with the same number of states, since they do not require the additional backward recursion and binding. As was discussed, however, the exponential coupling of complexity and smoothing depth D is expected to give rise to much higher overall requirements for forward-only algorithms, if the performance of forward/backward algorithms is to be obtained.

- The trade-offs between FI and FL schemes, namely complexity vs. memory, are qualitatively the same as in the non-adaptive SISOs. The differences are amplified, however, due to the fact that channel-related parameters need to be stored and updated in the adaptive SISOs, whereas only the forward/backward metrics are stored and updated in the perfect CSI case.

4.2 Numerical Results and Discussion

Simulations were run for a transmission scheme comparable to GSM [Stee92]. The convolutionally encoded sequence is interleaved using a 57 × 30 block interleaver. Each interleaver column is formatted into a TDMA burst together with a training sequence, equally split in 13 leading and 13 trailing symbols. Each burst is modulated and sent over a 3-tap equal-power Rayleigh fading channel with normalized Doppler spread λ_d = 0.005. Although the decorrelation time of such a channel is much larger than 57 symbols, a smaller interleaver depth is used, for the purpose of simulation efficiency, in conjunction with the assumption of a burst-to-burst independent channel. The basic parameters of the three simulated systems are summarized in Table 4.1.

Table 4.1: Summary of System Parameters

       Signaling   R        Constraint Length   Channel length
  S1   QPSK        1/2      5                   3
  S2   8PSK        2/3      6                   3
  S3   QPSK        uncoded  -                   3

Regarding the naming of the presented algorithms, each algorithm is identifi

Each part of the label denotes:

- the type of the soft decision (i.e., APP or MSM),
- the multiplicity of the channel estimators (i.e., SING or MULT),
- the particular channel estimator used (i.e., LMS, RLS, KF, AKF),
- the binding method (i.e., Optimal, Suboptimal or No Binding).

Figure 4.1 presents performance curves for system S1, employing the iterative receiver described in the previous section with different adaptive SISOs for the inner equalizer. BER curves for the first and fifth iteration are shown; no significant improvement was observed for more than five iterations.
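The burst formatting described above can be illustrated with a small sketch. The helper names are hypothetical; the 57 x 30 dimensions follow the text (write row-wise, read each column out as one burst payload).

```python
def block_interleave(symbols, rows=57, cols=30):
    """Row-wise write / column-wise read block interleaver; each read-out
    column supplies the 57-symbol payload of one TDMA burst."""
    assert len(symbols) == rows * cols
    grid = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    return [grid[r][c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows=57, cols=30):
    """Inverse operation: refill the grid column-wise, read it row-wise."""
    grid = {}
    it = iter(symbols)
    for c in range(cols):
        for r in range(rows):
            grid[(r, c)] = next(it)
    return [grid[(r, c)] for r in range(rows) for c in range(cols)]
```

Consecutive coded symbols thus land in different bursts, which is what justifies the burst-to-burst independence assumption used for simulation efficiency.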

Figure 4.1: BER vs. Eb/N0 for system S1 and various configurations for the adaptive inner SISO. (Curves shown: Perfect CSI, MSM-MULT-KF-OB, MSM-MULT-LMS-SB, APP-SING-AKF-OB (d=3), MSM-SING-LMS-SB (d=3), MSM-MULT-LMS-NB; 1st and 5th iteration.)

For the adaptive SISOs employing the KF or AKF, the channel estimators were obtained by approximating the Clarke spectrum [Clar68] with a first-order model having 10 dB bandwidth equal to nu_d. Comparing the two curves corresponding to MSM-MULT-LMS, a loss of 2 dB (1 dB) is observed for the 5th (1st) iteration when no binding is performed. This outcome clearly indicates the significant practical - aside from the conceptual - value of the binding term. The comparison between MSM-MULT-LMS-SB and MSM-MULT-KF-OB shows that LMS channel estimation with suboptimal binding is nearly as good as the KF with optimal - and computationally expensive - binding. In the first iteration the latter performs slightly better (by 0.7 dB at BER = 10^-3), while in the fifth iteration no notable difference is observed. Multiple-estimator schemes are shown to be 2 to 4 dB better than their single-estimator counterparts in the first iteration, while this gain is decreased to 0.5 to 2 dB after the fifth iteration.
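The first-order approximation of the Clarke spectrum mentioned above can be sketched as a Gauss-Markov fading tap. This is an illustrative model only: the pole `a` is a free parameter here, whereas the text matches the model's 10 dB bandwidth to nu_d.

```python
import math
import random

def ar1_fading(n, a, sigma2=1.0, seed=0):
    """First-order (Gauss-Markov) Rayleigh tap: g[k+1] = a*g[k] + w[k],
    with white complex Gaussian w[k] scaled so that E|g[k]|^2 = sigma2."""
    rng = random.Random(seed)
    q = math.sqrt(sigma2 * (1.0 - a * a) / 2.0)   # per-dimension innovation std
    g = complex(rng.gauss(0.0, math.sqrt(sigma2 / 2.0)),
                rng.gauss(0.0, math.sqrt(sigma2 / 2.0)))  # stationary start
    out = []
    for _ in range(n):
        out.append(g)
        g = a * g + complex(rng.gauss(0.0, q), rng.gauss(0.0, q))
    return out
```

A process of this form is what makes the KF/AKF channel estimators tractable: the state equation is linear with known statistics.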

This can be observed from the comparison of MSM-MULT-LMS-SB and MSM-MULT-KF-OB with MSM-SING-LMS-SB or APP-SING-AKF-OB. Note that the optimal value of the tentative delay was found to be d = 3 for both SING estimators. The best adaptive SISO achieves performance that is just 1 dB away from that of perfect CSI. Regarding the iteration gain, as much as 6 to 7 dB can be gained using 5 iterations, for both single- and multiple-estimator SISOs. This result stands in direct antithesis with the perfect CSI case, where an iteration gain of only 1 dB does not even justify the need for iterative detection, a fact which was noted in [AnCh97, AnCh98] for the case of CSI as well. Finally, simulation results that are not shown here confirm the negligible difference between APP and MSM algorithms at these operational SNRs.

In Fig. 4.2, the performance of the MSM-MULT-LMS-SB receiver of Fig. 4.1 is compared with that of the corresponding receiver employing a forward-only adaptive SISO (as the one in [ZhFiGe97]) with decision delays D = 3, 4 and 5 symbols. Other than the different inner adaptive SISOs, all other components of the compared receivers are identical. As expected, performance is improved by increasing the smoothing depth D, but this gives rise to exponential complexity growth. The comparison with the proposed SISO shows that even relative to a high-complexity forward-only algorithm (D = 5 corresponds to a 1024-state trellis), a performance gain of 1 to 1.5 dB can be achieved with the FI adaptive SISO at only a fraction of the complexity (a forward and a backward recursion on a 16-state trellis is required).
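The exponential complexity growth just mentioned can be made concrete. The helper below is a sketch: with an M-ary alphabet, spanning a decision delay of d symbols requires M^d trellis states, which reproduces the 1024-state figure quoted for QPSK with D = 5.

```python
def forward_only_states(m, d):
    """Trellis states a forward-only adaptive algorithm needs to span a
    decision delay of d symbols over an M-ary alphabet: M**d.
    QPSK (m=4) with d=5 gives the 1024-state trellis cited in the text."""
    return m ** d
```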

Figure 4.2: Comparison between forward/backward and forward-only inner adaptive SISOs for system S1, for various values of the decision lag D. (Curves shown: Perfect CSI, MSM-MULT-LMS-SB, Forward only (d=3), Forward only (d=4), Forward only (d=5); 1st and 5th iteration.)

Similar performance curves are reproduced in Fig. 4.3 for system S2, consisting of a rate 2/3, 32-state 8PSK TCM code over the same channel as in the previous simulation. The presence of the denser 8PSK constellation produces quantitatively different performance curves: single-estimator schemes reach an error floor at BER values greater than 10^-2, regardless of the channel estimator used (i.e., LMS or AKF). Multiple-estimator algorithms using either KF and OB or LMS and SB perform almost identically at BERs smaller than 10^-2. Both of these adaptive algorithms yield much worse performance compared to perfect CSI (the loss is on the order of 5 dB at the fifth iteration for the best adaptive SISO at a BER of 10^-3, while it is reduced to approximately 3 dB for a BER of 10^-5).

Figure 4.3: BER vs. Eb/N0 for system S2 and various configurations for the adaptive inner SISO. (Curves shown: Perfect CSI, MSM-MULT-KF-OB, MSM-MULT-LMS-SB, APP-SING-AKF-OB (d=3), MSM-SING-LMS-SB (d=3), MSM-MULT-LMS-NB; 1st and 5th iteration.)

Coded modulation techniques have been considered as a method to provide improved performance (i.e., coding gain) with the only cost being increased receiver complexity (i.e., no bandwidth expansion). In [AnCh98], those trade-offs were studied under the perfect CSI assumption. Figure 4.4 presents a comparison between the uncoded QPSK system (S3) and the 8PSK TCM system (S2), both having the same throughput and occupying the same bandwidth. The design trade-offs for this frequency-selective channel are more complex than those for an ideal AWGN channel. On the AWGN channel, the coded system provides a 4.6 dB gain over the uncoded one. Similar to [AnCh97, AnCh98], conclusions are obtained for the case of perfect CSI: coding gain without bandwidth expansion is not possible using hard-decision receivers. The utilization of soft-decision receivers results in a 4 dB coding gain at a BER of 10^-3 for the first iteration. Additional iterations slightly improve the performance, resulting in a 5.5 dB gain at the fifth iteration.

Figure 4.4: BER vs. Eb/N0 for systems S2 and S3 employing hard-decision and soft-decision decoding. (Curves shown: Perfect CSI, Adaptive, Uncoded QPSK, TCM 8PSK (Hard), TCM 8PSK (Soft); 1st and 5th iteration.)

When perfect CSI is not available and adaptive processing is performed, the hard-decision PSP receiver still cannot provide any performance improvement over the uncoded system. On the other hand, the adaptive soft-decision algorithm provides a poor coding gain when only a single iteration is performed (i.e., 3.5 dB). Furthermore, the use of iterative soft-decision adaptive processing results in a gain of approximately 13 dB.

4.3 Factors Impacting Performance

The conclusions drawn in the previous section are tightly coupled with the particular channel conditions and system configuration. These conclusions can be significantly altered when different operating conditions are considered. One channel characteristic that has a significant effect on receiver design is the level of dynamics. While high dynamics were considered here, in the case of low dynamics an initial channel estimate may suffice for use in conjunction with a non-adaptive iterative detector, so the need for adaptive processing is questionable. Similar conclusions have been drawn for adaptive hard-decision algorithms [Tzou93]. The signaling format, and in particular the configuration of the training sequence, is another system characteristic that has a great impact on receiver design.

When only a leading training sequence is available, the adaptive SISOs have to be modified. The forward channel estimates are still initialized as before, while for the backward estimates several options can be considered: (i) blind startup, (ii) startup with the initial forward estimate (possibly predicted forward in time using any available channel statistics), or (iii) initialization with the most recent forward estimate. Although the first method is the one suggested by the theory, it implies that the adaptive SISO will operate in the acquisition mode, instead of the tracking mode, during the backward recursion. This is an undesirable situation, resulting in significant performance degradation. Similar remarks (in particular (i) and (iii)) hold regarding the modification of a FI towards a FL adaptive SISO.

Regarding tracking vs. acquisition operating modes, a relevant measure is the product of the payload size and the normalized Doppler spread of the channel, P = J * nu_d; the smaller the value of P, the lower the probability of losing lock. For systems operating with small P values and utilizing leading/trailing training, however, a low-complexity non-adaptive SISO algorithm can be utilized. The channel estimates are derived by linear interpolation between the initial and final channel estimates; in this case, a forward matrix recursion can effectively substitute the backward recursion [LiVuSa95], thus alleviating the need for backward parameter initialization.

The latter is demonstrated in Fig. 4.5, where the performance of this scheme is compared with that of MSM-MULT-LMS-SB for system S1. It is shown that the interpolator-based non-adaptive SISO operates with 1 dB degradation compared to MSM-MULT-LMS-SB at a BER of 10^-3. Unfortunately, such a high-performance/low-complexity SISO is not feasible when either a trailing training sequence is unavailable or the value of P is increased. This is also shown in Fig. 4.5, where the doubling of the payload size results in catastrophic performance for the interpolator-based SISO.

Figure 4.5: BER vs. Eb/N0 for the receiver employing adaptive and non-adaptive (using interpolated channel estimates) inner SISOs for different payload sizes. (Curves shown: Perfect CSI (57), MSM-MULT-LMS-SB (57), INTERPOLATOR (57), MSM-MULT-LMS-SB (120), INTERPOLATOR (120).)
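The interpolator-based channel estimates described above reduce to a one-line computation per symbol. The helper below is a hypothetical sketch, assuming one training-based estimate of the tap vector at each burst edge and a payload of at least two symbols.

```python
def interpolated_channel(g_initial, g_final, payload_len):
    """Linearly interpolate each complex channel tap between the estimate from
    the leading training sequence and the one from the trailing sequence."""
    est = []
    for k in range(payload_len):
        t = k / (payload_len - 1)  # 0 at the first payload symbol, 1 at the last
        est.append([(1 - t) * gi + t * gf for gi, gf in zip(g_initial, g_final)])
    return est
```

The scheme degrades as the payload (and hence P) grows, because the true channel decorrelates from the straight line between the two endpoint estimates.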

Chapter 5

Concatenated Convolutional Codes with Carrier Phase Tracking

The effectiveness of the adaptive iterative detection algorithm can be assessed by a number of factors: loss-of-lock probability, tracking bandwidth and BER in the tracking mode are all relevant performance measures.

5.1 SCCC with Carrier Phase Tracking

SCCCs were introduced in [BeMoDiPo98] as an alternative to the original turbo codes, which were PCCCs [BeGlTh93]. The sequence of source bits b_n is partitioned into blocks and convolutionally encoded using a rate Ro outer CC. These symbols are fed to an inner CC of rate Ri through a pseudorandom symbol interleaver(1) of length N, producing N coded symbols u_n. The output symbols are mapped onto a constellation of size Q, resulting in an overall code rate of R = Ro Ri log2(Q) (bits per channel use). The complex symbols q_k, normalized to unit energy, are transmitted over an AWGN channel, which introduces a phase offset theta_k as well:

    z_k = sqrt(Es) q_k e^{j theta_k} + n_k        (5.1)

where n_k is white complex Gaussian noise with E{|n_k|^2} = N0 and Es is the symbol energy. Initial experiments suggested that cycle slipping was a major performance-limiting factor.

(1) In [BeDiMoPo98] it was shown that bit interleaving yields better performance with a slightly more complicated decoder structure.

This is because the operating SNR is very low and the block length (interleaver size) is large. In the development of advanced adaptive receivers for this system, we consider the insertion of pilot symbols. In particular, Nt pilot symbols are inserted into the transmitted sequence for every Nd coded symbols. The energy lost in the redundant pilot symbols is accounted for by lowering the transmitted symbol energy as

    Es = Rt R Eb = (Nd / (Nd + Nt)) Ro Ri log2(Q) Eb        (5.2)

where Eb is the energy per information bit. It is desirable to view the pilot symbols as part of the inner code by introducing a time-varying mapping function. More precisely, referring to the inner FSM, the mapping function f(.) of (2.5) is defined as follows: for each time k that is not a multiple of Nd + Nt, the regular mapping of the coded symbols to the Q-ary complex constellation is used; when k is a multiple of Nd + Nt, the corresponding Q-ary symbol is transmitted together with the Nt known pilot symbols.
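The energy bookkeeping of (5.2) reduces to a couple of one-liners. A sketch, with hypothetical helper names:

```python
import math

def symbol_energy(eb, r_code, q, nd, nt):
    """(5.2): Rt = Nd/(Nd+Nt) is the fraction of symbols carrying data,
    so Es = Rt * R * log2(Q) * Eb with R the aggregate code rate Ro*Ri."""
    r_t = nd / (nd + nt)
    return r_t * r_code * math.log2(q) * eb

def training_loss_db(nd, nt):
    """SNR penalty paid for the pilots: -10 log10(Rt)."""
    return -10.0 * math.log10(nd / (nd + nt))
```

For the (Nt, Nd) = (32, 256) configuration used later in this chapter, `training_loss_db` gives the 0.51 dB loss quoted in the text.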

5.1.1 Receivers

The structure of an SCCC is that of a serial concatenation of two FSMs through an interleaver, and it therefore permits an iterative receiver of the standard form for the case of perfect CSI. A simple adaptive decoder, shown in Fig. 5.1, is derived based on the idea introduced in [LuWi98] for PCCCs. It consists of a single decision-directed Phase-Locked Loop (PLL), which uses decisions on the raw output symbols q_k, as well as the pilot symbols, to obtain a phase estimate and consequently derotate the observation. A standard iterative decoder is then employed on the derotated observation - after discarding the pilot symbols - to produce final decisions on the source bits. Note that in such a segregated system the channel estimator (PLL) does not use any information on the structure of the output sequence {q_k} due to the underlying code, and it is run only once, prior to the initial iteration.

Figure 5.1: Adaptive iterative detector for SCCC with a single external PLL and a standard, non-adaptive iterative decoder. (Blocks: Outer FSM -> interleaver -> Inner FSM -> Mapper -> phase rotation e^{j theta} + noise; Decision-Directed PLL -> derotated observation -> Inner SISO <-> SOMAP <-> interleaver/deinterleaver <-> Outer SISO.)

The main obstacle in deriving a more sophisticated receiver by replacing the inner SISO with its adaptive counterpart is that the observation is not a linear function of the unknown parameter theta. There are at least two ways to circumvent this difficulty. One approach is to modify the observation equation to obtain a linear model as in

    z_k = sqrt(Es) q_k g_k + n_k        (5.3)

where g_k is a complex amplitude parameter process having either a stochastic or a deterministic description.

With this modification, instead of estimating the physical parameter theta, the complex amplitude g_k is estimated. Regarding the channel estimator, an Extended KF (EKF) or a PLL can be used, for a stochastic or a deterministic parameter model, respectively. Another approach is to maintain the nonlinear observation equation and replace the channel estimators in the adaptive SISOs with some non-linear estimator. In the following, the latter approach is considered. For example, a simple first-order PLL is used in place of the RLS (or LMS) algorithm to update the phase estimate:

    theta_{k+1} = theta_k + alpha Im{ z_k q_k* e^{-j theta_k} }        (5.4)

having noise-equivalent bandwidth (normalized to the symbol time) Beq = alpha / (4 - 2 alpha). The binding term b_d(.) is approximated by

    b_d(theta_f_k, theta_b_k) = (1 / (2 sigma^2)) | e^{j theta_f_k} - e^{j theta_b_k} |^2        (5.5)

This approximation is based on (3.1d), substituting the channel covariance accordingly and interpreting e^{j theta} as a single channel tap g_k.
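The loop update (5.4), its bandwidth, and the suboptimal binding (5.5) can be sketched directly. Note the bandwidth expression alpha/(4 - 2 alpha) is reconstructed from a damaged formula in the source, so treat it as an assumption of this sketch.

```python
import cmath

def pll_step(theta, z, q, alpha):
    """One update of the first-order decision-directed PLL of (5.4):
    theta' = theta + alpha * Im{ z * conj(q) * e^{-j theta} }."""
    return theta + alpha * (z * q.conjugate() * cmath.exp(-1j * theta)).imag

def loop_bandwidth(alpha):
    """Noise-equivalent bandwidth (normalized to the symbol time) of the
    first-order loop: Beq = alpha / (4 - 2*alpha)."""
    return alpha / (4.0 - 2.0 * alpha)

def binding(theta_f, theta_b, sigma2):
    """Suboptimal binding of (5.5): treat e^{j theta} as a single channel tap,
    giving b = |e^{j theta_f} - e^{j theta_b}|^2 / (2 sigma2)."""
    return abs(cmath.exp(1j * theta_f) - cmath.exp(1j * theta_b)) ** 2 / (2.0 * sigma2)
```

For a small phase error and correct symbol decision, `pll_step` nudges the estimate by roughly alpha times the sine of the residual error, which is the expected S-curve behavior near lock.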

The phase process is modeled as a random walk, as in [AnMeVi94]:

    theta_k = theta_{k-1} + Delta_k        (5.6)

where Delta_k is a Gaussian increment of zero mean and variance sigma_Delta^2. Only APP-type SISOs are considered here, since, as mentioned in the previous section, the SNR loss of 0.5 dB to 0.7 dB experienced by the MSM-type algorithms [BeMoDiPo98] is crucial in this application. In view of these facts, only deterministic modeling is considered for the unknown parameter. Consequently, the receivers consisting of the inner adaptive SISOs described in Chapter 3 will be labeled SING/MULT-SB/NB, corresponding to single or multiple PLLs with the suboptimal binding of (5.5) or no binding, respectively, while the baseline algorithm, consisting of a single external PLL operating on the raw 8PSK symbols, will be labeled EXT (i.e., external PLL).

5.1.2 Numerical Results

The SCCC system presented in [BeDiMoPo98] is simulated in this section. It consists of an outer 4-state, rate 1/2 RSC connected through a length N = 16384 symbol pseudo-random interleaver to an inner 4-state, rate 2/3 RSC, resulting in an overall code rate R = 1/2 * 2/3 * log2(8) = 1. The corresponding generator matrices are given by

    Go(D) = [ 1   (1+D^2)/(1+D+D^2) ]

    Gi(D) = [ 1  0  (1+D^2)/(1+D+D^2) ]
            [ 0  1  (1+D)/(1+D+D^2)  ]

The output symbols are mapped to an 8PSK constellation with Gray encoding. In all simulations presented here, the initial and final phase estimates are assumed ideal.
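The random-walk phase process of (5.6) is straightforward to generate for simulation purposes. A sketch (the 2-degree increment used in the test mirrors the sigma_Delta value quoted later in the chapter):

```python
import random

def random_walk_phase(n, sigma, seed=0):
    """(5.6): theta[k] = theta[k-1] + Delta[k], with Delta[k] a zero-mean
    Gaussian increment of standard deviation sigma (radians)."""
    rng = random.Random(seed)
    theta = [0.0]
    for _ in range(n - 1):
        theta.append(theta[-1] + rng.gauss(0.0, sigma))
    return theta
```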

Consequently, for a fair comparison between the External PLL receiver and the proposed receiver structures, a forward PLL starting at the beginning of the block is used to derotate the first half of the observation, while a backward PLL starting at the end of the block is used for the second half. With such a scheme, the knowledge of both the initial and the final phase is utilized by the External PLL receiver. Note that interpolation between phase estimates obtained using the Nd-separated pilot symbols was found to perform poorly under all operational scenarios of interest.

In Fig. 5.2 the BER is plotted versus the loop bandwidth Beq for the case of a static true phase process (sigma_Delta = 0). Receivers employing the External PLL, as well as inner adaptive SISOs, are considered. In each case, curves for three Eb/N0 values are presented, and curves corresponding to trained and non-trained systems are shown. The comparison of the EXT and MULT-SB curves leads to different conclusions depending on the bandwidth range. In the low loop-bandwidth range (Beq <= 10^-3) the two receivers perform almost identically, approaching the perfect CSI performance; thus the External PLL receiver suffices. For medium and high loop bandwidths (Beq > 10^-3), a clear advantage of the MULT-SB adaptive SISO over the EXT receiver can be observed. A large value of Beq implies the ability to track larger phase dynamics; in particular, the simulations show that with the proposed algorithm the PLL bandwidth can be increased two to three times.

Figure 5.2: BER vs. loop bandwidth for the SCCC with static phase. (Curves shown: EXT and MULT-SB, with no training and with (32, 256) training, for Eb/N0 = 1, 1.5 and 2.0 dB.)

Regarding the comparison between trained and non-trained systems, the basic trade-off is controlled by the parameter Nt (for fixed Nd): by increasing Nt, better tracking is possible, while the symbol energy Es is reduced, as reflected in (5.2). In the one extreme, no training is introduced (Nt = 0), resulting in a high probability of cycle slipping at moderate phase dynamics. In the other extreme (Rt < 1), the SNR loss nullifies any performance gain due to the improved phase estimate. Two practical cases are shown in Fig. 5.2: no training, and (Nt, Nd) = (32, 256) training. At low Eb/N0 (i.e., 1 dB) the non-trained system is superior, since training introduces an energy loss of -10 log10(Rt) = 0.51 dB, reducing the effective Es/N0 to 1 - 0.51 = 0.49 dB, which results in poor performance even in the coherent case. At medium Eb/N0 (i.e., 1.5 dB) the trade-off is reversed. This behavior is attributed to the fact that the trained system is able to maintain phase lock over wider loop bandwidths, generating a two- to three-fold advantage of the trained system over the untrained one in terms of Beq. Finally, at large Eb/N0 values (i.e., 2 dB), the superiority of the trained system is even more evident, giving rise to as much as a five- to seven-fold increase in Beq while achieving even lower BER.

The above comparisons raise the issue of the proper selection of the system parameters (Nt, Nd). Regarding the selection of Nd, it should be smaller than the average time-to-slip, or else the performance will be dominated by cycle slips. Our design procedure is initiated by setting a target BER and Beq region. A search procedure is then followed, in the course of which Eb/N0 and Nt are gradually increased until the target (BER, Beq) pair is reached. Figure 5.3 shows a comparison of the SCCC system, in the more realistic scenario that includes phase dynamics, with the industry-standard rate 1/2, constraint length 7 CC. The CC output is mapped onto a QPSK alphabet, resulting in a rate R = 1 (bits per channel use) code (no pilot symbols are used). MLSD with the aid of a VA is performed in the coherent case, while two adaptive receiver structures are considered.

The first is the Conventional Adaptive MLSD (CA-MLSD) receiver of [Koba71, MaPr73, Unge74], consisting of a single PLL driven by delayed tentative decisions from the VA, and the second is a PSP-based [LiLiPr92, KuMuFu94, Sesh94, RaPoTz95] receiver consisting of a VA with 128 PLLs driven by zero-delay decisions. Simulations were run for sigma_Delta = 2 degrees.

Figure 5.3: BER vs. Eb/N0 for SCCC with phase dynamics and various inner adaptive SISO configurations (the optimal performance for SING receivers was achieved for d = 0). For comparison, the performance of the CC with adaptive hard-decision detection is presented. (Curves shown: CC with Perfect CSI, PSP, CA-MLSD; SCCC with Perfect CSI, EXT, MULT-SB, SING-SB, SING-NB.)

Beq was optimized for each Eb/N0 value. The design procedure outlined in the previous paragraphs was followed for the selection of Nt in the SCCC case; simulation trials not shown here suggested that a reasonable pair is (Nt, Nd) = (16, 256) for a target BER of 10^-5 and the mentioned phase dynamics. Examining the CC performance curves, the following observations can be made. With perfect CSI, a BER of 10^-5 is achieved at Eb/N0 = 3.75 dB. The PSP-based receiver operates at this BER with a loss of 0.4 dB, while the CA-MLSD receiver performs poorly, resulting in a BER of 10^-2 at 4 dB. Observing the SCCC curves in Fig. 5.3, we conclude that MULT-SB and SING-SB (d = 0) perform identically (0.5 dB away from the coherent case). Therefore, there is no notable gain from using four PLLs instead of one, which may be attributed to the fact that the inner code corresponds to an FSM of only 4 states. This is to be contrasted with the 128-state CC case, where a large difference between the CA-MLSD and PSP-based decoders is observed. No binding results in a loss of 0.25 dB. Under perfect CSI, the SCCC performs with a 2.6 dB gain over the standard CC. This gain vanishes when a PSP-based MLSD receiver is used to decode the CC and the EXT receiver is used for the SCCC. By utilizing the more advanced adaptive SISOs, together with pilot symbols, the corresponding gain is increased to 3 dB. The comparison of the CC and SCCC curves clearly illustrates the importance of adaptive SISOs.

5.2 PCCC with Carrier Phase Tracking

In a PCCC [BeGlTh93], a length N block of the original sequence {b_k} is encoded by a rate R1 CC, while an interleaved version of the input sequence is encoded by a second CC of rate R2, giving rise to the coded symbols u_k^(1) and u_k^(2). For reasons that will become clear in the next subsection, the observation equation is written as

    z_k = sqrt(Es) m_k(u_k^(1), u_k^(2)) e^{j theta_k} + n_k = sqrt(Es) q_k e^{j theta_k} + n_k        (5.7)
The output symbols u_k^(1) and u_k^(2) are thus mapped - after possible puncturing - to the symbols q_k and transmitted over an AWGN channel which introduces phase uncertainty, modeled exactly as in the case of SCCCs.

In (5.7), the time-varying mapping q_k = m_k(u_k^(1), u_k^(2)) is explicitly shown. Pilot symbols are inserted into the transmitted sequence in the same manner described for SCCCs.

5.2.1 Receivers

Since PCCCs can be modeled as parallel concatenated FSMs, the standard iterative decoder structure can be applied when perfect CSI is present. The adaptive receiver proposed in [LuWi98], consisting of an external decision-directed PLL operating on the raw symbols q_k followed by a non-adaptive turbo decoder, is a potential solution when knowledge of the phase offset is not available at the receiver. As in the case of SCCCs, the non-linear observation model presents a hurdle to the direct application of the adaptive SISO algorithms derived in Chapter 3. Furthermore, in contrast to the serially concatenated examples considered earlier, the PCCC has the property that the outputs of both FSMs are directly affected by the channel: the outputs of the constituent FSMs are coupled via the non-linear mapping (5.7). This makes the substitution of the perfect CSI SISO by an adaptive SISO insufficient for performing adaptive iterative detection in this case. Adaptive iterative detection for this PCCC application requires a method for evaluating transition metrics and updating phase estimates for each SISO; nevertheless, PLLs can again be utilized to address this issue. In the following we discuss the options for doing so and demonstrate one specific approach.

Metric Evaluation: Metric evaluation in SISO1 can be performed by treating the output symbols corresponding to FSM2 as nuisance parameters and either averaging or maximizing over them. Since APP soft metrics are proven to be superior to MSM ones (for this particular application in the perfect CSI case), averaging over the output symbols of FSM2 seems to be the preferable choice. For example, for the case of the deterministic parameter model, the transition metric for the forward recursion in SISO1 is evaluated as

    log SUM_{u_k^(2)} exp( -(1/N0) | z_k - sqrt(Es) m_k(u_k^(1), u_k^(2)) e^{j theta_k} |^2 ) P(b_k) P(u_k^(2))        (5.8)
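A direct transcription of the averaged metric (5.8) is short. The helper names below (`mapping`, `p_u2`, the scalar priors) are assumptions of this sketch, not the dissertation's code.

```python
import cmath
import math

def siso1_metric(z, u1, theta, es, n0, p_b, p_u2, mapping):
    """Forward transition metric of (5.8) for SISO1: average the Gaussian
    metric over the nuisance symbol u2 of the other FSM, weighting each
    hypothesis by the most recent soft information P(u2) from SISO2.
    mapping(u1, u2) plays the role of the time-varying mapper m_k(.,.)."""
    acc = 0.0
    for u2, p2 in enumerate(p_u2):
        q = mapping(u1, u2)
        err = z - math.sqrt(es) * q * cmath.exp(1j * theta)
        acc += math.exp(-abs(err) ** 2 / n0) * p2
    return math.log(acc * p_b)
```

Replacing the sum with a max over `u2` would yield the MSM-style variant mentioned in the text.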

where the theta-related fraction of (2.14a) was dropped for simplicity. A reasonable choice for the probability P(u_k^(2)) is to use the most recent soft metrics produced by SISO2. This solution is both simple to implement and compatible with the notion that SISO blocks exchange information only in the form of soft metrics. A similar procedure can be followed for the evaluation of the transition metrics of SISO2.

Parameter Estimate Update: Several options are considered for updating the phase estimate in SISO1. Starting from the simplest solution, the channel update is performed only at those time instants k for which the symbol q_k is a function of u_k^(1) alone. This situation arises when u_k^(2) is periodically punctured. For the presentation, consider the case where u_k^(1) in {0, 1, 2, 3} is a quaternary symbol, u_k^(2) in {0, 1} is a binary symbol, and q_k belongs to a QPSK signal constellation. This signaling format, which was the basis for the original turbo code, can be achieved by alternate puncturing:

    q_k = m_k(u_k^(1), u_k^(2)) = QPSK(u_k^(1))                              if k is even
                                  QPSK(2 floor(u_k^(1)/2) + u_k^(2))         if k is odd        (5.9)

where QPSK(.) maps the quaternary symbols to the two-dimensional signal constellation (e.g., using Gray mapping). The resulting updates for this punctured PLL become

    theta_{k+1} = theta_k + alpha Im{ z_k [QPSK(u_k^(1))]* e^{-j theta_k} }   if k is even
                  theta_k                                                     if k is odd        (5.10)

The immediate consequence of this sort of channel update is a loss of the full tracking ability of the estimator (i.e., the effective loop bandwidth is halved).

In a more refined technique, the channel estimator - and in particular the PLL (or PLLs) - is updated at every time instant k, as is commonly suggested for non-punctured codes (this is also true in the previous example when considering phase estimation for SISO2). As in the previous case, u_k^(1) is determined by the state transition of SISO1, while an estimate u-hat_k^(2) of u_k^(2) is determined by hard-quantizing the most recent soft information on u_k^(2), available either from SISO2 or

from any other soft block in the adaptive receiver. Such a PLL operates in a decision-directed mode with respect to the symbol u_k^(1), while it effectively averages out the symbol u_k^(2) (a simple PLL structure that operates by averaging equiprobable binary symbols has been proposed in [LiSi73]). The resulting updates for this parallel decision-directed PLL become

    theta_{k+1} = theta_k + alpha Im{ z_k [QPSK(u_k^(1))]* e^{-j theta_k} }                              if k is even
                  theta_k + alpha Im{ z_k [QPSK(2 floor(u_k^(1)/2) + u-hat_k^(2))]* e^{-j theta_k} }     if k is odd        (5.11)

Finally, even more sophisticated techniques can be derived: hybrid schemes that use a punctured PLL initially and switch to the parallel decision-directed operation are also possible.
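The two update modes (5.10) and (5.11) can be sketched side by side. The Gray-mapped QPSK points below are an assumption of this illustration.

```python
import cmath

# Example QPSK constellation (unit energy, an assumed Gray-style ordering).
QPSK = [cmath.exp(1j * (cmath.pi / 4 + cmath.pi * n / 2)) for n in range(4)]

def punctured_pll(theta, z, k, u1, alpha):
    """(5.10): update only at even k, where q_k depends on u1 alone;
    hold the estimate at odd k (effective loop bandwidth is halved)."""
    if k % 2 == 0:
        return theta + alpha * (z * QPSK[u1].conjugate() * cmath.exp(-1j * theta)).imag
    return theta

def parallel_dd_pll(theta, z, k, u1, u2_hat, alpha):
    """(5.11): at odd k, also use the hard-quantized soft decision u2_hat for
    the other FSM's bit, so the loop updates at every symbol."""
    if k % 2 == 0:
        q = QPSK[u1]
    else:
        q = QPSK[2 * (u1 // 2) + u2_hat]
    return theta + alpha * (z * q.conjugate() * cmath.exp(-1j * theta)).imag
```

The hybrid bootstrapping described next amounts to calling `punctured_pll` on the first iteration and `parallel_dd_pll` thereafter, once soft decisions on u2 exist.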

In the following, the first-order PLL and the suboptimal binding term of (5.4) and (5.5) will be used. Transition metrics are evaluated by averaging out the symbols corresponding to the other FSM, as described in (5.8). Lastly, a hybrid approach to phase tracking is used. Specifically,

SISO1 is run with the punctured PLL of (5.10) on the initial iteration, and switches to the parallel decision-directed mode of (5.11) in the subsequent iterations. The rationale behind this hybrid bootstrapping procedure is that in the first iteration there are no soft (or hard) decisions available for the symbol u_k^(2).

The activation schedule for the iterative detector is shown in Fig. 5.4 and described as follows: SISO1 (with internal SOMAP) -> SOBC (which effectively forwards extrinsic information from SISO1 to SISO2) -> SISO2 (with the corresponding internal SOMAP) -> SOBC in the opposite direction -> SISO1, etc.

5.2.2 Numerical Results

An overall rate R = 1 code is considered here, constructed by concatenating two identical 4-state Recursive Systematic Convolutional codes (RSC). Both the systematic and the encoded bits are output from the first code,

while only the encoded bit is output from the second. The corresponding generator matrices are given by

    G1(D) = [ 1   (1+D^2)/(1+D+D^2) ]

    G2(D) = [ (1+D^2)/(1+D+D^2) ]

3 are presented. Also.2. 53 . with the only di eren e being the slight degradation of the SING-SB and SING-NB algorithms over the MULT-SB adaptive SISO. 5. as in the ase of perfe t CSI. 5. In Fig. The on lusions are similar to the SCCC ase. Finally. these results are not presented for brevity. the quantitative performan e a hieved using the SCCC and PCCC systems is very similar.9). performan e urves similar to those of Fig.5. simulations for the ase of stati phase revealed omparable performan e with that shown in Fig.4: A tivation s hedule of the adaptive iterative re eiver for PCCC The output symbol is formed exa tly as des ribed in (5.SO(u Metric (1) ) k Parameter Evaluation Update Observation I-1 SISO1 SOBC Observation I SISO2 Parameter Update Metric Evaluation (2) SO(u ) k Closed after 1st iteration Figure 5. 5.

CC Perfect CSI PSP CA-MLSD -1 10 -2 10 BER PCCC -3 Perfect CSI EXT MULT-SB SING-SB SING-NB 10 -4 10 -5 10 -6 10 0 1 2 3 4 5 Eb/No Figure 5.5: BER vs. Eb =N0 for PCCC with phase dynami s and various adaptive SISO on.

gurations (the optimal performan e for SING re eivers was a hieved for d = 0). the performan e of CC with adaptive hard-de ision dete tion is presented 54 . For omparison.

Furthermore. this development illustrated that.Chapter 6 Con lusions The basi re ursions for bi-dire tional adaptive SISO algorithms were developed in this work. exhaustive digital sear hing. This assertion. suggests that utilizing the adaptive forward and ba kward re ursions with a smaller number of states will be desirable relative to the existing forward only algorithms. as in the adaptive MLSD ase. This property is on. these new results in lude a binding term to be used in the ompletion operation. is required to ensure optimality. there are diminishing returns for in reasing the omplexity of the tree-sear h algorithm. We expe t. along with the de oupling of the smoothing depth and sear h omplexity for the new algorithms. that as in the hard-de ision ase. or alternatively exa t evaluation of the mixed-Gaussian transition metri s. however. In addition to the forward and ba kward likelihood re ursions asso iated with known hannel SISO algorithms.

Be ause the .rmed by the experimental results presented here.

g..g. As a result. with delayed. et . virtually any hard de ision algorithm an be used to motivate. et . PSP.) with those of known- hannel SISOs (e. For example. hardde ision dire ted KF hannel estimators is possible.. extrinsi outputs. a general methodology emerged that ombined the attributes of adaptive hard de ision pro essing (e. a SISO for systems with parametri un ertainty. Also. delayed soft and hard de ision feedba k.rst step in our development was to spe ify what a reasonable soft output would be for an FSM with parametri un ertainty. we obtained several families of algorithms. intuitively at least. While this algorithm did not 55 . ompletion operations. virtually every adaptive SISO algorithm suggested in the literature an be viewed as a forward-only member of one of these lasses.). a forward/ba kward MSM algorithm. forward and ba kward re ursion. In parti ular.

2. 2. Another example is the lass of algorithms for the linear Gaussian fading hannel that utilize steady-state.follow dire tly as one of the bran hes of Fig. . it emerges in retrospe t from the insight gained.

Although pra ti al re eivers were proposed based on adaptive SISOs. namely de oding of PCCCs with phase tra king. The presentation of the last appli ation. the development of a general framework for adaptive iterative dete tion on arbitrary networks of FSMs { similar to that in [BeDiMoPo98℄ for the perfe t CSI ase { is an area for future resear h. 57 . The use of su h odes may alleviate the detrimental e e ts of y le slipping. potentially enabling even wider loop bandwidths. revealed that the on ept of adaptive iterative dete tion is broader than the on ept of adaptive SISOs.multidimensional { inner odes.

Reference List

[AbFr70] K. Abend and B. D. Fritchman, "Statistical Detection for Communication Channels with Intersymbol Interference," Proc. IEEE, vol. 58, no. 5, pp. 779-785, May 1970.

[AnCh97] A. Anastasopoulos and K. M. Chugg, "Iterative Equalization/Decoding of TCM for Frequency-Selective Fading Channels," Proc. 31st Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, Nov. 1997.

[AnCh98] A. Anastasopoulos and K. M. Chugg, "Adaptive Soft-Decision Algorithms for Mobile Fading Channels," ETT, vol. 9, no. 2, pp. 183-190, Mar./Apr. 1998.

[AnCh99] A. Anastasopoulos and K. M. Chugg, "On the Effects of Channel Dynamics and Interleaver Structure on the Iteration Gain for Iterative Equalization/Decoding," IEEE Communications Letters, to be submitted, Jan. 1999.

[AnMeVi94] A. N. D'Andrea, U. Mengali, and G. M. Vitetta, "Approximate ML Decoding of Coded PSK with No Explicit Carrier Phase Reference," IEEE Trans. Commun., vol. 42, pp. 1033-1039, Feb./Mar./Apr. 1994.

[AnMo79] B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice Hall, Englewood Cliffs, New Jersey, 1979.

[AnMo84] J. B. Anderson and S. Mohan, "Sequential Coding Algorithms: A Survey and Cost Analysis," IEEE Trans. Commun., vol. COM-32, pp. 169-176, Feb. 1984.

[AnPo97] A. Anastasopoulos and A. Polydoros, "Soft-Decision Per-Survivor Processing for Mobile Fading Channels," Proc. of VTC '97, May 1997.

[AnPo98] A. Anastasopoulos and A. Polydoros, "TCM for Frequency-Selective, Interleaved Fading Channels Using Joint Diversity Combining," Proc. of ICC '98, Atlanta, GA, pp. 705-709, June 1998.

[BaCoJeRa74] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, Mar. 1974.

[BaCu98] E. Baccarelli and R. Cusani, "Combined Channel Estimation and Data Detection Using Soft Statistics for Frequency-Selective Fast-Fading Digital Links," IEEE Trans. Commun., vol. 46, pp. 424-427, Apr. 1998.

[BeDiMoPo98] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Soft-Input Soft-Output Modules for the Construction and Distributed Iterative Decoding of Code Networks," ETT, vol. 9, no. 2, pp. 155-172, Mar./Apr. 1998.

[BeGlTh93] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes," Proc. of ICC '93, Geneva, Switzerland, pp. 1064-1070, May 1993.

[Bell63] P. A. Bello, "Characterization of Randomly Time-Variant Linear Channels," IEEE Trans. Communication Systems, vol. 11, pp. 360-393, Dec. 1963.

[BeMoDiPo98] S. Benedetto, G. Montorsi, D. Divsalar, and F. Pollara, "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding," IEEE Trans. Inform. Theory, vol. IT-44, pp. 909-926, May 1998.

[CaRa98] P. Castoldi and R. Raheli, "Efficient Architectures for Soft-Output Algorithms," Proc. of ICC '98, Atlanta, GA, June 1998.

[ChCh98a] X. Chen and K. M. Chugg, "Near-Optimal Data Detection for Two-Dimensional ISI/AWGN Channels Using Concatenated Modeling and Iterative Algorithms," Proc. of ICC '98, Atlanta, GA, June 1998.

[ChCh98b] K. M. Chugg and X. Chen, "Soft Information Transfer for Sequence Detection with Concatenated Receivers," 1998.

[ChChOrCh98] K. M. Chugg, X. Chen, A. Ortega, and C.-W. Chang, "An Iterative Algorithm for Two-Dimensional Digital Least Metric Problems with Applications to Digital Image Compression," Proc. of ICIP '98, Chicago, IL, Oct. 1998.

[ChPo95] K. M. Chugg and A. Polydoros, "MLSE for an Unknown Channel - Part I: Optimality Considerations," IEEE Trans. Commun., vol. 44, pp. 836-846, July 1996.

[Chug98] K. M. Chugg, "The Condition for the Applicability of the Viterbi Algorithm with Implications for Fading Channel MLSD," IEEE Trans. Commun., vol. 46, pp. 1112-1116, Sept. 1998.

[Clar68] R. H. Clarke, "A Statistical Theory of Mobile Radio Reception," Bell System Tech. J., vol. 47, pp. 957-1000, 1968.

[DaSh94] Q. Dai and E. Shwedyk, "Detection of Bandlimited Signals over Frequency Selective Rayleigh Fading Channels," IEEE Trans. Commun., vol. 42, pp. 941-950, Feb./Mar./Apr. 1994.

[DeLaRu77] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum-Likelihood from Incomplete Data via the EM Algorithm," J. Roy. Statist. Soc., vol. 39, pp. 1-17, 1977.

[Forn72] G. D. Forney, Jr., "Maximum-Likelihood Sequence Estimation of Digital Sequences in the Presence of Intersymbol Interference," IEEE Trans. Inform. Theory, vol. IT-18, pp. 363-378, May 1972.

[Forn73] G. D. Forney, Jr., "The Viterbi Algorithm," Proc. of IEEE, vol. 61, pp. 268-278, Mar. 1973.

[FrVi98] J. Garcia-Frias and J. Villasenor, "Turbo Codes for Binary Markov Channels," Proc. of ICC '98, Atlanta, GA, June 1998.

[HaAu96] U. Hansson and T. Aulin, "Digital Signaling on the Time Continuous Rayleigh Fading Channel," IEEE Trans. Commun., 1996.

[HaAu98] U. Hansson and T. Aulin, Proc. of ICT '98, Porto Carras, Greece, June 1998.

[Hayk96] S. Haykin, Adaptive Filter Theory, 3rd Ed., Prentice Hall, Englewood Cliffs, New Jersey, 1996.

[IlShGi94] R. A. Iltis, J. J. Shynk, and K. Giridhar, "Bayesian Algorithms for Blind Equalization Using Parallel Adaptive Filtering," IEEE Trans. Commun., vol. 42, pp. 1017-1032, Feb./Mar./Apr. 1994.

[Ilti92] R. A. Iltis, "A Bayesian Maximum-Likelihood Sequence Estimation Algorithm for A Priori Unknown Channels and Symbol Timing," IEEE J. Selected Areas Commun., vol. 10, pp. 579-588, Apr. 1992.

[Kail60] T. Kailath, "Correlation Detection of Signals Perturbed by a Random Channel," IRE Trans. Inform. Theory, vol. IT-6, pp. 361-366, June 1960.

[Kail61] T. Kailath, "Optimum Receivers for Randomly Varying Channels," Proc. of 4th London Symposium on Information Theory, Butterworth Scientific Press, London, pp. 109-122, 1961.

[Kail69] T. Kailath, "A General Likelihood-Ratio Formula for Random Signals in Gaussian Noise," IEEE Trans. Inform. Theory, vol. IT-15, pp. 350-361, May 1969.

[Koba71] H. Kobayashi, "Simultaneous Adaptive Estimation and Decision Algorithm for Carrier Modulated Data Transmission Systems," IEEE Trans. Commun., vol. COM-19, pp. 268-280, June 1971.

[KrMo93] V. Krishnamurthy and J. B. Moore, "On-Line Estimation of Hidden Markov Model Parameters Based on the Kullback-Leibler Information Measure," IEEE Trans. Signal Processing, vol. 41, pp. 2557-2572, Aug. 1993.

[KuMuFu94] H. Kubo, K. Murakami, and T. Fujino, "An Adaptive MLSE for Fast Time-Varying ISI Channels," IEEE Trans. Commun., vol. 42, pp. 1872-1880, Feb./Mar./Apr. 1994.

[LiLiPr92] F. Ling and J. G. Proakis, "Joint Data and Channel Estimation for TDMA Mobile Channels," Proc. of PIMRC '92, 1992.

[LiSi73] W. C. Lindsey and M. K. Simon, Telecommunication Systems Engineering, Prentice Hall, Englewood Cliffs, New Jersey, 1973.

[LiVuSa95] Y. Li, B. Vucetic, and Y. Sato, "Optimum Soft-Output Detection for Channels with Intersymbol Interference," IEEE Trans. Inform. Theory, vol. IT-41, pp. 704-713, May 1995.

[LoMo90] J. Lodge and M. Moher, "Maximum Likelihood Sequence Estimation of CPM Signals Transmitted over Rayleigh Flat-Fading Channels," IEEE Trans. Commun., vol. 38, pp. 787-794, June 1990.

[LuWi98] Lu and S. Wilson, "Synchronization of Turbo Coded Modulation Systems at Low SNR," Proc. of ICC '98, Atlanta, GA, June 1998.

[MaPr73] F. R. Magee and J. G. Proakis, "Adaptive Maximum-Likelihood Sequence Estimation for Digital Signaling in the Presence of Intersymbol Interference," IEEE Trans. Inform. Theory, vol. IT-19, pp. 120-124, Jan. 1973.

[Mamm95] A. Mammela, "Diversity Receivers in a Fast Fading Multipath Channel," Doctoral thesis, VTT Publications 253, Technical Research Centre of Finland, Espoo, Finland, 1995.

[MeWiMe92] R. Mehlan, Wittkopp, and H. Meyr, "Soft Output Equalization and Trellis Coded Modulation for Severe Frequency-Selective Fading Channels," Proc. of ICC '92, 1992.

[Mohe98] M. Moher, "An Iterative Multiuser Decoder for Near-Capacity Communications," IEEE Trans. Commun., vol. 46, pp. 870-880, July 1998.

[PiDiGl97] A. Picart, P. Didier, and A. Glavieux, "Turbo-Detection: A New Approach to Combat Channel Frequency Selectivity," Proc. of ICC '97, Montreal, Canada, June 1997.

[RaPoTz95] R. Raheli, A. Polydoros, and C.-K. Tzou, "Per-Survivor Processing: A General Approach to MLSE in Uncertain Environments," IEEE Trans. Commun., vol. 43, pp. 354-364, Feb./Mar./Apr. 1995.

[RoSi] M. Rollins and S. Simmons, "Simplified Per-Survivor Kalman Processing in Fast Frequency-Selective Fading Channels," IEEE Trans. Commun., pp. 544-553.

[SeFi95] J. P. Seymour and M. P. Fitz, "Near-Optimal Symbol-by-Symbol Detection Schemes for Flat Rayleigh Fading," IEEE Trans. Commun., vol. 43, pp. 1525-1533, Feb./Mar./Apr. 1995.

[Sesh94] N. Seshadri, "Joint Data and Channel Estimation Using Blind Trellis Search Techniques," IEEE Trans. Commun., vol. 42, pp. 1000-1016, Feb./Mar./Apr. 1994.

[Stee92] R. Steele, Mobile Radio Communications, Pentech Press, London, 1992.

[Stub96] G. L. Stuber, Principles of Mobile Communication, Kluwer Academic Press, 1996.

[Tzou93] C.-K. Tzou, "The Principle of Per-Survivor Processing: A General Approach to Approximate and Adaptive MLSE," Ph.D. Thesis, University of Southern California, Dec. 1993.

[Unge74] G. Ungerboeck, "Adaptive Maximum Likelihood Receiver for Carrier-Modulated Data-Transmission Systems," IEEE Trans. Commun., vol. COM-22, pp. 624-636, May 1974.

[Unge82] G. Ungerboeck, "Channel Coding with Multilevel/Phase Signals," IEEE Trans. Inform. Theory, vol. IT-28, pp. 56-67, Jan. 1982.

[ViOm79] A. J. Viterbi and J. K. Omura, Principles of Digital Communication and Coding, McGraw-Hill, New York, 1979.

[YuPa95] X. Yu and S. Pasupathy, "Innovations-Based MLSE for Rayleigh Fading Channels," IEEE Trans. Commun., vol. 43, pp. 1534-1544, Feb./Mar./Apr. 1995.

[ZhFiGe97] Y. Zhang, M. P. Fitz, and S. B. Gelfand, "Soft Output Demodulation on Frequency-Selective Rayleigh Fading Channels Using AR Channel Models," Proc. of GLOBECOM '97, Phoenix, AZ, Nov. 1997.

Appendix A

A.1 Proof of equation (2.8)

To prove (2.8) we first condition on the channel g_k and then use the fact that, conditioned on the channel and the transition at time k, past and future observations are independent:

\[
\begin{aligned}
P(z_0^n, x_0^n) &= \int_{g_k} P(z_0^n, x_0^n, g_k)\, dg_k
 = \int_{g_k} P(z_0^k, x_0^k)\, P(g_k \mid z_0^k, x_0^k)\, P(z_{k+1}^n, x_{k+1}^n \mid s_{k+1}, g_k)\, dg_k \\
&= P(z_0^k, x_0^k) \int_{g_k} P(g_k \mid z_0^k, x_0^k)\,
 \frac{P(z_{k+1}^n, x_{k+1}^n \mid s_{k+1})\, P(g_k \mid z_{k+1}^n, x_{k+1}^n, s_{k+1})}{P(g_k \mid s_{k+1})}\, dg_k \\
&= P(z_0^k, x_0^k)\, P(z_{k+1}^n, x_{k+1}^n \mid s_{k+1})
 \int_{g_k} \frac{P(g_k \mid z_0^k, x_0^k)\, P(g_k \mid z_{k+1}^n, x_{k+1}^n, s_{k+1})}{P(g_k \mid s_{k+1})}\, dg_k
\end{aligned}
\tag{A.1}
\]

so that the past, present, future, and binding terms become more explicit:

\[
P(z_0^n, x_0^n) = P(z_0^{k-1}, x_0^{k-1})\, P(z_k \mid z_0^{k-1}, x_0^k)\, P(x_k)\,
P(z_{k+1}^n, x_{k+1}^n \mid s_{k+1})\, b_p(\tilde{g}_{k|k}, \tilde{g}^b_{k|k+1}, \tilde{G}_{k|k}, \tilde{G}^b_{k|k+1})
\tag{A.2}
\]
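The key step in (A.1), pulling the past and future likelihoods out of the channel integral at the cost of a binding factor, can be illustrated with a two-point discrete channel. The sketch below is a hypothetical numerical example (all probabilities made up) of the identity P(past, future) = P(past) P(future) Σ_g P(g|past) P(g|future) / P(g):

```python
# Hypothetical two-state "channel" g with conditionally independent
# past observation a and future observation b given g.
p_g = [0.6, 0.4]                  # prior on the channel state
p_a_given_g = [0.7, 0.2]          # likelihood of the past observation
p_b_given_g = [0.1, 0.5]          # likelihood of the future observation

p_a = sum(pg * pa for pg, pa in zip(p_g, p_a_given_g))
p_b = sum(pg * pb for pg, pb in zip(p_g, p_b_given_g))
p_ab = sum(pg * pa * pb for pg, pa, pb in zip(p_g, p_a_given_g, p_b_given_g))

# Binding factor: sum over g of P(g|a) P(g|b) / P(g), the discrete
# analogue of the integral remaining in (A.1).
binding = sum((pa * pg / p_a) * (pb * pg / p_b) / pg
              for pg, pa, pb in zip(p_g, p_a_given_g, p_b_given_g))
assert abs(p_ab - p_a * p_b * binding) < 1e-12
```

The binding factor corrects the naive product P(past) P(future) for the fact that both observation blocks share the same unknown channel.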

The formal definition of the forward and backward channel estimates and one-step predictions, together with their corresponding covariances, follows:

\[
\begin{aligned}
\tilde{g}_{k|k} &= E(g_k \mid z_0^k, x_0^k), & \tilde{G}_{k|k} &= \mathrm{Cov}(g_k \mid z_0^k, x_0^k) \\
\tilde{g}_{k|k-1} &= E(g_k \mid z_0^{k-1}, x_0^{k-1}), & \tilde{G}_{k|k-1} &= \mathrm{Cov}(g_k \mid z_0^{k-1}, x_0^{k-1}) \\
\tilde{g}^b_{k|k} &= E(g_k \mid z_k^n, x_k^n), & \tilde{G}^b_{k|k} &= \mathrm{Cov}(g_k \mid z_k^n, x_k^n) \\
\tilde{g}^b_{k|k+1} &= E(g_k \mid z_{k+1}^n, x_{k+1}^n), & \tilde{G}^b_{k|k+1} &= \mathrm{Cov}(g_k \mid z_{k+1}^n, x_{k+1}^n)
\end{aligned}
\tag{A.3}
\]

where the dependence on the path history is shown explicitly. Assuming that \(\tilde{g}_{k|k-1}\), \(\tilde{G}_{k|k-1}\) and \(\tilde{g}^b_{k|k+1}\), \(\tilde{G}^b_{k|k+1}\) are available, the forward and backward Kalman recursions are given by the following set of equations:

\[
\begin{aligned}
K_k &= \frac{\tilde{G}_{k|k-1} y_k}{N_0 + y_k^T \tilde{G}_{k|k-1} y_k} \\
\tilde{g}_{k|k} &= \tilde{g}_{k|k-1} + K_k (z_k - y_k^T \tilde{g}_{k|k-1}) \\
\tilde{G}_{k|k} &= (I - K_k y_k^T)\, \tilde{G}_{k|k-1} \\
\tilde{g}_{k+1|k} &= G\, \tilde{g}_{k|k} \\
\tilde{G}_{k+1|k} &= G\, \tilde{G}_{k|k}\, G^+ + Q
\end{aligned}
\tag{A.4}
\]

and

\[
\begin{aligned}
K^b_k &= \frac{\tilde{G}^b_{k|k+1} y_k}{N_0 + y_k^T \tilde{G}^b_{k|k+1} y_k} \\
\tilde{g}^b_{k|k} &= \tilde{g}^b_{k|k+1} + K^b_k (z_k - y_k^T \tilde{g}^b_{k|k+1}) \\
\tilde{G}^b_{k|k} &= (I - K^b_k y_k^T)\, \tilde{G}^b_{k|k+1} \\
\tilde{g}^b_{k-1|k} &= G^b\, \tilde{g}^b_{k|k} \\
\tilde{G}^b_{k-1|k} &= G^b\, \tilde{G}^b_{k|k}\, G^{b+} + Q^b
\end{aligned}
\tag{A.5}
\]

A.2 Proof of equation (2.13)

The proof of (2.13) is obtained in two steps. First we prove that the solution \(\tilde{g}\) to the least squares minimization problem in (2.12) can be expressed in terms of a forward estimate \(\tilde{g}_k\), a backward estimate \(\tilde{g}^b_{k+1}\), and a third term \(g_\Delta\). Substituting this result in (2.12) yields (2.13). The following auxiliary parameters are defined:

\[
\begin{aligned}
Y &= (y_n, \ldots, y_0)^T, & Y_1 &= (y_k, \ldots, y_0)^T, & Y_2 &= (y_n, \ldots, y_{k+1})^T \\
z &= (z_n, \ldots, z_0)^T, & z_1 &= (z_k, \ldots, z_0)^T, & z_2 &= (z_n, \ldots, z_{k+1})^T \\
g_1 &= \tilde{g}_k, & g_2 &= \tilde{g}^b_{k+1} \\
W &= \mathrm{diag}(W_2, W_1), & W_1 &= \mathrm{diag}(\lambda^k, \ldots, 1), & W_2 &= \mathrm{diag}(\lambda^{n-k-1}, \ldots, 1) \\
P_1 &= (Y_1^+ W_1 Y_1)^{-1}, & P_2 &= (Y_2^+ W_2 Y_2)^{-1}
\end{aligned}
\]

The least squares solution can be expressed as

\[
\begin{aligned}
\tilde{g} &= (Y^+ W Y)^{-1} Y^+ W z
 = (Y_1^+ W_1 Y_1 + Y_2^+ W_2 Y_2)^{-1} (Y_1^+ W_1 z_1 + Y_2^+ W_2 z_2) \\
&= (Y_1^+ W_1 Y_1 + Y_2^+ W_2 Y_2)^{-1} (Y_1^+ W_1 Y_1 g_1 + Y_2^+ W_2 Y_2 g_2)
 = g_1 + g_2 + g_\Delta
\end{aligned}
\tag{A.6}
\]

where we made use of the matrix inversion lemma. The last term \(g_\Delta\) can be expressed as

\[
g_\Delta = -P_1 [P_1 + P_2]^{-1} g_1 - P_2 [P_1 + P_2]^{-1} g_2
\tag{A.7}
\]

Substituting this result in the metric definition yields (the terms corresponding to the a-priori probabilities are omitted)

\[
\begin{aligned}
\Lambda(z_0^n, x_0^n) &= \|z - Y \tilde{g}\|^2_W
 = \|z_1 - Y_1 \tilde{g}\|^2_{W_1} + \|z_2 - Y_2 \tilde{g}\|^2_{W_2} \\
&= \Lambda(z_0^k, x_0^k) + \Lambda(z_{k+1}^n, t_{k+1}^n)
 + \|Y_1 (g_2 + g_\Delta)\|^2_{W_1} + \|Y_2 (g_1 + g_\Delta)\|^2_{W_2} \\
&= \Lambda(z_0^k, x_0^k) + \Lambda(z_{k+1}^n, t_{k+1}^n)
 + \underbrace{\|g_2 + g_\Delta\|^2_{P_1^{-1}} + \|g_1 + g_\Delta\|^2_{P_2^{-1}}}_{\lambda_{bd}(\cdot)}
\end{aligned}
\tag{A.8}
\]

where we made use of the notation \(\|x\|^2_W = x^+ W x\). Assuming that \(\tilde{g}_{k-1}\), \(P_{k-1}\) and \(\tilde{g}^b_{k+2}\), \(P^b_{k+2}\) are available, the forward and backward recursions are given by the following set of equations:

\[
\begin{aligned}
K_k &= \frac{P_{k-1} y_k}{\lambda + y_k^T P_{k-1} y_k} \\
\tilde{g}_k &= \tilde{g}_{k-1} + K_k (z_k - y_k^T \tilde{g}_{k-1}) \\
P_k &= \frac{1}{\lambda} (I - K_k y_k^T)\, P_{k-1}
\end{aligned}
\tag{A.9}
\]

and

\[
\begin{aligned}
K^b_{k+1} &= \frac{P^b_{k+2} y_{k+1}}{\lambda + y_{k+1}^T P^b_{k+2} y_{k+1}} \\
\tilde{g}^b_{k+1} &= \tilde{g}^b_{k+2} + K^b_{k+1} (z_{k+1} - y_{k+1}^T \tilde{g}^b_{k+2}) \\
P^b_{k+1} &= \frac{1}{\lambda} (I - K^b_{k+1} y_{k+1}^T)\, P^b_{k+2}
\end{aligned}
\tag{A.10}
\]
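The split of the full-data least-squares solution into forward and backward block estimates is easy to check numerically: fusing the two block solutions through their information matrices reproduces the solution computed from all the data at once. The following sketch is a hypothetical NumPy example (dimensions, forgetting factor, and data are arbitrary choices, not values from the text):

```python
import numpy as np

# Hypothetical setup: m-tap channel g, regressor vectors y_k, noisy scalar
# observations z_k = y_k^T g + noise.
rng = np.random.default_rng(0)
n, m, k = 40, 3, 20          # total samples, channel taps, split point
lam = 0.95                   # exponential forgetting factor
g_true = rng.normal(size=m)
Y = rng.normal(size=(n, m))
z = Y @ g_true + 0.1 * rng.normal(size=n)

# Forward ("past") and backward ("future") blocks with their weightings.
Y1, z1, W1 = Y[:k], z[:k], np.diag(lam ** np.arange(k - 1, -1, -1))
Y2, z2, W2 = Y[k:], z[k:], np.diag(lam ** np.arange(n - k))

P1_inv = Y1.T @ W1 @ Y1      # information matrix of the forward estimate
P2_inv = Y2.T @ W2 @ Y2      # information matrix of the backward estimate
g1 = np.linalg.solve(P1_inv, Y1.T @ W1 @ z1)   # forward block LS estimate
g2 = np.linalg.solve(P2_inv, Y2.T @ W2 @ z2)   # backward block LS estimate

# Full-data weighted LS solution vs. information-form fusion of g1 and g2.
g_full = np.linalg.solve(P1_inv + P2_inv, Y1.T @ W1 @ z1 + Y2.T @ W2 @ z2)
g_fused = np.linalg.solve(P1_inv + P2_inv, P1_inv @ g1 + P2_inv @ g2)
assert np.allclose(g_fused, g_full)
```

The fused estimate weights each block estimate by its own information matrix, which is exactly the structure exploited by the recursive forward/backward estimators.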

A.3 Channel update equations and binding term under the Gaussian assumption for (2.18)

The forward recursions developed in [IlShGi94] for the PCKF are given below; backward recursions are similar.

\[
\begin{aligned}
K_k &= \frac{\tilde{G}_{k|k-1} y_k}{N_0 + y_k^T \tilde{G}_{k|k-1} y_k} \\
\tilde{g}_{k|k} &= \tilde{g}_{k|k-1} + K_k (z_k - y_k^T \tilde{g}_{k|k-1}) \\
\tilde{G}_{k|k} &= (I - K_k y_k^T)\, \tilde{G}_{k|k-1} \\
\tilde{g}_{k+1|k} &= G \sum_{t_k : s_{k+1}} \tilde{g}_{k|k}\,
 \frac{P(z_0^{k-1}, s_k)\, P(z_k \mid t_k, z_0^{k-1})\, P(x_k)}{P(z_0^k, s_{k+1})} \\
\tilde{G}_{k+1|k} &= \sum_{t_k : s_{k+1}}
 \left[ G \tilde{G}_{k|k} G^+ + Q + (\tilde{g}_{k+1|k} - G \tilde{g}_{k|k})(\tilde{g}_{k+1|k} - G \tilde{g}_{k|k})^+ \right]
 \frac{P(z_0^{k-1}, s_k)\, P(z_k \mid t_k, z_0^{k-1})\, P(x_k)}{P(z_0^k, s_{k+1})}
\end{aligned}
\tag{A.11}
\]

Finally, the binding term in (2.18) under the Gaussian assumption is given, up to a multiplicative constant, by the expression

\[
b'_p(\tilde{g}_{k|k-1}, \tilde{g}^b_{k|k+1}, \tilde{G}_{k|k-1}, \tilde{G}^b_{k|k+1})
 = P(x_k)\, \frac{|P|}{|\tilde{G}_{k|k-1}|\, |\tilde{G}^b_{k|k+1}|\, N_0}\, \exp(-\beta + \mu^+ P \mu)
\tag{A.12a}
\]

with

\[
\begin{aligned}
\mu &= \tilde{G}_{k|k-1}^{-1}\, \tilde{g}_{k|k-1} + (\tilde{G}^b_{k|k+1})^{-1}\, \tilde{g}^b_{k|k+1} + \frac{y_k z_k}{N_0} \\
\beta &= \tilde{g}_{k|k-1}^+\, \tilde{G}_{k|k-1}^{-1}\, \tilde{g}_{k|k-1}
 + (\tilde{g}^b_{k|k+1})^+ (\tilde{G}^b_{k|k+1})^{-1}\, \tilde{g}^b_{k|k+1} + \frac{|z_k|^2}{N_0} \\
P^{-1} &= \tilde{G}_{k|k-1}^{-1} + (\tilde{G}^b_{k|k+1})^{-1} + \frac{y_k y_k^T}{N_0}
\end{aligned}
\tag{A.12b}
\]
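The quantities in (A.12b) have the structure of an information-domain fusion of two Gaussian densities on g_k with the likelihood of the current observation. A scalar numerical sketch (all values hypothetical) cross-checks the closed-form fused mean and covariance against brute-force integration:

```python
import numpy as np

# Hypothetical scalar quantities: forward mean/covariance, backward
# mean/covariance, regressor y, observation z, and noise level N0.
m_f, v_f = 0.8, 0.5
m_b, v_b = 1.1, 0.3
y, z, N0 = 1.3, 1.0, 0.2

# Closed form mirroring (A.12b): fused covariance P and fused mean mu.
P = 1.0 / (1.0 / v_f + 1.0 / v_b + y * y / N0)
mu = P * (m_f / v_f + m_b / v_b + y * z / N0)

# Brute force: multiply the three Gaussian factors on a fine grid and
# read off the mean and variance of the resulting density.
g = np.linspace(-5.0, 5.0, 200001)
w = np.exp(-0.5 * ((g - m_f) ** 2 / v_f + (g - m_b) ** 2 / v_b + (z - y * g) ** 2 / N0))
w /= w.sum()
mean_num = (g * w).sum()
var_num = ((g - mean_num) ** 2 * w).sum()
assert abs(mean_num - mu) < 1e-5 and abs(var_num - P) < 1e-5
```

Precisions add and precision-weighted means add, which is why the binding term can be evaluated in closed form once the forward and backward statistics are Gaussian.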

A.4 Channel update equations and binding term under the Gaussian assumption and a single estimator for (2.18)

The forward recursions for the d-delayed AKF based on P'(t_{k-d}) are given below; backward recursions are similar. In the practical algorithm described in subsection 3.2.2, P(t_{k-d} | z_0^{k-1}) is used as the tentative soft decision. The binding term is the same as in (A.12).

\[
\begin{aligned}
\hat{y}_{k-d} &= \sum_{t_{k-d}} y_{k-d}\, P'(t_{k-d}) \\
\hat{Y}_{k-d} &= \sum_{t_{k-d}} (y_{k-d} - \hat{y}_{k-d})(y_{k-d} - \hat{y}_{k-d})^+\, P'(t_{k-d}) \\
K_{k-d} &= \hat{G}_{k-d|k-d-1}\, \hat{y}_{k-d}
 \left[ N_0 + \mathrm{trace}(\hat{Y}_{k-d}\, \hat{G}_{k-d|k-d-1})
 + \hat{y}_{k-d}^T \hat{G}_{k-d|k-d-1} \hat{y}_{k-d}
 + \hat{g}_{k-d|k-d-1}^+ \hat{Y}_{k-d}\, \hat{g}_{k-d|k-d-1} \right]^{-1} \\
\hat{g}_{k-d|k-d} &= \hat{g}_{k-d|k-d-1} + K_{k-d} (z_{k-d} - \hat{y}_{k-d}^T \hat{g}_{k-d|k-d-1}) \\
\hat{G}_{k-d|k-d} &= (I - K_{k-d} \hat{y}_{k-d}^T)\, \hat{G}_{k-d|k-d-1} \\
\hat{g}_{k-d+1|k-d} &= G\, \hat{g}_{k-d|k-d} \\
\hat{G}_{k-d+1|k-d} &= G\, \hat{G}_{k-d|k-d}\, G^+ + Q
\end{aligned}
\tag{A.13}
\]
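The first two averages above replace the unknown symbol by its mean and spread under the tentative soft decisions. A minimal scalar sketch (hypothetical BPSK symbols and probabilities; the function name is illustrative, not from the text):

```python
def soft_symbol_stats(symbols, probs):
    """Soft-decision mean and spread of the hypothesized symbol,
    mirroring the y-hat and Y-hat averages of the delayed AKF (scalar case)."""
    y_hat = sum(y * p for y, p in zip(symbols, probs))
    Y_hat = sum((y - y_hat) ** 2 * p for y, p in zip(symbols, probs))
    return y_hat, Y_hat

# BPSK example with tentative soft decisions P'(+1) = 0.7, P'(-1) = 0.3.
y_hat, Y_hat = soft_symbol_stats([+1.0, -1.0], [0.7, 0.3])
assert abs(y_hat - 0.4) < 1e-12
assert abs(Y_hat - 0.84) < 1e-12   # equals 1 - y_hat**2 for unit-energy BPSK
```

The spread term is what inflates the effective noise level inside the gain K_{k-d}, so unreliable soft decisions automatically slow the channel update.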

Appendix B

B.1 Multiple-estimator adaptive SISO algorithm

The class of FI, trellis-based, multiple-estimator adaptive SISO algorithms can be described with the following generic procedure. The cumulative forward and backward metrics are denoted as α(·) and β(·), respectively, and the transition metric as γ(·). The quantities ps(t_k) and ns(t_k) denote the initial and final state associated with transition t_k.

1) Forward recursion: for k = 0, ..., N-1 and s_{k+1} = 0, ..., N_S - 1:
   α(s_{k+1}) = ∞
   for all t_k : s_{k+1}: Metric update (ACS): α(s_{k+1}) = min[α(s_{k+1}), α(ps(t_k)) + γ(t_k)]
   Channel update g̃(s_{k+1})

2) Backward recursion: for k = N-1, ..., 0 and s_k = 0, ..., N_S - 1:
   β(s_k) = ∞
   for all t_k : s_k: Metric update (ACS): β(s_k) = min[β(s_k), β(ns(t_k)) + γ(t_k)]
   Channel update g̃^b(s_k)

3) Completion stage: for k = 0, ..., N-1 and v_k = 0, ..., N_V - 1:
   SO(v_k) = ∞
   for all t_k : v_k:
      Extend channel estimate g̃(ps(t_k)) → g̃_f
      Evaluate binding term b(g̃_f, g̃^b(ns(t_k)))
      SO(v_k) = min[SO(v_k), α(ps(t_k)) + γ(t_k) + β(ns(t_k)) + b(·)]
   Output extrinsic information SO(v_k) = SO(v_k) - SI(v_k)

Several remarks are in order at this point:

- All operations are performed in the log domain.
- MSM and APP versions are produced by substituting the function min(x, y) by either the standard min(x, y) operator or the min*(x, y) = min(x, y) - log(1 + exp(-|x - y|)) operator, respectively.
- Metric updates are essentially the ones performed in equations (2.14a) and (2.19a) of Chapter 2 for the forward and backward recursions, respectively.
- The channel update is performed by one of the proposed estimators (i.e., KF, LMS, RLS, PCKF). For the case of phase tracking in SCCCs or PCCCs, the techniques described in equations (5.1d) and (5.5) can be used; in particular, the average metric in (5.11) can be used, and (A.8) is used in the case of phase tracking in PCCCs.
- The binding term can be evaluated using either the exact expressions of Chapter 2 or the approximate expression of Chapter 3.

B.2 Single-estimator adaptive SISO algorithm

The entire class of FI, trellis-based, single-estimator adaptive SISOs can be described in a similar way.

1) Forward recursion: for k = 0, ..., N-1 and s_{k+1} = 0, ..., N_S - 1:
   α(s_{k+1}) = ∞
   for all t_k : s_{k+1}: Metric update (ACS): α(s_{k+1}) = min[α(s_{k+1}), α(ps(t_k)) + γ(t_k)]
   d backward ACS steps to obtain SO(t_{k-d} | z_0^k)
   Channel update ĝ_{k-d} → ĝ_{k-d+1}

2) Backward recursion: for k = N-1, ..., 0 and s_k = 0, ..., N_S - 1:
   β(s_k) = ∞
   for all t_k : s_k: Metric update (ACS): β(s_k) = min[β(s_k), β(ns(t_k)) + γ(t_k)]
   d forward ACS steps to obtain SO(t_{k+d} | z_k^{N-1})
   Channel update ĝ^b_{k+d} → ĝ^b_{k+d-1}

3) Completion stage: for k = 0, ..., N-1 and v_k = 0, ..., N_V - 1:
   SO(v_k) = ∞
   for all t_k : v_k:
      Extend channel estimate ĝ_{k-d} → ĝ_f
      Evaluate binding term b(ĝ_f, ĝ^b_{k+d})
      SO(v_k) = min[SO(v_k), α(ps(t_k)) + γ(t_k) + β(ns(t_k)) + b(·)]
   Output extrinsic information SO(v_k) = SO(v_k) - SI(v_k)

All comments regarding the multiple-estimator adaptive SISOs are valid here as well. An additional available option for parameter estimation is the AKF estimator described in Chapter 3.
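For the known-channel special case (binding term and channel updates omitted), the generic procedure above reduces to a forward/backward smoother whose MSM and APP versions differ only in the combining operator. The toy sketch below (a hypothetical 2-state FSM with made-up transition metrics, not an example from the text) checks both versions against brute-force enumeration of all input sequences:

```python
import itertools
import math
import random

def min_star(x, y):
    # APP combining: min*(x, y) = min(x, y) - log(1 + exp(-|x - y|)),
    # i.e., the exact negative log of exp(-x) + exp(-y).
    if math.isinf(x) or math.isinf(y):
        return min(x, y)
    return min(x, y) - math.log1p(math.exp(-abs(x - y)))

def siso(gamma, op, N, S, nxt):
    INF = float("inf")
    alpha = [[INF] * S for _ in range(N + 1)]
    alpha[0][0] = 0.0                       # known start state
    for k in range(N):                      # 1) forward recursion (ACS)
        for s in range(S):
            for b in range(2):
                sp = nxt(s, b)
                alpha[k + 1][sp] = op(alpha[k + 1][sp], alpha[k][s] + gamma[k][s][b])
    beta = [[INF] * S for _ in range(N + 1)]
    beta[N] = [0.0] * S                     # free final state
    for k in range(N - 1, -1, -1):          # 2) backward recursion
        for s in range(S):
            for b in range(2):
                beta[k][s] = op(beta[k][s], gamma[k][s][b] + beta[k + 1][nxt(s, b)])
    SO = [[INF, INF] for _ in range(N)]     # 3) completion (no binding term here)
    for k in range(N):
        for s in range(S):
            for b in range(2):
                SO[k][b] = op(SO[k][b], alpha[k][s] + gamma[k][s][b] + beta[k + 1][nxt(s, b)])
    return SO

random.seed(1)
N, S = 4, 2
nxt = lambda s, b: b                        # 1-bit shift-register FSM
gamma = [[[random.uniform(0.0, 5.0) for b in range(2)] for s in range(S)] for k in range(N)]

SO_msm = siso(gamma, min, N, S, nxt)        # MSM version
SO_app = siso(gamma, min_star, N, S, nxt)   # APP version

# Brute-force references: collect the metric of every input sequence.
costs = [[[] for b in range(2)] for _ in range(N)]
for bits in itertools.product(range(2), repeat=N):
    s, c = 0, 0.0
    for k, b in enumerate(bits):
        c += gamma[k][s][b]
        s = nxt(s, b)
    for k, b in enumerate(bits):
        costs[k][b].append(c)
for k in range(N):
    for b in range(2):
        assert abs(SO_msm[k][b] - min(costs[k][b])) < 1e-9
        assert abs(SO_app[k][b] + math.log(sum(math.exp(-c) for c in costs[k][b]))) < 1e-9
```

The same skeleton extends to the adaptive case by attaching a channel estimate to each state and adding the binding term inside the completion step.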