
LECTURE 02 TURBO CODE

June 16, 2019

Li-Wei Liu
National Chiao Tung University
Department of Electronic Engineering
Contents

1 Introduction
  1.1 Channel Capacity

2 Basic Elements of Turbo Codes
  2.1 Encoding of Turbo Codes
    2.1.1 Properties of Turbo Codes
    2.1.2 Interleaver
  2.2 EXIT Chart

Chapter 1

Introduction

1.1 Channel Capacity


• It is possible to convey information reliably over the channel at any rate up to C bits per channel use ("reliably" meaning with error rates as low as desired).

• It is impossible to convey information reliably over the channel at rates greater than C bits per channel use.

• Channel capacity of the AWGN channel: $C_{\mathrm{AWGN}} = \log_2(1 + \mathrm{SNR})$

• Channel capacity of the BSC: $C_{\mathrm{BSC}} = 1 - h(p)$, where $h(p) = -p\log_2(p) - (1-p)\log_2(1-p)$ is the binary entropy function (a short numeric check follows this list)
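As a quick numeric check of these two formulas (a minimal sketch; the SNR and crossover probability below are arbitrary example values):

    import math

    def awgn_capacity(snr_linear):
        # C_AWGN = log2(1 + SNR), with SNR given as a linear ratio (not dB)
        return math.log2(1 + snr_linear)

    def bsc_capacity(p):
        # C_BSC = 1 - h(p), with h(p) the binary entropy function
        if p in (0.0, 1.0):
            return 1.0
        return 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

    print(awgn_capacity(10 ** (3 / 10)))  # SNR = 3 dB -> about 1.58 bits per channel use
    print(bsc_capacity(0.11))             # p = 0.11   -> about 0.50 bits per channel use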

Based on Shannon’s A Mathematical Theory of Communication

1. Good codes should have long codewords

2. Good codes should appear random

Chapter 2

Basic Elements of Turbo Codes

2.1 Encoding of Turbo Codes


• A turbo code is a block code (data is fed block-by-block into the encoder).

• It is a parallel concatenation of two convolutional codes separated by an interleaver.

• Termination ("back to zero") is achieved with a self-cancellation architecture (the feedback is XORed with itself) to generate the tail bits.

2.1.1 Properties of Turbo Codes


Turbo codes exhibit an error-floor issue at high SNR (weight-spectrum thinning is a consequence of the interleaver).

$$P_b \approx \sum_{d} B_d \, r^{d} \qquad \text{(union-bound estimate)}$$

• $P_b$: bit error probability

• $B_d$: the number of codewords of weight $d$

• $r$: $\sum_{y \in A_Y = \{1,-1\}} \sqrt{p(y\mid 0)\,p(y\mid 1)}$, the Bhattacharyya parameter of the channel (a numeric sketch of this bound follows)
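A minimal numeric sketch of this bound (the weight spectrum used here is a made-up toy example, not that of any actual turbo code):

    import math

    def union_bound(weight_spectrum, r):
        # weight_spectrum: dict mapping codeword weight d -> multiplicity B_d
        # r: Bhattacharyya parameter of the channel
        return sum(B_d * r ** d for d, B_d in weight_spectrum.items())

    # For a BSC with crossover probability p:  r = 2 * sqrt(p * (1 - p))
    p = 0.01
    r = 2 * math.sqrt(p * (1 - p))

    toy_spectrum = {6: 3, 8: 10, 10: 40}   # hypothetical B_d values
    print(union_bound(toy_spectrum, r))    # a small number -> low bit error probability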


Derivation of $r$

For a BSC with crossover probability $p < \tfrac12$, the probability of decoding error for a codeword of weight $d$ (more than half of its $d$ positions received in error) is

\begin{align*}
P_d &= \sum_{e=\frac{d}{2}+1}^{d} \binom{d}{e}\, p^{e}(1-p)^{d-e} \\
    &< \sum_{e=\frac{d}{2}+1}^{d} \binom{d}{e}\, p^{\frac{d}{2}}(1-p)^{\frac{d}{2}} \\
    &= p^{\frac{d}{2}}(1-p)^{\frac{d}{2}} \sum_{e=\frac{d}{2}+1}^{d} \binom{d}{e} \\
    &< p^{\frac{d}{2}}(1-p)^{\frac{d}{2}} \sum_{e=0}^{d} \binom{d}{e} \\
    &= p^{\frac{d}{2}}(1-p)^{\frac{d}{2}}\, 2^{d} \\
    &= \left(2\sqrt{p(1-p)}\right)^{d}
\end{align*}

$P_d$: probability of decoding error for a codeword of weight $d$.

Hence $r = 2\sqrt{p(1-p)}$ for the BSC (a numeric check follows).
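A quick numeric check of the derivation, comparing the exact tail probability with the bound (p and d are arbitrary example values):

    import math

    def P_d_exact(d, p):
        # probability that more than half of the d positions are flipped by a BSC(p)
        return sum(math.comb(d, e) * p ** e * (1 - p) ** (d - e)
                   for e in range(d // 2 + 1, d + 1))

    def P_d_bound(d, p):
        # the bound derived above: (2 * sqrt(p * (1 - p))) ** d
        return (2 * math.sqrt(p * (1 - p))) ** d

    for d in (6, 10, 14):
        print(d, P_d_exact(d, 0.05), P_d_bound(d, 0.05))
    # the exact value always stays below the bound, and both decay quickly in d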

Generally, we want larger codeword weights $d$ to obtain a lower $P_b$, i.e., we increase the Hamming weight (feedback/recursive encoder design).

In addition, the interleaver reduces $B_d$ at low $d$, so that the weight spectrum $B_d$ becomes well distributed.

• Original encoder: may produce a low-weight parity sequence.

• Interleaved encoder: produces a high-weight parity sequence (mainly thanks to the recursive systematic feedback encoder, which feeds a 1 back to the input), increasing the overall Hamming weight and effective code length. A minimal RSC encoder sketch follows.
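A minimal sketch of a rate-1/2 recursive systematic convolutional (RSC) encoder; the generator (feedback 1 + D + D^2, feedforward 1 + D^2) is an assumed example, not one specified in this lecture:

    def rsc_encode(bits):
        # Rate-1/2 RSC encoder: feedback polynomial 1 + D + D^2, feedforward 1 + D^2.
        # Returns (systematic bits, parity bits); the trellis is not terminated here.
        s1 = s2 = 0                      # shift-register contents
        parity = []
        for u in bits:
            a = u ^ s1 ^ s2              # recursive feedback
            parity.append(a ^ s2)        # feedforward (parity) output
            s1, s2 = a, s1               # shift the register
        return list(bits), parity

    # A single 1 followed by zeros drives the recursive encoder into a cycle,
    # so the parity stream keeps producing ones (high output weight):
    print(rsc_encode([1, 0, 0, 0, 0, 0, 0, 0]))

This illustrates why the recursive (feedback) structure matters: a weight-1 input never returns the encoder to the all-zero state, so its parity weight keeps growing with the block length.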

2.1.2 Interleaver
• Block interleaver (not often used in turbo codes — why?)

• S-random interleaver: each randomly selected index is compared with the previous S selected indices; if it lies within ±S of any of them, it is rejected. A generator sketch follows this list.
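A minimal sketch of S-random interleaver generation by simple rejection with restarts (block length N and spread S are arbitrary example values):

    import random

    def s_random_interleaver(N, S, max_tries=1000):
        # Build a length-N permutation in which each accepted index differs by
        # more than S from every one of the previous S accepted indices.
        for _ in range(max_tries):
            remaining = list(range(N))
            random.shuffle(remaining)
            perm = []
            ok = True
            for _ in range(N):
                for i, cand in enumerate(remaining):
                    if all(abs(cand - prev) > S for prev in perm[-S:]):
                        perm.append(remaining.pop(i))
                        break
                else:
                    ok = False          # dead end: restart with a new shuffle
                    break
            if ok:
                return perm
        raise RuntimeError("failed to build an S-random interleaver; try a smaller S")

    print(s_random_interleaver(64, 4))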

2.2 EXIT Chart


Mutual Information
Consider random variables X and Y. Recall the mutual information between X and Y.

\begin{align}
I(X;Y) &= H(X) - H(X\mid Y) = H(Y) - H(Y\mid X) \tag{2.1}\\
&= \text{(information of $X$)} - \text{(remaining information of $X$ after $Y$ is known)} \tag{2.2}\\
&= \sum_{x} p(x)\log\frac{1}{p(x)} - \sum_{x,y} p(x,y)\log\frac{p(y)}{p(x,y)} \tag{2.3}\\
&= \sum_{x} p(x)\log\frac{1}{p(x)} - \sum_{x}\sum_{y} p(y\mid x)\,p(x)\log\frac{p(y)}{p(x,y)} \tag{2.4}\\
&= \sum_{x} p(x)\left(\log\frac{1}{p(x)} - \sum_{y} p(y\mid x)\log\frac{p(y)}{p(x,y)}\right) \tag{2.5}\\
&= \sum_{x} p(x)\left(\sum_{y} p(y\mid x)\log\frac{1}{p(x)} - \sum_{y} p(y\mid x)\log\frac{p(y)}{p(x,y)}\right) \tag{2.6}\\
&= \sum_{x} p(x)\left[\sum_{y} p(y\mid x)\left(\log\frac{1}{p(x)} - \log\frac{p(y)}{p(x,y)}\right)\right] \tag{2.7}\\
&= \sum_{x} p(x)\left[\sum_{y} p(y\mid x)\log\frac{p(x,y)}{p(x)\,p(y)}\right] \tag{2.8}\\
&= \sum_{x}\sum_{y} p(x)\,p(y\mid x)\log\frac{p(y\mid x)}{p(y)} \tag{2.9}
\end{align}
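Expression (2.9) can be evaluated directly from a joint distribution; a minimal sketch (the joint pmf of a BSC with uniform input and crossover probability 0.1 is used purely as an example):

    import math

    def mutual_information(p_xy):
        # p_xy: dict mapping (x, y) -> joint probability p(x, y)
        p_x, p_y = {}, {}
        for (x, y), p in p_xy.items():
            p_x[x] = p_x.get(x, 0.0) + p
            p_y[y] = p_y.get(y, 0.0) + p
        # I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) )
        return sum(p * math.log2(p / (p_x[x] * p_y[y]))
                   for (x, y), p in p_xy.items() if p > 0)

    eps = 0.1   # BSC crossover probability, uniform input
    p_xy = {(0, 0): 0.5 * (1 - eps), (0, 1): 0.5 * eps,
            (1, 0): 0.5 * eps,       (1, 1): 0.5 * (1 - eps)}
    print(mutual_information(p_xy))   # about 1 - h(0.1) = 0.531 bits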

LLR: Random Variable + Noise = Random Variable


For the received signal from the AWGN channel,

$$z = x + n, \qquad x \in \{+1,-1\},\quad n \sim \mathcal{N}(0,\sigma_n^2),$$

the conditional pdf is

$$p(z \mid X = x) = \frac{1}{\sqrt{2\pi}\,\sigma_n}\, e^{-\frac{(z-x)^2}{2\sigma_n^2}}$$

The reliability of $z$ (its log-likelihood ratio $\mathrm{LLR}_z$) is

$$\mathrm{LLR}_z = \log\frac{p(z\mid x=+1)}{p(z\mid x=-1)}
= \log\frac{\frac{1}{\sqrt{2\pi}\sigma_n}\,e^{-\frac{(z-1)^2}{2\sigma_n^2}}}{\frac{1}{\sqrt{2\pi}\sigma_n}\,e^{-\frac{(z+1)^2}{2\sigma_n^2}}}
= \frac{(z+1)^2-(z-1)^2}{2\sigma_n^2}
= \frac{4z}{2\sigma_n^2}
= \frac{2}{\sigma_n^2}\,(x + n)$$
6

$\mathrm{LLR}_z$ is also a random variable, with

mean $\mu_z = \frac{2}{\sigma_n^2}\,x$

variance $\sigma_z^2 = \left(\frac{2}{\sigma_n^2}\right)^2 \sigma_n^2 = \frac{4}{\sigma_n^2}$

so that $\sigma_z^2 = 2\,|\mu_z|$, which is the consistency property.
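A minimal Monte Carlo check of these statistics (the noise standard deviation below is an arbitrary example):

    import random
    import statistics

    sigma_n = 0.8
    x = +1                                   # transmitted BPSK symbol
    llrs = [2 / sigma_n**2 * (x + random.gauss(0, sigma_n)) for _ in range(100_000)]

    print(statistics.mean(llrs))             # close to 2 / sigma_n^2 = 3.125
    print(statistics.variance(llrs))         # close to 4 / sigma_n^2 = 6.25
    # the variance is about twice the mean: the consistency property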

Assumptions of the EXIT Chart

• For large block lengths, the a priori values ($\mathrm{LLR}_A$) are almost uncorrelated with the received-signal LLRs ($\mathrm{LLR}_Z$).

• The pdf of $\mathrm{LLR}_A$ approaches a Gaussian-like distribution (law-of-large-numbers / central-limit argument).

From the previous derivation, we can model the a priori input $\mathrm{LLR}_A$ as

$$\mathrm{LLR}_A = \mu_A x + n_A, \qquad \text{with } \sigma_{\mathrm{LLR}_A}^2 = 2\mu_A$$

Then the pdf of the a priori LLR is

$$p_{\mathrm{LLR}_A}(\xi \mid X = x) = \frac{1}{\sqrt{2\pi}\,\sigma_{\mathrm{LLR}_A}}\, e^{-\frac{\left(\xi - \frac{\sigma_{\mathrm{LLR}_A}^2}{2}x\right)^2}{2\sigma_{\mathrm{LLR}_A}^2}}$$

Mutual Information of the a priori $\mathrm{LLR}_A$ and $X$

\begin{align*}
I(X;A) &= \sum_{x} p(x)\int_{\mathrm{LLR}_A} p_{\mathrm{LLR}_A}(\xi\mid x)\,
   \log_2\frac{p(\xi\mid x)}{\frac12 p(\xi\mid x=+1)+\frac12 p(\xi\mid x=-1)}\,d\xi\\
&= \frac12\int_{\mathrm{LLR}_A} p_{\mathrm{LLR}_A}(\xi\mid x=+1)\,
   \log_2\frac{p(\xi\mid x=+1)}{\frac12 p(\xi\mid x=+1)+\frac12 p(\xi\mid x=-1)}\,d\xi
   && \text{(part A)}\\
&\quad+\frac12\int_{\mathrm{LLR}_A} p_{\mathrm{LLR}_A}(\xi\mid x=-1)\,
   \log_2\frac{p(\xi\mid x=-1)}{\frac12 p(\xi\mid x=+1)+\frac12 p(\xi\mid x=-1)}\,d\xi
   && \text{(part B)}\\
&= 2\cdot\frac12\int_{\mathrm{LLR}_A} p_{\mathrm{LLR}_A}(\xi\mid x=+1)\,
   \log_2\frac{p(\xi\mid x=+1)}{\frac12 p(\xi\mid x=+1)+\frac12 p(\xi\mid x=-1)}\,d\xi
   && \text{(both integrals are equal)}\\
&= \int_{\mathrm{LLR}_A} p_{\mathrm{LLR}_A}(\xi\mid x=+1)\,
   \log_2\frac{2\,p(\xi\mid x=+1)}{p(\xi\mid x=+1)+p(\xi\mid x=-1)}\,d\xi
\end{align*}

Given $\sigma_{\mathrm{LLR}_A}^2 = \sigma_A^2$:

\begin{align*}
I(X;A) &= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_A}\,
   e^{-\frac{(\xi-\sigma_A^2/2)^2}{2\sigma_A^2}}
   \log_2\frac{2\,e^{-\frac{(\xi-\sigma_A^2/2)^2}{2\sigma_A^2}}}
   {e^{-\frac{(\xi-\sigma_A^2/2)^2}{2\sigma_A^2}}+e^{-\frac{(\xi+\sigma_A^2/2)^2}{2\sigma_A^2}}}\,d\xi\\
&= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_A}\,
   e^{-\frac{(\xi-\sigma_A^2/2)^2}{2\sigma_A^2}}
   \left(\log_2 2+\log_2\frac{1}{1+e^{-\frac{2\sigma_A^2\xi}{2\sigma_A^2}}}\right)d\xi\\
&= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_A}\,
   e^{-\frac{(\xi-\sigma_A^2/2)^2}{2\sigma_A^2}}
   \left(1+\log_2\frac{1}{1+e^{-\xi}}\right)d\xi\\
&= 1-\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_A}\,
   e^{-\frac{(\xi-\sigma_A^2/2)^2}{2\sigma_A^2}}\log_2\!\left(1+e^{-\xi}\right)d\xi\\
&= 1-\mathbb{E}_{\xi\sim\mathrm{LLR}_A}\!\left[\log_2\!\left(1+e^{-\xi}\right)\right]
\end{align*}

From ergodicity we can approximate the expectation by a time average:

$$I(X;A) \approx 1-\frac{1}{N}\sum_{i=0}^{N-1}\log_2\!\left(1+e^{-x_i A_i}\right)$$

where $A_i = \mu_A x_i + n_{A,i}$ and $\mu_A = \frac{\sigma_A^2}{2}$ (equivalently $\frac{2}{\sigma_n^2}$ when the a priori LLR is modeled as the LLR of an equivalent AWGN channel with noise variance $\sigma_n^2$).
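A minimal Monte Carlo sketch of this approximation, i.e. of the mapping from sigma_A to I(X;A) (often called the J-function in the EXIT-chart literature); the sample size is an arbitrary choice:

    import math
    import random

    def J(sigma_A, N=200_000):
        # I(X;A) ~ 1 - (1/N) * sum_i log2(1 + exp(-x_i * A_i)),
        # with A_i = mu_A * x_i + n_i,  mu_A = sigma_A**2 / 2,  n_i ~ N(0, sigma_A**2)
        mu_A = sigma_A ** 2 / 2
        total = 0.0
        for _ in range(N):
            x = random.choice((-1, +1))
            A = mu_A * x + random.gauss(0, sigma_A)
            total += math.log2(1 + math.exp(-x * A))
        return 1 - total / N

    for sigma_A in (0.5, 1.0, 2.0, 4.0):
        print(sigma_A, round(J(sigma_A), 3))
    # I(X;A) increases monotonically from 0 toward 1 as sigma_A grows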

With this expectation form of the mutual information, we can measure both $I(X;A)$ and $I(X;E)$ under the same a priori variance condition ($\sigma_A^2$). Because both mutual informations are monotonic in $\sigma_A$, there exists a transfer function between $I(A;X)$ and $I(E;X)$. We can use statistical (Monte Carlo) methods to plot this transfer curve.

Because a turbo code consists of two constituent decoders, each decoder's output (extrinsic information) becomes the other's input (a priori information). We can therefore flip one decoder's chart about the diagonal and overlay it on the other's to visualize the iterative decoding process, as in the plotting sketch below.
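A minimal plotting sketch of this flip-and-overlay idea (the transfer curve used here is a made-up placeholder, not the measured characteristic of any particular constituent decoder):

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder transfer function I_E = T(I_A) for one constituent decoder;
    # a real EXIT chart would use the Monte Carlo measurements described above.
    I_A = np.linspace(0.0, 1.0, 101)
    I_E = 0.4 + 0.6 * I_A ** 1.2            # hypothetical, monotonically increasing

    plt.plot(I_A, I_E, label="decoder 1: I_A1 -> I_E1")
    plt.plot(I_E, I_A, label="decoder 2 (flipped): I_E2 -> I_A2")   # axes swapped
    plt.xlabel("I_A1 = I_E2")
    plt.ylabel("I_E1 = I_A2")
    plt.legend()
    plt.title("EXIT chart (toy example)")
    plt.show()

The gap between the two curves is where the staircase decoding trajectory passes as extrinsic information is exchanged over the iterations.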
