
LECTURE 04 POLAR CODE

June 18, 2019

Li-Wei Liu
National Chiao Tung University
Department of Electronic Engineering
Contents

1 Introduction
    1.1 Intro
        1.1.1 Channel Combining W_N : X^N → Y^N
        1.1.2 Channel Splitting W_N^(i) : X → Y^N × X^(i−1)
        1.1.3 Channel Polarization

2 Encoding of Polar Code
    2.1 Encoding
    2.2 Alternative Realization

3 Polar Decoding
Chapter 1

Introduction

1.1 Intro
• Proposed by Erdal Arikan in 2008

• Provably capacity-achieving for any symmetric binary-input discrete memoryless channel (B-DMC)

• Usually concatenated with a CRC to improve performance

• Better than turbo or LDPC codes at "short" block lengths

1.1.1 Channel Combining W_N : X^N → Y^N


Mapping from the input X^N to the output Y^N.

Symbol Representation

W_N^(i) : transition probability of the i-th bit channel at code length N (erasure or bit-flipping)

a_i^j : the i-th through j-th elements of vector a
a_{i,e}^j : the even-indexed elements of a from index i to j
a_{i,o}^j : the odd-indexed elements of a from index i to j
0_1^N : the all-zero vector of length N
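To make this indexing concrete, here is a minimal Python sketch of the sub-vector notation (assuming 1-indexed, inclusive ranges; the helper names are mine, not from the lecture):

import numpy as np

def sub(a, i, j):
    # a_i^j : elements i through j of a (1-indexed, inclusive)
    return a[i - 1 : j]

def sub_e(a, i, j):
    # a_{i,e}^j : even-indexed elements of a within positions i..j
    return np.array([a[k - 1] for k in range(i, j + 1) if k % 2 == 0])

def sub_o(a, i, j):
    # a_{i,o}^j : odd-indexed elements of a within positions i..j
    return np.array([a[k - 1] for k in range(i, j + 1) if k % 2 == 1])

u = np.array([1, 0, 1, 1, 0, 1, 0, 0])
print(sub(u, 2, 5), sub_e(u, 1, 8), sub_o(u, 1, 8))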


Joint Transition Probability of Combining Channel

[Figure 1.1: Channel combining. (a) N = 2 case; (b) N = 4 case; (c) general N case.]

In the Fig. 1.1a case (N = 2):

W_2(y_1^2 | u_1^2) = W(y_1 | u_1 ⊕ u_2) W(y_2 | u_2)
                   = W(y_1 | x_1) W(y_2 | x_2)

In the Fig. 1.1b case (N = 4):

W_4(y_1^4 | u_1^4) = W_2(y_1^2 | u_1 ⊕ u_2, u_3 ⊕ u_4) W_2(y_3^4 | u_2, u_4)
                   = W_2(y_1^2 | u_{1,o}^4 ⊕ u_{1,e}^4) W_2(y_3^4 | u_{1,e}^4)
                   = W_2(y_1^2 | v_1^2) W_2(y_3^4 | v_3^4)

and we can further extend this to W_N, where N is a power of 2.


In the Fig. 1.1c case (general N):

W_N(y_1^N | u_1^N) = W_{N/2}(y_1^{N/2} | u_{1,o}^N ⊕ u_{1,e}^N) × W_{N/2}(y_{N/2+1}^N | u_{1,e}^N)

1.1.2 Channel Splitting W_N^(i) : X → Y^N × X^(i−1)


Mapping the input X into the outputs Y^N and X^(i−1), in order to analyze the individual bit channels.

Chain Rule of Mutual Information:

I(X_1, X_2, . . . , X_N ; Y^N) = Σ_{n=1}^N I(X_n ; Y^N | X_{n−1}, X_{n−2}, . . . , X_1)

Channel Capacity of W_N

C(W_N) = I(U^N ; Y^N)
       = Σ_{i=1}^N I(u_i ; y_1^N | u_1^{i−1})
       = Σ_{i=1}^N C(W_N^(i))

Theorem 1 General Format


The binary-input channels W_N^(i) can be described by the transition probabilities

W_N^(i)(y_1^N, u_1^{i−1} | u_i) = Σ_{u_{i+1}^N} W_N(y_1^N, u_1^{i−1}, u_{i+1}^N | u_i)
                                = Σ_{u_{i+1}^N} W_N(y_1^N | u_1^N) P(u_1^{i−1}, u_{i+1}^N)
                                = Σ_{u_{i+1}^N} (1 / 2^{N−1}) W_N(y_1^N | u_1^N)
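As a sanity check, Theorem 1 can be evaluated by brute force for small N. A sketch assuming a BSC with crossover probability p = 0.1 and the butterfly map x_1^N = u_1^N F^{⊗n} (the bit-reversal ordering is omitted here; all names are illustrative):

import itertools
import numpy as np

p = 0.1                                                   # BSC crossover probability
W = {(0, 0): 1 - p, (1, 0): p, (0, 1): p, (1, 1): 1 - p}  # W[y, x]

def encode(u):
    # x = u F^{(x)n}, computed by the recursive butterfly
    if len(u) == 1:
        return u
    h = len(u) // 2
    return encode([a ^ b for a, b in zip(u[:h], u[h:])]) + encode(u[h:])

def W_N(y, u):
    # W_N(y_1^N | u_1^N) = prod_k W(y_k | x_k)
    return np.prod([W[yk, xk] for yk, xk in zip(y, encode(list(u)))])

def W_split(y, u_past, u_i):
    # Theorem 1: sum over u_{i+1}^N of 2^{-(N-1)} W_N(y_1^N | u_1^N)
    N, i = len(y), len(u_past) + 1
    return sum(W_N(y, tuple(u_past) + (u_i,) + tail)
               for tail in itertools.product([0, 1], repeat=N - i)) / 2 ** (N - 1)

# Each split channel is a valid channel: probabilities sum to 1 for each u_i
N, i = 4, 2
for u_i in (0, 1):
    s = sum(W_split(y, past, u_i)
            for y in itertools.product([0, 1], repeat=N)
            for past in itertools.product([0, 1], repeat=i - 1))
    print(u_i, round(s, 6))   # -> 1.0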

Theorem 2 Recursive Faster Format


For any n ≥ 0, N = 2^n, 1 ≤ i ≤ N:

odd channel

W_{2N}^(2i−1)(y_1^{2N}, u_1^{2i−2} | u_{2i−1}) = Σ_{u_{2i}} (1/2) W_N^(i)(y_1^N, u_{1,o}^{2i−2} ⊕ u_{1,e}^{2i−2} | u_{2i−1} ⊕ u_{2i}) W_N^(i)(y_{N+1}^{2N}, u_{1,e}^{2i−2} | u_{2i})

even channel

W_{2N}^(2i)(y_1^{2N}, u_1^{2i−1} | u_{2i}) = (1/2) W_N^(i)(y_1^N, u_{1,o}^{2i−2} ⊕ u_{1,e}^{2i−2} | u_{2i−1} ⊕ u_{2i}) W_N^(i)(y_{N+1}^{2N}, u_{1,e}^{2i−2} | u_{2i})

1.1.3 Channel Polarization


Given a B-DMC W, there are two channel parameters.

• Symmetric Capacity (the highest rate of reliable communication across channel W):

I(W) = Σ_{y∈Y} Σ_{x∈X} (1/2) W(y|x) log_2( W(y|x) / ((1/2) W(y|0) + (1/2) W(y|1)) )

• Bhattacharyya Parameter (an upper bound on the probability of ML decision error):

Z(W) = Σ_{y∈Y} √( W(y|0) W(y|1) )

Relation between I(W) and Z(W):


For any binary-input discrete memoryless channel:

I(W) ≥ log_2( 2 / (1 + Z(W)) )

I(W) ≤ √( 1 − Z(W)^2 )
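Both parameters and the two bounds are easy to check numerically. A minimal sketch for a BSC (the value p = 0.1 is an arbitrary illustrative choice):

import numpy as np

p = 0.1
W = {(0, 0): 1 - p, (1, 0): p, (0, 1): p, (1, 1): 1 - p}  # W[y, x]

def I(W):
    # symmetric capacity I(W)
    total = 0.0
    for y in (0, 1):
        denom = 0.5 * W[y, 0] + 0.5 * W[y, 1]
        for x in (0, 1):
            total += 0.5 * W[y, x] * np.log2(W[y, x] / denom)
    return total

def Z(W):
    # Bhattacharyya parameter Z(W)
    return sum(np.sqrt(W[y, 0] * W[y, 1]) for y in (0, 1))

iw, zw = I(W), Z(W)
print(iw, zw)                    # -> 0.531..., 0.6
assert np.log2(2 / (1 + zw)) <= iw <= np.sqrt(1 - zw ** 2)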

Symmetric Capacity

Analyzing the channels through the symmetric capacity for the Binary Erasure Channel (BEC):

I(W_N^(2i−1)) = I(W_{N/2}^(i))^2                      (1.1)

I(W_N^(2i)) = 2 I(W_{N/2}^(i)) − I(W_{N/2}^(i))^2     (1.2)

with I(W_1^(1)) = 1 − ε.
These recursions are exact only for BECs; there is no comparably simple algorithm for a general B-DMC W.

Bhattacharyya Parameter

Analyzing the channels through the ML decision error probability:

Z(W_N^(2i−1)) = 2 Z(W_{N/2}^(i)) − Z(W_{N/2}^(i))^2   (1.3)

Z(W_N^(2i)) = Z(W_{N/2}^(i))^2                        (1.4)

with Z(W_1^(1)) = ε.
Equality in (1.3) holds only for BECs; for a general B-DMC W it is an upper bound, while (1.4) holds exactly.
The higher Z(W), the higher the error probability → bad channel → frozen set A^c.
The lower Z(W), the lower the error probability → good channel → information set A.
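Since I(W) = 1 − Z(W) for a BEC, recursions (1.1)-(1.4) give the standard BEC code construction: track the Z-parameters of all bit channels and freeze the worst ones. A minimal sketch (the function name and the ε = 0.5, N = 8, K = 4 example are mine):

import numpy as np

def bec_construct(eps, n, K):
    # Z-parameters of all 2^n bit channels of a BEC(eps), eqs. (1.3)-(1.4):
    # channel 2i-1 gets 2Z - Z^2 (bad), channel 2i gets Z^2 (good).
    z = np.array([eps])
    for _ in range(n):
        znew = np.empty(2 * z.size)
        znew[0::2] = 2 * z - z ** 2   # odd channels: degraded
        znew[1::2] = z ** 2           # even channels: upgraded
        z = znew
    info = np.sort(np.argsort(z)[:K])  # K most reliable positions (0-indexed)
    return z, info

z, info = bec_construct(eps=0.5, n=3, K=4)
print(np.round(z, 3))   # polarized: values drift toward 0 or 1
print(info)             # -> [3 5 6 7]; the complement {0, 1, 2, 4} is the frozen set A^c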
Chapter 2

Encoding of Polar Code

2.1 Encoding
G_N = (I_{N/2} ⊗ F) R_N (I_2 ⊗ G_{N/2})

• I_{N/2} ⊗ F : the XOR stage, combining u_{1,o}^N ⊕ u_{1,e}^N
• R_N : the even-down, odd-up permutation (reverse shuffle)
• I_2 ⊗ G_{N/2} : concatenation with two copies of the smaller generator G_{N/2}

Choose the indices with Z(W_N^(i)) near 0 as the information set carrying the message bits; the remaining indices form the frozen set.
A block of length N is encoded as x_1^N = u_1^N G_N, where N = 2^n for some n ≥ 0.
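A minimal recursive encoder sketch. It computes x_1^N = u_1^N F^{⊗n} directly, i.e. the bit-reversal B_N of Section 2.2 is absorbed into the index convention; the message placement reuses the N = 8 information set {3, 5, 6, 7} from the BEC construction sketch above (0-indexed, my example):

import numpy as np

def polar_encode(u):
    # x = u G_N by the recursive butterfly: u (F (x) G) = ((u_a xor u_b) G, u_b G)
    N = len(u)
    if N == 1:
        return u.copy()
    h = N // 2
    top = polar_encode(u[:h] ^ u[h:])   # (u_a xor u_b) G_{N/2}
    bot = polar_encode(u[h:])           # u_b G_{N/2}
    return np.concatenate([top, bot])

u = np.zeros(8, dtype=int)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]          # message bits on the information set
x = polar_encode(u)
print(x)                                # -> [1 0 1 0 0 1 0 1]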

2.2 Alternative Realization


Target: move all permutations to the first stage and the XOR operations to the later stages.

If A, B, C and D are matrices of such sizes that one can form the matrix products AC and BD, then

(AC) ⊗ (BD) = (A ⊗ B)(C ⊗ D)                          (2.1)

Since I_{N/2}, F, and R_N each equal their own inverse, exchanging input and output ports (the DSP analysis method) yields the same result:

(I_{N/2} ⊗ F) R_N = R_N (F ⊗ I_{N/2})                 (2.2)

G_N = {(I_{N/2} ⊗ F) R_N}(I_2 ⊗ G_{N/2})
    = . . .
    = B_N F^{⊗n}

where B_N := R_N (I_2 ⊗ R_{N/2})(I_4 ⊗ R_{N/4}) . . . (I_{N/2} ⊗ R_2) is equivalent to the bit-reversal permutation.
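The claim that B_N equals the bit-reversal permutation can be checked numerically from the defining product. A sketch with permutation matrices acting on row vectors (u ↦ u·B; the helper names are mine):

import numpy as np

def R(N):
    # Reverse shuffle R_N: odd-indexed inputs first, then even-indexed ones.
    perm = list(range(0, N, 2)) + list(range(1, N, 2))
    return np.eye(N, dtype=int)[:, perm]

def B_product(N):
    # B_N = R_N (I_2 (x) R_{N/2}) (I_4 (x) R_{N/4}) ... (I_{N/2} (x) R_2)
    B, k, m = R(N), 2, N // 2
    while m >= 2:
        B = B @ np.kron(np.eye(k, dtype=int), R(m))
        k, m = 2 * k, m // 2
    return B

def bit_reversal(N):
    n = N.bit_length() - 1
    perm = [int(format(i, f"0{n}b")[::-1], 2) for i in range(N)]
    return np.eye(N, dtype=int)[:, perm]

assert np.array_equal(B_product(8), bit_reversal(8))
print("B_8 equals the bit-reversal permutation")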


Chapter 3

Polar Decoding

1. Successive Cancellation Decoding (SC)

   • List Successive Cancellation Decoding (LSC)

   • CRC-Aided LSC

2. Belief Propagation

Successive Cancellation Decoding

The SC decoder observes (y_1^N, u_{A^c}) and generates an estimate û_1^N of u_1^N.

• y_1^N : the channel output seen by the decoder

• u_{A^c} : the frozen bits (the positions with higher Bhattacharyya parameters)

• u_1^N : the full input of the encoder

Compute the full set of likelihood ratios:

l_N^(i)(y_1^N, û_1^{i−1}) = W_N^(i)(y_1^N, û_1^{i−1} | 0) / W_N^(i)(y_1^N, û_1^{i−1} | 1)

Decision:

û_i = 0, if l_N^(i)(y_1^N, û_1^{i−1}) ≥ 1;   û_i = 1, otherwise.

Fundamental Element of the Decoding Algorithm

l_1 : LR from the upper part of the channel (the bad one)

l_2 : LR from the lower part of the channel (the good one)

Recursive formula:

l_1 = l_{N/2}^(i)(y_1^{N/2}, û_{1,o}^{2i−2} ⊕ û_{1,e}^{2i−2})

l_2 = l_{N/2}^(i)(y_{N/2+1}^N, û_{1,e}^{2i−2})

l_N^(2i−1)(y_1^N, û_1^{2i−2}) = (l_1 l_2 + 1) / (l_1 + l_2)

l_N^(2i)(y_1^N, û_1^{2i−1}) = l_1^{1 − 2û_{2i−1}} × l_2

Numerical Issue (Log-Likelihood Ratio)


To avoid numerical underflow in the products above, the recursion is carried out in the log domain, L = ln l.

MinSum Format

In the LLR domain the odd (check-node) update L^(2i−1) = 2 tanh^{−1}( tanh(L_1/2) tanh(L_2/2) ) is approximated by sign(L_1) sign(L_2) min(|L_1|, |L_2|), and the even update becomes L^(2i) = L_2 + (1 − 2û_{2i−1}) L_1.
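Putting the LR recursion, the min-sum approximation, and the decision rule together gives a compact recursive SC decoder. A sketch in the LLR domain, matching the F^{⊗n} encoder convention used in the Section 2.1 sketch; the noiseless N = 8 example reuses the codeword and frozen set from above:

import numpy as np

def f_minsum(l1, l2):
    # check-node update: min-sum approximation of 2*atanh(tanh(l1/2)*tanh(l2/2))
    return np.sign(l1) * np.sign(l2) * np.minimum(np.abs(l1), np.abs(l2))

def g(l1, l2, b):
    # variable-node update; b is the re-encoded decision from the upper half
    return l2 + (1 - 2 * b) * l1

def sc_decode(llr, frozen):
    # returns (u_hat, x_hat); x_hat is the re-encoded partial sum
    if len(llr) == 1:
        u = np.array([0 if (frozen[0] or llr[0] >= 0) else 1])
        return u, u.copy()
    h = len(llr) // 2
    u1, x1 = sc_decode(f_minsum(llr[:h], llr[h:]), frozen[:h])  # bad half first
    u2, x2 = sc_decode(g(llr[:h], llr[h:], x1), frozen[h:])     # then good half
    return np.concatenate([u1, u2]), np.concatenate([x1 ^ x2, x2])

frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)  # A^c = {0, 1, 2, 4}
x = np.array([1, 0, 1, 0, 0, 1, 0, 1])    # encoding of u = [0,0,0,1,0,0,1,1]
llr = (1 - 2 * x) * 4.0                   # noiseless LLRs, log W(y|0)/W(y|1)
u_hat, _ = sc_decode(llr, frozen)
print(u_hat)                              # -> [0 0 0 1 0 0 1 1]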

SC Decoding

The decoding order follows the bit reversal.

The activation level of u_i is counted as (the number of consecutive 0s from the MSB of the index) + 1, except for u_0, which activates all levels.
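A small sketch of the stated scheduling rule, as I read it ("number of consecutive 0s from the MSB of the index, plus 1"; the function name and the n = 3 example are mine):

def activation_level(i, n):
    # leading zeros of the n-bit index, plus 1; u_0 activates all n levels
    if i == 0:
        return n
    return format(i, f"0{n}b").index("1") + 1

print([activation_level(i, 3) for i in range(8)])  # -> [3, 3, 2, 2, 1, 1, 1, 1]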

List Successive Cancellation Decoding

Purpose: avoid error propagation.

• For SC decoding, if a bit û_i is frozen, û_i = 0. Otherwise, the decoding rule is as follows:

  û_i = 0, if W_N^(i)(y_1^N, û_1^{i−1} | 0) / W_N^(i)(y_1^N, û_1^{i−1} | 1) ≥ 1;   û_i = 1, otherwise.

• It can be seen that once a wrong bit decision is made, there is no chance to correct it in the subsequent decoding steps.

Solution: store the partial sequences in a size-L list; at each bit estimation, fork both possible values and compute the probability of each sequence. A toy implementation is sketched after the algorithm below.

1. Initialization: set the null sequence as the only candidate in the initial list and set its probability to 1: S_0 = {∅}, P(∅) = 1.

2. Bit Estimation:

   (a) Expansion: each candidate generates two length-i sequences, with û_i = 0 and û_i = 1 respectively:

       S_i = { û_1^i | û_1^{i−1} ∈ S_{i−1}, û_i ∈ {0, 1} }

       and the probability of each is computed as

       P(û_1^i) = P(û_1^{i−1}) P(û_i = b | u_1^{i−1} = û_1^{i−1}), where b ∈ {0, 1}, with

       P(û_i = b | u_1^{i−1} = û_1^{i−1}) =
         W_N^(i)(y_1^N, û_1^{i−1} | b) / ( W_N^(i)(y_1^N, û_1^{i−1} | 0) + W_N^(i)(y_1^N, û_1^{i−1} | 1) ),  if û_i is an information bit;
         1 if b = 0 (and 0 if b = 1),  if û_i is a frozen bit.

   (b) Competition: if the number of candidates exceeds L, keep the L candidates with the largest probabilities.

3. Sequence Determination: after all N bits are decoded, re-encode every candidate in the list, compute its likelihood, and select the maximal one.

Without pruning, the number of candidate paths grows exponentially; the list size L is therefore kept small.
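A toy illustration of the expansion/competition/selection loop. For clarity, each path metric is computed by brute-force marginalization over the undecided bits, which is exponential in N; a real LSC decoder reuses the SC recursion instead. The channel (BSC, p = 0.1), frozen pattern, and received word are illustrative choices:

import itertools
import numpy as np

p, N, L = 0.1, 4, 2
frozen = [True, True, False, False]
W = {(0, 0): 1 - p, (1, 0): p, (0, 1): p, (1, 1): 1 - p}

def encode(u):
    if len(u) == 1:
        return list(u)
    h = len(u) // 2
    return encode([a ^ b for a, b in zip(u[:h], u[h:])]) + encode(u[h:])

def metric(y, prefix):
    # P(y, prefix), marginalized over all completions of the prefix
    return sum(np.prod([W[yk, xk] for yk, xk
                        in zip(y, encode(list(prefix) + list(tail)))])
               for tail in itertools.product([0, 1], repeat=N - len(prefix)))

y = (1, 0, 1, 1)
paths = {(): 1.0}
for i in range(N):
    expanded = {}
    for prefix in paths:                         # expansion: fork each candidate
        for b in ((0,) if frozen[i] else (0, 1)):
            expanded[prefix + (b,)] = metric(y, prefix + (b,))
    # competition: keep the L partial sequences with the largest probabilities
    paths = dict(sorted(expanded.items(), key=lambda kv: -kv[1])[:L])

best = max(paths, key=paths.get)                 # final selection
print(best, paths[best])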

CRC-Aided LSC Decoding

After all N bits are decoded, the paths in the list are examined one by one in order of decreasing reliability W_N(y_1^N, û_1^N). The decoder outputs the first path that passes the CRC check; if no path passes, a decoding failure is declared.
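A sketch of this selection step, assuming a toy 4-bit CRC (polynomial x^4 + x + 1) was appended to the message before encoding; the polynomial choice and helper names are mine:

def crc4(bits, poly=0b10011):
    # remainder of bits * x^4 modulo the CRC polynomial, MSB first
    reg = 0
    for b in list(bits) + [0, 0, 0, 0]:
        reg = (reg << 1) | b
        if reg & 0b10000:
            reg ^= poly
    return [(reg >> k) & 1 for k in (3, 2, 1, 0)]

def crc_select(candidates_by_reliability):
    # candidates sorted by decreasing W_N(y_1^N, u_1^N); last 4 bits carry the CRC
    for u in candidates_by_reliability:
        if list(u[-4:]) == crc4(u[:-4]):
            return u
    return None                                   # no pass -> decoding failure

msg = [1, 0, 1, 1]
print(crc_select([tuple(msg + crc4(msg))]))       # the appended CRC checks out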

Belief Propagation

The factor graph of a polar code is equivalent to an LDPC-like graph.

• Nodes (1, ∗) denote the source vector

• Nodes (n+1, ∗) denote the codeword

Node (i, j): the i-th level from left to right, the j-th node from top to bottom.

Source vector initialization:

The messages R_{1,j} in LLR format:

R_{1,j} = 0,     if j is an information bit;
R_{1,j} = +∞,    if j is a frozen bit 0;
R_{1,j} = −∞,    if j is a frozen bit 1.

Channel value initialization:

The messages L_{n+1,j} in LLR format:

L_{n+1,j} = log( W(y_j | x_j = 0) / W(y_j | x_j = 1) ),   1 ≤ j ≤ N
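A small sketch of both initializations, using a large constant as a stand-in for ±∞ and 0-based arrays (so R[0] holds level 1 and L[n] holds level n + 1); the frozen pattern and channel LLRs are illustrative:

import numpy as np

N, n = 8, 3
INF = 1e30                                  # stand-in for +infinity
frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)  # all frozen bits set to 0
y_llr = np.array([4.0, -4.0, 4.0, -4.0, -4.0, 4.0, -4.0, 4.0])

R = np.zeros((n + 1, N))                    # left-to-right messages
R[0, frozen] = INF                          # frozen 0 -> +inf (a frozen 1 would get -inf)

L = np.zeros((n + 1, N))                    # right-to-left messages
L[n, :] = y_llr                             # L_{n+1,j} = log W(y_j|0)/W(y_j|1)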

f-function

The f-function serves as the check-node update (the min-sum form given earlier).
Generally, BP performs slightly better than successive cancellation decoding.
