Satellite Communication Systems
Mohamed Khedr
http://webmail.aast.edu/~khedr
Syllabus (tentative)
Week 1  Overview
Week 2  Orbits and constellations: GEO, MEO and LEO
Week 3  Satellite space segment; propagation and satellite links; channel modelling
Week 4  Satellite Communications Techniques
Week 5  Satellite Communications Techniques II
Week 6  Satellite Communications Techniques III
Week 7  Satellite error correction techniques
Week 8  Satellite error correction techniques II; multiple access
Week 9  Satellites in networks I; INTELSAT systems; VSAT networks; GPS
Week 10 GEO, MEO and LEO mobile communications; INMARSAT systems; Iridium, Globalstar, Odyssey
Week 11 Presentations
Week 12 Presentations
Week 13 Presentations
Week 14 Presentations
Week 15 Presentations
Block diagram of a DCS
[Figure: transmit chain: Format → Source encode → Channel encode → Pulse modulate → Bandpass modulate (digital modulation) → Channel; receive chain: Demodulate and sample → Detect (digital demodulation) → Channel decode → Source decode → Format.]
What is channel coding?
Channel coding:
Transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, ...).
Waveform coding: transforming waveforms to better waveforms.
Structured sequences: transforming data sequences into better sequences, having structured redundancy.
"Better" in the sense of making the decision process less subject to errors.
What is channel coding?
Coding is the mapping of binary source output sequences (usually) of length k into binary channel input sequences of length n (n > k).
A block code is denoted by (n,k).
Binary coding produces 2^k codewords of length n. The extra bits in the codewords are used for error detection/correction.
In this course we concentrate on two coding types, both realized with binary numbers:
Block codes: the mapping of the information source into channel inputs is done independently: the encoder output depends only on the current block of the input sequence.
Convolutional codes: each source bit influences n(L+1) channel input bits, where n(L+1) is the constraint length and L is the memory depth. These codes are denoted by (n,k,L).
k bits → (n,k) block coder → n bits
Error control techniques
Automatic Repeat reQuest (ARQ)
Full-duplex connection, error detection codes.
The receiver sends feedback to the transmitter, indicating whether an error is detected in the received packet (Not-Acknowledgement, NACK) or not (Acknowledgement, ACK).
The transmitter retransmits the previously sent packet if it receives a NACK.
Forward Error Correction (FEC)
Simplex connection, error correction codes.
The receiver tries to correct some errors.
Hybrid ARQ (ARQ+FEC)
Full-duplex, error detection and correction codes.
Why use error correction coding?
Error performance vs. bandwidth
Power vs. bandwidth
Data rate vs. bandwidth
Capacity vs. bandwidth
[Figure: P_B versus E_b/N_0 (dB) for an uncoded and a coded system; at a fixed bit-error probability (points A through F on the curves), the horizontal gap between the uncoded and coded curves is the coding gain.]
Coding gain:
For a given bit-error probability, the reduction in E_b/N_0 that can be realized through the use of the code:
G [dB] = (E_b/N_0)_u [dB] − (E_b/N_0)_c [dB]
Channel models
Discrete memoryless channels
Discrete input, discrete output
Binary Symmetric channels
Binary input, binary output
Gaussian channels
Discrete input, continuous output
Linear block codes
Let us first review some basic definitions that are useful in understanding linear block codes.
Some definitions
Binary field:
The set {0,1}, under modulo-2 binary addition and multiplication, forms a field.
The binary field is also called the Galois field, GF(2).
Addition:          Multiplication:
0 ⊕ 0 = 0          0 · 0 = 0
0 ⊕ 1 = 1          0 · 1 = 0
1 ⊕ 0 = 1          1 · 0 = 0
1 ⊕ 1 = 0          1 · 1 = 1
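As a side note (not part of the original slides), these two tables are exactly the XOR and AND operations on single bits, so GF(2) arithmetic is trivial to experiment with; a minimal Python sketch, with illustrative function names:

```python
# GF(2) arithmetic: addition is XOR, multiplication is AND.
def gf2_add(a, b):
    return a ^ b

def gf2_mul(a, b):
    return a & b

# Reproduce the addition and multiplication tables from the slide.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} = {gf2_add(a, b)}   {a} * {b} = {gf2_mul(a, b)}")
```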
Some definitions…
Examples of vector spaces:
The set of binary n-tuples, denoted by V_n. Example:
V_4 = {(0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111), (1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111)}
Vector subspace:
A subset S of the vector space V_n is called a subspace if:
The all-zero vector is in S.
The sum of any two vectors in S is also in S.
Example: {(0000), (0101), (1010), (1111)} is a subspace of V_4.
Some definitions…
Spanning set:
A collection of vectors G = {v_1, v_2, ..., v_n}, the linear combinations of which include all vectors in a vector space V, is said to be a spanning set for V or to span V.
Example: {(1000), (0110), (1100), (0011), (1001)} spans V_4.
Bases:
A spanning set for V that has minimal cardinality is called a basis for V.
Cardinality of a set is the number of objects in the set.
Example: {(1000), (0100), (0010), (0001)} is a basis for V_4.
Linear block codes
Linear block code (n,k):
A set C ⊂ V_n with cardinality 2^k is called a linear block code if, and only if, it is a subspace of the vector space V_n:
V_k → C ⊂ V_n
Members of C are called codewords.
The all-zero codeword is a codeword.
Any linear combination of codewords is a codeword.
Linear block codes – cont'd
[Figure: the mapping V_k → C ⊂ V_n, defined by the bases of C.]
[Figure: (a) Hamming distance d(c_i, c_j) ≥ 2t + 1: the decision spheres of radius t around c_i and c_j do not overlap. (b) Hamming distance d(c_i, c_j) < 2t: the received vector, denoted by r, may fall in the wrong sphere.]
# of bits for FEC
Want to correct t errors in an (n,k) code.
Data word d = [d_1, d_2, ..., d_k]  =>  2^k data words
Code word c = [c_1, c_2, ..., c_n]  =>  2^n code words
[Figure: mapping from data words d_j to code words c_j, with a corrupted code word c_j' nearby. Example: 2-bit data words {00, 01, 10, 11} mapped into the 3-bit code space {000, 001, ..., 111}.]
Representing codes by vectors
Code strength is measured by the Hamming distance, which tells how different code words are:
Codes are more powerful when their minimum Hamming distance d_min (over all pairs of code words in the code family) is large.
The Hamming distance d(X,Y) is the number of bits that are different between code words.
(n,k) codes can be mapped onto an n-dimensional grid:
[Figure: the 3-bit repetition code and the 3-bit parity code drawn on the 3-cube; marked vertices are valid code words.]
Error Detection
If a code can detect a t-bit error, then c_j' must be within a Hamming sphere of radius t around c_j.
For example, if c_j = 101 and t = 1, then '100', '111', and '001' lie in the Hamming sphere.
[Figure: the 3-bit code space {000, ..., 111} with the Hamming sphere of radius 1 around code word c_j = 101.]
Error Correction
To correct an error, the Hamming spheres around the code words must be non-overlapping:
d_min = 2t + 1
[Figure: the 3-bit code space with non-overlapping Hamming spheres around the code words c_j.]
6D Code Space
[Figure: the 2^6 = 64 points of the six-dimensional code space.]
Block Code Error Detection and Correction
(6,3) code: 2^3 messages mapped into a 2^6-point code space, d_min = 3.
Can detect 2-bit errors, correct 1-bit errors.
Example: 110100 sent, 110101 received; the code word at Hamming distance 1 is recovered.
Erasure: suppose code word 110011 is sent but two digits are erased (xx0011); the correct code word has the smallest Hamming distance over the unerased digits.
Message   Code word   d to 110101   d to xx0011
000       000000      4             2
100       110100      1             3
010       011010      5             2
110       101110      4             3
001       101001      3             2
101       011101      2             3
011       110011      2             0
111       000111      3             1
Geometric View
Want code efficiency, so the space should be packed with as many code words as possible.
Code words should be as far apart as possible to minimize errors.
[Figure: the 2^n n-tuples of V_n, containing the subspace of 2^k codewords.]
Linear block codes – cont'd
The information bit stream is chopped into blocks of k bits.
Each block is encoded into a larger block of n bits.
The coded bits are modulated and sent over the channel.
The reverse procedure is done at the receiver.
Data block (k bits) → Channel encoder → Codeword (n bits)
Code rate: R_c = k/n
Redundant bits: n − k
Linear block codes – cont'd
The Hamming weight of a vector U, denoted by w(U), is the number of nonzero elements in U.
The Hamming distance between two vectors U and V is the number of elements in which they differ:
d(U,V) = w(U ⊕ V)
The minimum distance of a block code is
d_min = min_{i≠j} d(U_i, U_j) = min_i w(U_i)
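These definitions are easy to check numerically. Below is a minimal Python sketch (not from the slides; the function names are illustrative) that computes w(U), d(U,V) and d_min for the (6,3) code used later in the lecture:

```python
from itertools import combinations

def weight(u):
    """Hamming weight w(U): number of nonzero elements."""
    return sum(u)

def distance(u, v):
    """Hamming distance d(U,V) = w(U + V) over GF(2)."""
    return sum(a ^ b for a, b in zip(u, v))

# Codewords of the (6,3) code from the earlier table.
code = [(0,0,0,0,0,0), (1,1,0,1,0,0), (0,1,1,0,1,0), (1,0,1,1,1,0),
        (1,0,1,0,0,1), (0,1,1,1,0,1), (1,1,0,0,1,1), (0,0,0,1,1,1)]

# For a linear code, d_min equals the minimum weight of a nonzero codeword.
d_min = min(weight(c) for c in code if any(c))
assert d_min == min(distance(u, v) for u, v in combinations(code, 2))
print("d_min =", d_min)  # -> 3 for this code
```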
Linear block codes – cont'd
Error detection capability is given by
e = d_min − 1
Error correcting capability t of a code, which is defined as the maximum number of guaranteed correctable errors per codeword, is
t = ⌊(d_min − 1)/2⌋
Linear block codes – cont'd
For memoryless channels, the probability that the decoder commits an erroneous decoding is bounded by
P_M ≤ Σ_{j=t+1}^{n} (n choose j) p^j (1 − p)^(n−j)
where p is the transition probability, or bit error probability, over the channel.
The decoded bit error probability is
P_B ≈ (1/n) Σ_{j=t+1}^{n} j (n choose j) p^j (1 − p)^(n−j)
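As a numeric illustration (not in the original slides), these two sums are straightforward to evaluate; a small Python sketch with illustrative function names:

```python
from math import comb

def block_error_bound(n, t, p):
    """Upper bound on the message error probability P_M of a t-error-correcting (n,k) code."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

def bit_error_approx(n, t, p):
    """Approximate decoded bit error probability P_B."""
    return sum(j * comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1)) / n

# Example: the (6,3) code with t = 1 on a channel with p = 1e-2.
print(block_error_bound(6, 1, 1e-2))   # ~1.4e-3
print(bit_error_approx(6, 1, 1e-2))
```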
Linear block codes – cont'd
Discrete, memoryless, symmetric channel model:
[Figure: binary-input, binary-output channel; Tx bit 0 → Rx bit 0 and Tx bit 1 → Rx bit 1 each with probability 1 − p, crossovers with probability p.]
Note that for coded systems, the coded bits are modulated and transmitted over the channel. For example, for MPSK modulation on AWGN channels (M > 2):
p ≈ (2 / log2 M) Q( √(2 E_c log2 M / N_0) · sin(π/M) )
where E_c is the energy per coded bit, given by E_c = R_c E_b.
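A quick way to get a feel for this expression is to evaluate it; here is a short Python sketch (an illustration, not part of the slides), implementing Q(x) via the complementary error function:

```python
from math import erfc, log2, pi, sin, sqrt

def Q(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def coded_bit_error_mpsk(M, EbN0_dB, Rc):
    """Approximate channel-bit error probability for MPSK (M > 2) with code rate Rc."""
    EbN0 = 10 ** (EbN0_dB / 10)
    EcN0 = Rc * EbN0                 # energy per coded bit: Ec = Rc * Eb
    return (2 / log2(M)) * Q(sqrt(2 * EcN0 * log2(M)) * sin(pi / M))

print(coded_bit_error_mpsk(M=8, EbN0_dB=8.0, Rc=0.5))
```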
Linear block codes – cont'd
A matrix G is constructed by taking as its rows the vectors of the basis of C, {V_1, V_2, ..., V_k} (the mapping V_k → C ⊂ V_n):
G = [V_1; V_2; ...; V_k] =
[v_11 v_12 ... v_1n]
[v_21 v_22 ... v_2n]
[ ...              ]
[v_k1 v_k2 ... v_kn]
Linear block codes – cont'd
Encoding in an (n,k) block code: U = mG
(u_1, u_2, ..., u_n) = (m_1, m_2, ..., m_k) · [V_1; V_2; ...; V_k]
                     = m_1·V_1 + m_2·V_2 + ... + m_k·V_k
The rows of G are linearly independent.
Linear block codes – cont'd
Example: block code (6,3)
G = [V_1; V_2; V_3] =
[1 1 0 1 0 0]
[0 1 1 0 1 0]
[1 0 1 0 0 1]
Message vector   Codeword
000              000000
100              110100
010              011010
110              101110
001              101001
101              011101
011              110011
111              000111
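The table above can be reproduced directly from U = mG with GF(2) arithmetic; a minimal Python sketch (illustrative, not from the slides), using numpy for the matrix product:

```python
import numpy as np

# Generator matrix of the (6,3) example code.
G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def encode(m):
    """Codeword U = mG, with arithmetic reduced modulo 2."""
    return np.mod(np.array(m) @ G, 2)

# Reproduce the message/codeword table.
for m in [(0,0,0), (1,0,0), (0,1,0), (1,1,0), (0,0,1), (1,0,1), (0,1,1), (1,1,1)]:
    print(m, "->", encode(m))
```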
Linear block codes – cont'd
Systematic block code (n,k):
For a systematic code, the first (or last) k elements in the codeword are information bits.
G = [P | I_k]
where I_k is the k×k identity matrix and P is the k×(n−k) parity sub-matrix.
U = (u_1, u_2, ..., u_n) = (p_1, p_2, ..., p_{n−k}, m_1, m_2, ..., m_k)
with parity bits p_i and message bits m_i.
Linear block codes – cont'd
For any linear code we can find an (n−k)×n matrix H whose rows are orthogonal to the rows of G:
G H^T = 0
H is called the parity check matrix, and its rows are linearly independent.
For systematic linear block codes:
H = [I_{n−k} | P^T]
Linear block codes – cont'd
[Figure: Data source → Format → Channel encoding → Modulation → Channel → Demodulation/Detection → Channel decoding → Format → Data sink; the message m is encoded to U, the channel delivers r, and the decoder outputs m̂.]
r = U + e
r = (r_1, r_2, ..., r_n)  received codeword (vector)
e = (e_1, e_2, ..., e_n)  error pattern (vector)
Syndrome testing:
S = r H^T = e H^T
S is the syndrome of r, corresponding to the error pattern e.
Linear block codes – cont'd
Standard array:
1. For the i-th row, i = 2, 3, ..., 2^(n−k), find a vector e_i in V_n of minimum weight which is not already listed in the array.
2. Call this pattern e_i and form the i-th row as the corresponding coset e_i ⊕ C:
U_1 (zero codeword)   U_2               ...   U_{2^k}
e_2                   e_2 ⊕ U_2         ...   e_2 ⊕ U_{2^k}
...
e_{2^(n−k)}           e_{2^(n−k)} ⊕ U_2 ...   e_{2^(n−k)} ⊕ U_{2^k}
The entries of the first column are the coset leaders; each row is a coset.
Linear block codes – cont'd
Standard array and syndrome table decoding:
1. Calculate S = r H^T.
2. Find the coset leader ê = e_i corresponding to S.
3. Calculate Û = r + ê and the corresponding m̂.
Note that Û = r + ê = (U + e) + ê = U + (e + ê).
If ê = e, the error is corrected.
If ê ≠ e, an undetectable decoding error occurs.
Linear block codes – cont'd
Example: standard array for the (6,3) code. The top row lists the codewords; the first column lists the coset leaders, and each row is the corresponding coset:
000000 110100 011010 101110 101001 011101 110011 000111
000001 110101 011011 101111 101000 011100 110010 000110
000010 110110 011000 101100 101011 011111 110001 000101
000100 110000 011110 101010 101101 011001 110111 000011
001000 111100 010010 100110 100001 010101 111011 001111
010000 100100 001010 111110 111001 001101 100011 010111
100000 010100 111010 001110 001001 111101 010011 100111
010001 100101 001011 111111 111000 001100 100010 010110
Linear block codes – cont'd
Error pattern   Syndrome
000000          000
000001          101
000010          011
000100          110
001000          001
010000          010
100000          100
010001          111
Example: U = (101110) is transmitted and r = (001110) is received.
The syndrome of r is computed: S = r H^T = (100).
The error pattern corresponding to this syndrome is ê = (100000).
The corrected vector is estimated: Û = r + ê = (001110) + (100000) = (101110).
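The whole decoding procedure fits in a few lines. Below is a minimal Python sketch (illustrative names, not from the slides) of syndrome-table decoding for this (6,3) code, assuming the systematic form H = [I_3 | P^T] that matches G = [P | I_3]:

```python
import numpy as np

# Parity sub-matrix of the (6,3) code (rows of G are [P | I3]).
P = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
H = np.hstack([np.eye(3, dtype=int), P.T])      # H = [I3 | P^T]

# Syndrome table: map S = e H^T to the minimum-weight error pattern e.
coset_leaders = [np.array(e) for e in
                 [(0,0,0,0,0,0), (0,0,0,0,0,1), (0,0,0,0,1,0), (0,0,0,1,0,0),
                  (0,0,1,0,0,0), (0,1,0,0,0,0), (1,0,0,0,0,0), (0,1,0,0,0,1)]]
table = {tuple(np.mod(e @ H.T, 2)): e for e in coset_leaders}

def decode(r):
    """Syndrome-table decoding: U_hat = r + e_hat over GF(2)."""
    s = tuple(np.mod(np.array(r) @ H.T, 2))     # syndrome S = r H^T
    return np.mod(np.array(r) + table[s], 2)

print(decode((0, 0, 1, 1, 1, 0)))  # -> [1 0 1 1 1 0], the transmitted codeword
```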
Hamming codes
Hamming codes are a subclass of linear block codes and belong to the category of perfect codes.
Hamming codes are expressed as a function of a single integer m ≥ 2:
Code length: n = 2^m − 1
Number of information bits: k = 2^m − m − 1
Number of parity bits: n − k = m
Error correction capability: t = 1
The columns of the parity-check matrix, H, consist of all nonzero binary m-tuples.
Hamming codes
Example: systematic Hamming code (7,4)
H = [I_{3×3} | P^T] =
[1 0 0 0 1 1 1]
[0 1 0 1 0 1 1]
[0 0 1 1 1 0 1]
G = [P | I_{4×4}] =
[0 1 1 1 0 0 0]
[1 0 1 0 1 0 0]
[1 1 0 0 0 1 0]
[1 1 1 0 0 0 1]
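These two matrices can be sanity-checked programmatically; a short Python sketch (not from the slides) verifying the orthogonality G H^T = 0 and the defining Hamming property that the columns of H are all distinct nonzero 3-tuples:

```python
import numpy as np

# Systematic (7,4) Hamming code: G = [P | I4], H = [I3 | P^T].
P = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])
H = np.hstack([np.eye(3, dtype=int), P.T])

# Rows of G and H are orthogonal over GF(2): G H^T = 0.
assert not np.mod(G @ H.T, 2).any()

# Columns of H are the 7 distinct nonzero 3-tuples.
cols = {tuple(c) for c in H.T}
assert len(cols) == 7 and (0, 0, 0) not in cols
print("G H^T = 0, and H has all nonzero 3-tuples as columns")
```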
Example of the block codes
[Figure: P_B versus E_b/N_0 (dB) for block-coded 8PSK and QPSK.]
Convolutional codes
Convolutional codes offer an approach to error control coding substantially different from that of block codes.
A convolutional encoder:
encodes the entire data stream into a single codeword.
does not need to segment the data stream into blocks of fixed size (convolutional codes are often forced into a block structure by periodic truncation).
is a machine with memory.
This fundamental difference in approach imparts a different nature to the design and evaluation of the code:
Block codes are based on algebraic/combinatorial techniques.
Convolutional codes are based on construction techniques.
Convolutional codes – cont'd
A convolutional code is specified by three parameters (n, k, K) or (k/n, K), where
R_c = k/n is the coding rate, determining the number of data bits per coded bit.
In practice, usually k = 1 is chosen, and we assume that from now on.
K is the constraint length of the encoder, where the encoder has K−1 memory elements.
There are different definitions of constraint length in the literature.
Block diagram of the DCS
[Figure: Information source → Rate 1/n convolutional encoder → Modulator → Channel → Demodulator → Rate 1/n convolutional decoder → Information sink.]
Input sequence: m = (m_1, m_2, ..., m_i, ...)
Codeword sequence: U = G(m) = (U_1, U_2, U_3, ..., U_i, ...), where U_i = (u_1i, ..., u_ji, ..., u_ni) is the i-th branch word (n coded bits).
Received sequence: Z = (Z_1, Z_2, Z_3, ..., Z_i, ...), where Z_i = (z_1i, ..., z_ji, ..., z_ni) contains the demodulator outputs for branch word U_i (n outputs per branch word).
Decoded sequence: m̂ = (m̂_1, m̂_2, ..., m̂_i, ...)
A Rate 1/2 Convolutional encoder
Convolutional encoder (rate 1/2, K = 3):
3 shift-register stages, where the first one takes the incoming data bit and the rest form the memory of the encoder.
[Figure: input data bit m enters the shift register; two modulo-2 adders produce the first coded bit u_1 and the second coded bit u_2, which together form the branch word (u_1, u_2).]
A Rate 1/2 Convolutional encoder
Message sequence: m = (101)
Time t_1: register contents 100, branch word u_1 u_2 = 11
Time t_2: register contents 010, branch word u_1 u_2 = 10
Time t_3: register contents 101, branch word u_1 u_2 = 00
Time t_4: register contents 010, branch word u_1 u_2 = 10
A Rate 1/2 Convolutional encoder
Time t_5: register contents 001, branch word u_1 u_2 = 11
Time t_6: register contents 000, branch word u_1 u_2 = 00
Encoder output for m = (101): U = (11 10 00 10 11)
Effective code rate
Initialize the memory before encoding the first bit (all-zero).
Clear out the memory after encoding the last bit (all-zero).
Hence, a tail of zero bits is appended to the data bits.
[Figure: the codeword consists of the encoded data followed by the encoded tail.]
Effective code rate, where L is the number of data bits and k = 1 is assumed:
R_eff = L / (n(L + K − 1)) < R_c
For example, L = 3 data bits through the rate-1/2, K = 3 encoder give R_eff = 3/10, compared with R_c = 1/2.
Encoder representation
Vector representation:
We define n binary vectors with K elements (one vector for each modulo-2 adder). The i-th element in each vector is "1" if the i-th stage in the shift register is connected to the corresponding modulo-2 adder, and "0" otherwise.
Example:
g_1 = (111)
g_2 = (101)
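The vector representation translates directly into code. Here is a minimal Python sketch of the rate-1/2, K = 3 encoder with these generators (illustrative function name; the tail-flushing behavior matches the effective-code-rate slide):

```python
def conv_encode(bits, K=3, g=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 convolutional encoder with generators g1 = (111), g2 = (101).
    Appends K-1 zero tail bits to flush the memory."""
    state = [0] * (K - 1)                   # the K-1 memory elements
    out = []
    for b in list(bits) + [0] * (K - 1):    # data bits plus zero tail
        reg = [b] + state                   # current register contents
        out.append(tuple(sum(r & gi for r, gi in zip(reg, gen)) % 2
                         for gen in g))     # one branch word (u1, u2)
        state = reg[:-1]                    # shift the register
    return out

print(conv_encode([1, 0, 1]))  # -> [(1,1), (1,0), (0,0), (1,0), (1,1)]
```

Running it on m = (101) reproduces the branch words U = (11 10 00 10 11) worked out by hand above.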
Encoder representation – cont'd
Impulse response representation:
The response of the encoder to a single "one" bit that goes through it.
Example:
Register contents   Branch word u_1 u_2
100                 1 1
010                 1 0
001                 1 1
Input sequence:  1 0 0
Output sequence: 11 10 11
By superposition, for m = (101):
Input 1:  11 10 11
Input 0:     00 00 00
Input 1:        11 10 11
Modulo-2 sum: 11 10 00 10 11
Encoder representation – cont'd
Polynomial representation:
We define n generator polynomials, one for each modulo-2 adder. Each polynomial is of degree K−1 or less and describes the connection of the shift register to the corresponding modulo-2 adder.
Example:
g_1(X) = g_0^(1) + g_1^(1)·X + g_2^(1)·X^2 = 1 + X + X^2
g_2(X) = g_0^(2) + g_1^(2)·X + g_2^(2)·X^2 = 1 + X^2
The output sequence is found as follows:
U(X) = m(X)g_1(X) interlaced with m(X)g_2(X)
Encoder representation – cont'd
In more detail, with m(X) = 1 + X^2:
m(X)g_1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4
m(X)g_2(X) = (1 + X^2)(1 + X^2) = 1 + X^4
m(X)g_1(X) = 1 + X + 0·X^2 + X^3 + X^4
m(X)g_2(X) = 1 + 0·X + 0·X^2 + 0·X^3 + X^4
U(X) = (1,1) + (1,0)X + (0,0)X^2 + (1,0)X^3 + (1,1)X^4
U = 11 10 00 10 11
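Since polynomial multiplication over GF(2) is just binary convolution, this derivation can be checked with numpy's convolution; a short illustrative sketch (not part of the slides):

```python
import numpy as np

m  = np.array([1, 0, 1])   # m(X)  = 1 + X^2, lowest degree first
g1 = np.array([1, 1, 1])   # g1(X) = 1 + X + X^2
g2 = np.array([1, 0, 1])   # g2(X) = 1 + X^2

c1 = np.mod(np.convolve(m, g1), 2)   # m(X)g1(X) = 1 + X + X^3 + X^4
c2 = np.mod(np.convolve(m, g2), 2)   # m(X)g2(X) = 1 + X^4

# Interlace the two output streams into branch words.
U = list(zip(c1, c2))
print(U)  # -> [(1,1), (1,0), (0,0), (1,0), (1,1)]
```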
State diagram
A finite-state machine only encounters a finite number of states.
State of a machine: the smallest amount of information that, together with a current input to the machine, can predict the output of the machine.
In a convolutional encoder, the state is represented by the content of the memory.
Hence, there are 2^(K−1) states.
State diagram – cont'd
A state diagram is a way to represent the encoder.
A state diagram contains all the states and all possible transitions between them.
There are only two transitions initiating from a state, and only two transitions ending up in a state.
State diagram – cont'd
[Figure: state diagram with states S_0 = 00, S_1 = 01, S_2 = 10, S_3 = 11; each branch is labeled input/output (branch word): 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10.]
Current state   Input   Next state   Output
S_0 = 00        0       S_0          00
                1       S_2          11
S_1 = 01        0       S_0          11
                1       S_2          00
S_2 = 10        0       S_1          10
                1       S_3          01
S_3 = 11        0       S_1          01
                1       S_3          10
Trellis – cont'd
The trellis diagram is an extension of the state diagram that shows the passage of time.
Example of a section of the trellis for the rate 1/2 code:
[Figure: one trellis section from time t_i to t_{i+1}, with states S_0 = 00, S_1 = 01, S_2 = 10, S_3 = 11 and branches labeled input/output: 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10.]
Trellis – cont'd
A trellis diagram for the example code:
[Figure: the trellis over times t_1 ... t_6, built by repeating the section above.]
Input bits (the last two are tail bits): 1 0 1 0 0
Output bits: 11 10 00 10 11
Trellis – cont'd
[Figure: the same trellis, keeping only the branches reachable from the initial all-zero state and returning to it.]
Input bits (the last two are tail bits): 1 0 1 0 0
Output bits: 11 10 00 10 11
Trellis of an example 1/2 Conv. code
[Figure: the trellis over t_1 ... t_6 with the path corresponding to the input sequence highlighted.]
Input bits (the last two are tail bits): 1 0 1 0 0
Output bits: 11 10 00 10 11
Soft and hard decision decoding
In hard decision:
The demodulator makes a firm or hard decision on whether a one or a zero was transmitted, and provides no other information for the decoder, such as how reliable the decision is.
In soft decision:
The demodulator provides the decoder with some side information together with the decision. The side information gives the decoder a measure of confidence in the decision.
Soft and hard decoding
Regardless of whether the channel outputs hard or soft decisions, the decoding rule remains the same: maximize the probability
ln p(y | x_m) = Σ_{j=0}^{∞} ln p(y_j | x_mj)
However, in soft decoding the decision-region energies must be accounted for, and hence the Euclidean metric d_E is used rather than the Hamming metric.
[Figure: transition probabilities of the quantized channel; the transition for Pr[3|0] is indicated by the arrow.]
Decision regions
Coding can be realized by the soft-decoding or hard-decoding principle.
For soft decoding, the reliability (measured by bit energy) of each decision region must be known.
Example: decoding a BPSK signal. The matched filter output is a continuous number; in AWGN the matched filter output is Gaussian.
For soft decoding, several decision-region partitions are used.
[Figure: Gaussian densities of the matched filter output with the quantization regions marked. The transition probability Pr[3|0] is, e.g., the probability that a transmitted '0' falls into region no. 3.]
Soft and hard decision decoding …
ML soft-decision decoding rule:
Choose the path in the trellis with the minimum Euclidean distance from the received sequence.
ML hard-decision decoding rule:
Choose the path in the trellis with the minimum Hamming distance from the received sequence.
The Viterbi algorithm
The Viterbi algorithm performs maximum likelihood decoding.
It finds a path through the trellis with the largest metric (maximum correlation or minimum distance).
At each step in the trellis, it compares the partial metric of all paths entering each state, and keeps only the path with the largest metric, called the survivor, together with its metric.
Example of hard-decision Viterbi decoding
m = (101), U = (11 10 00 10 11), Z = (11 10 11 10 01)
[Figure: the trellis labeled with branch metrics (Hamming distances to the received branch words) and partial metrics Γ(S(t_i), t_i); the surviving partial metrics at state S_0 are 0, 2, 3, 0, 1, 2 at times t_1 ... t_6.]
Decoded: Û = (11 10 11 00 00), m̂ = (100)
Example of soft-decision Viterbi decoding
m = (101), U = (11 10 00 10 11)
The demodulator now delivers soft values (here ±1 and ±2/3) instead of hard bits.
[Figure: the trellis labeled with soft branch metrics (values such as 1/3, 4/3, 5/3) and partial metrics Γ(S(t_i), t_i), with a final path metric of 14/3.]
Decoded: Û = (11 10 00 10 11), m̂ = (101). With soft decisions, the transmitted sequence is recovered correctly.
Free distance of Convolutional codes
Distance properties:
Since a convolutional encoder generates codewords of various lengths (as opposed to block codes), the following approach is used to find the minimum distance between all pairs of codewords:
Since the code is linear, the minimum distance of the code is the minimum distance between each of the codewords and the all-zero codeword.
This is the minimum distance over the set of all arbitrarily long paths along the trellis that diverge from and remerge with the all-zero path.
It is called the minimum free distance, or the free distance of the code, denoted by d_free or d_f.
Free distance …
[Figure: the trellis with each branch labeled by its Hamming weight; the all-zero path runs along the top. The path diverging from and remerging with the all-zero path with minimum weight accumulates weight 2 + 1 + 2 = 5.]
d_f = 5
Interleaving
Convolutional codes are suitable for memoryless channels with random error events.
Some errors have a bursty nature:
Statistical dependence among successive error events (time-correlation) due to the channel memory.
Examples: errors in multipath fading channels in wireless communications, errors due to switching noise, ...
"Interleaving" makes the channel look like a memoryless channel at the decoder.
Interleaving …
Interleaving is done by spreading the coded symbols in time (interleaving) before transmission.
The reverse is done at the receiver by deinterleaving the received sequence.
"Interleaving" makes bursty errors look random. Hence, convolutional codes can be used.
Types of interleaving:
Block interleaving
Convolutional or cross interleaving
Interleaving …
Consider a code with t = 1 and 3 coded bits per codeword.
Without interleaving, a burst error of length 3 cannot be corrected:
A1 A2 A3 B1 B2 B3 C1 C2 C3  (a burst of 3 puts 2 or 3 errors into a single codeword)
Let us use a 3×3 block interleaver:
Transmitted after interleaving: A1 B1 C1 A2 B2 C2 A3 B3 C3
A burst of length 3 on the channel, after deinterleaving back to
A1 A2 A3 B1 B2 B3 C1 C2 C3, leaves only 1 error in each codeword, which is correctable.
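A minimal Python sketch of this 3×3 block interleaver (illustrative, not from the slides): write the symbols row-wise into a grid, read them out column-wise, and invert the operation at the receiver.

```python
def interleave(symbols, rows=3, cols=3):
    """3x3 block interleaver: write row-wise, read column-wise."""
    grid = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    return [grid[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows=3, cols=3):
    """Inverse operation: regroup column-wise chunks, read row-wise."""
    grid = [symbols[c * rows:(c + 1) * rows] for c in range(cols)]
    return [grid[c][r] for r in range(rows) for c in range(cols)]

tx = ['A1', 'A2', 'A3', 'B1', 'B2', 'B3', 'C1', 'C2', 'C3']
sent = interleave(tx)            # A1 B1 C1 A2 B2 C2 A3 B3 C3
sent[3:6] = ['x', 'x', 'x']      # burst error of length 3 on the channel
rx = deinterleave(sent)
print(rx)  # the burst is spread out: one error in each 3-symbol codeword
```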
Concatenated codes
A concatenated code uses two levels of coding, an inner code and an outer code (higher rate).
Popular concatenated codes: convolutional codes with Viterbi decoding as the inner code and Reed-Solomon codes as the outer code.
The purpose is to reduce the overall complexity, yet achieve the required error performance.
[Figure: Input data → Outer encoder → Interleaver → Inner encoder → Modulate → Channel → Demodulate → Inner decoder → Deinterleaver → Outer decoder → Output data.]
Optimum decoding
If the input sequence messages are equally likely, the optimum decoder which minimizes the probability of error is the maximum likelihood (ML) decoder.
The ML decoder selects a codeword among all the possible codewords which maximizes the likelihood function p(Z | U^(m')), where Z is the received sequence and U^(m') is one of the possible codewords.
ML decoding rule:
Choose U^(m') if p(Z | U^(m')) = max over all U^(m) of p(Z | U^(m))
There are 2^L codewords to search!
ML decoding for memoryless channels
Due to the independent channel statistics for memoryless channels, the likelihood function becomes
p(Z | U^(m)) = Π_{i=1}^{∞} p(Z_i | U_i^(m)) = Π_{i=1}^{∞} Π_{j=1}^{n} p(z_ji | u_ji^(m))
and equivalently, the log-likelihood function becomes
γ_U^(m) = log p(Z | U^(m)) = Σ_{i=1}^{∞} log p(Z_i | U_i^(m)) = Σ_{i=1}^{∞} Σ_{j=1}^{n} log p(z_ji | u_ji^(m))
(path metric = sum of branch metrics = sum of bit metrics)
The path metric up to time index "i" is called the partial path metric.
ML decoding rule:
Choose the path with the maximum metric among all the paths in the trellis.
This path is the "closest" path to the transmitted sequence.
Binary symmetric channels (BSC)
[Figure: BSC between modulator input and demodulator output, with p(1|0) = p(0|1) = p and p(1|1) = p(0|0) = 1 − p.]
If d_m = d(Z, U^(m)) is the Hamming distance between Z and U^(m), then
p(Z | U^(m)) = p^(d_m) (1 − p)^(L_n − d_m)
γ_U^(m) = −d_m log((1 − p)/p) + L_n log(1 − p)
where L_n is the size of the coded sequence.
ML decoding rule:
Choose the path with the minimum Hamming distance from the received sequence.
AWGN channels
For BPSK modulation, the transmitted sequence corresponding to the codeword U^(m) is denoted by S^(m) = (S_1^(m), S_2^(m), ..., S_i^(m), ...), where S_i^(m) = (s_1i^(m), ..., s_ji^(m), ..., s_ni^(m)) and s_ij = ±√E_c.
The log-likelihood function becomes
γ_U^(m) = Σ_{i=1}^{∞} Σ_{j=1}^{n} z_ji s_ji^(m) = <Z, S^(m)>
(the inner product, or correlation, between Z and S^(m))
Maximizing the correlation is equivalent to minimizing the Euclidean distance.
ML decoding rule:
Choose the path with the minimum Euclidean distance to the received sequence.
Soft and hard decisions
In hard decision:
The demodulator makes a firm or hard decision on whether a one or a zero was transmitted, and provides no other information for the decoder, such as how reliable the decision is.
Hence, its output is only zero or one (the output is quantized to only two levels); these are called "hard bits".
Decoding based on hard bits is called "hard-decision decoding".
Soft and hard decision – cont'd
In soft decision:
The demodulator provides the decoder with some side information together with the decision.
The side information gives the decoder a measure of confidence in the decision.
The demodulator outputs, which are called soft bits, are quantized to more than two levels.
Decoding based on soft bits is called "soft-decision decoding".
On AWGN channels a gain of about 2 dB, and on fading channels about 6 dB, is obtained by using soft decoding instead of hard decoding.
The Viterbi algorithm
The Viterbi algorithm performs maximum likelihood decoding.
It finds a path through the trellis with the largest metric (maximum correlation or minimum distance).
It processes the demodulator outputs in an iterative manner.
At each step in the trellis, it compares the metric of all paths entering each state, and keeps only the path with the largest metric, called the survivor, together with its metric.
It proceeds in the trellis by eliminating the least likely paths.
It reduces the decoding complexity to the order of 2^(K−1) survivor computations per trellis stage, instead of an exhaustive search over 2^L codewords.
The Viterbi algorithm – cont'd
Viterbi algorithm:
A. Do the following set-up:
For a data block of L bits, form the trellis. The trellis has L+K−1 sections, or levels, and starts at time t_1 and ends up at time t_{L+K}.
Label all the branches in the trellis with their corresponding branch metric.
For each state S(t_i) ∈ {0, 1, ..., 2^(K−1) − 1} in the trellis at time t_i, define a parameter Γ(S(t_i), t_i).
B. Then, do the following:
The Viterbi algorithm – cont'd
1. Set Γ(0, t_1) = 0 and i = 2.
2. At time t_i, compute the partial path metrics for all the paths entering each state.
3. Set Γ(S(t_i), t_i) equal to the best partial path metric entering each state at time t_i. Keep the survivor path and delete the dead paths from the trellis.
4. If i < L + K, increase i by 1 and return to step 2.
C. Start at state zero at time t_{L+K}. Follow the surviving branches backwards through the trellis. The path thus defined is unique and corresponds to the ML codeword.
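These steps map directly onto a few lines of code. Below is a minimal hard-decision Viterbi sketch in Python for the rate-1/2, K = 3 example code (g_1 = 111, g_2 = 101); the names `step` and `viterbi_hard` are illustrative, and the backward traceback of step C is handled implicitly by carrying each survivor's input bits along with its metric:

```python
# State-transition function of the rate-1/2, K=3 encoder; the state is the
# contents of the two memory elements, newest bit first.
def step(state, bit):
    reg = (bit,) + state                                 # (input, memory)
    out = (reg[0] ^ reg[1] ^ reg[2], reg[0] ^ reg[2])    # g1 = (111), g2 = (101)
    return reg[:2], out

def viterbi_hard(Z, L):
    """Hard-decision Viterbi decoding of L data bits plus K-1 = 2 tail bits."""
    paths = {(0, 0): (0, [])}          # state -> (partial path metric, input bits)
    for i, z in enumerate(Z):
        new = {}
        for state, (metric, bits) in paths.items():
            for b in ((0, 1) if i < L else (0,)):         # tail branches carry zeros
                nxt, out = step(state, b)
                m = metric + sum(o != r for o, r in zip(out, z))  # Hamming branch metric
                if nxt not in new or m < new[nxt][0]:     # keep only the survivor
                    new[nxt] = (m, bits + [b])
        paths = new
    return paths[(0, 0)][1][:L]        # read the path ending in the all-zero state

Z = [(1, 1), (1, 0), (1, 1), (1, 0), (0, 1)]   # received sequence of the example
print(viterbi_hard(Z, L=3))                    # -> [1, 0, 0]
```

Running it on the received sequence Z = (11 10 11 10 01) of the following example reproduces the decoded message m̂ = (100).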
Example of Hard decision Viterbi decoding
m = (101), U = (11 10 00 10 11), Z = (11 10 11 10 01)
[Figure: the trellis for the example code over t_1 ... t_6, with branches labeled input/output.]
Example of Hard decision Viterbi decoding – cont'd
Label all the branches with the branch metric (the Hamming distance between the branch word and the received branch word), and set Γ(0, t_1) = 0.
[Figure: the trellis with all branch metrics filled in.]
Example of Hard decision Viterbi decoding – cont'd
i = 2
[Figure: partial path metrics at t_2 are Γ(S_0, t_2) = 2 and Γ(S_2, t_2) = 0.]
Example of Hard decision Viterbi decoding – cont'd
i = 3
[Figure: partial path metrics at t_3 are Γ(S_0) = 3, Γ(S_1) = 0, Γ(S_2) = 3, Γ(S_3) = 2.]
Example of Hard decision Viterbi decoding – cont'd
i = 4
[Figure: partial path metrics at t_4 are Γ(S_0) = 0, Γ(S_1) = 3, Γ(S_2) = 2, Γ(S_3) = 3; dead paths are deleted.]
Example of Hard decision Viterbi decoding – cont'd
i = 5
[Figure: only the zero-input tail branches remain; partial path metrics at t_5 are Γ(S_0) = 1 and Γ(S_1) = 2.]
Example of Hard decision Viterbi decoding – cont'd
i = 6
[Figure: the final partial path metric at t_6 is Γ(S_0, t_6) = 2.]
Example of Hard decision Viterbi decoding – cont'd
Trace back along the survivor, and then:
m̂ = (100), Û = (11 10 11 00 00)
Note that Z contains three channel errors, more than this code can correct, so the decoded message differs from the transmitted one.
Example of soft-decision Viterbi decoding
m = (101), U = (11 10 00 10 11)
The demodulator now delivers soft values (here ±1 and ±2/3) instead of hard bits.
[Figure: the trellis labeled with soft branch metrics (values such as 1/3, 4/3, 5/3) and partial metrics Γ(S(t_i), t_i), with a final path metric of 14/3.]
Decoded: Û = (11 10 00 10 11), m̂ = (101). The soft-decision decoder recovers the transmitted sequence correctly.
[Figure: trellis diagram for the K = 2, k = 2, n = 3 convolutional code.]
[Figure: state diagram for the K = 2, k = 2, n = 3 convolutional code.]