
Coding

Mitigation of Fast Fading
• Coding & interleaving
• Signal redundancy
• Robust modulation
• Doppler diversity
Channel Coding
• Improves mobile communication link performance by adding
redundant data bits in the transmitted message.
• A channel coder maps a digital message sequence into another
specific sequence containing a greater number of bits than the
original message.
• The coded message is then modulated for transmission in the
wireless channel.
• Channel Coding is used by the receiver to detect or correct some or
all of the errors introduced by the channel in a particular sequence of
message bits.
Channel Coding (cont.)
• Codes designed for the AWGN channel typically do not work well on
fading channels, as they cannot correct the long error bursts caused by
deep fades.
• Codes for fading channels are therefore based on an AWGN channel code
combined with interleaving (which provides diversity gain)
• Other coding techniques for fading channels are
• Unequal error protection codes: Code bits are prioritized. High-priority bits are
coded with stronger error protection techniques.
• Joint source and channel coding
Channel Coding (cont.)
• Coding gain in AWGN is defined as the amount by which the bit energy or
signal-to-noise power ratio can be reduced, when coding is used, for a
given bit error probability (BEP) or block error probability.
• Coding gain in AWGN is generally a function of the minimum
Euclidean distance of the code. Thus codes are designed to maximize
Euclidean distance.
• Shannon capacity is an upper bound on the data rate that any practical
code can achieve:

C = B log2(1 + SNR)
Channel Coding (cont.)
• The added coding bits lower the raw data transmission rate through the
channel.
• E.g. block codes, convolutional codes, and turbo codes.
Interleaving
• A technique for making forward error correction (no reverse channel is
required for retransmission of data) more robust with respect to burst
errors
Interleaving (cont.)
Block interleaver, where source bits are read in by columns and read out
as n-bit rows.
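A minimal sketch of such a block interleaver and its inverse (pure Python; the 3 × 4 dimensions and integer stand-in "bits" are illustrative assumptions):

```python
def block_interleave(bits, n_rows, n_cols):
    """Write bits in column by column, read them out as n_cols-bit rows."""
    assert len(bits) == n_rows * n_cols
    grid = [[None] * n_cols for _ in range(n_rows)]
    for i, b in enumerate(bits):
        col, row = divmod(i, n_rows)        # fill one column at a time
        grid[row][col] = b
    # Read out row by row: adjacent source bits end up n_cols positions
    # apart, so a channel error burst is spread out after deinterleaving.
    return [b for row in grid for b in row]

def block_deinterleave(bits, n_rows, n_cols):
    """Inverse operation: write bits in by rows, read them out by columns."""
    grid = [bits[r * n_cols:(r + 1) * n_cols] for r in range(n_rows)]
    return [grid[row][col] for col in range(n_cols) for row in range(n_rows)]

data = list(range(12))                      # stand-ins for 12 source bits
tx = block_interleave(data, n_rows=3, n_cols=4)
assert block_deinterleave(tx, n_rows=3, n_cols=4) == data
```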
Fast Fading Mitigation
• Signal Redundancy
• Low data rate transmission
• If the symbol duration is reduced relative to the coherence time, the
channel appears as a slow-fading channel
• Robust Modulation
• Non-coherent or differentially coherent modulation
• Phase tracking is not required, which reduces the detector integration time.
Fast Fading Mitigation (cont.)
• Doppler Diversity
• The Doppler spread induced by temporal channel variations provides another
means of diversity that can be exploited to combat fading
• Applicable to the CDMA spread-spectrum RAKE receiver
Real World Example - 1
Detect Error On Credit Card
Formula for detecting error
• Let d2, d4, d6, d8, d10, d12, d14, d16 be the digits in the even
positions of the credit card number.
• Let d1, d3, d5, d7, d9, d11, d13, d15 be the digits in the odd positions
of the credit card number.
• Let n be the number of odd-position digits whose value exceeds four.
• The credit card number has an error if the following is true:

(d1 + d3 + d5 + d7 + d9 + d11 + d13 + d15) × 2 + n +
(d2 + d4 + d6 + d8 + d10 + d12 + d14 + d16) ≢ 0 (mod 10)
Detect Error On Credit Card
[Figure: a 16-digit card number d1 d2 d3 … d15 d16, with n = 3
odd-position digits exceeding four.]
Now the test:
(4 + 4 + 8 + 1 + 3 + 5 + 7 + 9) = 41
(5 + 2 + 1 + 0 + 3 + 4 + 6 + 8) × 2 + 3 = 61
41 + 61 = 102, and 102 mod 10 = 2 ≠ 0, so the number contains an error.
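A direct transcription of the parity check equation in Python; the 16-digit number below is reconstructed from the worked sums above, assuming the digits appear in that order:

```python
def card_has_error(card):
    """Return True if the 16-digit card number fails the parity check."""
    d = [int(c) for c in card]          # d[0] is d1, ..., d[15] is d16
    odd = d[0::2]                       # d1, d3, ..., d15
    even = d[1::2]                      # d2, d4, ..., d16
    n = sum(1 for x in odd if x > 4)    # odd-position digits exceeding four
    return (2 * sum(odd) + n + sum(even)) % 10 != 0

# Digits chosen to match the worked sums above (an assumed ordering):
print(card_has_error("5424180133456789"))   # True: 102 mod 10 = 2, error
```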
Credit Card Summary
The test performed on the credit card number is called a parity check equation.
The last digit is a function of the other digits in the credit card. This is how credit
card numbers are generated by Visa and Mastercard. They start with an account
number that is 15 digits long and use the parity check equation to find the value of
the 16th digit.

“This method allows computers to detect 100% of single-position errors and about
98% of other common errors”.
Some More Examples
• Internet
• Checksum used in multiple layers of
TCP/IP stack
• Cell phones
• Satellite broadcast
• TV
• Deep space telecommunications
• Mars Rover
“Unusual” applications
• Data Storage
• CDs and DVDs
• RAID
• ECC memory

• Paper bar codes


• QR Codes
• ISBN Codes
• UPS (MaxiCode)

Codes are all around us


The purpose
• A message can become distorted through a wide range of
unpredictable errors.

• Humans
• Equipment failure
• Lightning interference
• Scratches in a magnetic tape
Why error-correcting code?
• To add redundancy to a message so the original message can be
recovered if it has been garbled.

• Example message = 10
• code = 1010101010
Block codes
• Forward error correction codes (Enable limited number of errors to be
detected and corrected without retransmission)
• All codewords are of the same length
• Less complex decoder
• A q-ary code C of length n is a set of n-character words over an alphabet of
q elements
• Ex. 1001: n = 4, alphabet {0, 1}
• Ex. 2389047298738904: n = 16, alphabet {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
• Ex. (a, b, c, d, e): n = 5, alphabet {a, b, c, d, e, …, y, z}

• The most common case is the alphabet {0, 1}; this is known as a binary code.
Block Codes (cont.)
• Encode k information bits into n
code bits
• Redundant bits added: (n − k), used
for error detection and correction
• Referred to as an (n, k) code
• Code rate: 𝑅𝑐 = 𝑘/𝑛
• Error-correcting ability is a function
of the code distance
• Implemented using combinational
logic circuits.
Definitions
• Distance of a code:
• The distance between two codewords is the number of elements in which two
codewords differ
• For binary code, the distance is known as Hamming distance
• The Hamming distance between two codewords u and v is the number of positions
in which they differ
• e.g. u = (1,0,0,0,0,1,1) v = (0,1,0,0,1,0,1) dist(u,v) = 4
• Weight of a code:
• The weight of a codeword of length N is given by the number of nonzero elements in
the codeword.
• For binary code, the weight is basically the number of 1’s in the codeword
• e.g. wt(0010110) = 3
• Another definition of distance is wt(u – v) = dist(u,v).
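These definitions translate directly into code; a small sketch reproducing the examples above:

```python
def hamming_dist(u, v):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(u, v))

def weight(u):
    """Number of nonzero elements in a codeword."""
    return sum(x != 0 for x in u)

u = (1, 0, 0, 0, 0, 1, 1)
v = (0, 1, 0, 0, 1, 0, 1)
print(hamming_dist(u, v))                           # 4
print(weight((0, 0, 1, 0, 1, 1, 0)))                # 3
# For binary words, dist(u, v) = wt(u XOR v):
print(weight(tuple(a ^ b for a, b in zip(u, v))))   # 4
```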
Some Important Types of Block Codes
• Linear Block Codes:
• The sum of any two codewords is another codeword.
• E.g., if C1 and C2 are two codewords, then 𝛼1 𝐶1 + 𝛼2 𝐶2 is also a codeword
• Systematic Code:
• The data bits also are present in the generated codeword.
• Parity bits are appended at the end of the information bits.
• BCH:
• Generalization of Hamming code for multiple error correction.
• Very special class of linear codes known as Goppa codes.
• Cyclic Codes:
• Important subclass of linear block codes where encoding and decoding can be implemented easily.
• Cyclic shift of a codeword yields another codeword
• E.g., if C = [c_{n−1}, c_{n−2}, …, c_0] is a codeword, then its cyclic shift
[c_{n−2}, c_{n−3}, …, c_0, c_{n−1}] is also a codeword
• Cyclic codes are linear block codes; they are also known as generalized parity
check codes.
Encoding
• Naïve approach
• The same message is sent multiple times.

• Hamming codes
• Properties of a binary Hamming code:

(n, k) = (2^m − 1, 2^m − 1 − m)

where k is the number of information bits, n is the number of coded bits,
and m is any positive integer
• Number of parity symbols: m = n − k
Naïve approach
• The same message is sent multiple times. Then take the value with the
highest average.

Message:= 1001
Encode:= 111000000111
Channel:= 111000000011
Message:= 1001
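A sketch of this repetition scheme with majority-vote decoding; the flipped bit position matches the channel example above:

```python
def rep_encode(bits, r=3):
    """Repeat each bit r times."""
    return [b for b in bits for _ in range(r)]

def rep_decode(coded, r=3):
    """Majority vote within each group of r received bits."""
    groups = [coded[i:i + r] for i in range(0, len(coded), r)]
    return [1 if sum(g) > r // 2 else 0 for g in groups]

msg = [1, 0, 0, 1]
tx = rep_encode(msg)             # 111000000111
rx = tx[:]
rx[9] = 0                        # channel flips one bit -> 111000000011
print(rep_decode(rx) == msg)     # True: the single error is voted out
```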
Hamming [7,4] Code
• The seven is the number of digits that make up each codeword.
E.g. 0100101
• The four is the number of information digits in the codeword.
E.g. 0100101

• Code rate = 4/7


Hamming [7,4] Encoding
• Encoding is done with a generator matrix; every codeword is a linear
combination of its rows. The generator matrix for this presentation
is the following:

G =
1 0 0 0 1 0 1
0 1 0 0 1 1 0
0 0 1 0 1 1 1
0 0 0 1 0 1 1

• The minimum weight of the Hamming code generated by G is wt(G) = 3
Hamming [7,4] Codes
• The 4 information bits give 2^4 = 16 valid codewords out of the
2^7 = 128 possible 7-bit words:

0000000  0001011  0010111  0011100
0100110  0101101  0110001  0111010
1000101  1001110  1010010  1011001
1100011  1101000  1110100  1111111
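A short sketch that reproduces this table by encoding all 16 messages with the generator matrix G from the previous slide:

```python
from itertools import product

# Generator matrix G = [I4 | P] from the previous slide
G = [
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 0],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 0, 1, 1],
]

def encode(u):
    """Codeword v = u * G over GF(2)."""
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

codewords = [encode(u) for u in product((0, 1), repeat=4)]
print(len(codewords))                               # 16 valid codewords
print(min(sum(c) for c in codewords if any(c)))     # minimum weight = 3
```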
Definitions

• For any u, v, and w in a space V, the following three conditions hold:

dist(u, u) = 0
dist(u, v) = dist(v, u)
dist(u, w) ≤ dist(u, v) + dist(v, w)

Generator Matrix
• Generator matrix: 𝐺 = [𝐼4 |𝑃],
where 𝐼4 is the 4 × 4 identity matrix and P is the parity submatrix
• The parity check matrix H is found by solving
𝐺𝐻^𝑇 = 0
• The distance 𝑑 of the code generated by 𝐺 is its minimum weight: 𝑑 = 3
• Detecting capacity: 𝐷 = 𝑑 − 1 = 2
• Correcting capacity: 𝐶 = ⌊(𝑑 − 1)/2⌋ = 1
Encoding
• Original message: u = [1 0 0 1]
• Coded sequence: 𝑣1 = 𝑢 × 𝐺

𝑣1 = [1 0 0 1] ×
1 0 0 0 1 0 1
0 1 0 0 1 1 0
0 0 1 0 1 1 1
0 0 0 1 0 1 1

= [1 0 0 1 1 1 0] (coded bits)

• E.g. 5th bit = 1 × 1 ⨁ 0 × 1 ⨁ 0 × 1 ⨁ 1 × 0 = 1
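A quick check of this multiplication (a sketch using numpy; GF(2) arithmetic is obtained by reducing the matrix product modulo 2):

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 0, 1, 1]])

u = np.array([1, 0, 0, 1])      # original message
v1 = u @ G % 2                  # matrix product reduced modulo 2
print(v1)                       # [1 0 0 1 1 1 0]
```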
Decoder
• Parity check matrix: 𝐻 = [𝑃^𝑇 |𝐼3 ]

H =
1 1 1 0 1 0 0
0 1 1 1 0 1 0
1 0 1 1 0 0 1

• Received word: 𝑣2 = [1 0 0 1 0 1 0]
• Syndrome: 𝑆 = 𝐻𝑣2^𝑇
• In the given case, 𝑆 = 𝐻 × [1 0 0 1 0 1 0]^𝑇 = [1 0 0]^𝑇
Decoder (cont.)
• The syndrome-to-error-pattern table is fixed for a given generator matrix G.
• 𝑆 = 𝑟𝐻^𝑇, where 𝑟 = 𝐶 + 𝐸
• 𝑆 is the syndrome, 𝑟 is the received word resulting from transmission of
codeword C, and 𝐸 is the error pattern
• For 𝑆 = [1 0 0], the error pattern is 𝐸 = [0 0 0 0 1 0 0]
• The corrected sequence is
𝑟 − 𝐸 = [1 0 0 1 0 1 0] − [0 0 0 0 1 0 0] = [1 0 0 1 1 1 0]
• Note: on subtraction, no carry is forwarded to the next bit (linear decoding
is a bit-wise operation with no memory)
• 𝑆 = [0 0 0] implies no error, i.e. the corrected sequence is the same as the
received sequence
• Since the rank of 𝐻^𝑇 is at most 𝑛 − 𝑘, the minimum distance of the code is
upper-bounded by 𝑑𝑚𝑖𝑛 ≤ 𝑛 − 𝑘 + 1 (Singleton bound)
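A sketch of the full syndrome-decoding step under the same G and H. The error-locating rule relies on the fact that, for a single-bit error, the syndrome equals the column of H at the error position:

```python
import numpy as np

# Parity check matrix H = [P^T | I3] from the slides
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

v2 = np.array([1, 0, 0, 1, 0, 1, 0])    # received word
s = H @ v2 % 2                           # syndrome: here [1 0 0]

if s.any():
    # For a single-bit error the syndrome matches a column of H;
    # locate that column and flip the corresponding received bit.
    err_pos = next(j for j in range(7) if (H[:, j] == s).all())
    v2[err_pos] ^= 1

print(v2)                                # [1 0 0 1 1 1 0]
```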
Block Versus Convolutional Codes
• Block codes take k input bits and produce n output bits, where k and n
are large
• no data dependency between blocks
• useful for data communications
• Convolutional codes take a small number of input bits and produce a
small number of output bits in each time period
• data passes through convolutional codes in a continuous stream
• useful for low-latency communications
Convolutional Codes
• Convolutional codes are applied in applications that require good performance with
low implementation cost. They operate on a data stream, not on static blocks.
• Convolutional codes have memory: previous bits are used to encode or decode the
following bits
• A code is sometimes denoted by (n, k, m), where m is the code memory length, n is the
number of coded bits, and k is the number of information bits
• Output depends not only on the current set of k input bits, but also on past inputs
• k bits are input, n bits are output
• Code rate 𝑅𝑐 = k/n, code rate < 1
• k and n are very small (usually k = 1 to 3, n = 2 to 6)
Convolutional Codes
• Convolutional encoder is a finite state machine (FSM), processing
information bits in a serial manner
• Generated code is a function of input and the states of the FSM
• For an (𝑛, 𝑘, 𝑚) encoder, each message bit influences a span of 𝑛 × (𝑚 + 1)
successive output bits
Example of Convolutional Code

• (n, k, m) = (3, 2, 2) convolutional code

• 𝑘 = 2 implies that in one time instant, 2 bits are shifted and 2 new
input bits enter the shift register
• 𝑛 = 3 implies that for every 2-bit input, the convolutional encoder
produces a 3-bit output
Representations of Convolutional Codes
1. Generator Sequence Representation
2. Encoder Block Diagram
3. State Diagram Representation
4. Tree diagram Representation
5. Trellis Representation
1. Generator Sequence Representation
• g1 = [1011]
• g2 = [1101]
• g3 = [1010]
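A minimal sketch of encoding with these generator sequences: the output is the modulo-2 convolution of the input with each generator, interleaved one n-bit group per input step (the input bits here are illustrative):

```python
def conv_encode(bits, gens):
    """Rate-1/n convolutional encoder built from generator sequences."""
    m = len(gens[0]) - 1                  # code memory length
    padded = bits + [0] * m               # append zeros to flush the encoder
    out = []
    for t in range(len(padded)):
        for g in gens:
            # Each output is the XOR of taps applied to current/past inputs
            bit = sum(g[j] * padded[t - j]
                      for j in range(len(g)) if t - j >= 0) % 2
            out.append(bit)
    return out

g1, g2, g3 = [1, 0, 1, 1], [1, 1, 0, 1], [1, 0, 1, 0]
print(conv_encode([1, 0, 1], [g1, g2, g3]))   # 3 coded bits per input step
```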
Generator Matrix
• Generator sequences specify convolutional code completely by the
associated generator matrix
• Encoded convolutional code is produced by matrix multiplication of
input and the generator matrix
Generator Matrix Example
2. Encoder Block Diagram
• Input bits: 1011000
• Coded sequence: 11 11 01 11 01 01 11
3. State Diagram Representation
• Contents of the shift registers make up the "state" of the code:
• The most recent input is the most significant bit of the state.
• The oldest input is the least significant bit of the state.
• (This convention is sometimes reversed.)

• Arcs connecting states represent allowable transitions

• Arcs are labeled with the output bits transmitted during the transition
State Diagram Example
4. Tree Diagram Representation
[Figure: code tree; branches are labeled with input/output bits such as
01(001) and (010).]
5. Trellis Diagram Representation
• The trellis diagram is the state diagram "unfolded" as a function of time
• Time is indicated by movement towards the right
• Contents of the shift registers make up the "state" of the code:
• The most recent input is the most significant bit of the state.
• The oldest input is the least significant bit of the state.
• Allowable transitions are denoted by connections between states
• Transitions may be labeled with the transmitted bits
Trellis Representation Example
11 11 01 11 01 01 11
Sequential Decoding
[Figure: sequential decoding tree; the numbers in red denote the
bit-disagreement count (also known as Hamming distance).
Threshold = 3 bits in error.]
Trellis Decoding
• Uses Hamming distance (bit-disagreement count) as the measurement parameter
• Input data: 0 1 0 1 1 0 0
• Transmitted: 00 11 01 00 10 10 11
• Received: 00 01 01 01 10 10 11
• Decoded sequence: 0 1 0 1 1 0 0
• At each state, compute the two possible entering paths, add the weight of
the path at each state, and keep the one with the smaller cumulative Hamming
weight; the path with minimum Hamming distance is selected
[Figure: trellis with per-branch Hamming distances accumulated at each state.]
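The transmitted sequence above is reproduced by a rate-1/2, memory-2 encoder with generator taps (101, 111) in that output order; these taps are inferred from the slide's numbers, so treat them as an assumption. A minimal hard-decision trellis (Viterbi) decoder under that assumption:

```python
def encode_step(state, u, gens=((1, 0, 1), (1, 1, 1))):
    """state = (u[t-1], u[t-2]); returns (output bits, next state)."""
    window = (u,) + state
    out = tuple(sum(g[j] * window[j] for j in range(3)) % 2 for g in gens)
    return out, (u, state[0])

def viterbi(received):
    """received: list of 2-bit tuples; returns the minimum-distance input."""
    survivors = {(0, 0): (0, [])}       # state -> (cumulative distance, path)
    for r in received:
        nxt = {}
        for state, (dist, path) in survivors.items():
            for u in (0, 1):            # extend each survivor both ways
                out, ns = encode_step(state, u)
                d = dist + sum(a != b for a, b in zip(out, r))
                if ns not in nxt or d < nxt[ns][0]:
                    nxt[ns] = (d, path + [u])   # keep only the better path
        survivors = nxt
    return min(survivors.values())[1]   # survivor with minimum distance

rx = [(0, 0), (0, 1), (0, 1), (0, 1), (1, 0), (1, 0), (1, 1)]
print(viterbi(rx))                      # [0, 1, 0, 1, 1, 0, 0]
```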
Viterbi Decoding
• Path metric is the count of bits in agreement
[Figures: Viterbi decoding, steps 2, 5, 6, and 7; the maximum-weight node is
marked at each step.]
• Decoded bits: 1011000
• A line going down is decoded as 1; a line going up is decoded as 0
The Viterbi Algorithm
• Walk through the trellis and compute the path metric between each branch of
received bits r and the corresponding branches in the trellis.
• At each level, consider the two paths entering the same node; they are identical
from this node onwards. Of these two paths, the one that is closer to r (more bits
in agreement) at this stage will still be so at any time in the future. This path
is retained, and the other path is discarded.
• Proceeding this way, one path is saved for each node at each stage. These paths
are called the survivors.
• Each survivor is associated with the accumulated path metric (the path metric
up to this stage).
• Carry out this process until the received sequence is processed completely, then
choose the survivor with the highest path metric.
The Viterbi Algorithm
• The Viterbi algorithm is used to decode convolutional codes and any
structure or system that can be described by a trellis.
• It is a maximum likelihood decoding algorithm: it selects the path
that maximizes the likelihood function.
• The algorithm is based on add-compare-select: at each state and each
time step, the best path is kept.
Search for Good CC
• We would like convolutional codes with large free distance
• Generators for best convolutional codes are generally found via
computer search
• search is constrained to codes with regular structure
• search is simplified because any permutation of identical generators is
equivalent
• search is simplified because of linearity.
Shannon Capacity of Coded Symbols
• Shannon's capacity for the AWGN channel:

C = B log2(1 + (Eb · Rb)/(N0 · B))

where
C : channel capacity (bits per second)
B : transmission bandwidth (Hz)
P : received signal power (W), P = Eb · Rb
Eb : average bit energy
Rb : transmission bit rate
N0 : single-sided noise power spectral density (W/Hz)

• Bandwidth efficiency = C/B
• More redundant bits → more bandwidth required → reduced bandwidth efficiency
• Good BER performance at low SNR values
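A quick numerical sketch of these relations (the bandwidth, bit rate, and Eb/N0 values are illustrative, not from the slides):

```python
import math

B = 200e3            # transmission bandwidth in Hz (illustrative)
Rb = 271e3           # transmission bit rate in bit/s (illustrative)
EbN0_dB = 4.0        # Eb/N0 in dB (illustrative)

EbN0 = 10 ** (EbN0_dB / 10)
snr = EbN0 * Rb / B                  # SNR = (Eb * Rb) / (N0 * B)
C = B * math.log2(1 + snr)           # channel capacity in bit/s
print(f"C = {C / 1e3:.0f} kbit/s")
print(f"bandwidth efficiency C/B = {C / B:.2f} bit/s/Hz")
```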
