[Figure: block interleaver, where source bits are read into columns and read out as n-bit rows]
Interleaving (cont.)
Fast Fading Mitigation
• Signal Redundancy
• Low data rate transmission
• If the symbol duration is reduced so that it is short compared to the coherence time, the channel appears as a slow-fading channel
• Robust Modulation
• Non-coherent or differentially coherent modulation
• Phase tracking is not required, and the detector integration time is reduced.
Fast Fading Mitigation (cont.)
• Doppler Diversity
• The Doppler spread induced by temporal channel variations provides another means of diversity that can be exploited to combat fading
• Applicable to the CDMA spread-spectrum RAKE receiver
Real World Example - 1
Detect Error On Credit Card
Formula for detecting error
• Let d2, d4, d6, d8, d10, d12, d14, d16 be the digits in the even positions of the credit card number.
• Let d1, d3, d5, d7, d9, d11, d13, d15 be the digits in the odd positions of the credit card number.
• Let n be the number of odd-position digits whose value exceeds four.
• The credit card number has an error if the following is true:
(d1 + d3 + d5 + d7 + d9 + d11 + d13 + d15) × 2 + n + (d2 + d4 + d6 + d8 + d10 + d12 + d14 + d16) ≢ 0 (mod 10)
Detect Error On Credit Card
[Figure: a 16-digit card number with digits labeled d1, d2, d3, …, d15, d16; here n = 3, since three of the odd-position digits (5, 6, and 8) exceed four]
Now the test:
(4 + 4 + 8 + 1 + 3 + 5 + 7 + 9) = 41
(5 + 2 + 1 + 0 + 3 + 4 + 6 + 8) × 2 + 3 = 61
41 + 61 = 102, and 102 mod 10 = 2 ≠ 0, so the number contains an error.
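A minimal Python sketch of this test (the function name has_error is ours, and the digit list is chosen to reproduce the sums above; the exact interleaving of odd- and even-position digits is our assumption):

def has_error(card_digits):
    """Parity check from the slides: the card number has an error
    if the weighted digit sum is not 0 mod 10."""
    odd = card_digits[0::2]              # d1, d3, ..., d15
    even = card_digits[1::2]             # d2, d4, ..., d16
    n = sum(1 for d in odd if d > 4)     # odd-position digits exceeding four
    total = 2 * sum(odd) + n + sum(even)
    return total % 10 != 0

digits = [5, 4, 2, 4, 1, 8, 0, 1, 3, 3, 4, 5, 6, 7, 8, 9]
print(has_error(digits))  # True: 41 + 61 = 102, and 102 mod 10 = 2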
Credit Card Summary
The test performed on the credit card number is called a parity check equation. The last digit is a function of the other digits in the credit card number. This is how Visa and Mastercard generate credit card numbers: they start with a 15-digit account number and use the parity check equation to find the value of the 16th digit.
“This method allows computers to detect 100% of single-position errors and about
98% of other common errors”.
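A sketch of how a 16th digit could be derived from a 15-digit account number under this parity check equation (check_digit is a hypothetical helper, not Visa's or Mastercard's actual implementation):

def check_digit(d15):
    """Find d16 so the parity check sum becomes 0 mod 10.
    d16 occupies an even position, so it enters the sum unweighted."""
    odd = d15[0::2]                      # d1, d3, ..., d15
    even = d15[1::2]                     # d2, d4, ..., d14
    n = sum(1 for d in odd if d > 4)
    partial = 2 * sum(odd) + n + sum(even)
    return (-partial) % 10

account = [5, 4, 2, 4, 1, 8, 0, 1, 3, 3, 4, 5, 6, 7, 8]
print(check_digit(account))  # 7: appending 7 brings the sum to 100, i.e. 0 mod 10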
Some More Examples
• Internet
• Checksums used in multiple layers of the TCP/IP stack
• Cell phones
• Satellite broadcast
• TV
• Deep space telecommunications
• Mars Rover
“Unusual” applications
• Data Storage
• CDs and DVDs
• RAID
• ECC memory
• Humans
• Equipment failure
• Lightning interference
• Scratches on magnetic tape
Why error-correcting code?
• To add redundancy to a message so that the original message can be recovered if it has been garbled.
• Example: message = 10
• code = 1010101010 (the message repeated five times)
Block codes
• Forward error correction codes (enable a limited number of errors to be detected and corrected without retransmission)
• All codewords are of the same length
• Less complex decoder
• A q-ary code C of length n is a set of n-character words over an alphabet of q elements
• Ex. 1001: n = 4, q = {0,1}
• Ex. 2389047298738904: n = 16, q = {0,1,2,3,4,5,6,7,8,9}
• Ex. (a,b,c,d,e): n = 5, q = {a,b,c,d,e,…,y,z}
• The most common case is q = {0,1}, known as a binary code.
Block Codes (cont.)
• Encode k information bits into n code bits
• (n − k) redundant bits are added for error detection and correction
• Referred to as an (n, k) code
• Code rate, Rc = k/n
• The error-correcting ability of a code is a function of the code distance
• Implemented using combinational logic circuits.
Definitions
• Distance of a code:
• The distance between two codewords is the number of elements in which the two codewords differ
• For binary codes, the distance is known as the Hamming distance
• The Hamming distance between two codewords u and v is the number of positions in which they differ
• e.g. u = (1,0,0,0,0,1,1), v = (0,1,0,0,1,0,1), dist(u,v) = 4
• Weight of a code:
• The weight of a codeword of length n is given by the number of nonzero elements in the codeword.
• For binary codes, the weight is simply the number of 1's in the codeword
• e.g. wt(0010110) = 3
• An equivalent definition of distance: dist(u,v) = wt(u − v).
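These definitions translate directly into code; a minimal Python sketch (function names are ours):

def hamming_distance(u, v):
    """Number of positions in which two equal-length codewords differ."""
    return sum(a != b for a, b in zip(u, v))

def weight(u):
    """Number of nonzero elements; for binary codewords, the number of 1's."""
    return sum(1 for a in u if a != 0)

u = [1, 0, 0, 0, 0, 1, 1]
v = [0, 1, 0, 0, 1, 0, 1]
print(hamming_distance(u, v))                  # 4
print(weight([0, 0, 1, 0, 1, 1, 0]))           # 3
# dist(u, v) equals the weight of the difference (XOR for binary codes)
print(weight([a ^ b for a, b in zip(u, v)]))   # 4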
Some Important Types of Block Codes
• Linear Block Codes:
• The sum of any two codewords is also a codeword.
• E.g. if C1 and C2 are two codewords, then α1·C1 + α2·C2 is also a codeword
• Systematic Code:
• The data bits appear unchanged in the generated codeword.
• Parity bits are appended at the end of the information bits.
• BCH:
• Generalization of Hamming codes for multiple error correction.
• A special subclass of the linear codes known as Goppa codes.
• Cyclic Codes:
• An important subclass of linear block codes whose encoding and decoding can be implemented easily.
• A cyclic shift of a codeword yields another codeword (see the sketch below)
• E.g. if C = [c_{n-1}, c_{n-2}, …, c_0] is a codeword, then the cyclic shift [c_{n-2}, c_{n-3}, …, c_0, c_{n-1}] is also a codeword
• Decoded like linear block codes; also known as generalized parity check codes.
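The cyclic-shift property is easy to state in code; a one-line Python sketch (for an actual cyclic code, the returned word is again a codeword):

def cyclic_shift(c):
    """[c_{n-1}, c_{n-2}, ..., c_0] -> [c_{n-2}, ..., c_0, c_{n-1}]"""
    return c[1:] + c[:1]

print(cyclic_shift([1, 0, 1, 1, 0, 0, 0]))  # [0, 1, 1, 0, 0, 0, 1]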
Encoding
• Naïve approach
The same message is sent multiple times.
• Hamming codes
Properties of the binary Hamming code:
(n, k) = (2^m − 1, 2^m − 1 − m)
k is the number of information bits
n is the number of coded bits
m is any positive integer
Number of parity symbols: m = n − k
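The parameter relation above is easy to tabulate for small m; a quick Python sketch:

# (n, k) = (2**m - 1, 2**m - 1 - m) for binary Hamming codes
for m in range(2, 6):
    n = 2**m - 1
    k = n - m
    print(f"m={m}: ({n},{k}) code, rate {k/n:.3f}")
# m = 3 gives the (7,4) code used in the examples that follow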
Naïve approach
• The same message is sent multiple times; the decoder then takes a majority vote over each group of repeated bits (see the sketch below).
Message: 1001
Encoded: 111000000111
Channel: 111000000011 (one bit flipped in transmission)
Decoded message: 1001
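A minimal Python sketch of this naïve scheme with three repetitions per bit (function names are ours):

def encode_repetition(bits, r=3):
    """Repeat each bit r times."""
    return [b for b in bits for _ in range(r)]

def decode_repetition(coded, r=3):
    """Majority vote over each group of r received bits."""
    return [int(sum(coded[i:i + r]) > r // 2) for i in range(0, len(coded), r)]

msg = [1, 0, 0, 1]
sent = encode_repetition(msg)                   # 111000000111
recv = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]     # one bit flipped by the channel
print(decode_repetition(recv))                  # [1, 0, 0, 1]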
Hamming [7,4] Code
• The seven is the total number of digits that make up each codeword.
E.g. 0100101
• The four is the number of information digits in the codeword; in a systematic code these are the first four digits.
E.g. 0100101 (information digits 0100)
G = [ 1 0 0 0 1 0 1 ]
    [ 0 1 0 0 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]
    [ 0 0 0 1 0 1 1 ]
Hamming [7,4] Codes
A [7,4] code has 2^4 = 16 codewords, selected from the 2^7 = 128 possible 7-bit words:

0000000  1000011  0100101  0010110
0001111  1100110  1010101  1001100
0110011  0101010  0011001  1110000
1101001  1011010  0111100  1111111
Definitions
• dist(u, u) = 0
• dist(u, v) = dist(v, u)
Encoding example
• c = mG = [1 0 0 1] G = [1 0 0 1 1 1 0] (coded bits)
• E.g. the 5th coded bit = 1×1 ⊕ 0×1 ⊕ 0×1 ⊕ 1×0 = 1 (the message dotted with the 5th column of G)
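A sketch of the encoding step c = mG over GF(2), using the generator matrix G given above (NumPy handles the matrix product):

import numpy as np

# Generator matrix G = [I4 | P] from the Hamming [7,4] slide
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 0, 1, 1]])

def hamming_encode(m):
    """Codeword c = mG, with arithmetic reduced mod 2."""
    return np.array(m) @ G % 2

print(hamming_encode([1, 0, 0, 1]))  # [1 0 0 1 1 1 0]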
Decoder
• Parity check matrix, H = [P^T | I3]

H = [ 1 1 1 0 1 0 0 ]
    [ 0 1 1 1 0 1 0 ]
    [ 1 0 1 1 0 0 1 ]

• Received vector, v2 = [1 0 0 1 0 1 0]
• Syndrome S = H·v2^T
• In the given case, S = H·[1 0 0 1 0 1 0]^T = [1 0 0]^T
Decoder (cont.)
• The syndrome and error table remains the same for a given generator matrix G.
• S = r·H^T, r = C + E
where S is the syndrome, r is the received word resulting from transmission of codeword C, and E is the error pattern.
• For S = [1 0 0], the error is E = [0 0 0 0 1 0 0]
• The corrected sequence is
r = v2 − E = [1 0 0 1 0 1 0] − [0 0 0 0 1 0 0] = [1 0 0 1 1 1 0]
• Note: on subtraction, the carry is not forwarded to the next bit (linear decoding is a bitwise operation with no memory)
• S = [0 0 0] implies no error, i.e. the corrected sequence is the same as the received sequence
• Since the rank of H^T is at most n − k, the minimum distance of the code is upper bounded by d_min ≤ n − k + 1 (Singleton bound)
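A sketch of single-error syndrome decoding with the H above; it uses the fact that for a Hamming code the syndrome equals the column of H at the error position:

import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

def hamming_decode(r):
    """Correct up to one bit error: S = H r^T picks out the column
    of H at the error position; S = 0 means no error."""
    r = np.array(r)
    s = H @ r % 2
    if s.any():
        for j in range(H.shape[1]):
            if np.array_equal(H[:, j], s):
                r[j] ^= 1          # flip (subtract mod 2) the erroneous bit
                break
    return r

print(hamming_decode([1, 0, 0, 1, 0, 1, 0]))  # [1 0 0 1 1 1 0], error in bit 5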
Block Versus Convolutional Codes
• Block codes take k input bits and produce n output bits, where k and n
are large
• no data dependency between blocks
• useful for data communications
• Convolutional codes take a small number of input bits and produce a
small number of output bits in each time period
• data passes through convolutional codes in a continuous stream
• useful for low-latency communications
Convolutional Codes
• Convolutional codes are applied in applications that require good performance with low implementation cost. They operate on a data stream, not on static blocks.
• Convolutional codes have memory: previous bits are used to encode or decode the following bits
• A code is sometimes denoted by (n, k, m), where m is the code memory length, n is the number of coded bits, and k is the number of information bits
• The output depends not only on the current set of k input bits, but also on past inputs.
• k bits are input, n bits are output
• Code rate Rc = k/n < 1
• k and n are very small (usually k = 1 to 3, n = 2 to 6)
Convolutional Codes
• A convolutional encoder is a finite state machine (FSM) that processes information bits serially
• The generated code is a function of the input and of the states of the FSM
• For an (n, k, m) encoder, each message bit influences a span of n × (m + 1) successive output bits
Example of Convolutional Code
[Figure: state diagram of the example convolutional encoder; branch labels such as 01(001) and (010) show the output bits and register contents]
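Since the example encoder itself is in the figure, here is a stand-in: a rate-1/2, memory-2 encoder whose generator taps (5 and 7 in octal) reproduce the transmitted sequence of the trellis decoding example later in this section; the function name and bit conventions are ours:

def conv_encode(bits, g1=0b101, g2=0b111):
    """Rate-1/2 convolutional encoder with memory m = 2.
    The taps g1, g2 act on [current bit, previous bit, bit before that]."""
    state = 0                            # two-bit shift register
    out = []
    for u in bits:
        reg = (u << 2) | state           # [u, s1, s2]
        out.append(bin(reg & g1).count("1") % 2)   # first output bit
        out.append(bin(reg & g2).count("1") % 2)   # second output bit
        state = (reg >> 1) & 0b11        # u becomes the most recent state bit
    return out

print(conv_encode([0, 1, 0, 1, 1, 0, 0]))  # pairs: 00 11 01 00 10 10 11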
Trellis Diagram Representation
• The trellis diagram is the state diagram "unfolded" as a function of time
• Time is indicated by movement towards the right
• The contents of the shift registers make up the "state" of the code:
• The most recent input is the most significant bit of the state.
• The oldest input is the least significant bit of the state.
• Allowable transitions are denoted by connections between states
• Transitions may be labeled with the transmitted bits (see the branch enumeration below)
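Under this state convention, the trellis branches can be enumerated programmatically; a sketch for the stand-in encoder above:

# Branches of the trellis: next state and output for every (state, input) pair
for s in range(4):
    for u in (0, 1):
        reg = (u << 2) | s                        # [input, state]
        v1 = bin(reg & 0b101).count("1") % 2      # first output bit
        v2 = bin(reg & 0b111).count("1") % 2      # second output bit
        ns = (reg >> 1) & 0b11                    # next state
        print(f"state {s:02b} --input {u} / output {v1}{v2}--> state {ns:02b}")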
Trellis Representation Example
[Figure: trellis diagram for an example path with output sequence 11 11 01 11 01 01 11]
Sequential Decoding
• Numbers in red denote the bit-disagreement count (also known as the Hamming distance)
• Threshold = 3 bits in error
[Figure: sequential decoding tree with per-node disagreement counts]
Trellis Decoding
• Uses the Hamming distance (bit-disagreement count) as the measurement parameter
• Input data: 0 1 0 1 1 0 0
• Transmitted: 00 11 01 00 10 10 11
• Received: 00 01 01 01 10 10 11
• Decoded sequence: 0 1 0 1 1 0 0
• At each state, compute the metrics of the two possible incoming paths and select the one with the smaller cumulative Hamming weight; the weight of the path is accumulated at each state, and the path with the minimum Hamming distance is selected.
[Figure: trellis diagram annotated with per-branch Hamming distances for the received sequence]
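A compact Python sketch of Viterbi decoding for the stand-in encoder above, using cumulative Hamming distance as the path metric exactly as in this example (the Viterbi slides that follow equivalently maximize the agreement count); run on the received sequence it recovers the input data:

def viterbi_decode(received, g1=0b101, g2=0b111):
    """Minimum-Hamming-distance survivor-path decoding for the
    rate-1/2, memory-2 encoder sketched earlier."""
    pairs = [tuple(received[i:i + 2]) for i in range(0, len(received), 2)]
    INF = float("inf")
    metric = {s: (0 if s == 0 else INF) for s in range(4)}   # start in state 00
    paths = {s: [] for s in range(4)}
    for r in pairs:
        new_metric = {s: INF for s in range(4)}
        new_paths = {s: [] for s in range(4)}
        for s in range(4):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                reg = (u << 2) | s
                v1 = bin(reg & g1).count("1") % 2
                v2 = bin(reg & g2).count("1") % 2
                ns = (reg >> 1) & 0b11
                m = metric[s] + (v1 != r[0]) + (v2 != r[1])
                if m < new_metric[ns]:               # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best]

rx = [0,0, 0,1, 0,1, 0,1, 1,0, 1,0, 1,1]
print(viterbi_decode(rx))  # [0, 1, 0, 1, 1, 0, 0]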
Viterbi Decoding
• Here the path metric is the count of bits in agreement; the maximum-weight node is retained at each step
[Figures: Viterbi decoding, steps 2, 5, and 6, with the maximum-weight node marked]
• Decoded bits: 1011000