
Introduction to rings, fields

Introduction to Algebra

2 jj 10/3/2018
Group

Fields

1 jj 10/8/2018
 Codes for error detection and correction
 Parity check coding

Introduction

 Two types of coding
 Source coding – compressing a data source using encoding of information
 Channel coding – encoding information to be able to overcome bit errors

Codes for error detection and correction

 Three approaches can be used to cope with data transmission errors:
1. Using codes to detect errors.
2. Using codes to correct errors – called Forward Error Correction (FEC).
3. Mechanisms to automatically retransmit (ARQ) corrupted packets.

Codes for error detection and correction
(FEC)
 Error Control Coding (ECC)
 Extra bits are added to the data at the transmitter
(redundancy) to permit error detection or correction
at the receiver
 Done to prevent the output of erroneous bits despite
noise and other imperfections in the channel
 Two main types, namely block codes and
convolutional codes.

Block Codes

 Consider only binary data
 Data is grouped into blocks of length k bits (dataword)
 Each dataword is coded into a block of length n bits (codeword), where in general n > k
 This is known as an (n,k) block code
 n is called the block length of the code
 The channel encoder produces bits at the rate R0 = (n/k)Rs, where Rs is the bit rate of the information source.

Block Codes
 A vector notation is used for the datawords and
codewords,
 Dataword d = (d1 d2….dk)
 Codeword c = (c1 c2……..cn)
 The redundancy introduced by the code is quantified by
the code rate,
 Code rate = k/n
 i.e., the higher the redundancy, the lower the code rate

Block Codes - Example
 Dataword length k = 4
 Codeword length n = 7
 This is a (7,4) block code with code rate = 4/7
 For example, d = (1101), c = (1101001)

Parity Codes
 Example of a simple block code – Single Parity Check
Code
 In this case, n = k+1, i.e., the codeword is the dataword with one
additional bit
 For ‘even’ parity the additional bit is
   q = d1 + d2 + … + dk (mod 2)
 For ‘odd’ parity the additional bit is 1 − q
 That is, the additional bit ensures that there is an ‘even’ or ‘odd’ number of ‘1’s in the codeword
Parity Codes- Example
 Even parity
(i) d=(10110) so,
c=(101101)
(ii) d=(11011) so,
c=(110110)
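The even-parity rule above can be sketched in a few lines of Python (a hypothetical helper, not from the slides):

```python
def parity_encode(dataword, parity="even"):
    """Append a single parity bit so the codeword has an even
    (or odd) number of 1s. dataword is a list of 0/1 ints."""
    q = sum(dataword) % 2          # even-parity bit
    if parity == "odd":
        q = 1 - q
    return dataword + [q]

# Example (i) from the slides: d = (10110) -> c = (101101)
print(parity_encode([1, 0, 1, 1, 0]))   # [1, 0, 1, 1, 0, 1]
```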

Parity Codes- Example
 Coding table for (4,3) even parity code

Dataword Codeword
0 0 0 0 0 0 0
0 0 1 0 0 1 1
0 1 0 0 1 0 1
0 1 1 0 1 1 0
1 0 0 1 0 0 1
1 0 1 1 0 1 0
1 1 0 1 1 0 0
1 1 1 1 1 1 1

Parity Codes
 To decode
 Calculate the sum of the received bits in the block (mod 2)
 If the sum is 0 (1) for even (odd) parity, then the dataword is the first k bits of the received codeword; otherwise an error is detected
 The code can detect single errors
 But it cannot correct the error, since the error could be in any bit
 For example, if the received codeword is (100000), the transmitted codeword could have been (000000) or (110000), with the error being in the first or second place respectively
 Note the error could also lie in other positions, including the parity bit
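The decoding check can be sketched as follows (a hypothetical helper returning the dataword and an error flag):

```python
def parity_check(received, parity="even"):
    """Return (dataword, ok). ok is False when the mod-2 sum of the
    received block signals a (single) bit error."""
    s = sum(received) % 2
    expected = 0 if parity == "even" else 1
    return received[:-1], s == expected

print(parity_check([1, 0, 1, 1, 0, 1]))  # ([1, 0, 1, 1, 0], True)
print(parity_check([1, 0, 0, 1, 0, 1]))  # ([1, 0, 0, 1, 0], False) - error detected
```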
Parity Codes
 Known as a single error detecting (SED) code. Only useful if the probability of two errors is small, since a second error makes the parity correct again and go undetected
 Used in serial communications
 Low overhead but not very powerful
 The decoder can be implemented efficiently using a tree of XOR gates

Hamming Distance
 Error control capability is determined by the
Hamming distance
 The Hamming distance between two codewords is
equal to the number of differences between them,
e.g.,
10011011
11010010 have a Hamming distance = 3
 Alternatively, can compute by adding codewords (mod 2)
=01001001 (now count up the ones)
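Both views of the Hamming distance (count the differences, or count the ones in the mod-2 sum) reduce to an XOR, as this small sketch shows:

```python
def hamming_distance(a, b):
    """Number of positions where two equal-length words differ;
    equivalently the weight of their mod-2 (XOR) sum."""
    return sum(x ^ y for x, y in zip(a, b))

a = [1, 0, 0, 1, 1, 0, 1, 1]   # 10011011
b = [1, 1, 0, 1, 0, 0, 1, 0]   # 11010010
print(hamming_distance(a, b))  # 3
```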

Hamming Distance
 The Hamming distance of a code is equal to the minimum Hamming distance between any two of its codewords

 If the Hamming distance is:
 1 – no error control capability; i.e., a single error in a received codeword yields another valid codeword
 2 – can detect single errors (SED); i.e., a single error will yield an invalid codeword, but 2 errors will yield a valid (but incorrect) codeword

Hamming Distance
 If Hamming distance is:
3 – can correct single errors (SEC) or can detect double errors
(DED)
3 errors will yield a valid but incorrect codeword

Hamming Distance
 The maximum number of detectable errors is
   dmin − 1
 The maximum number of correctable errors is given by
   t = ⌊(dmin − 1) / 2⌋
 where dmin is the minimum Hamming distance between 2 codewords and ⌊.⌋ denotes the largest integer no greater than the enclosed quantity

1 jj 10/15/2018
 Linear block codes
 Error detecting and correcting capabilities
 Generator and parity-check matrices
 Standard array and syndrome decoding

Linear Block Codes
By definition: A code is said to be linear if any two
codewords in the code can be added in modulo-2
arithmetic to produce a third codeword in the
code.
 Consider an (n,k) linear block code: k of the n code bits are always identical to the message sequence to be transmitted.
 The (n – k) bits are computed from the message bits in
accordance with a prescribed encoding rule that determines
the mathematical structure of the code
 (n – k) bits are referred to as parity-check bits

Linear Block Codes
 Block codes in which the message bits are transmitted in
unaltered form are called systematic codes
 For applications requiring both error detection and error
correction, the use of systematic block codes simplifies
implementation of the decoder.
 Let m0, m1, …, mk–1 constitute a block of k arbitrary message bits, giving 2^k distinct message blocks
 The encoder produces an n-bit codeword (c0, c1, …, cn–1)
 Let (b0, b1, …, bn–k–1) denote the (n – k) parity-check bits in the codeword

Linear Block Codes
 For the code to possess a systematic structure, a
codeword is divided into two parts, message bits and
parity-check bits
 Message bits of a codeword before the parity-check bits, or
vice versa.
 (n – k) leftmost bits of a codeword are identical to the
corresponding parity-check bits and the k rightmost bits of
the codeword are identical to the corresponding message bits

Linear Block Codes

 Equations (10.1) and (10.2) define the mathematical structure of the (n,k) linear block code
 The coefficients pij are chosen in such a way that the rows of the generator matrix are linearly independent and the parity-check equations are unique
Linear Block Codes
 Equations may be rewritten in a compact form using matrix
notation
 1-by-k message vector m
 1-by-(n – k) parity-check vector b
 1-by-n code vector c

 Parity-check bits in compact matrix form:
   b = mP

Linear Block Codes
 The generator matrix G is in the canonical form, in that
its k rows are linearly independent
 It is not possible to express any row of the matrix G as a
linear combination of the remaining rows.
 The full set of codewords, referred to simply as the code, is generated as c = mG
 by ranging the message vector m through the set of all 2^k binary k-tuples (1-by-k vectors)
 Sum of any two codewords in the code is another
codeword
 This basic property of linear block codes is called closure.

Linear Block Codes
 To prove its validity, consider a pair of code vectors ci and cj
corresponding to a pair of message vectors mi and mj

 The modulo-2 sum of mi and mj represents a new message


vector.
 Correspondingly, the modulo-2 sum of ci and cj represents a
new code vector.

Linear Block Codes
 There is another way of expressing the relationship between the message bits
and parity-check bits of a linear block code.
 Let H denote an (n – k)-by-n matrix, defined as

Linear Block Codes
 The matrix H is called the parity-check matrix of the code
and the equations specified by (10.16) are called parity-check
equations.
 The generator equation (10.13) and the parity-check detector
equation (10.16) are basic to the description and operation of a
linear block code.
 These two equations are depicted in the form of block diagrams in
Figure 10.5a and b, respectively.

Example:1
(7, 4) Hamming code over GF(2)
The encoding equations for this code are given by
c0 = m0
c1 = m1
c2 = m2
c3 = m3
c4 = m0 + m1 + m2
c5 = m1 + m2 + m3
c6 = m0 + m1 + m3
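These equations transcribe directly into Python (the function name is a hypothetical label; arithmetic is mod 2):

```python
def hamming74_encode(m):
    """Systematic (7,4) Hamming encoder following the equations above;
    m = (m0, m1, m2, m3)."""
    m0, m1, m2, m3 = m
    return [m0, m1, m2, m3,
            (m0 + m1 + m2) % 2,   # c4
            (m1 + m2 + m3) % 2,   # c5
            (m0 + m1 + m3) % 2]   # c6

print(hamming74_encode([1, 0, 1, 1]))  # [1, 0, 1, 1, 0, 0, 0]
```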
Example:2

 Every codeword is a linear combination of the 4 rows of G.
 That is: c = mG, where

       | 1 1 0 | 1 0 0 0 |
   G = | 0 1 1 | 0 1 0 0 | = [ P | Ik ]
       | 1 1 1 | 0 0 1 0 |
       | 1 0 1 | 0 0 0 1 |
        k×(n−k)    k×k

 Storage requirement reduced from 2^k (n + k) to k(n − k).
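As a sketch (hypothetical helper names), the full code can be generated by multiplying every binary 4-tuple by this G over GF(2); this also exhibits the closure property and the minimum weight:

```python
import itertools

# Generator matrix from the slides, G = [P | I4] for a (7,4) code
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]

def encode(m, G):
    """c = mG over GF(2)."""
    return [sum(m[i] * G[i][j] for i in range(len(G))) % 2
            for j in range(len(G[0]))]

codewords = {tuple(encode(m, G)) for m in itertools.product([0, 1], repeat=4)}
print(len(codewords))                              # 16 codewords for k = 4
d_min = min(sum(c) for c in codewords if any(c))   # smallest nonzero weight
print(d_min)                                       # 3
```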
Parity-Check Matrix
For G = [ P | Ik ], define the matrix H = [In-k | PT]
(The size of H is (n-k)xn).
It follows that GHT = 0.
Since c = mG, then cHT = mGHT = 0.

 1 0 0 1 0 1 1
H   0 1 0 1 1 1 0
 
 0 0 1 0 1 1 1
Decoding
 The generator matrix G is used in the encoding
operation at the transmitter.
 Parity-check matrix H is used in the decoding
operation at the receiver
 Let r denote the 1-by-n received vector that results from sending the code vector c over a noisy binary channel
 Express r as the sum of the original code vector c and a vector e:
 r = c + e

Error vector or Error pattern
 The vector e is called the error vector or error pattern
 The ith element of e equals 0 if the corresponding element of r is the same as that of c
 The ith element of e equals 1 if the corresponding element of r is different from that of c; that is, an error has occurred in the ith location

Decoding
 Let c be transmitted and r be received, where
   r = c + e
 e = error pattern = (e1 e2 … en), where e = c + r (mod 2)
 The weight of e determines the number of errors.
 If the error pattern can be determined, decoding can be achieved by:
   c = r + e
 Consider the (7,4) code.
 (1) Let 1101000 be transmitted and 1100000 be received.
 Then: e = 0001000 (an error in the fourth location)
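This example works out in a few lines of mod-2 arithmetic (variable names are illustrative only):

```python
tx = [1, 1, 0, 1, 0, 0, 0]   # transmitted codeword 1101000
rx = [1, 1, 0, 0, 0, 0, 0]   # received vector 1100000

# e = c + r (mod 2): a 1 marks each errored position
e = [(c + r) % 2 for c, r in zip(tx, rx)]
print(e)                     # [0, 0, 0, 1, 0, 0, 0] -> error in the fourth location

# c = r + e (mod 2) recovers the transmitted codeword
corrected = [(r + b) % 2 for r, b in zip(rx, e)]
print(corrected == tx)       # True
```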
Syndrome: Definition

 The receiver has to decode the code vector c from the received vector r.
 The decoding algorithm starts with the computation of a 1-by-(n – k) vector called the error-syndrome vector, or simply the syndrome.
 The syndrome depends only upon the error pattern.
 For a 1-by-n received vector r, the syndrome is
   s = rH^T
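Using the (7,4) parity-check matrix from Example 2, the syndrome s = rH^T can be sketched as below (hypothetical helper name; each syndrome bit is one parity-check equation evaluated on r):

```python
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

def syndrome(r, H):
    """s = rH^T over GF(2): one bit per row of H."""
    return [sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H]

c = [1, 1, 0, 1, 0, 0, 0]    # a valid codeword of this code
print(syndrome(c, H))        # [0, 0, 0] - no error detected
r = c[:]; r[3] ^= 1          # flip the fourth bit
print(syndrome(r, H))        # [1, 1, 0] - matches column 4 of H
```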

Syndrome: Properties

Property 3: For a linear block code, the syndrome s is equal to the sum of those rows of the transposed parity-check matrix HT where errors have occurred due to channel noise.
Syndrome: Properties

Property 4: Each coset of the code is characterized by a unique syndrome

Minimum Distance Considerations
 The Hamming weight w(c) of a code vector c is defined
as the number of nonzero elements in the code vector.

 The Hamming weight of a code vector is the distance between the code vector and the all-zero code vector.

 The minimum distance dmin of a linear block code is the smallest Hamming distance between any pair of codewords.

Minimum Distance Considerations
 Minimum distance is the same as the smallest Hamming
weight of the difference between any pair of code vectors

 From the closure property of linear block codes, the sum (or
difference) of two code vectors is another code vector.

 The minimum distance of a linear block code is the smallest Hamming weight of the nonzero code vectors in the code.

Minimum Distance Considerations
 dmin is related to the structure of the parity-check
matrix H of the code
 cHT = 0, where HT is the transpose of the parity-check
matrix H
 Let the matrix H be expressed in terms of its columns as H =
[h1, h2,… hn]
 For a code vector c to satisfy the condition cHT = 0, the
vector c must have ones in such positions that the
corresponding rows of HT sum to the zero vector 0.

Minimum Distance Considerations
 by definition, the number of ones in a code vector is the
Hamming weight of the code vector.
 smallest Hamming weight of the nonzero code vectors in a
linear block code equals the minimum distance of the code

 The minimum distance of a linear block code is defined by the minimum number of rows of the matrix HT whose sum is equal to the zero vector.

Minimum Distance Considerations
 dmin determines the error-correcting capability of
the code
 Suppose an (n,k) linear block code is required to detect and correct all error patterns over a binary symmetric channel whose Hamming weight is less than or equal to t
 That is, if a code vector ci in the code is transmitted and the received vector is r = ci + e, we require that the decoder output ci whenever the error pattern e has a Hamming weight w(e) ≤ t

Minimum Distance Considerations
 Assume the 2^k code vectors in the code are transmitted with equal probability
 The best strategy is for the decoder to pick the code vector closest to the received vector r; that is, the one for which the Hamming distance d(ci, r) is the smallest
 The decoder will be able to detect and correct all error patterns of Hamming weight w(e) ≤ t, provided that the minimum distance of the code is equal to or greater than 2t + 1

Minimum Distance Considerations
 We demonstrate the validity of this requirement by adopting a geometric interpretation:
 The transmitted 1-by-n code vector and the 1-by-n received vector are represented as points in an n-dimensional space
 Construct two spheres, each of radius t, around the points that represent code vectors ci and cj, under two different conditions:

Minimum Distance Considerations
 1. Let these two spheres be disjoint, d(ci,cj) ≥2t + 1
 If, then, the code vector ci is transmitted and the Hamming
distance d(ci,r) ≤ t, it is clear that the decoder will pick ci, as
it is the code vector closest to the received vector r.
 2. If, on the other hand, the Hamming distance d(ci,cj) ≤ 2t,
the two spheres around ci and cj intersect

Minimum Distance Considerations
 if ci is transmitted, there exists a received vector r such that
the Hamming distance d(ci,r) ≤ t, yet r is as close to cj as it is
to ci.
 there is now the possibility of the decoder picking the vector
cj, which is wrong.
 An (n,k) linear block code has the power to correct all error
patterns of weight t or less if, and only if, d(ci,cj) ≥ 2t + 1,
for all ci and cj.

Decoding
Syndrome-based decoding scheme for linear block codes:
 There are 2^k code vectors of an (n,k) linear block code
 r: received vector, which may have one of 2^n possible values
 The receiver has the task of partitioning the 2^n possible received vectors into 2^k disjoint subsets in such a way that the ith subset Di corresponds to code vector ci for 1 ≤ i ≤ 2^k
 The received vector r is decoded into ci if it is in the ith subset
 For the decoding to be correct, r must be in the subset that belongs to the code vector ci that was actually sent
 The 2^k subsets described herein constitute a standard array of the linear block code

Standard array
To construct
1. The 2^k code vectors are placed in a row with the all-zero code vector c1 as the leftmost element.
2. An error pattern e2 is picked and placed under c1, and a
second row is formed by adding e2 to each of the remaining
code vectors in the first row; it is important that the error
pattern chosen as the first element in a row has not
previously appeared in the standard array.
3. Step 2 is repeated until all the possible error patterns have
been accounted for.
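The construction steps above can be sketched in Python for the (6,3) code used later in these slides (the helper name and the weight-ordered candidate scan are illustrative assumptions, not from the slides):

```python
import itertools

# Codewords of the (6,3) code from the slides
codewords = [[0,0,0,0,0,0], [1,1,0,0,0,1], [1,0,1,0,1,0], [0,1,1,0,1,1],
             [0,1,1,1,0,0], [1,0,1,1,0,1], [1,1,0,1,1,0], [0,0,0,1,1,1]]

def build_standard_array(codewords, n):
    """Rows: the code itself, then one coset per unused error pattern,
    taking patterns in order of increasing weight (most probable first)."""
    rows = [list(codewords)]
    used = {tuple(c) for c in codewords}
    candidates = sorted(itertools.product([0, 1], repeat=n), key=sum)
    for e in candidates:
        if tuple(e) in used:        # pattern already appears in the array
            continue
        row = [[(a + b) % 2 for a, b in zip(e, c)] for c in codewords]
        rows.append(row)
        used.update(tuple(v) for v in row)
        if len(used) == 2 ** n:     # all possible received vectors covered
            break
    return rows

rows = build_standard_array(codewords, 6)
print(len(rows))   # 2^(n-k) = 8 cosets
```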

Standard Array Decoding

 For an (n,k) linear code, standard array decoding is able to correct exactly 2^(n−k) error patterns, including the all-zero error pattern.
 Illustration 1: The (7,4) Hamming code
# of correctable error patterns = 2^3 = 8
# of single-error patterns = 7
Therefore, all single-error patterns, and only single-error patterns, can be corrected
Standard Array Decoding (cont’d)
Illustration 2: The (6,3) code defined by the H matrix:

       | 1 0 0 0 1 1 |
   H = | 0 1 0 1 0 1 |
       | 0 0 1 1 1 0 |

Parity-check equations:
   c1 = c5 + c6
   c2 = c4 + c6
   c3 = c4 + c5

Codewords: 000000, 110001, 101010, 011011, 011100, 101101, 110110, 000111
dmin = 3
Standard Array Decoding (cont’d)
 Can correct all single errors and one double-error pattern

000000 110001 101010 011011 011100 101101 110110 000111
000001 110000 101011 011010 011101 101100 110111 000110
000010 110011 101000 011001 011110 101111 110100 000101
000100 110101 101110 011111 011000 101001 110010 000011
001000 111001 100010 010011 010100 100101 111110 001111
010000 100001 111010 001011 001100 111101 100110 010111
100000 010001 001010 111011 111100 001101 010110 100111
100100 010101 001110 111111 111000 001001 010010 100011
Syndrome Decoding
decoding procedure for linear block codes:
1. For the received vector r, compute the syndrome s = rHT.
2. Within the coset characterized by the syndrome s, identify
the coset leader (i.e., the error pattern with the largest
probability of occurrence); call it e0.
3. Compute the code vector
c = r + e0
as the decoded version of the received vector r.
 This procedure is called syndrome decoding.
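The three-step procedure above can be sketched end-to-end for the (7,4) code of Example 2 (helper names are hypothetical; the coset-leader table is built from the lowest-weight error patterns, which for this single-error-correcting code are the all-zero pattern and the 7 single-bit errors):

```python
import itertools

G = [[1,1,0,1,0,0,0], [0,1,1,0,1,0,0], [1,1,1,0,0,1,0], [1,0,1,0,0,0,1]]
H = [[1,0,0,1,0,1,1], [0,1,0,1,1,1,0], [0,0,1,0,1,1,1]]

def syndrome(r, H):
    """Step 1: s = rH^T over GF(2)."""
    return tuple(sum(a * b for a, b in zip(r, row)) % 2 for row in H)

# Step 2 (precomputed): map each syndrome to its coset leader, the
# lowest-weight error pattern producing it.  Weights 0 and 1 cover all
# 2^(n-k) = 8 syndromes of this code.
leaders = {}
for w in range(2):
    for pos in itertools.combinations(range(7), w):
        e = [1 if i in pos else 0 for i in range(7)]
        leaders.setdefault(syndrome(e, H), e)

def decode(r):
    """Step 3: c = r + e0 (mod 2)."""
    e0 = leaders[syndrome(r, H)]
    return [(ri + ei) % 2 for ri, ei in zip(r, e0)]

r = [1, 1, 0, 0, 0, 0, 0]   # codeword 1101000 with the fourth bit flipped
print(decode(r))            # [1, 1, 0, 1, 0, 0, 0]
```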

