
Description/definition of coding theory

Coding theory studies the reliable transmission of data across noisy channels. It
deals with error-correcting codes using classical and modern algebraic techniques.

Explanation of the main goals of coding theory


The goal of coding theory is to improve the reliability of digital communication by devising methods that
enable the receiver to decide whether there have been errors during the transmission (error detection),
and if there are, to possibly recover the original message (error correction). If a machine could reliably
transmit and receive data, then coding theory would be superfluous.

Diagram representing the process of sending and receiving information/messages across a
noisy channel
Illustrative examples of simple codes (parity, repetition, and Hamming H(7,4))

Parity code example

Data      Even parity   Odd parity

1 0 1 0   1010 0        1010 1
1 0 0 1   1001 0        1001 1
1 1 1 0   1110 1        1110 0
0 1 0 0   0100 1        0100 0
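The parity columns above can be reproduced with a short sketch. The function name `parity_bit` is illustrative, not from the source; it appends one bit so that the total number of 1s is even (even parity) or odd (odd parity).

```python
def parity_bit(bits, even=True):
    """Append a parity bit to a bit string.

    Even parity: total count of 1s (data + parity bit) is even.
    Odd parity:  total count of 1s is odd.
    """
    ones = bits.count("1")
    if even:
        p = ones % 2          # 0 if the count of 1s is already even
    else:
        p = (ones + 1) % 2    # 0 if the count of 1s is already odd
    return bits + str(p)

# Reproduce the table rows above:
for data in ["1010", "1001", "1110", "0100"]:
    print(data, parity_bit(data, even=True), parity_bit(data, even=False))
```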

Repetition code example

r = 4 (each data bit is repeated four times)

x y   xxxx yyyy   Codeword

1 1   1111 1111   1111 1111
1 0   1111 0000   1111 0000
0 1   0000 1111   0000 1111
0 0   0000 0000   0000 0000
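A repetition code is simple enough to sketch directly. The names `rep_encode` and `rep_decode` are illustrative; decoding takes a majority vote within each block of r repeats (note that with even r a 2-vs-2 tie is ambiguous; this sketch breaks ties toward 0).

```python
def rep_encode(bits, r=4):
    """Repeat each bit r times: '10' -> '11110000' for r = 4."""
    return "".join(b * r for b in bits)

def rep_decode(word, r=4):
    """Decode by majority vote within each block of r repeated bits."""
    blocks = [word[i:i + r] for i in range(0, len(word), r)]
    return "".join("1" if b.count("1") > r / 2 else "0" for b in blocks)

print(rep_encode("10"))        # 11110000  (matches the table row for x y = 1 0)
print(rep_decode("11010000"))  # 10        (one flipped bit is corrected)
```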


Hamming code H(7,4)

(parity bits a, b, c are computed mod 2)

w x y z   a=w+x+z   b=w+y+z   c=x+y+z   codeword

1 0 0 1   0         0         1         1001 001
0 1 0 1   0         1         0         0101 010
1 1 1 1   1         1         1         1111 111
0 1 1 1   0         0         1         0111 001
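The parity equations from the table can be checked with a short sketch. The function name `hamming74_encode` is an assumption; the equations a = w+x+z, b = w+y+z, c = x+y+z (mod 2) come from the table header.

```python
def hamming74_encode(w, x, y, z):
    """Encode 4 data bits into a 7-bit H(7,4) codeword using the
    parity equations a = w+x+z, b = w+y+z, c = x+y+z (mod 2)."""
    a = (w + x + z) % 2
    b = (w + y + z) % 2
    c = (x + y + z) % 2
    return (w, x, y, z, a, b, c)

# Reproduce the first table row:
print(hamming74_encode(1, 0, 0, 1))  # (1, 0, 0, 1, 0, 0, 1)
```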

Discussion on minimum distance and error detection and error correction capability of a code
Minimum distance of a code

• It is the smallest distance between any two distinct codewords in the code.
• We use the notation 𝑑(𝐶) to mean the minimum distance of a code C.

Example:

Consider the code

C = {000000, 111111, 010101}

The pairwise distances are d(000000, 111111) = 6, d(000000, 010101) = 3, and
d(111111, 010101) = 3, so 𝑑(𝐶) = 3.
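The minimum distance of the example code can be computed directly. The function names here are illustrative:

```python
from itertools import combinations

def hamming_distance(u, v):
    """Number of positions in which two equal-length bit strings differ."""
    return sum(a != b for a, b in zip(u, v))

def min_distance(code):
    """Smallest Hamming distance between any two distinct codewords."""
    return min(hamming_distance(u, v) for u, v in combinations(code, 2))

C = ["000000", "111111", "010101"]
print(min_distance(C))  # 3
```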
Detection and Correction Capability of a Code

Let C be a set of codewords with minimum Hamming distance d. Then the code can:

1. Detect up to d − 1 errors; and


2. Correct up to ⌊(d − 1)/2⌋ errors (equivalently, if d = 2e + 1, it corrects up to e errors).

Note: the answers should be non-negative integers.

Example:

How many errors can a code with minimum distance of 10 detect? Correct?

Minimum distance: d = 10

Error detection: up to d − 1 = 10 − 1 = 9 errors

Error correction: up to ⌊(d − 1)/2⌋ = ⌊9/2⌋ = ⌊4.5⌋ = 4 errors (round down)
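The worked example above can be sketched in a couple of lines; the function name `detect_correct` is an assumption:

```python
def detect_correct(d):
    """Given minimum distance d, return the number of errors the code
    can detect (d - 1) and correct (floor((d - 1) / 2))."""
    return d - 1, (d - 1) // 2

print(detect_correct(10))  # (9, 4)
```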

Discussion on the contributions of Claude Shannon and Richard Hamming to the field of
coding theory

Claude Shannon
Father of Information Theory

• Simplification of information into bits or binary code (1’s and 0’s)


• communication as selection
o Information, no matter the form, can be measured using a fundamental unit
▪ Entropy (H) (scale of measurement)
• Bit (unit)
o 1 = yes; 0 = no
o A scroll, an email, etc. containing the same information carry the same number of
bits, but at varying density

• Less uncertainty/surprise of output <=> less information


o Measure of average uncertainty/surprise = entropy
▪ Unit of entropy based on uncertainty of fair coin flip (50%; 50%) = bit
• Entropy is maximum when all outcomes are equally likely
o Predictability decreases entropy
▪ Less questions to ask needed to guess/predict outcome
• 𝐻 = ∑ᵢ₌₁ⁿ 𝑝ᵢ × log₂(1/𝑝ᵢ)
o 𝑝ᵢ = probability of outcome i
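The entropy formula can be sketched directly; the function name `entropy` is illustrative. A fair coin flip yields exactly 1 bit, and entropy drops as outcomes become more predictable:

```python
from math import log2

def entropy(probs):
    """Shannon entropy H = sum(p_i * log2(1/p_i)), in bits.
    Terms with p = 0 contribute nothing (by convention)."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit (the unit of entropy)
print(entropy([1.0]))        # a certain outcome: 0.0 bits
```

Note that a biased coin such as `[0.9, 0.1]` gives less than 1 bit, consistent with "predictability decreases entropy."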
Richard Hamming
Error Correcting Codes – Nearest Neighbor Error Correction

• Hamming Distance d(C)


o not only detect, but correct errors in bit strings by calculating the number of bits
disparate between valid codes and the erroneous code.

• Any received word not contained in C must be the result of noise


• quality of error correction is heavily dependent on choosing efficient codewords in C
• the codeword with the smallest Hamming distance to the received word has a high probability of being the one sent
o the lower d(C), the less effective error correction becomes


• d(C) denotes the minimum Hamming distance: the smallest Hamming distance between any
two codewords contained within C
• Hamming Bound
o Formulation of an efficient code is bounded by three competing principles
1. You want short codewords to reduce the size of data transmissions
2. You want codewords with greater minimum Hamming distance (i.e., greater the
ability to detect and correct errors)
3. But there is limited number of codewords of a specified length that also have a
specified minimum Hamming distance
o Expressed as an equation:

|C| ≤ 2ⁿ / ∑ᵢ₌₀ᵏ (n choose i)

▪ |C| = number of codewords (the bound is an upper limit on this)
▪ n = length of the codewords
▪ k = maximum number of errors the code is capable of correcting
o Any code that achieves the upper bound of the equation is known as a Perfect Code
▪ Hamming Codes
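The Hamming bound can be evaluated in a short sketch; the function name `hamming_bound` is an assumption. For H(7,4) (n = 7, k = 1) the bound works out to exactly 16 = 2⁴ codewords, which is why it is a perfect code:

```python
from math import comb

def hamming_bound(n, k):
    """Upper bound on |C| for codewords of length n correcting up to
    k errors: |C| <= 2**n / sum of C(n, i) for i = 0..k.
    Integer division gives the largest whole number of codewords."""
    return 2 ** n // sum(comb(n, i) for i in range(k + 1))

print(hamming_bound(7, 1))  # 16: H(7,4) achieves the bound (perfect code)
print(hamming_bound(3, 1))  # 2: the length-3 repetition code is also perfect
```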
Solutions to items #4, 6, and 8 are found on page 147 of the textbook.
