Coding theory :

Coding theory is the study of the properties of codes and their fitness for a
specific application. Codes are used for data compression, cryptography, error-
correction and more recently also for network coding. Codes are studied by various
scientific disciplines—such as information theory, electrical engineering,
mathematics, and computer science—for the purpose of designing efficient and
reliable data transmission methods. This typically involves the removal of
redundancy and the correction (or detection) of errors in the transmitted data.

There are four types of coding:[1]

1. Data compression (or source coding)
2. Error correction (or channel coding)
3. Cryptographic coding
4. Line coding
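
As a minimal illustration of the error detection mentioned above (and of channel coding, type 2 in the list), here is a sketch of a single even-parity bit, one of the simplest possible codes; the message bits are purely illustrative.

def add_parity(bits):
    # Append one bit so the total number of 1s is even
    return bits + [sum(bits) % 2]

def check_parity(word):
    # True if no error is detected (even number of 1s)
    return sum(word) % 2 == 0

sent = add_parity([1, 0, 1, 1])                 # -> [1, 0, 1, 1, 1]
corrupted = sent.copy()
corrupted[2] ^= 1                               # flip one bit in transit
print(check_parity(sent), check_parity(corrupted))   # True False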

Cryptographic coding :
Cryptography or cryptographic coding is the practice and study of techniques for
secure communication in the presence of third parties (called adversaries).[6] More
generally, it is about constructing and analyzing protocols that block adversaries;[7]
various aspects in information security such as data confidentiality, data integrity,
authentication, and non-repudiation[8] are central to modern cryptography. Modern
cryptography exists at the intersection of the disciplines of mathematics, computer
science, and electrical engineering. Applications of cryptography include ATM
cards, computer passwords, and electronic commerce.

Cryptography prior to the modern age was effectively synonymous with
encryption, the conversion of information from a readable state to apparent
nonsense. The originator of an encrypted message shared the decoding technique
needed to recover the original information only with intended recipients, thereby
precluding unwanted persons from doing the same. Since World War I and the
advent of the computer, the methods used to carry out cryptology have become
increasingly complex and its application more widespread.

Modern cryptography is heavily based on mathematical theory and computer
science practice; cryptographic algorithms are designed around computational
hardness assumptions, making such algorithms hard to break in practice by any
adversary. It is theoretically possible to break such a system, but it is infeasible to
do so by any known practical means. These schemes are therefore termed
computationally secure; theoretical advances, e.g., improvements in integer
factorization algorithms, and faster computing technology require these solutions
to be continually adapted. There exist information-theoretically secure schemes
that provably cannot be broken even with unlimited computing power—an
example is the one-time pad—but these schemes are more difficult to implement
than the best theoretically breakable but computationally secure mechanisms.
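
As a concrete illustration of the one-time pad mentioned above, here is a minimal sketch in Python; the message is illustrative, and the key must be truly random, kept secret, and never reused.

import secrets

message = b"ATTACK AT DAWN"                     # illustrative plaintext
key = secrets.token_bytes(len(message))         # random key, same length, used only once

ciphertext = bytes(m ^ k for m, k in zip(message, key))    # encryption is XOR with the key
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))  # decryption is the same XOR

print(recovered == message)                     # True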

Turbo code :

In information theory, turbo codes (originally in French Turbocodes) are a class of
high-performance forward error correction (FEC) codes developed around 1990-91
(but first published in 1993), which were the first practical codes to closely
approach the channel capacity, a theoretical maximum for the code rate at which
reliable communication is still possible given a specific noise level. Turbo codes
are finding use in 3G/4G mobile communications (e.g. in UMTS and LTE) and in
(deep space) satellite communications as well as other applications where
designers seek to achieve reliable information transfer over bandwidth- or latency-
constrained communication links in the presence of data-corrupting noise. Turbo
codes are nowadays competing with LDPC codes, which provide similar
performance.
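
To make the notion of channel capacity concrete, here is a minimal sketch that computes the capacity of a binary symmetric channel with crossover probability p; the channel model and the value of p are illustrative assumptions, not taken from the text.

from math import log2

def bsc_capacity(p):
    # C = 1 - H2(p), where H2 is the binary entropy function, in bits per channel use
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * log2(p) + (1 - p) * log2(1 - p)

# A code rate above bsc_capacity(p) cannot give reliable communication at
# crossover probability p; rates below it can, in principle, be approached.
print(round(bsc_capacity(0.1), 3))              # 0.531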

Viterbi algorithm :

The Viterbi algorithm is a dynamic programming algorithm for finding the most
likely sequence of hidden states – called the Viterbi path – that results in a
sequence of observed events, especially in the context of Markov information
sources and hidden Markov models.

The algorithm has found universal application in decoding the convolutional codes
used in both CDMA and GSM digital cellular, dial-up modems, satellite, deep-
space communications, and 802.11 wireless LANs. It is now also commonly used
in speech recognition, speech synthesis, diarization,[1] keyword spotting,
computational linguistics, and bioinformatics. For example, in speech-to-text
(speech recognition), the acoustic signal is treated as the observed sequence of
events, and a string of text is considered to be the "hidden cause" of the acoustic
signal. The Viterbi algorithm finds the most likely string of text given the acoustic
signal.
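
A minimal sketch of the Viterbi algorithm for a toy hidden Markov model follows; the states, observations, and probability tables are illustrative assumptions, not part of the text above.

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = probability of the best state sequence ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best previous state leading into s
            prev, p = max(((r, V[t - 1][r] * trans_p[r][s]) for r in states),
                          key=lambda x: x[1])
            V[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev
    # Trace back from the most probable final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

states = ("Rainy", "Sunny")
obs = ("walk", "shop", "clean")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(obs, states, start_p, trans_p, emit_p))   # ['Sunny', 'Rainy', 'Rainy']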

BCH code :

In coding theory, the BCH codes form a class of cyclic error-correcting codes that
are constructed using finite fields. BCH codes were invented in 1959 by French
mathematician Alexis Hocquenghem, and independently in 1960 by Raj Bose and
D. K. Ray-Chaudhuri.[1][2][3] The acronym BCH comprises the initials of these
inventors' names.

One of the key features of BCH codes is that during code design, there is a precise
control over the number of symbol errors correctable by the code. In particular, it
is possible to design binary BCH codes that can correct multiple bit errors. Another
advantage of BCH codes is the ease with which they can be decoded, namely, via
an algebraic method known as syndrome decoding. This simplifies the design of
the decoder for these codes, using small low-power electronic hardware.
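
As a minimal illustration of syndrome decoding, here is a sketch for the binary Hamming(7,4) code, the simplest single-error-correcting BCH code; with bit positions numbered 1..7, the syndrome of a received word directly names the position of a single bit error.

def syndrome(bits):
    # XOR of the 1-based positions holding a 1; 0 means no detected error
    s = 0
    for pos, b in enumerate(bits, start=1):
        if b:
            s ^= pos
    return s

def correct(bits):
    # Flip the single bit named by the syndrome, if any
    s = syndrome(bits)
    if s:
        bits = bits.copy()
        bits[s - 1] ^= 1
    return bits

codeword = [0, 1, 1, 0, 0, 1, 1]                 # a valid codeword (syndrome 0)
received = codeword.copy()
received[4] ^= 1                                 # inject one bit error at position 5
print(syndrome(received))                        # 5 -> error at position 5
print(correct(received) == codeword)             # True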

BCH codes are used in applications such as satellite communications,[4] compact
disc players, DVDs, disk drives, solid-state drives[5] and two-dimensional bar
codes.

Cryptanalysis :

Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to loosen" or "to
untie") is the study of analyzing information systems in order to study the hidden
aspects of the systems.[1] Cryptanalysis is used to breach cryptographic security
systems and gain access to the contents of encrypted messages, even if the
cryptographic key is unknown.

In addition to mathematical analysis of cryptographic algorithms, cryptanalysis
includes the study of side-channel attacks that do not target weaknesses in the
cryptographic algorithms themselves, but instead exploit weaknesses in their
implementation.

Soft-decision decoder :

In information theory, a soft-decision decoder is a class of decoding algorithms
used to decode data that has been encoded with an error
correcting code. Whereas a hard-decision decoder operates on data that take on a
fixed set of possible values (typically 0 or 1 in a binary code), the inputs to a soft-
decision decoder may take on a whole range of values in-between. This extra
information indicates the reliability of each input data point, and is used to form
better estimates of the original data. Therefore, a soft-decision decoder will
typically perform better in the presence of corrupted data than its hard-decision
counterpart.[1]

Soft-decision decoders are often used in Viterbi decoders and turbo code decoders.
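
A minimal sketch of the difference follows, using a 3-bit repetition code with illustrative received values; the hard-decision decoder slices each sample to 0 or 1 before voting, while the soft-decision decoder keeps the analog reliability of each sample.

def hard_decision(received):
    # Slice each sample to 0/1 first, then take a majority vote
    bits = [1 if r > 0.5 else 0 for r in received]
    return 1 if sum(bits) >= 2 else 0

def soft_decision(received):
    # Keep the analog values and compare the summed evidence against the midpoint
    return 1 if sum(received) > len(received) / 2 else 0

# Transmitted bit: 1, sent as (1.0, 1.0, 1.0). Two samples are badly corrupted but
# stay barely on the "0" side, while one sample is a confident "1".
received = [0.45, 0.48, 0.95]
print(hard_decision(received))   # 0 -- two weak "0" votes outvote one strong "1"
print(soft_decision(received))   # 1 -- the summed evidence (1.88 > 1.5) recovers the bit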

Concatenated error correction code :

In coding theory, concatenated codes form a class of error-correcting codes that
are derived by combining an inner code and an outer code. They were conceived
in 1966 by Dave Forney as a solution to the problem of finding a code that has
both exponentially decreasing error probability with increasing block length and
polynomial-time decoding complexity.[1] Concatenated codes became widely used
in space communications in the 1970s.

Decoding concatenated codes :

A natural concept for a decoding algorithm for concatenated codes is to first
decode the inner code and then the outer code. For the algorithm to be practical it
must be polynomial-time in the final block length. Suppose that there is a
polynomial-time unique decoding algorithm for the outer code. Now we have to
find a polynomial-time decoding algorithm for the inner code. It is understood that
polynomial running time here means that running time is polynomial in the final
block length. The main idea is that if the inner block length is selected to be
logarithmic in the size of the outer code then the decoding algorithm for the inner
code may run in exponential time of the inner block length, and we can thus use an
exponential-time but optimal maximum likelihood decoder (MLD) for the inner
code.
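
Here is a rough numeric sketch of that counting argument, under assumed parameters; the point is only that an exhaustive inner search over at most 2^(inner length) candidates stays polynomial when the inner block length is logarithmic in the outer code's size.

from math import log2

outer_size = 2**10                        # assumed size of the outer code
inner_len = int(log2(outer_size))         # inner block length chosen ~ log(outer size) = 10

mld_work_per_block = 2 ** inner_len       # at most 2^inner_len candidates per MLD call
total_inner_work = outer_size * mld_work_per_block   # one MLD call per outer position (assumed)

print(inner_len, mld_work_per_block, total_inner_work)   # 10 1024 1048576 -- polynomial, not exponential
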
Reed-Solomon Codes :
1. Introduction

Reed-Solomon codes are block-based error correcting codes with a wide range of
applications in digital communications and storage. Reed-Solomon codes are used
to correct errors in many systems including:

• Storage devices (including tape, Compact Disk, DVD, barcodes, etc)
• Wireless or mobile communications (including cellular telephones, microwave links, etc)
• Satellite communications
• Digital television / DVB
• High-speed modems such as ADSL, xDSL, etc.

[Figure: a typical Reed-Solomon system, consisting of an encoder, a channel or storage medium in which errors occur, and a decoder.]

The Reed-Solomon encoder takes a block of digital data and adds extra
"redundant" bits. Errors occur during transmission or storage for a number of
reasons (for example noise or interference, scratches on a CD, etc). The Reed-
Solomon decoder processes each block and attempts to correct errors and recover
the original data. The number and type of errors that can be corrected depends on
the characteristics of the Reed-Solomon code.

2. Properties of Reed-Solomon codes

Reed-Solomon codes are a subset of BCH codes and are linear block codes. A
Reed-Solomon code is specified as RS(n,k) with s-bit symbols.

This means that the encoder takes k data symbols of s bits each and adds parity
symbols to make an n symbol codeword. There are n-k parity symbols of s bits
each. A Reed-Solomon decoder can correct up to t symbols that contain errors in a
codeword, where 2t = n-k.
[Figure: a typical Reed-Solomon codeword, with the k data symbols followed by the 2t parity symbols. This is known as a systematic code because the data is left unchanged and the parity symbols are appended.]

Example: A popular Reed-Solomon code is RS(255,223) with 8-bit symbols. Each
codeword contains 255 bytes, of which 223 bytes are data and 32 bytes
are parity. For this code:

n = 255, k = 223, s = 8

2t = 32, t = 16

The decoder can correct any 16 symbol errors in the code word: i.e. errors in up to
16 bytes anywhere in the codeword can be automatically corrected.

Given a symbol size s, the maximum codeword length (n) for a Reed-Solomon
code is n = 2^s - 1.

For example, the maximum length of a code with 8-bit symbols (s=8) is 255 bytes.
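
A minimal sketch of this bookkeeping follows; given the symbol size s, the data length k, and the codeword length n, it reports the maximum codeword length, the parity count, and the error-correcting capability t.

def rs_parameters(s, k, n=None):
    n_max = 2**s - 1                 # maximum codeword length for s-bit symbols
    if n is None:
        n = n_max
    parity = n - k                   # 2t parity symbols
    t = parity // 2                  # correctable symbol errors per codeword
    return {"n": n, "k": k, "s": s, "n_max": n_max, "parity": parity, "t": t}

print(rs_parameters(8, 223, 255))
# {'n': 255, 'k': 223, 's': 8, 'n_max': 255, 'parity': 32, 't': 16}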

Reed-Solomon codes may be shortened by (conceptually) making a number of data
symbols zero at the encoder, not transmitting them, and then re-inserting them at
the decoder.

Example: The (255,223) code described above can be shortened to (200,168). The
encoder takes a block of 168 data bytes, (conceptually) adds 55 zero bytes, creates
a (255,223) codeword and transmits only the 168 data bytes and 32 parity bytes.
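
A minimal sketch of the shortening bookkeeping follows; rs_encode and rs_decode are hypothetical placeholders for a real systematic RS(255,223) codec, and only the zero padding and stripping around them is illustrated.

K, PAD = 168, 223 - 168                      # 168 data bytes, 55 conceptual zero bytes

def encode_shortened(data168, rs_encode):
    # rs_encode: hypothetical (255,223) encoder, 223 data bytes -> 255-byte codeword
    padded = bytes(PAD) + data168            # conceptually prepend 55 zero bytes
    codeword255 = rs_encode(padded)
    return codeword255[PAD:]                 # transmit only 168 data bytes + 32 parity bytes

def decode_shortened(received200, rs_decode):
    # rs_decode: hypothetical (255,223) decoder, 255-byte word -> 223 corrected data bytes
    word255 = bytes(PAD) + received200       # re-insert the zero bytes at the decoder
    return rs_decode(word255)[PAD:]          # drop the padding to recover the 168 data bytes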

The amount of processing "power" required to encode and decode Reed-Solomon
codes is related to the number of parity symbols per codeword. A large value of t
means that a large number of errors can be corrected but requires more
computational power than a small value of t.

Symbol Errors :

One symbol error occurs whenever at least one bit in a symbol is wrong: a symbol with
a single erred bit and a symbol with all of its bits wrong each count as one symbol error.

Example: RS(255,223) can correct 16 symbol errors. In the worst case, 16 bit
errors may occur, each in a separate symbol (byte), so that the decoder corrects only
16 bit errors. In the best case, 16 complete byte errors occur, so that the decoder
corrects 16 x 8 = 128 bit errors.

Reed-Solomon codes are particularly well suited to correcting burst errors (where a
series of bits in the codeword are received in error).
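
A minimal sketch of why bursts are handled well follows; it simply counts how many 8-bit symbols are touched by an illustrative run of bit errors, since each touched symbol costs only one of the t correctable symbol errors.

def affected_symbols(bit_error_positions, s=8):
    # Count the s-bit symbols that contain at least one erred bit
    return len({pos // s for pos in bit_error_positions})

burst = range(100, 125)                      # 25 consecutive bit errors (illustrative)
print(affected_symbols(burst))               # 4 -- the whole burst uses only 4 of the t=16 symbols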

Decoding :

Reed-Solomon algebraic decoding procedures can correct errors and erasures. An
erasure occurs when the position of an erred symbol is known. A decoder can
correct up to t errors or up to 2t erasures. Erasure information can often be supplied
by the demodulator in a digital communication system, i.e. the demodulator "flags"
received symbols that are likely to contain errors.

When a codeword is decoded, there are three possible outcomes:

1. If 2e + r ≤ 2t (e errors, r erasures) then the original transmitted codeword will
always be recovered,

OTHERWISE

2. The decoder will detect that it cannot recover the original code word and
indicate this fact.

OR

3. The decoder will mis-decode and recover an incorrect code word without any
indication.

The probability of each of the three possibilities depends on the particular Reed-
Solomon code and on the number and distribution of errors.
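
A minimal sketch of the guarantee in outcome 1 follows, using the RS(255,223) value t = 16; the error and erasure counts are illustrative.

def guaranteed_recovery(errors, erasures, t):
    # Outcome 1: recovery is guaranteed whenever 2e + r <= 2t
    return 2 * errors + erasures <= 2 * t

t = 16                                       # RS(255,223)
print(guaranteed_recovery(16, 0, t))         # True  -- up to t errors
print(guaranteed_recovery(0, 32, t))         # True  -- up to 2t erasures
print(guaranteed_recovery(17, 0, t))         # False -- beyond the guarantee; detection or mis-decoding may occur
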
Coding Gain :

The advantage of using Reed-Solomon codes is that the probability of an error
remaining in the decoded data is (usually) much lower than the probability of an
error if Reed-Solomon is not used. This is often described as coding gain.

Example: A digital communication system is designed to operate at a Bit Error
Ratio (BER) of 10^-9, i.e. no more than 1 in 10^9 bits are received in error. This can
be achieved by boosting the power of the transmitter or by adding Reed-Solomon
(or another type of Forward Error Correction). Reed-Solomon allows the system to
achieve this target BER with a lower transmitter output power. The power "saving"
given by Reed-Solomon (in decibels) is the coding gain.
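
A rough sketch of how coding gain is quoted follows; the two power figures are purely illustrative assumptions, not taken from the text.

from math import log10

power_needed_uncoded_w = 20.0    # assumed transmit power for BER 10^-9 without coding
power_needed_coded_w = 5.0       # assumed transmit power for the same BER with Reed-Solomon

coding_gain_db = 10 * log10(power_needed_uncoded_w / power_needed_coded_w)
print(round(coding_gain_db, 1))  # 6.0 dB of coding gain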
