Cryptography
Coding theory is the study of the properties of codes and their fitness for a
specific application. Codes are used for data compression, cryptography, error-
correction and more recently also for network coding. Codes are studied by various
scientific disciplines—such as information theory, electrical engineering,
mathematics, and computer science—for the purpose of designing efficient and
reliable data transmission methods. This typically involves the removal of
redundancy and the correction (or detection) of errors in the transmitted data.
Cryptographic coding :
Cryptography or cryptographic coding is the practice and study of techniques for
secure communication in the presence of third parties (called adversaries).[6] More
generally, it is about constructing and analyzing protocols that block adversaries;[7]
various aspects in information security such as data confidentiality, data integrity,
authentication, and non-repudiation[8] are central to modern cryptography. Modern
cryptography exists at the intersection of the disciplines of mathematics, computer
science, and electrical engineering. Applications of cryptography include ATM
cards, computer passwords, and electronic commerce.
Turbo code :
Turbo codes are a class of high-performance forward error-correction codes, decoded
iteratively by passing soft information between two constituent convolutional decoders.
Viterbi algorithm :
The Viterbi algorithm is a dynamic programming algorithm for finding the most
likely sequence of hidden states – called the Viterbi path – that results in a
sequence of observed events, especially in the context of Markov information
sources and hidden Markov models.
The algorithm has found universal application in decoding the convolutional codes
used in both CDMA and GSM digital cellular, dial-up modems, satellite, deep-
space communications, and 802.11 wireless LANs. It is now also commonly used
in speech recognition, speech synthesis, diarization,[1] keyword spotting,
computational linguistics, and bioinformatics. For example, in speech-to-text
(speech recognition), the acoustic signal is treated as the observed sequence of
events, and a string of text is considered to be the "hidden cause" of the acoustic
signal. The Viterbi algorithm finds the most likely string of text given the acoustic
signal.
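As a concrete illustration, a minimal Viterbi decoder for a toy hidden Markov model can be sketched as follows; the weather states and all probabilities are invented for illustration and are not taken from the text:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence (the Viterbi path)."""
    # V[t][s] = probability of the best path that ends in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor for state s at time t (dynamic programming step)
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p) for p in states
            )
            V[t][s] = prob
            back[t][s] = prev
    # Trace back from the most probable final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(("walk", "shop", "clean"), states, start_p, trans_p, emit_p))
# -> ['Sunny', 'Rainy', 'Rainy']
```

The same dynamic-programming recurrence, run over a code trellis instead of a weather model, is what a convolutional decoder performs.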
BCH code :
In coding theory, the BCH codes form a class of cyclic error-correcting codes that
are constructed using finite fields. BCH codes were invented in 1959 by French
mathematician Alexis Hocquenghem, and independently in 1960 by Raj Bose and
D. K. Ray-Chaudhuri.[1][2][3] The acronym BCH comprises the initials of these
inventors' names.
One of the key features of BCH codes is that during code design, there is a precise
control over the number of symbol errors correctable by the code. In particular, it
is possible to design binary BCH codes that can correct multiple bit errors. Another
advantage of BCH codes is the ease with which they can be decoded, namely, via
an algebraic method known as syndrome decoding. This simplifies the design of
the decoder for these codes, allowing them to be implemented with small, low-power
electronic hardware.
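Syndrome decoding can be illustrated with the Hamming(7,4) code, which is the simplest binary BCH code (t = 1). The parity-check matrix below is the standard construction, not something given in the text; for a single-bit error, the syndrome read as a binary number directly names the errored position:

```python
# Parity-check matrix H: column i is the binary expansion of i + 1,
# so the syndrome of a single-bit error equals the (1-based) error position.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(word):
    """Compute the 3-bit syndrome of a 7-bit word over GF(2)."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def correct(word):
    """Flip the bit position named by the syndrome (all-zero syndrome = no error)."""
    s = syndrome(word)
    pos = s[0] * 4 + s[1] * 2 + s[2]  # syndrome read as a 1-based bit position
    fixed = list(word)
    if pos:
        fixed[pos - 1] ^= 1
    return fixed

codeword = [0, 1, 1, 0, 0, 1, 1]   # a valid Hamming(7,4) codeword (syndrome 0)
received = [0, 1, 1, 0, 1, 1, 1]   # same codeword with bit 5 flipped
print(correct(received))            # -> [0, 1, 1, 0, 0, 1, 1]
```

For t > 1 BCH codes the syndromes live in a larger finite field and locating the errors requires solving algebraic equations, but the principle is the same: the syndrome depends only on the error pattern, not on the data.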
Cryptanalysis :
Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to loosen" or "to
untie") is the study of information systems with the aim of uncovering their hidden
aspects.[1] Cryptanalysis is used to breach cryptographic security
systems and gain access to the contents of encrypted messages, even if the
cryptographic key is unknown.
Soft-decision decoding :
Soft-decision decoders are often used in Viterbi decoders and turbo code decoders.
Reed-Solomon codes :
Reed-Solomon codes are block-based error-correcting codes with a wide range of
applications in digital communications and storage, where they are used to correct
transmission and storage errors.
The Reed-Solomon encoder takes a block of digital data and adds extra
"redundant" bits. Errors occur during transmission or storage for a number of
reasons (for example, noise, interference, or scratches on a CD). The Reed-
Solomon decoder processes each block and attempts to correct errors and recover
the original data. The number and type of errors that can be corrected depends on
the characteristics of the Reed-Solomon code.
Reed-Solomon codes are a subset of BCH codes and are linear block codes. A
Reed-Solomon code is specified as RS(n,k) with s-bit symbols.
This means that the encoder takes k data symbols of s bits each and adds parity
symbols to make an n symbol codeword. There are n-k parity symbols of s bits
each. A Reed-Solomon decoder can correct up to t symbols that contain errors in a
codeword, where 2t = n-k.
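The relationships above (n − k parity symbols, 2t = n − k) amount to simple arithmetic; a minimal sketch, with an illustrative function name:

```python
def rs_params(n, k):
    """Derive basic RS(n, k) quantities: parity symbol count and correctable errors t."""
    parity = n - k       # number of parity symbols appended by the encoder
    t = parity // 2      # the decoder corrects up to t symbol errors (2t = n - k)
    return parity, t

# RS(255, 223): 32 parity symbols, corrects up to 16 symbol errors
print(rs_params(255, 223))  # -> (32, 16)
```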
A typical Reed-Solomon codeword is systematic: the k data symbols are left
unchanged and the n-k parity symbols are appended. A common example is:
n = 255, k = 223, s = 8
2t = 32, t = 16
The decoder can correct any 16 symbol errors in the code word: i.e. errors in up to
16 bytes anywhere in the codeword can be automatically corrected.
Given a symbol size s, the maximum codeword length (n) for a Reed-Solomon
code is n = 2^s – 1.
For example, the maximum length of a code with 8-bit symbols (s=8) is 255 bytes.
Example: The (255,223) code described above can be shortened to (200,168). The
encoder takes a block of 168 data bytes, (conceptually) adds 55 zero bytes, creates
a (255,223) codeword and transmits only the 168 data bytes and 32 parity bytes.
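The length limit n = 2^s – 1 and the shortening example above can be checked with a few lines; the function names are illustrative only:

```python
def max_codeword_length(s):
    """Maximum Reed-Solomon codeword length in symbols: n = 2**s - 1."""
    return 2 ** s - 1

def shorten(n, k, pad):
    """Conceptually shorten RS(n, k) by omitting `pad` zero data symbols.

    The parity count n - k, and hence t, are unchanged.
    """
    return n - pad, k - pad

print(max_codeword_length(8))  # -> 255
print(shorten(255, 223, 55))   # -> (200, 168), still 32 parity symbols, t = 16
```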
One symbol error occurs when 1 bit in a symbol is wrong or when all the bits in a
symbol are wrong.
Example: RS(255,223) can correct 16 symbol errors. In the worst case, 16 single-bit
errors may occur, each in a separate symbol (byte), so that the decoder corrects only
16 bit errors. In the best case, 16 complete byte errors occur, so that the decoder
corrects 16 × 8 = 128 bit errors.
Reed-Solomon codes are particularly well suited to correcting burst errors (where a
series of bits in the codeword are received in error).
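The burst-error claim can be made concrete: a burst of b consecutive bit errors touches at most ⌈(b + s − 1)/s⌉ s-bit symbols in the worst alignment, so RS(255,223) with t = 16 corrects any single burst short enough to fit within 16 symbols. A sketch, with an illustrative helper name:

```python
import math

def max_symbols_hit(burst_bits, s=8):
    """Worst-case number of s-bit symbols touched by a run of consecutive bit errors."""
    return math.ceil((burst_bits + s - 1) / s)

# Longest single burst RS(255,223) (t = 16) is guaranteed to correct:
b = 1
while max_symbols_hit(b + 1) <= 16:
    b += 1
print(b)  # -> 121 consecutive bit errors still fit within 16 byte-symbols
```

This is why Reed-Solomon codes handle bursts so well: 121 clustered bit errors cost the decoder only 16 of its t symbol corrections, whereas 121 scattered single-bit errors would be far beyond its capability.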
Decoding :
1. If 2e + r ≤ 2t (e errors, r erasures) then the original transmitted code word will
always be recovered,
OTHERWISE
2. The decoder will detect that it cannot recover the original code word and
indicate this fact.
OR
3. The decoder will mis-decode and recover an incorrect code word without any
indication.
The probability of each of the three possibilities depends on the particular Reed-
Solomon code and on the number and distribution of errors.
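The three outcomes can be summarized in a small classifier, using the standard errors-and-erasures bound 2e + r ≤ 2t; the function name is illustrative:

```python
def rs_decode_outcome(e, r, t):
    """Classify errors-and-erasures decoding for an RS code with 2t parity symbols.

    e: symbol errors (positions unknown), r: erasures (positions known).
    Recovery is guaranteed when 2*e + r <= 2*t; beyond that bound the decoder
    either flags a failure or, with some probability, mis-decodes silently.
    """
    if 2 * e + r <= 2 * t:
        return "recovered"
    return "failure detected or silent mis-decode"

print(rs_decode_outcome(16, 0, 16))   # -> recovered (16 errors, no erasures)
print(rs_decode_outcome(10, 13, 16))  # -> failure detected or silent mis-decode
```

Note that an erasure costs half as much as an error: knowing *where* a symbol is unreliable doubles the decoder's budget for it.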
Coding Gain :