Coding Bounds Linear Codes

Let $n$ be the length of the code, let $M$ be the number of codewords, and let $d_{\min}$ be the minimal Hamming distance between two codewords. Large $d_{\min}$ is good for error detection and correction.

In this section we explore relationships between the three numbers $n$, $M$ and $d_{\min}$.

For fixed length $n$, if you have many more codewords (large $M$), you have to pack them closer together in bitspace, so $d_{\min}$ is likely to be smaller. The Hamming bound makes this precise by providing an upper bound on $M$ for given $n$ and $d_{\min}$.

For fixed $n$ and $d_{\min}$, you can start choosing codewords, always making sure that the next word is distance $d_{\min}$ or more from all preceding words. How many words can you put on your list before you get stuck? The Gilbert-Varshamov bound gives you a number $M$ of codewords that you are guaranteed to reach (if you choose wisely).

The Hamming bound is an upper bound on the number of codewords and on the code rate that are possible for a binary code of given length and minimum distance.

Theorem. The numbers $n$, $M$ and $d_{\min}$ satisfy the inequality
$$M \le \frac{2^n}{\binom{n}{0} + \binom{n}{1} + \cdots + \binom{n}{t}}$$
where
$$t = \begin{cases} (d_{\min}-2)/2 & \text{if } d_{\min} \text{ is even} \\ (d_{\min}-1)/2 & \text{if } d_{\min} \text{ is odd.} \end{cases}$$

Using the notation
$$\mathrm{vol}(n,t) = \binom{n}{0} + \binom{n}{1} + \cdots + \binom{n}{t}$$
we have
$$M \le \frac{2^n}{\mathrm{vol}(n,t)} = \frac{\mathrm{vol}(n,n)}{\mathrm{vol}(n,t)}.$$
Thus
$$\text{Code rate} \le 1 - \frac{\log_2 \mathrm{vol}(n,t)}{n}.$$
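These quantities are easy to compute. Here is a minimal Python sketch (function names are my own) that evaluates $\mathrm{vol}(n,t)$ and the Hamming bound on $M$:

```python
from math import comb

def vol(n, t):
    """Number of length-n bitstrings within Hamming distance t of a fixed string."""
    return sum(comb(n, i) for i in range(t + 1))

def hamming_bound(n, d_min):
    """Upper bound on the number of codewords M for length n and distance d_min."""
    t = (d_min - 2) // 2 if d_min % 2 == 0 else (d_min - 1) // 2
    return 2**n // vol(n, t)

# For n = 7 and d_min = 3 we get t = 1 and M <= 2^7 / (1 + 7) = 16,
# which the (7,4) Hamming code discussed below meets with equality.
print(hamming_bound(7, 3))  # 16
```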

Proof. For each codeword $w$, let $B_w$ be the set of all bitstrings $x$ such that $d(w, x) \le t$. You can call $B_w$ the Hamming ball of radius $t$ centered at $w$.

It is easy to see that no two balls $B_u$ and $B_v$ centered at different codewords $u$ and $v$ can overlap. If they shared a bitstring $w$, then the triangle inequality would show that $d_{\min}$ is not the minimal distance:
$$d(u, v) \le d(u, w) + d(w, v) \le 2t < d_{\min}.$$

Since there are $M$ codewords $w$, and $\mathrm{vol}(n, t)$ bitstrings in each ball $B_w$, and the balls do not overlap, all balls taken together contain $M \cdot \mathrm{vol}(n, t)$ different bitstrings, which must be among the $2^n$ length-$n$ bitstrings. QED

There is also a Hamming bound for nonbinary codes.

A code that achieves the Hamming bound is called a perfect code. A perfect code with minimum distance $d_{\min}$ is characterized by the property that every length-$n$ bitstring can be converted into one and only one codeword by making changes in $t$ or fewer bits, where $t$ is as in the sphere packing bound.

Perfect codes are extremely rare. The only nontrivial perfect codes are Hamming codes and Golay codes.

There is an existence theorem for codes that goes in the direction opposite that

of the Hamming bound.

Theorem. For each code length $N$ and minimal Hamming distance $d_{\min} \le N$, there is at least one code with $M$ codewords, where
$$M \ge \frac{2^N}{\binom{N}{0} + \binom{N}{1} + \cdots + \binom{N}{d_{\min}-1}} = \frac{\mathrm{vol}(N, N)}{\mathrm{vol}(N, d_{\min}-1)}$$
and
$$\text{Code rate} \ge 1 - \frac{\log_2 \mathrm{vol}(N, d_{\min}-1)}{N}.$$

Proof. We build a code with minimal distance $d_{\min}$ using a greedy algorithm. Choose any length-$N$ bitstring for our first codeword, $c_1$. Then choose codeword 2, $c_2$, to be any bitstring of distance at least $d_{\min}$ from $c_1$. Continue, at each step choosing a new codeword that is distance at least $d_{\min}$ from all preceding codewords. If you have chosen $M$ words and $M \cdot \mathrm{vol}(N, d_{\min}-1) < 2^N$, then the Hamming balls of radius $d_{\min}-1$ centered at the chosen words can not together cover all $2^N$ elements of bitspace, so we can pick any of the omitted strings as another codeword at distance at least $d_{\min}$ from all preceding ones. For the process to stop, we must have
$$M \ge \frac{2^N}{\mathrm{vol}(N, d_{\min}-1)}.$$
QED
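For small $N$ the greedy procedure in the proof can be run directly. A brute-force sketch (function names my own), scanning bitstrings in lexicographic order:

```python
from itertools import product
from math import comb

def hamming_dist(x, y):
    """Number of positions where the two bit tuples differ."""
    return sum(a != b for a, b in zip(x, y))

def greedy_code(N, d_min):
    """Scan all length-N bitstrings, keeping each one that is at distance
    at least d_min from every codeword chosen so far."""
    code = []
    for w in product((0, 1), repeat=N):
        if all(hamming_dist(w, c) >= d_min for c in code):
            code.append(w)
    return code

def gv_guarantee(N, d_min):
    """Floor of 2^N / vol(N, d_min - 1): a size the greedy process must reach."""
    return 2**N // sum(comb(N, i) for i in range(d_min))

# The greedy list always reaches at least the guaranteed size.
code = greedy_code(5, 3)
print(len(code), gv_guarantee(5, 3))
```

The greedy list often overshoots the guarantee, since the bound assumes the worst case in which every ball covers fresh bitstrings.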

Of course we especially want to know about linear codes. We will prove that there are linear codes that approach the Gilbert-Varshamov bound. Before doing that, we need a criterion for the minimal Hamming distance of a linear code.

Let $H$ be an $N \times k$ parity check matrix for an $(N, L)$ binary linear code, where $N = L + k$. Recall that multiplication by $H$ converts error sequences into syndromes. Its nullspace is the set of codewords:
$$H : \mathbb{F}_2^N \ (\text{space of error sequences}) \longrightarrow \mathbb{F}_2^k \ (\text{space of syndromes}).$$

Lemma. The minimal Hamming distance of a linear block code is the smallest integer $d \ge 1$ such that there are $d$ rows of $H$ that sum to the zero vector.

Proof. It suffices to prove that there are $d \ge 1$ rows of $H$ that add to zero if and only if there are two different codewords at Hamming distance $d$.

Suppose that $d$ rows of $H$ add to zero. Construct the error sequence $e \in \mathbb{F}_2^N$ with $d$ 1s in positions corresponding to the rows of the sum. Then $eH = 0$, which means that $e$ is a codeword. Since the code is linear, for any codeword $x$, $x + e$ is another codeword. Moreover, $\mathrm{dist}(x, x + e) = d$, so $d$ is the Hamming distance between two codewords.

Now suppose that $x$ and $y$ are two codewords such that $\mathrm{dist}(x, y) = d$. The error sequence $e = y - x$ is a codeword and contains exactly $d$ 1s. Hence $eH = 0$ is the sum of $d$ rows of $H$. QED
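The lemma gives a brute-force minimum distance check for small codes: search for the smallest nonempty set of rows of $H$ that sums to zero. A sketch (function name my own):

```python
from itertools import combinations

def min_distance(H_rows):
    """Smallest d >= 1 such that some d rows of H sum to the zero vector mod 2."""
    for d in range(1, len(H_rows) + 1):
        for subset in combinations(H_rows, d):
            if all(sum(col) % 2 == 0 for col in zip(*subset)):
                return d
    return None  # only possible for the code {0}

# Rows: all seven nonzero vectors of F_2^3, a parity check matrix for
# the (7,4) Hamming code constructed later in these notes.
H = [(1,1,0), (1,0,1), (0,1,1), (1,1,1), (1,0,0), (0,1,0), (0,0,1)]
print(min_distance(H))  # 3: e.g. (1,1,0) + (1,0,1) + (0,1,1) = 0 mod 2
```

The search is exponential in the number of rows, which foreshadows the NP-hardness remark below: no efficient general method is known.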

Theorem. (Gilbert-Varshamov for linear codes) For every $N$, $L$ and $d$ with $1 \le L < N$ and $2 \le d \le N$ such that
$$M = 2^L < \frac{2^N}{\sum_{i=0}^{d-2} \binom{N}{i}},$$
there is an $(N, L)$ binary linear code with minimal Hamming distance at least $d$.

Proof. Let $k = N - L$, the syndrome length. We are going to build up an $N \times k$ parity check matrix $H$ such that no $d-1$ or fewer rows add to zero. Its nullspace will be the code. We will build up $H$ row by row.

Start by choosing $r_1$ through $r_k$ to be the rows of the $k \times k$ identity matrix. If we have chosen rows $r_1, r_2, \ldots, r_i$ for $1 \le i < N$ so that no $d-1$ or fewer of them add to zero, can we succeed in choosing $r_{i+1}$? The requirement is that no $d-1$ or fewer of the first $i+1$ rows add to zero. The sums not involving $r_{i+1}$ are already taken care of, so we just need to choose $r_{i+1} \in \mathbb{F}_2^k$ such that it does not equal the sum of any $d-2$ or fewer of the first $i$ rows. How many such sums are there? In a worst case scenario, all these sums are different. In that case
$$\text{Number of sums} = \binom{i}{0} + \binom{i}{1} + \cdots + \binom{i}{d-2} \le \binom{N}{0} + \binom{N}{1} + \cdots + \binom{N}{d-2} < 2^{N-L} = 2^k,$$
where the strict inequality comes from the hypothesis of the theorem. Thus there does exist at least one vector in $\mathbb{F}_2^k$ that is not such a sum. We choose it to be $r_{i+1}$, the next row in $H$. We can continue choosing additional rows without problem till we have constructed a full $N \times k$ matrix $H$. QED
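This row-by-row construction also runs directly for small parameters. A sketch (function name my own): start from the $k \times k$ identity and, at each step, forbid every sum of $d-2$ or fewer existing rows (the empty sum contributes the zero vector, so the next row is automatically nonzero):

```python
from itertools import combinations, product

def build_H(N, k, d):
    """Greedily choose N rows in F_2^k so that no d-1 or fewer rows sum to zero."""
    rows = [tuple(int(i == j) for j in range(k)) for i in range(k)]  # k x k identity
    while len(rows) < N:
        forbidden = {(0,) * k}  # empty sum: the next row must be nonzero
        for m in range(1, d - 1):
            for subset in combinations(rows, m):
                forbidden.add(tuple(sum(col) % 2 for col in zip(*subset)))
        new_row = next((r for r in product((0, 1), repeat=k) if r not in forbidden), None)
        if new_row is None:
            raise ValueError("ran out of rows")
        rows.append(new_row)
    return rows

# For d = 3 only the zero vector and repeats are forbidden, so with k = 3
# and N = 7 we collect all seven nonzero vectors of F_2^3: a (7,4) Hamming code.
rows = build_H(7, 3, 3)
print(len(rows), len(set(rows)))  # 7 7
```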

It's an open problem to find a deterministic polynomial time algorithm to produce binary linear codes approaching the Gilbert-Varshamov bound as $N$ approaches infinity. Part of the problem is that, given a linear code, it is NP-hard to compute or even approximate its minimum distance.

Hamming codes achieve the Hamming bound with $d_{\min} = 3$. Here is a quick way to construct a systematic Hamming code with $k \ge 2$ parity bits.

First construct the $N \times k$ parity check matrix $H$, the matrix whose rows contain all the syndromes of error sequences containing a single 1. We choose for the rows all $2^k - 1$ nonzero elements of $\mathbb{F}_2^k$, arranging for the $k \times k$ identity matrix to be at the bottom. The dimensions of $H$ determine that the code length is
$$N = 2^k - 1.$$
From $H$ it is easy to write down a generator matrix $G$.

For example, for $k = 3$ we could choose
$$H = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} P \\ I_3 \end{pmatrix}$$
where
$$P = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}$$
is the $L \times k$ matrix containing the parity bits of the basic data words. We see that
$$L = 2^k - 1 - k.$$

Next we construct the $L \times N$ generator matrix $G$ of the form
$$G = \begin{pmatrix} I & P \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{pmatrix}.$$

No row of $H$ is zero, and no two rows of $H$ add to zero (because no two rows are the same). Any two rows of $H$ must add to a third row of $H$ (because every nonzero $k$-bit sequence is a row of $H$), yielding three rows that add to zero. Therefore $d_{\min} = 3$ for this code.

Moreover, the code is perfect, which we can verify in two ways.

First, the code achieves the Hamming bound. With $N = 2^k - 1$, $L = N - k$, and $t = (d_{\min}-1)/2 = 1$ we have
$$M \left( \binom{N}{0} + \binom{N}{1} \right) = 2^{N-k}(1 + N) = 2^{N-k}(1 + 2^k - 1) = 2^N.$$

Second, every bitstring $y$ in $\mathbb{F}_2^N$ is within Hamming distance 1 of one and only one codeword. If $y$ is not a codeword, then the syndrome $s = yH$ is the syndrome of an error sequence $e$ containing a single 1 (because every nonzero syndrome has this property). Hence $y + e$ is a codeword. Thus we can convert $y$ to a codeword by changing just 1 bit, so $y$ was at Hamming distance 1 from a codeword. But $y$ can not be at Hamming distance 1 from two different codewords because $d_{\min} = 3$.

While in principle you can create a perfectly fine Hamming code by putting the syndromes into the rows of $H$ in any order whatsoever, there is a very clever way to do it that enables you to use the syndrome to compute which bit a single error belongs to. We secretly did this in the example above. Let's see how it works.

Number the information columns of $G$ (the first $L$ columns) in increasing order using the integers from 1 to $2^k - 1$ that are not powers of 2, and number the parity columns of $G$ (the last $k$ columns) in increasing order using the powers of 2 from 1 to $2^{k-1}$. The parity bits for a data word with 1 in column $i$ and 0s elsewhere are the binary representation of $i$ (digits in reverse order).

In our case, suppose $w_i$ is the data word with 1 in column $i$ and zeros elsewhere. Express $i$ in the form $i = a \cdot 1 + b \cdot 2 + c \cdot 4$. The codeword for $w_i$ is then $w_i$ followed by the parity bits $abc$. We have
$$\begin{aligned} 3 &= 1 \cdot 1 + 1 \cdot 2 + 0 \cdot 4 \\ 5 &= 1 \cdot 1 + 0 \cdot 2 + 1 \cdot 4 \\ 6 &= 0 \cdot 1 + 1 \cdot 2 + 1 \cdot 4 \\ 7 &= 1 \cdot 1 + 1 \cdot 2 + 1 \cdot 4 \end{aligned}$$
so, labeling the columns 3, 5, 6, 7, 1, 2, 4 from left to right,
$$G = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{pmatrix}.$$

If you receive $y$ through the channel, compute its syndrome $s = yH = abc$. If $s \ne 0$, compute $i = a \cdot 1 + b \cdot 2 + c \cdot 4$, and convert $y$ to a codeword by changing the single bit in column $i$.

For example, if we receive $y = 1100110$, the syndrome is $s = yH = 101$, so $i = 1 + 4 = 5$, and we change $y$ to the codeword $x = 1000110$.
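The decoding recipe is easy to sketch in Python (function names my own); the rows of $H$ and the column labels 3, 5, 6, 7, 1, 2, 4 are taken from the worked example:

```python
# Rows of H and the column labels from the k = 3 example.
H = [(1,1,0), (1,0,1), (0,1,1), (1,1,1), (1,0,0), (0,1,0), (0,0,1)]
LABELS = [3, 5, 6, 7, 1, 2, 4]

def syndrome(y):
    """s = yH over F_2: XOR of the rows of H selected by the 1 bits of y."""
    s = (0, 0, 0)
    for bit, row in zip(y, H):
        if bit:
            s = tuple(a ^ b for a, b in zip(s, row))
    return s

def correct(y):
    """If s = abc is nonzero, flip the bit in the column labeled i = a*1 + b*2 + c*4."""
    a, b, c = syndrome(y)
    i = a * 1 + b * 2 + c * 4
    y = list(y)
    if i != 0:
        y[LABELS.index(i)] ^= 1
    return y

y = [1, 1, 0, 0, 1, 1, 0]   # the received word from the example
print(syndrome(y))  # (1, 0, 1), so i = 1 + 4 = 5
print(correct(y))   # [1, 0, 0, 0, 1, 1, 0], the codeword from the example
```

Note that `correct` assumes at most one bit error; with $d_{\min} = 3$, two or more errors send $y$ into the ball of a wrong codeword.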
