
fewer check digits, these codes operate at a higher efficiency.

3.2 LINEAR BLOCK CODES


Let a code word consist of n digits C1, C2, ..., Cn, and a data word consist of k digits d1, d2, ..., dk. Since the code word and the data word are an n-tuple and a k-tuple respectively, they are n- and k-dimensional vectors. Using row matrices to represent these words, we have
C = (C1, C2, ..., Cn);  d = (d1, d2, ..., dk)
For linear block codes, all n digits of C are formed by linear combinations (modulo-2 additions, denoted ⊕) of the k data digits.
Note: 1 ⊕ 1 = 0 ⊕ 0 = 0
      0 ⊕ 1 = 1 ⊕ 0 = 1
The special case where C1 = d1, C2 = d2, ..., Ck = dk, and the remaining digits Ck+1 to Cn are linear combinations of d1, d2, ..., dk, is known as a systematic code. In a systematic code, the first k digits of a code word are the data digits and the last m = n - k digits are the parity-check digits, which are formed by linear combinations of the data digits d1, d2, ..., dk.
Hence,
C1 = d1
C2 = d2
.
.
Ck = dk
Ck+1 = h11 d1 ⊕ h12 d2 ⊕ ... ⊕ h1k dk
Ck+2 = h21 d1 ⊕ h22 d2 ⊕ ... ⊕ h2k dk
.
.
Cn = hm1 d1 ⊕ hm2 d2 ⊕ ... ⊕ hmk dk        (3.3a)
or
C = dG        (3.3b)
where

G = [ Ik  P ]        (3.4)

Here Ik is the k x k identity matrix

1 0 ... 0
0 1 ... 0
.       .
0 0 ... 1

and P is the k x m matrix

h11 h21 ... hm1
h12 h22 ... hm2
 .   .       .
h1k h2k ... hmk

G is a k x n matrix known as the generator matrix; it can be partitioned into the k x k identity matrix Ik and the k x m matrix P. All the elements of P are either 0 or 1. The code word can then be expressed as
C = dG
  = d[Ik, P]
  = [d, dP]
  = [d, Cp]        (3.5)
where
Cp = dP        (3.6)
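The systematic encoding of equs. (3.5) and (3.6) amounts to copying the k data digits and appending the m parity digits dP, all arithmetic modulo 2. A minimal sketch in Python (the function name and argument layout are illustrative, not from the text):

```python
def encode(d, P):
    """Systematic linear block encoding: C = [d, dP], arithmetic modulo 2.

    d : list of k data bits
    P : k x m matrix (list of rows) of parity coefficients h_ji
    """
    k, m = len(P), len(P[0])
    # Parity digit j is the modulo-2 sum of d[i] * P[i][j] over all i.
    parity = [sum(d[i] * P[i][j] for i in range(k)) % 2 for j in range(m)]
    return list(d) + parity
```

With the P of Example 1 below, encode([1, 0, 0], [[1, 0, 1], [0, 1, 1], [1, 1, 0]]) returns [1, 0, 0, 1, 0, 1], i.e. the code word 100101.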
Example 1: For a (6,3) code, the generator matrix G is

G = [ Ik  P ] =
1 0 0 | 1 0 1
0 1 0 | 0 1 1
0 0 1 | 1 1 0

For all eight possible data words, find the corresponding code words, and verify that the code is a single-error-correcting code.
Solution
Applying C = dG, Table 3.1 below shows the eight data words and the corresponding code words.
Data word d    Code word C
111            111000
110            110110
101            101011
100            100101
011            011101
010            010011
001            001110
000            000000
Table 3.1
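Table 3.1 and the single-error-correcting claim can both be checked by brute force: encode all eight data words and take the minimum Hamming distance over all pairs. A sketch, assuming the P of Example 1:

```python
from itertools import product

P = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]  # parity part of G from Example 1

def encode(d):
    # Systematic encoding C = [d, dP], modulo 2.
    parity = [sum(d[i] * P[i][j] for i in range(3)) % 2 for j in range(3)]
    return list(d) + parity

codewords = [encode(d) for d in product([0, 1], repeat=3)]

# Minimum Hamming distance over all distinct pairs of code words.
dmin = min(sum(a != b for a, b in zip(c1, c2))
           for i, c1 in enumerate(codewords)
           for c2 in codewords[i + 1:])
print(dmin)  # -> 3, so the code corrects (3 - 1) // 2 = 1 error
```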
The distance between any two code words is at least 3. Hence, the code can correct at least one error. Fig 3.1 below shows a possible encoder for this code, using a three-digit shift register and three modulo-2 adders.

[Fig 3.1: the data digits d1, d2, d3 are held in a shift register; three modulo-2 adders form the parity digits, and a commutator reads out C1, C2, ..., C6 in turn.]
Fig 3.1 Encoder for linear block codes


Decoding:
From equ. (3.6) and the fact that the modulo-2 sum of any sequence with itself is zero, we have

[d, Cp] [ P  ]  =  dP ⊕ Cp  =  0        (3.7)
        [ Im ]

Hence,
C H^T = 0        (3.8)
where
H^T = [ P  ]        (3.8b)
      [ Im ]
and Im is the identity matrix of order m x m (m = n - k). The transpose
H = [ P^T  Im ]        (3.8c)
is known as the parity-check matrix.
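Equ. (3.8) can be verified numerically for the (6,3) code of Example 1: stack P on top of Im to get H^T, and confirm that a valid code word gives the zero vector while a corrupted one does not. A sketch (the helper name syndrome is illustrative):

```python
P = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]   # parity part of G (Example 1)
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
HT = P + I3                              # H^T = [P; Im], a 6 x 3 matrix

def syndrome(word):
    # s = word * H^T, modulo 2 (a 1 x m row vector).
    return [sum(word[i] * HT[i][j] for i in range(6)) % 2 for j in range(3)]

# Every valid code word satisfies C H^T = 0.
assert syndrome([1, 0, 0, 1, 0, 1]) == [0, 0, 0]
# A corrupted word does not.
assert syndrome([1, 0, 1, 1, 0, 1]) != [0, 0, 0]
```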
In decoding, every code word must satisfy equ. (3.8). Consider the received word r. Because of possible errors caused by channel noise, r differs from the transmitted code word C:
r = C ⊕ e
where
e = error word (or error vector)
Example 2: The data word 100 in Example 1 is transmitted as the code word 100101, and the channel noise causes a detection error in the third digit. Find the error vector.
Solution
r = 101101
C = 100101
e = r ⊕ C = 001000
Thus, an element 1 in e indicates an error in the corresponding position, and a 0 indicates no error. The Hamming distance between r and C is simply the number of 1s in e, which is one in this case.
Suppose the transmitted code word is Ci and the channel noise causes an error ei; then
r = Ci ⊕ ei
where r is the received word.
If ei = 000000, which implies no error, then r H^T = 0. But, as a result of possible channel errors, r H^T is in general a nonzero row vector s, which is called the syndrome:
s = r H^T        (3.9)
  = (Ci ⊕ ei) H^T
  = ei H^T        (3.9b)
r can also be expressed in terms of code words other than Ci. Hence,
r = Cj ⊕ ej,   j ≠ i
This implies that
s = (Cj ⊕ ej) H^T = ej H^T
Since there are 2^k possible code words,
s = e H^T
is satisfied by 2^k error vectors.
Example 3: If the data word d = 100 is transmitted as the code word 100101 in Example 1, and a detection error is caused in the third digit, then the received word is r = 101101. In this case, we have
C = 100101
e = 001000
But the same word could have been received if C = 101011 and e = 000110, or if C = 010011 and e = 111110, etc. From equ. (3.9b), since there are eight possible error vectors (2^k error vectors), the problem is which one to choose. A reasonable criterion is the maximum-likelihood rule: if we receive r, we decide in favour of that C for which r is most likely to have been received. Thus, we decide Ci was transmitted if
P(r|Ci) > P(r|Ck)   for all k ≠ i
For a BSC, suppose the Hamming distance between r and Ci is d, i.e. the channel noise causes errors in d digits. Then, if Pe is the digit error probability of the BSC,

P(r|Ci) = Pe^d (1 - Pe)^(n-d) = (1 - Pe)^n [Pe / (1 - Pe)]^d

If Pe < 0.5, then P(r|Ci) is a monotonically decreasing function of d, because Pe/(1 - Pe) < 1. Therefore, to maximize P(r|Ci), the Ci closest to r must be chosen, i.e. the error vector with the smallest number of 1s must be chosen. A vector with the smallest number of 1s is called the minimum-weight vector.
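The monotonicity argument is easy to confirm numerically: each extra disagreeing digit multiplies P(r|Ci) by Pe/(1 - Pe) < 1. A quick sketch (the values Pe = 0.1 and n = 6 are arbitrary illustration choices):

```python
Pe, n = 0.1, 6  # illustrative BSC digit error probability and code length

def likelihood(d):
    # P(r|Ci) for a BSC when r and Ci differ in d digits.
    return (Pe ** d) * ((1 - Pe) ** (n - d))

probs = [likelihood(d) for d in range(n + 1)]
# Strictly decreasing in d, so the code word closest to r is the most likely.
assert all(a > b for a, b in zip(probs, probs[1:]))
```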
Example 4: A (6,3) code is generated according to the generator matrix in Example 1. The receiver receives r = 100011. Determine the corresponding data word if the channel is a BSC and the maximum-likelihood decision is used.
Solution
Applying equ. (3.9), that is, s = r H^T:

s = [1 0 0 0 1 1]
    1 0 1
    0 1 1
    1 1 0
    1 0 0
    0 1 0
    0 0 1

  = [1 1 0]

Since for modulo-2 operations subtraction is the same as addition, the correct transmitted code word C is given by
C = r ⊕ e
where e satisfies
s = [1 1 0] = e H^T = [e1 e2 e3 e4 e5 e6]
                      1 0 1
                      0 1 1
                      1 1 0
                      1 0 0
                      0 1 0
                      0 0 1

e = 001000 satisfies this equation; so do e = 000110, 010101, 011011, 110000, 101101, 111110, and 100011. Hence, the suitable choice, the minimum-weight e, is 001000. Thus
C = 100011 ⊕ 001000 = 101011
and, from Table 3.1, the corresponding data word is d = 101.
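Example 4 can be reproduced end to end: compute the syndrome, find the minimum-weight e with e H^T = s, and add it back. A sketch using brute force over all 64 error vectors (fine at this size; the helper names are illustrative):

```python
from itertools import product

P = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
HT = P + [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # H^T = [P; I3]

def syndrome(word):
    return tuple(sum(word[i] * HT[i][j] for i in range(6)) % 2 for j in range(3))

def decode(r):
    # Maximum-likelihood decision (BSC, Pe < 0.5): choose the
    # minimum-weight e whose syndrome matches that of r.
    s = syndrome(r)
    e = min((e for e in product([0, 1], repeat=6) if syndrome(e) == s), key=sum)
    c = [ri ^ ei for ri, ei in zip(r, e)]
    return c[:3], c  # systematic code: the data word is the first k = 3 digits

data, codeword = decode([1, 0, 0, 0, 1, 1])
print(data, codeword)  # -> [1, 0, 1] [1, 0, 1, 0, 1, 1]
```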

3.5 Interlaced Codes for Burst and Random Error Correction


Random-error-correcting codes are not efficient for burst error correction, and vice versa. Interlaced codes are among the most efficient and effective methods for correcting random and burst errors at the same time.
In the case of an (n, k) code, if we interlace λ code words we generate a (λn, λk) interlaced code. Hence, instead of transmitting code words one by one, it is better to group λ code words and interlace them.
Let us consider the case of λ = 3 and a two-error-correcting (15, 8) code, which implies that each code word has 15 digits, and let us group the code words for transmission into threes. Suppose the first three code words to be transmitted are X = (x1, x2, ..., x15), Y = (y1, y2, ..., y15) and Z = (z1, z2, ..., z15). This is illustrated graphically in fig 3.5, where the code words are arranged in rows:

x1  x2  x3  ...  x14  x15
y1  y2  y3  ...  y14  y15
z1  z2  z3  ...  z14  z15

Fig 3.5 Interlacing of three code words

Normally, we transmit one row after another, but with interlacing we transmit the columns in sequence: x1, y1, z1, x2, y2, z2, and so on. When all 15 (= n) columns are transmitted, we repeat the procedure for the next group of code words.
For error correction in this case, we observe that the decoder will first remove the interlacing and regroup the received digits as x1, x2, ..., x15; y1, y2, ..., y15; and z1, z2, ..., z15. If the dotted digits in fig. 3.5 were in error then, since this is a two-error-correcting code, two or fewer errors in each row will surely be corrected. Thus, all the errors are correctable. Notice that this covers two random, or independent, errors and one burst of length 4 in the 45 digits transmitted. Generally, if the original (n, k) code is t-error correcting, the interlaced code can correct any combination of t bursts of length λ or less.
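The row-in, column-out operation is essentially a matrix transpose, which makes the burst-spreading effect easy to demonstrate. A sketch with λ = 3 code words of 15 digits, using symbolic digit labels purely for illustration:

```python
LAMBDA, N = 3, 15  # interleaving degree and code word length

# Three code words, arranged in rows as in fig 3.5.
rows = [[f"x{i}" for i in range(1, N + 1)],
        [f"y{i}" for i in range(1, N + 1)],
        [f"z{i}" for i in range(1, N + 1)]]

# Interlace: transmit column by column (x1, y1, z1, x2, y2, z2, ...).
transmitted = [rows[r][c] for c in range(N) for r in range(LAMBDA)]

# A burst of length 4 hits consecutive transmitted digits...
burst = transmitted[6:10]        # x3, y3, z3, x4
# ...but after de-interlacing it contributes at most 2 errors per row,
# within the two-error-correcting capability of each code word.
per_row = [sum(d in burst for d in row) for row in rows]
assert max(per_row) <= 2
```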

3.6 Convolutional Codes


Convolutional codes can be devised for correcting random errors, burst errors, or both, and encoding is easily implemented by shift registers. In the convolutional encoder shown in fig 3.6, k bits (one input frame) are shifted in each time and, concurrently, n bits (one output frame) are shifted out, where n > k. Thus, every k-bit input frame produces an n-bit output frame. There is redundancy in the output, since n > k. There is also memory in the coder, since an output frame depends on the previous K input frames, where K > 1. The code rate is R = k/n, which is 3/4 in this case. The constraint length K is the number of input frames held in the kK-bit shift register. Data from the kK stages of the shift register are added (modulo 2) and used to set the bits in the n-stage output register.

[Fig 3.6: k-bit input frames enter a kK-bit shift register (holding K = constraint length input frames); logic forms modulo-2 sums of the register stages and loads an n-bit output shift register, from which the coded output data are shifted out n bits at a time.]
Fig 3.6 Convolutional encoding (k = 3, n = 4, K = 5, and R = 3/4)


Let us consider the convolutional coder shown in fig 3.7, where k = 1, n = 2, K = 3, and a commutator with two inputs performs the function of a two-stage output shift register. The convolutional code is generated by inputting a bit of data and then giving the commutator a complete revolution. This process is repeated for successive input bits to produce the convolutionally encoded output. In this case, each k = 1 input bit produces n = 2 output bits; thus, the code rate R is 1/2.

[Fig 3.7: the data input feeds a three-stage shift register S1, S2, S3; two modulo-2 adders form the outputs v1 and v2, which the commutator samples in turn to give the coded output sequence.]

Let us illustrate this with the input digits 11010. Firstly, all the stages of the register are clear, i.e. they are in the 0 state. When the first data digit 1 enters the register, stage S1 holds 1 and all the other stages (S2 and S3) are unchanged, i.e. in the 0 state. The two modulo-2 adders give v1 = 1 and v2 = 1; the commutator samples this output, giving the coder output 11. When the 2nd message bit 1 enters the register, it enters stage S1 and the previous 1 in S1 is shifted to S2, so S2 = 1; the coder output is 01. Likewise, when the 3rd message digit 0 enters the register, S1 = 0, S2 = 1 and S3 = 1, and the coder output is 01. The process continues until the last data digit enters stage S1.
In general, the convolutional coder operates on a continuous basis.
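The worked outputs above (11, 01, 01, ...) can be reproduced in code. The text does not spell out the adder connections, so the taps below (v1 = s1 ⊕ s2 ⊕ s3, v2 = s1 ⊕ s3) are an assumption chosen to match the stated outputs:

```python
def conv_encode(bits):
    """Rate-1/2, constraint-length-3 convolutional encoder (sketch).

    Tap assumption: v1 = s1 ^ s2 ^ s3, v2 = s1 ^ s3, consistent with the
    worked coder outputs 11, 01, 01 for the first three input digits 1, 1, 0.
    """
    s1 = s2 = s3 = 0  # register initially cleared
    out = []
    for b in bits:
        s1, s2, s3 = b, s1, s2          # shift the new bit into S1
        v1 = s1 ^ s2 ^ s3               # first modulo-2 adder
        v2 = s1 ^ s3                    # second modulo-2 adder
        out.append((v1, v2))            # commutator samples v1, then v2
    return out

print(conv_encode([1, 1, 0, 1, 0]))
# -> [(1, 1), (0, 1), (0, 1), (0, 0), (1, 0)]
```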



4.0 Noise in Amplitude Modulation Systems
Introduction: We now look at the behaviour of analog communication systems in the presence of noise. A signal of power ST is transmitted over a channel, as shown in fig. 4.0. The transmitted signal is corrupted by channel noise during transmission, and the channel attenuates the signal. Hence, at the receiver input, the signal is mixed with noise, and the signal and noise powers at this point are Si and Ni respectively. The receiver processes the signal to yield the desired signal plus noise. The signal and noise powers at the receiver output are So and No respectively. Thus, in analog communication systems, the quality of the received signal is determined by the ratio So/No, which is the output SNR. This ratio can be increased by increasing the transmitted power ST. Other factors that limit the maximum value of ST are transmitter cost, channel constraints, etc. It is, however, more convenient to deal with the received power Si rather than the transmitted power ST.
4.1 Baseband Systems
A baseband system is one in which the signal is transmitted directly, without any modulation. This mode of communication is suitable over a pair of wires, optical fibre, or coaxial cable, so it is mainly used in short-haul links. Generally, baseband systems serve as a basis against which other systems may be compared. The transmitter and the receiver are ideal baseband filters, as shown in fig 4.1. There are two low-pass filters: the first, Hp(w), at the transmitter, limits the input signal spectrum to a given bandwidth, while the other, Hd(w), at the receiver, eliminates out-of-band noise and channel interference. These filters can also serve to optimize the signal-to-noise ratio (SNR) at the receiver.

Fig. 4.1 A baseband system


Assume the baseband signal m(t) to be a zero-mean, wide-sense stationary random process band-limited to B Hz, and consider an ideal low-pass (baseband) filter of bandwidth B at the transmitter and the receiver. If the channel is assumed to be distortionless, then

So = Si        (4.0)

and

No = 2 ∫0^B Sn(f) df        (4.1)

where Sn(f) is the power spectral density of the channel noise.
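As a numerical check on equ. (4.1): for white (flat) noise with power spectral density Sn(f) = η/2 across the band, the integral gives No = ηB. A sketch with arbitrary illustrative values of η and B:

```python
eta, B = 1e-8, 4000.0  # illustrative PSD level (W/Hz) and bandwidth (Hz)

def Sn(f):
    return eta / 2  # white (flat) noise PSD within the band

# No = 2 * integral of Sn(f) from 0 to B, here a simple Riemann sum.
steps = 100000
df = B / steps
No = 2 * sum(Sn(i * df) * df for i in range(steps))
assert abs(No - eta * B) < 1e-12  # matches the closed form No = eta * B
```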

You might also like