
PROGRAM NO. 1

AIM: Write a program for determination of various entropies and mutual information of a given
channel

SOFTWARE USED: MATLAB R2015a

THEORY: Information entropy is the average rate at which information is produced by a
stochastic source of data.

The information (self-information) associated with each possible data value is the negative
logarithm of the probability of that value, and the entropy is the expected value of this quantity:

H(X) = -∑ P(i) log2 P(i)


Thus, when the data source has a lower-probability value (i.e., when a low-
probability event occurs), the event carries more "information" than when the source data has a
higher-probability value. The amount of information conveyed by each event defined in this way
becomes a random variable whose expected value is the information entropy.
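As a quick numerical check of the formula (a minimal vectorized sketch; the probability vector here is an assumed example, not taken from the program's output):

p = [0.5 0.25 0.25];      % example probability vector (assumed)
H = -sum(p .* log2(p))    % gives H = 1.5 bits/symbol for this source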

CODE:
clc;
clear all;
close all;

p=input('enter the probabilities');

h=0; % entropy accumulator (avoids shadowing the built-in function sum)

for i=1:length(p)
I(i)=log2(1/p(i)); % information content of symbol i, in bits
h=h+p(i)*I(i);     % accumulate the entropy
end

disp('I(x)=');
disp(I);

disp('H(x)=');
disp(h);

OUTPUT:

Fig 1.1 Information and Entropy

PROGRAM NO. 2

AIM: Write a program for generation and evaluation of Lempel Ziv coding and decoding using
C/MATLAB

SOFTWARE USED: MATLAB R2015a

THEORY: Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm.


The algorithm is simple to implement and has the potential for very high throughput in hardware
implementations. It is the algorithm of the widely used Unix file compression utility compress
and is used in the GIF image format.
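For example (a small illustrative run using this program's conventions; the input string and dictionary are assumptions, not taken from the recorded output): with the input string 'ABAB$' and the initial dictionary 'AB', the encoder parses A, B, AB, adds the new entries AB and BA to the dictionary, and transmits the index sequence 1 2 3 0, where 0 marks the end of the data. The decoder rebuilds the same dictionary from the received indices and recovers ABAB.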

CODE:
clc;
clear all;
close all;
datain=input('enter the string in single quotes with symbol $ as end of string =');%input data
lda=length(datain);%length of datainput
dictionary=input('enter the dictionary in single quotes (symbols used in the string must be included) =');%input dictionary

ldi=length(dictionary);%length of dictionary
j=1;%used for generating code
n=0;

%loop used to convert the dictionary string into a cell array
for i=1:ldi
dictnew(i)={dictionary(i)};
end

p=datain(1);%first symbol
s=p;%current symbol
k=1; %used for generating transmitting output code
i=1;%for loop
m=0;

while datain(i)~= '$'%end of symbol


c=datain(i+1);

if c~='$'
comb=strcat(s,c);%current string followed by the next symbol (for inspection only)

if strcmp(dictnew,strcat(s,c))==0
dictnew(j+ldi)={strcat(s,c)};
%loop and check used for generating the transmitted
%code array

check=ismember(dictnew,s);

for l=1:length(check)

if check(l)==1
tx_trans(k)=l;
k=k+1;
break;
end
end

s=c;
j=j+1;
i=i+1;
m=m+1;

else

s=strcat(s,c);
i=i+1;
end

else
%send the code for the last parsed string and append the end-of-file marker (0)
check=ismember(dictnew,s);

for l=1:length(check)

if check(l)==1
tx_trans(k)=l;
k=k+1;
tx_trans(k)=0;
end
end

break;

end
end

display('new dictionary=')
display(dictnew);
display(tx_trans);

%decoding

dicgen=dictionary;
ldgen=length(dicgen);
ldtx=length(tx_trans);
index=length(dictionary);
string='';

%convert the dictionary char array back to a cell array for the decoder
dicgen=cellstr(dictionary);

for i=1:ldi
dicgen(i)={dictionary(i)};
end

g=1;
entry=char(dictionary(tx_trans(1)));%first symbol
g=g+1;% next symbol

while tx_trans(g)~=0 %for EOF


s=entry;
entry=char(dicgen(tx_trans(g)));
string=strcat(string,s); %detected string
index=index+1; % next index
dicgen(index) = {strcat(s,entry(1))};%upgrade dictionary
g=g+1; % next index

end

string=strcat(string,entry)
disp(dicgen);
display('received original string=');
disp(string);

OUTPUT:

Fig: 2.1 Inputs and processing

Fig 2.2 Encoding

Fig 2.3 Decoding

PROGRAM NO. 3

AIM: Write a program for generation and evaluation of Huffman coding and decoding using
C/MATLAB

SOFTWARE USED: MATLAB R2015a

THEORY: Huffman coding is a lossless data compression algorithm. The idea is to assign
variable-length codes to input characters; the lengths of the assigned codes are based on the
frequencies of the corresponding characters. The most frequent character gets the shortest code and
the least frequent character gets the longest code.
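For example (an illustrative source, not taken from the recorded output): for symbol probabilities [0.4 0.3 0.2 0.1], one valid Huffman code is {0, 10, 110, 111}. The average code length is 0.4(1) + 0.3(2) + 0.2(3) + 0.1(3) = 1.9 bits/symbol, the source entropy is about 1.846 bits/symbol, and the efficiency is therefore about 1.846/1.9 ≈ 0.97. Note that huffmandict may return a different, but equally optimal, set of codewords.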

CODE:
clc;
clear all;
close all;

p=input('Enter the probabilities:');


n=length(p);
symbols=[1:n];
[dict,avglen]=huffmandict(symbols,p);
temp=dict;
for i=1:length(temp)
temp{i,2}=num2str(temp{i,2}); % convert each codeword vector to a string for display
end
disp('The huffman code dict:');
disp(temp)
fprintf('Enter the symbols between 1 and %d in [ ]',n);
sym=input(':')
encod=huffmanenco(sym,dict);
disp('The encoded output:');
disp(encod);
bits=input('Enter the bit stream in [ ]:');
decod=huffmandeco(bits,dict);
disp('The symbols are:');
disp(decod);

H=0;
for k=1:n
H=H+(p(k)*log2(1/p(k))); % source entropy in bits/symbol
end
fprintf(1,'Entropy is %f bits',H);
N=H/avglen;
fprintf('\n Efficiency is:%f',N);

OUTPUT:

Fig 3.1 Huffman Encoding and Decoding

PROGRAM NO. 4

AIM: Write a Program for coding & decoding of Linear block codes.

SOFTWARE USED: MATLAB R2015a

THEORY: In linear block codes, the parity bits are formed as linear (modulo-2) combinations of
the message bits, and the sum of any two code words is itself a code word.

Consider blocks of data containing k bits each. Each block is mapped to a code word of n bits,
where n is greater than k.

The transmitter adds (n-k) redundant bits. The ratio k/n is the code rate; it is denoted by r and
satisfies r < 1.

The (n-k) added bits are parity bits. Parity bits help in error detection and error correction, and
in locating the erroneous data. In a systematic code word, one part of the code word carries the
message bits unchanged and the remaining (n-k) positions carry the parity bits.
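As an illustration (a minimal sketch; this particular (7,4) generator matrix is an assumed example in the [P I] systematic form expected by the program below, not the matrix from the recorded output):

G = [1 1 0 1 0 0 0;
     0 1 1 0 1 0 0;
     1 1 1 0 0 1 0;
     1 0 1 0 0 0 1];   % [P I4]: three parity columns followed by the 4x4 identity
msg = [1 0 1 1];
cw = mod(msg*G,2)      % gives 1 0 0 1 0 1 1: parity bits 1 0 0, then the message 1 0 1 1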

CODE:
clc;
clear all;
close all;

% Generator Matrix

G= input('Enter the generator matrix');


[k,n]=size(G); % G is a k-by-n generator matrix

%Extracting the Parity matrix

P=G(:,1:n-k);
disp('P=')
disp(P)

%Generating the parity transpose matrix

Pt=transpose(P);
disp('Pt=')
disp(Pt)

%Identity Matrix
I=eye(n-k);

% Generating the Parity check matrix

H=[I Pt];
Ht= transpose(H);
disp(Ht)

msg=input('Enter the 1*k message matrix');


txd=mod((msg*G),2);

rxd=input('Enter the 1*n received code word');

ch_rxd=mod((rxd*Ht),2);
disp(ch_rxd);

%Calculation of syndrome and detection of error

if ch_rxd==0
disp('There is no error');

elseif ch_rxd==Ht(1,:)
disp('Error is in first row');

elseif ch_rxd==Ht(2,:)
disp('Error is in second row');

elseif ch_rxd==Ht(3,:)
disp('Error is in third row');

elseif ch_rxd==Ht(4,:)
disp('Error is in fourth row');

elseif ch_rxd==Ht(5,:)
disp('Error is in fifth row');

elseif ch_rxd==Ht(6,:)
disp('Error is in sixth row');

else
disp('Error is in seventh row');

end

OUTPUT:

Fig 4.1 Detection of error

PROGRAM NO. 5

AIM: Write a Program for coding & decoding of Convolutional codes.

SOFTWARE USED: MATLAB R2015a

THEORY:
A convolutional code is a type of error-correcting code that generates parity symbols via the
sliding application of a Boolean polynomial function to a data stream.

Convolutional codes are often characterized by the base code rate and the depth (or memory) of
the encoder, written [n, k, K]. The base code rate is typically given as k/n, where k is the number
of input bits and n is the number of output symbols per encoding step.

The depth is often called the "constraint length" K, where the output is a function of
the current input as well as the previous K-1 inputs. The depth may also be given as the number
of memory elements v in the polynomial, or as the maximum possible number of states of the
encoder (typically 2^v).

Fig: Convolutional Encoder

CODE:
clc;
clear all;
close all;
m=input('Enter a message vector of 4 bits');

disp('m=');
disp(m);

n(1)=0; %Initialisation
n(2)=0;
n(3)=0;

x=0;
x1=0;
x2=0;

for i=1:length(m)
n(3)=m(i); %Message Passing

x1=xor(xor(n(3),n(2)),n(1)); %Processing
x2=xor(n(1),n(3));

n(1)=n(2); %Shifting: move the older bit first so n(2) is not overwritten prematurely
n(2)=n(3);
x= [x x1 x2]; %Encoding
end

x=x(2:length(x)); %Eliminating 0

disp('Code=');
disp(x);
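As a quick sanity check (assuming the shift-register logic above; this sample message is an assumption, not the recorded run): the encoder implements the rate-1/2, constraint-length-3 generators (111, 101). For m = [1 0 1 1] the successive output pairs (x1 x2) are 11, 10, 00, 01, so the displayed code would be 1 1 1 0 0 0 0 1.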

OUTPUT:

Fig:5.1 Convolutional coding


PROGRAM NO. 6

AIM: Write a Program for coding & decoding of Cyclic codes.

SOFTWARE USED: MATLAB R2015a

THEORY: In coding theory, a cyclic code is a block code in which every circular shift of a
codeword gives another word that belongs to the code. Cyclic codes are error-correcting codes with
algebraic properties that are convenient for efficient error detection and correction.
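For example, the (7,4) cyclic Hamming code is generated by a polynomial of degree n-k = 3; the two standard choices are g(x) = x^3 + x + 1 and g(x) = x^3 + x^2 + 1, and every codeword polynomial is a multiple of g(x) modulo x^7 + 1. The MATLAB function cyclpoly(7,4) returns one such generator polynomial as a coefficient vector.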

CODE:
clc;
clear all;
%Encoding
n=7;
k=4;
p=[1 1 0 ; 1 1 1; 0 0 1 ; 1 0 1]; % Parity matrix
d=[1 1 0 1]; % Message word
ik=eye(k);

g=cat(2,ik,p);
disp('Generator Matrix:');
disp(g);

g1=cyclpoly(n,k,'max');
gp=poly2sym(g1);
disp('Generator Polynomial:');disp(gp);
c1=mtimes(d,g);

c=mod(c1,2);
disp('The codeword for given message is:'); disp(c);

%Decoding
r=[1 0 0 1 1 1 0];disp('received word of 7 bit:');disp(r);
ink=eye(n-k);
h=cat(2,p',ink);
ht=h';disp('Transpose of parity check matrix :');disp(ht);
rp=poly2sym(r);
[qp,remp]=quorem(rp,gp);

disp('Quotient polynomial:');disp(qp);
rem=sym2poly(remp);
s=mod(rem,2);disp('Syndrome :');disp(s);

if (s == 0)
disp('The received code is correct.');
else
disp('The received code is incorrect.');
row = 0;

for j=1:1:n
m=xor(s,ht(j,:));

if (m==0)
row = j;
break;
end
end

r(1,row) = ~r(1,row); % flip the erroneous bit
disp('Corrected codeword is:');
disp(r);
end
disp('c=');
disp(r);

OUTPUT:

Fig 6.1 Cyclic Coding

Fig: 6.2 Code Detection

PROGRAM NO. 7

AIM: Write a Program for coding & decoding of BCH & RS codes.

SOFTWARE USED: MATLAB R2015a

THEORY: BCH codes allow precise control over the number of symbol errors correctable by the
code. In particular, it is possible to design binary BCH codes that can correct multiple bit errors.
Another advantage of BCH codes is the ease with which they can be decoded, namely, via an
algebraic method known as syndrome decoding. This simplifies the design of the decoder for
these codes, using small low-power electronic hardware.
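For instance (an illustrative parameter choice, not necessarily the one in the recorded output): taking m = 4 gives a codeword length n = 2^4 - 1 = 15; choosing a message length of k = 7 yields a BCH(15,7) code whose error-correcting capability, as reported by bchgenpoly(15,7), is t = 2 bits per codeword.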

CODE:
%BCH Encoding
clear all;
clc;
m=input('m=');
fprintf('Codeword length ')

n = 2^m-1 % Codeword length


k = input ('enter message length k='); % Message length

m=input('enter the message of length k=')


msg = gf(m)

% Find t, the error-correction capability.


[genpoly,t] = bchgenpoly(n,k);
disp('Error correcting capability :');
disp(t);

%t2=input('Enter no. of errors to be added to produce noisy code :');

% Encode the message.


code = bchenc(msg,n,k)

% Corrupt between 1 and tn bits in the codeword (errors beyond t may not be correctable).

tn=6;
noisycode = code + randerr(1,n,1:tn)

% Decode the noisy code.

[newmsg,err,ccode] = bchdec(noisycode,n,k);
fprintf('new message=')
disp(newmsg)

if msg==newmsg
disp('The message was recovered perfectly.')
else

disp('Error in recovery of message.')
end;

%RS CODING
n=input('accept n=');        % codeword length, n = 2^exp - 1
k=input('accept k=');        % message length in symbols
m=input('accept message=');  % k symbols, each an integer between 0 and n
msg=gf(m,log2(n+1));         % Galois array of symbols; field exponent is log2(n+1)
c = rsenc(msg,n,k)           % Code will be a Galois array.

%RS DECODING
r=c;
r(1)=2;                      % introduce symbol errors in the received word
r(3)=1;
r(10)=29;
r(11)=12;
r(12)=18;
[d,e]=rsdec(r,n,k)           % d = decoded message, e = number of corrected errors (-1 on failure)
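For example (an assumed parameter choice, not taken from the recorded output): with n = 31 and k = 21 the message contains 21 symbols from GF(2^5), i.e. integers from 0 to 31, and rsdec can correct up to (n-k)/2 = 5 symbol errors, which matches the five corrupted positions introduced above.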

OUTPUT:

Fig. 7.1 BCH Encoding

Fig 7.2 BCH Decoding
