
Vaish College of Engineering, Rohtak

Neural Network
Practical File

Submitted To: Ms. Neha Taneja, AP, CSE Dept.
Submitted by: Ritika Gupta, 110-CSE-A-19
INDEX

1. Introduction to MATLAB in context with NN.
2. Plotting of Activation Functions: Logistic function, Tan-hyperbolic function, Identity function using MATLAB.
3. Implementation of AND-logic in MCP using MATLAB.
4. Implementation of OR-logic in MCP using MATLAB.
5. Implementation of Hebb Learning using MATLAB.
6. Implementation of Hebbian Learning using MATLAB.
7. Implementation of Perceptron Model using MATLAB.
8. Implementation of Gradient-descent using MATLAB.
9. Write a MATLAB program to construct and test auto-associative network for input vector using outer product rule.
10. Write a MATLAB program to construct and test hetero-associative network for binary inputs and targets.
Practical 1.

Introduction to MATLAB in context with NN.

MATLAB is a software package for high-performance mathematical computation, visualization, and programming. It provides an interactive environment with hundreds of built-in functions for technical computing, graphics, and animation.

MATLAB stands for Matrix Laboratory. MATLAB was written initially to implement a
simple approach to matrix software developed by the LINPACK (Linear system package)
and EISPACK (Eigen system package) projects.

MATLAB is a modern programming language environment: it has refined data structures, includes built-in editing and debugging tools, and supports object-oriented programming.

MATLAB is multi-paradigm, so it can work with multiple programming approaches, such as functional, object-oriented, and visual.

Besides the environment, MATLAB is also a programming language.


As its name contains the word Matrix, MATLAB does all its computing based on mathematical matrices and arrays. Every MATLAB variable holds data in the form of an array, whether it is of integer, character, or string type.

MATLAB is used in various disciplines of engineering, science, and economics.

MATLAB supports several types of tasks, such as matrix manipulation, algorithm implementation, plotting of data and functions, and interaction with programs written in other programming languages.
MATLAB is a dynamic and weakly typed programming language.

The MATLAB environment handles the declaration of variable data types and the provision of an appropriate amount of storage for variables.

History of MATLAB

Development of MATLAB started in the late 1970s with Cleve Moler, then chairman of the Computer Science department at the University of New Mexico. Cleve wanted his students to be able to use LINPACK and EISPACK (software libraries for numerical computing, written in FORTRAN) without having to learn FORTRAN. In 1984, Cleve Moler, together with Jack Little and Steve Bangert, rewrote MATLAB in C and founded MathWorks. These libraries were known as JACKPAC at the time; they were later revised in 2000 for matrix manipulation and renamed LAPACK.

Main Features and Capabilities of MATLAB

The diagram in the figure shows the main features and capabilities of MATLAB. MATLAB's built-in functions provide excellent tools for linear algebra computations, data analysis, signal processing, optimization, numerical solution of ordinary differential equations (ODEs), quadrature, and many other types of scientific calculations.

Most of these functions use state-of-the-art algorithms. There are numerous functions for 2-D and 3-D graphics, as well as for animations.

MATLAB supports an external interface to run programs written in other languages from within MATLAB. Users are not limited to the built-in functions; they can write their own functions in the MATLAB language.

There are also various optional "toolboxes" available from the developers of MATLAB.
These toolboxes are a collection of functions written for primary applications such as
symbolic computations, image processing, statistics, control system design, and neural
networks.

The basic building block of MATLAB is the matrix; the fundamental data type is the array. Vectors, scalars, real matrices, and complex matrices are all handled automatically as special cases of this basic data type. MATLAB loves matrices and matrix functions: the built-in functions are optimized for vector operations, so vectorized commands or code run much faster in MATLAB.

MATLAB System

The MATLAB system consists of five main parts:


Development Environment

This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.

MATLAB Mathematical Function Library

This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

MATLAB Language

This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick throw-away programs and "programming in the large" to create large and complex application programs.

Graphics

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as for annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications.

MATLAB External Interfaces/API

This is a library that allows you to write C and FORTRAN programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and reading and writing MAT-files.
Practical 2.

Plotting of Activation Functions: Logistic functions, Tan-hyperbolic functions, Identity


function using MATLAB.

Logistic Function: The standard logistic function is the logistic function with parameters k = 1, x0 = 0, L = 1. This reduces the logistic function to:

f(x) = 1 / [1 + e^(-x)]

Tan-hyperbolic Function: The hyperbolic tangent function is the function f: R → R defined by f(x) = [e^x - e^(-x)] / [e^x + e^(-x)], and it is denoted by tanh x:

tanh x = [e^x - e^(-x)] / [e^x + e^(-x)]

Identity Function: The identity function is the function which returns the same value that was used as its argument.
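As a quick numerical cross-check of the three definitions above (shown here in Python/NumPy purely for illustration; the practical itself uses MATLAB), each function can be evaluated directly:

```python
import numpy as np

x = np.array([-2.0, 0.0, 2.0])

logistic = 1.0 / (1.0 + np.exp(-x))   # f(x) = 1 / [1 + e^(-x)]
tanh_x = (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))  # definition of tanh
identity = x                          # f(x) = x

print(logistic[1])                    # logistic(0) = 0.5
print(np.allclose(tanh_x, np.tanh(x)))  # matches the built-in tanh
```

Note that the logistic function is centered at 0.5 while tanh is centered at 0, which is why the plots below use the same axis range for comparison.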

%Plotting of activation functions
x = -10:0.1:10;
temp = exp(-x);
y1 = 1./(1+temp);  %logistic function; ./ is element-wise right division, i.e. rdivide(A,B)
y2 = tanh(x);      %hyperbolic tangent; note (1-temp)./(1+temp) would give tanh(x/2), not tanh(x)
y3 = x;            %identity function
subplot(221);
plot(x,y1);
grid on;
axis([min(x) max(x) -2 2]);
title('Logistic Function');
xlabel('(a)');
axis('square');
subplot(223);
plot(x,y2);
grid on;
axis([min(x) max(x) -2 2]);
title('Hyperbolic Tangent Function');
xlabel('(b)');
axis('square');
subplot(224);
plot(x,y3);
grid on;
axis([min(x) max(x) -2 2]);
title('Identity or Linear Function');
xlabel('(c)');
axis('square');
Practical 3.

Implementation of AND-logic in MCP using MATLAB.

MCP: In 1943 Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, published "A logical calculus of the ideas immanent in nervous activity" in the Bulletin of Mathematical Biophysics 5:115-133. In this paper McCulloch and Pitts tried to understand how the brain could produce highly complex patterns by using many basic cells that are connected together. These basic brain cells are called neurons, and McCulloch and Pitts gave a highly simplified model of a neuron in their paper.

The McCulloch and Pitts model of a neuron, which we will call an MCP neuron for short, has been very important in computer science. In fact, you can buy an MCP neuron at most electronics stores, where they are called "threshold logic units." A group of MCP neurons that are connected together is called an artificial neural network. In a sense, the brain is a very large neural network: it has billions of neurons, and each neuron is connected to thousands of other neurons.

McCulloch and Pitts showed how to encode any logical proposition by an appropriate network of MCP neurons, so in theory anything that can be done with a computer can also be done with a network of MCP neurons. They also showed that every network of MCP neurons encodes some logical proposition. So if the brain were a neural network, it would encode some complicated computer program. But the MCP neuron is not a real neuron; it is only a highly simplified model, and we must be very careful in drawing conclusions about real neurons based on properties of MCP neurons.
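The script below asks the user for weights and a threshold interactively. With a fixed choice of w1 = w2 = 1 and theta = 2 (one valid choice, not the only one), the MCP neuron realizes AND, as this illustrative NumPy sketch shows:

```python
import numpy as np

# MCP neuron: output 1 when the weighted sum reaches the threshold
w1, w2, theta = 1, 1, 2          # one weight/threshold choice that realizes AND
x1 = np.array([0, 0, 1, 1])
x2 = np.array([0, 1, 0, 1])

zin = x1 * w1 + x2 * w2          # net input for all four input pairs
y = (zin >= theta).astype(int)   # threshold (step) activation
print(y)                         # matches the AND truth table
```

Any weights and threshold satisfying w1 + w2 >= theta while w1 < theta and w2 < theta would work equally well.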

clear;
clc;
disp('Enter Weights');
w1 = input('Weight w1 = ');
w2 = input('Weight w2 = ');
disp('Enter Threshold Value');
theta = input('theta = ');
y=[0 0 0 0];
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 0 0 1]; %AND targets
con=1;
while con
zin=x1*w1+x2*w2;
for i=1:4
if zin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('output for net');
disp(y);
if y==z
con=0;
else
disp('not learning');
w1 = input('Weight w1 = ');
w2 = input('Weight w2 = ');
theta = input('theta = ');
end
end
disp('MCP');
disp('Weight');
disp(w1);
disp(w2);
disp('threshold value');
disp(theta);
OUTPUT:
Practical 4.

Implementation of OR-logic in MCP using MATLAB.


clear;
clc;
disp('Enter Weights');
w1 = input('Weight w1 = ');
w2 = input('Weight w2 = ');
disp('Enter Threshold Value');
theta = input('theta = ');
y=[0 0 0 0];
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 1 1 1]; %OR targets
con=1;
while con
zin=x1*w1+x2*w2;
for i=1:4
if zin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('output for net');
disp(y);
if y==z
con=0;
else
disp('not learning');
w1 = input('Weight w1 = ');
w2 = input('Weight w2 = ');
theta = input('theta = ');
end
end
disp('MCP');
disp('Weight');
disp(w1);
disp(w2);
disp('threshold value');
disp(theta);
OUTPUT:
Practical 5.

Implementation of Hebb Learning using MATLAB.

Hebb learning rule - historically the first proposed learning rule for neurons. Weights are adjusted in proportion to the product of the activations of the pre- and post-synaptic neurons.
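The update w(i,j) = w(i,j) + s(i)*t(j) used in the script below is simply the outer product of the pattern with itself added into the weight matrix. An illustrative NumPy cross-check with the same stored vector:

```python
import numpy as np

s = np.array([1, 1, 1, -1])   # stored pattern
t = np.array([1, 1, 1, -1])   # target (same pattern: auto-association)

w = np.zeros((4, 4))
w += np.outer(s, t)           # Hebb rule: w[i, j] += s[i] * t[j]

# recall by presenting the stored pattern itself
yin = s @ w
y = np.where(yin > 0, 1, -1)
print(y)                      # recovers s, so the stored pattern is recognized
```

Note that recall with the stored vector s succeeds, but an arbitrary test vector (such as the ip used in the script) need not be recognized if it is orthogonal to s.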
clear all;
clc;
disp('AUTO ASSOCIATIVE NETWORK-----HEBB RULE');
w=[0 0 0 0;0 0 0 0;0 0 0 0;0 0 0 0];
s=[1 1 1 -1];
t=[1 1 1 -1];
ip=[1 -1 -1 -1];
disp('INPUT VECTOR');
s
for i=1:4
for j=1:4
w(i,j)=w(i,j)+(s(i)*t(j));
end
end
disp('WEIGHTS TO STORE THE GIVEN VECTOR IS');
w
disp('TESTING THE NET WITH VECTOR');
ip
yin=ip*w;
for i=1:4
if yin(i)>0
y(i)=1;
else
y(i)=-1;
end
end
if y==s
disp('PATTERN IS RECOGNIZED')
else
disp('PATTERN IS NOT RECOGNIZED')
end
Practical 6.

Implementation of Hebbian Learning using MATLAB.

clc ;
clear all;
close all;
x=[1 1 -1 -1;1 -1 1 -1];
t=[1 -1 -1 -1];
w=[0 0];
b=0;
for i=1:4;
for j=1:2
w(j)=w(j)+t(i)*x(j,i);
end
b=b+t(i);
end
disp ( 'Final Weight Matrix: ') ;
disp (w) ;
disp ('Final bias Values');
disp (b);

plot (x(1,1),x(2,1),'or', 'MarkerSize' ,20,'MarkerFaceColor',[0 0 1]);hold on;


plot (x(1,2),x(2,2),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
plot (x(1,3),x(2,3),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
plot (x(1,4),x(2,4),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
hold on;

m=-(w(1)/w(2));
c=-b/w(2) ;
x1=linspace (-2,2,100) ;
x2=m*x1+c ;
plot(x2,x1,'r');
axis ([-2 2 -2 2]);
OUTPUT:
Practical 7.

Implementation of Perceptron Model using MATLAB.

Perceptron: The perceptron was introduced by Frank Rosenblatt in 1957. He proposed a perceptron learning rule based on the original MCP neuron. A perceptron is an algorithm for supervised learning of binary classifiers; it enables a neuron to learn by processing elements of the training set one at a time.

The perceptron is considered a single-layer neural network with four main parameters: input values, weights and bias, net sum, and an activation function. The perceptron model begins by multiplying all input values by their weights, then adds these values to create the weighted sum. This weighted sum is then applied to the activation function 'f' to obtain the desired output. This activation function is also known as the step function and is represented by 'f'.

This step (activation) function is vital in ensuring that output is mapped to (0, 1) or (-1, 1). Take note that the weight of an input indicates the strength of that connection. Similarly, the bias gives the ability to shift the activation function curve up or down.
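Stripped of its plotting, the learning loop in the MATLAB script below amounts to the following illustrative NumPy sketch, using the same data, learning rate, and threshold; training repeats until a full pass over the four patterns makes no update:

```python
import numpy as np

x = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0]])        # inputs, one pattern per column
t = np.array([1, -1, -1, -1])       # bipolar targets
w = np.zeros(2)
b = 0.0
alpha, theta = 1.0, 0.0             # learning rate and threshold

changed = True
while changed:
    changed = False
    for i in range(4):
        yin = b + x[:, i] @ w       # net input
        if yin > theta:
            y = 1
        elif yin < -theta:
            y = -1
        else:
            y = 0
        if y != t[i]:               # misclassified: apply the perceptron rule
            w = w + alpha * t[i] * x[:, i]
            b = b + alpha * t[i]
            changed = True

print(w, b)                         # after convergence all patterns are correct
```

Because the four patterns are linearly separable, the perceptron convergence theorem guarantees this loop terminates.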

clc ;
clear all;
close all;
x=[1 1 0 0;1 0 1 0];
t=[1 -1 -1 -1];
w=[0 0];
b=0;
alpha=1 ;%Learning Rate
theta=0 ;%Threshold
yes=1;
epoch=0 ;
while yes
yes=0 ;
for i=1:4;
%------------

if (theta==0)
plot (x(1,1),x(2,1),'or', 'MarkerSize' ,20,'MarkerFaceColor',[0 0 1]);hold on;
plot (x(1,2),x(2,2),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
plot (x(1,3),x(2,3),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
plot (x(1,4),x(2,4),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
hold on;
m=-(w(1)/w(2));
c=-b/w(2) ;

x1=linspace (-2,2,100) ;
x2=m*x1+c;
plot (x2 , x1 ,'b');
line ([0 w(2)],[0 w(1)]);
axis ([-2 2 -2 2]);
hold off ;
end
if (theta~=0)
plot (x(1,1),x(2,1),'or', 'MarkerSize' ,20,'MarkerFaceColor',[0 0 1]);hold on;
plot (x(1,2),x(2,2),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
plot (x(1,3),x(2,3),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
plot (x(1,4),x(2,4),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
hold on;
m=-(w(1)/w(2));
c=-b/w(2);
x1=linspace (-2,2,100);
x2=m*x1+c+theta;
x3=m*x1+c-theta;
plot (x2 ,x1,'r');
plot (x3 ,x1,'r');
axis([-2 2 -2 2]);
end
pause ;
cnt=0 ;
yin=b+x(1,i)*w(1)+x(2,i)*w(2);
if (yin>theta)
y=1;
end
if (yin<=theta & yin>=-theta)
y=0;
end
if(yin<-theta)
y=-1;
end
if (y~=t(i))
yes=i;
for j=1:2
w(j)=w(j)+t(i)*x(j,i)*alpha;
end
b=b+t(i)*alpha;
end
end
epoch=epoch+1;
end
disp ('Perceptron')
disp ( 'Final Weight Matrix: ') ;
disp (w) ;
disp ('Final bias Values');
disp (b);
disp ('Total epochs used');
disp (epoch) ;
figure (2) ;
if (theta==0)
plot (x(1,1),x(2,1),'or', 'MarkerSize' ,20,'MarkerFaceColor',[0 0 1]);hold on;
plot (x(1,2),x(2,2),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
plot (x(1,3),x(2,3),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
plot (x(1,4),x(2,4),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
hold on;
m=-(w(1)/w(2));
c=-b/w(2) ;

x1=linspace (-2,2,100) ;
x2=m*x1+c ;
plot (x2 , x1 ,'r');
line([0 w(2)],[0 w(1)]);
axis ([-2 2 -2 2]);
hold off ;
end
if (theta~=0)
plot (x(1,1),x(2,1),'or', 'MarkerSize' ,20,'MarkerFaceColor',[0 0 1]);hold on;
plot (x(1,2),x(2,2),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
plot (x(1,3),x(2,3),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
plot (x(1,4),x(2,4),'or', 'MarkerSize' ,20,'MarkerFaceColor',[1 0 0]);hold on;
hold on;
m=-(w(1)/w(2));
c=-b/w(2);
x1=linspace (-2,2,100);
x2=m*x1+c+theta;
x3=m*x1+c-theta;
plot (x2 ,x1,'r');
plot (x3 ,x1,'r');
axis([-2 2 -2 2]);
end
OUTPUT:
Practical 8.

Implementation of Gradient-descent using MATLAB.

Gradient descent is an optimization algorithm which is commonly-used to train machine learning models
and neural networks. Training data helps these models learn over time, and the cost function within
gradient descent specifically acts as a barometer, gauging its accuracy with each iteration of parameter
updates. Until the function is close to or equal to zero, the model will continue to adjust its parameters
to yield the smallest possible error. Once machine learning models are optimized for accuracy, they can
be powerful tools for artificial intelligence (AI) and computer science applications.

• Learning rate (also referred to as step size or alpha) is the size of the steps taken to reach the minimum. It is typically a small value, and it is evaluated and updated based on the behavior of the cost function. A high learning rate results in larger steps but risks overshooting the minimum. Conversely, a low learning rate takes small steps; while this gives more precision, it compromises overall efficiency, since reaching the minimum requires more iterations, time, and computation.
• The cost (or loss) function measures the difference, or error, between actual y and predicted y
at its current position. This improves the machine learning model's efficacy by providing
feedback to the model so that it can adjust the parameters to minimize the error and find the
local or global minimum. It continuously iterates, moving along the direction of steepest descent
(or the negative gradient) until the cost function is close to or at zero. At this point, the model
will stop learning. Additionally, while the terms cost function and loss function are often treated as synonymous, there is a slight difference between them: a loss function refers to the error of one training example, while a cost function calculates the average error across an entire training set.
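For the function used in the symbolic MATLAB script below, f(x, y) = x^2 + y^2 + 8, the gradient is (2x, 2y), so the update w = w - alpha*grad(w) shrinks w toward the minimizer (0, 0) for a small enough alpha. An illustrative NumPy cross-check with alpha = 0.1 and the same starting point:

```python
import numpy as np

def grad(w):
    # gradient of f(x, y) = x^2 + y^2 + 8 is (2x, 2y)
    return 2.0 * w

alpha = 0.1
w = np.array([3.0, 4.0])        # same starting point as the script

for _ in range(40):
    w = w - alpha * grad(w)     # gradient descent step: each step scales w by 0.8

print(w)                        # very close to the minimizer (0, 0)
```

With alpha = 2 or alpha = 1 (the other values the script tries), the iterates oscillate or diverge instead of converging, which is exactly the overshooting behavior described above.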

syms x Y
f=x^2 + y^2 + 8;
f_x = diff(f,x);
f_y = diff(f,y);
alpha=[2 1 0.1];
for k = 1:1:3
w_old=[3 4];
x= w_old(1);
y= w_old(2);
w_new = w_old - alpha(k)*[subs(f_x) subs(f_y)];
for i = 1:39
hold on
subplot (1, 3, k)
plot ([w_new(1) w_old(1)],[w_new(2) w_old(2)])
xlim([-5 5])
ylim([-5 5])
w_old = w_new;
x = w_old(1);
y = w_old(2);
w_new = w_old - alpha(k)*[subs(f_x) subs(f_y)];
hold off
end
end

OUTPUT:
Practical 9.

Write a MATLAB program to construct and test auto associative network for input vector
using outer product rule.
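The outer product rule stores each bipolar pattern x by adding x'*x into the weight matrix; recall computes the net input x*W and applies a sign threshold. An illustrative NumPy cross-check with the same two vectors the program uses:

```python
import numpy as np

x1 = np.array([1, -1, 1, -1])
x2 = np.array([1, 1, -1, -1])

# outer product rule: W = x1' * x1 + x2' * x2
W = np.outer(x1, x1) + np.outer(x2, x2)

for x in (x1, x2):
    yin = x @ W
    y = np.where(yin > 0, 1, -1)
    print(y, bool((y == x).all()))   # both stored patterns are recognized
```

Recall succeeds here because x1 and x2 are orthogonal (x1 . x2 = 0), so each pattern's net input is a positive multiple of itself with no cross-talk from the other stored pattern.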
clear all;
clc;
disp('To test Auto associative network using outer product rule for following input vector');
x1=[1 -1 1 -1];
x2=[1 1 -1 -1];
n=0;
w1=x1'*x1;
w2=x2'*x2;
wm=w1+w2;
disp('input');
x1
x2
disp('Target');
x1
x2
disp('Weights');
w1
w2
disp('Weight matrix using Outer Products Rule');
wm
yin=x1*wm;
yin
for i=1:4
if(yin(i)>0)
y=1;
else
y=-1;
end
ny(i)=y;
if(y==x1(i))
n=n+1;
end
end
ny
if(n==4)
disp('This pattern is recognized');
else
disp('This pattern is not recognized');
end
n=0;
yin=x2*wm;
yin
for i=1:4
if(yin(i)>0)
y=1;
else
y=-1;
end
ny(i)=y;
if(y==x2(i))
n=n+1;
end
end
ny
if(n==4)
disp('This pattern is recognized');
else
disp('This pattern is not recognized');
end

OUTPUT:
Practical 10.

Write a MATLAB program to construct and test hetero-associative network for binary
inputs and targets.
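The weight formula in the program maps each binary vector to bipolar form, 2x - 1 and 2t - 1, and sums the outer products over all four training pairs. An illustrative NumPy cross-check with the same patterns:

```python
import numpy as np

X = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 1]])                    # binary inputs, one per row
T = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])  # binary targets

# bipolar outer product rule: W = sum_k (2*x_k - 1)' * (2*t_k - 1)
W = (2 * X - 1).T @ (2 * T - 1)

Tb = 2 * T - 1                     # bipolar targets, as in the program's test phase
for k in range(4):
    yin = X[k] @ W
    y = np.where(yin > 0, 1, np.where(yin < 0, -1, 0))
    print(y, bool((y == Tb[k]).all()))
```

All four patterns map to their bipolar targets, matching the "pattern is matched" outcome of the full script.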

%To construct and test heteroassociative network for binary inputs and targets
clear all;
clc;
disp('Heteroassociative Network');
x1=[1 0 0 0];
x2=[1 1 0 0];
x3=[0 0 0 1];
x4=[0 0 1 1];
t1=[1 0];
t2=[1 0];
t3=[0 1];
t4=[0 1];
n=0;
for i=1:4
for j=1:2
w(i,j)=((2*x1(i))-1)*((2*t1(j))-1)+((2*x2(i))-1)*((2*t2(j))-1)+ ...
((2*x3(i))-1)*((2*t3(j))-1)+((2*x4(i))-1)*((2*t4(j))-1);
end
end
w
yin1=x1*w
yin2=x2*w
yin3=x3*w
yin4=x4*w
t1=[ 1 -1];
t2=[ 1 -1];
t3=[-1 1];
t4=[-1 1];
for i=1:2
if(yin1(i)>0)
y1(i)=1;
elseif (yin1(i)==0)
y1(i)=0;
else
y1(i)=-1;
end
end
y1
for i=1:2
if(y1(i)==t1(i))
n=n+1;
end
end
if (n==2)
disp('The pattern is matched');
else
disp('The pattern is not matched');
end
n=0;
for i=1:2
if(yin2(i)>0)
y2(i)=1;
elseif (yin2(i)==0)
y2(i)=0;
else
y2(i)=-1;
end
end
y2
for i=1:2
if(y2(i)==t2(i))
n=n+1;
end
end
if (n==2)
disp('The pattern is matched');
else
disp('The pattern is not matched');
end
n=0;
for i=1:2
if(yin3(i)>0)
y3(i)=1;
elseif (yin3(i)==0)
y3(i)=0;
else
y3(i)=-1;
end
end
y3
for i=1:2
if(y3(i)==t3(i))
n=n+1;
end
end
if (n==2)
disp('The pattern is matched');
else
disp('The pattern is not matched');
end
n=0;
for i=1:2
if(yin4(i)>0)
y4(i)=1;
elseif (yin4(i)==0)
y4(i)=0;
else
y4(i)=-1;
end
end
y4
for i=1:2
if(y4(i)==t4(i))
n=n+1;
end
end
if (n==2)
disp('The pattern is matched');
else
disp('The pattern is not matched');
end

OUTPUT:
