
Artificial neural networks

Introduction

Overview
1 Biological inspiration
2 Artificial neurons and neural networks
3 Learning processes
4 Learning with artificial neural networks
Biological inspiration

Animals are able to react adaptively to changes in their external and internal
environment, and they use their nervous system to perform these behaviours.
An appropriate model/simulation of the nervous system should be able to
produce similar responses and behaviours in artificial systems.
The nervous system is built from relatively simple units, the neurons, so copying
their behaviour and functionality should be the solution.
Biological inspiration

[Figure: anatomy of a biological neuron: dendrites, soma (cell body), axon, synapses]

The information transmission happens at the synapses.
Biological inspiration

The spikes travelling along the axon of the presynaptic neuron trigger the
release of neurotransmitter substances at the synapse.
The neurotransmitters cause excitation or inhibition in the dendrite of the
postsynaptic neuron.
The integration of the excitatory and inhibitory signals may produce spikes in
the postsynaptic neuron.
The contribution of the signals depends on the strength of the synaptic
connection.
Artificial neurons

Neurons work by processing information. They receive and provide information
in the form of spikes.
The McCulloch-Pitts model

[Figure: inputs x_1, x_2, x_3, ..., x_n, plus a constant input x_{n+1}, with
synaptic weights w_1, w_2, w_3, ..., w_n, w_{n+1}, feeding a single output y]

y = H( sum_{i=1}^{n+1} w_i x_i ),

where H is the Heaviside (hard-limit) step function and x_{n+1} = 1 carries
the threshold weight w_{n+1}.
Artificial neurons

The McCulloch-Pitts model:
- spikes are interpreted as spike rates;
- synaptic strengths are translated as synaptic weights;
- excitation means a positive product between the incoming spike rate
and the corresponding synaptic weight;
- inhibition means a negative product between the incoming spike rate
and the corresponding synaptic weight.
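These points can be sketched directly in Python. The threshold value and the example inputs below are illustrative assumptions, not taken from the slides:

```python
def mcculloch_pitts(x, w, threshold):
    """McCulloch-Pitts neuron: output 1 if the weighted sum of the
    inputs reaches the threshold, else 0 (Heaviside step)."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= threshold else 0

# Excitation is a positive weight, inhibition a negative weight.
# Illustrative example: a 2-input neuron acting as a logical AND.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # 0
```

An inhibitory input (negative weight) can veto the output: with weights [1, -1] and threshold 1, the neuron stays silent whenever the second input fires.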
Artificial neurons

Nonlinear generalization of the McCulloch-Pitts neuron:

y = f(x, w)

y is the neuron's output, x is the vector of inputs, and w is the vector of
synaptic weights.

Examples:

y = 1 / (1 + e^{-(w^T x + a)})    (sigmoidal neuron)

y = e^{-||x - w||^2 / a^2}        (Gaussian neuron)
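A minimal Python sketch of the two example neurons, assuming the common forms y = 1/(1 + e^(-(w.x + a))) for the sigmoidal neuron and y = e^(-||x - w||^2 / a^2) for the Gaussian neuron; the parameter a (offset for the sigmoid, width for the Gaussian) and the test inputs are illustrative:

```python
import math

def sigmoidal_neuron(x, w, a=0.0):
    """Sigmoidal neuron: y = 1 / (1 + exp(-(w.x + a)))."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + a
    return 1.0 / (1.0 + math.exp(-z))

def gaussian_neuron(x, w, a=1.0):
    """Gaussian neuron: y = exp(-||x - w||^2 / a^2).
    Responds most strongly when the input x is close to w."""
    d2 = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    return math.exp(-d2 / a**2)

print(sigmoidal_neuron([0, 0], [1, 1]))  # 0.5 at the origin
print(gaussian_neuron([1, 2], [1, 2]))   # 1.0 when x equals w
```

Note the different roles of w: for the sigmoidal neuron it is a direction the input is projected onto; for the Gaussian neuron it is a centre the input is compared against.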
Artificial neural networks

[Figure: a layered network transforming inputs into outputs]

An artificial neural network is composed of many artificial neurons that are
linked together according to a specific network architecture. The objective of
the neural network is to transform the inputs into meaningful outputs.
Information-processing system.
Neurons process the information.
The signals are transmitted by means of connection links.
The links possess an associated weight.
The output signal is obtained by applying activations to the net input.
ARTIFICIAL NEURAL NET

The figure shows a simple artificial neural net with two input
neurons (X1, X2) and one output neuron (Y). The interconnected
weights are given by W1 and W2.
The neuron is the basic information processing unit of a
NN. It consists of:

A set of links, describing the neuron inputs, with weights
W1, W2, ..., Wm.

An adder function (linear combiner) for computing the
weighted sum of the inputs (real numbers):

u = sum_{j=1}^{m} w_j x_j

An activation function for limiting the amplitude of the
neuron output:

y = phi(u + b), where b is the bias.
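The adder and the activation function can be combined into a single forward-pass sketch. The sigmoid default and the example values below are illustrative assumptions; any activation can be passed in:

```python
import math

def neuron_output(x, w, b, phi=lambda u: 1.0 / (1.0 + math.exp(-u))):
    """Basic neuron: adder (linear combiner) followed by an
    activation function: u = sum_j w_j * x_j, then y = phi(u + b)."""
    u = sum(wj * xj for wj, xj in zip(w, x))  # weighted sum of the inputs
    return phi(u + b)                         # activation limits the amplitude

# Sigmoid by default; a hard limiter can be supplied instead:
print(neuron_output([1.0, 2.0], [0.5, -0.25], b=0.0))  # 0.5, since u + b = 0
hard = lambda u: 1 if u >= 0 else 0
print(neuron_output([1.0, 2.0], [0.5, -0.25], b=0.0, phi=hard))  # 1
```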
PROCESSING OF AN ARTIFICIAL NET

OPERATION OF A NEURAL NET

[Figure: input vector (x0, x1, ..., xn) and weight vector (w0, w1, ..., wn)
combined into a weighted sum, passed through an activation function f to
produce the output; x0 carries the bias]
WEIGHT AND BIAS UPDATING

Per Sample Updating:
updating weights and biases after the presentation of each
sample.

Per Training Set Updating (Epoch or Iteration):
weight and bias increments could be accumulated in
variables, and the weights and biases updated after all the
samples of the training set have been presented.
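The two updating schemes can be contrasted in a small sketch. The delta rule on a linear neuron and the learning rate are illustrative choices, not prescribed by the slides:

```python
def per_sample_update(w, b, samples, lr=0.1):
    """Update w and b immediately after each (x, target) sample
    (delta rule on a linear neuron; lr is an illustrative value)."""
    for x, t in samples:
        y = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = t - y
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b = b + lr * err
    return w, b

def per_epoch_update(w, b, samples, lr=0.1):
    """Accumulate the increments over the whole training set,
    then apply them once at the end of the epoch."""
    dw = [0.0] * len(w)
    db = 0.0
    for x, t in samples:
        y = sum(wi * xi for wi, xi in zip(w, x)) + b  # old w, b all epoch
        err = t - y
        dw = [dwi + lr * err * xi for dwi, xi in zip(dw, x)]
        db += lr * err
    return [wi + dwi for wi, dwi in zip(w, dw)], b + db
```

On the same data the two schemes generally give different weights after one epoch, because per-sample updating lets each correction influence the next sample's error.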
BIAS OF AN ARTIFICIAL NEURON

The bias value is added to the weighted sum sum_i w_i x_i so that we can
shift the decision boundary away from the origin:

Y = sum_i w_i x_i + b, where b is the bias.
[Figure: decision lines x1 + x2 = 0, x1 + x2 = 1, and x1 + x2 = -1,
showing how the bias shifts the decision boundary away from the origin]
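A tiny sketch of how the bias shifts the decision line; the unit weights and the sample points are illustrative assumptions:

```python
def neuron(x1, x2, b):
    """Hard-limit neuron with unit weights: fires when x1 + x2 + b >= 0,
    i.e. the decision line is x1 + x2 = -b, shifted by the bias b."""
    return 1 if x1 + x2 + b >= 0 else 0

# With b = 0 the boundary passes through the origin (x1 + x2 = 0);
# with b = -1 it is shifted outward to x1 + x2 = 1.
print(neuron(0.5, 0.4, b=0))   # 1: the point is above x1 + x2 = 0
print(neuron(0.5, 0.4, b=-1))  # 0: the same point is below x1 + x2 = 1
```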
STOPPING CONDITION

All changes in weights (w_ij) in the previous epoch are below
some threshold, or
a pre-specified number of epochs has expired.

In practice, several hundreds of thousands of epochs may be
required before the weights converge.
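Both stopping conditions can be combined in one training loop. The delta rule, learning rate, tolerance, and epoch cap below are illustrative assumptions:

```python
def train(w, b, samples, lr=0.1, tol=1e-6, max_epochs=1000):
    """Delta-rule training that stops when every weight change in the
    last epoch is below `tol`, or when `max_epochs` have expired."""
    for epoch in range(max_epochs):
        max_change = 0.0
        for x, t in samples:
            y = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = t - y
            for i, xi in enumerate(x):
                dw = lr * err * xi
                w[i] += dw
                max_change = max(max_change, abs(dw))
            b += lr * err
            max_change = max(max_change, abs(lr * err))
        if max_change < tol:          # all changes below threshold: stop
            break
    return w, b, epoch + 1            # epochs actually used
```

On a single consistent sample this converges in a few dozen epochs; on harder problems the epoch cap is what actually terminates training, which is why the slides warn that hundreds of thousands of epochs may be needed.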
BUILDING BLOCKS OF ARTIFICIAL
NEURAL NET

Network Architecture (Connection between
Neurons)
Setting the Weights (Training)
Activation Function

April 2007
LAYER PROPERTIES

Input Layer:
Each input unit may be designated by an attribute
value possessed by the instance.
Hidden Layer:
Not directly observable; provides nonlinearities for the
network.
Output Layer:
Encodes possible values.
TRAINING PROCESS

Supervised Training - providing the network with a series of
sample inputs and comparing the output with the expected
responses.
Unsupervised Training - the most similar input vector is
assigned to the same output unit.
Reinforcement Training - the right answer is not provided, but
an indication of whether the output is 'right' or 'wrong' is provided.
ACTIVATION FUNCTION

Activation level - discrete or continuous

Hard limit function (discrete):
Binary activation function
Bipolar activation function
Identity function

Sigmoidal activation function (continuous):
Binary sigmoidal activation function
Bipolar sigmoidal activation function
ACTIVATION FUNCTION

Activation functions:
(A) Identity
(B) Binary step
(C) Bipolar step
(D) Binary sigmoidal
(E) Bipolar sigmoidal
(F) Ramp
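The listed activation functions can be written down directly. The ramp here is assumed to clamp to [0, 1], a common convention:

```python
import math

def identity(u):        return u                               # unbounded
def binary_step(u):     return 1 if u >= 0 else 0              # {0, 1}
def bipolar_step(u):    return 1 if u >= 0 else -1             # {-1, 1}
def binary_sigmoid(u):  return 1.0 / (1.0 + math.exp(-u))      # (0, 1)
def bipolar_sigmoid(u): return 2.0 / (1.0 + math.exp(-u)) - 1  # (-1, 1)
def ramp(u):            return max(0.0, min(1.0, u))           # [0, 1]
```

The binary/bipolar pairs differ only in their output range; the sigmoids are smooth, differentiable versions of the corresponding step functions, which is what gradient-based training requires.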
PROBLEM SOLVING

Select a suitable NN model based on the nature of the
problem.
Construct a NN according to the characteristics of the
application domain.
Train the neural network with the learning procedure of
the selected model.
Use the trained network to make inferences or solve
problems.
NEURAL NETWORKS

A neural network learns by adjusting the weights so
as to be able to correctly classify the training data
and hence, after the testing phase, to classify unknown
data.
A neural network needs a long time for training.
A neural network has a high tolerance to noisy and
incomplete data.
SALIENT FEATURES OF ANN

Adaptive learning
Self-organization
Real-time operation
Fault tolerance via redundant information coding
Massive parallelism
Learning and generalizing ability
Distributed representation
FEW APPLICATIONS OF NEURAL
NETWORKS
