Artificial Neural Network
Human brain?
Recognizing a face is effortless for the human brain; is the same task easy for a computer?
Switching speed of a silicon chip: ~10^-9 s
Switching speed of a neuron in the brain: ~10^-3 s
Yet the brain is a massively parallel network of about 10 billion neurons with some 60 trillion interconnections.
What we can do is try to mimic some part of how the brain works.
Evidential Response
Decision with a measure of confidence
Fault tolerance
Graceful degradation
Technical | Biological
Neural network | Neuron
Biological inspiration
The spikes travelling along the axon of the pre-synaptic
neuron trigger the release of neurotransmitter substances at the
synapse.
The neurotransmitters cause excitation or inhibition in the
dendrite of the post-synaptic neuron.
The integration of the excitatory and inhibitory signals may
produce spikes in the post-synaptic neuron.
The contribution of the signals depends on the strength of the
synaptic connection.
The basic processing elements are neurons; in technical systems, we also refer to them as units or nodes. The connections between them transmit information. In biological systems, one neuron can be connected to as many as 10,000 other neurons.
Perceptron
A computer model or computerized machine devised to
represent or simulate the ability of the brain to recognize and
discriminate.
Perceptrons are the simplest data structures to study when learning about neural networks.
The links between the nodes not only show their relationship but also transmit data and information, called a signal or impulse.
The perceptron is a simple model of a neuron (nerve cell).
[Figure: the perceptron. Input values $x_1, x_2, \dots, x_m$ are multiplied by weights $w_1, w_2, \dots, w_m$ and fed, together with the bias $b$, to a summing function; its output $v$ (the induced field) passes through an activation function $\varphi(\cdot)$ to give the output.]

$\varphi(v) = \begin{cases} 1 & \text{if } v \ge 0 \\ -1 & \text{if } v < 0 \end{cases}$
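To make the model concrete, here is a minimal sketch of this perceptron in Python; the names (`step`, `perceptron_output`, `weights`, `bias`) and the AND example are our own illustration, not from the slides:

```python
import numpy as np

def step(v):
    """Threshold activation: +1 if the induced field is non-negative, else -1."""
    return 1 if v >= 0 else -1

def perceptron_output(x, weights, bias):
    """Weighted sum of the inputs plus bias (the induced field v),
    passed through the step activation."""
    v = np.dot(weights, x) + bias
    return step(v)

# Example: a perceptron computing logical AND on {-1, +1} inputs
weights = np.array([0.5, 0.5])
bias = -0.5
print(perceptron_output(np.array([1, 1]), weights, bias))   # +1
print(perceptron_output(np.array([1, -1]), weights, bias))  # -1
```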
Bias of a Neuron
The bias b has the effect of applying an affine transformation to the weighted sum u:
$v = u + b$
The bias is an external parameter of the neuron. It can be modeled by adding an extra input $x_0 = +1$ whose weight is $w_0 = b$:
$v = \sum_{j=0}^{m} w_j x_j, \qquad w_0 = b$
v is called the induced local field of the neuron.
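In code, the extra-input trick amounts to prepending a fixed input $x_0 = 1$ and treating b as its weight; a small sketch (all names are illustrative):

```python
import numpy as np

def induced_field(x, w, b):
    """v = u + b computed directly, with u the weighted sum."""
    return np.dot(w, x) + b

def induced_field_extra_input(x, w, b):
    """The same v, with the bias modeled as weight w_0 = b
    on a fixed extra input x_0 = 1."""
    x_aug = np.concatenate(([1.0], x))
    w_aug = np.concatenate(([b], w))
    return np.dot(w_aug, x_aug)

x, w, b = np.array([1.0, 2.0]), np.array([0.5, 0.25]), 2.0
assert induced_field(x, w, b) == induced_field_extra_input(x, w, b)  # both give v = 3.0
```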
Neuron Models
The choice of activation function $\varphi$ determines the neuron model. Examples:

step function:
$\varphi(v) = \begin{cases} a & \text{if } v < c \\ b & \text{if } v \ge c \end{cases}$

ramp function:
$\varphi(v) = \begin{cases} a & \text{if } v < c \\ b & \text{if } v > d \\ a + \dfrac{(v - c)(b - a)}{d - c} & \text{otherwise} \end{cases}$

sigmoid function:
$\varphi(v) = \dfrac{1}{1 + \exp(-xv + y)}$

Gaussian function:
$\varphi(v) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\dfrac{1}{2}\left(\dfrac{v - \mu}{\sigma}\right)^{2}\right)$
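A short Python sketch of these activation functions may help; the default parameter values are illustrative assumptions, with `a`, `b`, `c`, `d`, `x`, `y`, `mu`, `sigma` matching the symbols in the formulas above:

```python
import math

def step(v, a=0.0, b=1.0, c=0.0):
    """Step: output a below the threshold c, b at or above it."""
    return a if v < c else b

def ramp(v, a=0.0, b=1.0, c=-1.0, d=1.0):
    """Ramp: constant outside [c, d], linear interpolation in between."""
    if v < c:
        return a
    if v > d:
        return b
    return a + (v - c) * (b - a) / (d - c)

def sigmoid(v, x=1.0, y=0.0):
    """Sigmoid: smooth squashing of v into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x * v + y))

def gaussian(v, mu=0.0, sigma=1.0):
    """Gaussian: bell-shaped response centered at mu."""
    return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)
```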
Step Function
[Figure: a step-function decision at threshold b, illustrated on data plotted with Height and Weight on the axes.]
A straight line $y = mx + c$, where m is the slope and c is the intercept, can be written in neural network form:
$y_k = w_1 x + w_0$
This equation is the neural model of a straight line.
[Figure: a linear neuron with input $x_1$, weight $w_{21}$, and bias weight $w_{20}$, producing output $y_2$.]
$y_2 = w_{21} x + w_{20}$
where $w_{21}$ is the slope and x is the input.
[Figure: a neuron with two inputs X and Z, weights $w_{31}$ and $w_{32}$, and bias weight $w_{30}$, producing output $y_3$.]
$y_3 = w_{31} X + w_{32} Z + w_{30}$
In general, the output is a function of several inputs: $y = f(x_1, x_2, x_3, \dots, x_n)$
Gradient descent
The error on training pattern p is $E_p = (t_p - y_p)$. Gradient descent changes each weight in proportion to the negative gradient of the error:
$\Delta w \propto -\dfrac{\partial E}{\partial w}, \qquad \Delta w_{ij} = -\eta\, \dfrac{\partial E}{\partial w_{ij}}$
Since the total error is a sum over patterns, $E = \sum_p E_p$, the gradient decomposes as
$\dfrac{\partial E}{\partial w_{ij}} = \sum_p \dfrac{\partial E_p}{\partial w_{ij}}$
By the chain rule,
$\dfrac{\partial E}{\partial w_{oi}} = \dfrac{\partial E}{\partial y_o} \cdot \dfrac{\partial y_o}{\partial w_{oi}}$
With the squared-error cost
$E = \dfrac{1}{2} \sum_p \left(t_0^p - y_0^p\right)^2$
we get
$\dfrac{\partial E}{\partial y_o} = -(t_0 - y_0)$
For the linear neuron $y_0 = \sum_j w_{oj} x_j$,
$\dfrac{\partial y_o}{\partial w_{oi}} = \dfrac{\partial}{\partial w_{oi}} \sum_j w_{oj} x_j = x_i$
Therefore
$\dfrac{\partial E}{\partial w_{oi}} = -(t_0 - y_0)\, x_i$
and the weight update is
$\Delta w_{oi} = \eta\, (t_0 - y_0)\, x_i, \qquad w_{oi} \leftarrow w_{oi} + \eta\, (t_0 - y_0)\, x_i$
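The final update rule is easy to turn into code; below is a minimal sketch of delta-rule training for a single linear neuron, where the sample data and the learning rate `eta = 0.1` are illustrative assumptions:

```python
import numpy as np

def train_delta_rule(X, t, eta=0.1, epochs=50):
    """Train a single linear neuron y = w . x with the update
    w_oi <- w_oi + eta * (t - y) * x_i derived above.
    X already contains a constant 1 column so that w_0 acts as the bias."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_p, t_p in zip(X, t):
            y_p = np.dot(w, x_p)          # linear output for pattern p
            w += eta * (t_p - y_p) * x_p  # gradient descent step
    return w

# Fit the straight line y = 2x + 1 from a few sample points
X = np.array([[x, 1.0] for x in [0.0, 1.0, 2.0, 3.0]])
t = 2 * X[:, 0] + 1
print(train_delta_rule(X, t))  # approximately [2.0, 1.0]
```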
Bias
[Figure: the neuron model again: input values $x_1, \dots, x_m$ with weights $w_1, \dots, w_m$ and bias b feed the summing function; the induced field passes through the activation function $\varphi(\cdot)$ to give the output.]
[Figure: neuron k with weights $w_{k0}, w_{k1}, w_{k2}, w_{k3}, \dots, w_{kn}$ on inputs $x_j$.]
$v_k = \sum_j x_j\, w_{kj}, \qquad y = \varphi(v_k)$
Sigmoid function
$\varphi(v_k) \to 1$ as $v_k \to +\infty$, and $\varphi(v_k) \to 0$ as $v_k \to -\infty$:
$y = \varphi(v_k) = \dfrac{1}{1 + e^{-a v_k}}$
Tanh function
$\varphi(v_k) = \tanh(a\, v_k)$
Learning Mechanism in NN
$E(n) = \dfrac{1}{2} \sum_k e_k^2(n)$
$\Delta w_{kj}(n) = \eta\, e_k(n)\, x_j(n)$
This is the update rule for $w_{kj}$ that we derived above using the gradient descent method.
Nearest Neighbor Learning
The training set consists of input-output pairs $(X_i, d_i)$, $i = 1, \dots, n$, where $X_i$ is an input vector and $d_i$ is its desired response.
Given a new vector $X_{test}$, using Euclidean distance we find which of the stored vectors $X_i$ is nearest to $X_{test}$.
Now suppose $X_j \in \{X_1, X_2, \dots, X_n\}$ is found to be nearest to $X_{test}$. Then $X_j$ is the nearest neighbor of $X_{test}$, and $X_{test}$ is assigned the desired response of $X_j$.
[Figure: 0 and 1 patterns for some input vectors, with $X_{test}$ lying close to an outlier.]
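A minimal nearest-neighbor classifier along these lines, sketched in Python with an illustrative two-class data set:

```python
import numpy as np

def nearest_neighbor(X_train, d_train, x_test):
    """Return the desired response d_j of the stored vector X_j
    closest to x_test in Euclidean distance."""
    distances = np.linalg.norm(X_train - x_test, axis=1)
    j = np.argmin(distances)  # index of the nearest neighbor
    return d_train[j]

# Two classes of stored patterns, labeled 0 and 1
X_train = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
d_train = np.array([0, 0, 1, 1])
print(nearest_neighbor(X_train, d_train, np.array([0.8, 0.9])))  # -> 1
```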
Hebbian Learning
A synaptic weight is adjusted according to the correlation of the activity on its two sides: if the neurons on both sides of the synapse are activated simultaneously, the weight is increased; in the other case, when they are activated asynchronously, it is weakened.
Hebbian Synapse
Hebbian synapses are:
a) Time-dependent (the modification depends on the exact timing of the presynaptic and postsynaptic signals)
b) Local (the modification depends only on the presynaptic and postsynaptic activity)
c) Strongly interactive (the change is produced by the interaction of the presynaptic and postsynaptic signals)
Positive correlation: synaptic strengthening.
Uncorrelated or negative correlation: synaptic weakening.
Hebb's hypothesis:
$\Delta w_{kj} = \eta\, y_k\, x_j$
where $x_j$ is the presynaptic signal and $y_k$ is the postsynaptic signal of neuron k.
Under this rule the weight rises exponentially, i.e. $w_{kj}$ grows without bound.
Covariance Hypothesis
The covariance hypothesis is a modified version of the Hebbian learning mechanism:
$\Delta w_{kj} = \eta\, (x_j - \bar{x})(y_k - \bar{y})$
where $\bar{x}$ and $\bar{y}$ are the time-averaged values of the presynaptic signal $x_j$ and the postsynaptic signal $y_k$.
The covariance hypothesis has a stabilizing effect:
convergence to a nontrivial state, reached when $x_j = \bar{x}$ or $y_k = \bar{y}$;
the synaptic weight $w_{kj}$ is strengthened if $x_j > \bar{x}$ and $y_k > \bar{y}$;
the synaptic weight $w_{kj}$ is weakened if $x_j > \bar{x}$ and $y_k < \bar{y}$, or if $x_j < \bar{x}$ and $y_k > \bar{y}$.
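Both the Hebbian and the covariance updates fit in a few lines; this Python sketch uses an illustrative learning rate `eta` and computes the averages over a fixed sample:

```python
import numpy as np

def hebb_update(w, x, y, eta=0.01):
    """Hebb's hypothesis: delta w_kj = eta * y_k * x_j.
    The weight can only grow, which is why it can blow up."""
    return w + eta * y * x

def covariance_update(w, x, y, x_bar, y_bar, eta=0.01):
    """Covariance hypothesis: delta w_kj = eta * (x_j - x_bar) * (y_k - y_bar).
    The weight can now also shrink, which stabilizes learning."""
    return w + eta * (x - x_bar) * (y - y_bar)

# Illustrative usage for one synapse
w = 0.0
x_samples = np.array([0.9, 0.1, 0.8, 0.2])
y_samples = np.array([1.0, 0.0, 1.0, 0.0])
x_bar, y_bar = x_samples.mean(), y_samples.mean()
for x, y in zip(x_samples, y_samples):
    w = covariance_update(w, x, y, x_bar, y_bar)
print(w)  # positive: x and y are positively correlated
```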
Competitive learning
[Figure: a competitive learning network: a layer of source nodes sends feedforward (excitatory) connections, e.g. $w_{51}, w_{52}, w_{53}, w_{54}$, to the output layer, whose neurons are linked by inhibitory feedback (lateral) connections.]
$y_k = \begin{cases} 1 & \text{if } v_k > v_j \text{ for all } j \ne k \\ 0 & \text{otherwise} \end{cases}$

$\sum_j w_{kj} = 1 \quad \text{for all } k$

$\Delta w_{kj} = \begin{cases} \eta\,(x_j - w_{kj}) & \text{if neuron } k \text{ wins the competition} \\ 0 & \text{if neuron } k \text{ loses} \end{cases}$
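A minimal winner-take-all sketch of this learning rule in Python; the random input data, the initialization, and `eta = 0.1` are illustrative assumptions:

```python
import numpy as np

def competitive_step(W, x, eta=0.1):
    """One competitive-learning step: the output neuron with the largest
    induced field v_k = w_k . x wins, and only the winner's weights
    move toward the input: delta w_kj = eta * (x_j - w_kj)."""
    v = W @ x                 # induced fields of all output neurons
    k = np.argmax(v)          # winner: v_k > v_j for all j != k
    W[k] += eta * (x - W[k])  # losers are left unchanged
    return k

rng = np.random.default_rng(0)
W = rng.random((3, 2))
W /= W.sum(axis=1, keepdims=True)  # start with sum_j w_kj = 1 for all k
for x in rng.random((100, 2)):
    competitive_step(W, x)
print(W)  # each row has drifted toward a cluster of inputs
```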