Week-4
Basic Model of Artificial Neurons
Nonlinear model of an ANN
• $u_q$ is a linear combiner of the inputs $x_j$ and synaptic weights $w_{qj}$:

$$u_q = \sum_{j=1}^{n} w_{qj} x_j = \mathbf{w}_q^T \mathbf{x} = \mathbf{x}^T \mathbf{w}_q$$

• With the bias absorbed as the weight $w_{q0} = -\theta_q$ (and $x_0 = 1$), the sum can also run from $j = 0$:

$$v_q = \sum_{j=0}^{n} w_{qj} x_j$$
Activation potential:

$$v_q = u_q - \theta_q$$
Output of the activation function:

$$y_q = f(v_q) = f\Big(\sum_{j=1}^{n} w_{qj} x_j - \theta_q\Big)$$

where $f(\cdot)$ is the activation function. With the bias absorbed as $w_{q0} = -\theta_q$:

$$y_q = f(v_q) = f\Big(\sum_{j=0}^{n} w_{qj} x_j\Big)$$
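The neuron model above can be sketched directly in code. The helper name `neuron_output` and the numeric values are illustrative, not from the slides:

```python
import numpy as np

def neuron_output(x, w, theta, f):
    """Single-neuron forward pass: v_q = sum_j w_qj x_j - theta_q, y_q = f(v_q)."""
    v = np.dot(w, x) - theta      # activation potential v_q = u_q - theta_q
    return f(v)

# Unipolar hard limiter as the activation function f
hard_limit = lambda v: 1.0 if v >= 0 else 0.0

# v = 0.4*1.0 + 0.6*0.5 - 0.5 = 0.2 >= 0, so the neuron fires
y = neuron_output(np.array([1.0, 0.5]), np.array([0.4, 0.6]), 0.5, hard_limit)
print(y)  # 1.0
```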
Linear:

$$y_q = f_{lin}(v_q) = v_q$$
Hard limiter (unipolar):

$$y_q = f_{hl}(v_q) = \begin{cases} 0 & \text{if } v_q < 0 \\ 1 & \text{if } v_q \ge 0 \end{cases}$$

Symmetric hard limiter (signum):

$$y_q = f_{shl}(v_q) = \begin{cases} -1 & \text{if } v_q < 0 \\ 0 & \text{if } v_q = 0 \\ 1 & \text{if } v_q > 0 \end{cases}$$
Saturating linear:

$$y_q = f_{sl}(v_q) = \begin{cases} 0 & \text{if } v_q \le -\tfrac{1}{2} \\ v_q + \tfrac{1}{2} & \text{if } -\tfrac{1}{2} < v_q < \tfrac{1}{2} \\ 1 & \text{if } v_q \ge \tfrac{1}{2} \end{cases}$$
Symmetric saturating linear:

$$y_q = f_{ssl}(v_q) = \begin{cases} -1 & \text{if } v_q \le -1 \\ v_q & \text{if } -1 < v_q < 1 \\ 1 & \text{if } v_q \ge 1 \end{cases}$$
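The piecewise-linear activations above can be written as small Python functions. This is a sketch; the function names mirror the slide notation:

```python
import numpy as np

def f_hl(v):    # unipolar hard limiter: 0 for v < 0, 1 for v >= 0
    return 1.0 if v >= 0 else 0.0

def f_shl(v):   # symmetric hard limiter (signum): -1, 0, or 1
    return float(np.sign(v))

def f_sl(v):    # saturating linear: ramp v + 1/2 clipped to [0, 1]
    return float(np.clip(v + 0.5, 0.0, 1.0))

def f_ssl(v):   # symmetric saturating linear: v clipped to [-1, 1]
    return float(np.clip(v, -1.0, 1.0))

print(f_hl(-0.2), f_shl(-0.2), f_sl(0.0), f_ssl(2.0))  # 0.0 -1.0 0.5 1.0
```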
Derivative of the binary sigmoid:

$$g_{bs}(v_q) = \frac{df_{bs}(v_q)}{dv_q} = \frac{e^{-v_q}}{(1 + e^{-v_q})^2} = f_{bs}(v_q)\,[1 - f_{bs}(v_q)]$$
Derivative of the hyperbolic tangent sigmoid:

$$g_{hts}(v_q) = \frac{df_{hts}(v_q)}{dv_q} = [1 + f_{hts}(v_q)][1 - f_{hts}(v_q)]$$
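Both derivative identities can be checked numerically against a central finite difference; this is a quick sanity check, not part of the slides:

```python
import numpy as np

f_bs = lambda v: 1.0 / (1.0 + np.exp(-v))  # binary sigmoid
f_hts = np.tanh                            # hyperbolic tangent sigmoid

v = 0.7
g_bs = f_bs(v) * (1.0 - f_bs(v))             # f_bs [1 - f_bs]
g_hts = (1.0 + f_hts(v)) * (1.0 - f_hts(v))  # [1 + f_hts][1 - f_hts]

h = 1e-6  # central finite differences for comparison
num_bs = (f_bs(v + h) - f_bs(v - h)) / (2 * h)
num_hts = (f_hts(v + h) - f_hts(v - h)) / (2 * h)
print(abs(g_bs - num_bs) < 1e-8, abs(g_hts - num_hts) < 1e-8)  # True True
```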
ADAptive LINear Element (ADALINE)
ADALINE is the basic building block used in many neural networks. The network consists of a linear combiner cascaded with a signum function (bipolar or unipolar). ADALINE is an adaptive pattern-classification network that is trained by the LMS algorithm.
MSE objective: $J(w) = \tfrac{1}{2} e^2(k)$. Its gradient evaluated at $w = w(k)$:

$$\frac{\partial J(w)}{\partial w}\bigg|_{w=w(k)} = \frac{\partial}{\partial w}\,\frac{1}{2}\,[d(k) - w^T(k)x(k)]^2$$
$$= \frac{\partial}{\partial w}\,\frac{1}{2}\,[d^2(k) - 2d(k)\,x^T(k)\,w(k) + w^T(k)\,x(k)\,x^T(k)\,w(k)]$$
$$= -d(k)\,x(k) + x(k)\,x^T(k)\,w(k) = -[d(k) - w^T(k)\,x(k)]\,x(k) = -e(k)\,x(k)$$

where the error is

$$e(k) = d(k) - \sum_{i=0}^{n} w_i(k)\,x_i(k)$$

Steepest descent then gives the LMS weight update:

$$w_i(k+1) = w_i(k) + \mu\,e(k)\,x_i(k)$$

Notes on the step size μ:
- If μ is too small, very many iterations are needed to reach (the minimum of) the error surface.
- If μ is too large, the learning rule becomes unstable, because the gradient is evaluated with the approximation above (the gradient of the objective function).
- If μ is too large, the weights do not converge (they diverge).
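A single LMS step follows directly from the gradient above: since $\partial J/\partial w = -e(k)x(k)$, steepest descent adds $\mu\,e(k)\,x(k)$. A minimal sketch with illustrative values:

```python
import numpy as np

def lms_step(w, x, d, mu):
    """One LMS update: e(k) = d(k) - w^T x, then w <- w + mu * e * x."""
    e = d - w @ x
    return w + mu * e * x, e

w = np.zeros(3)                  # x_0 = 1 carries the bias weight
x = np.array([1.0, 1.0, 0.0])
w, e = lms_step(w, x, d=1.0, mu=0.5)
print(e, w)  # 1.0 [0.5 0.5 0. ]
```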
Stability condition:

$$0 < \mu < \frac{2}{\lambda_{max}}$$

$\lambda_{max}$ : largest eigenvalue of the input covariance matrix.
Adjustment of µ:

$$\mu(k) = \frac{\mu_0}{1 + \dfrac{k}{\tau_0}}; \quad \mu_0 > 0 \text{ and } \tau_0 \ge 1$$
$$e(k) = d(k) - v(k)$$

Step 5 : Update the synaptic weights: $w_i(k+1) = w_i(k) + \mu(k)\,e(k)\,x_i(k)$ for $i = 0, 1, 2, \ldots, n$.
Step 6 : If convergence is achieved then stop, else set $k \leftarrow k+1$ and go to step 2.
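The adaptive loop with the decaying step size $\mu(k)$ can be sketched as follows. The values of `mu0` and `tau`, and the fixed epoch count standing in for the convergence test, are assumptions for illustration, not from the slides:

```python
import numpy as np

def train_adaline(X, d, mu0=0.1, tau=100.0, epochs=100):
    """Cycle through the patterns, applying w <- w + mu(k) e(k) x(k)
    with the decaying step size mu(k) = mu0 / (1 + k/tau)."""
    w = np.zeros(X.shape[1])
    k = 0
    for _ in range(epochs):
        for x, target in zip(X, d):
            mu = mu0 / (1.0 + k / tau)
            e = target - w @ x          # e(k) = d(k) - v(k)
            w = w + mu * e * x          # Step 5: synaptic weight update
            k += 1
    return w

# AND-gate patterns with bias input x_0 = 1 (illustrative data)
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
d = np.array([-0.1, -0.1, -0.1, 0.1])
w = train_adaline(X, d)
print(np.sign(X @ w))  # the signs reproduce the AND function
```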
[Figure: ADALINE block diagram — inputs $x_1(k), \ldots, x_n(k)$ with weights $w_0(k), \ldots, w_n(k)$, linear combiner output $v(k)$, network output $y(k)$, and error $e(k)$ formed from $d(k)$ driving the adaptive algorithm.]
Compute $y(k)$:

$$y(k) = f_{hl}(v(k)) = \begin{cases} 0 & \text{if } v(k) < 0 \\ 1 & \text{if } v(k) \ge 0 \end{cases} \quad \text{or} \quad y(k) = f_{shl}(v(k)) = \begin{cases} -1 & \text{if } v(k) < 0 \\ 0 & \text{if } v(k) = 0 \\ 1 & \text{if } v(k) > 0 \end{cases}$$

$$w_i(k+1) = w_i(k) + \mu\,e(k)\,x_i(k)$$
[Figure: ADALINE with unipolar hard limiter — $y(k) = f_{hl}(v(k)) = 0$ if $v(k) < 0$, $1$ if $v(k) \ge 0$; error $e(k)$ formed from $d(k)$ drives the adaptive algorithm.]
Training patterns for the AND function ($x_0(k) = 1$ is the bias input); the network output $y(k)$ is shown for epoch 1:

epoch  k  x0(k)  x1(k)  x2(k)  d(k)  y(k)
1      1  1      0      0      -0.1  0
1      2  1      0      1      -0.1  0
1      3  1      1      0      -0.1  0
1      4  1      1      1       0.1  1
2      1  1      0      0      -0.1
2      2  1      0      1      -0.1
2      3  1      1      0      -0.1
2      4  1      1      1       0.1
3      1  1      0      0      -0.1
3      2  1      0      1      -0.1
3      3  1      1      0      -0.1
3      4  1      1      1       0.1
4      1  1      0      0      -0.1
4      2  1      0      1      -0.1
4      3  1      1      0      -0.1
4      4  1      1      1       0.1
[Figure: ADALINE with error feedback to the adaptive algorithm.] The error and update are

$$e(k) = d(k) - y(k)$$
$$w_i(k+1) = w_i(k) + \mu\,e(k)\,x_i(k)$$
With $y(k) = \mathrm{sgn}(v(k))$, the decision boundary is the line $v(k) = 0$:

$$w_1(k)\,x_1(k) + w_2(k)\,x_2(k) + w_0(k) = 0 \;\Rightarrow\; x_2(k) = -\frac{w_1(k)}{w_2(k)}\,x_1(k) - \frac{w_0(k)}{w_2(k)}$$
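The boundary line can be evaluated directly from the weights; the numeric weights below are illustrative values that realize the AND function, not weights from the slides:

```python
# Decision boundary of a two-input ADALINE: points where v(k) = 0, i.e.
# x2 = -(w1/w2) * x1 - w0/w2
w0, w1, w2 = -0.15, 0.1, 0.1   # illustrative weights

def boundary_x2(x1):
    return -(w1 / w2) * x1 - w0 / w2

print(boundary_x2(0.0), boundary_x2(1.0))  # 1.5 0.5
```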
αLMS learning rule:

$$w(k+1) = w(k) + \frac{\alpha\,e(k)\,x(k)}{\|x(k)\|_2^2}; \quad 0.1 < \alpha < 1$$
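The αLMS rule normalizes the step by $\|x(k)\|_2^2$, so one update removes exactly a fraction α of the current error. A minimal sketch with illustrative numbers:

```python
import numpy as np

def alpha_lms_step(w, x, d, alpha=0.5):
    """alpha-LMS: w <- w + alpha * e(k) * x(k) / ||x(k)||^2."""
    e = d - w @ x
    return w + alpha * e * x / (x @ x)

w = np.zeros(3)
x = np.array([1.0, 1.0, 1.0])
w_new = alpha_lms_step(w, x, d=0.6, alpha=0.5)
print(w_new)            # [0.1 0.1 0.1]
print(0.6 - w_new @ x)  # residual error = (1 - alpha) * e(k) ~ 0.3
```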
MADALINE II
MADALINE I