
What You Will Learn

• ADALINE and MADALINE networks based on the delta rule or the error correction learning rule.
• Derivation of the delta rule with a single output unit, with several output units, and its extension.
• Properties of ADALINE and MADALINE networks.
• Use of ADALINE for pattern classification.
• MRI and MRII training algorithms for MADALINE networks and their details.
Adaline and Madaline Networks
5.1 Introduction
Widrow and Hoff [1960] developed a learning rule that is very closely related to the perceptron learning rule. The rule, called the Delta rule, adjusts the weights to reduce the difference between the net input to the output unit and the desired output, which results in a least mean squared error (LMS error). Adaline (Adaptive Linear Neuron) and Madaline (Multilayered Adaline) networks use this LMS learning rule and are applied to various neural network applications. The weights on the interconnections of the adaline and madaline networks are adjustable. The adaline and madaline networks are discussed in detail in this chapter.
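
In symbols, the delta rule performs gradient descent on the squared error between the target and the net input. With the notation used throughout this chapter (a standard derivation, added here for reference):

E = (t - y_in)^2, where y_in = b + Σ xi wi
∂E/∂wi = -2(t - y_in) xi

so moving each weight a small step against the gradient, with learning rate α, gives

Δwi = α(t - y_in) xi and Δb = α(t - y_in)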

5.2 Adaline
Adaline, developed by Widrow and Hoff [1960], uses bipolar activations for its input signals and target output. The weights and the bias of the adaline are adjustable. The learning rule used is called the Delta rule, the Least Mean Square rule or the Widrow-Hoff rule. The derivation of this rule with a single output unit, with several output units, and its extension has already been dealt with in Section 3.3.3. Since the activation function is an identity function, the activation of the unit is its net input. If the adaline is to be used for pattern classification, then, after training, a threshold function is applied to the net input to obtain the activation.

The activation is

y = 1, if y_in >= 0
   -1, if y_in < 0

The adaline unit can solve only problems that are linearly separable.

5.2.1 Architecture
The architecture of an adaline is shown in Fig. 5.1. The adaline has only one output unit. This output unit receives input from several units and also from a bias, whose activation is always +1. The adaline thus resembles a single layer network as discussed in Section 2.7: it receives input from several neurons, and also from the unit whose signal is always +1, called the bias. The bias weight is trained in the same manner as the other weights. In Fig. 5.1, an input layer with x1, ..., xn and the bias, and an output layer with only one output neuron are present. The links between the input and output neurons possess weighted interconnections, and these weights change as the training progresses.

Fig. 5.1 Architecture of an Adaline
5.2.2 Algorithm
Basically, the initial weights of the adaline network have to be set to small random values and not to zero as discussed for the Hebb or perceptron networks, because this may influence the error factor to be considered. After the initial weights are assumed, the activations of the input units are set. The net input is calculated based on the training input patterns and the weights. The weight updation is carried out by applying the delta learning rule discussed in Section 3.3.3. The training process is continued until the error, which is the difference between the target and the net input, becomes minimum. The step-based training algorithm for the adaline network is as follows.

Step 1: Initialize weights (not zero but small random values are used).
        Set learning rate α.
Step 2: While stopping condition is false, do Steps 3-7.
Step 3: For each bipolar training pair s : t, perform Steps 4-6.
Step 4: Set activations of input units xi = si for i = 1 to n.
Step 5: Compute net input to the output unit:
        y_in = b + Σ xi wi
Step 6: Update bias and weights, i = 1 to n:
        wi(new) = wi(old) + α(t - y_in)xi
        b(new) = b(old) + α(t - y_in)
Step 7: Test for stopping condition.

The stopping condition may be when the weight change reaches a small level, or a fixed number of iterations, etc.
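
A compact MATLAB sketch of this training loop may help fix the steps. This is a minimal illustration, not code from the text; the patterns, targets, learning rate and error threshold below are assumed values (chosen to match Example 5.1):

% Adaline training by the delta (LMS) rule -- minimal sketch
x = [1 1; 1 -1; -1 1; -1 -1];      % bipolar input patterns, one per row (assumed)
t = [-1; 1; -1; -1];               % bipolar targets (ANDNOT, as in Example 5.1)
w = [0.2 0.2]; b = 0.2;            % Step 1: small initial weights and bias
alpha = 0.2;                       % learning rate
for epoch = 1:50                   % Step 2: repeat while stopping condition is false
    E = 0;
    for i = 1:4                    % Steps 3-4: present each training pair
        yin = b + x(i,:)*w';       % Step 5: net input to the output unit
        w = w + alpha*(t(i) - yin)*x(i,:);   % Step 6: delta-rule weight update
        b = b + alpha*(t(i) - yin);
        E = E + (t(i) - yin)^2;    % accumulate the squared error over the epoch
    end
    if E < 2.1, break; end         % Step 7: stop once the epoch error levels off
end                                % (threshold assumed; with a fixed gain the running
                                   % epoch error settles near 2 for this problem even
                                   % though the batch minimum is 1)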

5.2.3 Application Algorithm


The application procedure, which is used for testing the trained network, is as follows. It is mainly based on the bipolar activation.

Step 1: Initialize the weights obtained from the training algorithm.
Step 2: For each bipolar input vector x, perform Steps 3-5.
Step 3: Set the activations of the input units.
Step 4: Calculate the net input to the output unit:
        y_in = b + Σ xi wi
Step 5: Finally apply the activation to obtain the output y:
        y = 1, if y_in >= 0
           -1, if y_in < 0
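
Continuing the sketch above, the application procedure reduces to thresholding the net input. This is again an illustrative fragment, reusing the trained w and b and the pattern matrix x:

% Adaline application -- classify each stored pattern with the trained weights
for i = 1:4
    yin = b + x(i,:)*w';                     % Step 4: net input
    if yin >= 0, y = 1; else y = -1; end     % Step 5: bipolar threshold activation
    fprintf('input (%2d,%2d) -> %2d\n', x(i,1), x(i,2), y);
end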

Solved Examples

Example 5.1 Develop an adaline network for the ANDNOT function with bipolar inputs and bipolar targets.
Solution The truth table for the ANDNOT function with bipolar inputs and targets is:

x1   x2    t
 1    1   -1
 1   -1    1
-1    1   -1
-1   -1   -1

The architecture is that of Fig. 5.1 with two input units. Initially the weights and bias are assigned a random value, say 0.2. The learning rate is also set to 0.2. The weights are adjusted until the least mean square error is obtained.

The initial weights are w1 = w2 = b = 0.2 and α = 0.2.
The iterations are carried out according to the algorithm in Section 5.2.2.
The operations are carried out for 6 epochs, during which the mean square error is minimized.

Inputs         Target  Net input  Error      Weight changes           Weights                  Squared error
x1   x2   1    t       y_in       (t-y_in)   Δw1     Δw2     Δb       w1      w2      b        E = (t-y_in)²

Epoch 1
 1    1   1   -1        0.60     -1.60      -0.32   -0.32   -0.32    -0.12   -0.12   -0.12     2.56
 1   -1   1    1       -0.12      1.12       0.22   -0.22    0.22     0.10   -0.34    0.10     1.25
-1    1   1   -1       -0.34     -0.66       0.13   -0.13   -0.13     0.24   -0.48   -0.03     0.43
-1   -1   1   -1        0.21     -1.21       0.24    0.24   -0.24     0.48   -0.23   -0.27     1.47
                                                                                      ΣE =     5.71
Epoch 2
 1    1   1   -1       -0.02     -0.98      -0.195  -0.195  -0.195    0.28   -0.43   -0.46     0.95
 1   -1   1    1        0.25      0.76       0.15   -0.15    0.15     0.43   -0.58   -0.31     0.57
-1    1   1   -1       -1.33      0.33      -0.065   0.065   0.065    0.37   -0.51   -0.25     0.106
-1   -1   1   -1       -0.11     -0.89       0.18    0.18   -0.18     0.55   -0.33    0.43     0.80
                                                                                      ΣE =     2.43
Epoch 3
 1    1   1   -1        0.64     -1.64      -0.33   -0.33   -0.33     0.22   -0.66    0.10     2.69
 1   -1   1    1        0.98      0.018      0.036  -0.036   0.036    0.22   -0.69    0.14     0.003
-1    1   1   -1       -0.79     -0.21       0.043  -0.043  -0.043    0.27   -0.74    0.09     0.046
-1   -1   1   -1        0.57     -1.57       0.313   0.313  -0.313    0.58   -0.43   -0.22     2.46
                                                                                      ΣE =     5.198
Epoch 4
 1    1   1   -1       -0.069    -0.93      -0.186  -0.186  -0.186    0.39   -0.61   -0.41     0.866
 1   -1   1    1        0.601     0.39       0.08   -0.08    0.08     0.47   -0.69   -0.33     0.159
-1    1   1   -1       -1.49      0.49      -0.099   0.099   0.099    0.37   -0.59   -0.23     0.248
-1   -1   1   -1       -0.006    -0.994      0.2     0.2    -0.2      0.57   -0.40   -0.45     0.988
                                                                                      ΣE =     2.251
Epoch 5
 1    1   1   -1       -0.273    -0.727     -0.145  -0.145  -0.145    0.43   -0.55   -0.59     0.528
 1   -1   1    1        0.38      0.62       0.124  -0.124   0.124    0.55   -0.67   -0.47     0.382
-1    1   1   -1       -1.69      0.69      -0.138   0.138   0.138    0.42   -0.53   -0.33     0.476
-1   -1   1   -1       -0.21     -0.79       0.157   0.157  -0.157    0.57   -0.37   -0.49     0.612
                                                                                      ΣE =     1.998
Epoch 6
 1    1   1   -1       -0.289    -0.711     -0.142  -0.142  -0.142    0.43   -0.51   -0.63     0.505
 1   -1   1    1        0.32      0.68       0.137  -0.137   0.137    0.57   -0.65   -0.49     0.466
-1    1   1   -1       -1.712     0.71      -0.142   0.142   0.142    0.425  -0.60   -0.35     0.49
-1   -1   1   -1       -0.264    -0.74       0.147   0.147  -0.147    0.572  -0.452  -0.497    0.54
                                                                                      ΣE =     2.004

Thus it is found that the weights at the end of Epoch 6 are approximately

w1 = 0.5, w2 = -0.5, b = -0.5

Using the above weights, the LMS error is calculated from

y_in = b + Σ xi wi

Inputs           Net input   Error
x1   x2   1   t  y_in        (t - y_in)²

(with w1 = 0.5, w2 = -0.5, b = -0.5)

 1    1   1  -1  -0.5        0.25
 1   -1   1   1   0.5        0.25
-1    1   1  -1  -1.5        0.25
-1   -1   1  -1  -0.5        0.25

Thus the total error is

E = Σ(t - y_in)² = 0.25 + 0.25 + 0.25 + 0.25 = 1

Thus the error is minimized from 5.71 at Epoch 1 to about 2 at Epoch 6. Depending upon the stopping condition, the number of iterations or epochs required varies. Thus the adaline network for the ANDNOT function with bipolar inputs and targets is developed.
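
The converged weights can be cross-checked against the exact least-mean-squares solution. The sketch below is not part of the original example; it simply uses MATLAB's least-squares solve to confirm w1 = 0.5, w2 = -0.5, b = -0.5 and a minimum total error of 1:

% Closed-form LMS check for Example 5.1 -- sketch
X = [1 1 1; 1 -1 1; -1 1 1; -1 -1 1];   % patterns, with a column of ones for the bias
t = [-1; 1; -1; -1];                    % ANDNOT targets
wb = X \ t                              % least-squares solution: [0.5; -0.5; -0.5]
E = sum((t - X*wb).^2)                  % minimum total squared error: 1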

Example 5.2 Develop a MATLAB program for the OR function with bipolar inputs and targets using the adaline network.
Solution The truth table for the OR function with bipolar inputs and targets is given as:

x1   x2    t
 1    1    1
 1   -1    1
-1    1    1
-1   -1   -1

The MATLAB program is given by:


Program

% Adaline for OR function: bipolar inputs and targets
clc;
clear;
% input patterns and bipolar targets
x1 = [1 1 -1 -1];
x2 = [1 -1 1 -1];
t  = [1 1 1 -1];
% initial weights and bias (small values, chosen to match the output below)
w1 = 0.1; w2 = 0.1; b = 0.1;
alpha = 0.1;                          % learning rate
for epoch = 1:5
    for i = 1:4
        % net input calculated
        nety = w1*x1(i) + w2*x2(i) + b;
        % weight changes by the delta rule
        delw1 = alpha*(t(i) - nety)*x1(i);
        delw2 = alpha*(t(i) - nety)*x2(i);
        delb  = alpha*(t(i) - nety);
        % updating of weights: new weights
        w1 = w1 + delw1;
        w2 = w2 + delw2;
        b  = b  + delb;
        % printing the results obtained
        disp([x1(i) x2(i) 1 nety t(i) delw1 delw2 delb w1 w2 b]);
    end
end

Output

x1        x2        b         Netin     Target    delw1     delw2     delb      w1(n)     w2(n)     b(n)
Epoch 1
 1.0000    1.0000   1.0000    0.3000    1.0000    0.0700    0.0700    0.0700    0.1700    0.1700    0.1700
 1.0000   -1.0000   1.0000    0.1700    1.0000    0.0830   -0.0830    0.0830    0.2530    0.0870    0.2530
-1.0000    1.0000   1.0000    0.0870    1.0000   -0.0913    0.0913    0.0913    0.1617    0.1783    0.3443
-1.0000   -1.0000   1.0000    0.0043   -1.0000    0.1004    0.1004   -0.1004    0.2621    0.2787    0.2439
Epoch 2
 1.0000    1.0000   1.0000    0.7847    1.0000    0.0215    0.0215    0.0215    0.2837    0.3003    0.2654
 1.0000   -1.0000   1.0000    0.2488    1.0000    0.0751   -0.0751    0.0751    0.3588    0.2251    0.3405
-1.0000    1.0000   1.0000    0.2069    1.0000   -0.0793    0.0793    0.0793    0.2795    0.3044    0.4198
-1.0000   -1.0000   1.0000   -0.1641   -1.0000    0.0836    0.0836   -0.0836    0.3631    0.3880    0.3362
Epoch 3
 1.0000    1.0000   1.0000    1.0873    1.0000   -0.0087   -0.0087   -0.0087    0.3543    0.3793    0.3275
 1.0000   -1.0000   1.0000    0.3025    1.0000    0.0697   -0.0697    0.0697    0.4241    0.3096    0.3973
-1.0000    1.0000   1.0000    0.2827    1.0000   -0.0717    0.0717    0.0717    0.3523    0.3813    0.4690
-1.0000   -1.0000   1.0000   -0.2647   -1.0000    0.0735    0.0735   -0.0735    0.4259    0.4548    0.3954
Epoch 4
 1.0000    1.0000   1.0000    1.2761    1.0000   -0.0276   -0.0276   -0.0276    0.3983    0.4272    0.3678
 1.0000   -1.0000   1.0000    0.3389    1.0000    0.0661   -0.0661    0.0661    0.4644    0.3611    0.4339
-1.0000    1.0000   1.0000    0.3307    1.0000   -0.0669    0.0669    0.0669    0.3974    0.4280    0.5009
-1.0000   -1.0000   1.0000   -0.3246   -1.0000    0.0675    0.0675   -0.0675    0.4650    0.4956    0.4333
Epoch 5
 1.0000    1.0000   1.0000    1.3939    1.0000   -0.0394   -0.0394   -0.0394    0.4256    0.4562    0.3939
 1.0000   -1.0000   1.0000    0.3634    1.0000    0.0637   -0.0637    0.0637    0.4893    0.3925    0.4576
-1.0000    1.0000   1.0000    0.3609    1.0000   -0.0639    0.0639    0.0639    0.4253    0.4564    0.5215
-1.0000   -1.0000   1.0000   -0.3603   -1.0000    0.0640    0.0640   -0.0640    0.4893    0.5204    0.4575

Example 5.3 Develop a MATLAB program to perform adaptive prediction with the adaline.
Solution Linear neural networks can be used for adaptive prediction in adaptive signal processing. Assume the necessary frequency, sampling time, etc.

Program

% Adaptive Prediction with Adaline
clear;
clc;
% Input signal x(t): a sinusoid that doubles its frequency partway through
% (only the sampling grid survives in the source; the signal definition below
% is an assumed reconstruction)
f1 = 2;                       % kHz
ts = 1/(40*f1);               % 12.5 usec sampling time
N  = 100;
t1 = (0:N)*4*ts;
t2 = (0:2*N)*ts + 4*(N+1)*ts;
t  = [t1 t2];                 % 0 to 7.5 sec
N  = size(t, 2);              % N = 302
xt = [sin(2*pi*f1*t1) sin(2*pi*2*f1*t2)];
plot(t, xt), grid, title('Signal to be predicted')
p = 4;                        % Number of synapses
% formation of the input matrix X of size p by N
% use the convolution matrix. Try convmtx(1:8, 5)
X = convmtx(xt, p); X = X(:, 1:N);
d = xt;                       % The target signal is equal to the input signal
y = zeros(size(d));           % memory allocation for y
eps = zeros(size(d));         % memory allocation for eps
eta = 0.4;                    % learning rate/gain
w = rand(1, p) - 0.5;         % initialisation of the weight vector
for n = p:N                   % learning loop
    y(n) = w*X(:, n);         % predicted output signal
    eps(n) = d(n) - y(n);     % error signal
    w = w + eta*eps(n)*X(:, n)';   % LMS weight update (this line is assumed;
end                                % it is lost in the source)
figure(1)
plot(t, d, 'b', t, y, '-r'), grid,
title('target and predicted signals'), xlabel('time [sec]')
figure(2)
plot(t, eps), grid, title('prediction error'), xlabel('time [sec]')
Output
The output responses, i.e. the target and predicted signals and the prediction error, are shown in the following figures.
Target and Predicted Signals

Prediction Error

Example 5.4 Write an M-file for adaptive system identification using the adaline network.

Solution The adaline network for adaptive system identification is developed using MATLAB programming techniques by assuming the necessary parameters.

Program

% Adaptive System Identification
clear;
clc;
% Input signal x(t)
f = 0.8;                      % Hz (assumed value; the constant is lost in the source)
ts = 0.005;                   % 5 msec sampling time
N1 = 800; N2 = 400; N = N1 + N2;
t1 = (0:N1-1)*ts;             % 0 to 4 sec
t2 = (N1:N-1)*ts;             % 4 to 6 sec
t  = [t1 t2];                 % 0 to 6 sec
xt = sin(3*t.*sin(2*pi*f*t));
p = 3;                        % Dimensionality of the system
b1 = [1 -0.6 0.4];            % unknown system parameters during t1
b2 = [0.9 -0.5 0.7];          % unknown system parameters during t2
[d1, stt] = filter(b1, 1, xt(1:N1));
d2 = filter(b2, 1, xt(N1+1:N), stt);
d  = [d1 d2];                 % output signal
% formation of the input matrix X of size p by N
X = convmtx(xt, p); X = X(:, 1:N);
y   = zeros(1, N);            % memory allocation for y
eps = zeros(1, N);            % memory allocation for eps
eta = 0.2;                    % learning rate/gain
w = 2*(rand(1, p) - 0.5);     % initialisation of the weight vector
for n = 1:N                   % learning loop
    y(n) = w*X(:, n);         % predicted output signal
    eps(n) = d(n) - y(n);     % error signal
    w = w + eta*eps(n)*X(:, n)';   % LMS weight update (assumed, as in Example 5.3)
    if n == N1, w1 = w; end   % store the weights identified during t1
end
figure(1)
subplot(2,1,1)
plot(t, xt), grid, title('Input Signal, x(t)'), xlabel('time [sec]')
subplot(2,1,2)
plot(t, d, t, y), grid,
title('target and predicted signals'), xlabel('time [sec]')
figure(2)
plot(t, eps), grid, title('prediction error for eta = 0.2'), xlabel('time [sec]')

Output

[b1; w1]
    1.0000   -0.6000    0.4000
    0.2673    0.9183   -0.3996
[b2; w]
    0.9000   -0.5000    0.7000
    0.1357    1.0208   -0.0624

The corresponding responses are shown in the following figures.

Target and Predicted Signals along with the Input Signal

Prediction Error for eta = 0.2

Example 5.5 Develop a MATLAB program for adaptive noise cancellation using the adaline network.
Solution For adaptive noise cancellation in signal processing, the adaline network is used and its performance is noted. The necessary parameters are assumed.
Program

% Adaptive Noise Cancellation
clear;
clc;
% The useful signal u(t) is a frequency and amplitude modulated sinusoid
% (the exact modulated form is partly assumed; only fragments survive in the source)
f  = 4e3;                     % signal frequency
fm = 300;                     % frequency modulation
fa = 200;                     % amplitude modulation
ts = 2.5e-5;                  % sampling time (assumed so that 400 samples span 10 msec)
N  = 400;                     % number of sampling points
t  = (0:N-1)*ts;              % 0 to 10 msec
ut = (1 + 0.2*sin(2*pi*fa*t)).*sin(2*pi*f*(1 + 0.2*cos(2*pi*fm*t)).*t);
% The noise is a sawtooth wave (noise frequency of 1 kHz is assumed)
xt = sawtooth(2*pi*1e3*t, 0.7);
% the filtered (coloured) noise
b  = [1 -0.6 -0.3];
vt = filter(b, 1, xt);
% noisy signal
dt = ut + vt;
figure(1)
subplot(2,1,1)
plot(1e3*t, ut, 1e3*t, dt), grid,
title('Input u(t) and noisy input signal d(t)'), xlabel('time -- msec')
subplot(2,1,2)
plot(1e3*t, xt, 1e3*t, vt), grid,
title('Noise x(t) and colored noise v(t)'), xlabel('time -- msec')
p = 4;                        % dimensionality of the input space
% formation of the input matrix X of size p by N
X = convmtx(xt, p); X = X(:, 1:N);
y   = zeros(1, N);            % memory allocation for y
eps = zeros(1, N);            % memory allocation for uh = eps
eta = 0.05;                   % learning rate/gain
w = 2*(rand(1, p) - 0.5);     % initialisation of the weight vector
for c = 1:4                   % training passes (the pass count is assumed)
    for n = 1:N               % learning loop
        y(n) = w*X(:, n);     % predicted (noise) output signal
        eps(n) = dt(n) - y(n);     % error signal: the estimate uh of u(t)
        w = w + eta*eps(n)*X(:, n)';   % LMS weight update (assumed)
    end
    eta = 0.8*eta;            % reduce the gain after each pass
end
figure(2)
subplot(2,1,1)
plot(1e3*t, ut, 1e3*t, eps), grid,
title('Input signal u(t) and estimated signal uh(t)'),
xlabel('time -- msec')
subplot(2,1,2)
plot(1e3*t(p:N), ut(p:N)-eps(p:N)), grid,
title('estimation error'), xlabel('time -- [msec]')
Output
The output responses obtained are shown in the following figures.

Input Signal and Estimated Signal, Estimation Error

5.3 Madaline

Madaline is a combination of adalines; it is also called the multilayered adaline. If the adalines are combined such that the output of some of them becomes the input for others, then the net becomes multilayered, and this forms the Madaline. Madaline has two training algorithms, MRI and MRII, discussed in Sections 5.3.2 and 5.3.3.

5.3.1 Architecture
The architecture of a simple madaline is shown in Fig. 5.2. The architecture is explained with 2 input neurons, 2 hidden neurons and 1 output neuron.

Fig. 5.2 Architecture of a Madaline


This architecture consists of two hidden adalines and one output adaline, denoted by y. z1 and z2 provide the output signals of the hidden adalines, and these signals are the input to the output adaline. The adaline was a special net which had only one output unit, but the combination of adalines in a madaline network as shown in Fig. 5.2 results in computational capabilities that are not present in the single-layer net. The architecture resembles the multilayer feedforward network discussed in Section 2.7. There can be any number of hidden layers between the input and the output layer, but this increases the computation of the network. In this architecture, both the hidden units have separate bias connections, along with the input connections. The outputs from the hidden neurons are linked to the output neuron, where the final output of the network is calculated.
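
A forward pass through this two-adaline madaline can be sketched in a few MATLAB lines (illustrative only; the weight values are placeholders, taken from the initial values of Example 5.6):

% Madaline forward pass (architecture of Fig. 5.2) -- minimal sketch
x  = [1; 1];                  % one bipolar input pattern (assumed)
w  = [0.05 0.1; 0.2 0.2];     % hidden weights: column j feeds unit zj (placeholders)
b1 = [0.3 0.15];              % hidden biases
v  = [0.5 0.5]; b3 = 0.5;     % output weights and bias (fixed in MRI)
zin = b1 + x'*w;              % net inputs of the hidden adalines z1, z2
z   = 2*(zin >= 0) - 1;       % bipolar activations of the hidden units
yin = b3 + z*v';              % net input of the output adaline
y   = 2*(yin >= 0) - 1        % final madaline output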

5.3.2 MRI Algorithm


This algorithm was formed by Widrow and Hoff in 1960. Here only the weights of the hidden adaline units are adjusted, while the weights of the output unit are fixed. First, the initialization of the weights between the input and the hidden units is done (they are small positive random values). The input is presented, and based on the weighted interconnections the net input is calculated for both the hidden units. Then, by applying the activations, the output is obtained for both the hidden units. With these as input for the output layer, and constant weighted interconnections acting between the hidden and output layers, the net input to the output neuron is found. From this net input, applying the activation, the final output of the net is calculated. Then this is compared with the target, and suitable weight updations are performed.
Parameters
The weights into y, v1 and v2, are fixed at 0.5, with bias b3 = 0.5.
The activation function for units z1, z2 and y is given by

f(p) = 1, if p >= 0
      -1, if p < 0
The training algorithm is as follows; the algorithm is stated for the architecture in Fig. 5.2.
Step 1: Initialize weights and bias; set learning rate α.
        v1 = v2 = 0.5 and b3 = 0.5. The other weights may be small random values.
Step 2: While stopping condition is false, do Steps 3-9.
Step 3: For each bipolar training pair s : t, do Steps 4-8.
Step 4: Set activations of input units:
        xi = si for i = 1 to n
Step 5: Calculate the net inputs of the hidden adaline units:
        z_in1 = b1 + x1w11 + x2w21
        z_in2 = b2 + x1w12 + x2w22
Step 6: Find the outputs of the hidden adaline units using the activation function
        mentioned above:
        z1 = f(z_in1)
        z2 = f(z_in2)
Step 7: Calculate the net input to the output unit:
        y_in = b3 + z1v1 + z2v2
        Apply the activation to get the output of the net:
        y = f(y_in)
Step 8: Find the error and do weight updation.
        If t = y, no weight updation.
        If t ≠ y, then:
        If t = 1, update the weights on the zj unit whose net input is closest to 0:
            wij(new) = wij(old) + α(1 - z_inj)xi
            bj(new) = bj(old) + α(1 - z_inj)
        If t = -1, update the weights on all units zk which have positive net input:
            wik(new) = wik(old) + α(-1 - z_ink)xi
            bk(new) = bk(old) + α(-1 - z_ink)
Step 9: Test for the stopping condition.
The stopping condition may be a small weight change, a number of epochs, etc.
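
The Step 8 updates can be written compactly; the fragment below is a sketch consistent with the program of Example 5.7 (the variable names w, b1, zin, x are illustrative, with x a column vector holding one training pattern):

% MRI weight updation (Step 8) -- sketch
if y ~= t
    if t == 1                           % push the unit whose net input is closest to 0
        [~, j] = min(abs(zin));
        w(:,j) = w(:,j) + alpha*(1 - zin(j))*x;
        b1(j)  = b1(j)  + alpha*(1 - zin(j));
    else                                % push every positive-net-input unit towards -1
        for k = find(zin > 0)
            w(:,k) = w(:,k) + alpha*(-1 - zin(k))*x;
            b1(k)  = b1(k)  + alpha*(-1 - zin(k));
        end
    end
end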

5.3.3 MRII Algorithm
This algorithm, proposed by Widrow, Winter and Baxter in 1987, provides a method for updating all the weights in the net. Thus it allows training of the weights in all layers of the net. Here several output units may be used. The total error for any input pattern is given as the sum of the squares of the errors at each output unit. This algorithm differs from the MRI algorithm only in the manner of weight updation.
The training algorithm is as follows:
Step 1: Initialize weights (all weights to some random values).
        Set the value of the learning rate α.
Step 2: While stopping condition is false, perform Steps 3-11.
Step 3: For each bipolar training pair, do Steps 4-10.
Step 4: Set activations of input units:
        xi = si
Step 5: Calculate the net input of each hidden unit:
        z_in1 = b1 + x1w11 + x2w21
        z_in2 = b2 + x1w12 + x2w22
Step 6: Apply the activations to calculate the output of each hidden unit:
        z1 = f(z_in1)
        z2 = f(z_in2)
Step 7: Calculate the net input of y:
        y_in = b3 + z1v1 + z2v2
        Find the output y by applying the activation:
        y = f(y_in)
Step 8: Find the error and do weight updation:
        do Steps 9-10 for each hidden unit, starting with the unit whose net input is
        closest to 0, then the second closest, etc.
Step 9: Change the unit's output:
        if it is +1, change it to -1;
        if it is -1, change it to +1.
Step 10: Recompute the output of the net.
        If the error is reduced:
        adjust the weights on this unit
        (take the target as the newly calculated output and use it with the delta rule).
Step 11: Test for stopping condition.
The stopping condition may be a number of weight-update iterations.
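
No program for MRII is given in this chapter, so the following is only a sketch of its trial-adaptation step (Steps 8-10) for the two-hidden-unit net of Fig. 5.2, reusing the forward-pass variables sketched earlier; the acceptance test and variable names are illustrative assumptions:

% MRII trial adaptation (Steps 8-10) -- minimal sketch
if y ~= t
    [~, order] = sort(abs(zin));              % Step 8: hidden units, closest to 0 first
    for j = order
        ztrial = z; ztrial(j) = -z(j);        % Step 9: tentatively flip this unit's output
        ytrial = 2*((b3 + ztrial*v') >= 0) - 1;    % Step 10: recompute the net output
        if abs(t - ytrial) < abs(t - y)       % keep the flip only if the error is reduced
            w(:,j) = w(:,j) + alpha*(ztrial(j) - zin(j))*x;  % delta rule, with the
            b1(j)  = b1(j)  + alpha*(ztrial(j) - zin(j));    % flipped value as target
            break
        end
    end
end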

Example 5.6 Form a madaline network for the XOR function with bipolar inputs and targets using the MRI algorithm.
Solution The truth table for the XOR function is:

x1   x2    t
 1    1   -1
 1   -1    1
-1    1    1
-1   -1   -1
The architecture of the madaline net is that of Fig. 5.2: two inputs x1, x2; hidden units z1, z2 with biases b1, b2; and output unit y with bias b3.

The algorithm dealt with in Section 5.3.2 is followed and the XOR function is generated. It is found that the weights converge within 3 epochs.

Initial values
Learning rate α = 0.5
w11 = 0.05, w21 = 0.2, b1 = 0.3
w12 = 0.1, w22 = 0.2, b2 = 0.15
v1 = v2 = 0.5, b3 = 0.5

The training algorithm is followed. For the first epoch, the first input pair is (1, 1) : -1.
Step 1: Weights and bias are initialized:

        w11 = 0.05, w21 = 0.2, b1 = 0.3
        w12 = 0.1, w22 = 0.2, b2 = 0.15
        v1 = v2 = 0.5, b3 = 0.5
Step 2: Begin training.
Step 3: For the input and training pair (1, 1) : -1, perform Steps 4-8.
Step 4: xi = si: x1 = 1, x2 = 1
Step 5: Calculate the net inputs:
        z_in1 = b1 + x1w11 + x2w21
              = 0.3 + 1 x 0.05 + 1 x 0.2
              = 0.55
        z_in2 = b2 + x1w12 + x2w22
              = 0.15 + 1 x 0.1 + 1 x 0.2
              = 0.45
Step 6: Find the outputs z1 and z2 by applying the activations:
        z1 = f(z_in1) = 1
        z2 = f(z_in2) = 1
Step 7: Calculate the net input to the output unit:
        y_in = b3 + v1z1 + v2z2
             = 0.5 + 1 x 0.5 + 1 x 0.5
             = 1.5
        Find the output by applying the activation:
        y = f(y_in) = 1
Step 8: Here t = -1 and y = 1. Hence t ≠ y and t = -1, so the weights are updated on
        both z1 and z2, since both have positive net input:
        w11(new) = w11(old) + α(-1 - z_in1)x1
                 = 0.05 + 0.5(-1 - 0.55)1
                 = -0.725
        w21(new) = w21(old) + α(-1 - z_in1)x2
                 = 0.2 + 0.5(-1 - 0.55)1
                 = -0.575
        b1(new) = b1(old) + α(-1 - z_in1)
                = 0.3 + 0.5(-1 - 0.55)
                = -0.475
        w12(new) = w12(old) + α(-1 - z_in2)x1
                 = 0.1 + 0.5(-1 - 0.45)1
                 = -0.625
        w22(new) = w22(old) + α(-1 - z_in2)x2
                 = 0.2 + 0.5(-1 - 0.45)1
                 = -0.525
        b2(new) = b2(old) + α(-1 - z_in2)
                = 0.15 + 0.5(-1 - 0.45)
                = -0.575
Step 9: Test the stopping condition.
This completes the first training pair presentation of Epoch 1. The process is repeated until the weights converge.
For this XOR function, the weights converge within 3 epochs, as tabulated below.
Inputs     t    z_in1    z_in2     w11      w21      b1       w12      w22      b2       y_in   y
x1   x2

Epoch 1
 1    1   -1    0.55     0.45     -0.725   -0.575   -0.475   -0.625   -0.525   -0.575    1.5    1
 1   -1    1   -0.625   -0.675     0.0875  -1.3875   0.3375  -0.625   -0.525   -0.575   -0.5   -1
-1    1    1   -1.1375  -0.475     0.0875  -1.3875   0.3375  -1.3625   0.2125   0.1625  -0.5   -1
-1   -1   -1    1.6375   1.3125    1.4065  -0.0685  -0.9815  -0.2065   1.369   -0.994    1.5    1
Epoch 2
 1    1   -1    0.3565   0.168     0.7285  -0.7465  -1.6595  -0.791    0.785   -1.578    1.5    1
 1   -1    1   -0.1845  -3.154     1.3205  -1.339   -1.068   -0.791    0.785   -1.578   -0.5   -1
-1    1    1   -3.728   -0.002     1.3205  -1.339   -1.068   -1.292    1.286   -1.077   -0.5   -1
-1   -1   -1   -1.0495  -1.071     1.3205  -1.339   -1.068   -1.292    1.286   -1.077   -0.5   -1
Epoch 3
 1    1   -1   -1.0865  -1.083     1.3205  -1.339   -1.068   -1.292    1.286   -1.077   -0.5   -1
 1   -1    1    1.5915  -3.655     1.3205  -1.339   -1.068   -1.292    1.286   -1.077    0.5    1
-1    1    1   -3.728    1.501     1.3205  -1.339   -1.068   -1.292    1.286   -1.077    0.5    1
-1   -1   -1   -1.0495  -1.071     1.3205  -1.339   -1.068   -1.292    1.286   -1.077   -0.5   -1
In Epoch 3, t = y for all the input patterns. Hence even if further iterations are done, the weights will remain the same. Thus the madaline network is formed for the XOR function.

Example 5.7 Write a MATLAB program to generate the XOR function for bipolar inputs and targets using the madaline network.
Solution The truth table for the XOR function with bipolar inputs and targets is given as:

x1   x2    t
 1    1   -1
 1   -1    1
-1    1    1
-1   -1   -1

The MATLAB program is as follows.

Program

% Madaline for XOR function (MRI training)
clc;
clear;
% Input and Target
x = [1 1 -1 -1; 1 -1 1 -1];
t = [-1 1 1 -1];
% Assume initial weight matrix and bias
w  = [0.05 0.1; 0.2 0.2];    % column j holds the weights into hidden unit zj
b1 = [0.3 0.15];             % hidden-layer biases
v  = [0.5 0.5];              % fixed output weights
b2 = 0.5;                    % fixed output bias
con = 1;
alpha = 0.5;
epoch = 0;
while con
    con = 0;
    for i = 1:4
        for j = 1:2
            % net input and output of each hidden adaline
            zin(j) = b1(j) + x(1,i)*w(1,j) + x(2,i)*w(2,j);
            if zin(j) >= 0
                z(j) = 1;
            else
                z(j) = -1;
            end
        end
        yin = b2 + z(1)*v(1) + z(2)*v(2);
        if yin >= 0
            y = 1;
        else
            y = -1;
        end
        if y ~= t(i)
            con = 1;
            if t(i) == 1
                % update the hidden unit whose net input is closest to 0
                if abs(zin(1)) > abs(zin(2))
                    k = 2;
                else
                    k = 1;
                end
                b1(k)  = b1(k)  + alpha*(1 - zin(k));
                w(:,k) = w(:,k) + alpha*(1 - zin(k))*x(:,i);
            else
                % update all hidden units with positive net input
                for k = 1:2
                    if zin(k) > 0
                        b1(k)  = b1(k)  + alpha*(-1 - zin(k));
                        w(:,k) = w(:,k) + alpha*(-1 - zin(k))*x(:,i);
                    end
                end
            end
        end
    end
    epoch = epoch + 1;
end
disp('Weight matrix of hidden layer');
disp(w);
disp('Bias of hidden layer');
disp(b1);
disp('Total Epoch');
disp(epoch);
Output
Weight matrix of hidden layer
    1.3203   -1.2922
   -1.3391    1.2859
Bias of hidden layer
   -1.0672   -1.0766
Total Epoch
    3

Hence the network converges within 3 epochs. The final weights and bias of the hidden layer at the end of Epoch 3 are mentioned above.

Summary

In this chapter, the architecture, algorithms and other features of the adaline and madaline networks were discussed. It was found that these networks use an efficient learning rule called the Least Mean Square Error (LMS) or Widrow-Hoff learning rule. The network in these cases was trained until the error reached a minimum value. It was found that if a logical function is represented by a madaline, the learning procedure takes many steps to converge. Obviously, the madaline is suitable for classification jobs, in which the input patterns have to be sorted according to the specified criteria.

Review Questions
5.1 Give details on the development of adaline net.
5.2 State the delta learning rule. Why is it called the Least Mean Squares rule?
5.3 Derive the expression for extended delta learning rule.
5.4 State and derive the delta learning rule for several output classes.
5.5 Draw the architecture of the adaline net.
5.6 State the training and application algorithm of the adaline net.
5.7 How is madaline net formed from adaline net?
5.8 What should be the value of the learning rate in an adaline net?
5.9 What are the two algorithms used in a madaline net?
5.10 Differentiate MRI algorithm from MRII algorithm.
5.11 With architecture, explain the MRI training algorithm.
5.12 Discuss in detail the MRII training algorithm.
5.13 What are the initial weights and bias assumed in the MRI training algorithm between the hidden
and the output units? Can they be varied?
5.14 Interpret the weights from the madaline net geometrically.

Exercise Problems
5.15 Generate AND functions with binary inputs and bipolar targets using adaline net.
5.16 Form the AND NOT function with binary data using adaline net.
5.17 Using adaline net, generate XOR function with bipolar inputs and targets.
5.18 Generate AND function with bipolar inputs and targets with the help of Madaline MRI
algorithm.
5.19 Construct XOR function using two ANDNOT functions, and train it using the Madaline MRI
algorithm.
5.20 Form the OR function with bipolar inputs and targets using the madaline MRII algorithm.
5.21 Using the LMS rule, find the weights required to perform the following classification: vectors
(1, 1, -1, -1) and (1, -1, -1, 1) are members of the class (having a target value of 1), and vectors
(-1, 1, 1, -1) and (-1, -1, 1, 1) are not members of the class (having a target value of -1). Assume
a suitable learning rate and initial weights. Using the training vectors as test vectors (input), test the
response of the net.
