Adaline and Madaline Networks
5.2 Adaline
Adaline, developed by Widrow and Hoff [1960], uses bipolar activations for its input signals and target output. The weights and the bias of the adaline are adjustable. The learning rule used is called the Delta rule, Least Mean Square rule or Widrow-Hoff rule. The derivation of this rule for a single output unit, for several output units, and its extension has already been dealt with in Section 3.3.3. Since the activation function is an identity function, the activation of the unit is its net input. If the adaline is to be used for pattern classification, then, after training, a threshold function is applied to the net input to obtain the activation. The activation is,

f(y_in) = 1, if y_in >= 0
         -1, if y_in < 0

The adaline unit can solve any problem in which the two classes are linearly separable.
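For instance, a trained adaline classifies a pattern by thresholding its net input. A minimal MATLAB sketch, using the final ANDNOT weights that Example 5.1 below arrives at:

% Classify one bipolar pattern with a trained adaline.
w = [0.5; -0.5]; b = -0.5;   % final ANDNOT weights (see Example 5.1)
x = [1; -1];                 % test pattern
yin = b + x'*w;              % net input = the activation during training
if yin >= 0                  % bipolar threshold applied after training
    y = 1;
else
    y = -1;
end                          % here yin = 0.5, so y = 1 (ANDNOT of (1,-1))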
5.2.1 Architecture
The architecture of an adaline is shown in Fig. 5.1. The adaline has only one output unit. This output unit receives input from several units and also from a bias, whose activation is always +1. The adaline thus resembles a single layer network as discussed in Section 2.7. It receives input from several neurons and, in addition, from the unit whose activation is always +1, called the bias. The bias weights are trained in the same manner as the other weights. In Fig. 5.1, there is an input layer with units x1, ..., xn and a bias, and an output layer with only one output neuron. The links between the input and output neurons possess weighted interconnections. These weights get changed as the training progresses.

Fig. 5.1 Architecture of an Adaline
5.2.2 Algorithm
Basically, the initial weights of the adaline network have to be set to small random values and not to zero as discussed in the Hebb or perceptron networks, because this may influence the error factor to be considered. After the initial weights are assumed, the activations for the input units are set. The net input is calculated based on the training input patterns and the weights. By applying the delta learning rule discussed in Section 3.3.3, the weight updation is carried out. The training process is continued until the error, which is the difference between the target and the net input, becomes minimum. The stepwise training algorithm for the adaline is as follows:

Step 1: Initialize weights (not zero but small random values are used). Set learning rate α.
Step 2: While stopping condition is false, do Steps 3-7.
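Steps 3-7 present each training pair, compute the net input, and apply the delta rule until the total squared error stops decreasing. A minimal MATLAB sketch of the whole procedure, shown here for the ANDNOT data of Example 5.1 (the initial weights, learning rate and stopping test are assumed values):

% Adaline trained by the delta (LMS) rule on the ANDNOT function.
x = [1 1 -1 -1; 1 -1 1 -1];   % bipolar input patterns (one per column)
t = [-1 1 -1 -1];             % bipolar targets
w = [0.1; 0.1]; b = 0.1;      % Step 1: small initial weights (assumed)
alpha = 0.1;                  % learning rate (assumed)
Eold = Inf;
for epoch = 1:100             % Step 2: repeat while stopping condition is false
    E = 0;
    for i = 1:4               % Steps 3-4: present each training pair
        yin = b + x(:,i)'*w;              % Step 5: net input (identity activation)
        w = w + alpha*(t(i) - yin)*x(:,i);% Step 6: delta rule update
        b = b + alpha*(t(i) - yin);
        E = E + (t(i) - yin)^2;           % accumulate squared error
    end
    if abs(Eold - E) < 1e-4   % Step 7: stop when the error no longer decreases
        break                 % (for ANDNOT the minimum total error is 1, cf. Example 5.1)
    end
    Eold = E;
end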
Solved Examples
Example 5.1 Develop an adaline network for the ANDNOT function with bipolar inputs and bipolar targets.
Solution The truth table for the ANDNOT function with bipolar inputs and targets is:

x1    x2     t
 1     1    -1
 1    -1     1
-1     1    -1
-1    -1    -1
Applying the delta rule repeatedly, the weights converge to w1 = 0.5, w2 = -0.5 and b = -0.5. With these weights, the net input and squared error for each input pattern are:

x1    x2    y_in    (t - y_in)^2
 1     1    -0.5       0.25
 1    -1    +0.5       0.25
-1     1    -1.5       0.25
-1    -1    -0.5       0.25

Thus the total error is 0.25 + 0.25 + 0.25 + 0.25 = 1, which is the minimum attainable error for this function.
Example 5.2 Develop a MATLAB program for the OR function with bipolar inputs and targets using the adaline network.
Solution The truth table for the OR function with bipolar inputs and targets is given as:

x1    x2     t
 1     1     1
 1    -1     1
-1     1     1
-1    -1    -1
Program

% Adaline for OR function with bipolar inputs and targets
clc; clear;
% input pattern and target
x1 = [1 1 -1 -1];
x2 = [1 -1 1 -1];
t  = [1 1 1 -1];
% initial weights, bias and learning rate
w1 = 0.1; w2 = 0.1; b = 0.1;
alpha = 0.1;
for epoch = 1:5
    for i = 1:4
        % net input calculated and target
        nety(i) = w1*x1(i) + w2*x2(i) + b;
        % weight changes
        delw1 = alpha*(t(i) - nety(i))*x1(i);
        delw2 = alpha*(t(i) - nety(i))*x2(i);
        delb  = alpha*(t(i) - nety(i));
        wc = [delw1 delw2 delb];
        % updating of weights
        w1 = w1 + delw1;
        w2 = w2 + delw2;
        b  = b + delb;
        % new weights
        nw = [w1 w2 b];
        % printing the results obtained
        pnt = [x1(i) x2(i) 1 nety(i) t(i) wc nw];
        disp(pnt);
    end
end
% net input with the final weights
for i = 1:4
    nety(i) = w1*x1(i) + w2*x2(i) + b;
end
Output

x1       x2       1        Netin     Target   delw1    delw2    delb     w1(n)    w2(n)    b(n)
Epoch 1
 1.0000   1.0000   1.0000   0.3000    1.0000   0.0700   0.0700   0.0700   0.1700   0.1700   0.1700
 1.0000  -1.0000   1.0000   0.1700    1.0000   0.0830  -0.0830   0.0830   0.2530   0.0870   0.2530
-1.0000   1.0000   1.0000   0.0870    1.0000  -0.0913   0.0913   0.0913   0.1617   0.1783   0.3443
-1.0000  -1.0000   1.0000   0.0043   -1.0000   0.1004   0.1004  -0.1004   0.2621   0.2787   0.2439
Epoch 2
 1.0000   1.0000   1.0000   0.7847    1.0000   0.0215   0.0215   0.0215   0.2837   0.3003   0.2654
 1.0000  -1.0000   1.0000   0.2488    1.0000   0.0751  -0.0751   0.0751   0.3588   0.2251   0.3405
-1.0000   1.0000   1.0000   0.2069    1.0000  -0.0793   0.0793   0.0793   0.2795   0.3044   0.4198
-1.0000  -1.0000   1.0000  -0.1641   -1.0000   0.0836   0.0836  -0.0836   0.3631   0.3880   0.3362
Epoch 3
 1.0000   1.0000   1.0000   1.0873    1.0000  -0.0087  -0.0087  -0.0087   0.3543   0.3793   0.3275
 1.0000  -1.0000   1.0000   0.3025    1.0000   0.0697  -0.0697   0.0697   0.4241   0.3096   0.3973
-1.0000   1.0000   1.0000   0.2827    1.0000  -0.0717   0.0717   0.0717   0.3523   0.3813   0.4690
-1.0000  -1.0000   1.0000  -0.2647   -1.0000   0.0735   0.0735  -0.0735   0.4259   0.4548   0.3954
Epoch 4
 1.0000   1.0000   1.0000   1.2761    1.0000  -0.0276  -0.0276  -0.0276   0.3983   0.4272   0.3678
 1.0000  -1.0000   1.0000   0.3389    1.0000   0.0661  -0.0661   0.0661   0.4644   0.3611   0.4339
-1.0000   1.0000   1.0000   0.3307    1.0000  -0.0669   0.0669   0.0669   0.3974   0.4280   0.5009
-1.0000  -1.0000   1.0000  -0.3246   -1.0000   0.0675   0.0675  -0.0675   0.4650   0.4956   0.4333
Epoch 5
 1.0000   1.0000   1.0000   1.3939    1.0000  -0.0394  -0.0394  -0.0394   0.4256   0.4562   0.3939
 1.0000  -1.0000   1.0000   0.3634    1.0000   0.0637  -0.0637   0.0637   0.4893   0.3925   0.4576
-1.0000   1.0000   1.0000   0.3609    1.0000  -0.0639   0.0639   0.0639   0.4253   0.4564   0.5215
-1.0000  -1.0000   1.0000  -0.3603   -1.0000   0.0640   0.0640  -0.0640   0.4893   0.5204   0.4575
Example 5.3 Develop a MATLAB program to perform adaptive prediction with adaline.
Solution Linear neural networks can be used for adaptive prediction in adaptive signal processing. Assume the necessary frequency, sampling time, etc.
Program

% Adaptive Prediction with Adaline
clear; clc;
% Input signal x(t)
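A minimal sketch of how such a predictor can be completed: the adaline is fed the p most recent samples and trained by LMS to predict the next sample. The signal frequency, sampling time, number of taps and learning rate below are all assumed values:

% One-step-ahead prediction of a sampled sine wave with an adaline.
f = 2;                        % signal frequency in Hz (assumed)
ts = 0.005;                   % 5 msec sampling time (assumed)
t = 0:ts:8;                   % 0 to 8 sec
xt = sin(2*pi*f*t);           % input signal x(t)
p = 4;                        % past samples fed to the adaline (assumed)
eta = 0.2;                    % learning rate (assumed)
w = zeros(1, p);              % initial weight vector
N = length(xt);
y = zeros(1, N); err = zeros(1, N);
for n = p+1:N
    X = xt(n-1:-1:n-p);       % the p most recent samples
    y(n) = w*X';              % predicted output (identity activation)
    err(n) = xt(n) - y(n);    % prediction error: target is the current sample
    w = w + eta*err(n)*X;     % LMS (Widrow-Hoff) update
end
figure(1)
subplot(2,1,1), plot(t, xt), grid, title('Input Signal x(t)'), xlabel('time [sec]')
subplot(2,1,2), plot(t, y), grid, title('Predicted Signal'), xlabel('time [sec]')
figure(2)
plot(t, err), grid, title('Prediction Error'), xlabel('time [sec]')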
[Figure: Input Signal x(t) and the Target and Predicted Signals versus time (sec)]
[Figure: Prediction Error versus time (sec)]
Example 5.4 Develop a MATLAB program for adaptive system identification using adaline network.
Solution Assume the necessary parameters such as frequency and sampling time.

Program

% Adaptive System Identification
clear; clc;
% Input signal x(t)
f = 0.8;                        % Hz (value assumed; illegible in source)
ts = 0.005;                     % 5 msec sampling time
N1 = 800; N2 = 400; N = N1 + N2;
t1 = (0:N1-1)*ts;               % 0 to 4 sec
t2 = (N1:N-1)*ts;               % 4 to 6 sec
t = [t1 t2];                    % 0 to 6 sec
xt = sin(3*t.*sin(2*pi*f*t));
p = 3;                          % dimensionality of the system
b1 = [1 -0.6 0.4];              % unknown system parameters during t1 (first coefficient assumed)
b2 = [0.9 -0.5 0.7];            % unknown system parameters during t2
[d1, stt] = filter(b1, 1, xt(1:N1));
d2 = filter(b2, 1, xt(N1+1:N), stt);
d = [d1 d2];                    % output signal
% formation of the input matrix X of size p by N
X = convmtx(xt, p); X = X(:, 1:N);
% Alternatively, we could calculate d as
% d = [filter(b1, 1, xt(1:N1)) filter(b2, 1, xt(N1+1:N))];
y = zeros(1, N);                % memory allocation for y
eps = zeros(1, N);              % memory allocation for eps
eta = 0.2;                      % learning rate/gain
w = 2*(rand(1, p) - 0.5);       % initialisation of the weight vector
for n = 1:N                     % learning loop
    y(n) = w*X(:, n);           % predicted output signal
    eps(n) = d(n) - y(n);       % error signal
    w = w + eta*eps(n)*X(:, n)';
    if n == N1, w1 = w; end     % weights at the moment the system changes
end
[b1; w1]
[b2; w]
figure(1)
subplot(2,1,1)
plot(t, xt), grid, title('Input Signal, x(t)'), xlabel('time [sec]')
subplot(2,1,2)
plot(t, d, t, y), grid, title('Target and Predicted Signals'), xlabel('time [sec]')
figure(2)
plot(t, eps), grid, title('Prediction Error for eta = 0.2'), xlabel('time [sec]')
Output

[b1; w1]
    1.0000   -0.6000    0.4000
    0.2673    0.9183   -0.3996

[b2; w]
    0.9000   -0.5000    0.7000
    0.1357    1.0208   -0.0624
The corresponding responses are shown in the following figures.

[Figure: Target and Predicted Signals along with the Input Signal]
[Figure: Prediction Error for eta = 0.2 versus time (sec)]
Example 5.5 Develop a MATLAB program for adaptive noise cancellation using adaline network.
Solution For adaptive noise cancellation in signal processing, the adaline network is used and the performance is noted. The necessary parameters to be used are assumed.

Program
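A minimal sketch of the idea: the adaline is fed a reference noise signal and learns, via LMS, to reproduce the noise component contaminating the primary signal, so the residual error approximates the clean signal. The signal, noise path, tap count and learning rate below are all assumed illustration values:

% Adaptive noise cancellation with a single adaline.
ts = 0.001; t = 0:ts:1;                     % 1 kHz sampling (assumed)
s = sin(2*pi*5*t);                          % signal of interest (assumed 5 Hz sine)
noise = 0.5*randn(size(t));                 % reference noise source
primary = s + filter([0.7 0.3], 1, noise);  % signal corrupted by filtered noise
p = 2; w = zeros(1, p); eta = 0.05;         % adaline with p reference taps (assumed)
shat = zeros(size(t));
for n = p:length(t)
    X = noise(n:-1:n-p+1);   % most recent reference noise samples
    y = w*X';                % adaline estimate of the noise component
    e = primary(n) - y;      % error = estimate of the clean signal
    w = w + eta*e*X;         % LMS update
    shat(n) = e;
end
subplot(3,1,1), plot(t, s), title('Original Signal'), xlabel('time [sec]')
subplot(3,1,2), plot(t, primary), title('Signal + Noise'), xlabel('time [sec]')
subplot(3,1,3), plot(t, shat), title('Cleaned Signal (adaline error)'), xlabel('time [sec]')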
5.3 Madaline

Madaline is a combination of adalines. It is also called the multilayered adaline. If the adalines are combined such that the output of some of them becomes the input for others, then the net becomes multilayered; this forms the madaline. Madaline has two training algorithms, MRI and MRII, discussed in Sections 5.3.2 and 5.3.3.
5.3.1 Architecture
The architecture of a simple madaline is shown in Fig. 5.2. The architecture is explained with 2 input neurons, 2 hidden neurons and 1 output neuron.

[Fig. 5.2 Architecture of a simple Madaline]

The activation function used for the hidden and output units is the bipolar step function

f(p) = 1, if p >= 0
      -1, if p < 0
5.3.2 MRI Algorithm

The training algorithm is as follows; the algorithm is stated for the architecture in Fig. 5.2.

Step 1: Initialize weights and bias, and set the learning rate α. The weights v1 = v2 = 0.5 and b3 = 0.5; the other weights may be small random values.
Step 2: When stopping condition is false, do Steps 3-9.
Step 3: For each bipolar training pair s : t, do Steps 4-8.
Step 4: Set activations of input units:
        xi = si for i = 1 to n
Step 5: Calculate the net input of the hidden adaline units:
        z_in1 = b1 + x1 w11 + x2 w21
        z_in2 = b2 + x1 w12 + x2 w22
Step 6: Find the output of each hidden unit:
        z1 = f(z_in1)
        z2 = f(z_in2)
Step 7: Calculate the net input to the output unit:
        y_in = b3 + z1 v1 + z2 v2
        and find the output y = f(y_in).
Step 8: Update weights if an error occurred, that is, if t ≠ y:
        If t = 1, update the weights on the hidden unit zk whose net input is closest to 0:
            bk(new) = bk(old) + α(1 - z_ink)
            wik(new) = wik(old) + α(1 - z_ink) xi
        If t = -1, update the weights on all hidden units that have positive net input:
            bk(new) = bk(old) + α(-1 - z_ink)
            wik(new) = wik(old) + α(-1 - z_ink) xi
Step 9: Test the stopping condition.
5.3.3 MRII Algorithm

This algorithm, proposed by Widrow, Winter and Baxter in 1987, provides a method for updating all the weights in the net. Thus it allows training of the weights in all layers of the net. Here several output units may be used. The total error of any input pattern is given as the sum of the squares of the errors at each output unit. This algorithm differs from the MRI algorithm only in the manner of weight updation.
The training algorithm is as follows:
Step 1: Initialize weights (all weights are set to some small random values). Set the value of the learning rate α.
Step 2: When stopping condition is false, perform Steps 3-11.
Step 3: For each bipolar training pair, do Steps 4-10.
Step 4: Set activations of input units:
        xi = si
Step 5: Calculate the net input of each hidden unit:
        z_in1 = b1 + x1 w11 + x2 w21
        z_in2 = b2 + x1 w12 + x2 w22
Step 6: Apply the activations to calculate the output of each hidden unit:
        z1 = f(z_in1)
        z2 = f(z_in2)
Step 7: Calculate the net input of y:
        y_in = b3 + z1 v1 + z2 v2
        Find output y by applying the activation:
        y = f(y_in)
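The remaining MRII steps adapt, by trial, the hidden unit whose net input is closest to zero: its output is tentatively flipped, and if the flip corrects the network output, that unit's weights are moved toward the flipped value by the delta rule. A minimal MATLAB sketch of this update for one training pair on the Fig. 5.2 net (the input pattern, weights, biases and learning rate are assumed example values, not values from the text):

% MRII trial-and-adapt update for one training pair (sketch).
f = @(p) 2*(p >= 0) - 1;          % bipolar step activation
x = [1; 1]; t = -1;               % one bipolar training pair (assumed)
w = [0.05 0.1; 0.2 0.2];          % hidden weights: column k feeds unit zk (assumed)
b1 = [0.3 0.15];                  % hidden biases (assumed)
v = [0.5; 0.5]; b2 = -0.5;        % output weights and bias (assumed)
alpha = 0.5;                      % learning rate (assumed)
zin = x'*w + b1;                  % net inputs of the hidden units
z = f(zin);
y = f(b2 + z*v);                  % madaline output
if y ~= t
    [~, order] = sort(abs(zin)); % try units with net input nearest zero first
    for k = order
        ztrial = z; ztrial(k) = -ztrial(k);   % tentatively flip unit k
        if f(b2 + ztrial*v) == t
            % the flip corrects the output: adapt unit k toward the
            % flipped value by the delta rule, then stop
            b1(k) = b1(k) + alpha*(ztrial(k) - zin(k));
            w(:,k) = w(:,k) + alpha*(ztrial(k) - zin(k))*x;
            break
        end
    end
end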
Example 5.6 Form the XOR function using the madaline MRI algorithm with bipolar inputs and targets.
Solution The truth table for the XOR function with bipolar inputs and targets is:

x1    x2     t
 1     1    -1
 1    -1     1
-1     1     1
-1    -1    -1

The architecture of the madaline net can be given as in Fig. 5.2, with weights w11, w21, w12, w22 and biases b1, b2 feeding the hidden units, and v1, v2 and b3 feeding the output unit.

[Figure: Madaline architecture for the XOR function]

The initial values are taken as: learning rate α = 0.5, w11 = 0.05, w21 = 0.2, b1 = 0.3, w12 = 0.1, w22 = 0.2, b2 = 0.15, and v1 = v2 = b3 = 0.5. The training algorithm is followed.
For the first epoch, the first input pair is (1, 1) with target t = -1.

Step 1: Weights and bias are initialized as given above.

Calculating the net inputs of the hidden units:

z_in1 = b1 + x1 w11 + x2 w21 = 0.3 + 0.05 + 0.2 = 0.55
z_in2 = b2 + x1 w12 + x2 w22 = 0.15 + 0.1 + 0.2 = 0.45

Hence z1 = f(z_in1) = 1 and z2 = f(z_in2) = 1, so that

y_in = b3 + z1 v1 + z2 v2 = 0.5 + 0.5 + 0.5 = 1.5 and y = 1.

Since t = -1 ≠ y, the weights are updated on all units whose net input is positive:

w11(new) = w11(old) + α(-1 - z_in1) x1 = 0.05 + 0.5 (-1 - 0.55)(1) = -0.725
w12(new) = w12(old) + α(-1 - z_in2) x1 = 0.1 + 0.5 (-1 - 0.45)(1) = -0.625
b1(new) = b1(old) + α(-1 - z_in1) = 0.3 + 0.5 (-1 - 0.55) = -0.475
w21(new) = w21(old) + α(-1 - z_in1) x2 = 0.2 + 0.5 (-1 - 0.55)(1) = -0.575
w22(new) = w22(old) + α(-1 - z_in2) x2 = 0.2 + 0.5 (-1 - 0.45)(1) = -0.525
b2(new) = b2(old) + α(-1 - z_in2) = 0.15 + 0.5 (-1 - 0.45) = -0.575

Step 9: Test stopping condition.
This completes the first training pair presentation of the first epoch.
This process is repeated until the weights converge.
For this XOR function, the weights converge within 3 epochs, as tabulated below.
[Table: pattern-by-pattern net inputs (z_in1, z_in2) and weights (w11, w21, b1, w12, w22, b2), with y_in and y, over Epochs 1-3 of MRI training for the XOR function]
In Epoch 3, t = y for all training pairs. Hence even if further iterations are done, the weights will remain the same. Thus the network is formed for the XOR function.
Example 5.7 Write a MATLAB program to generate the XOR function for bipolar inputs and targets using madaline.
Solution The truth table for the XOR function with bipolar inputs and targets is given as:

x1    x2     t
 1     1    -1
 1    -1     1
-1     1     1
-1    -1    -1

The MATLAB program is as follows.
Program

% Madaline for XOR function
clc;
clear;
% input and target
x = [1 1 -1 -1; 1 -1 1 -1];
t = [-1 1 1 -1];
% assume initial weight matrix and bias
w = [0.05 0.1; 0.2 0.2];
b1 = [0.3 0.15];
v = [0.5; 0.5];
b2 = 0.5;
con = 1;
alpha = 0.5;
epoch = 0;
while con
    con = 0;
    for i = 1:4
        % net inputs and outputs of the hidden units
        for j = 1:2
            zin(j) = b1(j) + x(1,i)*w(1,j) + x(2,i)*w(2,j);
            if zin(j) >= 0
                z(j) = 1;
            else
                z(j) = -1;
            end
        end
        % net input and output of the output unit
        yin = b2 + z(1)*v(1) + z(2)*v(2);
        if yin >= 0
            y = 1;
        else
            y = -1;
        end
        if y ~= t(i)
            con = 1;
            if t(i) == 1
                % update the unit whose net input is closest to zero
                if abs(zin(1)) <= abs(zin(2))
                    k = 1;
                else
                    k = 2;
                end
                b1(k) = b1(k) + alpha*(1 - zin(k));
                w(:,k) = w(:,k) + alpha*(1 - zin(k))*x(:,i);
            else
                % update all units with positive net input
                for k = 1:2
                    if zin(k) > 0
                        b1(k) = b1(k) + alpha*(-1 - zin(k));
                        w(:,k) = w(:,k) + alpha*(-1 - zin(k))*x(:,i);
                    end
                end
            end
        end
    end
    epoch = epoch + 1;
end
disp('Number of epochs:'); disp(epoch);
disp('Final weight matrix:'); disp(w);
disp('Final hidden bias:'); disp(b1);
Summary

In this chapter, the architecture, algorithm and other features of the adaline and madaline networks were discussed. It was found that these networks use an efficient learning rule called the Least Mean Square (LMS) or Widrow-Hoff learning rule. The network in these cases was trained until the error, the difference between the target and the net input, reached a minimum.
Review Questions
5.1 Give details on the development of adaline net.
5.2 State the delta learning rule. Why is it called the Least Mean Squares rule?
5.3 Derive the expression for extended delta learning rule.
5.4 State and derive the delta learning rule for several output classes.
5.5 Draw the architecture of the adaline net.
5.6 State the training and application algorithm of the adaline net.
5.7 How is madaline net formed from adaline net?
5.8 What should be the value of the learning rate in an adaline net?
5.9 What are the two algorithms used in a madaline net?
5.10 Differentiate MRI algorithm from MRII algorithm.
5.11 With architecture, explain the MRI training algorithm.
5.12 Discuss in detail the MRII training algorithm.
5.13 What are the initial weights and bias assumed in the MRI training algorithm between the hidden and the output units? Can they be varied?
5.14 Interpret the weights from the madaline net geometrically.
Exercise Problems
5.15 Generate the AND function with binary inputs and bipolar targets using the adaline net.
5.16 Form the AND NOT function with binary data using adaline net.
5.17 Using adaline net, generate XOR function with bipolar inputs and targets.
5.18 Generate AND function with bipolar inputs and targets with the help of Madaline MRI
algorithm.
5.19 Construct XOR function using two ANDNOT functions, and train it using the Madaline MRI
algorithm.
5.20 Form OR function with bipolar inputs and targets using madaline MRII algorithm.
5.21 Using the LMS rule, find the weights required to perform the following classification: vectors (1, 1, -1, -1) and (1, -1, -1, 1) are members of the class (having a target value of 1) and vectors (-1, 1, 1, -1) and (-1, -1, 1, 1) are not members of the class (having a target value of -1). Assume suitable learning rates and initial weights. Using the training vectors as test vectors (input), test the response of the net.