H. B. Bahar & D. H. Horrocks*
School of Engineering, Cardiff University of Wales, PO Box 689, Cardiff CF2 3TF, UK
(Received 20 January 1997; revised version received 15 March 1997; accepted 6 April 1997)
differential equation:

(m(t) + m_p) y''(t) + C y'(t) + K y(t) = m(t) g    (1)

where y(t) is the deflection signal obtained from the strain gauge on the weighing machine; m(t) and m_p are the applied mass and the platform mass respectively; C is the damping factor; K is the spring constant; and g is the gravitational constant.

For a general applied mass function m(t), this is a non-linear differential equation. However, for commonly encountered situations m(t) is a step function, which is assumed here. In this case the differential eqn (1) is linear, for which the explicit solution is modelled by a constant term plus a transient term which can be underdamped (u), critically damped (c), or overdamped (o). Thus,

y(t) = q_0 + [F_u(q_u, t), F_c(q_c, t) or F_o(q_o, t)]    (2)

and the transient terms for the underdamped, critically damped and overdamped cases are

F_u(q_u, t) = e^(-q_u1 t) q_u2 sin(q_u3 t + q_u4)

F_c(q_c, t) = e^(-q_c1 t) (q_c2 + q_c3 t)

and

F_o(q_o, t) = e^(-q_o1 t) q_o2 + e^(-q_o3 t) q_o4

respectively, where the various q parameters are related to the initial platform displacement, b_0, the initial velocity, b_1, the platform parameters K, C and m_p, and the applied mass m(t) by the expressions given in the Appendix (see Danaci and Horrocks, and Danaci). These expressions have been used to generate data in the simulation study described below.

Sampled data signals are assumed, for which t = nT, where T is the sample interval. Thus, y(t) is written as y(n).

3 ARTIFICIAL NEURAL NETWORK TECHNIQUE

An input-output model describes a dynamic system based on input and output data. An input-output model assumes that the new system output can be predicted by the past inputs of the system. Further, if the system is supposed to be deterministic, time-invariant and single-input-single-output, the input-output model becomes:

w(n) = f(y(n), y(n-1), ..., y(n-N+1))    (3)

where y(n) and w(n) represent the input-output pair of the system at time n, the positive integer N is the number of past input samples (called the order of the system), and f is a static non-linear function which maps the past inputs to a new output. The task of system identification is essentially to find suitable mappings which can approximate the mappings implied in a dynamic system. Eqn (3) can be represented by the block diagram shown in Fig. 3, where the dynamic weighing system is defined by the function f and the symbol z^-1 denotes the time delay between two successive samples.

[Fig. 3. Input-output Artificial Neural Network: a tapped delay line (z^-1 elements) holding the past samples y(n), y(n-1), ..., y(n-N+1) feeds the input layer, and a single neurone in the output layer produces w(n).]

A Multi-Layer Perceptron (MLP) Artificial Neural Network (ANN) trained with the backpropagation supervised learning method was used to train on and test the data obtained from both the simulation and the real-time system.
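As a concrete illustration, the underdamped step response of eqns (1) and (2) and the tapped-delay input pattern of eqn (3) can be sketched as follows. This is a minimal Python sketch, not the authors' code: the mass, platform constants and sample interval used here are illustrative assumptions, and the standard second-order step-response formula stands in for the q-parameter expressions of the Appendix.

```python
import math

def platform_response(t, m=50.0, m_p=0.0, K=1000.0, C=50.0, g=9.81):
    """Underdamped step response of eqn (1), in the form y(t) = q0 + F_u(q_u, t)
    of eqn (2). Parameter values are illustrative SI assumptions."""
    M = m + m_p
    wn = math.sqrt(K / M)                    # natural frequency
    zeta = C / (2.0 * math.sqrt(K * M))      # damping ratio (< 1: underdamped)
    wd = wn * math.sqrt(1.0 - zeta ** 2)     # damped frequency
    q0 = m * g / K                           # steady-state deflection
    phi = math.acos(zeta)
    # F_u: decaying sinusoid, consistent with b0 = 0 and b1 = 0
    return q0 * (1.0 - math.exp(-zeta * wn * t) / math.sqrt(1.0 - zeta ** 2)
                 * math.sin(wd * t + phi))

T = 0.02  # sample interval in seconds (an assumed value)
samples = [platform_response(n * T) for n in range(200)]

# Input pattern of eqn (3): the N most recent samples y(n), ..., y(n-N+1)
N = 200
pattern = samples[-1:-N - 1:-1]
print(round(platform_response(100.0), 4))  # settles to m*g/K ≈ 0.4905
```

The response starts at zero deflection and settles to the static value m g / K, which is what the constant term q_0 of eqn (2) represents.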
[Fig. 2. A typical step response of a weighing platform: displacement vs. sample, n.]

[Fig. 4. Simulated performance of the trained ANN with noise-free patterns: estimated mass vs. applied mass (kg).]
Dynamic weight estimation using artificial neural network 137
[Fig. 5. Error between applied and estimated masses in Fig. 4.]

[Fig. 7. Error performance for experimental data: error (kg) vs. applied mass (kg).]
As is well known, backpropagation learning involves using an iterative gradient descent algorithm to minimise the mean square error between the actual outputs of the network and the desired outputs in response to given inputs.20 The following steps are involved in constructing and training an MLP network:

(i) Defining the structure of the network (the number of layers and the number of neurones in each layer);
(ii) Selecting the learning parameters (learning rate and momentum rate);
(iii) Initialising the connection coefficients;
(iv) Selecting an input-output pair from the training examples set and presenting it to the network;
(v) Calculating the output values of the neurones in the hidden and output layers;
(vi) Comparing the output values of the network with the desired output values and calculating the output errors;
(vii) Adjusting the connection coefficients of the network in order to decrease the output errors;
(viii) Repeating steps (iv) to (vii) until the error is acceptable or a predefined number of iterations is completed.

In backpropagation, training is performed by forward and backward operations. In the forward operation, the network produces its actual outputs for a certain input pattern using the current connection coefficients. Subsequently, the backward operation is carried out to alter the coefficients to decrease the error between the actual and desired outputs.
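Steps (i)-(viii) can be sketched in a scaled-down form. The following is a toy NumPy version, not the authors' implementation: the network size, the training task and the learning parameters here are illustrative assumptions (the paper itself uses a 200-100-1 network with a learning rate of 0.50 and a momentum rate of 0.95, as listed in Section 4).

```python
import numpy as np

rng = np.random.default_rng(0)

# (i) Define the structure: a small stand-in for the paper's 200-100-1 MLP.
n_in, n_hid = 8, 5

# (ii) Select the learning parameters (illustrative values).
lr, mom = 0.1, 0.9

# (iii) Initialise the connection coefficients.
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
W2 = rng.normal(0.0, 0.5, (n_hid, 1))
vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training set: each input pattern is a noisy constant level and the
# desired output is that level (a miniature analogue of estimating a
# steady-state mass from transient samples).
levels = rng.uniform(0.1, 0.9, (200, 1))
X = levels + rng.normal(0.0, 0.02, (200, n_in))
T = levels

losses = []
for epoch in range(2000):
    # (iv)-(v) Present the patterns; compute hidden and output activations.
    H = sigmoid(X @ W1)
    Y = H @ W2                       # linear output neurone
    # (vi) Compare with the desired outputs and compute the error.
    E = Y - T
    losses.append(float(np.mean(E ** 2)))
    # (vii) Backward operation: gradient descent with momentum on the
    #       connection coefficients.
    gW2 = H.T @ E / len(X)
    gW1 = X.T @ ((E @ W2.T) * H * (1.0 - H)) / len(X)
    vW2 = mom * vW2 - lr * gW2
    vW1 = mom * vW1 - lr * gW1
    W2 += vW2
    W1 += vW1
    # (viii) Repeat until the error is acceptable or the epoch budget ends.

print(f"MSE {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Batch gradients are used here for brevity; the per-pattern (online) update of steps (iv)-(vii) differs only in presenting one input-output pair at a time.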
4 SIMULATED PERFORMANCE

[Fig. 6. Simulated recalling results with 2% noisy patterns: (a) for seen applied masses; (b) for unseen applied masses. Error (kg) vs. applied mass (kg).]

[Fig. 8. Experimental weighing platform waveforms for applied masses of 10-40 kg: displacement vs. sample, n.]
138 H. B. Bahar. D. H. Horrocks
- Number of output samples, w(n), is 1.
- Number of layers is 3: an input layer, a hidden layer and an output layer.
- Total number of neurones is 301: 200 neurones at the input layer, 100 neurones at the hidden layer, and a single neurone at the output layer.
- Momentum rate is 0.95.
- Learning rate is 0.50.

A set of 100 patterns is used for ANN training and recalling. Each input pattern is composed of the first 200 samples, y(n), ..., y(n-199), following the application of the mass to the platform. The input patterns for training and recalling were generated in the C++ programming language to simulate eqn (2). The weighing platform parameters in all simulations are K = 1000 N/mm, C = 50 Ns/mm, m_p = 0 kg, g = 10 m/s^2, sampling interval t_s = 0.02 ms, initial platform displacement b_0 = 0 mm and initial velocity b_1 = 0 mm/s. Applied masses were uniformly chosen to cover the range m(t) = 1, 2, ..., 100 kg. For the platform parameters given, only the underdamped expression, F_u(q_u, t) in eqn (2), was required.

An Artificial Neural Network was trained by applying the noise-free patterns, where the training procedure was as explained in the previous section. The performance of the trained ANN is illustrated in Fig. 4. The simulation indicates a linear relationship between applied mass m(t) and estimated output mass w(n). Fig. 5 shows the error in the linearity of Fig. 4 and indicates an RMS error and average noise error of 0.0772 kg and 0.1187% between applied and estimated masses respectively. These simulation results show that the ANN is able to accurately model the non-linear relationship between platform time series data and the corresponding applied mass.

For Artificial Neural Network recall purposes, the seen and unseen input patterns were contaminated by uniformly distributed random simulated measurement noise with an amplitude of 2% of the steady-state mass. The patterns were then applied to the aforementioned trained ANN with the architecture shown in Fig. 3 to obtain the estimated output mass values, w(n). The simulation results for recalling the seen and unseen input patterns are shown in Fig. 6. Table 1 provides the overall RMS and average noise errors of the training and recalling performances. The RMS error of the unseen data is 0.4534 kg, which is a negligible increase of 0.0326 kg compared to the RMS error for the seen noisy data. The effect of the contaminating noise with an amplitude of 2% results in small average errors of 0.5641% and 0.4593% on estimated seen and unseen masses respectively. These verify that a beneficial 'noise averaging' is performed by the Artificial Neural Network.

Table 1. Simulated training and recalling errors

Patterns     RMS error (kg)   Average error
Training     0.4208           0.5641%
Recalling    0.4534           0.4593%

5 EXPERIMENTAL PERFORMANCE

In this section the training and recalling processes of the Artificial Neural Network are reported using experimental data which were obtained from an industrial weighing platform. The weighing platform has dimensions of 55.50 cm, 50.50 cm and 16.50 cm for length, width and height, respectively, with a nominal full scale taken to be about 100 kg.

An Artificial Neural Network was trained with experimental time series patterns. For each of a sequence of applied masses, 200 samples were taken in the transient region immediately following the application of the mass to the platform, with uniform sampling intervals of 2 ms. Thus, the overall time to produce an estimated output for each applied mass is 400 ms. Here the sequence of applied masses was taken to be 5-45 kg in steps of 1 kg, except for the specific values of masses used for recall purposes below.

The resulting error performance of the trained ANN in a laboratory noisy environment for various masses is illustrated in Fig. 7. Sources of noise may include signal interference and any higher modes of platform vibration.

To test the ability of the Artificial Neural Network at recalling, time series experimental patterns were obtained that were unseen, i.e. not used for the training of the Artificial Neural Network. The masses were 10, 20, 30 and 40 kg. To conform to a more realistic use of a weighing platform, these masses were applied with less care than for the training data. That is, the masses were dropped onto the platform from a height of typically 2 cm. Fig. 8 shows the response waveforms produced.

Table 2 shows the overall errors for both the experimental training data and recall data. As expected, the recall data errors are higher because the masses were applied in a more realistic fashion. Even so, the errors are relatively small considering that the weights are estimated dynamically, long before the waveforms have settled to steady state.

Table 2. Experimental training and recalling errors

Patterns     RMS error (kg)   Average error
Training     0.0487           0.1442%
Recalling    0.4782           1.3470%

For recalling, the unseen experimental data patterns were applied as inputs to the trained Artificial Neural Network. The ANN estimated masses for the unseen input patterns are tabulated in Table 3. The estimated masses show that accurate predictions can be achieved even when several modes of vibration are present in the patterns of the unseen applied masses.

Table 3. Experimental recalling results of the unseen masses

Applied mass to the       Estimated mass     Error between applied and
weighing platform (kg)    of the ANN (kg)    estimated masses (kg)
10                        9.8814             0.1186
20                        19.8137            0.1863
30                        30.1190            0.1190
40                        40.9231            0.9230
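The error figures of Tables 1-3 can be reproduced from applied/estimated mass pairs. The exact metric definitions are not stated in the text, so the two functions below are assumptions: RMS error in kg, and mean absolute error expressed as a percentage of the mean applied mass. Applied to the Table 3 data, they reproduce the Table 2 recall row, which suggests they match the definitions used.

```python
import math

def rms_error(applied, estimated):
    """Root-mean-square error (kg) between applied and estimated masses."""
    n = len(applied)
    return math.sqrt(sum((a - e) ** 2 for a, e in zip(applied, estimated)) / n)

def average_error_percent(applied, estimated):
    """Mean absolute error as a percentage of the mean applied mass
    (an assumed definition that reproduces the published figures)."""
    mean_abs = sum(abs(a - e) for a, e in zip(applied, estimated)) / len(applied)
    mean_applied = sum(applied) / len(applied)
    return 100.0 * mean_abs / mean_applied

# Recall data from Table 3 (unseen masses):
applied = [10.0, 20.0, 30.0, 40.0]
estimated = [9.8814, 19.8137, 30.1190, 40.9231]

print(f"RMS error: {rms_error(applied, estimated):.4f} kg")  # ≈ 0.4782 (Table 2)
print(f"Average error: {average_error_percent(applied, estimated):.4f}%")  # 1.3470%
```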
ACKNOWLEDGEMENTS

The co-operation of W. & T. Avery Ltd in providing access to a weighing platform is gratefully acknowledged. Thanks are due to Dr M. Danaci and Dr B. G. Cetiner for useful discussions while at The School of Engineering, Cardiff University of Wales, UK.

APPENDIX

Here the model parameters are defined for the weighing system platform constants K and C, the applied mass m(t), the platform mass m_p, the initial platform displacement b_0, and the initial velocity b_1. This is done for the underdamped (u), critically damped (c) and overdamped (o) cases, respectively.