
Artificial Intelligence in Engineering 12 (1998) 135-139
© 1997 Elsevier Science Limited
Printed in Great Britain. All rights reserved.
PII: S0954-1810(97)00017-4    0954-1810/98/$19.00

Dynamic weight estimation using an artificial neural network
H. B. Bahar
Faculty of Engineering, Tabriz University, Tabriz, Iran

&

D. H. Horrocks*
School of Engineering, Cardiff University of Wales, PO Box 689, Cardiff CF2 3TF, UK

(Received 20 January 1997; revised version received 15 March 1997; accepted 6 April 1997)

A Multi-Layer Perceptron Artificial Neural Network is employed to enable the mass that is applied to a weighing platform to be rapidly and accurately estimated before the platform has settled to the steady state. This is achieved through training the network on a set of waveforms resulting from applied masses over the operating range of the weighing platform. Results are given for both simulated and experimental data that confirm the success of the method. © 1997 Elsevier Science Limited.

Key words: Artificial Neural Networks, dynamic weight estimation, Multi-Layer Perceptron, platform model parameters, real-time system.

1 INTRODUCTION

The application of an object to a weighing platform results in a transient output waveform which can take a considerable time to settle sufficiently to accurately indicate the weight of the object. Accurate and fast weighing is a widespread requirement in industrial and other applications.1-7

Various approaches have been proposed to improve the speed of weighing platforms. These include adaptive filtering,8,9 in which transient effects are shortened but are still present. In another approach,10,11 using non-linear regression, a second-order dynamic model is fitted to a short initial segment of the platform output signal. From this the predicted applied mass is output as a parameter. Here it is assumed that the mass is applied as an ideal step and that no other dynamic modes of vibration are present.

In this paper the use of an Artificial Neural Network (ANN) is described for the prediction of the applied mass. ANNs have been used in other dynamic problems (see, for example, Ananthraman and Garg12 for a simulation study of a robot control problem). The method can operate on a finite segment of platform data and therefore has no residual transient effects, and further can be trained on experimental data which therefore intrinsically include the effects of any other dynamic modes of vibration.

The Multi-Layer Perceptron (MLP) Artificial Neural Network is among the most popular and versatile forms of ANN classifier. It has been shown that MLP networks with a single hidden layer and a non-linear activation function are universal classifiers.13-18 The MLP is used in this problem with backpropagation training. The implementation of an MLP is described in Pandya and Macy,13 Rumelhart et al.14 and Haykin,15 for example.

Simulation results demonstrate the potential low sensitivity of the method to simulated measurement noise. Results for an ANN operating on experimental data obtained from an actual weighing platform are given that confirm that accurate prediction of mass is obtained even when several modes of vibration are present.

*Corresponding author. Tel.: +44 (0) 1222 874000, ext. 5917; fax: +44 (0) 1222 874420; e-mail: Horrocks@Cardiff.ac.uk.

2 WEIGHING SYSTEM MODEL

An ideal weighing platform can be modelled by the mass-spring-damping structure shown in Fig. 1. It has a typical underdamped ideal step response as illustrated in Fig. 2.

Fig. 1. A model of a weighing platform.

It is governed by the solution of the following second-order differential equation:

$(m(t) + m_p)\,y''(t) + C\,y'(t) + K\,y(t) = (m(t) + m_p)\,g \qquad (1)$

where y(t) is the deflection signal obtained from the strain gauge on the weighing machine; m(t) and m_p are the applied mass and the platform mass respectively; C is the damping factor; K is the spring constant and g is the gravitational constant.

For a general applied mass function m(t), this is a non-linear differential equation. However, for commonly encountered situations m(t) is a step function, which is assumed here. In this case the differential eqn (1) is linear, and its explicit solution is modelled by a constant term plus a transient term which can be underdamped (u), critically damped (c) or overdamped (o). Thus,

$y(t) = q_0 + [F_u(q_u, t),\ F_c(q_c, t)\ \mathrm{or}\ F_o(q_o, t)] \qquad (2)$

and the transient terms for the underdamped, critically damped and overdamped cases are

$F_u(q_u, t) = e^{-q_{u1}t}\,q_{u2}\sin(q_{u3}t + q_{u4})$

$F_c(q_c, t) = e^{-q_{c1}t}(q_{c2} + q_{c3}t)$

and

$F_o(q_o, t) = e^{-q_{o1}t}q_{o2} + e^{-q_{o3}t}q_{o4}$

respectively, where the various q parameters are related to the initial platform displacement b_0, the initial velocity b_1, the platform parameters K, C and m_p, and the applied mass m(t) by the expressions given in the Appendix (see Danaci and Horrocks10 and Danaci11). These expressions have been used to generate data in the simulation study described below.

Sampled data signals are assumed, for which t = nT, where T is the sample interval. Thus, y(t) is written as y(n).
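For concreteness, a minimal sketch of the underdamped case of eqns (1) and (2) is given below, written in C++ (the language used for the simulations reported later). It uses the equivalent cosine/sine form of the underdamped solution rather than the amplitude/phase parameters of the Appendix; the function name and argument list are illustrative and are not taken from the paper.

#include <cmath>

// Sketch: underdamped step response of the platform model of eqn (1),
//   (m + mp) y'' + C y' + K y = (m + mp) g,
// evaluated at time t for initial displacement b0 and initial velocity b1.
// Valid only when (0.5 C / M)^2 < K / M, i.e. the underdamped case of eqn (2).
double underdampedDeflection(double m, double mp, double K, double C,
                             double g, double b0, double b1, double t)
{
    const double M  = m + mp;                    // total moving mass
    const double q0 = M * g / K;                 // steady-state deflection (q0 of the Appendix)
    const double a  = 0.5 * C / M;               // decay rate (q1 of the Appendix)
    const double wd = std::sqrt(K / M - a * a);  // damped natural frequency (q3 of the Appendix)

    // Transient coefficients chosen so that y(0) = b0 and y'(0) = b1.
    const double A = b0 - q0;
    const double B = (b1 + a * (b0 - q0)) / wd;

    return q0 + std::exp(-a * t) * (A * std::cos(wd * t) + B * std::sin(wd * t));
}

The amplitude/phase parameters of the Appendix are recovered from A and B, up to sign convention, as q2 = sqrt(A^2 + B^2) together with a corresponding phase angle, so this form is interchangeable with eqn (2).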

Fig. 2. A typical step response of a weighing platform.

Fig. 4. Simulated performance of the trained ANN with noise-free patterns.

3 ARTIFICIAL NEURAL NETWORK TECHNIQUE

An input-output model describes a dynamic system based on input and output data. An input-output model assumes that the new system output can be predicted from the past inputs of the system. Further, if the system is supposed to be deterministic, time-invariant and single-input-single-output, the input-output model becomes

$w(n) = f(y(n), y(n-1), \ldots, y(n-N+1)) \qquad (3)$

where y(n) and w(n) represent the input-output pair of the system at time n, the positive integer N is the number of past input samples (called the order of the system) and f is a static non-linear function which maps the past inputs to a new output. The task of system identification is essentially to find suitable mappings which can approximate the mappings implied in a dynamic system. Eqn (3) can be represented by the block diagram shown in Fig. 3, where the dynamic weighing system is defined by the function f and the symbol $z^{-1}$ denotes the time delay between two successive samples.19

Fig. 3. Input-output Artificial Neural Network (inputs y(n), y(n-1), ..., y(n-N+1) obtained through a chain of $z^{-1}$ delays; output w(n)).

A Multi-Layer Perceptron (MLP) Artificial Neural Network (ANN) trained with the backpropagation supervised learning method was used for training and testing with the data obtained both from the simulation and from the real-time system.
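The tapped delay line of Fig. 3 amounts to presenting the most recent N samples of the platform signal to the network as a single input pattern. A minimal sketch of that windowing step is shown below; the function and variable names are illustrative only.

#include <cstddef>
#include <vector>

// Build the ANN input pattern [y(n), y(n-1), ..., y(n-N+1)] of eqn (3)
// from a sampled deflection signal y, mimicking the z^-1 delay line of Fig. 3.
// Requires n >= N - 1.
std::vector<double> inputPattern(const std::vector<double>& y, std::size_t n, std::size_t N)
{
    std::vector<double> pattern(N);
    for (std::size_t k = 0; k < N; ++k)
        pattern[k] = y[n - k];                   // k-th tap holds y(n - k)
    return pattern;
}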

Fig. 5. Error between applied and estimated masses in Fig. 4.

Fig. 7. Error performance for experimental data.

As is well known, backpropagation learning involves using an iterative gradient descent algorithm to minimise the mean square error between the actual outputs of the network and the desired outputs in response to given inputs.20 The following steps are involved in constructing and training an MLP network:

(i) Defining the structure of the network (the number of layers and the number of neurones in each layer);
(ii) Selecting the learning parameters (learning rate and momentum rate);
(iii) Initialising the connection coefficients;
(iv) Selecting an input-output pair from the training examples set and presenting it to the network;
(v) Calculating the output values of the neurones in the hidden and output layers;
(vi) Comparing the output values of the network with the desired output values and calculating the output errors;
(vii) Adjusting the connection coefficients of the network in order to decrease the output errors;
(viii) Repeating steps (iv) to (vii) until the error is acceptable or a predefined number of iterations is completed.

In backpropagation, training is performed by forward and backward operations. In the forward operation, the network produces its actual outputs for a certain input pattern using the current connection coefficients. Subsequently, the backward operation is carried out to alter the coefficients to decrease the error between the actual and desired outputs.
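As an illustration of steps (i)-(viii), a compact single-hidden-layer MLP with sigmoidal hidden neurones, a linear output neurone and gradient-descent weight updates with a momentum term is sketched below in C++. This is a pedagogical sketch rather than the authors' implementation: the class, its interface and the choice of a linear output neurone are assumptions, and the layer sizes and the learning and momentum rates quoted in Section 4 would simply be passed to the constructor.

#include <cmath>
#include <cstdlib>
#include <vector>

// Minimal single-hidden-layer MLP trained with backpropagation
// (gradient descent on the squared output error, with a momentum term).
class Mlp {
public:
    // Step (i): define the structure; step (ii): choose learning/momentum rates;
    // step (iii): initialise the connection coefficients with small random values.
    Mlp(int nIn, int nHid, double eta, double alpha)
        : nIn_(nIn), nHid_(nHid), eta_(eta), alpha_(alpha),
          w1_(nHid, std::vector<double>(nIn + 1)), dw1_(nHid, std::vector<double>(nIn + 1, 0.0)),
          w2_(nHid + 1), dw2_(nHid + 1, 0.0), hid_(nHid)
    {
        for (auto& row : w1_)
            for (auto& w : row) w = smallRandom();
        for (auto& w : w2_) w = smallRandom();
    }

    // Forward operation (step (v)): sigmoidal hidden layer, linear output neurone.
    double forward(const std::vector<double>& x)
    {
        double out = w2_[nHid_];                            // output bias
        for (int j = 0; j < nHid_; ++j) {
            double a = w1_[j][nIn_];                        // hidden bias
            for (int i = 0; i < nIn_; ++i) a += w1_[j][i] * x[i];
            hid_[j] = 1.0 / (1.0 + std::exp(-a));           // sigmoid activation
            out += w2_[j] * hid_[j];
        }
        return out;
    }

    // Backward operation (steps (vi)-(vii)): compare with the desired output,
    // then adjust the coefficients to decrease the output error.
    double train(const std::vector<double>& x, double target)
    {
        double out = forward(x);
        double err = target - out;                          // output error (step (vi))

        // Local gradients of the hidden neurones, computed before any weight changes.
        std::vector<double> delta(nHid_);
        for (int j = 0; j < nHid_; ++j)
            delta[j] = err * w2_[j] * hid_[j] * (1.0 - hid_[j]);

        // Step (vii): adjust output-layer coefficients (gradient term plus momentum).
        for (int j = 0; j < nHid_; ++j) {
            dw2_[j] = eta_ * err * hid_[j] + alpha_ * dw2_[j];
            w2_[j] += dw2_[j];
        }
        dw2_[nHid_] = eta_ * err + alpha_ * dw2_[nHid_];
        w2_[nHid_] += dw2_[nHid_];

        // Step (vii): adjust hidden-layer coefficients.
        for (int j = 0; j < nHid_; ++j) {
            for (int i = 0; i < nIn_; ++i) {
                dw1_[j][i] = eta_ * delta[j] * x[i] + alpha_ * dw1_[j][i];
                w1_[j][i] += dw1_[j][i];
            }
            dw1_[j][nIn_] = eta_ * delta[j] + alpha_ * dw1_[j][nIn_];
            w1_[j][nIn_] += dw1_[j][nIn_];
        }
        return err * err;                                   // monitored for step (viii)
    }

private:
    static double smallRandom() { return 0.1 * (2.0 * std::rand() / RAND_MAX - 1.0); }

    int nIn_, nHid_;
    double eta_, alpha_;                                    // learning rate, momentum rate
    std::vector<std::vector<double>> w1_, dw1_;             // input->hidden weights and last updates
    std::vector<double> w2_, dw2_;                          // hidden->output weights and last updates
    std::vector<double> hid_;                               // hidden activations from the last forward pass
};

With the settings of Section 4 this would be constructed as Mlp net(200, 100, 0.50, 0.95) and trained by repeatedly presenting (input pattern, applied mass) pairs, as in steps (iv) and (viii).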
4 SIMULATED PERFORMANCE

Fig. 6. Simulated recalling results with 2% noisy patterns: (a) for seen applied masses; (b) for unseen applied masses.

Fig. 8. Experimental weighing platform waveforms for applied masses of 10-40 kg.

The Artificial Neural Network estimation architecture of Fig. 3 is used for simulation purposes. The ANN was trained to emulate the dynamic behaviour of the weighing system, so that the output w(n) is an estimate of the applied mass m(t). Suitable specifications for the ANN model as illustrated in Fig. 3 were found to be:

• Number of input samples, y(n), ..., y(n - N + 1), is taken as N = 200.

• Number of output samples, w(n), is 1.
• Number of layers is 3: an input layer, a hidden layer and an output layer.
• Total number of neurones is 301: 200 neurones at the input layer, 100 neurones at the hidden layer, and a single neurone at the output layer.
• Momentum rate is 0.95.
• Learning rate is 0.50.

A set of 100 patterns is used for ANN training and recalling. Each input pattern is composed of the first 200 samples, y(n), ..., y(n - 199), following the application of the mass to the platform. The input patterns for training and recalling were generated by a C++ program that simulates eqn (2). The weighing platform parameters in all simulations are K = 1000 N/mm, C = 50 Ns/mm, m_p = 0 kg, g = 10 m/s^2, sampling interval t_s = 0.02 ms, initial platform displacement b_0 = 0 mm and initial velocity b_1 = 0 mm/s. Applied masses were uniformly chosen to cover the range m(t) = 1, 2, ..., 100 kg. For the platform parameters given, only the underdamped expression, F_u(q_u, t), in eqn (2) was required.
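A sketch of how these simulated patterns could be generated is given below. The parameter values are used exactly as quoted above (the paper does not state a unit conversion), and the response is evaluated with the same cosine/sine form of the underdamped solution as in the sketch in Section 2; the function name is illustrative.

#include <cmath>
#include <vector>

// Sketch of the simulated training-set generation described above: for each applied
// mass of 1..100 kg, sample the first 200 points of the underdamped response of eqn (2).
std::vector<std::vector<double>> makeTrainingPatterns()
{
    const double K = 1000.0, C = 50.0, mp = 0.0, g = 10.0;  // platform constants as quoted
    const double Ts = 0.02e-3;                              // sampling interval as quoted, 0.02 ms
    const double b0 = 0.0, b1 = 0.0;                        // initial displacement and velocity
    const int    N  = 200;                                  // samples per pattern

    std::vector<std::vector<double>> patterns;
    for (int m = 1; m <= 100; ++m) {                        // applied masses 1, 2, ..., 100 kg
        const double M  = m + mp;
        const double q0 = M * g / K;                        // steady-state deflection
        const double a  = 0.5 * C / M;                      // decay rate
        const double wd = std::sqrt(K / M - a * a);         // damped frequency (underdamped for these values)
        const double A  = b0 - q0;
        const double B  = (b1 + a * (b0 - q0)) / wd;

        std::vector<double> y(N);
        for (int n = 0; n < N; ++n) {                       // y(n) = y(nT) per eqn (2)
            const double t = n * Ts;
            y[n] = q0 + std::exp(-a * t) * (A * std::cos(wd * t) + B * std::sin(wd * t));
        }
        patterns.push_back(y);
    }
    return patterns;
}

For the recall experiments described next, each such pattern would additionally be perturbed with uniformly distributed random noise of amplitude 2% of the steady-state value.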
An Artificial Neural Network was trained by applying the noise-free patterns, where the training procedure was as explained in the previous section. The performance of the trained ANN is illustrated in Fig. 4. The simulation indicates a linear relationship between the applied mass m(t) and the estimated output mass w(n). Fig. 5 shows the error in the linearity of Fig. 4 and indicates an RMS error and average noise error of 0.0772 kg and 0.1187%, respectively, between applied and estimated masses. These simulation results show that the ANN is able to accurately model the non-linear relationship between the platform time series data and the corresponding applied mass.

For Artificial Neural Network recall purposes, the seen and unseen input patterns were contaminated by uniformly distributed random simulated measurement noise with an amplitude of 2% of the steady-state mass. The patterns were then applied to the aforementioned trained ANN with the architecture shown in Fig. 3 to obtain the estimated output mass values, w(n). The simulation results for recalling the seen and unseen input patterns are shown in Fig. 6. Table 1 provides the overall RMS and average noise errors of the training and recalling performances. The RMS error of the unseen data is 0.4534 kg, which is a negligible increase of 0.0326 kg compared to the RMS error for the seen noisy data. The contaminating noise with an amplitude of 2% results in small average errors of 0.5641% and 0.4593% on the estimated seen and unseen masses respectively. These verify that a beneficial 'noise averaging' is performed by the Artificial Neural Network.

Table 1. Simulated training and recalling errors

  Patterns     RMS error (kg)   Average error
  Training     0.4208           0.5641%
  Recalling    0.4534           0.4593%

5 EXPERIMENTAL PERFORMANCE

In this section the training and recalling processes of the Artificial Neural Network are reported using experimental data which were obtained from an industrial weighing platform. The weighing platform has dimensions of 55.50 cm, 50.50 cm and 16.50 cm for length, width and height, respectively, with a nominal full scale taken to be about 100 kg.

An Artificial Neural Network was trained with experimental time series patterns. For each of a sequence of applied masses, 200 samples were taken in the transient region immediately following the application of the mass to the platform, with uniform sampling intervals of 2 ms. Thus, the overall time to produce an estimated output for each applied mass is 400 ms. Here the sequence of applied masses was taken to be 5-45 kg in steps of 1 kg, except for the specific values of masses used for recall purposes below.

The resulting error performance of the trained ANN in a noisy laboratory environment for various masses is illustrated in Fig. 7. Sources of noise may include signal interference and any higher modes of platform vibration.

To test the ability of the Artificial Neural Network at recalling, time series experimental patterns were obtained that were unseen, i.e. not used for the training of the Artificial Neural Network. The masses were 10, 20, 30 and 40 kg. To conform to a more realistic use of a weighing platform, these masses were applied with less care than for the training data. That is, the masses were dropped onto the platform from a height of typically 2 cm. Fig. 8 shows the response waveforms produced.

Table 2 shows the overall errors for both the experimental training data and the recall data. As expected, the recall data errors are higher because the masses were applied in a more realistic fashion. Even so, the errors are relatively small considering that the weights are estimated dynamically, long before the waveforms have settled to steady state.

Table 2. Experimental training and recalling errors

  Patterns     RMS error (kg)   Average error
  Training     0.0487           0.1442%
  Recalling    0.4782           1.3470%

For recalling, the unseen experimental data patterns were applied as an input to the trained Artificial Neural Network. The masses estimated by the ANN for the unseen input patterns are tabulated in Table 3. The estimated masses show that accurate predictions can be achieved even when several modes of vibration are present in the patterns of the unseen applied masses.

Table 3. Experimental recalling results of the unseen masses

  Applied mass to the        Estimated mass     Error between applied and
  weighing platform (kg)     of the ANN (kg)    estimated masses (kg)
  10                         9.8814             0.1186
  20                         19.8137            0.1863
  30                         30.1190            0.1190
  40                         40.9231            0.9230
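The paper does not give explicit formulae for the error entries in Tables 1-3. One plausible reading, sketched below, is a root-mean-square error in kilograms together with the mean absolute error relative to the applied mass expressed as a percentage; the function and field names, and the exact definitions, are assumptions.

#include <cmath>
#include <cstddef>
#include <vector>

// Assumed error measures for (applied, estimated) mass pairs such as those in Table 3:
// RMS error in kg, and the mean of |error| / applied expressed as a percentage.
struct ErrorSummary { double rmsKg; double averagePercent; };

ErrorSummary summarise(const std::vector<double>& applied, const std::vector<double>& estimated)
{
    double sumSq = 0.0, sumPct = 0.0;
    const std::size_t n = applied.size();
    for (std::size_t i = 0; i < n; ++i) {
        const double e = applied[i] - estimated[i];
        sumSq  += e * e;
        sumPct += std::fabs(e) / applied[i] * 100.0;
    }
    return { std::sqrt(sumSq / n), sumPct / n };
}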

6 CONCLUSION

It has been shown that an Artificial Neural Network technique can be used for the rapid prediction of an applied mass in a noisy environment while the weighing platform is still in the transient mode. This has been done for both simulated and experimental data. The obtained results are successful and encouraging.

ACKNOWLEDGEMENTS

The co-operation of W. & T. Avery Ltd in providing access to a weighing platform is gratefully acknowledged. Thanks are due to Dr M. Danaci and Dr B. G. Cetiner for useful discussions while at the School of Engineering, Cardiff University of Wales, UK.

REFERENCES

1. Norden, K. E., Electronic Weighing in Industrial Processes. Granada Publishing, Technical Books, London, UK, 1984.
2. Ferguson, C., From beam to load cell. Engineering, 1974, 214, 202-207.
3. Lolley, R. A., A review of industrial weighing systems, Part 1. Meas. and Cont., 1976, 9, 411-416.
4. Lolley, R. A., A review of industrial weighing systems, Part 2. Meas. and Cont., 1976, 9, 435-439.
5. McNabe, A., The design of a portable micro-based weighing instrument. Journal of Microcomputer Applications, 1985, 8, 135-143.
6. King, A., An introduction to weighing systems. Electronic Technology, 1987, 21, 25-29.
7. Kersten, J., Weighing systems. Control and Instrumentation, 1995, 99, 23-28.
8. Shi, W. J., White, N. M. and Brignell, J. E., Adaptive filters in load cell response correction. Sensors and Actuators, 1993, 37, 280-285.
9. Shu, W. Q., Dynamic weighing under non-zero initial conditions. IEEE Trans. on Inst. and Meas., 1993, 42(4), 806-811.
10. Danaci, M. and Horrocks, D. H., A non-linear regression technique for improved dynamic weighing. In Proceedings of the European Conference on Circuit Theory and Design, Istanbul, Turkey, August 1995, pp. 507-510.
11. Danaci, M., Signal processing techniques applied to dynamic weighing systems. PhD thesis, School of Engineering, Cardiff University of Wales, UK, 1996.
12. Ananthraman, A. and Garg, D. P., Training backpropagation and CMAC Neural Networks for control of a SCARA robot. Journal of Engineering Applications of Artificial Intelligence, 1993, 6(2), 105-115.
13. Pandya, A. S. and Macy, R. B., Pattern Recognition with Neural Networks in C++. CRC Press Inc., USA, 1996.
14. Rumelhart, D. E., Hinton, G. E. and Williams, R. J., Learning internal representations by error propagation. In Parallel Distributed Processing: Foundations, ed. D. E. Rumelhart and J. L. McClelland. MIT Press, Cambridge, MA, 1986, pp. 318-362.
15. Haykin, S., Neural Networks, A Comprehensive Foundation. Macmillan College Publishing Company, New York, USA, 1994.
16. Cybenko, G., Approximation by superpositions of a sigmoidal function. Math. Control, Signals, Syst., 1989, 2, 303-314.
17. Hartman, E., Keeler, J. D. and Kowalski, J. M., Layered Neural Networks with Gaussian hidden units as universal approximations. Neural Computation, 1990, 2, 210-215.
18. Hornik, K., Stinchcombe, M. and White, H., Multilayer feedforward networks are universal approximators. Neural Networks, 1989, 2, 359-366.
19. Pham, D. T. and Liu, X., Neural Networks for Identification, Prediction and Control, 2nd edn. Springer-Verlag, London, UK, 1995.
20. Masters, T., Practical Neural Network Recipes in C++. Academic Press, Boston, 1993.

APPENDIX

Here the model parameters are defined in terms of the weighing system platform constants K and C, the applied mass m(t), the platform mass m_p, the initial platform displacement b_0 and the initial velocity b_1. This is done for the underdamped (u), critically damped (c) and overdamped (o) cases, respectively.

Underdamped (u) case:

$q_0 = (m(t) + m_p)g/K$

$q_1 = 0.5C/(m(t) + m_p)$

$q_2 = \sqrt{B_1^2 + B_2^2}$

$q_3 = \omega_d = \sqrt{K(m(t) + m_p)^{-1} - q_1^2}$

$q_4 = \tan^{-1}(B_1/B_2)$

where

$B_1 = q_0 - b_0, \qquad B_2 = (b_1 + B_1 q_1)/q_3$

and $\omega_d$ is the natural damped frequency.

Critically damped (c) case:

$q_0 = (m(t) + m_p)g/K$

$q_1 = 0.5C/(m(t) + m_p)$

$q_2 = q_0 - b_0$

$q_3 = b_1 + q_1 q_2$

Overdamped (o) case:

$q_0 = (m(t) + m_p)g/K$

$q_1 = -0.5C/(m(t) + m_p) + \omega_d$

$q_2 = -\dfrac{(q_0 - b_0)q_3 + b_1}{2\omega_d}$

$q_3 = -0.5C/(m(t) + m_p) - \omega_d$

$q_4 = -\dfrac{(q_0 - b_0)q_1 + b_1}{2\omega_d}$
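For clarity, the underdamped case can also be written in a form that displays the initial conditions directly. This rearrangement is added here as an aid to the reader and is not part of the original appendix; it is algebraically equivalent to $q_0 + F_u(q_u, t)$ up to the sign convention of $B_1$ and $B_2$:

$$
y(t) = q_0 + e^{-q_1 t}\left[(b_0 - q_0)\cos(\omega_d t) + \frac{b_1 + q_1\,(b_0 - q_0)}{\omega_d}\,\sin(\omega_d t)\right],
$$

which satisfies $y(0) = b_0$ and $y'(0) = b_1$, with $q_1$ and $\omega_d$ as defined above.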
