1. Introduction
Address for correspondence: Dali Wang, Department of Physics, Computer Science & Engineering,
Christopher Newport University, Newport News, VA 23606, USA. E-mail: dwang@cnu.edu
Figure 3 appears in colour online: http://tim.sagepub.com
by the neural network model. To verify the effectiveness of the proposed technique, a
numerical experiment is performed based on a common industrial set-up with different
types of error distribution.
2. Manipulator calibration
The modeless manipulator calibration is divided into two steps (Zhuang and Roth,
1996). The first step is to identify the position errors at all grid points on a standard
calibration board installed in the robot's workspace. A calibrated camera attached to
the robot's end-effector is used to measure the position errors of the end-effector. For
this research, a 2-D measurement is used: the camera is fixed at a certain height above
the calibration board so that it can never touch the board, and all movements of the
robot take place in a 2-D plane at that same height relative to the calibration board.
In Figure 1, the desired position of grid point 0 is (x0, y0), and the actual position of
the robot end-effector is (x'0, y'0). The position errors for this grid point are
ex = x0 - x'0 and ey = y0 - y'0. The robot is moved to every grid point on the standard
calibration board, and the position errors at these grid points are measured and stored
in memory for later use.
One prerequisite for this calibration process is that the camera must be calibrated
accurately, since the final robot calibration accuracy depends directly on the accuracy
of the camera calibration. The purpose of camera calibration is to determine the
relationship between the field-of-view (FOV) co-ordinate system and the base frame of
the robot (BFR). Generally, there are five main parameters to be calibrated: two of
them are called internal parameters and three of them are called external parameters.
The purpose of the calibration is to obtain the actual values of these parameters.
[Figures 1 and 2. Figure 1: the grid calibration board in the robot cell, with a grid
point's desired position (X0, Y0) and its actual position (X'0, Y'0) as seen by the
camera above the target. Figure 2: the field-of-view co-ordinate frame (Xv, Yv;
unit = pixel; pixel counts m, n) and the robot base frame (Xr, Yr), related through the
offsets Cx, Cy, a rotation angle, and the vectors P and R.]
    Z = [ J11  J12  0 ] [ x ]
        [ J21  J22  0 ] [ y ]        (the Zv vector WRT the robot base frame BFR)        (3b)
        [  0    0   0 ] [ 0 ]
Next, we need to compute the Jacobian Matrix.
Collect N sets of data [dx, dy] and N sets of data [dm, dn]. Then use the pseudo-
inverse to find J:

    [ dx ]   [ J11  J12 ] [ dm ]
    [ dy ] = [ J21  J22 ] [ dn ]        (4)

    [ dx1 dx2 dx3 ... dxN ]   [ J11  J12 ] [ dm1 dm2 dm3 ... dmN ]
    [ dy1 dy2 dy3 ... dyN ] = [ J21  J22 ] [ dn1 dn2 dn3 ... dnN ]

    A = J · B    =>    A · B^T = J · B · B^T    =>    J = A · B^T · (B · B^T)^-1

where the '·' operator represents matrix multiplication and B^T is the transpose of B.
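The pseudo-inverse estimate J = A · B^T · (B · B^T)^-1 can be sketched in a few lines of Python; numpy is assumed, and the Jacobian values and synthetic data below are purely illustrative:

```python
import numpy as np

def estimate_jacobian(A, B):
    """Least-squares estimate J = A @ B.T @ inv(B @ B.T).

    A: 2 x N robot-frame displacements [dx; dy]
    B: 2 x N field-of-view (pixel) displacements [dm; dn]
    """
    return A @ B.T @ np.linalg.inv(B @ B.T)

# Synthetic check: map pixel displacements through a known Jacobian,
# then recover that Jacobian from the collected data.
rng = np.random.default_rng(0)
J_true = np.array([[0.004, 0.001],
                   [-0.001, 0.004]])   # mm per pixel, illustrative
B = rng.normal(size=(2, 50))           # N = 50 measured pixel displacements
A = J_true @ B                         # corresponding robot displacements
J_est = estimate_jacobian(A, B)
```

With noise-free data the estimate matches the true Jacobian exactly; with measurement noise, collecting more than the minimum two data pairs averages the noise out, which is the point of the pseudo-inverse formulation.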
To calculate Cx and Cy, referring to Figure 2, we have R = Cu · P, where Cu contains
the translation (Cx, Cy) and the rotation Cθ:

    Cu = Ct · Cθ    (Cθ is determined),    Ct = R · (Cθ · P)^-1
After these five parameters are calibrated, we can store them and use them later to
map the target position from the FOV to the BFR.
In the second step, the robot's end-effector is moved to an arbitrary target position
within the workspace. The target position errors can be found by an interpolation
technique using the stored position errors of the neighbouring grid points around the
target position, which were obtained in the first step. Finally, the target position can
be compensated with the interpolation results to obtain a more accurate position.
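The interpolation step can be sketched with the bilinear scheme that this paper later uses as its baseline for comparison; the function and variable names below are illustrative assumptions, not from the paper:

```python
import math

def bilinear_error(err_grid, tx, ty):
    """Interpolate stored grid position errors at target (tx, ty).

    err_grid: 2-D array of measured errors at integer grid points.
    tx, ty:   target position expressed in grid units.
    """
    x0, y0 = int(math.floor(tx)), int(math.floor(ty))
    dx, dy = tx - x0, ty - y0          # fractional offsets inside the cell
    # Weighted average of the four surrounding grid-point errors.
    return ((1 - dx) * (1 - dy) * err_grid[y0][x0]
            + dx * (1 - dy) * err_grid[y0][x0 + 1]
            + (1 - dx) * dy * err_grid[y0 + 1][x0]
            + dx * dy * err_grid[y0 + 1][x0 + 1])
```

The estimated error is then used to compensate the commanded target position, as described above.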
The pixel size of the calibration camera we used is 4 × 4 μm, and the sensor has
1600 × 1200 pixels (about 2 million). The resolution of our camera (about 4 μm) is
higher than our application requires. In practice, the measurement accuracy and the
final calibration accuracy depend not only on the resolution of the camera, but also on
the repeatability of the robot. The repeatability of the PUMA-560 robot we used in this
work is about 0.1 mm (100 μm), whereas the best repeatability nowadays is around
20 μm for similar robots. Since the highest achievable calibration accuracy is bounded
by this repeatability, the resolution of the camera has little negative impact on position
error estimation.
[Figure: bilinear interpolation inside one cell, with corner grid point P4(x+1, y+1)
and the fractional offsets dx, 1-dx (horizontal, X) and dy, 1-dy (vertical, Y) of the
target point.]
    Zi = Σ(j∈H2) ωij zj + bi ,    i ∈ O        (6c)
[Figure: a feed-forward neural network with inputs (Input 1, Input 2), hidden layers
and outputs.]
where Di(k) is the target value for Zi(k). The error energy for all the neurons in the
output layer becomes,
    ε(k) = (1/2) Σ(j∈O) ej²(k)        (8)
The training objective is to minimize the mean-squared error of the neural network
over Nall sets of training data,
    E = (1/Nall) Σ(k=1..Nall) ε(k)        (9)
There are two phases in the back-propagation algorithm: the forward phase and the
backward phase. In the forward phase, the input signals are propagated through the
neural network layer by layer; at the end, the neurons in the output layer produce the
outputs of the network. An error vector is then generated by comparing the outputs of
the neural network with the target values. In the backward phase, the weights are
adjusted in a direction that reduces the error signals. The final synaptic weight vector
can then be used to predict the unknown input–output mapping.
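The two phases can be sketched for the small tanh-hidden, linear-output network described later in this paper; the weight initialization, learning rate and toy training sample below are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)

# A 2-4-2 network: tanh hidden layer, linear output layer.
W1, b1 = rng.normal(scale=0.5, size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(2)

def forward(x):
    """Forward phase: pass the input through the network layer by layer."""
    h = np.tanh(W1 @ x + b1)          # hidden activations
    return h, W2 @ h + b2             # hidden state and linear outputs

def train_step(x, target, lr=0.1):
    """Backward phase: back-propagate the error and adjust the weights."""
    global W1, b1, W2, b2
    h, z = forward(x)
    e = z - target                    # error vector at the output layer
    # Gradients of the error energy 0.5 * ||e||^2 (cf. equation (8)).
    dW2, db2 = np.outer(e, h), e
    dh = (W2.T @ e) * (1.0 - h**2)    # tanh'(a) = 1 - tanh(a)^2
    dW1, db1 = np.outer(dh, x), dh
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return 0.5 * float(np.sum(e**2))  # error energy for this sample

# Repeating the two phases shrinks the error energy on a toy sample.
x, t = np.array([0.3, -0.2]), np.array([0.05, -0.01])
losses = [train_step(x, t) for _ in range(200)]
```

After training, `forward(x)[1]` approximates the target mapping at `x`; in practice the step is applied over the whole training set rather than a single sample.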
The feed-forward multi-layer neural network structure is chosen mainly because of
the universal approximation ability of such structures (Hornik et al., 1989). Let f(x) be a
smooth function from R^n to R^m and let Sx ⊂ R^n; then for x ∈ Sx, there exist some
number of hidden layer neurons and weights ωij such that:

    f(x) = Σ(i∈O) Zi + e        (10)

The value e is the approximation error of the neural network. It has been proven that
for any given value e0, one can find a neural network configuration as above such that
e < e0. This is called the universal approximation property of neural networks. This
property, combined with its track record, makes this structure the choice for our
application.
114 Robot manipulator calibration
The position errors at the grid points are measured during the first phase of the
modeless calibration process. The error at an unknown position is then estimated using
the neural network model, as shown in Figure 5.
The neighbourhood of the current cell is used to form the training input/output
pairs of the neural network. The centre of the cell is treated as the origin of the 2-D
space as shown in Figure 6; a neighbouring grid point's position is measured in terms of
its distance to the centre in the horizontal (x) and vertical (y) directions. A vector
containing the co-ordinate pair of the grid point is used as the input to the neural
network. The outputs are the measured end-effector position errors in the horizontal
(x) and vertical (y) directions. The position errors in the two directions are separate
quantities and can be treated separately; we therefore refer to them as position errors
in general without loss of generality.
During training, the position errors at the grid points form the input and target
output pairs of the neural network. After each grid point is processed, an error is
computed between the target and the neural network output, and the derivative of this
error is back-propagated through the network to iteratively improve and refine the
network approximation of the interpolated function.

[Figure 5. The target position is sent both to the robot, yielding the measured
position, and to the neural network, yielding the estimated position.]

The arrangement uses a group of
neighbouring cells as training set; it then uses the neural network to extract the local
feature of the mapping function, which is the strength of back-propagation algorithm
(Rumelhart and McClelland, 1986).
In general, a larger neighbourhood area is likely to yield better approximation results.
On the other hand, there is extra training overhead for using a larger neighbourhood
area, compounded by the additional memory needed to store the network state. In our
experience, a three by three (3 × 3) surrounding area yields satisfactory results while
keeping the training time and memory requirements relatively low.
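The formation of training pairs from a 3 × 3 window can be sketched as follows; the helper name and grid layout are assumptions for illustration:

```python
def training_pairs(err_x, err_y, cx, cy):
    """Form training input/output pairs from the 3 x 3 neighbourhood
    of the cell centred at grid point (cx, cy).

    err_x, err_y: 2-D grids of measured position errors.
    Inputs are grid-point offsets from the cell centre; outputs are
    the measured errors at those grid points.
    """
    pairs = []
    for gy in (cy - 1, cy, cy + 1):
        for gx in (cx - 1, cx, cx + 1):
            inp = (gx - cx, gy - cy)               # co-ordinates relative to the centre
            out = (err_x[gy][gx], err_y[gy][gx])   # measured (ex, ey)
            pairs.append((inp, out))
    return pairs
```

Each cell therefore contributes nine input/output pairs, and a fresh training set is formed for every cell to capture the local feature of the error surface.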
The neural network chosen for this application is outlined below.
Inputs and outputs: there are two inputs, which represent the co-ordinates of the grid
points as shown in Figure 6, and two outputs, which are the end-effector position errors
in the horizontal (x) and vertical (y) directions.
Layers and neurons: there is one hidden layer with four hidden neurons. The number of
hidden neurons has no significant impact on the results once there are more than two
hidden neurons. This conclusion, though independently verified in this work, is also
consistent with the finding in Mehrotra et al. (1991). The initial weights and biases are
all randomly chosen.
Activation functions: because of the non-linearity of the error surface, a hyperbolic
tangent activation function is used in the hidden nodes. Linear output nodes are selected
to reflect the actual error value. When combined with the layer and neuron selection, this
selection of activation functions can provide a smooth mapping to an arbitrary 2-D
function (Lapedes and Farber, 1988).
In essence, the bilinear interpolation method approximates a spatial error surface based
on the errors of known points, and assumes that the error of the target point is located
on that surface. In reality, the error could be random, and could be in an arbitrary form.
The neural network model, however, makes no assumption on the form of the error.
The strength of the neural networks is their ability to approximate an arbitrary
function, especially non-linear mechanisms (Jin et al., 1995). It has been proven that
feed-forward neural networks with sufficient hidden neurons could be used to
approximate an arbitrary function. Their capability as a function approximator does
not depend on the type of the function.
In our work, a neural network model is utilized to extract the local feature of
the error surface by using a window consisting of multiple cells surrounding the
interpolated position. There is no limitation on the form of the error. This takes
Simulation is performed using data obtained from an industrial calibration board. The
board contains 20 cells in both the horizontal and vertical directions, and each cell
measures 20 mm by 20 mm. Given the nature of the position errors, three different
types of errors are simulated in this study. These are:
are simulated in this study. These are:
Uniform distributed random error: random numbers that are uniformly distributed in the
interval (-0.5, 0.5) (normalized to the cell's size);
Normal distributed random error: normally distributed numbers with 0 mean and
variance 1 (normalized to the cell’s size);
Wang et al. 117
Gaussian waveform error: error distribution that follows a 2-D Gaussian function. The 2-
D Gaussian function has been used as a benchmark for many 2-D data processing
problems. It is defined as (Lashgari et al., 1983):
    H(x, y) = A exp{ -0.103203 [ (x - Xc)² + (y - Yc)² ] }        (11)

where x and y are 2-D co-ordinates; A is the amplitude, which is chosen to be 10% of
the size of a cell; and Xc and Yc give the centre of the function, which is chosen to be
the centre of the board.
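Equation (11) is straightforward to evaluate; in this sketch, the amplitude A = 2 (10% of the 20 mm cell) and the centre (10, 10) of a 20 × 20 cell board are illustrative values implied by the set-up described above:

```python
import math

def gaussian_error(x, y, A=2.0, xc=10.0, yc=10.0):
    """Equation (11): H(x, y) = A * exp(-0.103203 * ((x - xc)**2 + (y - yc)**2)).

    A = 2.0 (10% of the 20 mm cell) and centre (10, 10) are
    illustrative defaults, not values stated in the paper.
    """
    return A * math.exp(-0.103203 * ((x - xc) ** 2 + (y - yc) ** 2))
```

The error surface peaks at the board centre with value A and decays smoothly toward the edges.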
For simulation purposes, the position error at the centre of each cell is calculated. The
mean error and maximum error are calculated for all three types of errors. The results
in the horizontal (or x) direction are presented; similar results are obtained in the
vertical direction. In addition to the results obtained from the neural network, a second
set of results obtained by the bilinear interpolation method is also presented for
comparison.
The mean and maximum errors for the uniform, normal and Gaussian distributed
data are presented in Tables 1–3, respectively. The interpolation errors in the horizontal
direction for the three data distributions are shown in Figures 7, 8 and 9, respectively.
[Figures 7, 8 and 9. Interpolation errors in the horizontal direction across the 20-cell
board for the uniform, normal and Gaussian error distributions, respectively; each
graph overlays the neural network and bilinear interpolation results.]
For comparison, the errors generated by the neural network model and the bilinear
model are shown on the same graph. It is observed that the proposed neural network
technique delivers superior performance in the statistical measures. The advantage of
the neural network can also be easily observed from the error distribution graphs.
for higher accuracy makes the proposed techniques an attractive alternative to the
traditional approaches.
5. Conclusion
In this paper, the theory, implementation and algorithm involved in a new robot
manipulator calibration process are introduced. Manipulator position measurements
are taken by a camera system to provide the calibration data, and a neural network-
based method is presented to estimate the position errors of the manipulator.
A window containing the neighbouring cells of the area under calibration is used to
exploit the local feature of the underlying mapping function. After training is
completed, the neural network model can be used to approximate the mapping
between a location on the board and the manipulator's position errors within the
calibration space. The proposed algorithm improves the accuracy of the error
estimation in comparison with traditional analytical methods, and the experimental
results demonstrate the effectiveness of the proposed method.
Acknowledgements
The authors appreciate the support of the Motorola Manufacturing System, which is
responsible for the design and engineering of different robots, including manipulators,
used in various Motorola facilities. The measurement system was tested at the facility,
and the simulation model is adapted from data collected during one of our
experimental robot calibration processes there.
References
Bai, Y. 2007: On the comparison of model-based and modeless robotic calibration based
on a fuzzy interpolation method. International Journal of Advanced Manufacturing
Technology 31, 1243–50.
Bai, Y. and Wang, D. 2004: Improve the robot calibration accuracy using a dynamic
on-line fuzzy interpolation technique. IEEE Transactions on Systems, Man and
Cybernetics 34, 1155–60.
Dolinsky, J.U. 2001: The development of a genetic programming method for kinematic
robot calibration. PhD thesis, Liverpool John Moores University.
Hornik, K., Stinchcombe, M. and White, H. 1989: Multilayer feedforward networks are
universal approximators. Neural Networks 2, 359–69.
Huang, Y.-L. and Chang, R.-F. 1999: MLP interpolation for digital image processing
using wavelet transform. Proceedings of IEEE ICASSP-99 International Conference on
Acoustics, Speech, and Signal Processing, 3217–20.
Jenkinson, I.D. 2000: An application of neural networks to improve the accuracy of an
industrial robot for offline programming. PhD thesis, Liverpool John Moores
University.
Jin, L., Gupta, M. and Nikiforuk, P.N. 1995: Universal approximation using dynamic
recurrent neural networks: discrete-time version. Proceedings of IEEE ICNN, 403–8.
Kumar, N., Panwar, V. and Sukavanam, N. 2007: Neural network control of coordinated
multiple manipulator systems. Proceedings of the International Conference on
Computing: Theory and Applications, 250–6.
Lapedes, A. and Farber, R. 1988: How neural nets work. In Anderson, D.Z., editor, Neural