
Transactions of the Institute of Measurement and Control 34, 1 (2012) pp. 105–121

Robot manipulator calibration using neural network and a camera-based measurement system
Dali Wang¹, Ying Bai² and Jiying Zhao³

¹Department of Physics, Computer Science & Engineering, Christopher Newport University, Newport News, VA 23606, USA
²Department of Computer Science & Engineering, Johnson C. Smith University, Charlotte, NC 28216, USA
³School of Information Technology and Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada

A robot manipulator calibration method is proposed using a camera-based measurement system and a neural network algorithm. The position errors at various points within the calibration space are first obtained by camera-based measurement devices. A window consisting of multiple cells surrounding the interpolated position is used to form the input and output pairs of the training data set. A neural network model is utilized to approximate the error surface. The target pose is then compensated for by the position errors obtained from the neural network model. A numerical experiment is performed based on a common industrial set-up. A significant improvement in accuracy is obtained by the proposed technique in comparison with traditional bilinear analytical methods.

Key words: camera-based measurement; manipulator; neural network.

1. Introduction

An essential objective of robot manipulator calibration is to establish an accurate mapping between nominal and corrected end-effector poses. This mapping can be based on either parametric (model-based) or non-parametric (modeless) methods.

Address for correspondence: Dali Wang, Department of Physics, Computer Science & Engineering,
Christopher Newport University, Newport News, VA 23606, USA. E-mail: dwang@cnu.edu
Figure 3 appears in colour online: http://tim.sagepub.com

© 2010 The Institute of Measurement and Control. DOI: 10.1177/0142331210377350


Model-based calibration methods aim at the identification of accurate robot models
associated with the object to increase the positional accuracy of the robot (Mooring
et al., 1991). Because of the complexity in characterizing positional errors of industrial
robots, it is also good practice to approximate the errors rather than modelling them
explicitly by developing parametric models (Kumar et al., 2007). This is the basis for the
modeless method. Robot poses are then modified by subtracting the expected position
errors. In this way, false target poses are produced with the objective of compensating
for the positional errors. The deviation of the manipulator at these altered poses
eventually leads the robot to the desired positions. The model-based method involves
setting up a kinematic model for the robot, measuring positions and orientations of the
robot end-effector, identifying its kinematic parameters and compensating for its pose
errors by modifying its joint angles (Mooring et al., 1991). The advantage of a model-
based calibration method is that a large workspace can be calibrated accurately and all
pose errors within the calibrated workspace can be compensated for by joint angles. Its
disadvantage lies in the fact that the understanding of kinematic modelling and identification processes requires advanced knowledge of robot kinematics, which may pose a challenge to field engineers (Bai, 2007).
On the other hand, a modeless method does not go through any kinematic
modelling and identification steps. In a pose measuring process, a robot workspace is
divided into a sequence of small squares in a two-dimensional (2-D) case, or cubes in a three-dimensional (3-D) case, with the nominal grid points around each cell assumed to be known. All position errors on the grid points are measured and recorded by moving
the robot through all the grid points. These position errors are stored in memory for
future use. With a modeless method, simple error compensation for a target position
can be realized by interpolating errors from its neighbouring grid points (Zhuang and
Roth, 1996). Its disadvantage, however, is the conflict between calibration accuracy and
number of grid points. In spite of this, because of its simplicity and effectiveness,
modeless calibration techniques are widely adopted in industrial applications.
There are also several alternative approaches to the model-based and modeless robot
calibration methods discussed above. Several researchers have implemented a non-
parametric accuracy compensation method using polynomial approximating functions
(Jenkinson, 2000); however, a nominal inverse kinematic model of the robot to be
calibrated is still needed to calculate the joint position vector corresponding to the
desired position of the robot end-effector. Another non-parametric compensation
approach is to divide the workspace of the robot into a sequence of discrete areas or
cubic cells, and then use a numerical procedure to determine the inverse kinematic
solution for each area or cubic cell. The problem with implementing that approach is that a huge amount of memory is needed to store those inverse kinematic solutions.
To estimate the manipulator position errors, an approximation method needs
to be chosen according to the characteristics of the data. Analytical methods such
as bilinear interpolation are predominantly utilized in the position error estimation of manipulators (Zhuang and Roth, 1996). A fuzzy interpolation method has been developed by the authors to
improve the compensation accuracy of manipulator calibration. A dynamic online
fuzzy inference system is implemented to meet the needs of a fast real-time control system and calibration environment (Bai and Wang, 2004).
Soft computing techniques, including neural networks, fuzzy logic and genetic algorithms, have been extensively used for manipulator control and modelling (Dolinsky, 2001; Kumar et al., 2007; Liu et al., 2007; Monica et al., 2003; Tian et al., 2004; Wai and Chen, 2004). Robot manipulators are often highly non-linear and heavily coupled complex systems, and their accurate dynamic models are difficult to obtain (Tian et al., 2004). Neural networks, as universal function approximators, provide an alternative to
the parametric model of manipulators. Both feed-forward neural networks (Wai and
Chen, 2004) and recurrent neural networks (Tian et al., 2004) are used in the modelling
of the dynamics of robot manipulators, including forward and inverse dynamics
(Zhong et al., 1996). In Liu et al. (2007), a feed-forward neural network is employed to
improve positioning accuracy of the robot arm. A multi-layer neural network is used to
find the mapping between the joint angle and the motor rotation angle. Pairs of input and output are formed by rotating the motor in steps of 5° and measuring the actual angle that the joint rotates with a 3-D measuring device. The joint transmission error of the robot is compensated for by using the neural network trained with back propagation.
In most aforementioned applications, neural networks serve as part of the solution
by providing a non-parametric model for the dynamics of the manipulators. Another
way to employ a neural network is to use it along with a parametric model. In Monica
et al. (2003), a multi-layer feed-forward neural network, trained with back propagation,
is introduced in combination with a parametric model. The function of the neural
network is to compensate for the error of the parametric model by simulating all the
phenomena that affect the end-effector’s error.
The position error estimation of robot manipulator is fundamentally a multi-
dimensional data approximation problem. It shares some common characteristics with
some other applications such as image scaling or image interpolation. There has been a
great deal of research effort on image interpolation, and some of the more recent works
use neural network techniques (Huang and Chang, 1999; Plaziac, 1999). A general
gradient-based back-propagation algorithm is utilized for the training of a neural
network.
In this paper, a neural network algorithm is proposed to estimate the positional
errors in a robot manipulator calibration process. The position errors at various grid
points within the calibration space are first obtained by a camera that is installed on the
end-effector of the robot to be calibrated. Some other measurement devices can also be
employed, such as a co-ordinate measurement machine (CMM) or a laser tracking
system (LTS). However, the camera approach has the advantage of relatively high
precision, low cost and being easy to operate. A window consisting of multiple cells
surrounding the interpolated position is used to form the input and output pairs of the
training data set. A neural network model is utilized to extract the local feature of the
error surface. The target pose is then compensated for by the position errors obtained
by the neural network model. To verify the effectiveness of the proposed technique, a
numerical experiment is performed based on a common industrial set-up with different
types of error distribution.

2. Manipulator calibration

2.1 Modeless manipulator calibration

The modeless manipulator calibration is divided into two steps (Zhuang and Roth,
1996). The first step is to identify the position errors for all grid points on a standard
calibration board, which is installed in the robot's workspace. A calibrated camera is
attached to the robot’s end-effector to find the position errors of the end-effector. For
this research, a 2-D measurement is used, and the camera is fixed at a certain height
relative to the calibration board such that it can never touch the board. All movements
of the robot are in a 2-D plane at the same height relative to the calibration board.
In Figure 1, the desired position of grid point 0 is (x0, y0), and the actual position of the robot end-effector is (x0′, y0′). The position errors for this grid point are e_x = x0 − x0′ and e_y = y0 − y0′. The robot will be moved to all grid points on the standard calibration board, and all position errors at these grid points will be measured and stored in memory for future use.
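For illustration only, the following Python/NumPy sketch shows one possible way to store the measured grid-point errors from this step; the array layout, grid size and the simulated "measured" positions are assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumed data layout): build the stored error grids from the
# nominal grid-point positions and the camera-measured end-effector positions.
import numpy as np

# 21 x 21 grid points spaced 20 mm apart (values are stand-ins, not measured data)
nominal = np.stack(np.meshgrid(np.arange(21) * 20.0,
                               np.arange(21) * 20.0), axis=-1)      # desired (x0, y0)
measured = nominal + np.random.default_rng(0).normal(0.0, 0.1, nominal.shape)

# e_x = x0 - x0', e_y = y0 - y0' at every grid point, kept for later interpolation
errors = nominal - measured
ex_grid, ey_grid = errors[..., 0], errors[..., 1]
print(ex_grid.shape, float(np.abs(ex_grid).max()))
```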
One prerequisite for this calibration process is that the camera must be
calibrated accurately. The final robot calibration accuracy is directly dependent on the
accuracy of the camera calibration. The purpose of the calibration of the camera is to
determine the relationship between the Field-Of-View (FOV) co-ordinate system and
the Base Frame of the Robots (BFR). Generally, there are five main parameters to be
calibrated. Two of them are called internal parameters and three of them are called
external parameters. The purpose of the calibration is to obtain the actual values of
these parameters.

Figure 1 Set-up of modeless calibration (robot-mounted camera over the calibration board with its grid of cells and the target position)


The parameters are:

(1) The pixel scale factors δx and δy along the Xv and Yv directions in the FOV (mm/pixel);
(2) The origin co-ordinates of the FOV relative to the BFR, Cu (Cx and Cy);
(3) The misalignment angle θ between the FOV and the BFR;
(4) m and n, the numbers of pixels in the horizontal (Xv) and vertical (Yv) directions in the FOV;
(5) The vector of the target position in the FOV (P).

Those parameters are illustrated in Figure 2.
The relationship between the FOV and the BFR can be expressed by the following 3 × 3 homogeneous transformation matrix based on those five parameters (refer to Figure 2 for the definitions of the parameters θ, X, Y, m and n).

If Zr is parallel with Zv:
\[
\begin{bmatrix} \cos\theta & -\sin\theta & C_x \\ \sin\theta & \cos\theta & C_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} m\,\delta_x \\ n\,\delta_y \\ 1 \end{bmatrix}
=
\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}
\tag{1a}
\]

If Zr is anti-parallel with Zv:
\[
\begin{bmatrix} \cos\theta & \sin\theta & C_x \\ \sin\theta & -\cos\theta & C_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} m\,\delta_x \\ n\,\delta_y \\ 1 \end{bmatrix}
=
\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}
\tag{1b}
\]

where X and Y represent the target position in the BFR system.
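As a hedged illustration of Equation (1), the sketch below maps a target position from FOV pixel co-ordinates to the BFR; all parameter values are made-up placeholders rather than calibrated results, and δx, δy, Cx, Cy and θ are the quantities listed above.

```python
# Sketch of Equations (1a)/(1b): FOV pixel co-ordinates -> BFR position.
# Parameter values below are illustrative placeholders, not calibration results.
import numpy as np

def fov_to_bfr(m, n, delta_x, delta_y, cx, cy, theta, z_parallel=True):
    """Map a target at (m, n) pixels in the FOV to (X, Y) in the BFR (mm)."""
    c, s = np.cos(theta), np.sin(theta)
    if z_parallel:                       # Equation (1a): Zr parallel with Zv
        T = np.array([[c, -s, cx],
                      [s,  c, cy],
                      [0.0, 0.0, 1.0]])
    else:                                # Equation (1b): Zr anti-parallel with Zv
        T = np.array([[c,  s, cx],
                      [s, -c, cy],
                      [0.0, 0.0, 1.0]])
    X, Y, _ = T @ np.array([m * delta_x, n * delta_y, 1.0])
    return X, Y

# Example call with hypothetical values (0.004 mm pixels, 1.5 degree misalignment)
print(fov_to_bfr(320, 240, 0.004, 0.004, cx=150.0, cy=80.0, theta=np.deg2rad(1.5)))
```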


The calibration of δx, δy and θ is performed in the following sequence:

• Define the default pixel sizes (the nominal pixel sizes given by the vendor) as δx0 and δy0;
• Move the robot by [dx, dy] in the BFR;
• The vision target then moves by [du, dv] in the FOV.

Figure 2 The relationship between the FOV and the BFR (field-of-view co-ordinate frame Xv–Yv in pixel units; robot base frame Xr–Yr; FOV origin offsets Cx, Cy; target point P)


The movement of the target in pixels is

\[
\begin{bmatrix} dm \\ dn \end{bmatrix}
=
\begin{bmatrix} \dfrac{1}{\delta_{x0}} & 0 \\[4pt] 0 & \dfrac{1}{\delta_{y0}} \end{bmatrix}
\begin{bmatrix} du \\ dv \end{bmatrix},
\qquad
\begin{bmatrix} dx \\ dy \end{bmatrix}
=
\begin{bmatrix} J_{11} & J_{12} \\ J_{21} & J_{22} \end{bmatrix}
\begin{bmatrix} dm \\ dn \end{bmatrix}
\tag{2}
\]

where J is the Jacobian matrix, and m and n represent the numbers of pixels in the Xv and Yv directions.
Based on the above equations, we can obtain:

\[
\begin{bmatrix} dx \\ dy \end{bmatrix}
=
\begin{bmatrix} \dfrac{J_{11}}{\delta_x} & \dfrac{J_{12}}{\delta_y} \\[4pt] \dfrac{J_{21}}{\delta_x} & \dfrac{J_{22}}{\delta_y} \end{bmatrix}
\begin{bmatrix} \delta_x & 0 \\ 0 & \delta_y \end{bmatrix}
\begin{bmatrix} dm \\ dn \end{bmatrix},
\qquad
\begin{bmatrix} \delta_x \\ \delta_y \\ \theta \end{bmatrix}
=
\begin{bmatrix} \sqrt{J_{11}^2 + J_{21}^2} \\ \sqrt{J_{12}^2 + J_{22}^2} \\ \arctan\!\left(\dfrac{J_{21}}{J_{11}}\right) \end{bmatrix}
\tag{3a}
\]

\[
\vec{Z}_v =
\begin{vmatrix}
\vec{i} & \vec{j} & \vec{k} \\
\dfrac{J_{11}}{\delta_x} & \dfrac{J_{21}}{\delta_x} & 0 \\[4pt]
\dfrac{J_{12}}{\delta_y} & \dfrac{J_{22}}{\delta_y} & 0
\end{vmatrix}
\qquad \text{(the } Z_v \text{ vector with respect to the robot base frame, BFR)}
\tag{3b}
\]
Next, we need to compute the Jacobian matrix. Collect N sets of data [dx, dy] and N sets of data [dm, dn], then use the pseudo-inverse to find J:

\[
\begin{bmatrix} dx \\ dy \end{bmatrix}
=
\begin{bmatrix} J_{11} & J_{12} \\ J_{21} & J_{22} \end{bmatrix}
\begin{bmatrix} dm \\ dn \end{bmatrix},
\qquad
\begin{bmatrix} dx_1 & dx_2 & dx_3 & \cdots & dx_N \\ dy_1 & dy_2 & dy_3 & \cdots & dy_N \end{bmatrix}
=
\begin{bmatrix} J_{11} & J_{12} \\ J_{21} & J_{22} \end{bmatrix}
\begin{bmatrix} dm_1 & dm_2 & dm_3 & \cdots & dm_N \\ dn_1 & dn_2 & dn_3 & \cdots & dn_N \end{bmatrix}
\tag{4}
\]

\[
A = J \cdot B, \qquad A \cdot B^{T} = J \cdot B \cdot B^{T}, \qquad J = A \cdot B^{T} \cdot \left( B \cdot B^{T} \right)^{-1}
\]

where the '·' operator represents matrix multiplication and B^T represents the transpose of B.
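To make this identification step concrete, the following sketch estimates J from N collected motion pairs by the pseudo-inverse of Equation (4) and then recovers δx, δy and θ from Equation (3a); the motion data are invented stand-ins for real measurements.

```python
# Sketch of the Jacobian identification (Equations (2)-(4)) and the recovery of the
# pixel scale factors and misalignment angle (Equation (3a)). Data are stand-ins.
import numpy as np

# N = 3 paired motions: robot motions A = [dx; dy] (mm), target motions B = [dm; dn] (pixels)
A = np.array([[0.201, -0.003, 0.100],
              [0.009,  0.192, 0.105]])          # 2 x N
B = np.array([[50.0,  -1.0, 25.0],
              [ 2.0,  48.0, 26.0]])             # 2 x N

# J = A * B^T * (B * B^T)^(-1), Equation (4)
J = A @ B.T @ np.linalg.inv(B @ B.T)

# Equation (3a): pixel scale factors (mm/pixel) and misalignment angle
delta_x = np.hypot(J[0, 0], J[1, 0])            # sqrt(J11^2 + J21^2)
delta_y = np.hypot(J[0, 1], J[1, 1])            # sqrt(J12^2 + J22^2)
theta = np.arctan2(J[1, 0], J[0, 0])            # atan(J21 / J11)

print(J)
print(delta_x, delta_y, np.rad2deg(theta))
```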
To calculate Cx and Cy, refer to Figure 2; we have

\[
R = C_u \cdot P,
\]

where Cu contains the translation (Cx, Cy) and the rotation Cθ:

\[
C_u = C_t \cdot C_\theta \;\; (C_\theta \text{ is determined}), \qquad C_t = R \cdot (C_\theta \cdot P)^{-1}
\]

Choose a point P for which R can be found.


After these five parameters are calibrated, we can store them and use them later to
map the target position from the FOV to the BFR.
In the second step, the robot’s end-effector is moved to an arbitrary target position
that is located in the range of the workspace. The target position errors could be found
by an interpolation technique using the stored neighbouring grid position errors
around the target position, which were obtained from the first step. Finally, the target
position could be compensated with the interpolation results to obtain more accurate
positions.
The pixel size of the calibration camera we used is 4 × 4 μm, and the number of pixels is 1600 × 1200 (about 2 million). The resolution of our camera (about 4 μm) is higher than it needs to be for our application. Practically, the measurement accuracy and the final calibration accuracy depend not only on the resolution of the camera, but also on the repeatability of the robot. The repeatability of the PUMA-560 robot we used in this work is about 0.1 mm (100 μm), whereas the best repeatability nowadays is around 20 μm for similar robots. Since the highest possible calibration accuracy cannot be better than this repeatability, the resolution of the camera has little negative impact on the position error estimation.

2.2 Bilinear manipulator calibration

The bilinear interpolation method is based on a linear analysis method. The interpolated error is obtained from an error surface that is constructed based on the
four neighbouring errors of the grid points, which is shown in Figure 3.
The interpolated error at the target position P(x′, y′) is:

\[
\begin{aligned}
e_x(x', y') &= dx\,dy\; e_x(x, y) + (1 - dx)\,dy\; e_x(x+1, y) + (1 - dy)\,dx\; e_x(x, y+1) \\
&\quad + (1 - dx)(1 - dy)\; e_x(x+1, y+1) \\
e_y(x', y') &= dx\,dy\; e_y(x, y) + (1 - dx)\,dy\; e_y(x+1, y) + (1 - dy)\,dx\; e_y(x, y+1) \\
&\quad + (1 - dx)(1 - dy)\; e_y(x+1, y+1)
\end{aligned}
\tag{5}
\]

Figure 3 Physical meaning of the bilinear interpolation (target point P(x′, y′) inside the cell formed by grid points P1(x, y), P2(x+1, y), P3(x, y+1) and P4(x+1, y+1), with fractional distances dx and dy)


As illustrated in Figure 3, the bilinear interpolation technique is based on two assumptions. First, the position error of the target point P(x′, y′) must lie on the error surface, which is built from the errors of the four neighbouring grid points P1 to P4. Secondly, the error surface has to be constructed prior to applying the bilinear interpolation technique. However, in the real world, the position errors within each cell are randomly distributed, and the error surfaces e_x and e_y are likewise arbitrary at any given moment. We can consider e_x(x′, y′) as a function value in a third dimension, determined by the position (x′, y′) inside each cell; the same consideration applies to e_y(x′, y′).
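For reference, a minimal sketch of the bilinear estimate of Equation (5) is given below; ex_grid and ey_grid are assumed to hold the grid-point errors measured in the first step, and dx, dy follow the Figure 3 convention.

```python
# Minimal sketch of the bilinear error interpolation of Equation (5).
import numpy as np

def bilinear_error(ex_grid, ey_grid, x, y, dx, dy):
    """Estimate (e_x, e_y) at a target point inside the cell with corner (x, y)."""
    weights = [dx * dy,                  # grid point (x, y)
               (1 - dx) * dy,            # grid point (x + 1, y)
               (1 - dy) * dx,            # grid point (x, y + 1)
               (1 - dx) * (1 - dy)]      # grid point (x + 1, y + 1)
    corners = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
    ex = sum(w * ex_grid[i, j] for w, (i, j) in zip(weights, corners))
    ey = sum(w * ey_grid[i, j] for w, (i, j) in zip(weights, corners))
    return ex, ey

# Example on a random 21 x 21 grid of stand-in errors (mm)
rng = np.random.default_rng(0)
ex_grid = rng.uniform(-0.2, 0.2, (21, 21))
ey_grid = rng.uniform(-0.2, 0.2, (21, 21))
print(bilinear_error(ex_grid, ey_grid, x=3, y=7, dx=0.25, dy=0.6))
```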

3. Neural network for positional error estimation

3.1 Network architecture

We use a generalized feed-forward neural network architecture, which consists of interconnected layers of processing units, or neurons. An illustration of a two-hidden-layer structure is shown in Figure 4. In this architecture, the neurons of the input layer
apply the input signals to the neurons of the hidden layer. The output signals of a
hidden layer are used as inputs to the next hidden layer. Finally, the output layer
produces the output results using the last hidden layer as its inputs. We use linear
output nodes and a hyperbolic tangent activation function in the hidden nodes.
In this figure, the notation of weights and biases follows this convention: the weight of the connection from neuron j to neuron i is denoted ω_ij; the neurons are grouped into the input set I, the output set O, and the hidden sets H1 (first hidden layer) and H2 (second hidden layer); the bias of neuron i is denoted b_i. Let Z_i be the output of any node i, hidden or output (for an input node, it is the received input signal):

\[
Z_i = \tanh\Big( \sum_{j \in I} \omega_{ij} z_j - b_i \Big), \qquad i \in H_1
\tag{6a}
\]

\[
Z_i = \tanh\Big( \sum_{j \in H_1} \omega_{ij} z_j - b_i \Big), \qquad i \in H_2
\tag{6b}
\]

\[
Z_i = \sum_{j \in H_2} \omega_{ij} z_j - b_i, \qquad i \in O
\tag{6c}
\]

The error signal at output neuron i ∈ O at iteration k is defined by:

\[
e_i(k) = Z_i(k) - D_i(k)
\tag{7}
\]


Figure 4 A multiple layer neural network structure

where D_i(k) is the target value for Z_i(k). The error energy over all the neurons in the output layer becomes

\[
\varepsilon(k) = \frac{1}{2} \sum_{j \in O} e_j^2(k)
\tag{8}
\]

The training objective is to minimize the mean-squared error of the neural network over N_all sets of training data,

\[
E = \frac{1}{N_{all}} \sum_{k=1}^{N_{all}} \varepsilon(k)
\tag{9}
\]

There are two different phases in the back-propagation algorithm, the forward phase
and the backward phase. In the forward phase, the input signals are calculated in the
forward direction and passed through the neural network layer by layer. In the end, the
neurons in output layer produce the outputs of the neural network. As a result, the
error vector is generated by comparing the outputs of the neural network with the
target value. In the backward phase, the weights are adjusted in a direction that reduces the error signals. The final synaptic weight vector can then be used to predict the unknown input–output mapping.
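A minimal numerical sketch of the forward phase (Equations (6a)–(6c)) together with the error signal and error energy (Equations (7) and (8)) is given below; the weights are randomly initialized placeholders and the layer sizes are only illustrative.

```python
# Sketch of the forward phase for the two-hidden-layer structure of Figure 4.
# Weights, biases and data below are random/illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

def forward(x, params):
    """Forward pass: tanh hidden layers (6a), (6b) and a linear output layer (6c)."""
    W1, b1, W2, b2, W3, b3 = params
    z1 = np.tanh(W1 @ x - b1)            # Equation (6a)
    z2 = np.tanh(W2 @ z1 - b2)           # Equation (6b)
    return W3 @ z2 - b3                  # Equation (6c)

# 2 inputs -> 4 -> 4 -> 2 outputs
params = (rng.normal(size=(4, 2)), rng.normal(size=4),
          rng.normal(size=(4, 4)), rng.normal(size=4),
          rng.normal(size=(2, 4)), rng.normal(size=2))

x = np.array([0.3, -0.7])                # normalized grid-point co-ordinates
d = np.array([0.12, -0.05])              # target position errors (mm)
z = forward(x, params)
e = z - d                                # Equation (7): error signal
energy = 0.5 * np.sum(e ** 2)            # Equation (8): error energy
print(z, energy)
```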
The feed-forward multi-layer neural network structure is chosen mainly because of the universal approximation ability of such a structure (Hornik et al., 1989). Let f(x) be a smooth function from R^n to R^m and let S_x ⊂ R^n. Then, for x ∈ S_x, there exist some number of hidden-layer neurons and weights ω_ij such that:

\[
f(x) = Z_i + e, \qquad i \in O
\tag{10}
\]

The value e is the approximation error of the neural network. It has been proven that, for any given value e_0, one can find a neural network configuration of this form such that e < e_0. This is called the universal approximation property of neural networks. This property, combined with the track record of the structure, makes it the choice for our application.
3.2 Neural network for position error estimation

The position errors at the grid points are measured during the first phase of a modeless calibration process. The error at an unknown position is then estimated using a neural network model, as shown in Figure 5.
The neighbourhood of the current cell is used to form the training input/output pairs of the neural network. The centre of the cell is treated as the centre of the 2-D space, as shown in Figure 6; the position of a neighbouring grid point is measured in terms of its distance to the centre in the horizontal (x) and vertical (y) directions. A vector containing the co-ordinate pairs of the grid points is used as the input to the neural network. The outputs are the measured end-effector position errors in the horizontal (x) and vertical (y) directions. The position errors in the two directions are separate quantities and can be treated separately; we therefore refer to them simply as position errors without loss of generality.
During training, the position errors at the grid points form the input and target
output pairs of the neural network. After each grid point is processed, an error is
computed between the target and the neural network output, and the derivative of this

Estimated
Target Neural position
Robot
network Measured
position

Figure 5 Neural network for position error estimation

Figure 6 A window containing grid points (cycles) and the


interpolated areas (shaded)
Wang et al. 115

error is back-propagated through the network to iteratively improve and refine the
network approximation of the interpolated function. The arrangement uses a group of
neighbouring cells as training set; it then uses the neural network to extract the local
feature of the mapping function, which is the strength of back-propagation algorithm
(Rumelhart and McClelland, 1986).
In general, a larger neighbourhood area is likely to yield better approximation results. On the other hand, there is extra training overhead in using a larger neighbourhood area, and this overhead is further compounded by the additional memory needed to store the network state. In our experience, a three by three (3 × 3) surrounding area yields satisfactory results while keeping the training time and memory requirements relatively low.
The neural network chosen for this application is outlined below (an illustrative training sketch follows the list).

• Inputs and outputs: there are two inputs, representing the co-ordinates of the grid points as shown in Figure 6, and two outputs, the end-effector position errors in the horizontal (x) and vertical (y) directions.
• Layers and neurons: there is one hidden layer with four hidden neurons. The number of hidden neurons has no significant impact on the results once there are more than two hidden neurons. This conclusion, though independently verified in this work, is also consistent with the finding in Mehrotra et al. (1991). The initial weights and biases are all chosen randomly.
• Activation functions: because of the non-linearity of the error surface, a hyperbolic tangent activation function is used in the hidden nodes. Linear output nodes are selected to reflect the actual error values. Combined with the layer and neuron selection above, this choice of activation functions can provide a smooth mapping to an arbitrary 2-D function (Lapedes and Farber, 1988).
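The following training sketch corresponds to the configuration above (2 inputs, one hidden layer of 4 tanh neurons, 2 linear outputs) and uses plain gradient-descent back-propagation on the nine grid points of a 3 × 3 window; the error values, learning rate and epoch count are assumptions for illustration, not the authors' settings.

```python
# Training sketch for the 2-4-2 network described above, using gradient-descent
# back-propagation on the 3 x 3 window of grid points. Error values are stand-ins.
import numpy as np

rng = np.random.default_rng(2)

# Inputs: grid-point co-ordinates relative to the cell centre; outputs: (e_x, e_y)
X = np.array([[i, j] for i in (-1, 0, 1) for j in (-1, 0, 1)], dtype=float)
D = rng.uniform(-0.3, 0.3, size=(9, 2))          # hypothetical measured errors (mm)

W1, b1 = rng.normal(scale=0.5, size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(2)
lr = 0.05

for epoch in range(2000):
    for x, d in zip(X, D):
        h = np.tanh(W1 @ x - b1)                 # hidden layer (tanh activation)
        z = W2 @ h - b2                          # linear output layer
        e = z - d                                # output error signal, Equation (7)
        # Back-propagate the error derivatives and update weights and biases
        dW2, db2 = np.outer(e, h), -e
        dh = (W2.T @ e) * (1 - h ** 2)
        dW1, db1 = np.outer(dh, x), -dh
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

# After training, the network estimates the error anywhere inside the centre cell
print(W2 @ np.tanh(W1 @ np.array([0.25, -0.4]) - b1) - b2)
```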

3.3 Neural network versus bilinear

In essence, the bilinear interpolation method approximates a spatial error surface based
on the errors of known points, and assumes that the error of the target point is located
on that surface. In reality, the error could be random, and could be in an arbitrary form.
The neural network model, however, makes no assumption on the form of the error.
The strength of the neural networks is their ability to approximate an arbitrary
function, especially non-linear mechanisms (Jin et al., 1995). It has been proven that
feed-forward neural networks with sufficient hidden neurons could be used to
approximate an arbitrary function. Their capability as a function approximator does
not depend on the type of the function.
In our work, a neural network model is utilized to extract the local feature of
the error surface by using a window consisting of multiple cells surrounding the
interpolated position. There is no limitation on the form of the error. This takes advantage of the non-linear approximation capabilities of neural networks and offers the potential to improve the approximation results.

4. Implementation and simulation experiment

4.1 Computer representation and implementation issues

A neural network, as a soft computing method, involves a training process to obtain an error model for each cell; it is therefore a numerical method by nature. The training objectives are defined in advance and the process terminates whenever the criteria are met. Because of this numerical nature, the computation time cannot be determined in advance. In addition, the running conditions of the algorithm, such as the number of hidden neurons or the learning rate of the training algorithm, can be changed at run-time. In essence, it is a numerical and dynamic process.
On the other hand, the bilinear interpolation is an analytical computational method.
The generation of the error models can be performed with the aid of computer algebra
systems. Using computer software tools, the functions can be manipulated exclusively
symbolically. In addition, the computation methods are also static, which ensures that all the calculations are predefined and no run-time changes are needed.
The training of neural network is an iterative process, which involves multiple
epochs. In general, it involves more computation than an analytical method such as
bilinear interpolation. In addition, the number of epochs needed for the training of
neural network cannot be determined in advance. As a result, the overall computation
load can only be estimated prior to run-time operation.
With the rapid advance of computing hardware, the speed of computation is no
longer a major limitation in a robotic application. In addition, the time it takes to
calibrate a robot manipulator is rarely a major practical constraint for an implemen-
tation. Therefore, the additional computation load introduced by the neural network
model is not a major limitation to its applications.

4.2 Simulation experiment

Simulation is performed using data obtained from an industrial calibration board. The board contains 20 cells in both the horizontal and vertical directions; each cell is 20 mm by 20 mm. Given the nature of the position errors, three different types of errors are simulated in this study. These are (a generation sketch follows the list):

• Uniform distributed random error: random numbers that are uniformly distributed in the interval (−0.5, 0.5) (normalized to the cell's size);
• Normal distributed random error: normally distributed numbers with zero mean and unit variance (normalized to the cell's size);
• Gaussian waveform error: an error distribution that follows a 2-D Gaussian function. The 2-D Gaussian function has been used as a benchmark for many 2-D data processing problems. It is defined as (Lashgari et al., 1983):

\[
H(x, y) = A \exp\left\{ -0.103203 \left[ (x - X_c)^2 + (y - Y_c)^2 \right] \right\}
\tag{11}
\]

where x, y are the 2-D co-ordinates; A is the amplitude, chosen to be 10% of the size of a cell; and X_c, Y_c give the centre of the function, chosen to be the centre of the board.
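A sketch of how the three simulated error fields could be generated over the 20 × 20 cell board is shown below; the co-ordinate units for Equation (11) and the normalization convention are assumptions made only for illustration.

```python
# Illustrative generation of the three simulated error distributions on a 20 x 20
# cell board (21 x 21 grid points). Units/normalization are assumed for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_cells = 20
xs = np.arange(n_cells + 1, dtype=float)         # grid-point indices (cell units)
X, Y = np.meshgrid(xs, xs)

# (1) Uniformly distributed error in (-0.5, 0.5), normalized to the cell size
e_uniform = rng.uniform(-0.5, 0.5, X.shape)

# (2) Normally distributed error with zero mean and unit variance (normalized)
e_normal = rng.normal(0.0, 1.0, X.shape)

# (3) Gaussian waveform error, Equation (11): amplitude A = 10% of the cell size,
#     centred at the middle of the board
A, Xc, Yc = 0.1, n_cells / 2, n_cells / 2
e_gauss = A * np.exp(-0.103203 * ((X - Xc) ** 2 + (Y - Yc) ** 2))

print(e_uniform.std(), e_normal.std(), e_gauss.max())
```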
For simulation purposes, the position error at the centre of each cell is calculated. The mean error and maximum error are computed for all three types of errors. The results in the horizontal (x) direction are presented; similar results are obtained in the vertical direction. In addition to the results obtained from the neural network, a second set of results obtained by the bilinear interpolation method is also presented for comparison.
The mean and maximum errors for the uniform, normal and Gaussian distributed data are presented in Tables 1–3, respectively. The interpolation errors in the horizontal direction for the three data distributions are shown in Figures 7, 8 and 9, respectively.

Table 1 Results of uniform distributed random error

              Bilinear       Neural network
Mean error    1.9071e−001    1.4353e−001
Max error     3.1931e−001    2.4692e−001

Table 2 Results of normal distributed random error

              Bilinear       Neural network
Mean error    6.4884e−001    5.3733e−001
Max error     2.2051e+000    1.5891e+000

Table 3 Results of Gaussian distributed data

              Bilinear       Neural network
Mean error    6.9762e−004    3.5596e−004
Max error     3.7299e−001    1.3689e−001
Figure 7 Uniform distributed data (mm): error by neural network (dot) and bilinear (solid) model

Figure 8 Normal distributed data (mm): error by neural network (dot) and bilinear (solid) model

Figure 9 Gaussian distributed data (mm): error by neural network (dot) and bilinear (solid) model

For comparison, the errors generated by the neural network model and the bilinear model are shown on the same graphs. It is observed that the proposed neural network technique gives superior performance in the statistical measures. The advantage of the neural network can also be readily observed from the error distribution graphs.

4.3 Practical implications

Bilinear interpolation, as an analytical computational method, can be programmed in a sequential fashion; therefore, its computational load can be determined in advance. A neural network, on the other hand, is a soft computing technique that is programmed in an iterative fashion. In theory, its computational load can be measured at run-time but is hard to determine at the design stage. The simulation model adopted in this work uses data from one of our actual experimental robot calibration processes conducted in the Motorola Manufacturing System. Our experience is that the neural network approach could be implemented with adequate performance using regular off-the-shelf hardware available a few years ago. With newer hardware nowadays, the computational load would not be a barrier to implementation.
In general, calibration accuracy and ease of implementation are the two major considerations in selecting a calibration process. As a modeless technique, the proposed neural network approach is easy to implement. The impact of the computational load is alleviated by the off-line nature of the training algorithm. We believe that the demand for higher accuracy makes the proposed techniques an attractive alternative to traditional approaches.

5. Conclusion

In this paper, the theory, implementation and algorithms involved in a new robot manipulator calibration process are introduced. Manipulator position measurements are taken by a camera system to provide the calibration data. A neural network-based method is presented to estimate the position errors of a manipulator. A window containing the neighbouring cells of the area under calibration is used to exploit the local feature of the underlying mapping function. After training is completed, the neural network model can be used to approximate the mapping between the location on the board and the position errors of the manipulator within the calibration space. The proposed algorithm improves the accuracy of the error estimation in comparison with traditional analytical methods. The experimental results demonstrate the effectiveness of the proposed method.

Acknowledgements

The authors appreciate the support from the Motorola Manufacturing System, which is responsible for the design and engineering of different robots, including manipulators, used in various Motorola facilities. The measurement system was tested at the facility, and the simulation model is adapted from the data collected during one of our experimental robot calibration processes there.

References
Bai, Y. 2007: On the comparison of model-based and modeless robotic calibration based on a fuzzy interpolation method. International Journal of Advanced Manufacturing Technology 31, 1243–50.
Bai, Y. and Wang, D. 2004: Improve the robot calibration accuracy using a dynamic on-line fuzzy interpolation technique. IEEE Transactions on Systems, Man and Cybernetics 34, 1155–60.
Dolinsky, J.U. 2001: The development of a genetic programming method for kinematic robot calibration. PhD thesis, Liverpool John Moores University.
Hornik, K., Stinchcombe, M. and White, H. 1989: Multilayer feedforward networks are universal approximators. Neural Networks 2, 359–69.
Huang, Y.-L. and Chang, R.-F. 1999: MLP interpolation for digital image processing using wavelet transform. Proceedings of IEEE ICASSP-99 International Conference on Acoustics, Speech, and Signal Processing, 3217–20.
Jenkinson, I.D. 2000: An application of neural networks to improve the accuracy of an industrial robot for offline programming. PhD thesis, Liverpool John Moores University.
Jin, L., Gupta, M. and Nikiforuk, P.N. 1995: Universal approximation using dynamic recurrent neural networks: discrete-time version. Proceedings of IEEE ICNN, 403–8.
Kumar, N., Panwar, V. and Sukavanam, N. 2007: Neural network control of coordinated multiple manipulator systems. Proceedings of the International Conference on Computing: Theory and Applications, 250–6.
Lapedes, A. and Farber, R. 1988: How neural nets work. In Anderson, D.Z., editor, Neural information processing systems. American Institute of Physics, 442–56.
Lashgari, B., Silverman, L.M. and Abramatic, J. 1983: Approximation of 2-D separable in denominator filters. IEEE Transactions on Circuits and Systems CAS-30, 107–21.
Liu, J., Zhang, Y. and Li, Z. 2007: Improving the positioning accuracy of a neurosurgical robot system. IEEE/ASME Transactions on Mechatronics 12, 527–33.
Mehrotra, K.G., Mohan, C.K. and Ranka, S. 1991: Bounds on the number of samples needed for neural learning. IEEE Transactions on Neural Networks 6, 548–58.
Monica, T., Giovanni, L., PierLuigi, M. and Diego, T. 2003: A closed-loop neuro-parametric methodology for the calibration of a 5 DOF measuring robot. Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation, Volume 3, 16–20, 1482–7.
Mooring, B.W., Roth, Z.S. and Driels, M.R. 1991: Fundamentals of manipulator calibration. John Wiley & Sons.
Plaziac, N. 1999: Image interpolation using neural networks. IEEE Transactions on Image Processing 8, 1647–51.
Rumelhart, D.E. and McClelland, J.L. 1986: Parallel distributed processing: explorations in the microstructure of cognition, 1. MIT Press.
Tian, L., Wang, J. and Ma, Z. 2004: Constrained motion control of flexible robot manipulators based on recurrent neural networks. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics 34, 1541–52.
Wai, R.J. and Chen, P.C. 2004: Intelligent tracking control for robot manipulator including actuator dynamics via TSK-type fuzzy neural network. IEEE Transactions on Fuzzy Systems 12, 552–60.
Zhong, X., Lewis, J. and N-Nagy, F.L. 1996: Inverse robot calibration using artificial neural networks. Engineering Applications of Artificial Intelligence 9, 83–93.
Zhuang, H. and Roth, Z.S. 1996: Camera-aided robot calibration. CRC Press.
