IROS '90
The input layer and the output layer are made up of nine nodes and six nodes, respectively, because the input consists of three sets of three-dimensional end-effector location vectors and the output consists of six driving joint values.
In this experiment, the number of hidden-layer nodes was set to 50.
Output functions for the input layer and the hidden layer
were sigmoid functions. That for the output layer was a
linear function, because absolute values for joint angle
compensation are necessary in this application.
Node outputs in each layer of the network are related to a weighted
sum of the inputs to that layer. Outputs in each layer are obtained
by the following equations.
O_i^k = f(N_i^k)                                (1)

N_i^k = Σ_j w_ij^(k-1) O_j^(k-1) - θ_i^k        (2)

[Figure: Neural Network trained with the Supervisory Signal δq = J^(-1)(Y - X)]
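Equations (1) and (2) amount to the forward pass below. This is a minimal sketch with random, untrained weights: the 9-50-6 layer sizes follow the text, but the weight scale and the metre-scaled input values are illustrative assumptions, not the paper's.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, thetas, f):
    # eqs. (1)-(2): N_i = sum_j w_ij * O_j - theta_i ;  O_i = f(N_i)
    return [f(sum(w * o for w, o in zip(row, inputs)) - th)
            for row, th in zip(weights, thetas)]

# 9 input nodes (three 3-D end-effector vectors: position, approach, slide),
# 50 sigmoid hidden nodes, 6 linear output nodes (joint-angle compensation)
n_in, n_hid, n_out = 9, 50, 6
w1 = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hid)]
t1 = [0.0] * n_hid
w2 = [[random.uniform(-0.1, 0.1) for _ in range(n_hid)] for _ in range(n_out)]
t2 = [0.0] * n_out

# one (p, a, s) location set, taken from a Table 1 row and scaled to metres
x = [0.24, -0.2, -0.335, 0.0, 0.0, -1.0, 0.0, 1.0, 0.0]
hidden = layer(x, w1, t1, sigmoid)
dq = layer(hidden, w2, t2, lambda n: n)  # linear output: absolute values
print(len(dq))  # 6
```

The linear output layer matters here: a sigmoid would squash the compensation angles into (0, 1), whereas the controller needs their absolute analog values.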
δq = J^(-1) δX          (3)

Here,

δX = Y - X              (4)

Here, J^(-1) is the inverse of the Jacobian matrix for the manipulator at joint angle q. The Jacobian matrix was efficiently derived by Orin's method [5].
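The supervisory-signal computation of equations (3) and (4) can be sketched on a planar 2-link arm, used here as a low-dimensional stand-in for the 6-D.O.F. manipulator; the link lengths and the location perturbation are illustrative assumptions.

```python
import math

def fk(q1, q2, l1=1.0, l2=1.0):
    # forward kinematics of an illustrative planar 2-link arm
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

def jacobian(q1, q2, l1=1.0, l2=1.0):
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def solve2(J, dX):
    # delta_q = J^-1 * delta_X for the 2x2 case (eq. 3)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return (( J[1][1] * dX[0] - J[0][1] * dX[1]) / det,
            (-J[1][0] * dX[0] + J[0][0] * dX[1]) / det)

q = (0.5, 0.8)                       # joint angles
X = fk(*q)                           # commanded location (error-free model)
Y = (X[0] + 0.002, X[1] - 0.001)     # "actual" location with a small model error
dX = (Y[0] - X[0], Y[1] - X[1])      # eq. (4)
dq = solve2(jacobian(*q), dX)        # eq. (3): joint-space image of the deviation
```

To first order, fk(q + δq) reproduces Y, which is what makes δq a usable supervisory signal for the joint-space size of the location error.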
3 EXPERIMENTS

This method's effectiveness, and its generality with regard to a work area not used for the learning, have been investigated by several numerical simulations.

3.1 Manipulator Model

A forward kinematics equation F(q), which includes a link parameter error, was defined. An inverse kinematics equation I(X) was described without the model error. The Jacobian matrix J was calculated from the model without the error. In actual systems, the forward kinematics equation F(q) corresponds to a real manipulator, while the inverse kinematics equation I(X) corresponds to a conventional robot controller.

[Preparation]

- Calculate the joint angle set q from a set of end-effector locations X, using the inverse kinematics function I(X).
- Determine the actual end-effector location Y from q, using the forward kinematics equation F(q), which includes the model error.

[Learning]

- Carry out the error back-propagation learning with X, q, and Y.

[Execution]

- Various locations, X', are given to the network δI(X) and the inverse kinematics equation I(X). The sum of q and δq is sent to F(q).
Table 1 Location Data Sets for Experiments

Arm end point set for XY plane move (unit: mm), 25 points
(px  py  pz)  (ax ay az)  (sx sy sz)
240 -200 -335   0 0 -1   0 1 0
280 -200 -335   0 0 -1   0 1 0
320 -200 -335   0 0 -1   0 1 0
360 -200 -335   0 0 -1   0 1 0
400 -200 -335   0 0 -1   0 1 0
240 -100 -335   0 0 -1   0 1 0
280 -100 -335   0 0 -1   0 1 0
320 -100 -335   0 0 -1   0 1 0
360 -100 -335   0 0 -1   0 1 0
400 -100 -335   0 0 -1   0 1 0
240    0 -335   0 0 -1   0 1 0
280    0 -335   0 0 -1   0 1 0
320    0 -335   0 0 -1   0 1 0
360    0 -335   0 0 -1   0 1 0
400    0 -335   0 0 -1   0 1 0
240  100 -335   0 0 -1   0 1 0
280  100 -335   0 0 -1   0 1 0
320  100 -335   0 0 -1   0 1 0
360  100 -335   0 0 -1   0 1 0
400  100 -335   0 0 -1   0 1 0

Arm end point set for YZ plane move (unit: mm)
400 -100  240   1 0 0   0 1 0
400 -100  280   1 0 0   0 1 0
400 -100  320   1 0 0   0 1 0
400 -100  360   1 0 0   0 1 0
400 -100  400   1 0 0   0 1 0
400    0  240   1 0 0   0 1 0
400    0  280   1 0 0   0 1 0
400    0  320   1 0 0   0 1 0
400    0  360   1 0 0   0 1 0
400    0  400   1 0 0   0 1 0

Fig.4 Location Data Sets for Experiments
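The XY-plane set of Table 1 can be generated programmatically. This sketch assumes the visible 40 mm spacing in px and 100 mm spacing in py continue through py = 200, which yields the stated 25 points.

```python
# Table 1 XY-plane set: px sweeps 240..400 mm in 40 mm steps,
# py sweeps -200..200 mm in 100 mm steps, pz is fixed at -335 mm;
# the approach vector a = (0, 0, -1) and slide vector s = (0, 1, 0) are constant.
xy_set = [(px, py, -335, 0, 0, -1, 0, 1, 0)
          for py in range(-200, 201, 100)
          for px in range(240, 401, 40)]
print(len(xy_set))  # 25
```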
The supervisory signals used for the network learning were shown in the same table, too.

The table shows that the network generates appropriate compensation joint angles. The network output δq is close to the supervisory signal. Similar results were obtained for other points. The result proves that the neural network has learned the non-linear computation. It also proves that the network generates accurate analog absolute values.

In pattern recognition applications, the absolute output value of each element in the network output layer is of less importance; mutual differences among the output node values are used to select one category as the recognized result. In control applications, every absolute output value relates to achieving positioning accuracy. Therefore, the absolute output value is important for the positioning accuracy improvement.

Table 3 shows the compensated end-effector locations. The absolute positioning error was evaluated by summing the positioning deviations from the commanded position along the X, Y, and Z directions as a root mean square (R.M.S.) error:

R.M.S. error = sqrt( (Yx - Xx)^2 + (Yy - Xy)^2 + (Yz - Xz)^2 )    (7)

Because the manipulator model incorporates a 2 mm link length error in the 2nd link, the R.M.S. error without compensation was 2 mm. The R.M.S. error after correction by the network output has been reduced to 0.6 mm in Table 3. Thus, the absolute positioning error was reduced to 1/3 compared with no correction. A positioning accuracy improvement of the same magnitude has been achieved for other points.

Figure 5 shows the learning curve: the difference between the network output and the supervisory signal is plotted versus the number of training cycles. The training gradually advances up to a thousand cycles. No further error reduction occurred over a thousand times of training.

[Fig.5 Learning Curve for Position Correction: error versus Number of Learning (N), 0 to 1000]

Figures 6 to 8 show the absolute positioning error distributions. In Figure 6, the network was trained using 25 points on the X-Y plane, and the compensatory joint angles were generated for the same 25 points on the X-Y plane; i.e., data set XY was applied to the network both for the training and for the execution. As shown in the figure, the absolute positioning error was reduced to 0.45 mm on average. The compensated manipulator motion is four times as accurate as the motion generated by a conventional robot controller.

In Figure 7, a combination of 25 points on the XY plane and 25 points on the YZ plane was used for learning. In this case, points on the XY plane were used for the training first, and then points on the YZ plane were used. The rest of the training was carried out similarly, using data on the XY plane and then data on the YZ plane. The same combination of 50 points was used for the effectiveness evaluation. In this case, too, the absolute positioning error was reduced to 0.45 mm. This result means that the network has acquired completely different non-linear compensation functions. The accuracy was improved to more than four times better than that of the motion with no correction.

Figure 8 shows the error distribution when data set XY was used for the learning and data set XY' was used for the execution. In this case, the data sets for the learning and the execution differ from each other. Even in this case, the absolute positioning error was reduced to 0.78 mm on average. This result shows this method's generality with respect to the learning data set.

5 SUMMARY

The experimental results show the effectiveness of the proposed method. This simple computation scheme improves the absolute positioning accuracy of a 6-D.O.F. manipulator. It was also shown experimentally that the error back-propagation model artificial neural network learns highly non-linear functions. The network generated appropriate joint angles for the end-effector location compensation. Though the results show the possibility of applying an artificial neural network to manipulator accuracy improvement, the absolute positioning compensation capability is still not sufficient. One of the reasons is regarded to lie in the training data sets: in this experiment, the end-effector position was changed while its orientation was fixed. Further investigation on data sets for learning is now being carried out.

Acknowledgement

The author thanks Dr. Tagawa and Mr. Fukuchi for their helpful discussions.
[Figure annotations (absolute positioning error distributions, unit: mm):
Average = 0.454068, Standard Deviation = 0.379379, Number of Samples = 25
Average = 0.454043
Average = 0.780489, Standard Deviation = 0.414220, Number of Samples = 25]