IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 10, NO. 5, SEPTEMBER 1999

A Lagrangian Network for Kinematic Control of Redundant Robot Manipulators

Jun Wang, Senior Member, IEEE, Qingni Hu, and Danchi Jiang

Abstract—A recurrent neural network, called the Lagrangian network, is presented for the kinematic control of redundant robot manipulators. The optimal redundancy resolution is determined by the Lagrangian network through real-time solution to the inverse kinematics problem formulated as a quadratic optimization problem. While the signal for a desired velocity of the end-effector is fed into the inputs of the Lagrangian network, it generates the joint velocity vector of the manipulator in its outputs along with the associated Lagrange multipliers. The proposed Lagrangian network is shown to be capable of asymptotic tracking for the motion control of kinematically redundant manipulators.

Index Terms—Asymptotic stability, kinematic control, kinematically redundant manipulators, optimization method, recurrent neural networks.

Manuscript received April 25, 1998; revised March 8, 1999. This work was supported by the Hong Kong Research Grants Council under Grant CUHK4165/98E. This work was conducted while Q. Hu was a Research Associate at the Chinese University of Hong Kong.
J. Wang is with the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong.
Q. Hu is with the Department of Mechanical Engineering, Dalian University of Technology, Dalian, Liaoning 116023, China.
D. Jiang is with Daedalian Systems Group Inc., Toronto, Ont., M5C 1E5, Canada.
Publisher Item Identifier S 1045-9227(99)07320-8.

I. INTRODUCTION

IN ROBOTIC motion planning and control, the solutions to the kinematics problems are essential to achieve the goals of a robotic operation. The forward kinematics problem in robotics is concerned with the transformation of position and orientation information from the joint space to the Cartesian space, as described by the forward kinematics equation

    $x(t) = f(\theta(t))$    (1)

where $\theta(t)$ is an $n$-vector of joint variables, $x(t)$ is an $m$-vector of Cartesian variables, and $f(\cdot)$ is a continuous nonlinear function whose structure and parameters are known for a given manipulator. The inverse kinematics problem is to find the joint variables given the desired positions and orientations of the end-effector through the inverse mapping of the forward kinematics equation (1)

    $\theta(t) = f^{-1}(x(t)).$    (2)

The inverse kinematics problem involves the existence and uniqueness of a solution, and the effectiveness and efficiency of solution methods. The inverse kinematics problem is thus much more difficult to solve than the forward kinematics problem for serial-link manipulators. The difficulties are compounded by the requirement of real-time solution in sensor-based robotic operations. The most direct way to solve (2) is to derive a closed-form solution from (1). Unfortunately, obtaining a closed-form solution is difficult for most manipulators due to the nonlinearity of $f(\cdot)$. Moreover, the solution is often not unique for kinematically redundant manipulators due to their redundancy. Making use of the relation between the joint velocity $\dot{\theta}(t)$ and the Cartesian velocity $\dot{x}(t)$ is a common indirect approach to the inverse kinematics problem. The velocity vectors $\dot{x}(t)$ and $\dot{\theta}(t)$ have the following linear relation:

    $J(\theta(t))\,\dot{\theta}(t) = \dot{x}(t)$    (3)

where $J(\theta) = \partial f/\partial\theta$ is the Jacobian matrix, which can sometimes be rank-deficient. In a kinematically redundant manipulator, $m < n$. This indirect approach begins with the desired velocity of the end-effector $\dot{x}_d(t)$ based on a planned trajectory $x_d(t)$ and a required completion time; as a requirement, the initial end-effector pose must coincide with the start of the planned trajectory. The corresponding joint position vector $\theta(t)$ is obtained by integration of $\dot{\theta}(t)$ for a given $\theta(0)$. The resulting $\theta(t)$ is then used to control the manipulator.
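To make the notation of (1) and (3) concrete, the following sketch evaluates the forward kinematics $f$ and the analytic Jacobian $J(\theta)$ for a hypothetical planar three-link arm (a stand-in for illustration, not the PA-10 of Section V), and verifies the velocity relation $J\dot{\theta} = \dot{x}$ against a finite-difference approximation.

```python
import numpy as np

# Hypothetical planar 3-link arm (n = 3 joints, m = 2 Cartesian coordinates);
# the link lengths are illustrative, not PA-10 parameters.
L = np.array([0.4, 0.3, 0.2])

def f(theta):
    """Forward kinematics (1): end-effector position x = f(theta)."""
    c = np.cumsum(theta)                    # absolute link angles
    return np.array([np.sum(L * np.cos(c)), np.sum(L * np.sin(c))])

def jacobian(theta):
    """Analytic Jacobian J(theta) of f, an m x n matrix (here 2 x 3)."""
    c = np.cumsum(theta)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
    return J

theta = np.array([0.3, -0.5, 0.8])
theta_dot = np.array([0.1, 0.2, -0.3])

# Velocity relation (3): x_dot = J(theta) theta_dot ...
x_dot = jacobian(theta) @ theta_dot

# ... agrees with a finite-difference approximation of (d/dt) f(theta(t)).
h = 1e-6
x_dot_fd = (f(theta + h * theta_dot) - f(theta)) / h
assert np.allclose(x_dot, x_dot_fd, atol=1e-5)
```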
Much effort has been devoted to solving the inverse kinematics problem; e.g., [1]–[19]. For example, in the table lookup method [1], the inverse Jacobian matrix is stored in memory a priori instead of being computed in real time. In the pivot method [2], the inverse Jacobian is broken down into manageable submatrices. In the extended pivot method [3], the joint velocities are computed directly. In the residue arithmetic method [4], the pseudoinverse of the Jacobian is computed in a parallel manner. In the least-squares method [5], $\dot{\theta}(t)$ is computed directly without solving for the pseudoinverse of the Jacobian explicitly. Other numerical methods such as Newton's method have also been developed for solving the inverse kinematics problem; e.g., [6] and [7]. See [8] and [9] for a review and an in-depth discussion of the solution methods.

In recent years, numerous neural networks have been developed for solving various matrix algebra and optimization problems; e.g., [10]–[20]. In robotics research, various neural networks have been applied for kinematic/dynamic control and path planning of manipulators [21]–[54]. Many of the neural networks for robot kinematic control are feedforward networks such as the multilayer perceptron trained via supervised learning using the backpropagation algorithm or its variants [22], [23], [25]–[30], [32]–[37], [39], [40], [44], [46]–[48], [50]. Recurrent neural networks with feedback connections, such as the Hopfield networks, have also been applied for kinematic control [24], [31], [38], [41]–[43], [45], [49]–[54]. Unlike feedforward neural networks, most recurrent neural networks do not need off-line supervised learning and thus are more suitable for real-time robot control in uncertain environments.

In this paper, a recurrent neural network, called the Lagrangian network, is presented for the kinematic control of redundant manipulators. The proposed Lagrangian network explicitly minimizes a weighted norm of the joint velocity vector while keeping the linear relation between $\dot{\theta}(t)$ and $\dot{x}(t)$ satisfied. It is shown to be asymptotically stable and capable of solving the inverse kinematics problem in real time. The operating characteristics and performance of the Lagrangian network are demonstrated by use of simulation results.

The rest of this paper is organized in five sections. In Section II, the inverse kinematics problem is formulated as a time-varying quadratic programming problem with equality constraints. In Section III, the energy function and dynamic equation of the Lagrangian network are described. In Section IV, the Lagrangian network is proven to have the capability of asymptotic tracking. In Section V, the simulation results of the Lagrangian network to control a seven-degree-of-freedom (DOF) manipulator are discussed. Finally, Section VI concludes the paper with final remarks.

II. PROBLEM FORMULATION

In order to determine $\theta(t)$ for a given $x(t)$ through (3), the joint velocity vector $\dot{\theta}(t)$ needs to be computed. One way to determine $\dot{\theta}(t)$ is as follows [9]:

    $\dot{\theta}(t) = J^{+}\dot{x}(t) + (I - J^{+}J)\,z(t)$    (4)

where $J^{+}$ is the pseudoinverse of $J$, $\dot{x}(t)$ is an $m$-vector of desired velocity, $I$ is an $n \times n$ identity matrix, and $z(t)$ is an $n$-vector of arbitrary time-varying variables. This method entails the computation of the time-varying pseudoinverse $J^{+}(\theta(t))$.
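A minimal NumPy sketch of the resolution scheme (4), using a randomly generated stand-in Jacobian of the same shape as the PA-10's ($m = 6$, $n = 7$); it illustrates that the null-space term $(I - J^{+}J)z$ leaves the end-effector velocity unchanged, so $z(t)$ remains free for secondary objectives.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 7                       # PA-10-like dimensions: 6 Cartesian, 7 joints
J = rng.standard_normal((m, n))   # stand-in Jacobian (full row rank w.p. 1)
x_dot = rng.standard_normal(m)    # desired end-effector velocity
z = rng.standard_normal(n)        # arbitrary vector for the null-space term

J_pinv = np.linalg.pinv(J)        # time-varying pseudoinverse J+

# Redundancy resolution (4): particular solution + homogeneous (null-space) term.
theta_dot = J_pinv @ x_dot + (np.eye(n) - J_pinv @ J) @ z

# Any choice of z satisfies the velocity relation (3): J theta_dot = x_dot.
assert np.allclose(J @ theta_dot, x_dot)
```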
Since (3) is underdetermined in a kinematically redundant manipulator, another way to determine $\dot{\theta}(t)$ without the need for computing the pseudoinverse is to solve the following time-varying quadratic programming problem with equality constraints:

    minimize $\tfrac{1}{2}\,\dot{\theta}(t)^{T}W\dot{\theta}(t)$    (5)
    subject to $J(\theta(t))\,\dot{\theta}(t) = \dot{x}(t)$    (6)

where the superscript $T$ denotes the transpose operator and $W$ is an $n \times n$ symmetric positive-definite weighting matrix. In particular, if $W$ is an identity matrix, then the objective function to be minimized is equivalent to the 2-norm of the joint velocity $\|\dot{\theta}(t)\|_2$. If $W$ is the inertia matrix, then the objective function to be minimized is the kinetic energy.
The Lagrangian of the time-varying quadratic programming problem subject to equality constraints described in (5) and (6) is defined as follows:

    $L(\dot{\theta}, \lambda, t) = \tfrac{1}{2}\,\dot{\theta}^{T}W\dot{\theta} - \lambda^{T}\left[J(\theta)\dot{\theta} - \dot{x}\right]$    (7)

where $\lambda(t)$ is an $m$-dimensional column vector of Lagrange multipliers at time $t$. By setting the partial derivatives of $L$ to zero, the Lagrange necessary condition gives rise to the following time-varying algebraic equations:

    $\partial L/\partial\dot{\theta} = W\dot{\theta} - J(\theta)^{T}\lambda = 0$    (8)
    $\partial L/\partial\lambda = -J(\theta)\dot{\theta} + \dot{x} = 0.$    (9)

Multiplying both sides of (8) by $-1$, then rewriting (8) and (9) in a combined matrix form, we have

    $\begin{bmatrix} -W & J(\theta)^{T} \\ -J(\theta) & 0 \end{bmatrix} \begin{bmatrix} \dot{\theta} \\ \lambda \end{bmatrix} + \begin{bmatrix} 0 \\ \dot{x} \end{bmatrix} = 0.$    (10)
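The optimality condition (10) is a single linear system in the unknowns $\dot{\theta}$ and $\lambda$. The sketch below solves it directly with NumPy for a random full-row-rank stand-in Jacobian and confirms that, with $W = I$, the solution coincides with the minimum-norm pseudoinverse solution of (4).

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 7
J = rng.standard_normal((m, n))   # stand-in Jacobian, full row rank w.p. 1
W = np.eye(n)                     # weighting matrix of (5); identity -> 2-norm
x_dot = rng.standard_normal(m)    # desired end-effector velocity

# Optimality condition (10): B y + s = 0 with y = [theta_dot; lambda].
B = np.block([[-W, J.T], [-J, np.zeros((m, m))]])
s = np.concatenate([np.zeros(n), x_dot])
y = np.linalg.solve(B, -s)        # B is nonsingular when rank(J) = m
theta_dot, lam = y[:n], y[n:]

# The constraint (6) holds, and with W = I the solution coincides with the
# minimum-norm pseudoinverse solution of (4).
assert np.allclose(J @ theta_dot, x_dot)
assert np.allclose(theta_dot, np.linalg.pinv(J) @ x_dot)
```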

III. NETWORK DESCRIPTION

A recurrent neural network is presented in [28] for solving the quadratic programming problem with equality constraints like (5) and (6). While the original neural network was used for solving static quadratic programming problems with equality constraints, it can be easily extended to time-varying problems. Let the state vectors of the output neurons and the hidden neurons be denoted by $u(t)$ and $v(t)$, an $n$-vector representing the estimated $\dot{\theta}(t)$ and an $m$-vector representing the estimated $\lambda(t)$, respectively. The dynamic equations of the Lagrangian network can be expressed by the following time-varying linear differential equations:

    $C_1\,\dfrac{du(t)}{dt} = -W u(t) + J(\theta)^{T} v(t)$    (11)
    $C_2\,\dfrac{dv(t)}{dt} = -J(\theta)\,u(t) + \dot{x}(t)$    (12)

where $C_1 \in \mathbb{R}^{n \times n}$ and $C_2 \in \mathbb{R}^{m \times m}$ are positive diagonal capacitive matrices. The positive diagonal capacitive matrices $C_1$ and $C_2$ are used to precondition the system matrix and scale the convergence rate of the Lagrangian network.

Equation (11) shows that the symmetric connection weight matrix among the neurons represented by $u(t)$ is $-W$. Equation (11) also shows that the time-varying connection weight matrix from the neurons represented by $v(t)$ to the neurons represented by $u(t)$ is $J(\theta)^{T}$. Equation (12) shows that the time-varying connection weight matrix from the neurons represented by $u(t)$ to the neurons represented by $v(t)$ is $-J(\theta)$. Equation (12) also shows that the external input vector to the hidden layer is $\dot{x}(t)$. Fig. 1 illustrates the kinematic control process based on the Lagrangian network. In this context, the desired velocity vector $\dot{x}(t)$ is input into the Lagrangian network, and at the same time the Lagrangian network outputs the computed joint velocity vector $\dot{\theta}(t)$.

Fig. 1. Block diagram of the neural-network approach to robot kinematics control.
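A minimal sketch of the network dynamics (11) and (12), integrated here with a simple forward-Euler loop for a frozen Jacobian (in operation $J(\theta)$ is time-varying); the state $u$ converges to the optimal $\dot{\theta}$ of (5) and (6), and $v$ to the Lagrange multipliers. The step size and capacitance values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 7
J = rng.standard_normal((m, n))        # frozen stand-in Jacobian for this sketch
W = np.eye(n)
x_dot = rng.standard_normal(m)

C1_inv = np.eye(n) / 1e-4              # inverse capacitive matrices; small C
C2_inv = np.eye(m) / 1e-4              # (large gain) speeds up convergence

u = np.zeros(n)                        # output neurons: estimate of theta_dot
v = np.zeros(m)                        # hidden neurons: estimate of lambda

dt = 1e-6                              # Euler step, small relative to C
for _ in range(200_000):
    du = C1_inv @ (-W @ u + J.T @ v)   # (11)
    dv = C2_inv @ (-J @ u + x_dot)     # (12)
    u, v = u + dt * du, v + dt * dv

# At steady state u satisfies (6) and matches the pseudoinverse solution.
assert np.allclose(J @ u, x_dot, atol=1e-6)
assert np.allclose(u, np.linalg.pinv(J) @ x_dot, atol=1e-6)
```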

IV. STABILITY ANALYSIS

Written in a combined format, the Lagrangian network can be described as the following time-varying linear dynamic system:

    $C\,\dfrac{dy(t)}{dt} = B(t)\,y(t) + s(t)$    (13)

where $C = \mathrm{diag}(C_1, C_2)$, $y(t) = [u(t)^{T}, v(t)^{T}]^{T}$, and

    $B(t) = \begin{bmatrix} -W & J(\theta)^{T} \\ -J(\theta) & 0 \end{bmatrix}, \qquad s(t) = \begin{bmatrix} 0 \\ \dot{x}(t) \end{bmatrix}.$

Given an initial point $y_0 = y(t_0)$, we say the solution of the system starting from $y_0$ is stable if for any positive real number $\varepsilon$, there exists a positive real number $\delta$, such that for any initial point in the $\delta$-neighborhood of $y_0$, the corresponding solution of (13) remains in the $\varepsilon$-neighborhood of the solution starting from $y_0$ for $t \geq t_0$. We also say that the $u$ part of the state of the solution is asymptotically stable if the $u$ components of the corresponding solutions converge to those of the given solution as $t \to \infty$. As the system defined in (13) is linear, these types of stability are equivalent to the stability of the zero solution of the corresponding homogeneous system, namely, the system without the $s(t)$ term. The following theorem gives the stability property of the proposed Lagrangian network.

Theorem 1: The Lagrangian network defined in (13) is globally stable. Furthermore, the $u$ part of the state vector is globally asymptotically stable.

Proof: We need to consider the zero solution to the corresponding homogeneous system only. Let us define a function

    $V(y) = \tfrac{1}{2}\,y^{T}Cy.$    (14)

Obviously, it is a radially unbounded Lyapunov function. Along any trajectory of the homogeneous part of the system (13), the time derivative of $V$ is calculated as

    $\dfrac{dV}{dt} = y^{T}C\,\dfrac{dy}{dt} = y^{T}B(t)\,y = \tfrac{1}{2}\,y^{T}\left[B(t) + B(t)^{T}\right]y = -u^{T}Wu.$    (15)

Since $W$ is a constant positive definite symmetric matrix and $C$ is a diagonal matrix with positive diagonal entries, (15) implies that

    $\dfrac{dV}{dt} \leq -\alpha\,\|u(t)\|_2^{2} \leq 0$    (16)
    $u(t) \to 0 \quad \text{as } t \to \infty$    (17)

where $\alpha$ is the minimal eigenvalue of $W$. Hence the Theorem is proven.

The following Theorem is important for the asymptotic stability of the Lagrangian network for kinematic control of redundant manipulators.

Theorem 2: The system matrix $B$ has no eigenvalue with positive real part and no nonzero pure imaginary eigenvalue. Furthermore, if the row rank of $J$ is $m - 1$, then $B$ has only one zero eigenvalue.

Proof:
1) First we prove that none of the eigenvalues of $B$ has positive real part. Since

    $B + B^{T} = \begin{bmatrix} -2W & 0 \\ 0 & 0 \end{bmatrix}$

and $W$ is positive definite, the quadratic form associated with $B$ is always negative semidefinite; i.e., $y^{T}By = \tfrac{1}{2}\,y^{T}(B + B^{T})\,y = -u^{T}Wu \leq 0$ for every real $y = [u^{T}, v^{T}]^{T}$. In the case of a real eigenvalue $\rho$, if $z$ is a real eigenvector of $B$ associated with $\rho$, we have $\rho\|z\|_2^{2} = z^{T}Bz \leq 0$ because the quadratic form of $B$ is always negative semidefinite. Therefore, $\rho \leq 0$. Consider the case of a complex eigenvalue $\rho$ with eigenvector $z$, and let $\bar{z}$ and $\bar{\rho}$ denote the complex conjugates of $z$ and $\rho$, respectively. Since $B$ is real, $\mathrm{Re}(z^{*}Bz) = z^{*}\,\tfrac{1}{2}(B + B^{T})\,z \leq 0$, where the superscript $*$ denotes the Hermitian operator (i.e., the conjugate transpose). Because $z^{*}Bz = \rho\|z\|_2^{2}$, it follows that $\mathrm{Re}\,\rho = \mathrm{Re}(z^{*}Bz)/\|z\|_2^{2} \leq 0$. Therefore, $\mathrm{Re}\,\rho \leq 0$.

2) Now we prove by contradiction that none of the eigenvalues of $B$ has a nonzero pure imaginary part. Let us assume that $B$ has a pure imaginary eigenvalue $\rho = i\beta$ where $\beta \neq 0$. If $z = [z_1^{T}, z_2^{T}]^{T}$ is a complex eigenvector of $B$ corresponding to $\rho$, where $z_1 \in \mathbb{C}^{n}$ and $z_2 \in \mathbb{C}^{m}$, then from $Bz = i\beta z$, we have

    $-Wz_1 + J^{T}z_2 = i\beta z_1$    (18)
    $-Jz_1 = i\beta z_2.$    (19)
Multiplying both sides of (18) by $z_1^{*}$, we have

    $-z_1^{*}Wz_1 + z_1^{*}J^{T}z_2 = i\beta\,\|z_1\|_2^{2}.$    (20)

From (19), $z_2 = (i/\beta)\,Jz_1$. Substituting it into (20), we have

    $-z_1^{*}Wz_1 + \dfrac{i}{\beta}\,\|Jz_1\|_2^{2} = i\beta\,\|z_1\|_2^{2}.$

Since $W$ is symmetric, the term $z_1^{*}Wz_1$ is always real, while the remaining terms are purely imaginary. Taking the real part of both sides gives $z_1^{*}Wz_1 = 0$, and hence $z_1 = 0$ because $W$ is positive definite. Then (19) implies $z_2 = 0$ because $\beta \neq 0$, so $z$ cannot be an eigenvector. This contradicts the assumption of the existence of a nonzero pure imaginary eigenvalue.

3) Finally, we prove that there is only one zero eigenvalue of $B$ if the row rank of $J$ is $m - 1$ (i.e., $\mathrm{rank}(J) = m - 1$). Note that for any four-block square matrix with an invertible upper-left block,

    $\det\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \det(A_{11})\,\det\!\left(A_{22} - A_{21}A_{11}^{-1}A_{12}\right).$

Applying this result with $A_{11} = sI + W$, which is invertible for all $s \geq 0$, the characteristic polynomial of $B$ factors as

    $\det(sI - B) = \det(sI + W)\,\det\!\left(sI + J(sI + W)^{-1}J^{T}\right).$

Since $\det(sI + W) \neq 0$ for $s \geq 0$, any zero root of the characteristic polynomial must be a zero root of $q(s) = \det(sI + J(sI + W)^{-1}J^{T})$. At $s = 0$, the matrix $JW^{-1}J^{T}$ is positive semidefinite of rank $m - 1$, so $q(0) = 0$ and the null space of $JW^{-1}J^{T}$ is spanned by a single unit column vector $\zeta$; moreover, $\zeta^{T}JW^{-1}J^{T}\zeta = 0$ implies $J^{T}\zeta = 0$. Differentiating $q$ at $s = 0$ then yields

    $q'(0) = \mathrm{tr}\!\left[\mathrm{adj}\!\left(JW^{-1}J^{T}\right)\left(I - JW^{-2}J^{T}\right)\right] = c\,\zeta^{T}\!\left(I - JW^{-2}J^{T}\right)\zeta = c > 0$

where $\mathrm{adj}(\cdot)$ denotes the adjugate matrix, $\mathrm{adj}(JW^{-1}J^{T}) = c\,\zeta\zeta^{T}$, and $c$ is the product of the $m - 1$ positive eigenvalues of $JW^{-1}J^{T}$. Since $q'(0) \neq 0$, the characteristic polynomial of $B$ has only one zero root. The proof is complete.

The above analytical results show that no pole of the neural system is located in the open right half of the complex plane or on the imaginary axis away from the origin, and only one pole is located at the origin if the rank of $J$ is $m - 1$. Therefore, if the redundant manipulator loses at most one degree of freedom at any time (i.e., $\mathrm{rank}(J(\theta)) \geq m - 1$), the Lagrangian network with sufficiently large gains (i.e., sufficiently small values of $C_1$ and $C_2$) has the asymptotic tracking capability. At the limiting state

    $-Wu(t) + J(\theta)^{T}v(t) = 0, \qquad -J(\theta)\,u(t) + \dot{x}(t) = 0$

which satisfies the Lagrange optimality condition (10).
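The conclusions of Theorems 1 and 2 are easy to check numerically. The sketch below builds $B$ for a random positive definite $W$ and a Jacobian-shaped stand-in matrix $J$, confirms that no eigenvalue has positive real part, and counts the zero eigenvalues when $J$ is forced to have row rank $m - 1$; the matrix sizes and the rank-deficiency construction are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 7

A = rng.standard_normal((n, n))
W = A @ A.T + n * np.eye(n)          # random symmetric positive definite W

def system_matrix(J):
    """B = [[-W, J^T], [-J, 0]] as in (13)."""
    return np.block([[-W, J.T], [-J, np.zeros((m, m))]])

# Full row rank case: all eigenvalues strictly in the open left half-plane.
J_full = rng.standard_normal((m, n))
eig_full = np.linalg.eigvals(system_matrix(J_full))
assert eig_full.real.max() < 0

# Rank m-1 case: duplicate a row so rank(J) = m - 1; exactly one zero
# eigenvalue appears, and still none with positive real part (Theorem 2).
J_def = J_full.copy()
J_def[-1] = J_def[0]
eig_def = np.linalg.eigvals(system_matrix(J_def))
assert eig_def.real.max() < 1e-9
print("zero eigenvalues:", np.sum(np.abs(eig_def) < 1e-9))   # -> 1
```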

V. SIMULATION RESULTS

Simulations have been performed based on the PA-10 manipulator (Portable General Purpose Intelligent Arm) made by Mitsubishi. The PA-10 manipulator has seven degrees of freedom (three rotation axes and four pivot axes) as illustrated in Fig. 2. Fig. 3 shows the coordinate system of the PA-10 manipulator. The homogeneous transformation matrices and the Jacobian are shown in the Appendixes. The key specifications of the PA-10 manipulator are as follows: the full length of the manipulator is 1345 mm, the weight of the manipulator is 35 kgf, the maximum combined speed of all axes is 1.55 m/s, the payload weight is 10 kgf, and the output torque is 9.8 N·m.

In this section the simulation results are presented to demonstrate the performance of the proposed Lagrangian network for pseudoinverse kinematic control of the PA-10 manipulator. The objective is to simulate the PA-10 manipulator in following an ellipse with a short radius of 0.254 m and a long radius of 0.273 m in the 3-D workspace with the orientation of the end-effector unchanged. The fourth-order Runge–Kutta method is used to solve the differential equations (11) and (12) in MATLAB. In the simulation, the weighting matrix $W$ in (5) and the scaling matrices $C_1$ and $C_2$ in (11) and (12) are chosen as constant diagonal matrices, the sampling period is 100 μs, and the run time of the manipulator is 4 s.
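A sketch of how the desired end-effector velocity for such a test can be generated; the ellipse parameterization, its placement in the workspace, and the zero angular velocity (fixed orientation) are assumptions for illustration, not the authors' exact trajectory.

```python
import numpy as np

A_RADIUS, B_RADIUS = 0.273, 0.254   # long and short radii of the ellipse (m)
T_F = 4.0                           # completion time (s)

def desired_velocity(t):
    """Desired 6-vector xdot_d(t): one period of an ellipse traced in the
    x-y plane of the workspace, with the orientation held fixed."""
    phi = 2.0 * np.pi * t / T_F
    phi_dot = 2.0 * np.pi / T_F
    v_lin = np.array([-A_RADIUS * np.sin(phi) * phi_dot,
                       B_RADIUS * np.cos(phi) * phi_dot,
                       0.0])
    v_ang = np.zeros(3)             # orientation unchanged: zero angular velocity
    return np.concatenate([v_lin, v_ang])

# Sample at the 100-microsecond period used in the simulation.
ts = np.arange(0.0, T_F, 1e-4)
xdot_d = np.array([desired_velocity(t) for t in ts])   # shape (40000, 6)
```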
Fig. 2. Mechanical configuration of the PA-10 redundant manipulator.

Fig. 3. Coordinate system of the PA-10 redundant manipulator.

Fig. 4 shows the simulated oval trajectory in the three-dimensional (3-D) space and its orthographic projections, which is virtually identical to the desired path.

Fig. 4. Simulated trajectory of the PA-10 redundant manipulator.

Illustrated in Fig. 5 are the transient behaviors of the PA-10 manipulator and the Lagrangian network. Fig. 5(a) and (b) shows the desired coordinates $x_d(t)$ and the desired velocities $\dot{x}_d(t)$ of the end-effector, where the Cartesian coordinates of the end-effector are expressed in meters, the orientation variables in radians, the translational velocity variables in m/s, and the angular velocity variables in rad/s, respectively. The orientation of the end-effector is required to remain unchanged while the manipulator moves; i.e., the desired angular velocities are set to zero throughout the manipulation. Starting from a given initial joint vector, the actual joint velocity vector $\dot{\theta}(t)$ in rad/s and the actual joint vector $\theta(t)$ in radians generated from the Lagrangian network with an integrator are shown in Fig. 5(c) and (e), and their 2-norms $\|\dot{\theta}(t)\|_2$ in rad/s and $\|\theta(t)\|_2$ in radians are shown in Fig. 5(d) and (f), respectively. Fig. 5(g) and (h) shows that the deviations between the actual position and the desired one and between the actual velocity and the desired one over time are very small. Fig. 5(i) shows the transient behavior of the Lagrange multiplier vector $\lambda(t)$. Since the singularity of a manipulator is defined as the rank deficiency of the Jacobian, the condition number of the Jacobian is a good indicator of singularity. Fig. 5(j) shows the transient behavior of the condition number of the Jacobian, $\mathrm{Cond}(J)$. The dotted lines in these subplots correspond to the one-layer recurrent neural network presented in [12], which solves (3) without minimizing an objective function. The condition number of the Jacobian corresponding to the Lagrangian network, indicated by the solid line, is always smaller than the one corresponding to the one-layer recurrent neural network, indicated by the dotted line, which reaches its peak near the end of the run. The performance of the proposed Lagrangian network is clearly superior to that of the one-layer one, especially near the end of the path.

Fig. 5. Transient behaviors of the Lagrangian network and the PA-10 manipulator.

Fig. 6 illustrates the 13 poles of the Lagrangian network [the eigenvalues of $B$] on the complex plane, where the two subplots in each row show a pair of real or conjugate complex poles. It verifies that no pole is located on the right-hand side of the complex plane or on the imaginary axis.

Fig. 6. Trajectories of the poles of the Lagrangian network on the complex plane.
The simulations of the PA-10 manipulator at singular configurations have also been conducted, and the results are illustrated in Fig. 5(k) and (l) using the condition number of the Jacobian. A close examination of the Jacobian reveals that if any of the angular variables of the three rotation axes is zero, then two columns in the Jacobian are equal (i.e., the degrees of freedom of the manipulator are reduced from seven to six and the Jacobian becomes ill-conditioned). Furthermore, if two of those joint angles are equal to zero, then three columns in the Jacobian are equal (i.e., the degrees of freedom of the manipulator are reduced from seven to five and the Jacobian becomes rank-deficient). As shown in Fig. 5(k), the manipulator starts from the singular configuration of elbow lock, where the lower arm is parallel to the upper arm in the beginning. Fig. 5(l) shows another example in which the manipulator moves out of the singular configuration of shoulder lock, where the upper arm is parallel to the shoulder in the beginning. Since the upper arm does not move in the beginning, the spike value of the condition number of the Jacobian for the one-layer neural network is located in the middle of the run, instead of in the beginning. Specifically, the spike of $\mathrm{Cond}(J)$ corresponding to the one-layer recurrent neural network near the end of the run in Fig. 5(k) and the two spikes in Fig. 5(l) all result from elbow lock. The cause of this outcome is that the one-layer recurrent neural network does not minimize $\|\dot{\theta}\|$, that (3) is underdetermined, and that one of the angular variables of the rotation axes is equal to zero. Other simulation results show that the Lagrangian network is advantageous over the one-layer one in terms of the magnitude of errors (especially the position errors) and the norm of the joint velocity when the velocity must change quickly from acceleration to deceleration or vice versa.
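Monitoring $\mathrm{Cond}(J)$ as in Fig. 5(j)–(l) is straightforward to implement; a small sketch, using a hypothetical joint-space path and a stand-in Jacobian function (not the PA-10's), flags near-singular configurations where two columns of $J$ become nearly equal.

```python
import numpy as np

def cond(J):
    """Condition number of the Jacobian: ratio of extreme singular values
    (infinite when J is rank-deficient)."""
    s = np.linalg.svd(J, compute_uv=False)
    return np.inf if s[-1] == 0.0 else s[0] / s[-1]

def monitor(jacobian, joint_path, threshold=1e3):
    """Evaluate Cond(J) along a joint-space path and flag near-singular
    configurations; `jacobian` maps a joint vector to the m x n Jacobian."""
    for k, theta in enumerate(joint_path):
        c = cond(jacobian(theta))
        if c > threshold:
            print(f"step {k}: Cond(J) = {c:.3e}  <- near-singular")

# Illustration with a stand-in Jacobian: two of its columns coincide, and a
# degree of freedom is lost, when theta[1] = 0.
def toy_jacobian(theta):
    col0 = np.array([np.cos(theta[0]), np.sin(theta[0]), 1.0])
    col1 = np.array([np.cos(theta[0] + theta[1]), np.sin(theta[0] + theta[1]), 1.0])
    col2 = np.array([np.cos(theta[2]), np.sin(theta[2]), 0.0])
    return np.column_stack([col0, col1, col2])

path = [np.array([0.5, t, 1.0]) for t in np.linspace(-0.2, 0.2, 9)]
monitor(toy_jacobian, path)        # flags the configurations near theta[1] = 0
```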

VI. CONCLUSIONS

The proposed Lagrangian network provides a new parallel distributed model for real-time kinematic control of redundant robot manipulators. Compared with the supervised learning neural networks for robot kinematic control, the present approach eliminates the need for training and produces much smaller position and velocity errors. Compared with other recurrent neural networks, the present approach explicitly minimizes the norm of the joint velocity and performs better in terms of the joint velocity norm as well as the position and velocity errors. The simulation results have shown the capability and characteristics of the proposed Lagrangian network for the kinematic control of redundant manipulators. Implemented in dedicated hardware such as an ASIC, the Lagrangian network is suitable for the real-time kinematic control of redundant robots, especially mobile ones.

APPENDIX I

The homogeneous transformation matrix $T$, which specifies the position and orientation of the end-effector with respect to the base coordinate system, is the chain product of successive coordinate transformation matrices ${}^{i-1}A_{i}$, for $i = 1, 2, \ldots, 7$, expressed as

    $T = \begin{bmatrix} n & s & a & p \\ 0 & 0 & 0 & 1 \end{bmatrix}$

where $n$ is the normal vector of the end-effector in the base coordinate system, $s$ is the sliding vector of the end-effector, $a$ is the approach vector of the end-effector, and $p$ is the position vector of the end-effector.

APPENDIX II

The Jacobian of the PA-10 manipulator can be derived as shown in (21):

    [Equation (21): the 6 × 7 Jacobian matrix of the PA-10 manipulator, whose entries are sums of products of the sines and cosines of the joint angles.]

REFERENCES

[1] D. E. Whitney, "The mathematics of coordinated control of prosthetic arms and manipulators," Trans. ASME, J. Dynamical Syst., Measurement, Contr., vol. 94, pp. 303–309, 1972.
[2] H. Meloush and P. Andre, "High speed computation of the inverse Jacobian matrix and of servo inputs for robot arm control," in Proc. 21st IEEE Conf. Decision Contr., 1982, pp. 89–94.
[3] M. B. Leahy, Jr., L. M. Nugent, and G. N. Saridis, "Efficient PUMA manipulation and inversion," J. Robot. Syst., vol. 4, no. 2, pp. 185–197, 1987.
[4] P. R. Chang and C. S. G. Lee, "Residue arithmetic VLSI array architecture for manipulator pseudoinverse Jacobian computation," IEEE Trans. Robot. Automat., vol. 5, pp. 569–582, 1989.
[5] C. W. Wampler II, "Manipulator inverse kinematics solutions based on vector formation and damped least-squares methods," IEEE Trans. Syst., Man, Cybern., vol. 16, pp. 93–101, 1986.
[6] R. V. Mayorga, A. K. C. Wong, and N. Milano, "A fast procedure for manipulator inverse kinematics evaluation and pseudoinverse robustness," IEEE Trans. Syst., Man, Cybern., vol. 22, pp. 790–798, 1992.
[7] T. Isabe, K. Nagasaka, and S. Yamamoto, "A new approach to kinematic control of simple manipulators," IEEE Trans. Syst., Man, Cybern., vol. 22, pp. 1116–1124, 1992.
[8] C. A. Klein and C.-H. Huang, "Review of pseudoinverse control for use with kinematically redundant manipulators," IEEE Trans. Syst., Man, Cybern., vol. 13, pp. 245–250, 1983.
[9] D. R. Baker and C. W. Wampler II, "On the inverse kinematics of redundant manipulators," Int. J. Robot. Res., vol. 7, pp. 3–21, 1988.
[10] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biol. Cybern., vol. 52, pp. 141–152, 1985.
[11] D. W. Tank and J. J. Hopfield, "Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit," IEEE Trans. Circuits Syst., vol. 33, pp. 533–541, 1986.
[12] J. Wang, "Analysis and design of a recurrent neural network for linear programming," IEEE Trans. Circuits Syst. I, vol. 40, pp. 613–618, 1993.
[13] J. Wang, "A deterministic annealing neural network for convex programming," Neural Networks, vol. 7, pp. 629–641, 1994.
[14] A. Cichocki and R. Unbehauen, "Neural networks for solving systems of linear equations and related problems," IEEE Trans. Circuits Syst., vol. 39, pp. 124–138, 1992.
[15] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing. New York: Wiley, 1993.
[16] Y. Xia and J. Wang, "A general methodology for designing globally convergent optimization neural networks," IEEE Trans. Neural Networks, vol. 9, pp. 1331–1343, 1998.
[17] J. Wang, "A recurrent neural network for real-time matrix inversion," Appl. Math. Comput., vol. 55, pp. 89–100, 1993.
[18] J. Wang, "Recurrent neural networks for solving linear matrix equations," Comput. Math. Applicat., vol. 26, no. 9, pp. 23–34, 1993.
[19] J. Wang and H. Li, "Solving simultaneous linear equations using recurrent neural networks," Inform. Sci., vol. 76, pp. 255–277, 1994.
[20] J. Wang, "Recurrent neural networks for computing pseudoinverses of rank deficient matrices," SIAM J. Sci. Comput., vol. 18, pp. 1479–1493, 1997.
[21] G. A. Bekey, "Robotics and neural networks," in Neural Networks for Signal Processing, B. Kosko, Ed. Englewood Cliffs, NJ: Prentice-Hall, 1992, pp. 161–185.
[22] A. Guez and Z. Ahmad, "Solution to the inverse problem in robotics by neural networks," in Proc. IEEE ICNN, San Diego, CA, 1988, vol. II, pp. 617–624.
[23] G. Josin, D. Charney, and D. White, "Robot control using neural networks," in Proc. IEEE ICNN, San Diego, CA, 1988, pp. 625–631.
[24] J. Guo and V. Cherkassy, "A solution to the inverse kinematics problem in robotics using neural network processing," in Proc. IJCNN, Washington, D.C., 1989, vol. II, pp. 299–304.
[25] R. Eckmiller, J. Beckmann, and M. Lades, "Neural kinematics net for a redundant robot arm," in Proc. IJCNN, Washington, D.C., 1989, vol. II, pp. 333–339.
[26] A. Guez and Z. Ahmad, "Accelerated convergence in the inverse kinematics via multilayer feedforward networks," in Proc. IJCNN, Washington, D.C., 1989, vol. II, pp. 341–344.
[27] Z. Ahmad and A. Guez, "On the solution to the inverse kinematic problem," in Proc. IEEE Int. Conf. Robot. Automat., 1990, pp. 1692–1697.
[28] F. J. Arteage-Bravo, "Multilayer backpropagation network for learning the forward and inverse kinematics equations," in Proc. IJCNN, Washington, D.C., 1990, vol. II, pp. 319–322.
[29] H. W. Werntges, "Delta rule-based neural network for inverse kinematics: Error gradient reconstruction replaces the teacher," in Proc. IJCNN, Washington, D.C., 1990, vol. III, pp. 415–420.
[30] J. Hakala, R. Steiner, and R. Eckmiller, "Quasilocal solution for inverse kinematics of a redundant robot arm," in Proc. IJCNN, San Diego, CA, 1990, vol. III, pp. 321–326.
[31] S. Lee and R. M. Kil, "Robot kinematic control based on bidirectional mapping neural network," in Proc. IJCNN, San Diego, CA, 1990, vol. III, pp. 327–335.
[32] S. Kieffer, V. Morellas, and M. Donath, "Neural network learning of the inverse kinematic relationships for a robot arm," in Proc. IEEE Int. Conf. Robot. Automat., 1991, pp. 2418–2425.
[33] C. L. P. Chen and A. D. McAulay, "Robot kinematics learning computations using polynomial neural networks," in Proc. IEEE Int. Conf. Robot. Automat., Sacramento, CA, 1991, pp. 2638–2643.
[34] F. Pourboghrat, "Neural networks for learning inverse kinematics of redundant manipulators," in Proc. IJCNN, Seattle, WA, 1991, vol. II, pp. 1004–1006.
[35] J. Hakala, G. Fahner, and R. Eckmiller, "Rapid learning of inverse robot kinematics based on connection assignment and topographical encoding (CATE)," in Proc. IJCNN, Seattle, WA, 1991, vol. II, pp. 1536–1541.
[36] C. Kazakiewicz, T. Ogiso, and N. Myyake, "Partitioned neural-network architecture for inverse kinematic calculation of a 6-DOF robot manipulator," in Proc. IJCNN, Seattle, WA, 1991, vol. III, pp. 2001–2006.
[37] C. Cox, R. Saeks, M. Lothers, R. Pap, and C. Thomas, "A neurocontroller for robotic applications," in Proc. IEEE Int. Conf. Syst., Man, Cybern., 1992, pp. 712–716.
[38] Y. Li and N. Zeng, "A neural-network-based inverse kinematics solution in robotics," in Neural Networks in Robotics, G. A. Bekey and K. Y. Goldberg, Eds. Boston, MA: Kluwer, 1993, pp. 97–111.
[39] T. C. Hsia and Z. Mao, "Obstacle avoidance inverse kinematics solution of redundant manipulators by neural networks," in Proc. IEEE Int. Conf. Robot. Automat., 1993, p. 1014.
[40] J. M. I. Zannatha, D. F. Bassi, and R. A. Garcia, "Position and differential kinematic neural control of robot manipulators: A comparison between two schemes," in Proc. IEEE Int. Conf. Syst., Man, Cybern., 1993, pp. 479–484.
[41] D. H. Rao, M. M. Gupta, and P. N. Nikiforuk, "On-line learning of robot inverse kinematic transformations," in Proc. IJCNN, Nagoya, Japan, 1993, pp. 2827–2830.
[42] S. Lee and R. M. Kil, "Redundant arm kinematic control with recurrent loop," Neural Networks, vol. 7, pp. 643–655, 1994.
[43] G. Wu and J. Wang, "A recurrent neural network for manipulator inverse kinematics computation," in Proc. IEEE ICNN, Orlando, FL, 1994, vol. V, pp. 2715–2720.
[44] Y. Kuroe, Y. Nakai, and T. Mori, "A new neural-network learning of inverse kinematics of robot manipulator," in Proc. IEEE ICNN, Orlando, FL, 1994, vol. V, pp. 2819–2824.
[45] K. Mathia, R. Saeks, and G. G. Lendaris, "Linear Hopfield networks, inverse kinematics and constrained optimization," in Proc. IEEE Int. Conf. Syst., Man, Cybern., 1994, pp. 1269–1273.
[46] R. Gazit and B. Widrow, "Back propagation through links: A new approach to kinematic control of serial manipulators," in Proc. IEEE Int. Symp. Intell. Contr., 1995, pp. 99–104.
[47] B.-L. Lu and K. Ito, "Regulation of inverse kinematics for redundant manipulators using neural network inversions," in Proc. IEEE ICNN, Perth, Australia, 1995, pp. 2726–2731.
[48] A. Ramdane-Cherif, V. Perdereau, and M. Drouin, "Optimization schemes for learning the forward and inverse kinematic equations with neural network," in Proc. IEEE ICNN, Perth, Australia, 1995, pp. 2732–2737.
[49] H. Ding and S. P. Chan, "A real-time planning algorithm for obstacle avoidance of redundant robots," J. Intell. Robot. Syst., vol. 12, pp. 1–15, 1996.
[50] D. Wang and A. Zilouchian, "Solutions of kinematics of robot manipulators using a Kohonen self-organizing neural network," in Proc. IEEE Int. Symp. Intell. Contr., Istanbul, Turkey, 1997, pp. 251–255.
[51] J. Wang, Q. Hu, and D.-C. Jiang, "A two-layer recurrent neural network for pseudoinverse control of kinematically redundant manipulators," in Proc. IEEE Conf. Decision Contr., San Diego, CA, 1997, pp. 2507–2512.
[52] A. F. R. Araujo and M. Vieira, "Associative memory used for trajectory generation and inverse kinematics problem," in Proc. IEEE/INNS IJCNN, Anchorage, AK, 1998, pp. 2057–2062.
[53] H. Ding and S. K. Tso, "Minimum infinity-norm kinematic solution for redundant robots using neural networks," in Proc. IEEE Int. Conf. Robot. Automat., Leuven, Belgium, 1998, pp. 1719–1724.
[54] A. F. R. Araujo and H. D'Arbo, "A partially recurrent neural network to perform trajectory planning, inverse kinematics, and inverse dynamics," in Proc. IEEE Int. Conf. Syst., Man, Cybern., San Diego, CA, 1998, pp. 1784–1789.

Jun Wang (S'89–M'90–SM'93) received the B.S. degree in electrical engineering and the M.S. degree in systems engineering from Dalian University of Technology, China, and the Ph.D. degree in systems engineering from Case Western Reserve University, Cleveland, OH. Presently, he is an Associate Professor at the Chinese University of Hong Kong. Prior to this position, he was an Associate Professor at the University of North Dakota, Grand Forks, ND. His current research interests include neural networks and their applications in control and manufacturing systems.
Qingni Hu received the B.S. degree in mechanical engineering in 1982 and the M.S. degree in computer science in 1988, both from Dalian University of Technology, China. He is currently an Associate Professor at Dalian University of Technology. His research interests include image processing, computer graphics, and neural networks.

Danchi Jiang received the B.S. degree in mathematics from Wuhan University, Wuhan, China, the M.S. degree in control systems and applications from East China Normal University, Shanghai, China, and the Ph.D. degree in systems engineering from the Australian National University, Canberra, Australia. This work was done while he was a Postdoctoral Research Fellow at the Chinese University of Hong Kong. His present research interests include engineering software development, neural computing, system optimization, and control system design.
