Introduction To Robotics: Recommended Literature
Introduction to Robotics
Module:
Trajectory generation and robot programming
(summer term 2000)
Recommended literature
Text Books
Journals
Acknowledgement
This handout has been partially derived from the lecture notes of Wolfgang Weber,
teaching robotics at the Fachhochschule Darmstadt, Germany, and the above-mentioned
textbooks by J.J. Craig and R.P. Paul.
Revisions

Revision | Date      | Author     | History
0.9      | 12-Mar-99 | Th. Horsch | 1st draft
1.0      | 20.2.2000 | Th. Horsch | 1st release
E:\Robot_Erw\Publications\LectureRobotics.doc
1/50
CONTENT
1. MOTIVATION ........................................................................................................................ 3
1.1.
1.2. Terminology
1.3.
2.
3. ROBOT KINEMATICS......................................................................................................... 19
3.1.
3.2. Forward kinematics
3.3. Inverse kinematics
4. TRAJECTORY GENERATION
5.
1. Motivation
1.1.
It is worth noting that many problems in computer graphics animation are related to
robotics problems (much of the material in this course is relevant to animation).
1.2.
Terminology

(figure: robot arm with labeled links, joints, a prismatic joint and the end effector)
a 2 degree of freedom (DOF) robot with rotational joints (2R planar robot) (->load ->robot ->dh2.rob in EASY-ROB)
You can consider more examples with EASY-ROB. Robots from different robot vendors
are available: Fanuc, ABB, KUKA, Reis (->load ->extern in EASY-ROB).
Kinematics is the science of motion which treats motion without regard to the forces
which cause it. Within the science of kinematics one studies the position, velocity,
acceleration, and all higher order derivatives of the position variables. Hence, the study of
the kinematics of robots refers to all geometrical and time-based properties of the motion.
A very basic problem to be solved is:
How to relate the robot's configuration or pose to the position and orientation of its end
effector. A configuration of an n-degree-of-freedom robot is an n-vector (q1, q2, ..., qn),
where each qi is either a rotational joint angle or a prismatic joint translation.
This is known as the forward kinematics of the robot. This is the static geometrical
problem of computing the position and orientation of the end-effector of the robot.
Specifically, given a set of joint angles, the forward kinematic problem is to compute the
position and orientation of the TCP relative to the base frame. Sometimes we think of this
as changing the representation of robot position from joint space description into a
cartesian space description.
The following problem is considered the inverse kinematics: Given the position and
orientation of the end-effector of the robot, calculate all possible sets of joint angles which
realize this position and orientation.
A robot programming language serves as an interface between the human user and the
industrial robot. Central questions arise such as:
How are motions through space described easily by a programmer?
How are multiple robots programmed so that they can work in parallel?
How are sensor-based actions described in a language?
The sophistication of the user interface is becoming extremely important as robots and
other programmable devices are applied to more and more demanding industrial
applications.
An off-line programming system is a robot programming environment which has been
sufficiently extended, generally by means of computer graphics, so that the development
of robot programs can take place without access to the robot itself. A common argument
raised in its favor is that an off-line programming system will not cause production
equipment (i.e. the robot) to be tied up when it needs to be reprogrammed; hence,
automated factories can stay in production mode a greater percentage of time.
Programming system:
A robot user needs to teach the robot the specific task which is to be performed. This
can be done with the so-called teach-in method: the programmer uses the teach box of
the robot control, drives the robot to the desired positions, and stores them together with
other values like travel speed, corner smoothing parameters, process parameters, etc.
During the programming period, which can take significant time for complex tasks, the
production is idle. Another possibility is the use of off-line programming systems as
mentioned before.
Robot control:
(block diagram: the external control performs the coordinate transformation from desired
world coordinates to desired joint values; the internal control (joint control) tracks these
using the measured joint values)
Robot mechanics:
The robot mechanics transform the joint torques applied by the servo drives into an
appropriate motion. Nowadays AC motors are used to drive the axes.
Resolvers serve as position sensors and are normally located on the shaft of the
actuator. Resolvers are devices which output two analog signals: one is the sine of the
shaft angle, the other the cosine. The shaft angle is determined from the relative
magnitude of the two signals.
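The sine/cosine decoding described above can be sketched numerically; the two-argument arctangent resolves the angle over a full revolution from the relative magnitude and sign of the two signals (the function name and sample values below are illustrative, not from any particular resolver):

```python
import math

def resolver_angle(sin_signal, cos_signal):
    """Recover the shaft angle from the two analog resolver outputs.

    atan2 uses the signs of both signals, so the angle is unique over a
    full revolution, which a plain arctan of the ratio would not give.
    """
    return math.atan2(sin_signal, cos_signal)

# Example: a shaft at 120 degrees produces sin(120 deg) and cos(120 deg)
theta = math.radians(120.0)
recovered = resolver_angle(math.sin(theta), math.cos(theta))
print(math.degrees(recovered))  # -> 120.0 (up to rounding)
```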
1.3.
PTP (Point to Point) motion
A motion which specifies the configuration of the robot at the start and the end point. The
motion between these points is not determined in the sense that the TCP follows a
desired (cartesian) path. This motion type is mainly relevant for pick-and-place tasks,
where the position and orientation along the path are not important. Since this motion is
very hard to predict in cartesian space, the programmer has to be careful regarding
collisions between the robot and its environment. On the other hand this motion is easy
(and fast) to compute, because no inverse kinematics transformation needs to be
evaluated.
Figure 1-1 describes two examples:
Figure 1-1: PTP motions in world coordinate system, left: path between two drilling holes,
right: robot end-effector and three configurations for pick & place
Applications for PTP motions are spot welding, handling, pick and place, (partly) machine
handling and machine transfer.

Figure 1-2: example for interpolated motions between programmed positions (via points),
left: a sequence consisting of 5 linear motions, right: linear-circular-linear motion

Applications for CP motions are arc welding, deburring, gluing, etc.
2.1.
To describe the position and orientation of the TCP, gripper, robot links, objects in the
environment, etc., coordinate systems are used.

(figure: right-handed coordinate systems; the z arrow points out of the plane)
A vector v with components v_x, v_y, v_z:

v = (v_x, v_y, v_z)^T   (2.1)

The unit vector in the direction of v:

e_v = v / |v| = (v_x/|v|, v_y/|v|, v_z/|v|)^T   (2.2)
addition of vectors:

v_3 = v_1 + v_2 = (v_1x + v_2x, v_1y + v_2y, v_1z + v_2z)^T   (2.3)

subtraction of vectors:

v_1 - v_2 = (v_1x - v_2x, v_1y - v_2y, v_1z - v_2z)^T   (2.4)

cross product of two vectors:

v_3 = v_1 x v_2 = (v_1y v_2z - v_1z v_2y, v_1z v_2x - v_1x v_2z, v_1x v_2y - v_1y v_2x)^T   (2.5)

v_3 is orthogonal to the plane generated by v_1 and v_2. The three-finger rule gives the
direction of v_3 (see Figure 2-4).

Figure 2-4: cross product
Dot product of two vectors:
The dot product of two vectors is defined by (2.6). The result is a scalar value which is
related to the opening angle between the vectors. If one of the vectors is a unit vector (a
vector with unit length) the result is the length of projection onto the unit vector. (see
Figure 2-5).
a . b = |a| |b| cos phi,   a . b = a_x b_x + a_y b_y + a_z b_z   (2.6)
x_i = (1, 0, 0)^T,  y_i = (0, 1, 0)^T,  z_i = (0, 0, 1)^T,  |x_i| = |y_i| = |z_i| = 1  (unit vectors)   (2.7)
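The vector operations (2.1)-(2.7) can be checked numerically, e.g. with NumPy (the sample vectors are arbitrary):

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([4.0, 5.0, 6.0])

# (2.2) unit vector: divide by the Euclidean length
e1 = v1 / np.linalg.norm(v1)

# (2.3)/(2.4) addition and subtraction are componentwise
v_add = v1 + v2
v_sub = v1 - v2

# (2.5) cross product: the result is orthogonal to both inputs
v3 = np.cross(v1, v2)

# (2.6) dot product: |a| |b| cos(phi)
d = np.dot(v1, v2)

print(np.dot(v3, v1), np.dot(v3, v2))  # -> 0.0 0.0 (orthogonality)
```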
(figure: frame K_I with axes x_i, y_i, z_i and frame K_K with axes x_k, y_k, z_k)

The rotation matrix describing frame K relative to frame I has the unit vectors of K,
expressed in I, as its columns:

I_K A = ( I x_K   I y_K   I z_K ) =

( x_k . x_i   y_k . x_i   z_k . x_i )
( x_k . y_i   y_k . y_i   z_k . y_i )   (2.8)
( x_k . z_i   y_k . z_i   z_k . z_i )
Each component of the (3x3) rotation matrix I_K A can be written as the dot product of a
pair of unit vectors. Note that the components of any vector are simply the projections of
that vector onto the unit directions of its reference frame. Since the dot product of two unit
vectors yields the cosine of the angle between them, these components are often
referred to as direction cosines.
As explained before, further inspection of (2.8) shows that the rows of the matrix are
the unit vectors of system I expressed in system K. Hence K_I A, the description of frame I
relative to K, is given by the transpose of (2.8). This suggests that the inverse of a rotation
matrix is equal to its transpose.
Exercise: proof that (I_K A)^(-1) = (I_K A)^T.

K a = K_I A . I a   or   I a = I_K A . K a,   with   I_K A = (K_I A)^(-1) = (K_I A)^T   (2.9)
Properties of rotations:

det(A) = 1

R = (I_3 - S)^(-1) (I_3 + S),   S is skew symmetric (S = -S^T):

    (  0    -s_z   s_y )
S = (  s_z   0    -s_x )
    ( -s_y   s_x   0   )

|x| = |y| = |z| = 1,   x^T y = y^T z = x^T z = 0   (the columns are orthonormal)
Exercise:
Calculate Rot X ( ) Rot Z ( ) and Rot Z ( ) Rot X ( ) and solve both compound
transformations geometrically by drawing a frame diagram.
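A numerical sketch of the exercise's compound transformations (the angle values are chosen arbitrarily); it shows that the two orderings give different results, i.e. rotations do not commute:

```python
import numpy as np

def rot_x(phi):
    """Elementary rotation about the x axis."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_z(theta):
    """Elementary rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

phi = theta = np.pi / 2
A = rot_x(phi) @ rot_z(theta)   # Rot_X(phi) Rot_Z(theta)
B = rot_z(theta) @ rot_x(phi)   # Rot_Z(theta) Rot_X(phi)

# The compound transformations differ, and both stay proper rotations
print(np.allclose(A, B))  # -> False
```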
( I a )   ( I_K A   p ) ( K a )
(  1  ) = (  0^T    1 ) (  1  )

(figure: frames K_I and K_K with axes x_K, y_K, z_K and origin offset p)
The homogeneous transform I_K D can be viewed as the representation of the vectors x_K,
y_K, z_K, p in the i-th coordinate system:

        ( I x_K   I y_K   I z_K   I p_K )
I_K D = (   0       0       0       1   )   (2.10)
Concatenation of two homogeneous transforms:

I_{I+2} D = I_{I+1} D . {I+1}_{I+2} D = ( I x_{I+2}   I y_{I+2}   I z_{I+2}   I p_{I+2} )
                                        (     0           0           0           1     )   (2.11)

(figure: frames K_i, K_{i+1}, K_{i+2} with the transforms I_{I+1} D and {I+1}_{I+2} D
and the origin offsets p)
Interpretations:
I_K D describes the frame K relative to the frame I. Specifically, the columns of the
rotation part R are unit vectors defining the principal axes of frame K, and p locates the
position of the origin of K relative to frame I.
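A small sketch of (2.10)/(2.11), building and chaining homogeneous transforms; the two frames are hypothetical:

```python
import numpy as np

def homogeneous(rot, p):
    """Build the 4x4 transform ( R p ; 0 1 ) of (2.10)."""
    D = np.eye(4)
    D[:3, :3] = rot
    D[:3, 3] = p
    return D

# Frame 1: rotated 90 degrees about z and shifted along x of frame 0
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
D01 = homogeneous(Rz, np.array([1.0, 0.0, 0.0]))
# Frame 2: pure translation along y of frame 1
D12 = homogeneous(np.eye(3), np.array([0.0, 2.0, 0.0]))

# (2.11): chaining descriptions multiplies the matrices
D02 = D01 @ D12

# Map the origin of frame 2 (homogeneous coordinates) into frame 0
p2 = np.array([0.0, 0.0, 0.0, 1.0])
print(D02 @ p2)  # -> [-1.  0.  0.  1.]
```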
2.2.
The homogeneous transform I_K D maps K P to I P:   I P = I_K D . K P.
Figure 2-2: Description of the position and orientation of the tool center point (TCP) with
respect to the base (world) frame K0 (axes x0, y0, z0, translation p)
The pose of the TCP is given by the position vector p and the orientation vectors (n, o, a):

            ( 0 n   0 o   0 a   0 p )
T = 0_N D = (  0     0     0     1  )   (2.12)
2.3.
The orientation can be described by three angles a, b, c (Position: p, Orientation: a, b, c):

                                 ( ca cb   ca sb sc - sa cc   ca sb cc + sa sc )
Rot_Z(a) . Rot_Y(b) . Rot_X(c) = ( sa cb   sa sb sc + ca cc   sa sb cc - ca sc )   (2.13)
                                 ( -sb     cb sc              cb cc            )

(abbreviations: ca = cos a, sa = sin a, etc.)
The inverse problem is of interest: given the entries of the rotation matrix

    ( r11  r12  r13 )
R = ( r21  r22  r23 )
    ( r31  r32  r33 )

calculate the angles.
a = ( sin theta cos phi,  sin theta sin phi,  cos theta )^T   (2.14)
theta = arctan2(y, x) =
  arctan(y/x),         0 <= theta <= pi/2,      for +x, +y
  pi + arctan(y/x),    pi/2 <= theta <= pi,     for -x, +y
  -pi + arctan(y/x),   -pi <= theta <= -pi/2,   for -x, -y
  arctan(y/x),         -pi/2 <= theta <= 0,     for +x, -y
(2.15)
Definition B (Z-Y-X Euler-Angles)
(figures: elementary rotations about the z, y and x axes)

           ( Cos A   -Sin A   0 )
Rot_Z(A) = ( Sin A    Cos A   0 )
           (   0        0     1 )

           (  Cos B   0   Sin B )
Rot_Y(B) = (    0     1     0   )
           ( -Sin B   0   Cos B )

           ( 1     0        0    )
Rot_X(C) = ( 0   Cos C   -Sin C )
           ( 0   Sin C    Cos C )
The compound rotation matrix (with C_A = Cos A, S_A = Sin A, etc.):

                                  ( C_A C_B   C_A S_B S_C - S_A C_C   C_A S_B C_C + S_A S_C )
A = Rot_Z(A) Rot_Y(B) Rot_X(C) =  ( S_A C_B   S_A S_B S_C + C_A C_C   S_A S_B C_C - C_A S_C )   (2.16)
                                  ( -S_B      C_B S_C                 C_B C_C               )

so that the columns n, o, a are

n = ( C_A C_B,  S_A C_B,  -S_B )^T
o = ( C_A S_B S_C - S_A C_C,  S_A S_B S_C + C_A C_C,  C_B S_C )^T
a = ( C_A S_B C_C + S_A S_C,  S_A S_B C_C - C_A S_C,  C_B C_C )^T

A = arctan2(n_y, n_x)
B = arcsin(-n_z)                  (2.17)
C = arctan2(o_z, a_z)
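Equations (2.16) and (2.17) can be verified against each other numerically (the sample angles are arbitrary; the extraction assumes cos B != 0, i.e. away from the singularity):

```python
import numpy as np

def rot_zyx(a, b, c):
    """Compound rotation Rot_Z(A) Rot_Y(B) Rot_X(C) of (2.16)."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    return np.array([
        [ca * cb, ca * sb * sc - sa * cc, ca * sb * cc + sa * sc],
        [sa * cb, sa * sb * sc + ca * cc, sa * sb * cc - ca * sc],
        [-sb, cb * sc, cb * cc],
    ])

def zyx_angles(R):
    """Invert (2.16) as in (2.17): n is the first column, o the second,
    a the third; assumes cos(B) != 0."""
    A = np.arctan2(R[1, 0], R[0, 0])
    B = np.arcsin(-R[2, 0])
    C = np.arctan2(R[2, 1], R[2, 2])
    return A, B, C

angles = (0.3, -0.5, 1.1)
R = rot_zyx(*angles)
print(zyx_angles(R))  # recovers (0.3, -0.5, 1.1)
```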
Definition: Equivalent angle-axis
Rotation about the unit vector k by an angle a according to the right hand rule:

         ( k_x k_x v_a + c_a       k_x k_y v_a - k_z s_a   k_x k_z v_a + k_y s_a )
R_k(a) = ( k_x k_y v_a + k_z s_a   k_y k_y v_a + c_a       k_y k_z v_a - k_x s_a )   (2.18)
         ( k_x k_z v_a - k_y s_a   k_y k_z v_a + k_x s_a   k_z k_z v_a + c_a     )

with v_a = 1 - cos a, c_a = cos a, s_a = sin a.
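A sketch of (2.18); rotating about the z axis by 90 degrees reproduces the elementary Rot_Z:

```python
import numpy as np

def rot_axis_angle(k, a):
    """Equivalent angle-axis rotation (2.18) about unit vector k by angle a."""
    k = np.asarray(k, dtype=float)
    k = k / np.linalg.norm(k)          # make sure k is a unit vector
    kx, ky, kz = k
    va = 1.0 - np.cos(a)               # the v_a = 1 - cos(a) term of (2.18)
    ca, sa = np.cos(a), np.sin(a)
    return np.array([
        [kx * kx * va + ca, kx * ky * va - kz * sa, kx * kz * va + ky * sa],
        [kx * ky * va + kz * sa, ky * ky * va + ca, ky * kz * va - kx * sa],
        [kx * kz * va - ky * sa, ky * kz * va + kx * sa, kz * kz * va + ca],
    ])

R = rot_axis_angle([0, 0, 1], np.pi / 2)
print(np.round(R, 3))  # the elementary 90-degree Rot_Z matrix
```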
3. Robot kinematics
Definitions:
A robot may be thought of as a set of bodies connected in a chain by joints.
These bodies are called links.
Joints form a connection between a neighboring pair of links
Properties:
Normally robots consist of joints with one degree of freedom (1 DOF):
revolute or prismatic joints.
n-DOF joints can be modeled as n joints with 1 DOF connected by n-1 links of zero
length.
To position and orient a body freely in 3-space, a minimum of six joints is required.
A typical robot consists of 6 joints.
Joint axes are defined by lines in space.
A link can be specified by two numbers which define the relative location of the two
joint axes in space: link length and link twist.
Link length: measured along the line which is mutually perpendicular to both axes.
Link twist: measured in the plane whose normal is that mutually perpendicular line.
3.1.

(figure: link i connecting axis i and axis i+1, link length a_i)
In honor of the authors of this method of attaching frames to links, these four parameters
are called the Denavit-Hartenberg parameters (DH parameters).
(figure: DH frame assignment for link i, showing axes i-1, i, i+1, links i-1, i, i+1 and the
parameters a_i, d_i, theta_i, alpha_i with the axes X_{i-1}, Z_{i-1}, X_i, Z_i)

The link transformation from frame i-1 to frame i:

          ( cos theta_i   -sin theta_i cos alpha_i    sin theta_i sin alpha_i   a_i cos theta_i )
i-1_i D = ( sin theta_i    cos theta_i cos alpha_i   -cos theta_i sin alpha_i   a_i sin theta_i )   (3.1)
          (      0              sin alpha_i                cos alpha_i                d_i       )
          (      0                  0                          0                       1        )
The homogeneous matrices i-1_i D describe each link frame relative to its predecessor.
Concatenating link transformations yields the single transformation that relates frame
N (usually the TCP frame) to frame 0 (usually the base frame):

T = 0_N D = 0_1 D . 1_2 D . ... . N-2_N-1 D . N-1_N D   (3.2)

N defines the number of joints.
In case of rotational joints the components of the DH matrices i-1_i D only depend on the
joint angles theta_i, since the parameters a_i, d_i, alpha_i are constants. For that case the
transformation

T = T(theta_1, theta_2, ..., theta_n) = T(theta)
is a function of all joint angles. The joint angles can be collected in a vector:
theta = (theta_1, theta_2, ..., theta_n)^T.
Example: 2-dof robot with parallel axes

(figure: planar robot, joint 1 at the base, link length h1 to joint 2, link length h2 to the TCP)

DH parameters:
Joint | theta_i | d_i | a_i | alpha_i/degree
1     | q1      | 0   | h1  | 0
2     | q2      | 0   | h2  | 0

        ( cos q1   -sin q1   0   h1 cos q1 )
0_1 D = ( sin q1    cos q1   0   h1 sin q1 )
        (   0         0      1       0     )
        (   0         0      0       1     )

        ( cos q2   -sin q2   0   h2 cos q2 )
1_2 D = ( sin q2    cos q2   0   h2 sin q2 )
        (   0         0      1       0     )
        (   0         0      0       1     )

               ( C1 C2 - S1 S2   -(S1 C2 + C1 S2)   0   h2 (C1 C2 - S1 S2) + h1 C1 )
0_2 D = T(q) = ( S1 C2 + C1 S2    C1 C2 - S1 S2     0   h2 (S1 C2 + C1 S2) + h1 S1 )   C_i = cos q_i
               (      0                0            1               0              )   S_i = sin q_i
               (      0                0            0               1              )
(figure: alternative frame assignment for the 2-dof robot with frames x1, y1, z1, x2, y2, z2)

DH parameters (alternative frame assignment):
Joint | q_i/degree | d_i | a_i | alpha_i/degree
1     | 180        | -h1 | 0   | 90
2     | 0          | 0   | h2  | 0

(figure: 3-dof robot with frames x0 ... x3, z0 ... z3)
Figure 3-5: 3-dof robot with 2 rotational joints and 1 prismatic joint
DH parameters:
Joint | q_i/degree | d_i | a_i | alpha_i/degree
1     | 180        | -h1 | 0   | 90
2     | 90         | 0   | 0   | 90
3     | 0          | d*  | 0   | 0

a1, a2, a3, d1, d2, theta_1, theta_2, theta_3 are constants; d* is the variable value of the
prismatic joint.
Figure 3-6: 6-dof robot with 6 rotational joints (frequently used kinematic structure)
Figure 3-7: sketch of 6-dof robot (frames x1 ... x6, axes z0 ... z6, link lengths h1, h2, h4, h6;
h3 = h5 = 0)
E:\Robot_Erw\Publications\LectureRobotics.doc
24/50
DH parameters:
Joint | q_i/degree | d_i           | a_i        | alpha_i/degree
1     | 0          | -h1 (-0.78 m) | 0          | 90
2     | -90        | 0             | h2 (0.8 m) | 0
3     | 0          | 0             | 0          | -90
4     | 0          | h4 (0.8 m)    | 0          | 90
5     | 0          | 0             | 0          | -90
6     | 0          | h6 (0.334 m)  | 0          | 0
Figure 3-9: Reis robot RV6, sketch (link lengths h11, h2, h4, h6; h3 = h5 = 0)
DH parameters RV6:
Joint | q_i/rad | d_i | a_i | alpha_i/rad
1     | 0       | 0   | h11 | -pi/2
2     | -pi/2   | 0   | h2  | 0
3     | 0       | 0   | 0   | pi/2
4     | -pi/2   | h4  | 0   | pi/2
5     | 0       | 0   | 0   | -pi/2
6     | 0       | h6  | 0   | 0

h11 = 0.28 m
h2 = 0.615 m
h4 = 0.54 m
h6 = depending on tool
3.2.
Forward kinematics
Given the joint values q of a robot with n degrees of freedom, the homogeneous matrix
defining the position and orientation of the TCP is:

       ( n(q)   o(q)   a(q)   p(q) )
T(q) = (  0      0      0      1   ) = 0_1 D . 1_2 D . ... . N-2_N-1 D . N-1_N D
The vectors p, n, o, a are described in the base (world) frame. This mapping from robot
coordinates to world coordinates is unique.
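The forward kinematics (3.1)/(3.2) can be sketched in a few lines of NumPy; the planar 2R parameters at the end are illustrative, not one of the specific robots tabulated above:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Denavit-Hartenberg link transform (3.1)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca, st * sa, a * ct],
        [st, ct * ca, -ct * sa, a * st],
        [0.0, sa, ca, d],
        [0.0, 0.0, 0.0, 1.0],
    ])

def forward_kinematics(q, dh_params):
    """Chain the link matrices as in (3.2).

    dh_params is a list of (d, a, alpha) constants, one per rotational joint;
    q holds the joint angles theta_i.
    """
    T = np.eye(4)
    for qi, (d, a, alpha) in zip(q, dh_params):
        T = T @ dh_matrix(qi, d, a, alpha)
    return T

# Planar 2R example with link lengths h1, h2 (all alpha = 0, d = 0)
h1, h2 = 0.5, 0.3
params = [(0.0, h1, 0.0), (0.0, h2, 0.0)]
T = forward_kinematics([np.pi / 2, -np.pi / 2], params)
print(np.round(T[:3, 3], 3))  # TCP position -> [0.3 0.5 0. ]
```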
Figure 3-1: forward kinematics: mapping T(q) from robot coordinates (joint values q) to
world coordinates (n, o, a, p)
3.3.
Inverse kinematics
Given the position and orientation of the TCP with respect to the base frame, calculate
the joint coordinates of the robot which correspond to it.

Figure 3-1: inverse kinematics: mapping T(q)^(-1) from world coordinates (n, o, a, p) to
robot coordinates
In the equation

( 0 n   0 o   0 a   0 p )
(  0     0     0     1  ) = T(q)

the left side is given.
Problem:
Given T(q) as 16 numerical values (four of which are trivial), solve (2.24) for the 6 joint
angles q1, ..., q6.
Among the 9 equations arising from the rotation matrix portion of T, only three are
independent. Together with the 3 equations from the position vector portion of T, this
gives 6 equations in 6 unknowns. These equations are nonlinear, transcendental
equations which can be quite difficult to solve.
Problems related to inverse kinematics:
Existence of solutions
Multiple solutions
Method of solution
Existence of solutions
The existence of solutions is closely linked to the workspace of the robot which is the
volume of space which the TCP of the robot can reach. For the solution to exist, the
specified goal point must be in the workspace.
Dexterous workspace: volume of space which the robot's TCP can reach with all
orientations.
Reachable workspace: volume of space which the robot's TCP can reach with at least one
orientation.
Examples (planar arm with link lengths l1 and l2):
l1 = l2: the dexterous workspace is the origin; the reachable workspace is a disc of
radius l1 + l2.
l1 != l2: the dexterous workspace is empty; the reachable workspace is a ring of outer
radius l1 + l2 and inner radius |l1 - l2|.
Since 6 parameters are necessary to describe an arbitrary position and orientation of
the TCP, the robot must have at least 6 degrees of freedom in order for a dexterous
workspace to exist at all.
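The ring-shaped reachable workspace of the planar 2-link example reduces to a simple membership test (the link lengths below are arbitrary):

```python
import numpy as np

def reachable(p, l1, l2):
    """A planar 2-link arm reaches point p iff |l1 - l2| <= |p| <= l1 + l2,
    i.e. p lies inside the ring described in the text."""
    r = np.linalg.norm(p)
    return abs(l1 - l2) <= r <= l1 + l2

l1, l2 = 1.0, 0.5
print(reachable([1.2, 0.0], l1, l2))  # -> True  (inside the ring)
print(reachable([0.2, 0.0], l1, l2))  # -> False (inside the inner circle)
print(reachable([2.0, 0.0], l1, l2))  # -> False (beyond full stretch)
```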
Multiple solutions:

Figure 3-3: example for infinite solutions (solution 1 and solution 2 for the wrist angles
q4, q5, q6 with q5* = 0)
Suppose, in the example of Figure 3-3, that the given vectors p and n are realized by the
joint angles q1*, q2*, q3*, q5* = 0. The orientation can then be realized by an infinite
number of pairs q4 and q6 (normally by q4 + q6 = 0). In practical implementations of the
inverse kinematics an additional condition will select one solution.
Geometrical solution:

Law of cosines:   a^2 = b^2 + c^2 - 2 b c cos alpha
(figure: planar 2-link arm with link lengths h1, h2, joint angles q1, q2 and TCP position p,
with p = |p|)

alpha = arctan2( p_y^(0), p_x^(0) )

The cosine rule gives

beta = arccos( (h1^2 + p^2 - h2^2) / (2 h1 p) )

p^2 = h1^2 + h2^2 - 2 h1 h2 cos gamma,
gamma = arccos( (h1^2 + h2^2 - p^2) / (2 h1 h2) )

Therefore q2 = +/-(pi - gamma) and q1 = alpha -/+ beta.
The arccos function is not unique. For the vector p the solution indicated by the dotted line
is given by the positive sign of beta and gamma, and the straight-line solution by the
negative sign. Obviously the solution is not unique.
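The geometric solution above can be sketched for a planar 2-link arm (the function name and the elbow-selection convention are assumptions, not from the handout); the forward-kinematics check at the end confirms the result:

```python
import numpy as np

def ik_2link(px, py, h1, h2, elbow=+1):
    """Inverse kinematics of a planar 2-link arm via the law of cosines.

    elbow = +1 / -1 selects one of the two solutions (arccos ambiguity).
    """
    p2 = px**2 + py**2
    # p^2 = h1^2 + h2^2 - 2 h1 h2 cos(gamma), and q2 = pi - gamma
    cos_g = (h1**2 + h2**2 - p2) / (2.0 * h1 * h2)
    gamma = np.arccos(np.clip(cos_g, -1.0, 1.0))
    q2 = elbow * (np.pi - gamma)
    # beta from the cosine rule, alpha from arctan2
    beta = np.arccos(np.clip((h1**2 + p2 - h2**2) / (2.0 * h1 * np.sqrt(p2)),
                             -1.0, 1.0))
    q1 = np.arctan2(py, px) - elbow * beta
    return q1, q2

h1, h2 = 0.5, 0.3
q1, q2 = ik_2link(0.3, 0.5, h1, h2)
# forward check: the joint angles reproduce the requested TCP position
x = h1 * np.cos(q1) + h2 * np.cos(q1 + q2)
y = h1 * np.sin(q1) + h2 * np.sin(q1 + q2)
print(round(x, 6), round(y, 6))  # -> 0.3 0.5
```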
Solution for a typical 6 DOF robot with intersecting wrist axes
The general solution simplifies if the robot has a wrist with intersecting axes (axes 4-6).
A partial decoupling of position and orientation is possible.
For this case the wrist intersection point p3 can be calculated by

p_3 = p - d_6 a

The position of p3 only depends on the first 3 joint values q1, q2 and q3. These 3 joint
values are calculated from p3.
(figure: links l1, l2, joint angle q3 and the distance R3 to the wrist point p3)

R_3 = p_3 + (0, 0, d_1)^T,   |R_3| = sqrt( p_3x^2 + p_3y^2 + (p_3z + d_1)^2 )

|R_3|^2 = l_1^2 + l_2^2 - 2 l_1 l_2 cos phi,
phi = arccos( (l_1^2 + l_2^2 - |R_3|^2) / (2 l_1 l_2) )

therefore q_3 is known: q_3 = +/-(pi - phi)
Calculation of q2:

(figure: angles eps_1 and eps_2 between link l1 and the vector R3)

sin eps_1 = (p_3z + d_1) / |R_3|,   eps_1 = arcsin( (p_3z + d_1) / |R_3| )

eps_2 = arccos( (l_1^2 + |R_3|^2 - l_2^2) / (2 l_1 |R_3|) )

therefore q_2 is known: q_2 = -(eps_1 + eps_2)
Calculation of q1:

q_1 = arctan2( p_3y, p_3x )

Calculation of q5:

q_5 = arccos( z_3 . a )
Calculation of q4:
T = 0_1 D . 1_2 D . 2_3 D . 3_4 D . 4_5 D . 5_6 D = 0_3 D . 3_6 D

3_6 D = 3_4 D . 4_5 D . 5_6 D = (0_3 D)^(-1) . T = (0_1 D . 1_2 D . 2_3 D)^(-1) . T
The right side of the last equation is known, since T is given and the Denavit-Hartenberg
matrices on the right side only depend on q1, q2, q3, which are now available. The left
side depends only on q4, q5 and q6. The modulus of q5 is also available. It is possible to
eliminate q6 and to calculate q4.
Calculation of q6:

Figure 3-9: Calculation of q6 (vectors o, a and the y-axis of frame 5)

q_6 = arccos( 0 y_5 . o )

        ( 0 x_5   0 y_5   0 z_5   0 p_5 )
0_5 D = (   0       0       0       1   )
4. Trajectory Generation
Scope: methods of computing a trajectory in multidimensional space
Definition: trajectory refers to a time history of position, velocity and acceleration for each
degree of freedom.
Issues:
Specification of trajectories
Motion description easy for robot operators
start and end point
geometrical properties
Representation of trajectories in the robot control
Computing trajectories on-line (at run time)
General considerations
Basic problem: move the robot from the start position, given by the tool frame Tinitial to the
end position given by the tool frame Tfinal.
Spatial motion constraints:
Specification of motion might include so-called via points.
Via points: intermediate points between start and end points.
Temporal motion constraints:
Specification of motion might include elapsed time between via points.
Requirements:
Execution of smooth motions.
Smooth (motion) function: the function and its first derivative are continuous.
Jerky motions increase wear on the mechanism (gears) and cause vibrations by
exciting resonances of the robot.
Joint space schemes: path shapes in space and time are described in terms of
functions of joint angles. This motion type is named Point-to-Point motion (PTP).
Cartesian space schemes: path shapes in space and time are described in terms of
functions of cartesian coordinates. This motion type is named Continuous Path motion
(CP).
4.1.
Path shape (in space and time) described in terms of functions of joint angles
Description of path points (via points plus start and end point) in terms of tool frames
Each path point is converted into joint angles by application of inverse kinematics
Identifying a smooth function for each of the n joints which passes through the via points
and ends at the target point
Synchronisation of motion (each joint starts and ends at the same time)
Joint which travels the longest distance defines the travel time (assuming same
maximal acceleration for each joint)
In between via points the shape of the path is complex if described in cartesian space
Joint space trajectory generation schemes are easy to compute
Each joint motion is calculated independently from the other joints
Figure 4-1: PTP motion between two points (A: start, B: target)
Many, actually infinitely many, smooth functions exist for such motions.
Four constraints on the (single) joint function q(t) are evident:
Positions: q(0) = q_0,  q(t_end) = q_f
Velocities: qdot(0) = 0,  qdot(t_end) = 0
Choosing the velocities:
by the robot user, or
automatically by the robot control
Using these types of polynomials does not in general generate time-optimal motions.
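One common choice satisfying the four constraints is a cubic polynomial; a sketch with illustrative numbers (degrees and seconds are assumed units):

```python
def cubic_trajectory(q0, qf, tf):
    """Cubic joint trajectory with q(0)=q0, q(tf)=qf, qdot(0)=0, qdot(tf)=0.

    Solving the four constraints for q(t) = a0 + a1 t + a2 t^2 + a3 t^3
    gives a1 = 0 and the coefficients below.
    """
    a0 = q0
    a2 = 3.0 * (qf - q0) / tf**2
    a3 = -2.0 * (qf - q0) / tf**3

    def q(t):
        return a0 + a2 * t**2 + a3 * t**3

    def qdot(t):
        return 2.0 * a2 * t + 3.0 * a3 * t**2

    return q, qdot

q, qdot = cubic_trajectory(q0=10.0, qf=50.0, tf=2.0)
print(q(0.0), q(2.0))        # -> 10.0 50.0
print(qdot(0.0), qdot(2.0))  # -> 0.0 0.0
```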
(figure: trapezoidal profile, showing acceleration qddot(t) [degree/s^2], velocity qdot(t)
[degree/s] and position q(t) [degree] over t0, tb, th, tf - tb, tf)
The velocity at the end of the blend is equal to the velocity during the linear section:

qddot t_b = (q_h - q_b) / (t_h - t_b)

and q_b = q_0 + (1/2) qddot t_b^2 leads to

qddot t_b^2 - qddot t_f t_b + (q_f - q_0) = 0

Free parameters are qddot and t_b. Usually the acceleration qddot is chosen and the
equation is solved for t_b.
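Solving that quadratic for t_b, taking the smaller root so that the blend fits twice into the travel time (the numbers below are illustrative):

```python
import math

def blend_time(qddot, tf, q0, qf):
    """Solve qddot*tb^2 - qddot*tf*tb + (qf - q0) = 0 for the blend time tb.

    The smaller root is taken so that 2*tb <= tf (trapezoidal profile);
    the acceleration qddot is chosen by the user beforehand.
    """
    disc = (qddot * tf) ** 2 - 4.0 * qddot * (qf - q0)
    if disc < 0.0:
        raise ValueError("acceleration too small to reach qf in time tf")
    return (qddot * tf - math.sqrt(disc)) / (2.0 * qddot)

# Hypothetical numbers: 50 degree move in 2 s with 60 degree/s^2 acceleration
tb = blend_time(qddot=60.0, tf=2.0, q0=0.0, qf=50.0)
print(round(tb, 4))  # -> 0.5918
```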
Sinusoidal acceleration profile:

qddot(t) = qddot_max sin^2( pi t / t_b )   (4.1)

(figure: sinusoidal profile, showing qddot(t), qdot(t) and q(t) over t_b, t_v, t_E)

With qdot_max = qddot_max t_b / 2:

t_b = 2 qdot_max / qddot_max,   t_v = t_f - t_b

qdot_max = t_f qddot_max / 4 - sqrt( t_f^2 qddot_max^2 / 16 - q_f qddot_max / 2 ),
t_b = t_f - q_f / qdot_max   (4.2)

The profile is realizable provided

t_f^2 qddot_max >= 8 q_f   (4.3)
Calculation of q(t):

0 <= t <= t_b:
q(t) = q_0 + qddot_max [ t^2/4 + (t_b^2 / (8 pi^2)) (cos(2 pi t / t_b) - 1) ]

t_b <= t <= t_v:
q(t) = q_0 + qddot_max (t_b / 2) (t - t_b / 2)

t_v <= t <= t_f:
q(t) = q_f - qddot_max [ (t_f - t)^2/4 + (t_b^2 / (8 pi^2)) (cos(2 pi (t - t_v) / t_b) - 1) ]

(4.4)
4.1.1.
Asynchronous PTP
For each joint the values q_0, q_f, qdot_max, qddot_max are given. The travel time t_i for
each joint is calculated independently; each joint finishes at its own time.
4.1.2.
Synchronous PTP
For each joint the values q_0, q_f, qdot_max, qddot_max are given. The travel time t_f,i for
each joint is calculated.
Calculate the maximum travel time:

t_f,max = max_{i=1,..,dof} { t_f,i }

Setting t_f,i = t_f,max for every joint and solving for the reduced maximum velocity:

2 qdot_max,i^2 - qddot_max,i t_f,max qdot_max,i + q_f,i qddot_max,i = 0

and therefore

qdot_max,i = ( qddot_max,i t_f,max - sqrt( qddot_max,i^2 t_f,max^2 - 8 q_f,i qddot_max,i ) ) / 4   (4.5)

The negative sign of the square root has to be chosen in order to fulfill 2 t_b < t_f,max.
4.2.
The robot user would like to control the motion between start and end point. There are
several possibilities.
Issues:
interpolation of the TCP position (linear change of coordinates)
interpolation of the orientation (a linear change of the matrix elements would fail)
4.2.1.
Linear Interpolation
start point:  x_A = ( p_x,A, p_y,A, p_z,A, alpha_A, beta_A, gamma_A )^T = ( p_A, Phi_A )
target point: x_B = ( p_x,B, p_y,B, p_z,B, alpha_B, beta_B, gamma_B )^T = ( p_B, Phi_B )
Figure 4-1: linear interpolation for the positioning of the TCP in the world frame K0
(path p(t) from p_A to p_B, path parameter s_P(t), total length s_fP)
p(t) = p_A + ( s_P(t) / s_fP ) (p_B - p_A)   (4.6)
E:\Robot_Erw\Publications\LectureRobotics.doc
39/50
s_fP = |p_B - p_A| = sqrt( (p_x,B - p_x,A)^2 + (p_y,B - p_y,A)^2 + (p_z,B - p_z,A)^2 )   (4.7)

The motion starts at time t = 0 and ends in the target at time t = t_fP, with
sdot_P(0) = sdot_P(t_fP) = 0.
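A sketch of (4.6)/(4.7); the velocity profile producing s_P(t) is left out, the function below only maps a path parameter value to a TCP position:

```python
import numpy as np

def linear_path(pA, pB):
    """Straight-line TCP path of (4.6): p(s) = pA + s/sf * (pB - pA).

    s is the path parameter produced by some velocity profile with
    s(0) = 0 and s(tf) = sf, the total path length of (4.7).
    """
    pA, pB = np.asarray(pA, float), np.asarray(pB, float)
    sf = np.linalg.norm(pB - pA)          # (4.7)

    def p(s):
        return pA + (s / sf) * (pB - pA)

    return p, sf

p, sf = linear_path([0.0, 0.0, 0.0], [3.0, 4.0, 0.0])
print(sf)       # -> 5.0
print(p(2.5))   # halfway along the path -> [1.5 2.  0. ]
```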
Orientation:

Phi(t) = Phi_A + ( s_Phi(t) / s_fPhi ) (Phi_B - Phi_A)

s_fPhi = |Phi_B - Phi_A| = sqrt( (alpha_B - alpha_A)^2 + (beta_B - beta_A)^2 + (gamma_B - gamma_A)^2 )   (4.8)

sdot_Phi(0) = sdot_Phi(t_fPhi) = 0
For the positioning of the TCP, velocity profiles like in the PTP case are used:
Trapezoidal profile
Sinusoidal profile
The robot user sets speed and acceleration for position (v_P, b_P) and orientation
(v_Phi, b_Phi).
The duration of the motion (in case of sinusoidal profile) is:

t_fP = s_fP / v_P + v_P / b_P   and   t_fPhi = s_fPhi / v_Phi + v_Phi / b_Phi   (4.9)
Synchronisation of motion:

t_f = max( t_fP, t_fPhi )
Case 1: t_f = t_fP (the position motion takes longer), so the orientation speed is reduced:

v_Phi = b_Phi t_f / 2 - sqrt( b_Phi^2 t_f^2 / 4 - s_fPhi b_Phi ),   t_bPhi = v_Phi / b_Phi   (4.10)

Case 2: t_f = t_fPhi, so the position speed is reduced:

v_P = b_P t_f / 2 - sqrt( b_P^2 t_f^2 / 4 - s_fP b_P ),   t_bP = v_P / b_P   (4.11)
Reference to the PTP motion: the profile formulas apply with the substitutions

q(t) -> s_P(t) or s_Phi(t)
qdot_max -> v_P or v_Phi
qddot_max -> b_P or b_Phi
t_b, t_V -> t_bP, t_VP or t_bPhi, t_VPhi
The following scheme presents the algorithm for straight line motions:

(flow chart: given p_A, p_B, Phi_A, Phi_B and v_P, b_P, v_Phi, b_Phi:
calculate s_fP, t_bP, t_fP and s_fPhi, t_bPhi, t_fPhi;
if t_fPhi < 2 t_bPhi: correct v_Phi, t_bPhi, t_fPhi;
if t_fP < 2 t_bP: correct v_P, t_bP, t_fP;
if t_fP > t_fPhi: set t_f = t_fP and correct v_Phi, t_bPhi,
else: set t_f = t_fPhi and correct v_P, t_bP;
calculate s_P(t), s_Phi(t), then p(t), Phi(t);
transform from x(t) = (p(t), Phi(t)) to q(t) via the inverse kinematics)
4.2.2.
Circular interpolation
Motion of the TCP along a circular segment from point P1 via P2 to point P3.
P2 is sometimes called auxiliary point.
The points P1,P2,P3 and the velocity vC and acceleration bC are defined by the robot user.
The normal of the circle plane:

n_K = (P2 - P1) x (P3 - P2)

Planes are described by E_i = { X | x . n_i = d_i,  x := OX }:

n_1 := (P2 - P1),   d_1 := (1/2) (P1 + P2) . n_1
n_2 := (P3 - P2),   d_2 := (1/2) (P2 + P3) . n_2

The vector m of the center point M is the solution of the linear system

( n_1^T )       ( d_1 )
( n_2^T ) m  =  ( d_2 )
( n_K^T )       ( d_K )

with d_K := n_K . P1 (the plane containing the three points).
Introduction of the path parameter s_C(t) = r phi(t), where phi(t) is the opening angle up
to the actual position. s_CE = r phi_G denotes the total path length.
The further procedure is analogous to linear interpolation.
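The linear system for the center point M can be solved directly, e.g. with NumPy (the three sample points are hypothetical and chosen so the result is easy to check by eye, deliberately not those of the exercise below):

```python
import numpy as np

def circle_center(P1, P2, P3):
    """Center M and radius r of the circle through P1, P2, P3, from the
    3x3 linear system built from the two bisector planes and the plane
    containing the three points."""
    P1, P2, P3 = (np.asarray(P, float) for P in (P1, P2, P3))
    n1 = P2 - P1
    d1 = 0.5 * np.dot(P1 + P2, n1)     # bisector plane of P1, P2
    n2 = P3 - P2
    d2 = 0.5 * np.dot(P2 + P3, n2)     # bisector plane of P2, P3
    nK = np.cross(P2 - P1, P3 - P2)    # normal of the circle plane
    dK = np.dot(nK, P1)                # plane through the three points
    M = np.linalg.solve(np.vstack([n1, n2, nK]), np.array([d1, d2, dK]))
    return M, np.linalg.norm(P1 - M)

# Three points on the unit circle in the z = 0 plane
M, r = circle_center([1, 0, 0], [0, 1, 0], [-1, 0, 0])
print(M, r)  # -> [0. 0. 0.] 1.0
```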
Exercise:
Given the points P1, P2, P3 defining a circular segment, with
P1 = [0 1 1]^T; P2 = [1 1 2]^T; P3 = [2 1 1]^T. Calculate the center point M and the radius r.
(figure: circular segment through P1, P2, P3 with center M, radius r, path parameter
s_C(t), chords p12, p13, p23 and local frame x_C, y_C in the world frame K0)
4.2.3.
Corner smoothing
The above-mentioned interpolation methods let the motion stop at the end point of each
segment. Corner smoothing lets the robot move from one segment to the next without
reducing speed.
E:\Robot_Erw\Publications\LectureRobotics.doc
44/50
4.2.4.
Bezier curves are useful to design trajectories with certain roundness properties
Defined by a series of control points with associated weighting factors
Weighting factors pull the curve towards the control points
Bezier curves are based on Bernstein polynomials (we consider here cubic
polynomials, n=3)
B_k^n(u) := C(n, k) (1 - u)^(n-k) u^k

The Bernstein polynomials sum to one (partition of unity):

1 = ( (1 - u) + u )^n = sum_{k=0}^{n} C(n, k) (1 - u)^(n-k) u^k
(figure: Bernstein polynomials B_i^2(u) (a) and B_i^3(u) (b))
X(u) = sum_{i=0}^{n} b_i B_i^n(u)

b_i denotes the Bezier points or control points. The polygon connecting the Bezier points
is called the Bezier polygon or control polygon.
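A sketch of the Bernstein/Bezier evaluation above for the cubic case, using the control points of the exercise below:

```python
import math

def bernstein(n, k, u):
    """Bernstein polynomial B_k^n(u) = C(n,k) (1-u)^(n-k) u^k."""
    return math.comb(n, k) * (1.0 - u) ** (n - k) * u ** k

def bezier(control_points, u):
    """Evaluate X(u) = sum_i b_i B_i^n(u) for 2-D control points b_i."""
    n = len(control_points) - 1
    x = sum(b[0] * bernstein(n, i, u) for i, b in enumerate(control_points))
    y = sum(b[1] * bernstein(n, i, u) for i, b in enumerate(control_points))
    return x, y

# Cubic case (n = 3): the curve starts in b0 and ends in b3
b = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0), (0.0, 2.0)]
print(bezier(b, 0.0))  # -> (0.0, 0.0)
print(bezier(b, 1.0))  # -> (0.0, 2.0)
print(bezier(b, 0.5))  # -> (0.75, 1.0), pulled towards the inner points
```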
(figure: two Bezier curves with control points b0 ... b5 and their control polygons)
Exercise:
Given the (two-dimensional) control points B0, B1, B2, and B3 for a cubic Bezier curve by
B0 =(0,0)T , B1 =(1,0)T , B2 =(1,2)T , B3 =(0,2)T
Bernstein polynomials can also be defined over an arbitrary interval [a, b] (substituting u
by a + (b - a) u):

B_k^n(u) = C(n, k) (b - u)^(n-k) (u - a)^k / (b - a)^n
Further examples:

(figure: cubic Bezier curves for different orderings of the control points B0 ... B3,
cases (a) and (b) and two crossed variants)
Problems:
The path length cannot be calculated in closed form (unlike for linear or circular segments).
A numerical approximation is possible.
(figure: interpolation of orientation, showing the tool frame unit vectors e_x, e_y, e_z at
several positions along the path)
5.1.
Teach In programming
Explicit robot languages
Task level programming languages
Teach In programming
The robot is moved by the human user through interaction with a teach pendant
(sometimes called teach box). Teach pendants are hand-held button boxes which allow
control of each robot joint or of each cartesian degree of freedom. Today's controllers
allow alphanumeric input, testing and branching, so that simple programs involving logic
can be entered.
Explicit robot languages
With the arrival of inexpensive and powerful computers, the trend has been increasingly
toward programming robots via programs written in computer programming languages.
Usually these computer programming languages have special features which apply to the
problems of programming robots. An international standard has been established with the
programming language IRL (Industrial Robot Language, DIN 66312).
Task level programming
The third level of robot programming methodology is embodied in task-level programming
languages. These are languages which allow the user to command desired subgoals of
the task directly, rather than to specify the details of every action the robot is to take. In
such a system, the user is able to include instructions in the application program at a
significantly higher level than in an explicit programming language. A task-level
programming system must have the ability to perform many planning tasks automatically.
For example, if an instruction to grasp the bolt is issued, the system must plan a path of
the manipulator which avoids collision with any surrounding obstacles, automatically
choose a good grasp location on the bolt, and grasp it. In contrast, in an explicit robot
language, all these choices must be made by the programmer.
The border between explicit robot programming languages and task-level programming
languages is quite distinct. Incremental advances are being made to explicit robot
programming languages which help to ease programming, but these enhancements
cannot be counted as components of a task-level programming system. True task-level
programming of robots does not exist yet in industrial controllers but is an active topic of
research.
E:\Robot_Erw\Publications\LectureRobotics.doc
49/50
5.2.
World modeling
Motion specification
Flow of execution
Sensor integration
World modeling:
Motion execution:
Flow of execution:
Sensor integration: