
**Introduction to Robotics Module: Trajectory generation and robot programming**

(summer term 2000)

Recommended literature

Text Books

• Craig, J.J.: Introduction to Robotics. Addison-Wesley, 2nd ed., 1989 (3rd edition will be available 2000?)
• Craig, J.: Adaptive Control of Mechanical Manipulators. Addison-Wesley, Reading (Mass.), 1988
• Paul, R.P.: Robot Manipulators. MIT Press, 1981

Journals

• Transactions on Robotics and Automation, IEEE
• IEEE Conference on Robotics and Automation

Acknowledgement

This handout has been partially derived from the lecture notes of Wolfgang Weber, who teaches robotics at the Fachhochschule Darmstadt, Germany, and from the above-mentioned textbooks by J.J. Craig and R.P. Paul.

Revisions

Revision  Date       Author      History
0.9       12-Mar-99  Th. Horsch  1st draft
1.0       20.2.2000  Th. Horsch  1st release

E:\Robot_Erw\Publications\LectureRobotics.doc


Introduction to Robotics: Module Trajectory generation and robot programming FH Darmstadt, summer term 2000

CONTENT

1. MOTIVATION
   1.1. WHY STUDY ROBOTICS
   1.2. TERMINOLOGY
   1.3. TYPES OF ROBOT MOTION
2. SPATIAL DESCRIPTIONS AND TRANSFORMATIONS
   2.1. DESCRIPTIONS: POSITIONS, ORIENTATIONS AND FRAMES
   2.2. CONSIDERATIONS
   2.3. ORIENTATION DEFINED BY EULER ANGLES
3. ROBOT KINEMATICS
   3.1. LINK CONNECTION DESCRIPTION BY DENAVIT-HARTENBERG NOTATION
   3.2. FORWARD KINEMATICS
   3.3. INVERSE KINEMATICS
4. TRAJECTORY GENERATION
   4.1. TRAJECTORY GENERATION IN JOINT SPACE (PTP MOTIONS)
   4.2. ASYNCHRONOUS PTP AND SYNCHRONOUS PTP
   4.3. TRAJECTORY GENERATION IN CARTESIAN SPACE
        4.3.1. Linear interpolation
        4.3.2. Circular interpolation
        4.3.3. Corner smoothing
   4.4. SPLINE INTERPOLATION BASED ON BÉZIER CURVES
5. ROBOT PROGRAMMING LANGUAGES
   5.1. REQUIREMENTS OF A ROBOT PROGRAMMING LANGUAGE


1. Motivation

1.1. Why study robotics

Robots are mainly used for:

• Automation of industrial tasks
  • Spot welding
  • Arc welding
  • Machine transfer and machine tending
  • Pick and place
  • Gluing and sealing
  • Deburring
  • Assembly/disassembly
• Testbeds for Artificial Intelligence (AI)

Robotics is an interdisciplinary subject requiring

• Mechanical Engineering (addressed in the module Components of a robot arm)
• Electrical/Electronic Engineering (addressed in the module Control of a robot arm)
• Computer Science/Mathematics (addressed in this module)

It is worth noting that many problems in computer graphics animation are related to robotics problems (much of the material in this course is relevant to animation).

1.2. Terminology

We begin with some robot terminology. Exactly what constitutes a robot is sometimes debated. Numerically controlled (milling) machines are usually not referred to as robots. The distinction lies somewhere in the sophistication of the programmability of the device: if a mechanical device can be programmed to perform a wide variety of applications, it is probably an industrial robot. Machines which are for the most part relegated to one class of task are considered fixed automation.

Figure: an industrial robot arm with its links, its joints (including a prismatic joint) and its end effector


Industrial robots consist of (nearly) rigid links which are connected by joints that allow relative motion of neighboring links. These joints are usually instrumented with position sensors which allow the relative position of neighboring links to be measured. In the case of rotary or revolute joints, these displacements are called joint angles. Some robots contain sliding, or prismatic, joints. The number of degrees of freedom that a robot possesses is the number of independent position variables which would have to be specified in order to locate all parts of the mechanism. Usually a robot is an open kinematic chain. This implies that each joint is usually described by a single variable and that the number of joints equals the number of degrees of freedom. At the free end of the chain of links which make up the robot is the end effector. Depending on the intended application of the robot, the end effector may be a gripper, a welding torch or any other device. We generally describe the position of the manipulator by giving a description of the tool frame, sometimes called the TCP frame (TCP = Tool Center Point), which is attached to the end effector, relative to the base frame, which is attached to the nonmoving base of the manipulator. Frames (coordinate systems) are used to describe the position and orientation of a rigid body in space. They serve as a reference system within which to express the position and orientation of a body.

Exercises with EASY-ROB:

• a 6R robot with intersecting wrist axes (->load ->robot ->rv6.rob in EASY-ROB)
• a 2 degree of freedom (DOF) robot with rotational joints (2R planar robot) (->load ->robot ->dh2.rob in EASY-ROB)
• a 7R robot with rotational joints (->load ->robot ->dh_manth7.rob in EASY-ROB)

You can explore more examples with EASY-ROB. Robots from different robot vendors are available: Fanuc, ABB, KUKA, Reis (->load ->extern in EASY-ROB).

Kinematics is the science of motion which treats motion without regard to the forces which cause it. Within the science of kinematics one studies the position, velocity, acceleration, and all higher-order derivatives of the position variables. Hence, the study of the kinematics of robots refers to all geometrical and time-based properties of the motion.

A very basic problem to be solved is how to relate the robot's configuration, or pose, to the position and orientation of its end effector. A configuration of an n-degree-of-freedom robot is an n-vector (q1, q2, ..., qn), where each qi is either a rotational joint angle or a prismatic joint translation. Computing the pose of the end effector from a given configuration is known as the forward kinematics of the robot. This is the static geometrical problem of computing the position and orientation of the end effector of the robot: specifically, given a set of joint angles, the forward kinematic problem is to compute the position and orientation of the TCP relative to the base frame. Sometimes we think of this as changing the representation of robot position from a joint space description into a cartesian space description.

The following problem is considered the inverse kinematics: given the position and orientation of the end effector of the robot, calculate all possible sets of joint angles which


could be used to attain this given position and orientation. The inverse kinematics is not as simple as the forward kinematics: because the kinematic equations are nonlinear, their solution is not always easy, or even possible, in closed form. The existence of a kinematic solution defines the workspace of a given robot. The lack of a solution means that the robot cannot attain the desired position and orientation because it lies outside the robot's workspace.

In addition to dealing with static positioning problems, we may wish to analyse robots in motion. Often, in performing velocity analysis of a mechanism, it is convenient to define a matrix quantity called the jacobian of the robot. The jacobian specifies a mapping from velocities in joint space to velocities in cartesian space. The nature of this mapping changes as the configuration of the robot varies. At certain points, called singularities, this mapping is not invertible. An understanding of this phenomenon is important to designers and users of robots.

A common way of causing a robot to move from A to B in a smooth, controlled fashion is to cause each joint to move as specified by a smooth function of time. Commonly, each joint starts and ends its motion at the same time, so that the robot motion appears coordinated. Exactly how to compute these motion functions is the problem of trajectory generation.
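The idea of coordinated joint motion can be sketched in a few lines. The cubic blend profile and the function names below are illustrative assumptions, not the handout's method (trajectory generation proper is treated in chapter 4); the key point is only that all joints share one duration T:

```python
# Sketch: synchronized PTP motion using cubic time scaling.
# All joints share one duration T, so the motion appears coordinated.
# The cubic profile and the names are illustrative choices.

def cubic_profile(q0, q1, T, t):
    """Joint position at time t for a cubic with zero start/end velocity."""
    s = t / T
    blend = 3 * s**2 - 2 * s**3          # smooth 0 -> 1
    return q0 + (q1 - q0) * blend

def synchronized_ptp(q_start, q_goal, T, t):
    """Evaluate all joints at time t; every joint starts and ends together."""
    return [cubic_profile(a, b, T, t) for a, b in zip(q_start, q_goal)]

# halfway in time -> halfway in each joint (the cubic is symmetric)
q_mid = synchronized_ptp([0.0, 1.0], [2.0, 3.0], T=4.0, t=2.0)
```

Because every joint uses the same time scaling, the slower joints stretch their motion to match the duration of the longest one.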

A robot programming language serves as an interface between the human user and the industrial robot. Central questions arise, such as: How are motions through space described easily by a programmer? How are multiple robots programmed so that they can work in parallel? How are sensor-based actions described in a language? The sophistication of the user interface is becoming extremely important as robots and other programmable devices are applied to more and more demanding industrial applications.

An off-line programming system is a robot programming environment which has been sufficiently extended, generally by means of computer graphics, so that the development of robot programs can take place without access to the robot itself. A common argument raised in its favor is that an off-line programming system will not cause production equipment (i.e. the robot) to be tied up when it needs to be reprogrammed; hence, automated factories can stay in production mode a greater percentage of the time.

A robot consists of three main systems:

• Programming system:

A robot user needs to "teach" the robot the specific task which is to be performed. This can be done with the so-called teach-in method: the programmer uses the teach box of the robot control, drives the robot to the desired positions and stores them together with other values like travel speed, corner smoothing parameters, process parameters, etc. During the programming period, which can take a significant time for complex tasks, production is idle. Another possibility is the use of the off-line programming systems mentioned before.

• Robot control:


The robot control interprets the robot's application program and generates a series of joint values and joint velocities (and sometimes accelerations) in an appropriate way for the feedback control. The following figure shows a possible control structure.

Figure 1-1: structure of a robot control (sensor signals; external control with coordinate transformation; internal control (joint control); desired world coordinates, desired joint values, measured joint values)

Today mainly an independent joint control approach is taken, but more and more sophisticated control approaches, like adaptive control, nonlinear control, etc., are becoming involved in robotics. Nowadays AC motors are used to drive the axes. Resolvers serve as position sensors and are normally located on the shaft of the actuator. Resolvers are devices which output two analog signals: one is the sine of the shaft angle and the other the cosine. The shaft angle is determined from the relative magnitude of the two signals.

• Robot mechanics:

The robot mechanics transform the joint torques applied by the servo drives into an appropriate motion.

1.3. Types of robot motion

PTP (Point to Point) motion

A PTP motion specifies the configuration of the robot only at the start and the end point. The motion between these points is not determined in the sense that the TCP follows a desired (cartesian) path. Since this motion is very hard to predict in cartesian space, the programmer has to be careful regarding collisions between the robot and its environment. On the other hand, this motion is easy (and fast) to calculate, because no inverse kinematics needs to be computed. This motion type is mainly relevant for pick-and-place tasks, where the position and orientation along the path are not important. Figure 1-1 describes two examples:
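The resolver signal processing described above can be sketched as follows; the function name and the signal amplitude are illustrative assumptions. The point is that the common amplitude cancels, so only the relative magnitude (and signs) of the two signals matter, which is exactly what atan2 evaluates:

```python
import math

# Sketch: recovering a resolver shaft angle from its two analog outputs.
# atan2 uses the relative magnitude and the signs of the sine and cosine
# signals, so the common amplitude A cancels out. Illustrative names.

def shaft_angle(sin_signal, cos_signal):
    return math.atan2(sin_signal, cos_signal)

A = 2.5                      # arbitrary common signal amplitude
true_angle = 2.0             # radians
angle = shaft_angle(A * math.sin(true_angle), A * math.cos(true_angle))
```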

Figure 1-1: PTP motions in world coordinate system (left: path between two drilling holes on a drilling table; right: robot end effector and three configurations for pick & place)

Applications for PTP motions are spot welding, pick and place, and (partly) machine handling and machine transfer.

CP (Continuous Path) motion

A CP motion specifies the whole path with regard to position and orientation of the TCP. This means that the motions of the individual axes are mutually dependent. These motions are more computationally expensive to execute at run time than PTP motions, since the inverse kinematics must be solved for each time step (usually around 10 milliseconds in today's robot controllers). Possible cartesian motion types include linear interpolation (specified by start and end point) and circular interpolation (specified by start point, via point and end point). Figure 1-2 describes two examples.

Figure 1-2: examples of interpolated motions between programmed positions (left: a sequence consisting of 5 linear motions; right: a linear – circular – linear motion through a via point)

Applications for CP motions are arc welding, gluing, deburring, handling, etc.

2. Spatial descriptions and transformations

The following section summarizes the main mathematical methods used to describe positions and orientations with respect to a particular coordinate system.

2.1. Descriptions: positions, orientations and frames

To describe the position and orientation of the TCP, gripper, objects in the environment, robot links, etc., coordinate systems are used. Sometimes more than one coordinate system is defined. Once a coordinate system is established, we can locate any point in the world with a 3x1 position vector.

Figure 2-1: coordinate system (a dotted arrow symbol denotes an axis pointing out of the plane, a crossed one an axis pointing into the plane)

For coordinate systems the three-finger rule of the right hand can be used: the thumb points in the (positive) x-direction, the index finger in the (positive) y-direction, and the middle finger then points in the (positive) z-direction.

Geometrically, a vector can be interpreted as an arrow described by its length and direction, from a start point to an end point (Figure 2-2). A vector can be translated without change (two vectors are identical if they are equal in length and direction). So vectors must be tagged with information identifying which coordinate system they are defined within. In this lecture, vectors are written with a leading subscript indicating the coordinate system to which they are referenced (unless it is clear from context, in which case this subscript is omitted).

Figure 2-2: geometrical description of a vector

In the context of a coordinate system, a (three-dimensional) vector is a three-tuple of numerical values indicating the distances along the axes of the coordinate

system. Each of these distances along an axis can be thought of as the result of projecting the vector onto the corresponding axis.

Figure 2-3: description of a vector in a coordinate system

modulus (length) of a vector:

$|v| = \sqrt{v_x^2 + v_y^2 + v_z^2}$   (2.1)

(unit) direction of a vector:

$e_v = \frac{v}{|v|} = \begin{pmatrix} v_x/|v| \\ v_y/|v| \\ v_z/|v| \end{pmatrix}$   (2.2)

addition of vectors:

$v_3 = v_1 + v_2 = \begin{pmatrix} v_{1x} + v_{2x} \\ v_{1y} + v_{2y} \\ v_{1z} + v_{2z} \end{pmatrix}$   (2.3)

subtraction of vectors:

$v_1 - v_2 = \begin{pmatrix} v_{1x} - v_{2x} \\ v_{1y} - v_{2y} \\ v_{1z} - v_{2z} \end{pmatrix}$

cross product of two vectors:

$v_3 = v_1 \times v_2$   (2.4)

$v_3 = \begin{pmatrix} v_{3x} \\ v_{3y} \\ v_{3z} \end{pmatrix} = \begin{pmatrix} v_{1y} v_{2z} - v_{1z} v_{2y} \\ v_{1z} v_{2x} - v_{1x} v_{2z} \\ v_{1x} v_{2y} - v_{1y} v_{2x} \end{pmatrix}$   (2.5)

v3 is orthogonal to the plane generated by v1 and v2. The three-finger rule gives the direction of v3 (see Figure 2-4).

Figure 2-4: cross product

Dot product of two vectors: the dot product of two vectors is defined by

$a \cdot b = a_x b_x + a_y b_y + a_z b_z = |a| \, |b| \cos\varphi$   (2.6)

The result is a scalar value which is related to the opening angle φ between the vectors (see Figure 2-5). If one of the vectors is a unit vector (a vector with unit length), the result is the length of the projection onto the unit vector.

Figure 2-5: dot product of two vectors

Base vectors xi, yi, zi: a cartesian coordinate system Ki is fully described by its base vectors xi, yi, zi, which are orthogonal to each other (Figure 2-6):

$x_i = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad y_i = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad z_i = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \quad |x_i| = |y_i| = |z_i| = 1 \; (\text{unit vectors})$   (2.7)
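The vector operations above can be written out directly; a minimal sketch using plain Python lists (the function names are illustrative):

```python
# Sketch: cross and dot product for 3-vectors represented as lists,
# following the component formulas above. Illustrative names.

def cross(v1, v2):
    return [v1[1]*v2[2] - v1[2]*v2[1],
            v1[2]*v2[0] - v1[0]*v2[2],
            v1[0]*v2[1] - v1[1]*v2[0]]

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

v1, v2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
v3 = cross(v1, v2)   # x cross y = z, by the three-finger rule
# v3 is orthogonal to both factors, so the dot products vanish
```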

Notation: $^I a$ denotes the vector a represented in the i-th coordinate system.

Figure 2-6: base vectors of a coordinate system

Rotation matrix: rotation matrices are used to describe the base vectors of one coordinate system relative to another reference system. Given the base vectors of the two coordinate systems i and k:

$^I_K A = (^I x_K \;\; ^I y_K \;\; ^I z_K) = \begin{pmatrix} x_k \cdot x_i & y_k \cdot x_i & z_k \cdot x_i \\ x_k \cdot y_i & y_k \cdot y_i & z_k \cdot y_i \\ x_k \cdot z_i & y_k \cdot z_i & z_k \cdot z_i \end{pmatrix}$   (2.8)

Each component of the (3x3) rotation matrix $^I_K A$ can be written as the dot product of a pair of unit vectors. Note that the components of any vector are simply the projections of that vector onto the unit directions of its reference frame. Since the dot product of two unit vectors yields the cosine of the angle between them, these components are often referred to as direction cosines. Further inspection of (2.8) shows that the rows of the matrix are the unit vectors of system i expressed in system k. Hence, the description of frame i relative to frame k is given by the transpose of (2.8). This suggests that the inverse of a rotation matrix is equal to its transpose.

Exercise: prove $^K_I A^T \, {^K_I A} = I$.

An arbitrary vector a represented in the coordinate system k can be expressed in the coordinate system i using the rotation matrix (and vice versa):

$^I a = {^I_K A} \, {^K a} \quad \text{or} \quad {^K a} = {^K_I A} \, {^I a}, \quad \text{with} \quad {^K_I A} = {^I_K A}^T$   (2.9)

Properties of rotations:

• det(A) = 1

• the columns are mutually orthogonal
• it is possible to describe a rotation with fewer than 9 numbers; a result from linear algebra is Cayley's formula for orthogonal matrices:

$R = (I_3 - S)^{-1} (I_3 + S), \quad S = \begin{pmatrix} 0 & -s_z & s_y \\ s_z & 0 & -s_x \\ -s_y & s_x & 0 \end{pmatrix}$

where S is skew symmetric ($S = -S^T$), so three parameters suffice to describe a rotation. The same result follows by investigation of R: nine matrix elements are subject to six constraints,

$|x| = |y| = |z| = 1, \quad x^T y = y^T z = x^T z = 0$

• rotations do not generally commute: $R_1 R_2 \neq R_2 R_1$

Exercise: Calculate $Rot_X(\frac{\pi}{2}) \, Rot_Z(\frac{\pi}{2})$ and $Rot_Z(\frac{\pi}{2}) \, Rot_X(\frac{\pi}{2})$ and solve both compound transformations geometrically by drawing a frame diagram.

Homogeneous coordinates (frames)

4x4 homogeneous transforms are used to cast the rotation and translation of a general transform into a single matrix form. In other fields of study they can also be used to compute perspective and scaling operations. Here they describe the spatial relationship between two coordinate systems (see Figure 2-7):

$\begin{pmatrix} ^I a \\ 1 \end{pmatrix} = \begin{pmatrix} ^I_K A & ^I p \\ 0^T & 1 \end{pmatrix} \begin{pmatrix} ^K a \\ 1 \end{pmatrix}$

• a 1 is added as the last element of the 4x1 vectors
• the row (0 0 0 1) is added as the last row of the 4x4 matrix

Figure 2-7: spatial relationship between two coordinate systems Ki and Kk, connected by the translation vector p
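The rotation-matrix properties listed above ($R^{-1} = R^T$, det R = 1) can be checked numerically; a minimal sketch for a basic rotation about z (illustrative code, not the handout's exercise solution):

```python
import math

# Sketch: numerically checking R^T R = I and det R = 1 for a basic
# rotation about the z-axis. Illustrative names.

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def det3(A):
    return (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))

R = rot_z(0.7)
I = matmul(transpose(R), R)   # should be the 3x3 identity
d = det3(R)                   # should be 1
```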

The homogeneous transform $^I_K D$ can be viewed as the representation of the vectors xk, yk, zk, p in the i-th coordinate system:

$^I_K D = \begin{pmatrix} ^I x_K & ^I y_K & ^I z_K & ^I p_K \\ 0 & 0 & 0 & 1 \end{pmatrix}$   (2.10)

Homogeneous transforms of several coordinate systems (compound transformations): combining two transformations is performed by multiplying the two homogeneous transforms (see Figure 2-8):

$^I_{I+2} D = {^I_{I+1} D} \cdot {^{I+1}_{I+2} D} = \begin{pmatrix} ^I x_{I+2} & ^I y_{I+2} & ^I z_{I+2} & ^I p_{I+2} \\ 0 & 0 & 0 & 1 \end{pmatrix}$   (2.11)

Figure 2-8: compound transformations (frames Ki, Ki+1, Ki+2 with their connecting translation vectors)

Exercises:

• Describe the rotational and translational part of $^I_{I+2} D$ in terms of the rotational and translational parts of $^I_{I+1} D$ and $^{I+1}_{I+2} D$.
• Invert a homogeneous transform $D = \begin{pmatrix} R & p \\ 0^T & 1 \end{pmatrix}$.

Interpretations:

• A homogeneous transform is a description of a frame: $^I_K D$ describes the frame K relative to the frame I. Specifically, the columns of R are unit vectors defining the principal axes of frame K, and p locates the position of the origin of K relative to frame I.

• A homogeneous transform is a mapping: $^I_K D$ maps $^K P$ to $^I P$.

2.2. Considerations

In the following, only robots with an open kinematic chain are considered.

Figure 2-1: possible designs of robots (base, links, joints, end effector) and their schematic sketches

Figure 2-2: description of the position and orientation of the tool center point (TCP), given by the vectors n, o and p, with respect to the base (world) frame K0
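Frames such as the TCP frame above chain together by multiplying their homogeneous transforms, as in (2.11). A minimal numeric sketch (the frame layout and all names are illustrative assumptions):

```python
import math

# Sketch: composing 4x4 homogeneous transforms by matrix multiplication
# and using the result to map a point from a child frame into the
# parent frame. Illustrative example, not from the handout.

def trans(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def rot_z4(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(D, p):
    """Map a point given in the child frame into the parent frame."""
    ph = p + [1.0]
    return [sum(D[i][k] * ph[k] for k in range(4)) for i in range(3)]

# frame 1: translated by (1,0,0); frame 2: rotated 90 deg about z of frame 1
D01 = trans(1.0, 0.0, 0.0)
D12 = rot_z4(math.pi / 2)
D02 = matmul4(D01, D12)
p0 = apply(D02, [1.0, 0.0, 0.0])   # point (1,0,0) of frame 2 in frame 0
```

The rotation turns (1,0,0) into (0,1,0), and the translation then shifts it to (1,1,0), matching a quick frame diagram.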

Attached to the TCP is the tool frame, described by

• the translation p with respect to K0 (position)
• the vectors n, o, a with respect to K0 (orientation), where n, o, a stand for normal, orthogonal and approach; the rotation matrix representing the orientation is (n o a)

A homogeneous transformation integrates position and orientation with respect to the base frame K0:

$^0_N T = {^0_N D} = \begin{pmatrix} ^0 n & ^0 o & ^0 a & ^0 p \\ 0 & 0 & 0 & 1 \end{pmatrix}$   (2.12)

The index N denotes the degree of freedom of the robot.

2.3. Orientation defined by Euler angles

To avoid redundant information, the orientation is described by three angles a, b, c instead of nine matrix elements.

• Fixed angles: the rotations take place around the axes of the fixed frame.
• Euler angles: the rotations take place around the axes of the moving frame.

Definition: X-Y-Z fixed angles

a: rotation around the z0-axis
b: rotation around the y0-axis
c: rotation around the x0-axis

Each of the three rotations takes place about an axis of the fixed reference frame:

$R_{XYZ}(c,b,a) = R_Z(a) R_Y(b) R_X(c) = \begin{pmatrix} ca & -sa & 0 \\ sa & ca & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} cb & 0 & sb \\ 0 & 1 & 0 \\ -sb & 0 & cb \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & cc & -sc \\ 0 & sc & cc \end{pmatrix} = \begin{pmatrix} ca\,cb & ca\,sb\,sc - sa\,cc & ca\,sb\,cc + sa\,sc \\ sa\,cb & sa\,sb\,sc + ca\,cc & sa\,sb\,cc - ca\,sc \\ -sb & cb\,sc & cb\,cc \end{pmatrix}$   (2.13)

with $sa = \sin a$, $ca = \cos a$, etc.

The inverse problem is of interest: determine a, b, c from given matrix values

$R_{XYZ}(c,b,a) = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}$
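The fixed-angle composition $R_Z(a) R_Y(b) R_X(c)$ above can be checked numerically; a sketch with illustrative names that compares two entries of the product against their closed-form expressions (e.g. $r_{11} = ca\,cb$ and $r_{31} = -sb$):

```python
import math

# Sketch: building the X-Y-Z fixed-angle rotation matrix as the product
# Rz(a) * Ry(b) * Rx(c). Illustrative code, not from the handout.

def rot_x(c):
    cc, sc = math.cos(c), math.sin(c)
    return [[1, 0, 0], [0, cc, -sc], [0, sc, cc]]

def rot_y(b):
    cb, sb = math.cos(b), math.sin(b)
    return [[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]]

def rot_z(a):
    ca, sa = math.cos(a), math.sin(a)
    return [[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a, b, c = 0.3, 0.5, 0.8
R = matmul(rot_z(a), matmul(rot_y(b), rot_x(c)))
# closed-form entries of the compound matrix: r11 = ca*cb, r31 = -sb
```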

Exercise: Calculate a, b, c depending on the matrix values. What happens if $b = \pm\pi/2$?

Definition A (Z-Y-Z Euler angles)

α: rotation around the z0-axis
β: rotation around the (new) y-axis
γ: rotation around the (new) z-axis

The following relations hold between α, β, γ and n, o, a:

$n = \begin{pmatrix} \cos\alpha\cos\beta\cos\gamma - \sin\alpha\sin\gamma \\ \sin\alpha\cos\beta\cos\gamma + \cos\alpha\sin\gamma \\ -\sin\beta\cos\gamma \end{pmatrix}, \quad o = \begin{pmatrix} -\cos\alpha\cos\beta\sin\gamma - \sin\alpha\cos\gamma \\ -\sin\alpha\cos\beta\sin\gamma + \cos\alpha\cos\gamma \\ \sin\beta\sin\gamma \end{pmatrix}, \quad a = \begin{pmatrix} \cos\alpha\sin\beta \\ \sin\alpha\sin\beta \\ \cos\beta \end{pmatrix}$

$\beta = \arccos(a_z) \quad \text{with } 0 < \beta < \pi$
$\alpha = \arctan2(a_y, a_x)$
$\gamma = \arctan2(o_z, -n_z)$   (2.15)

If β = 0, only the sum of the angles is determined: $\alpha + \gamma = \arctan2(n_y, n_x)$.

Arctan calculates values between −π/2 and π/2; arctan2 calculates values between −π and π:

$\theta = \arctan2(y, x) = \begin{cases} \arctan(y/x) & \text{for } +x, +y, \quad 0 \le \theta \le \pi/2 \\ \pi + \arctan(y/x) & \text{for } -x, +y, \quad \pi/2 \le \theta \le \pi \\ -\pi + \arctan(y/x) & \text{for } -x, -y, \quad -\pi \le \theta \le -\pi/2 \\ \arctan(y/x) & \text{for } +x, -y, \quad -\pi/2 \le \theta \le 0 \end{cases}$   (2.14)

Definition B (Z-Y-X Euler angles)

Angle A rotates around the z-axis:

$A' = \begin{pmatrix} \cos A & -\sin A & 0 \\ \sin A & \cos A & 0 \\ 0 & 0 & 1 \end{pmatrix}$

Angle B rotates around the new y-axis (y'):

$A'' = \begin{pmatrix} \cos B & 0 & \sin B \\ 0 & 1 & 0 \\ -\sin B & 0 & \cos B \end{pmatrix}$

Angle C rotates around the new x-axis (x''):

$A^{*} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos C & -\sin C \\ 0 & \sin C & \cos C \end{pmatrix}$

$A = A' A'' A^{*} = \begin{pmatrix} C_A C_B & -S_A C_C + C_A S_B S_C & S_A S_C + C_A S_B C_C \\ S_A C_B & C_A C_C + S_A S_B S_C & -C_A S_C + S_A S_B C_C \\ -S_B & C_B S_C & C_B C_C \end{pmatrix}$   (2.16)

with $C_A = \cos A$, $S_A = \sin A$, etc. The columns give the relationships

$n = \begin{pmatrix} C_A C_B \\ S_A C_B \\ -S_B \end{pmatrix}, \quad o = \begin{pmatrix} -S_A C_C + C_A S_B S_C \\ C_A C_C + S_A S_B S_C \\ C_B S_C \end{pmatrix}, \quad a = \begin{pmatrix} S_A S_C + C_A S_B C_C \\ -C_A S_C + S_A S_B C_C \\ C_B C_C \end{pmatrix}$

and hence

$A = \arctan2(n_y, n_x), \quad B = -\arcsin(n_z), \quad C = \arctan2(o_z, a_z)$   (2.17)

Definition: Equivalent angle-axis

Rotation about the vector K by an angle a according to the right-hand rule:

$R_K(a) = \begin{pmatrix} k_x k_x\,va + ca & k_x k_y\,va - k_z\,sa & k_x k_z\,va + k_y\,sa \\ k_x k_y\,va + k_z\,sa & k_y k_y\,va + ca & k_y k_z\,va - k_x\,sa \\ k_x k_z\,va - k_y\,sa & k_y k_z\,va + k_x\,sa & k_z k_z\,va + ca \end{pmatrix} \quad \text{with } va = 1 - \cos a$   (2.18)
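The Z-Y-X composition (2.16) and the arctan2-based extraction (2.17) can be verified as a round trip; a sketch with illustrative names, valid for −π/2 < B < π/2 so that cos B > 0:

```python
import math

# Sketch: compose the Z-Y-X rotation A = A' * A'' * A* as in (2.16),
# then recover the angles with arctan2 as in (2.17). Illustrative names.

def rot_zyx(A, B, C):
    ca, sa = math.cos(A), math.sin(A)
    cb, sb = math.cos(B), math.sin(B)
    cc, sc = math.cos(C), math.sin(C)
    # columns are n, o, a of (2.16)
    return [
        [ca*cb, -sa*cc + ca*sb*sc,  sa*sc + ca*sb*cc],
        [sa*cb,  ca*cc + sa*sb*sc, -ca*sc + sa*sb*cc],
        [-sb,    cb*sc,             cb*cc],
    ]

def extract_zyx(R):
    A = math.atan2(R[1][0], R[0][0])   # arctan2(n_y, n_x)
    B = -math.asin(R[2][0])            # n_z = -sin B
    C = math.atan2(R[2][1], R[2][2])   # arctan2(o_z, a_z)
    return A, B, C

angles = (0.4, -0.3, 1.1)
recovered = extract_zyx(rot_zyx(*angles))
```

Near B = ±π/2 the extraction degenerates, just as noted for the fixed-angle case above.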

3. Robot kinematics

Definitions:

• A robot may be thought of as a set of bodies connected in a chain by joints. These bodies are called links.
• Joints form a connection between a neighboring pair of links.

Properties:

• Normally robots consist of joints with one degree of freedom (1 DOF): revolute or prismatic joints.
• n-DOF joints can be modeled as n joints with 1 DOF connected by n−1 links of zero length.
• For positioning a robot in 3-space, a minimum of six joints is required; a typical robot consists of 6 joints.
• Joint axes are defined by lines in space.
• A link can be specified by two numbers which define the relative location of the two joint axes in space: link length and link twist.
• Link length: measured along the line which is mutually perpendicular to both axes.
• Link twist: measured in the plane defined by that mutual perpendicular.

3.1. Link connection description by Denavit-Hartenberg notation

Neighboring links have a common joint axis between them. Two parameters describe their connection:

• the distance along the common axis from one link to the other (link offset)
• the amount of rotation about this common axis between one link and its neighbour (joint angle)

Figure 3-1: The length a and twist α of a link

A serial link robot consists of a sequence of links connected together by actuated joints. For an n-DOF robot there will be n joints and n links. The base of the robot is link 0 and is not considered one of the (n=6) links. Link 1 is connected to the base link by joint 1. There is no joint at the end of the final link (TCP). The only significance of links is that they maintain a fixed relationship between the robot joints at each end of the link. Any link can be characterized by two dimensions: the common normal distance ai (called link length) and the angle αi (called link twist) between the axes in a plane perpendicular to ai (see Figure 3-1). Generally, two links are connected at each joint axis (see Figure 3-2).

di and θi are called the distance and the angle between the links. these four parameters are called the Denavit Hartenberg parameters (DH parameters). Notice that this condition is also satisfied for the x axis directed along the normal between joints i and i+1.doc 20/50 . The relative position of two such connected links is given by di. If a tool (or end effector) is used whose origin and axes do not coincide with the frame of link 6. With the robot in its zero position. one for each link. For a prismatic joint we will define the zero position when di = 0. the origin is at the point of intersection of the joint axes. In case of intersecting joint axes.Introduction to Robotics: Module Trajectory generation and robot programming FH Darmstadt. summer term 2000 The axis will have two normals to it. The direction of the joint axis is the direction in which the joint moves. At the end of the robot. then the relationship between the reference and base frames can be described by a fixed homogeneous transformation. The direction of the axis is defined but. We will first consider revolute joints in which θi is the joint variable. the origin is chosen to make the joint distance zero for the next link whose coordinate origin is defined. we will assign coordinate systems (frames) to each link. unlike a revolute joint. In the case of a prismatic joint. The origin of the frame of link i is set to be at the intersection of the common normal between the axes of joints i and i+1 and the axis of joint i+1. the tool can be related by a fixed homogeneous transformation to link 6. and θi the angle between the normals measured in a plane normal to the axis. the final displacement d6 or rotation θ6 occurs with respect to z5. The origin of the frame for a prismatic joint is coincident with the next defined link origin. If it is desired to define a different reference frame. 
Having assigned frames to all links according to the preceding scheme, we can establish the relationship between successive frames i−1, i by the following rotations and translations:

• θi = the angle between Xi−1 and Xi measured about Zi−1
• di = the distance from Xi−1 to Xi measured along Zi−1
• ai = the distance from Zi−1 to Zi measured along Xi
• αi = the angle between Zi−1 and Zi measured about Xi

Due to the authors of this method of attaching frames to links, these four parameters are called the Denavit-Hartenberg parameters (DH parameters).

Figure 3-2: Link parameters θ, d, a and α (axes i-1, i, i+1 and links i-1, i, i+1)

The transformation from one link frame to the next can now be defined by homogeneous transformations using this notation:

    ^{i-1}D_i = Rot_{z,i-1}(θi) · Trans_{z,i-1}(di) · Trans_{x,i}(ai) · Rot_{x,i}(αi)

               | cos θi   -sin θi·cos αi    sin θi·sin αi    ai·cos θi |
             = | sin θi    cos θi·cos αi   -cos θi·sin αi    ai·sin θi |      (3.1)
               |   0           sin αi           cos αi           di    |
               |   0             0                0               1    |

The homogeneous matrices ^{i-1}D_i are called Denavit-Hartenberg matrices. In the case of rotational joints the components of the DH matrices depend only on the joint angle θi, since the parameters ai, di and αi are constants.

Concatenating the link transformations yields the single transformation that relates frame N (usually the TCP frame) to frame 0 (usually the base frame):

    ^0T_N = ^0D_N = ^0D_1 · ^1D_2 ··· ^{N-2}D_{N-1} · ^{N-1}D_N      (3.2)

Here n (= N) denotes the number of joints.
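The DH transform (3.1) and the concatenation (3.2) are easy to check numerically. The following sketch (function names are illustrative, not part of the lecture notes) builds one DH matrix and chains a whole parameter table:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous Denavit-Hartenberg transform (3.1) from frame i-1 to frame i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Concatenate the link transforms as in (3.2): T = D_1 * D_2 * ... * D_N."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# example: one revolute link of length 1, all angles zero -> pure translation along x
T = forward_kinematics([(0.0, 0.0, 1.0, 0.0)])
```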

For rotational joints the transformation is a function of all joint angles. The joint angles can be collected into a vector θ = (θ1, θ2, …, θn)ᵀ, so that ^0T_N = T(θ).

Examples

2-dof robot with parallel axes

Figure 3-3: Planar 2-dof robot including coordinate systems (joints "Gelenk 1", "Gelenk 2", link lengths h1, h2, TCP)

The following table shows the set of DH parameters; q1 and q2 are the variables (joint angles):

    Joint | θi / degree | di | ai | αi / degree
      1   |    q1       | 0  | h1 |     0
      2   |  q2 - 90    | 0  | h2 |     0

The DH transformations are:

             | cos q1   -sin q1   0   h1·cos q1 |
    ^0D_1 =  | sin q1    cos q1   0   h1·sin q1 |
             |   0         0      1      0      |
             |   0         0      0      1      |

             | -sin q2   -cos q2   0   -h2·sin q2 |
    ^1D_2 =  |  cos q2   -sin q2   0    h2·cos q2 |
             |    0         0      1       0      |
             |    0         0      0       1      |

                     | -C1S2 - S1C2    -C1C2 + S1S2   0   h2·(-C1S2 - S1C2) + h1·C1 |
    ^0D_2 = T(q) =   | -S1S2 + C1C2    -S1C2 - C1S2   0   h2·(-S1S2 + C1C2) + h1·S1 |
                     |      0               0         1              0              |
                     |      0               0         0              1              |

with Ci = cos qi and Si = sin qi.
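The product ^0D_2 = ^0D_1 · ^1D_2 can be verified numerically — a small sketch with arbitrary sample values for h1, h2, q1, q2:

```python
import numpy as np

def D01(q1, h1):
    # 0_1D as printed above
    c, s = np.cos(q1), np.sin(q1)
    return np.array([[c, -s, 0, h1 * c],
                     [s,  c, 0, h1 * s],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]], dtype=float)

def D12(q2, h2):
    # 1_2D as printed above (note the 90-degree offset in theta_2)
    c, s = np.cos(q2), np.sin(q2)
    return np.array([[-s, -c, 0, -h2 * s],
                     [ c, -s, 0,  h2 * c],
                     [ 0,  0, 1,  0],
                     [ 0,  0, 0,  1]], dtype=float)

h1, h2, q1, q2 = 2.0, 1.0, 0.3, 0.5
T = D01(q1, h1) @ D12(q2, h2)
# TCP position from the closed-form product:
#   x = h1*cos(q1) - h2*sin(q1+q2),  y = h1*sin(q1) + h2*cos(q1+q2)
assert np.isclose(T[0, 3], h1 * np.cos(q1) - h2 * np.sin(q1 + q2))
assert np.isclose(T[1, 3], h1 * np.sin(q1) + h2 * np.cos(q1 + q2))
```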

2-dof robot with perpendicular axes

Figure 3-4: 2-dof robot with perpendicular axes (link lengths h1, h2)

DH parameters; q1 and q2 are variables, a1, a2, d1, d2, α1, α2 are constants:

    Joint | θi / degree | di  | ai | αi / degree
      1   |  q1 + 180   | -h1 | 0  |     90
      2   |    q2       |  0  | h2 |      0

3-dof robot with 2 rotational joints and 1 prismatic joint

Figure 3-5: 3-dof robot with 2 rotational joints and 1 prismatic joint (prismatic variable d*)

DH parameters; q1, q2 and d* are variables, a1, a2, a3, d1, d2, α1, α2, α3 are constants:

    Joint | θi / degree | di  | ai | αi / degree
      1   |  q1 + 180   | -h1 | 0  |     90
      2   |  q2 + 90    |  0  | 0  |     90
      3   |     0       | d*  | 0  |      0

6-dof robot with 6 rotational joints (a frequently used kinematic structure)

Figure 3-6: 6-dof robot with 6 rotational joints (frequently used kinematic structure)

Figure 3-7: Sketch of the 6-dof robot (link lengths h1, h2, h4, h6; h5 = 0)

DH parameters:

    Joint | θi / degree | di             | ai         | αi / degree
      1   |    q1       | -h1 (-0.334 m) |  0         |     90
      2   |  q2 - 90    |  0             | h2 (0.8 m) |      0
      3   |    q3       |  0             |  0         |    -90
      4   |    q4       | h4 (0.78 m)    |  0         |     90
      5   |    q5       |  0             |  0         |    -90
      6   |    q6       | h6 (0.8 m)     |  0         |      0

Figure 3-8: Reis robot RV6 (payload 6 kg)

Figure 3-9: Reis robot RV6, sketch

DH parameters of the RV6:

    Joint | θi / rad  | di | ai  | αi / rad
      1   |   q1      | 0  | h11 |  -π/2
      2   | q2 - π/2  | 0  | h2  |   0
      3   | q3 + π    | 0  | 0   |   π/2
      4   | q4 + π    | h4 | 0   |   π/2
      5   |   q5      | 0  | 0   |  -π/2
      6   | q6 - π/2  | h6 | 0   |   0

h11 = 0.615 m, h4 = 0.54 m; h6 depends on the tool.

3.2. Forward kinematics

Given the joint values q of a robot with n degrees of freedom, the homogeneous matrix defining the position and orientation of the TCP is

    T(q) = | n(q)  o(q)  a(q)  p(q) |  =  ^0D_1 · ^1D_2 ··· ^{N-2}D_{N-1} · ^{N-1}D_N
           |  0     0     0     1   |

The vectors p, n, o, a are described in the base (world) frame. This mapping from robot coordinates to world coordinates is unique.

Figure 3-1: Forward kinematics — mapping from robot coordinates q to world coordinates n, o, a, p

3.3. Inverse kinematics

Given the position and orientation of the TCP with respect to the base frame, calculate the joint coordinates of the robot which correspond to it.

Figure 3-1: Inverse kinematics — mapping from world coordinates to robot coordinates ("T(q)⁻¹")

Problem: Given T(q) as 16 numerical values (four of which are trivial), solve (2.24) for the 6 joint angles q1, …, q6. In the equation

    | n  o  a  p |
    | 0  0  0  1 |  =  T(q)

the left side is given. Among the 9 equations arising from the rotation-matrix portion of T, only three are independent. Added to the 3 equations from the position-vector portion of T, this gives 6 equations with 6 unknowns. These equations are nonlinear, transcendental equations which can be quite difficult to solve.

Problems related to inverse kinematics:
• Existence of solutions
• Multiple solutions
• Method of solution

Existence of solutions

The existence of solutions is closely linked to the workspace of the robot, i.e. the volume of space which the TCP of the robot can reach. For a solution to exist, the specified goal point must lie in the workspace.

Dexterous workspace: volume of space which the robot's TCP can reach with all orientations.
Reachable workspace: volume of space which the robot's TCP can reach with at least one orientation.

Examples:

2-dof robot with l1 = l2:
• Dexterous workspace: the origin
• Reachable workspace: disc of radius l1 + l2

2-dof robot with l1 ≠ l2:
• Dexterous workspace: empty
• Reachable workspace: ring of outer radius l1 + l2 and inner radius |l1 - l2|

Since 6 parameters are necessary to describe an arbitrary position and orientation of the TCP, the degree of freedom of the robot must be at least 6 in order for a dexterous workspace to exist at all.

Multiple solutions:

Figure 3-2: Example of multiple solutions ("Lösung 1" = solution 1, "Lösung 2" = solution 2) for a 2-dof robot

Figure 3-3: Example of infinitely many solutions (wrist axes q4, q5, q6 with q5* = 0)

Suppose, in the example of

Figure 3-3, that q5* = 0 and that the given vectors p and n are realized by the joint angles q1*, q2*, q3*. The orientation can then be realized by an infinite number of pairs q4 and q6 (normally by q4 + q6 = 0). In practical implementations of the inverse kinematics an additional condition is used to select a solution, e.g.

    c1·(q4,i-1 - q4,i)² + c2·(q6,i-1 - q6,i)² = MIN

which minimizes the travel range of q4 and q6.

Method of solution:

There are two classes:
• Closed-form solutions (analytical, geometrical)
• Numerical solutions

In this lecture we concentrate on closed-form solutions. Whether an analytical solution is possible depends on the kinematic structure.

Example: planar 2-dof robot. Given the Cartesian position px(0), py(0) (coordinates in frame 0), with

    px(0) = h2·(cos q1·cos q2 - sin q1·sin q2) + h1·cos q1
    py(0) = h2·(sin q1·cos q2 + cos q1·sin q2) + h1·sin q1

the joint values q1 and q2 can be calculated.

Geometrical solution — law of cosines: for a triangle with sides a, b, c and the angle α opposite side a,

    a² = b² + c² - 2·b·c·cos α

Example: planar 2-dof robot (Figure 3-4)

Figure 3-4: Geometrical solution for the 2-dof robot (angles α, β, χ; target point P)

Relationships:

    α = arctan2( py(0), px(0) )

For β the cosine rule applies:

    h2² = h1² + |p|² - 2·h1·|p|·cos β   ⇒   β = arccos( (h1² + |p|² - h2²) / (2·h1·|p|) )

Therefore q1 = α - β. For χ the cosine rule applies:

    |p|² = h1² + h2² - 2·h1·h2·cos χ   ⇒   χ = arccos( (h1² + h2² - |p|²) / (2·h1·h2) )

Therefore q2 = π - χ. The arccos function is not unique, so the solution is obviously not unique: for the vector p, the solution indicated by the dotted line in Figure 3-4 is given by the positive sign of β and χ, and the solution drawn with straight lines by the negative sign.

Solution for a typical 6-dof robot with intersecting wrist axes

The general solution simplifies if the robot has a "wrist" with intersecting axes (axes 4–6): a partial decoupling of position and orientation is possible. In this case the intersection point p3 can be calculated by

    ^0p_3 = ^0p - d6 · ^0a

The position of p3 depends only on the first 3 joint values q1, q2 and q3, which are therefore calculated from p3.
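The α–β–χ construction for the planar 2-dof arm can be sketched as a round-trip test (one elbow branch only; function names and sample values are illustrative):

```python
import numpy as np

def ik_2dof(px, py, h1, h2):
    """Geometrical inverse kinematics of the planar 2-dof arm (one branch)."""
    r2 = px**2 + py**2
    r = np.sqrt(r2)
    alpha = np.arctan2(py, px)
    beta = np.arccos((h1**2 + r2 - h2**2) / (2.0 * h1 * r))
    chi = np.arccos((h1**2 + h2**2 - r2) / (2.0 * h1 * h2))
    q1 = alpha - beta          # the other branch is alpha + beta
    q2 = np.pi - chi           # the other branch is -(pi - chi)
    return q1, q2

# round trip: forward position of the standard planar arm, then back
h1, h2, q1, q2 = 2.0, 1.0, 0.4, 0.7
px = h1 * np.cos(q1) + h2 * np.cos(q1 + q2)
py = h1 * np.sin(q1) + h2 * np.sin(q1 + q2)
s1, s2 = ik_2dof(px, py, h1, h2)
assert np.isclose(s1, q1) and np.isclose(s2, q2)
```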

Calculation of q3 (Figure 3-5):

    R3 = | ^0p_3 + (0, 0, d1)ᵀ | = √( p3x² + p3y² + (p3z + d1)² )

Law of cosines for φ:

    R3² = l1² + l2² - 2·l1·l2·cos φ   ⇒   φ = arccos( (l1² + l2² - R3²) / (2·l1·l2) )

Therefore q3 is known: with the zero position used here, q3 = π/2 - φ.

Figure 3-5: Calculation of q3

Calculation of q2 (Figure 3-6):

Figure 3-6: Calculation of q2 (angles β1, β2, distance R3)

    sin β1 = (p3z - d1) / R3   ⇒   β1 = arcsin( (p3z - d1) / R3 )

Law of cosines for β2:

    l2² = l1² + R3² - 2·l1·R3·cos β2   ⇒   β2 = arccos( (l1² + R3² - l2²) / (2·l1·R3) )

Therefore q2 is known: q2 = -(β1 + β2).

Calculation of q1 (Figure 3-7):

    q1 = arctan2( p3y, p3x )

Figure 3-7: Calculation of q1

Calculation of q5 (Figure 3-8):

    q5 = arccos( ^0z_3 · ^0a )

Figure 3-8: Calculation of q5

Calculation of q4:

    T = ^0D_1 · ^1D_2 · ^2D_3 · ^3D_4 · ^4D_5 · ^5D_6 = ^0D_3 · ^3D_6
    ⇒  ^3D_6 = ^3D_4 · ^4D_5 · ^5D_6 = [ ^0D_1 · ^1D_2 · ^2D_3 ]⁻¹ · T

The right side of the last equation is known, since T is given and the Denavit-Hartenberg matrices on the right side depend only on q1, q2, q3, which are now available. The left side depends only on q4, q5 and q6. It is possible to eliminate q6 and to calculate q4; the modulus of q5 is also available.

Calculation of q6 (Figure 3-9):

    q6 = arccos( ^0y_5 · ^0o )

Figure 3-9: Calculation of q6

^0y_5 is known, since ^0D_5 can be calculated from the given q1, …, q5:

    ^0D_5 = | ^0x_5  ^0y_5  ^0z_5  ^0p_5 |
            |   0      0      0      1   |

4. Trajectory Generation

Scope: methods of computing a trajectory in multidimensional space.

Definition: a trajectory refers to a time history of position, velocity and acceleration for each degree of freedom.

Issues:
• Specification of trajectories
  • Motion description easy for robot operators
  • Start and end point
  • Geometrical properties
• Representation of trajectories in the robot control
• Computing trajectories on-line (at run time)

General considerations:
• The motion of the robot is described as motion of the TCP (tool frame)
• This supports the robot operator's imagination
• It decouples the motion description from any robot end-effector (modularity)
• Exchangeability with other robots
• It supports the idea of moving frames (conveyor belt)

Basic problem: move the robot from the start position, given by the tool frame Tinitial, to the end position, given by the tool frame Tfinal.

Spatial motion constraints:
• The specification of a motion might include so-called via points
• Via points: intermediate points between start and end points

Temporal motion constraints:
• The specification of a motion might include the elapsed time between via points

Requirements:
• Execution of smooth motions
• Smooth (motion) function: the function and its first derivative are continuous
• Jerky motions increase wear on the mechanism (gears) and cause vibrations by exciting resonances of the robot

There are two methods of path generation:
• Joint space schemes: path shapes in space and time are described in terms of functions of joint angles. This motion type is called point-to-point motion (PTP).
• Cartesian space schemes: path shapes in space and time are described in terms of functions of Cartesian coordinates. This motion type is called continuous path motion (CP).

4.1. Trajectory generation in joint space (PTP motions)

• The path shape (in space and time) is described in terms of functions of joint angles
• Path points (via points plus start and end point) are described in terms of tool frames
• Each path point is converted into joint angles by applying the inverse kinematics
• A smooth function is identified for each of the n joints which passes through the via points and ends at the target point
• Synchronisation of the motion (each joint starts and ends at the same time)
• The joint which travels the longest distance defines the travel time (assuming the same maximal acceleration for each joint)
• In between via points the shape of the path is complex if described in Cartesian space
• Joint space trajectory generation schemes are easy to compute
• Each joint motion is calculated independently of the other joints

Figure 4-1: PTP motion between two points (A: start, B: "Ziel" = target)

Many — actually infinitely many — smooth functions exist for such motions. Four constraints on the (single) joint function q(t) are evident:

• Start configuration q(0) = qA, end configuration q(tend) = qB
• Velocities q̇(0) = 0, q̇(tend) = 0

Four constraints can be satisfied by a polynomial of degree 3 or higher. In the case of via points the velocity is not zero:

• Start configuration q(0) = qA, end configuration q(tend) = qB
• Velocities q̇(0) = q̇0, q̇(tend) = q̇end

The velocities can be chosen
• by the robot user, or
• automatically by the robot control.

Using this type of polynomial does not in general generate time-optimal motions.
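The four boundary constraints determine the cubic uniquely; a small sketch with the standard textbook coefficient formulas (names and sample values are illustrative):

```python
def cubic_coeffs(qA, qB, v0, vf, tf):
    """Cubic q(t) = a0 + a1*t + a2*t^2 + a3*t^3 meeting the four boundary constraints
    q(0) = qA, q(tf) = qB, qdot(0) = v0, qdot(tf) = vf."""
    d = qB - qA
    a0, a1 = qA, v0
    a2 = (3.0 * d - (2.0 * v0 + vf) * tf) / tf**2
    a3 = (-2.0 * d + (v0 + vf) * tf) / tf**3
    return a0, a1, a2, a3

def q(t, c):
    a0, a1, a2, a3 = c
    return a0 + a1 * t + a2 * t * t + a3 * t * t * t

# rest-to-rest motion from 10 deg to 30 deg in 2 s
c = cubic_coeffs(10.0, 30.0, 0.0, 0.0, 2.0)
```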

Linear function with parabolic blends:

Figure 4-2: Path with trapezoidal velocity profile — q(t), q̇(t), q̈(t) over t0, tb, th, tf - tb, tf

A smooth motion is constructed by adding parabolic blends to the linear function. Assumptions:
• Constant acceleration during the blend
• Same duration for the (two) parabolic blends
• Therefore symmetry about the halfway point in time th

The velocity at the end of the blend is equal to the velocity during the linear section:

    q̈·tb = (qh - qb) / (th - tb)

and together with qb = q0 + (1/2)·q̈·tb² this leads to

    q̈·tb² - q̈·tf·tb + (qf - q0) = 0

The free parameters are q̈ and tb. Usually the acceleration q̈ is chosen and the equation is solved for tb.
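Choosing the acceleration and solving the quadratic above for tb, the whole trapezoidal profile can be sketched (illustrative names, not a controller implementation):

```python
import math

def trapezoid(q0, qf, qdd, tf):
    """Linear segment with parabolic blends; the acceleration qdd and time tf are user-chosen."""
    disc = (qdd * tf)**2 - 4.0 * qdd * (qf - q0)
    assert disc >= 0.0, "acceleration too low for this travel time"
    tb = tf / 2.0 - math.sqrt(disc) / (2.0 * qdd)   # smaller root, so tb <= tf/2
    v = qdd * tb                                    # cruise velocity
    def q(t):
        if t < tb:                                  # accelerating blend
            return q0 + 0.5 * qdd * t * t
        if t < tf - tb:                             # linear section
            return q0 + 0.5 * qdd * tb * tb + v * (t - tb)
        dt = tf - t                                 # decelerating blend (mirror image)
        return qf - 0.5 * qdd * dt * dt
    return q, tb, v

q, tb, v = trapezoid(0.0, 1.0, 1.0, 4.0)
assert abs(q(4.0) - 1.0) < 1e-9      # reaches the target
assert abs(q(2.0) - 0.5) < 1e-9      # symmetry about the halfway point
```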

• The acceleration must be sufficiently high (otherwise a solution does not exist)
• The acceleration is not continuous, which excites vibrations

Alternative: sinusoidal profile. During the blend phase the acceleration is defined by

    q̈(t) = q̈max · sin²( π·t / tb )      (4.1)

Figure 4-3: Profile for a sinusoidal path — q̈(t), q̇(t), q(t) over tb, tv, tf

The following relationships hold (constant velocity between tb and tv):

    q̇max = q̈max·tb / 2      (4.2)

    tb = tf/2 - √( tf²/4 - 2·qf / q̈max ),    tv = tf - tb      (4.3)

The constraint on the acceleration is

    tf² · q̈max ≥ 8·qf
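As a sanity check, integrating the sin²-shaped acceleration numerically must reach qf at tf. This is a sketch of the reconstruction given in (4.1)–(4.3), with assumed sample values, not the controller implementation:

```python
import math

qf, qdd_max, tf = 1.0, 1.0, 4.0            # satisfies tf**2 * qdd_max >= 8 * qf
tb = tf / 2.0 - math.sqrt(tf**2 / 4.0 - 2.0 * qf / qdd_max)
tv = tf - tb

def accel(t):
    if t < tb:                              # sin^2-shaped acceleration blend, peak qdd_max
        return qdd_max * math.sin(math.pi * t / tb)**2
    if t < tv:                              # constant-velocity section
        return 0.0
    return -qdd_max * math.sin(math.pi * (t - tv) / tb)**2   # mirrored deceleration

# semi-implicit Euler integration of q'' -> q' -> q
dt, q, qd, t = 1e-5, 0.0, 0.0, 0.0
while t < tf:
    qd += accel(t) * dt
    q += qd * dt
    t += dt
assert abs(q - qf) < 1e-3 and abs(qd) < 1e-3   # ends at qf with (almost) zero velocity
```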

Calculation of q(t) (measured from q0 = 0, with qf = q̈max·(tb/2)·(tf - tb)):

    0 ≤ t ≤ tb:    q(t) = q̈max · [ t²/4 + (tb²/(8π²)) · (cos(2π·t/tb) - 1) ]
    tb ≤ t ≤ tv:   q(t) = q̈max · (tb/2) · (t - tb/2)                                   (4.4)
    tv ≤ t ≤ tf:   q(t) = qf - q̈max · [ (tf - t)²/4 + (tb²/(8π²)) · (cos(2π·(tf - t)/tb) - 1) ]

4.1.1. Asynchronous PTP

• Independent calculation of the motion of each joint
• In general a different travel time for each axis (undesirable behaviour)
• For each joint the values q0,i, qf,i, q̇max,i, q̈max,i are given
• The travel time tf,i is calculated for each joint

4.1.2. Synchronous PTP: each joint ends its motion at the same time

• For each joint the values q0,i, qf,i, q̇max,i, q̈max,i are given
• The travel time tf,i is calculated for each joint
• Calculate the maximum travel time tf,max = max_{i=1,…,dof} { tf,i }
• For each joint set tf,i = tf,max and adjust the joint speed:

    q̇max,i = [ tf,max·q̈max,i - √( (tf,max·q̈max,i)² - 8·qf,i·q̈max,i ) ] / 4      (4.5)

The negative sign of the square root has to be chosen in order to fulfil 2·tb,i < tf,max.

4.2. Trajectory generation in Cartesian space

The robot user would like to control the motion between start and end point. Several possibilities exist:
• Straight-line motion (the TCP follows a straight line)
• Circular motion (the TCP follows a circle segment)
• Spline motion

• The inverse kinematics has to be calculated at run time
• This is computationally expensive (depending on the robot)

Issues:
• Interpolation of the TCP position (linear change of the coordinates)
• Interpolation of the orientation (a linear change of the matrix elements would fail)

4.2.1. Linear interpolation

Start point and end point are given in the world frame:

    start point:  xA = ( px,A  py,A  pz,A  αA  βA  χA )ᵀ
    target point: xB = ( px,B  py,B  pz,B  αB  βB  χB )ᵀ

Figure 4-1: Linear interpolation for the positioning of the TCP in the world frame (start point A, target point B, path parameter sP, path length sEP, base frame K0)

• The TCP moves between the points A and B along a straight line
• The robot user sets the motion speed and the motion acceleration
• The orientation changes from (αA, βA, χA) to (αB, βB, χB); the orientation is defined by Euler angles

Calculation of intermediate positions p and orientations Θ:

    p(t) = pA + ( sP(t) / sfP ) · (pB - pA)      (4.6)

sP denotes the path parameter and sfP the total path length:

    sfP = |pB - pA| = √( (px,B - px,A)² + (py,B - py,A)² + (pz,B - pz,A)² )      (4.7)

The motion starts at time t = 0 and ends in the target at time t = tfP, with ṡP(0) = ṡP(tfP) = 0.

Orientation:

    Θ(t) = ΘA + ( sΘ(t) / sfΘ ) · (ΘB - ΘA)      (4.8)

    sfΘ = |ΘB - ΘA| = √( (αB - αA)² + (βB - βA)² + (χB - χA)² )

with ṡΘ(0) = ṡΘ(tfΘ) = 0. For the positioning of the TCP, velocity profiles as in the PTP case are used:
• Trapezoidal profile
• Sinusoidal profile

The robot user sets speed and acceleration for position (vP, bP) and orientation (vΘ, bΘ). The duration of the motion (in the case of a sinusoidal profile) is:

    tfP = sfP/vP + vP/bP    and    tfΘ = sfΘ/vΘ + vΘ/bΘ      (4.9)

Synchronisation of the motion: tf = max( tfP, tfΘ ).

Case 1, tf = tfP:

    vΘ = bΘ·tf/2 - √( bΘ²·tf²/4 - sfΘ·bΘ ),    tbΘ = vΘ/bΘ      (4.10)

Case 2, tf = tfΘ:

    vP = bP·tf/2 - √( bP²·tf²/4 - sfP·bP ),    tbP = vP/bP      (4.11)
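The synchronisation (4.9)–(4.11) can be sketched: compute both durations, take the maximum, and reduce the faster channel's speed via the quadratic. Names and sample values are illustrative:

```python
import math

def duration(s, v, b):
    """Sinusoidal-profile duration (4.9): cruise time plus blend time."""
    return s / v + v / b

def corrected_speed(s, b, tf):
    """Reduced speed so that s/v + v/b == tf (smaller root, as in (4.10)/(4.11))."""
    disc = max(0.0, (b * tf)**2 / 4.0 - s * b)
    return b * tf / 2.0 - math.sqrt(disc)

sP, vP, bP = 0.5, 0.4, 1.0          # position: path length, speed, acceleration
sO, vO, bO = 1.2, 0.8, 2.0          # orientation: "length" in rad, speed, acceleration
tfP, tfO = duration(sP, vP, bP), duration(sO, vO, bO)
tf = max(tfP, tfO)
if tfP < tf:
    vP = corrected_speed(sP, bP, tf)    # position was faster -> slow it down
else:
    vO = corrected_speed(sO, bO, tf)    # orientation was faster -> slow it down
assert abs(duration(sP, vP, bP) - tf) < 1e-9
assert abs(duration(sO, vO, bO) - tf) < 1e-9
```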

Reference to the PTP motion — the quantities carry over as follows:

    tb, tv → tbP, tvP:   q(t) → sP(t),   q̇max → vP(t),   q̈max → bP(t)
    tb, tv → tbΘ, tvΘ:   q(t) → sΘ(t),   q̇max → vΘ(t),   q̈max → bΘ(t)

The following scheme presents the algorithm for straight-line motions:

Flow chart for a straight-line motion from pA, ΘA to pB, ΘB with user-set vP, bP, vΘ, bΘ:

1. Calculate sfP and sfΘ, and from them the durations tfP and tfΘ.
2. If tfP > tfΘ: set tf = tfP and correct vΘ, tbΘ; otherwise set tf = tfΘ and correct vP, tbP.
3. If tf < 2·tbP (or tf < 2·tbΘ): correct vP, tbP, tfP (or vΘ, tbΘ, tfΘ), respectively.
4. Calculate sP(t) and p(t), as well as sΘ(t) and Θ(t).
5. Transform Θ(t) into the orientation vectors l(t), m(t), n(t).
6. Apply the inverse kinematics to obtain the joint setpoints qd(t).

Problems related to Cartesian motions:
• Intermediate points may be unreachable (outside the workspace)
• High joint rates near singularities
• Start and target position may be reachable only in different configurations (solutions)

4.2.2. Circular interpolation

Motion of the TCP along a circular segment from point P1 via P2 to point P3; P2 is sometimes called the auxiliary point. The points P1, P2, P3 and the velocity vC and acceleration bC are defined by the robot user. The procedure is analogous to linear interpolation:

• From the points P1, P2, P3 defining the circular segment, calculate the center point M and the radius r. With the normal of the circle plane

      nK = (P2 - P1) × (P3 - P2),    dK = nK · P1

  and the two bisecting planes Ei = { X ∈ ℝ³ | x · ni = di , x := OX }, i = 1, 2, with

      n1 := P2 - P1,    d1 := (1/2)·(P1 + P2) · n1
      n2 := P3 - P2,    d2 := (1/2)·(P2 + P3) · n2

  the vector m of the center point M is the solution of the linear system

      | n1ᵀ |         | d1 |
      | n2ᵀ | · m  =  | d2 |
      | nKᵀ |         | dK |

• Introduce the path parameter sC(t) = r·φ(t), where φ(t) is the opening angle up to the actual position; sCE = r·φG denotes the total path length.

Exercise: Given the points P1 = [0 1 1]ᵀ, P2 = [1 1 2]ᵀ, P3 = [2 1 1]ᵀ, calculate the center point M and the radius r.
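The linear system for the center point can be solved directly; applied to the exercise points it yields M = (1, 1, 1)ᵀ and r = 1 (a sketch using NumPy; the function name is illustrative):

```python
import numpy as np

def circle_through(P1, P2, P3):
    """Center M and radius r of the circle through three points in 3-D."""
    P1, P2, P3 = map(np.asarray, (P1, P2, P3))
    n1 = P2 - P1
    n2 = P3 - P2
    nK = np.cross(n1, n2)                  # normal of the circle plane
    A = np.vstack([n1, n2, nK])
    rhs = np.array([0.5 * (P1 + P2) @ n1,  # bisecting plane of P1-P2
                    0.5 * (P2 + P3) @ n2,  # bisecting plane of P2-P3
                    P1 @ nK])              # circle plane through P1
    M = np.linalg.solve(A, rhs)
    return M, np.linalg.norm(P1 - M)

M, r = circle_through([0, 1, 1], [1, 1, 2], [2, 1, 1])
assert np.allclose(M, [1.0, 1.0, 1.0]) and np.isclose(r, 1.0)
```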

Figure: Circular interpolation — circle through P1, P2, P3 with center point M, radius r, opening angle φ (total angle φG) and path parameter sC(t), shown in the circle frame (xC, yC) and the base frame K0.

4.2.3. Corner smoothing

The interpolation methods mentioned above bring the motion to a stop at the end point of each segment. Corner smoothing lets the robot move from one segment to the next without reducing speed.

4.2.4. Spline interpolation based on Bézier curves

• Bézier curves are useful for designing trajectories with certain "roundness" properties
• They are defined by a series of control points with associated weighting factors
• The weighting factors "pull" the curve towards the control points
• Bézier curves are based on Bernstein polynomials (we consider here cubic polynomials, n = 3):

      Bkⁿ(u) := (n choose k) · (1 - u)^(n-k) · u^k

  with normalized parameter interval u ∈ [0, 1]. The Bernstein polynomials can be derived from the formula

      1 = ((1 - u) + u)ⁿ = Σ_{k=0}^{n} (n choose k) · (1 - u)^(n-k) · u^k

Figures (a) and (b): Bernstein polynomials Bi²(u) (quadratic, n = 2) and Bi³(u) (cubic, n = 3).

The definition of a Bézier curve is based on these polynomials:

      X(u) = Σ_{i=0}^{n} bi · Biⁿ(u)

The bi denote the Bézier points or control points. The polygon connecting the Bézier points is called the Bézier polygon or control polygon.

Two examples of Bézier curves with their associated control polygons (control points b0, …, b5) are shown in figures (a) and (b).

The de Casteljau algorithm is used to calculate points Xⁿ(u0) on the Bézier curve. Geometrical interpretation: consecutive division of the control polygon in the ratio (1 - u0) : u0.

Exercise: Given the (two-dimensional) control points B0, B1, B2 and B3 of a cubic Bézier curve, with B0 = (0, 0)ᵀ, B1 = (1, ·)ᵀ, B2 = (1, ·)ᵀ and B3 = (0, 2)ᵀ, calculate X(0.5) geometrically and by computation of

      X(u) = Σ_{i=0}^{3} Bi · Bi³(u)
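The de Casteljau algorithm and the Bernstein form must agree; a sketch with arbitrary sample control points (not the exercise data; names are illustrative):

```python
import math

def de_casteljau(points, u):
    """Evaluate a Bezier curve by consecutive division of the control polygon."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - u) * a + u * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def bernstein(points, u):
    """Evaluate the same curve as X(u) = sum_i b_i * B_i^n(u)."""
    n = len(points) - 1
    w = [math.comb(n, k) * (1 - u)**(n - k) * u**k for k in range(n + 1)]
    return tuple(sum(c * p[d] for c, p in zip(w, points))
                 for d in range(len(points[0])))

ctrl = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0), (0.0, 2.0)]   # sample control points
a = de_casteljau(ctrl, 0.5)
b = bernstein(ctrl, 0.5)
assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))
```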

Generalized definition of the Bernstein polynomials — transformation of the parameter interval [0, 1] to [a, b] by u → a + (b - a)·u:

      Bkⁿ(u) := (n choose k) · (b - u)^(n-k) · (u - a)^k / (b - a)ⁿ

Further examples: figures (a) and (b) show Bézier curves for further control polygons B0, …, B3.

Problems:
• The path length cannot be calculated in closed form (unlike for linear or circular segments)
• A numerical approximation is possible
• Integration in robot controllers: the teach points are programmed by the robot user and the system "estimates" the tangents

Figure: Synchronisation of the position path with the orientation — the moving frame (ex, ey, ez) attached to the path points P.

Generic trajectory generation (example: linear interpolation). A straight-line segment from PA to PE is represented as a cubic Bézier curve with the control points

      B0 := PA
      B1 := PA + (1/3)·(PE - PA)
      B2 := PE - (1/3)·(PE - PA)
      B3 := PE
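With the control points B1 and B2 at the third-points, the cubic Bézier curve reproduces the straight line exactly, X(u) = PA + u·(PE - PA) — a quick check (a sketch; the sample endpoints are arbitrary):

```python
import numpy as np

PA = np.array([0.0, 1.0, 1.0])
PE = np.array([2.0, 1.0, 3.0])
B = [PA, PA + (PE - PA) / 3.0, PE - (PE - PA) / 3.0, PE]

def bezier3(B, u):
    """Cubic Bezier curve via the Bernstein basis."""
    w = [(1 - u)**3, 3 * (1 - u)**2 * u, 3 * (1 - u) * u**2, u**3]
    return sum(wi * bi for wi, bi in zip(w, B))

# the curve degenerates to the straight line through PA and PE
for u in (0.0, 0.25, 0.5, 1.0):
    assert np.allclose(bezier3(B, u), PA + u * (PE - PA))
```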

5. Robot programming languages

Robot programming systems are the interface between the robot and the human user. The sophistication of such a user interface is becoming very important as robots are applied to more and more demanding industrial applications.

In considering the programming of manipulators, it is important to remember that they are typically only a minor part of an automated process. The term workcell is used to describe a local collection of equipment, which may include one or more robots, conveyor systems, part feeders and fixtures. At the next higher level, workcells might be interconnected in factory-wide networks so that a central control computer can control the overall factory flow.

5.1. Levels of robot programming

Three levels of robot programming exist:
• Teach-in programming
• Explicit robot languages
• Task-level programming languages

Teach-in programming

The robot is moved by the human user through interaction with a teach pendant (sometimes called a teach box). Teach pendants are hand-held button boxes which allow control of each robot joint or of each Cartesian degree of freedom. Today's controllers allow alphanumeric input, testing and branching, so that simple programs involving logic can be entered.

Explicit robot languages

With the arrival of inexpensive and powerful computers, the trend has been increasingly toward programming robots via programs written in computer programming languages. Usually these languages have special features which apply to the problems of programming robots. An international standard has been established with the programming language IRL (Industrial Robot Language, DIN 66312).

Task-level programming

The third level of robot programming methodology is embodied in task-level programming languages. These are languages which allow the user to command desired subgoals of the task directly, rather than to specify the details of every action the robot is to take. In such a system, the user is able to include instructions in the application program at a significantly higher level than in an explicit programming language. A task-level programming system must have the ability to perform many planning tasks automatically. For example, if an instruction to grasp a bolt is issued, the system must plan a path of the manipulator which avoids collision with any surrounding obstacles, automatically choose a good grasp location on the bolt, and grasp it. In contrast, in an explicit robot language all these choices must be made by the programmer. The border between explicit robot languages and task-level languages is quite distinct: incremental advances are being made to explicit robot programming languages which help to ease programming, but these enhancements cannot be counted as components of a task-level programming system. True task-level programming of robots does not exist yet in industrial controllers, but it is an active topic of research.

5.2. Requirements of a robot programming language

Important requirements of a robot programming language are:
• World modeling
• Motion specification
• Flow of execution
• Sensor integration

World modeling:
• Existence of geometric types to represent
  • joint angle sets
  • Cartesian positions
  • orientations
  • frames
• Ability to do math on structured types like frames, vectors and rotation matrices
• Ability to describe geometric entities like frames in several different convenient representations, with the ability to convert between representations

Motion execution:
• Description of the desired motion (motion type, velocity, acceleration)
• Specification of via points, goal points and corner-smoothing parameters
• Ability to specify goals relative to various frames, including frames defined by the user and frames in motion (on a conveyor, for example)

Flow of execution:
• Support of concepts like testing and branching
• Looping, calls to subroutines
• Parallel processing, signal and wait primitives
• Interrupt handling

Sensor integration:
• Interaction with sensors
• Integration with vision systems
• Sensors to track the conveyor belt motion
• Force/torque sensors for force control strategies

