
UNIVERSIDAD AUTONOMA DE
SAN LUIS POTOSI

FACULTY OF ENGINEERING

MECHANICAL AND ELECTRICAL DEPARTMENT

FINAL PROJECT: 3rd DELIVERABLE

ROBOESPONJA MEMBERS
FAZ ORNELAS ALEJANDRO MAXIMILIANO
HERNANDEZ MARTINEZ PEDRO ALEJANDRO
JASSO PADRON DAVID GUADALUPE
PEREZ GONZALEZ JAIR EDGARDO

PROFESSOR
GONZALEZ DE ALBA ALEJANDRO

November 16, 2021


Index
Figure Index
Table Index
Introduction
Historical Context
Robot Manipulators
Theoretical Framework
Research Problem
Objective
Procedure
Mechanical Design
Parts
Assembly
Geometric Model
Configuration Test 1
Configuration Test 2
Configuration Test 3
Inverse Geometric Model
Jacobian Matrix
Singularities
Path Planning
Simulation / Graphical Interface
Conclusions
References
Appendix


Figure Index
Figure 1. Architectural design of a 6 DOF Robot.
Figure 2. Base of the Robot.
Figure 3. Revolute Joint no. 1.
Figure 4. Base of Prismatic Joint no. 1.
Figure 5. Prismatic Joint no. 1 and Base of Prismatic Joint no. 2.
Figure 6. Revolute Joint no. 2.
Figure 7. Revolute Joint no. 3.
Figure 8. End effector with revolute joint no. 4.
Figure 9. Assembly of all parts for the robot in default state.
Figure 10. Fully Extended Robotic Arm.
Figure 11. Assembly of all parts for the robot with measurements.
Figure 12. Servomotor MG90S.
Figure 13. Workspace Model Top View.
Figure 14. Workspace Model.
Figure 15. Value of Workspace.
Figure 16. 6-DOF robot with reference frames for each joint.
Figure 17. 3D model of Robot configuration no. 1.
Figure 18. 3D model of Robot configuration no. 2.
Figure 19. Configuration for end effector given q2.
Figure 20. 3D model of Robot configuration no. 3.
Figure 21. Configuration for end effector given q3.
Figure 22. 6 degree of freedom robot with reference frames for each joint.
Figure 23. All of the feasible solutions for the robot given a q of all ones, overlapping.
Figure 24. All of the feasible solutions for the robot given a q of all random numbers, overlapping.
Figure 25. All of the feasible solutions for the robot given a q that fully stretches out the robot, overlapping.
Figure 26. Singularity in d3 = 0.
Figure 27. Singularity in θ5 = 0, ±π, ±Nπ.
Figure 28. Result for "animation" of path between point A and point B.
Figure 29. Movement between two points with translation and orientation.
Figure 34. Design implemented for the robot base.
Figure 35. Design implemented for the first revolute joint of the robot.
Figure 36. Design implemented for the first prismatic joint of the robot.
Figure 37. Design implemented for the first prismatic joint of the robot.
Figure 38. Design implemented for the second prismatic joint of the robot.
Figure 39. Design implemented for the second prismatic joint of the robot.
Figure 40. Design implemented for the second prismatic joint of the robot.
Figure 41. Design implemented for the second revolute joint of the robot.
Figure 42. Design implemented for the second revolute joint of the robot.
Figure 43. Design implemented for the third revolute joint of the robot.
Figure 44. Design implemented for the third revolute joint of the robot.
Figure 45. Design implemented for the last revolute joint of the robot and the end effector.
Figure 46. Design implemented for the last revolute joint of the robot and the end effector.

Table Index
Table 1. DH Parameters for the 6 DOF R2P3R robot.
Table 2. DH Parameters for the 6 DOF RPP robot with spherical wrist.
Table 3. Types of equations encountered in the Paul method.


Introduction
In recent years, software and hardware for mechatronics have been developing rapidly. Many start-ups and open-source projects now bring a variety of new technologies and creative ideas for the innovation and marketing of new products. In the high-tech industry, robotics and the use of robotic arms appear promising in terms of increasing labor efficiency, and industrial robots are becoming increasingly sophisticated in their applications. Understanding the planning and construction of a sophisticated robot is in our best interest as students. Starting with the project's "skeleton," we will gradually add more and more complex components to the project. To get the end effector where it needs to go, a variety of motors, gears, and bearings can be used, and each combination produces a unique robotic arm with its own set of advantages and disadvantages. In our case, we worked with a set of mechanical joints given for the architecture, and it was our job to figure out what the final design was going to be. The goal of this report is to document the key steps in the development of a six-degree-of-freedom robot, including its geometric model, motion simulation, and mechanical design.

Historical Context
Records of mechanical systems date back to ancient times. To name a few: Al-Jazari developed early humanoid automata; the Rooster of Strasbourg, built in 1352, was a mechanical rooster that crowed and flapped its wings and is the oldest automaton preserved today; in 1495 Leonardo da Vinci designed his own humanoid robot, which was lost to history but, thanks to the detailed plans found in his notes in 1950, was recreated in 2007; and in 1738 Jacques de Vaucanson built a mechanical duck that could perform simple movements such as walking or swimming.
In contemporary times, the word robot was coined in 1920 by the Czech writer and playwright Karel Čapek and introduced in his drama R.U.R. (Rossum's Universal Robots). In 1938, the Westinghouse Electric Corporation appliance company built a robot called Elektro the Moto-Man that could walk in response to voice commands given through a telephone device and could perform other tasks. In 1953, ELSIE, the first mobile robot in history, was built in England with very limited capabilities: it could only follow a light source using a mechanical feedback system. In 1954, George Devol designed the Unimate, the first programmable robot; it was put into operation in 1961 at the General Motors company, and within a short time this company began to manufacture robots. In 1961, Victor Scheinman developed a 6-axis articulated robot at Stanford University, which is why it was called the Stanford Robot. In 1963, Fuji Yosuko Kogyo developed the robot called the Palletizer, the first robot for palletizing applications. In 1973, ABB introduced the IRB 6, the first fully electric industrial robot controlled by a microprocessor, to the market. In 1975, Victor Scheinman developed the PUMA robot for the Unimation company. In 1981, Haruko Asada designed and built the first direct-drive robot at Carnegie Mellon University. In 2000, the ASIMO humanoid robot, manufactured by the Honda Motor company, was introduced; it can walk and interact with people. Fourth-generation robots have been developed since 2010; they are intelligent, with sophisticated sensors and real-time control.

Robot Manipulators
The Robot Institute of America (RIA) provides an official definition of such a robot: A robot is a reprogrammable
multifunctional manipulator that can move material, parts, tools, or specialized devices using variable programmed motions to
perform a variety of tasks. [2].

Articulated arm robots are typically used to carry out dangerous, highly repetitive, and unpleasant tasks. This entire system is controlled by a trained operator who uses a portable device such as a teach pendant to instruct the robot to perform manual tasks. The main concern is not how the robot works, but how it will be protected in everyday use in the workplace [1].

Robot manipulators can be classified by several criteria, such as:


 Power source. Robots can be operated electrically, hydraulically, or pneumatically. Hydraulic robots are mainly used to lift heavy loads, but they have some drawbacks. On the other hand, robots driven by DC or AC servomotors are popular because they are cheaper, cleaner, and quieter. Pneumatic robots are inexpensive and simple, but they cannot be precisely controlled; as a result, they are very limited.
 Application Area. Robots are classified by application into assembly and non-assembly robots. Assembly robots tend
to be small, electrically driven and either revolute or SCARA in design. The main non-assembly application areas
have been in welding, spray painting, material handling, and machine loading and unloading.
 Control method. By control method, robots can be classified as servo or non-servo. Non-servo robots are open-loop devices whose movement is limited to predetermined mechanical stops. Servo robots use closed-loop computer control to determine their movement and can be reprogrammable devices. This control method is further classified according to the method the controller uses to guide the end effector: point-to-point or continuous path.
 Geometry. Industrial manipulators are usually classified kinematically based on the first three joints of the arm, so they have the following architectures: articulated (RRR), spherical (RRP), SCARA (RRP), cylindrical (RPP) or Cartesian (PPP). The wrist is described separately.

The above manipulator arms are serial-link robots. A sixth distinct class of manipulators consists of the so-called parallel robots; in these manipulators, the links are arranged in a closed kinematic chain instead of an open one.

Architecture of robot
Our robot is a cylindrical robot: of its first three joints, the first is revolute and produces a rotation around the base, while the second and third joints are prismatic. An example of a cylindrical robot is the Seiko RT3300.

Some applications of the cylindrical robots are:


 Spot welding
 Assembly operations
 Palletizing
 Loading and unloading of machines
 Applications in foundry and welding
 Assembly of containers and products in the manufacturing and packaging industry
 Grinding procedures
 Material transfer tasks

Advantages and disadvantages of cylindrical robots


Advantages
Cylindrical robots can travel between the required points faster than Cartesian robots, which is an advantage mainly when these two points lie at the same radius.

Disadvantages
The overall mechanical stiffness is lower because the axis of rotation of this robot must overcome the inertia of the object
during its rotation.
Accuracy and precision are also lower in the direction of rotation.
A more sophisticated control scheme is needed for cylindrical configurations than for Cartesian configurations.

Theoretical Framework
Kinematics is the study of motion geometry. It's limited to using position, orientation, and their time derivatives to describe
motion in a pure geometrical way. The essential equations for dynamics and control are drawn up in robotics using kinematic
descriptions of manipulators and their assigned jobs. The issue of direct kinematics is reduced to constructing a transformation matrix $^{A}T_B$ that connects the body's local coordinate frame B to the global reference coordinate frame A.
The basis we created in the first part of the report was the calculation of the direct geometric model of a 6 degree of freedom
robot. The forward kinematics problem and the inverse kinematics problem are two significant challenges in robot kinematic
analysis. Last time, we addressed the problem of knowing the values of joint variables and obtaining the position of the end-
effector. The challenge now with inverse kinematics is to take the location of the end-effector and ask for a set of joint variables
that will allow the robot to reach point P. Computer-controlled robots are often actuated in joint variable space, although
manipulable objects are typically described in global Cartesian coordinate frames. As a result, robotics requires the transfer of
kinematic data back and forth between joint space and Cartesian space. The inverse kinematics problem must be addressed in
order to regulate the configuration of the end-effector in order to reach an item. As a result, we need to know what joint
variable values are required to attain a given point in a desired orientation. This moves us closer to joint control of our robot
and the potential of developing a genuine prototype to test all of our theoretical studies.


The science of desired motion is known as control. It connects a robot's dynamics and kinematics to a predefined motion. It
covers optimization challenges for determining forces that will cause the system to perform ideally.
Path or trajectory planning is a control component in which we prepare a path for the manipulator to follow in a predetermined
time profile. Paths can be drawn in either joint or Cartesian space. The time evolution of the joint variables is explicitly
specified by joint path planning. The Cartesian path, on the other hand, provides the position and orientation of the end frame.
As a result, the path comprises achieving a desired aim from an initial setup. It could entail avoiding obstacles. Joint route
planning is straightforward because it does not include inverse kinematics, but the motion of the manipulator in Cartesian space
is difficult to understand. Cartesian coordinates, on the other hand, make sense but necessitate inverse kinematics calculations.

Research Problem
Manually picking and handling objects in an automated application is time consuming and dangerous. As a result, we want to avoid this in the first phase by employing an automated application, namely a pick-and-place robot; the automatic feeding mechanism begins with the creation of a cylindrical workspace for our operations.
Consider the robot's architectural design below.

Figure 1 Architectural design of a 6 DOF Robot.

We’ll consider three links for the mechanical design of the robotic arm whose base is fixed, as shown in the diagram above.
Joints connect these three links to one another.
The first link has rotational motion and linear motion upwards and downwards, while the second link has translational forward and backward motion. Link 3 has a rotary motion as well. At the end of link 3, there is an end effector.

Objective
The main objective of this deliverable is to learn how to perform the analysis of the singularities that we can find in the robot and to identify which sets of configurations help us to avoid them. It also seeks to implement the path-planning analysis for our robot and to observe graphically the movement between two positions of the robot, all of this based on previously performed analyses such as the IGM and the Jacobian.

Procedure
Mechanical Design
The "skeleton" of the entire system was designed on paper to give us a basic understanding of the robotic arm, which then
served as a springboard for our ideas about how the shapes should be. If we consider the possibility of creating a robotic arm
for the purpose of carrying objects with its end effector, the mechanical design of the robotic arm is the most important aspect
and backbone of this work. A robotic arm must meet certain design requirements and consider certain characteristics.
The arm should be able to lift, move, lower and release an object and closely mimic the movement of the human arm. Any
machinery capable of making the necessary motions to pick and place the required objects would have fulfilled those criteria.


Every robot has a payload weight limit. In our application, the robot must be able to carry a payload of 200 g and have a
working space of 80,000 cm3.

Parts
Base

Figure 2. Base of the Robot.

Revolute Joint 1

Figure 3. Revolute Joint no. 1.

Base of the Prismatic Joint 1


Figure 4. Base of Prismatic joint 1.

Prismatic Joint 1 and Base of the Prismatic Joint 2

Figure 5. Prismatic Joint no. 1 and Base of Prismatic Joint no. 2.

Prismatic Joint 2

Figure 6. Prismatic Joint no. 2.

Revolute Joint 2


Figure 6. Revolute Joint no. 2.

Revolute Joint 3

Figure 7. Revolute Joint no. 3.

Revolute Joint 4 and End Effector

Figure 8. End effector with revolute joint no. 4.


Assembly

Figure 9. Assembly of all parts for the robot in default state.

Figure 10. Fully Extended Robotic Arm.

For our mechanical design we decided to implement this model because of its versatility and a bit of originality, since it is not a very common design. We used prismatic joints similar to pistons, and we decided to give the revolute joints 360° of freedom.
For the development and calculation of the workspace we used the SolidWorks software; first we made a drawing of how our robot would look, and then we proposed dimensions that respect the requested workspace as closely as possible.


Figure 11. Assembly of all parts for the robot with measurements.

The previous image shows the dimensions established for our robot; as can be seen, it is not a very large robot.
The robot will be made of aluminum 6061. We chose this material because it is very common, found in applications such as aircraft structures (wings and fuselages of commercial and military aircraft), industrial parts, the construction of yachts and small boats, automotive parts, and the manufacture of aluminum cans for packaging food and beverages. It also has the advantages of being easy to machine and resistant to corrosion.

Figure 12. Servomotor MG90S.

For the motors that drive the joints we decided to use servomotors, specifically the MG90S. These servomotors are small (approximately 22.5 mm long, 12 mm wide, and 35.5 mm tall), which makes them easy to fit into any small robot. Another factor in their favor is their weight of only 13.4 g, and the most decisive factor was their rated torque of 1.8 kg·cm (4.8 V) or 2.2 kg·cm (6 V).


We will use a total of 8 servomotors, located as follows: 1 servomotor in the first joint (revolute), 2 servomotors in the second joint (prismatic), 2 servomotors in the third joint (prismatic), and 1 servomotor in each of the fourth, fifth, and sixth joints (revolute).

Our robot will be able to load the 200 g requested, this is due to the material chosen and the load capacity of our servomotors.

Figure 13. Workspace Model Top View.

Figure 14. Workspace Model.

We obtained a workspace very close to the requested one: 7840.24 cm³. This value results from operations that disregard some negligible parts of the design.


Figure 15. Value of Workspace.

Geometric Model
In this case, the hand calculations are only used to ensure that the simulation shown in the next section accurately represents our robot's geometric model. We begin the analysis of the geometric model by placing the reference axes at all the joints of the robot architecture; this is shown in the next picture.

Figure 16. 6-DOF with reference frames for each joint.

We develop the kinematics of our 6 DOF robot by using the Denavit-Hartenberg parameters, in an attempt to find the transformation matrix associated with the end effector of our robot. The DH parameters that we found for our robot are listed in the table below.
Table 1. DH Parameters for 6 DOF R2P3R robot.

i σ a α d θ
1 0 0 0 L1 θ1
2 1 0 -π/2 d2 0
3 1 0 0 d3 -π/2
4 0 0 π/2 0 θ4
5 0 0 -π/2 0 θ5
6 0 0 0 L6 θ6
Using these parameters, we can find the matrices and obtain the orientation and position matrices from frame 0 to frame 6. We
must first obtain the position and orientation matrix for each frame described in the geometric model in order to find the
equations needed to calculate the position and orientation of the robot's end effector with respect to the base, such that:


$$^{0}T_6 = \,^{0}T_1 \cdot \,^{1}T_2 \cdot \,^{2}T_3 \cdot \,^{3}T_4 \cdot \,^{4}T_5 \cdot \,^{5}T_6 \qquad (1)$$
All transformation matrices are decomposed into two rotation operations and two translation operations.
$$^{i-1}T_i = \mathrm{rot}(z,\theta_i)\,\mathrm{trans}(z,d_i)\,\mathrm{rot}(x,\alpha_i)\,\mathrm{trans}(x,a_i) \qquad (2)$$

$$^{i-1}T_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3)$$
Following the formula shown before, we can substitute all the DH parameters to find the transformation matrix for each
reference frame, up to reference frame 6. We’ll have:

$$^{0}T_1 = \begin{bmatrix} C\theta_1 & -S\theta_1 & 0 & 0 \\ S\theta_1 & C\theta_1 & 0 & 0 \\ 0 & 0 & 1 & L_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (4)$$

$$^{1}T_2 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & d_2 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (5)$$

$$^{2}T_3 = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & d_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (6)$$

$$^{3}T_4 = \begin{bmatrix} C\theta_4 & 0 & S\theta_4 & 0 \\ S\theta_4 & 0 & -C\theta_4 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (7)$$

$$^{4}T_5 = \begin{bmatrix} C\theta_5 & 0 & -S\theta_5 & 0 \\ S\theta_5 & 0 & C\theta_5 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (8)$$

$$^{5}T_6 = \begin{bmatrix} C\theta_6 & -S\theta_6 & 0 & 0 \\ S\theta_6 & C\theta_6 & 0 & 0 \\ 0 & 0 & 1 & L_6 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (9)$$

After finding all the transformation matrices from reference frame to reference frame, we substitute them in equation 1.

[ ][ ][ ][ ][ ][
C θ1 −S θ1 0 0 1 0 0 0 0 1 0 0 C θ4 0 S θ4 0 C θ 5 0 −S θ 5 0 Cθ
S θ 1 C θ1 0 0 0 0 1 0 −1 0 1 0 0 −C θ 4 0 × S θ5 0 C θ5 0 × S θ
T 06= × × × S θ4
0 0 1 L1 0 −1 0 d2 0 0 1 d3 0 1 0 0 0 −1 0 0
0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1
( 10 )
Using Symbolab, we compute the end transformation matrix from equation 10.

$$^{0}T_6 = \begin{bmatrix} C\theta_6(C\theta_1 S\theta_4 C\theta_5 - S\theta_1 S\theta_5) + C\theta_1 C\theta_4 S\theta_6 & C\theta_1 C\theta_4 C\theta_6 - S\theta_6(C\theta_1 S\theta_4 C\theta_5 - S\theta_1 S\theta_5) & -C\theta_1 S\theta_4 S\theta_5 - S\theta_1 C\theta_5 & -d_3 S\theta_1 - L_6(S\theta_1 C\theta_5 + C\theta_1 S\theta_4 S\theta_5) \\ C\theta_6(S\theta_1 S\theta_4 C\theta_5 + C\theta_1 S\theta_5) + S\theta_1 C\theta_4 S\theta_6 & S\theta_1 C\theta_4 C\theta_6 - S\theta_6(S\theta_1 S\theta_4 C\theta_5 + C\theta_1 S\theta_5) & C\theta_1 C\theta_5 - S\theta_1 S\theta_4 S\theta_5 & d_3 C\theta_1 + L_6(C\theta_1 C\theta_5 - S\theta_1 S\theta_4 S\theta_5) \\ C\theta_4 C\theta_5 C\theta_6 - S\theta_4 S\theta_6 & -C\theta_4 C\theta_5 S\theta_6 - S\theta_4 C\theta_6 & -C\theta_4 S\theta_5 & d_2 + L_1 - L_6 C\theta_4 S\theta_5 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (11)$$
Once we've determined the matrix that defines our robot's position and rotation, we run tests with various configurations to determine where the end effector will be placed. To do so, we must propose values for each of the joint variables.
All DH parameters are stored in 1x6 vectors before being sent to a MATLAB function that returns the end effector's position and rotation; some distances are exaggerated. We will use computer programs to plot the configurations in 3D, and the vector $q_j^T$ will be defined as the one that stores all of the configuration's joint variables.
$$a^T = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad (12)$$

$$\alpha^T = \begin{bmatrix} 0 & -\tfrac{\pi}{2} & 0 & \tfrac{\pi}{2} & -\tfrac{\pi}{2} & 0 \end{bmatrix} \qquad (13)$$

$$d^T = \begin{bmatrix} 1.8 & 6.5 & 6.5 & 1 & 0.5 & 0.4 \end{bmatrix} \qquad (14)$$

$$\theta^T = \begin{bmatrix} 0 & 0 & -\tfrac{\pi}{2} & 0 & 0 & 0 \end{bmatrix} \qquad (15)$$

$$q_j^T = \begin{bmatrix} \theta_1 & d_2 & d_3 & \theta_4 & \theta_5 & \theta_6 \end{bmatrix} \qquad (16)$$
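As a reference, equations (12)–(16) translate directly into the vectors that are passed to the GENDGM routine used in the tests below (a sketch; the variable names are only illustrative):

sigma = [0 1 1 0 0 0];            % joint types from Table 1 (1 = prismatic, 0 = revolute)
a     = [0 0 0 0 0 0];            % eq. (12)
alpha = (pi/2)*[0 -1 0 1 -1 0];   % eq. (13)
d     = [1.8 6.5 6.5 1 0.5 0.4];  % eq. (14)
theta = [0 0 -pi/2 0 0 0];        % eq. (15)
q     = zeros(6,1);               % eq. (16): [theta1 d2 d3 theta4 theta5 theta6], all zero for test 1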
Configuration Test 1
We begin by testing with the robot in its most basic configuration, i.e., with all joint variables set to 0. This is represented by the vector $q_1^T$.


$$q_1^T = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

The matrix that describes the end effector with respect to the base of the robot is:

$$^{0}T_6 = \begin{bmatrix} 0 & 1 & 0 & -0.5 \\ 0 & 0 & 1 & 7.9 \\ 1 & 0 & 0 & 8.3 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
T = GENDGM(sigma, a, alpha, d, theta, q);
End Effector Matrix =

    0.0000    1.0000         0   -0.5000
   -0.0000         0    1.0000    7.9000
    1.0000   -0.0000    0.0000    8.3000
         0         0         0    1.0000

Figure 17. 3D model of Robot configuration no. 1.

Configuration Test 2
We follow up by testing with the robot in an intermediate configuration. This is represented by the vector $q_2^T$.

$$q_2^T = \begin{bmatrix} -\tfrac{\pi}{4} & 4 & 3.5 & \tfrac{\pi}{3} & \tfrac{\pi}{6} & \tfrac{\pi}{2} \end{bmatrix}$$
The matrix that describes the end effector with respect to the base of the robot is:

$$^{0}T_6 = \begin{bmatrix} 0.3536 & -0.8839 & 0.3062 & 7.7239 \\ -0.3536 & 0.1768 & 0.9186 & 8.3224 \\ -0.866 & -0.433 & -0.25 & 12.633 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
End Effector Matrix =

    0.3536   -0.8839    0.3062    7.7239
   -0.3536    0.1768    0.9186    8.3224
   -0.8660   -0.4330   -0.2500   12.6330
         0         0         0    1.0000


Figure 18. 3D model of Robot configuration no. 2.

Figure 19. Configuration for end effector given q2.

Configuration Test 3
We'll finish by testing the robot in its fully extended configuration, with its prismatic joint variables set to 6.5. This is represented by the vector $q_3^T$.

$$q_3^T = \begin{bmatrix} -\tfrac{\pi}{3} & 6.5 & 6.5 & -\tfrac{\pi}{3} & \tfrac{2\pi}{3} & 0 \end{bmatrix}$$
The matrix that describes the end effector with respect to the base of the robot is:

$$^{0}T_6 = \begin{bmatrix} 0.9665 & 0.25 & -0.058 & 11.9762 \\ 0.058 & -0.433 & -0.8995 & 6.8567 \\ -0.25 & 0.866 & -0.433 & 14.1938 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
End Effector Matrix =

    0.9665    0.2500   -0.0580   11.9762
    0.0580   -0.4330   -0.8995    6.8567
   -0.2500    0.8660   -0.4330   14.1938
         0         0         0    1.0000

Figure 20. 3D model of Robot configuration no. 3.

Figure 21. Configuration for end effector given q3.

Inverse Geometric Model


We begin our analysis by recalling the robot's kinematic diagram from its architecture (see Figure 1), which will take us to the
direct geometric model of the robot. In order to identify the transformation matrix linked with our robot's end effector, we use
the Denavit-Hartenberg parameters to construct the kinematics of our 6 degree of freedom robot.


Figure 22. 6 degree of freedom robot with reference frames for each joint.

The table below lists the DH parameters that we discovered for our robot.

Table 2. DH Parameters for 6 degrees of freedom for RPP robot with spherical wrist.
i σ a α d θ
1 0 0 0 L1 θ1
2 1 0 -π/2 d2 0
3 1 0 0 d3 -π/2
4 0 0 π/2 0 θ4
5 0 0 -π/2 0 θ5
6 0 0 0 L6 θ6

Using these parameters, we can recall the homogeneous transformation matrix from reference frame 0 to reference frame 6.

$$^{0}T_6 = \begin{bmatrix} C\theta_6(C\theta_1 S\theta_4 C\theta_5 - S\theta_1 S\theta_5) + C\theta_1 C\theta_4 S\theta_6 & C\theta_1 C\theta_4 C\theta_6 - S\theta_6(C\theta_1 S\theta_4 C\theta_5 - S\theta_1 S\theta_5) & -C\theta_1 S\theta_4 S\theta_5 - S\theta_1 C\theta_5 & -d_3 S\theta_1 - L_6(S\theta_1 C\theta_5 + C\theta_1 S\theta_4 S\theta_5) \\ C\theta_6(S\theta_1 S\theta_4 C\theta_5 + C\theta_1 S\theta_5) + S\theta_1 C\theta_4 S\theta_6 & S\theta_1 C\theta_4 C\theta_6 - S\theta_6(S\theta_1 S\theta_4 C\theta_5 + C\theta_1 S\theta_5) & C\theta_1 C\theta_5 - S\theta_1 S\theta_4 S\theta_5 & d_3 C\theta_1 + L_6(C\theta_1 C\theta_5 - S\theta_1 S\theta_4 S\theta_5) \\ C\theta_4 C\theta_5 C\theta_6 - S\theta_4 S\theta_6 & -C\theta_4 C\theta_5 S\theta_6 - S\theta_4 C\theta_6 & -C\theta_4 S\theta_5 & d_2 + L_1 - L_6 C\theta_4 S\theta_5 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (1)$$
Multiple solutions are provided by trigonometric functions by their very nature. As a result, when the six equations for the
unknown joint variables are solved, a variety of robot configurations are expected.
Inverse kinematics can be broken down into two subproblems: inverse position and inverse orientation. Now that we have all of
the equations that will determine the end effector's location and orientation, we must decouple the first three joints using the
spherical wrist. As a result of this decoupling, it is possible to split the problem into two separate problems, each with only
three unknown parameters. Based on the table below, it will be verified that all matrices contain systems of equations of a
specific type.

Table 3. Types of equations encountered in the Paul method.

System type   Equations
Type 1        $X r_i = Y$
Type 2        $X s\theta_i + Y c\theta_i = Z$
Type 3        $X_1 s\theta_i + Y_1 c\theta_i = Z_1$;  $X_2 s\theta_i + Y_2 c\theta_i = Z_2$
Type 4        $X_1 r_j s\theta_i = Y_1$;  $X_2 r_j c\theta_i = Y_2$
Type 5        $X_1 s\theta_i = Y_1 + Z_1 r_j$;  $X_2 c\theta_i = Y_2 + Z_2 r_j$
Type 6        $W s\theta_j = X c\theta_i + Y s\theta_i + Z_1$;  $W c\theta_j = X s\theta_i - Y c\theta_i + Z_2$
Type 7        $W_1 c\theta_j + W_2 s\theta_j = X c\theta_i + Y s\theta_i + Z_1$;  $W_1 s\theta_j - W_2 c\theta_j = X s\theta_i - Y c\theta_i + Z_2$
Type 8        $X c\theta_i + Y c\theta_{1,2} = Z_1$;  $X s\theta_i + Y s\theta_{1,2} = Z_2$

$r_j$: prismatic joint variable
$s\theta_i, c\theta_i$: sine and cosine of a revolute joint variable $\theta_i$

Based on the kinematic diagram in Figure 22, we see that the spherical wrist is located at joint 5. Using this information, we may decouple the kinematics of the wrist and the manipulator by decomposing the global kinematic transformation matrix into its component $^{0}T_5$ for the wrist position.

$$^{0}T_6 = \,^{0}T_5\,^{5}T_6 = \begin{bmatrix} ^{0}R_5 & ^{0}d_5 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} ^{5}R_6 & ^{5}d_6 \\ 0 & 1 \end{bmatrix} \qquad (2)$$

$$^{0}T_5 = \,^{0}T_6 \times \,^{5}T_6^{-1} \qquad (3)$$

The wrist position is:

$$^{0}d_5 = \begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix} \qquad (4)$$
Because our previous programs computed the symbolic equations required to construct the $^{0}T_5$ transformation matrix, we simply need to input the DH parameters and output this matrix, which looks like this:

$$^{0}T_5 = \begin{bmatrix} C\theta_1 S\theta_4 C\theta_5 - S\theta_1 S\theta_5 & C\theta_1 C\theta_4 & -C\theta_1 S\theta_4 S\theta_5 - S\theta_1 C\theta_5 & -d_3 S\theta_1 \\ S\theta_1 S\theta_4 C\theta_5 + C\theta_1 S\theta_5 & C\theta_4 S\theta_1 & C\theta_1 C\theta_5 - S\theta_1 S\theta_4 S\theta_5 & d_3 C\theta_1 \\ C\theta_4 C\theta_5 & -S\theta_4 & -C\theta_4 S\theta_5 & d_2 + L_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (5)$$
The matrix of the spherical wrist position is now extracted,


$$^{0}d_5 = \begin{bmatrix} -d_3 S\theta_1 \\ d_3 C\theta_1 \\ d_2 + L_1 \end{bmatrix} \qquad (6)$$
Remembering that to represent position and orientation, the following matrix can be used

$$U = \begin{bmatrix} s_x & n_x & a_x & p_x \\ s_y & n_y & a_y & p_y \\ s_z & n_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (7)$$
We can utilize the values of this matrix to obtain the equations of the inverse geometric model under the premise that we will
send our robot the required location and orientation of our end effector. We must extract the location of the spherical wrist and
remove the distance at which the end effector is positioned in this example, as we began by determining the position of the
spherical wrist.
$$^{0}d_5 = U \times \,^{5}T_6^{-1} \qquad (8)$$
Equating equations 6 and 8 by substituting the appropriate values, we have,

$$\begin{bmatrix} -d_3\sin\theta_1 \\ d_3\cos\theta_1 \\ d_2 + L_1 \end{bmatrix} = \begin{bmatrix} -L_6 a_x + p_x \\ -L_6 a_y + p_y \\ -L_6 a_z + p_z \end{bmatrix} \qquad (9)$$
From this system of equations, we have a type 4 system (see Table 3) that can find the values of the joint variables $d_3$ and $\theta_1$:

$$-d_3\sin\theta_1 = -L_6 a_x + p_x, \qquad d_3\cos\theta_1 = -L_6 a_y + p_y \qquad (10)$$

The type 4 system has the solution

$$r_j = \pm\sqrt{\left(\frac{Y_1}{X_1}\right)^2 + \left(\frac{Y_2}{X_2}\right)^2}, \qquad \theta_i = \operatorname{atan2}\!\left(\frac{Y_1}{X_1 r_j},\, \frac{Y_2}{X_2 r_j}\right) \qquad (11)$$

with

$$X_1 = -1, \quad X_2 = 1, \quad Y_1 = -L_6 a_x + p_x, \quad Y_2 = -L_6 a_y + p_y$$

$$d_3 = \pm\sqrt{\left(\frac{-L_6 a_x + p_x}{-1}\right)^2 + \left(\frac{-L_6 a_y + p_y}{1}\right)^2}, \qquad \theta_1 = \operatorname{atan2}\!\left(\frac{-L_6 a_x + p_x}{-d_3},\, \frac{-L_6 a_y + p_y}{d_3}\right) \qquad (12)$$

Because of the square root, the solution for the joint variable $d_3$ can take two values. However, in terms of the mechanical design of the robot, a negative joint variable makes no sense because it would require the robot to flip itself with respect to its point of origin. As a result, we can dismiss one of these options.
We may get the next joint variable $d_2$ from the same system of equations shown in the vector of equation (9), identifying an independent linear system of type 1. The solution of this algebraic equation is

$$d_2 = p_z - L_6 a_z - L_1 \qquad (13)$$
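As a rough sketch of how equations (9), (12) and (13) would be evaluated in code, assuming U is the desired 4x4 end-effector pose laid out as in equation (7) and L1, L6 are the link lengths of the DH table:

ax = U(1,3);  ay = U(2,3);  az = U(3,3);   % a column of U
px = U(1,4);  py = U(2,4);  pz = U(3,4);   % position column of U

Y1 = -L6*ax + px;                          % right-hand side of eq. (9)
Y2 = -L6*ay + py;

d3 = sqrt(Y1^2 + Y2^2);                    % positive root of eq. (12)
t1 = atan2(Y1/(-d3), Y2/d3);               % theta_1 from the type 4 solution
d2 = pz - L6*az - L1;                      % eq. (13)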
All of the joint variables that determine the location of the robot's end effector are now known. The next stage in the decoupling procedure is to evaluate the rotation matrix from the spherical wrist to the end effector. For the orientation part of the robot, we can look at the matrix $^{3}R_6$, which only includes the spherical wrist rotation. Of course, there are several approaches, but since we already have the transformation matrices $U$ and $^{0}T_6$, we can decompose them into the equations below.

$$^{0}T_6 = \,^{0}T_3\,^{3}T_6 = \begin{bmatrix} ^{0}R_3 & ^{0}d_3 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} ^{3}R_6 & ^{3}d_6 \\ 0 & 1 \end{bmatrix} \qquad (14)$$
 
Where the wrist orientation matrix is:
 
$$^{3}R_6 = \,^{0}R_3^{-1} \times \,^{0}R_6 = \,^{0}R_3^{-1} \times U \qquad (15)$$
The orientation matrix turns out to be,

$$\begin{bmatrix} C\theta_4 C\theta_5 C\theta_6 - S\theta_4 S\theta_6 & -S\theta_4 C\theta_6 - C\theta_4 C\theta_5 S\theta_6 & -C\theta_4 S\theta_5 \\ C\theta_4 S\theta_6 + S\theta_4 C\theta_5 C\theta_6 & C\theta_4 C\theta_6 - S\theta_4 C\theta_5 S\theta_6 & -S\theta_4 S\theta_5 \\ S\theta_5 C\theta_6 & -S\theta_5 S\theta_6 & C\theta_5 \end{bmatrix} = \begin{bmatrix} s_z & n_z & a_z \\ s_x C\theta_1 + s_y S\theta_1 & n_x C\theta_1 + n_y S\theta_1 & a_x C\theta_1 + a_y S\theta_1 \\ -s_x S\theta_1 + s_y C\theta_1 & -n_x S\theta_1 + n_y C\theta_1 & -a_x S\theta_1 + a_y C\theta_1 \end{bmatrix} \qquad (16)$$
From here, all we have to do is find the equations that are independent, or those in which we already "know" the joint variables so that they become independent. The $a_z$ element of the rotation matrix gives the first equation we can extract:

$$\cos\theta_5 = -a_x\sin\theta_1 + a_y\cos\theta_1 \qquad (17)$$
Since we already “know” the values of θ1 , this system becomes a type 2. The formulas below must be used to solve it.
For X = 0 and Y ≠ 0,


$$\cos\theta_i = \frac{Z}{Y} \qquad (18)$$

$$\theta_i = \operatorname{atan2}\!\left(\pm\sqrt{1 - c\theta_i^2},\, c\theta_i\right) \qquad (19)$$
Substituting the values $X = 0$, $Y = 1$, $Z = -a_x\sin\theta_1 + a_y\cos\theta_1$:

$$\theta_5 = \operatorname{atan2}\!\left(\pm\sqrt{1 - (-a_x\sin\theta_1 + a_y\cos\theta_1)^2},\, -a_x\sin\theta_1 + a_y\cos\theta_1\right) \qquad (20)$$
Now we have another set of two solutions, since the square root can take two values. Now that we know another variable, we'll use the equation involving $a_z$:

$$-\cos\theta_4\sin\theta_5 = a_z \qquad (21)$$

Now $\theta_5$ can be considered a constant, which leaves us with a type 2 system. We'll use the exact same equations (18) and (19), with $X = 0$, $Y = -\sin\theta_5$, $Z = a_z$:

$$\cos\theta_4 = \frac{a_z}{-\sin\theta_5}, \qquad \theta_4 = \operatorname{atan2}\!\left(\pm\sqrt{1 - \left(\frac{a_z}{-\sin\theta_5}\right)^2},\, \frac{a_z}{-\sin\theta_5}\right) \qquad (22)$$

We see that $\theta_4$ can also take two values, since it has a square root in its operations. Since $\theta_4$ depends on $\theta_5$, and $\theta_5$ also has 2 solutions, that leaves us with a total of 4 different solutions for the robot so far. Lastly, we'll use the element equal to $s_z$ in the matrix of equation (16) to compute the last joint variable $\theta_6$. The equation is

$$\cos\theta_4\cos\theta_5\cos\theta_6 - \sin\theta_4\sin\theta_6 = s_z$$

This is a type 2 equation once more. After adjusting the variables, we find that

$$X = -\sin\theta_4, \quad Y = \cos\theta_4\cos\theta_5, \quad Z = s_z$$

We have to solve the equations using the next two quadratics:

$$\sin\theta_6 = \frac{XZ \pm Y\sqrt{X^2 + Y^2 - Z^2}}{X^2 + Y^2}, \qquad \cos\theta_6 = \frac{YZ \mp X\sqrt{X^2 + Y^2 - Z^2}}{X^2 + Y^2} \qquad (23)$$


The solution can only exist if $X^2 + Y^2 \geq Z^2$. In that case, the solution is

$$\theta_6 = \operatorname{atan2}(\sin\theta_6, \cos\theta_6) \qquad (24)$$
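The following sketch codes one branch of equations (17)–(24), choosing the positive square roots, which corresponds to feasible solution 1 listed below; sz, ax, ay and az are read from the pose matrix U of equation (7), and t1 is the value of θ1 already computed:

sz = U(3,1);  ax = U(1,3);  ay = U(2,3);  az = U(3,3);

c5 = -ax*sin(t1) + ay*cos(t1);                 % eq. (17)
t5 = atan2(sqrt(1 - c5^2), c5);                % eq. (20), '+' branch

c4 = az/(-sin(t5));                            % eq. (22); undefined when sin(t5) = 0
t4 = atan2(sqrt(1 - c4^2), c4);                % '+' branch

X = -sin(t4);  Y = cos(t4)*cos(t5);  Z = sz;   % coefficients of the type 2 equation for theta_6
root = sqrt(X^2 + Y^2 - Z^2);                  % requires X^2 + Y^2 >= Z^2
s6 = (X*Z + Y*root)/(X^2 + Y^2);               % eq. (23)
c6 = (Y*Z - X*root)/(X^2 + Y^2);
t6 = atan2(s6, c6);                            % eq. (24)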

Since $\theta_6$ depends on $\theta_4$, which has 4 solutions, the total number of possible solutions for our problem rises to 8. This constitutes all of the equations needed to solve the inverse kinematics problem of our robot.

All possible solutions were tested by changing the sign of some values of q, and only four combinations gave satisfactory results. Feasible solutions were obtained when the signs chosen in equations (20) and (22) coincided, for both alternations of the ± signs in equation (23). That is:


i. Feasible solution 1

$$\theta_5 = \operatorname{atan2}\!\left(+\sqrt{1 - (-a_x\sin\theta_1 + a_y\cos\theta_1)^2},\, -a_x\sin\theta_1 + a_y\cos\theta_1\right)$$
$$\theta_4 = \operatorname{atan2}\!\left(+\sqrt{1 - \left(\tfrac{a_z}{-\sin\theta_5}\right)^2},\, \tfrac{a_z}{-\sin\theta_5}\right)$$
$$\sin\theta_6 = \frac{XZ + Y\sqrt{X^2 + Y^2 - Z^2}}{X^2 + Y^2}, \qquad \cos\theta_6 = \frac{YZ - X\sqrt{X^2 + Y^2 - Z^2}}{X^2 + Y^2}$$

ii. Feasible solution 2

$$\theta_5 = \operatorname{atan2}\!\left(-\sqrt{1 - (-a_x\sin\theta_1 + a_y\cos\theta_1)^2},\, -a_x\sin\theta_1 + a_y\cos\theta_1\right)$$
$$\theta_4 = \operatorname{atan2}\!\left(-\sqrt{1 - \left(\tfrac{a_z}{-\sin\theta_5}\right)^2},\, \tfrac{a_z}{-\sin\theta_5}\right)$$
$$\sin\theta_6 = \frac{XZ + Y\sqrt{X^2 + Y^2 - Z^2}}{X^2 + Y^2}, \qquad \cos\theta_6 = \frac{YZ - X\sqrt{X^2 + Y^2 - Z^2}}{X^2 + Y^2}$$

iii. Feasible solution 3

$$\theta_5 = \operatorname{atan2}\!\left(+\sqrt{1 - (-a_x\sin\theta_1 + a_y\cos\theta_1)^2},\, -a_x\sin\theta_1 + a_y\cos\theta_1\right)$$
$$\theta_4 = \operatorname{atan2}\!\left(+\sqrt{1 - \left(\tfrac{a_z}{-\sin\theta_5}\right)^2},\, \tfrac{a_z}{-\sin\theta_5}\right)$$
$$\sin\theta_6 = \frac{XZ - Y\sqrt{X^2 + Y^2 - Z^2}}{X^2 + Y^2}, \qquad \cos\theta_6 = \frac{YZ + X\sqrt{X^2 + Y^2 - Z^2}}{X^2 + Y^2}$$

iv. Feasible solution 4

$$\theta_5 = \operatorname{atan2}\!\left(-\sqrt{1 - (-a_x\sin\theta_1 + a_y\cos\theta_1)^2},\, -a_x\sin\theta_1 + a_y\cos\theta_1\right)$$
$$\theta_4 = \operatorname{atan2}\!\left(-\sqrt{1 - \left(\tfrac{a_z}{-\sin\theta_5}\right)^2},\, \tfrac{a_z}{-\sin\theta_5}\right)$$
$$\sin\theta_6 = \frac{XZ - Y\sqrt{X^2 + Y^2 - Z^2}}{X^2 + Y^2}, \qquad \cos\theta_6 = \frac{YZ + X\sqrt{X^2 + Y^2 - Z^2}}{X^2 + Y^2}$$

We then proceed to put our IGM functions to the test. First, we compute the robot's DGM using predetermined joint variables, and we insert that homogeneous transformation matrix into an IGM function, expecting it to return a new set of joint variables. Then, we introduce those new joint variables into the DGM function in hopes of getting a homogeneous transformation matrix identical to the one generated using the values we input at the start. The following is the script that was used:

clear all
close all
clc

% q = input vector
t1 = 0;d2 = 0;d3 = 0;t4 =0;t5 =0;t6=0;
L1 = 1.8;L6 = 0.5;

DH.sigma = [0 1 1 0 0 0];
DH.a = [0 0 0 0 0 0];
DH.alpha = (pi/2)*[0 -1 0 1 -1 0];
DH.d = [L1 d2 d3 0 0 L6];
DH.theta = [t1 0 -pi/2 t4 t5 t6];

T0Tn = DGM(DH,q);
plot_Robot(T0Tn);
hold on;
q_sol = IGM(T0Tn(:,:,6));
T0Tn_sol = DGM(DH,q_sol);
plot_Robot(T0Tn_sol);
[q q_sol]
T0Tn(:,:,6)
T0Tn_sol(:,:,6)

The answers are then plotted to ensure that the robot arrives at the same location. To begin, we'll use an input vector with all
joint variables set to 1 to see which plausible solutions keep the robot in the same position and orientation and which merely
modify the robot's orientation.

q = ones(6,1)

Output:
   q         q1        q2        q3        q4
   1.00000   1.00000   1.00000   1.00000   1.00000
   1.00000   1.00000   1.00000   1.00000   1.00000
   1.00000   1.00000   1.00000   1.00000   1.00000
   1.00000   1.00000  -2.14159  -2.14159   1.00000
   1.00000   1.00000  -1.00000  -1.00000   1.00000
   1.00000   1.00000  -2.14159  -0.33214   2.80946

Figure 23. All of the feasible solutions for the robot given a q of all ones, overlapping.

For the following test, we take 6 random values for the configuration vector q.

q = [1.14775 1.66324 0.48010 1.02514 0.63335 0.23202]';

Output:
q q1 q2 q3 q4
1.14775 1.14775 1.14775 1.14775 1.14775
1.66324 1.66324 1.66324 1.66324 1.66324
0.48010 0.48010 0.48010 0.48010 0.48010
1.02514 1.02514 -2.11645 1.02514 -2.11645
0.63335 0.63335 -0.63335 0.63335 -0.63335
0.23202 0.23202 -2.90957 -2.46337 0.67822

Figure 24. All of the feasible solutions for the robot given a q of all random numbers, overlapping.


In the last test, we used more realistic values for our robot, which is in its fully stretched configuration, with all theta values
rotated by 90 degrees. This time, the homogeneous matrices for each new configuration discovered by the IGM will be shown.

Output:

q =

   1.5708
   6.5000
   6.5000
   1.5708
   1.5708
   1.5708

q =

  -0.00000   1.00000  -0.00000  -6.50000
  -0.00000  -0.00000  -1.00000  -0.50000
  -1.00000  -0.00000   0.00000   8.30000
   0.00000   0.00000   0.00000   1.00000

q1 =

   1.5708
   6.5000
   6.5000
   1.5708
   1.5708
   1.5708

q1 =

  -0.00000   1.00000  -0.00000  -6.50000
  -0.00000  -0.00000  -1.00000  -0.50000
  -1.00000  -0.00000   0.00000   8.30000
   0.00000   0.00000   0.00000   1.00000

q2 =

   1.5708
   6.5000
   6.5000
   1.5708
   1.5708
   1.5708

q2 =

  -0.00000   1.00000  -0.00000  -6.50000
  -0.00000  -0.00000  -1.00000  -0.50000
  -1.00000  -0.00000   0.00000   8.30000
   0.00000   0.00000   0.00000   1.00000

q3 =

   1.5708
   6.5000
   6.5000
   1.5708
   1.5708
   1.5708

q3 =

  -0.00000   1.00000  -0.00000  -6.50000
  -0.00000  -0.00000  -1.00000  -0.50000
  -1.00000  -0.00000   0.00000   8.30000
   0.00000   0.00000   0.00000   1.00000

q4 =

   1.5708
   6.5000
   6.5000
  -1.5708
  -1.5708
  -1.5708

q4 =

  -0.00000   1.00000  -0.00000  -6.50000
  -0.00000  -0.00000  -1.00000  -0.50000
  -1.00000  -0.00000   0.00000   8.30000
   0.00000   0.00000   0.00000   1.00000


Figure 25. All of the feasible solutions for the robot given a q that fully stretches out the robot, overlapping.

Something quite intriguing happens in this last example, in that the solutions wind up looking extremely similar to each other.
Aside from that, the transformation matrix for any q-vector remained unchanged. This concludes the IGM section, and we
move on to analyze the Jacobian of our robot.

Jacobian Matrix
The Jacobian matrix is defined by the following expression.

$$^{0}J_j(:,k) = \begin{bmatrix} \dfrac{\partial\,^{0}P_j}{\partial q_k} \\ \bar{\sigma}_k\,^{0}a_{k-1} \end{bmatrix} = \begin{bmatrix} ^{0}J_{j,v} \\ ^{0}J_{j,\omega} \end{bmatrix} \qquad (25)$$

In addition, the components $J_v$ and $J_\omega$ can be found with the following expressions, where $\bar{\sigma}_k = 1 - \sigma_k$ ($\sigma_k = 1$ for a prismatic joint and $0$ for a revolute joint):

$$^{i}J_{j,v}(:,k) = \sigma_k\,^{i}a_{k-1} + \bar{\sigma}_k\,^{i}\hat{a}_{k-1}\left(^{i}P_j - \,^{i}P_{k-1}\right) \qquad (26)$$

$$^{i}J_{j,\omega}(:,k) = \bar{\sigma}_k\,^{i}a_{k-1} \qquad (27)$$
where $\hat{a}$ is the skew-symmetric matrix of $a$, defined as:

$$S(a) = \hat{a} = \begin{bmatrix} 0 & -a_z & a_y \\ a_z & 0 & -a_x \\ -a_y & a_x & 0 \end{bmatrix} \qquad (28)$$
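As a sketch of how equations (25)–(28) can be assembled numerically, assuming (as in the test scripts of the previous section) that T0Tn(:,:,k) stores $^{0}T_k$ for k = 1..6 and that frame 0 is the identity:

skew  = @(a) [0, -a(3), a(2); a(3), 0, -a(1); -a(2), a(1), 0];   % eq. (28)
sigma = [0 1 1 0 0 0];              % 1 = prismatic, 0 = revolute (Table 1)

J  = zeros(6,6);
Pj = T0Tn(1:3,4,6);                 % position of the end-effector frame
for k = 1:6
  if k == 1
    A = eye(4);                     % frame 0
  else
    A = T0Tn(:,:,k-1);              % frame k-1
  end
  a_km1 = A(1:3,3);                 % z-axis of frame k-1
  P_km1 = A(1:3,4);
  sb = 1 - sigma(k);                % sigma-bar
  J(1:3,k) = sigma(k)*a_km1 + sb*skew(a_km1)*(Pj - P_km1);   % eq. (26)
  J(4:6,k) = sb*a_km1;                                       % eq. (27)
end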


The Jacobian matrix of our robot has 6 rows and 6 columns; to find each element, we use the above expressions. The calculations are shown below.

To find $J_v$ and $J_\omega$ we require the following transformation matrices, from which we obtain the vector $a$, the position vector, and the skew matrix:

$$^{0}T_0 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (29)$$

$$^{0}T_1 = \begin{bmatrix} C\theta_1 & -S\theta_1 & 0 & 0 \\ S\theta_1 & C\theta_1 & 0 & 0 \\ 0 & 0 & 1 & L_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (30)$$

$$^{0}T_2 = \,^{0}T_1\,^{1}T_2 = \begin{bmatrix} C\theta_1 & 0 & -S\theta_1 & 0 \\ S\theta_1 & 0 & C\theta_1 & 0 \\ 0 & -1 & 0 & d_2 + L_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (31)$$

$$^{0}T_3 = \,^{0}T_1\,^{1}T_2\,^{2}T_3 = \begin{bmatrix} 0 & C\theta_1 & -S\theta_1 & -d_3 S\theta_1 \\ 0 & S\theta_1 & C\theta_1 & d_3 C\theta_1 \\ 1 & 0 & 0 & d_2 + L_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (32)$$

$$^{0}T_4 = \,^{0}T_1\,^{1}T_2\,^{2}T_3\,^{3}T_4 = \begin{bmatrix} C\theta_1 S\theta_4 & -S\theta_1 & -C\theta_1 C\theta_4 & -d_3 S\theta_1 \\ S\theta_1 S\theta_4 & C\theta_1 & -C\theta_4 S\theta_1 & d_3 C\theta_1 \\ C\theta_4 & 0 & S\theta_4 & d_2 + L_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (33)$$

$$^{0}T_5 = \,^{0}T_1\,^{1}T_2\,^{2}T_3\,^{3}T_4\,^{4}T_5 = \begin{bmatrix} C\theta_1 S\theta_4 C\theta_5 - S\theta_1 S\theta_5 & C\theta_1 C\theta_4 & -C\theta_1 S\theta_4 S\theta_5 - S\theta_1 C\theta_5 & -d_3 S\theta_1 \\ S\theta_1 S\theta_4 C\theta_5 + C\theta_1 S\theta_5 & C\theta_4 S\theta_1 & C\theta_1 C\theta_5 - S\theta_1 S\theta_4 S\theta_5 & d_3 C\theta_1 \\ C\theta_4 C\theta_5 & -S\theta_4 & -C\theta_4 S\theta_5 & d_2 + L_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (34)$$
For k=1, the components Jv and Jw are:

$$^{i}J_{j,v}(:,1) = (1)\,\hat{a}_0\left(^{i}P_6 - \,^{i}P_0\right) = \begin{bmatrix} -L_6(C\theta_1 C\theta_5 - S\theta_1 S\theta_4 S\theta_5) - d_3 C\theta_1 \\ -L_6(C\theta_1 S\theta_4 S\theta_5 + S\theta_1 C\theta_5) - d_3 S\theta_1 \\ 0 \end{bmatrix} \qquad (35)$$

$$^{i}J_{j,\omega}(:,1) = (1)\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \qquad (36)$$

For k=2, the components Jv and Jw are:

$$^{i}J_{j,v}(:,2) = (1)\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \qquad (37)$$

$$^{i}J_{j,\omega}(:,2) = (0)\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad (38)$$

For k=3, the components Jv and Jw are:

$$^{i}J_{j,v}(:,3) = (1)\begin{bmatrix} -S\theta_1 \\ C\theta_1 \\ 0 \end{bmatrix} = \begin{bmatrix} -S\theta_1 \\ C\theta_1 \\ 0 \end{bmatrix} \qquad (39)$$

$$^{i}J_{j,\omega}(:,3) = (0)\begin{bmatrix} -S\theta_1 \\ C\theta_1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad (40)$$
For k=4, the components Jv and Jw are:


$$^{i}J_{j,v}(:,4) = (1)\,\hat{a}_3\left(^{i}P_6 - \,^{i}P_3\right) = \begin{bmatrix} -L_6 C\theta_1 C\theta_4 S\theta_5 \\ -L_6 S\theta_1 C\theta_4 S\theta_5 \\ L_6 S\theta_4 S\theta_5 \end{bmatrix} \qquad (41)$$

$$^{i}J_{j,\omega}(:,4) = (1)\begin{bmatrix} -S\theta_1 \\ C\theta_1 \\ 0 \end{bmatrix} = \begin{bmatrix} -S\theta_1 \\ C\theta_1 \\ 0 \end{bmatrix} \qquad (42)$$
For k=5, the components Jv and Jw are:

$$^{i}J_{j,v}(:,5) = (1)\,\hat{a}_4\left(^{i}P_6 - \,^{i}P_4\right) = \begin{bmatrix} L_6(S\theta_1 S\theta_5 - S\theta_4 C\theta_1 C\theta_5) \\ -L_6(C\theta_1 S\theta_5 + S\theta_1 S\theta_4 C\theta_5) \\ -L_6 C\theta_4 C\theta_5 \end{bmatrix} \qquad (43)$$

$$^{i}J_{j,\omega}(:,5) = (1)\begin{bmatrix} -C\theta_1 C\theta_4 \\ -C\theta_4 S\theta_1 \\ S\theta_4 \end{bmatrix} = \begin{bmatrix} -C\theta_1 C\theta_4 \\ -C\theta_4 S\theta_1 \\ S\theta_4 \end{bmatrix} \qquad (44)$$
For k=6, the components Jv and Jw are:

$$^{i}J_{j,v}(:,6) = (1)\,\hat{a}_5\left(^{i}P_6 - \,^{i}P_5\right) = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad (45)$$


$$^{i}J_{j,\omega}(:,6) = (1)\begin{bmatrix} -C\theta_1 S\theta_4 S\theta_5 - S\theta_1 C\theta_5 \\ C\theta_1 C\theta_5 - S\theta_1 S\theta_4 S\theta_5 \\ -C\theta_4 S\theta_5 \end{bmatrix} = \begin{bmatrix} -C\theta_1 S\theta_4 S\theta_5 - S\theta_1 C\theta_5 \\ C\theta_1 C\theta_5 - S\theta_1 S\theta_4 S\theta_5 \\ -C\theta_4 S\theta_5 \end{bmatrix} \qquad (46)$$

Therefore, the Jacobian matrix of our robot configuration is

$$J = \begin{bmatrix} -L_6(C\theta_1 C\theta_5 - S\theta_1 S\theta_4 S\theta_5) - d_3 C\theta_1 & 0 & -S\theta_1 & -L_6 C\theta_1 C\theta_4 S\theta_5 & L_6(S\theta_1 S\theta_5 - S\theta_4 C\theta_1 C\theta_5) & 0 \\ -L_6(C\theta_1 S\theta_4 S\theta_5 + S\theta_1 C\theta_5) - d_3 S\theta_1 & 0 & C\theta_1 & -L_6 S\theta_1 C\theta_4 S\theta_5 & -L_6(C\theta_1 S\theta_5 + S\theta_1 S\theta_4 C\theta_5) & 0 \\ 0 & 1 & 0 & L_6 S\theta_4 S\theta_5 & -L_6 C\theta_4 C\theta_5 & 0 \\ 0 & 0 & 0 & -S\theta_1 & -C\theta_1 C\theta_4 & -C\theta_1 S\theta_4 S\theta_5 - S\theta_1 C\theta_5 \\ 0 & 0 & 0 & C\theta_1 & -C\theta_4 S\theta_1 & C\theta_1 C\theta_5 - S\theta_1 S\theta_4 S\theta_5 \\ 1 & 0 & 0 & 0 & S\theta_4 & -C\theta_4 S\theta_5 \end{bmatrix} \qquad (47)$$
This Jacobian matrix provides us with the relationship between the joint velocities and the velocities of the end-effector of the
manipulator. For example, if the joints move with certain velocities, we need to know the velocity with which our end effector
will move. For this, the relationship between the joint velocities and the end-effector velocities is

$$\dot{X} = J(q)\,\dot{q} \qquad (48)$$
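For instance, a minimal numerical sketch of equation (48), where J stands for the matrix of equation (47) already evaluated at the configuration q (the values below are only illustrative):

q    = [0.3; 2.0; 3.0; 0.1; 0.8; 0.0];   % example configuration [t1 d2 d3 t4 t5 t6]
qdot = [0.1; 0.0; 0.05; 0.0; 0.2; 0.0];  % example joint velocities
Xdot = J * qdot;                         % 6x1 twist [vx vy vz wx wy wz] of the end effector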

Other uses for the Jacobian matrix are the study of singularities and the determination of the static forces in the robot
configuration.

Singularities
Singularities are configurations in which our robot loses degrees of freedom, so that some end-effector velocities cannot be generated at that point. We can say that a singularity occurs when the degrees of freedom available at the end effector are fewer than the dimension of the task space.
The determinant of the Jacobian matrix of equation (47) is:

$$\det(J) = -d_3\sin(\theta_5) \qquad (49)$$
Based on the expression obtained, the singularity occurs in the following cases:

$$\theta_5 = 0, \pm\pi, \pm N\pi \qquad (50)$$

$$d_3 = 0 \qquad (51)$$

with N an integer.
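A tiny helper based directly on equation (49) can flag these configurations (a sketch; the tolerance value is arbitrary):

tol = 1e-6;
is_singular = @(q) abs(q(3)) < tol || abs(sin(q(5))) < tol;   % q = [t1 d2 d3 t4 t5 t6]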

The singularities represented in our robot are shown below.


For $d_3 = 0$, the degree of freedom generated by joint 3 in the robot task space is lost, so there is no horizontal displacement.

Figure 26. Singularity in d3 = 0

For $\theta_5 = 0, \pm\pi, \pm N\pi$, the degree of freedom generated by joint 5 in the robot task space is lost, so no angular velocity is generated about that axis.

Figure 27. Singularity in θ5 = 0, ±π, ±Nπ.


Path Planning
Motion planning is the process of decomposing a desired movement job into discrete motions that satisfy movement limitations
and may optimize some component of the movement. For this task, we have to find the functions that describe the path our
robot has to take to get from one point to another.
Having the coordinates of the end effector's start and finish points as

$$P_0 = P_0(X_0, Y_0, Z_0), \qquad P_1 = P_1(X_1, Y_1, Z_1)$$

we can connect the points by a geometric space curve, where

$$X(t_0) = X_0, \qquad X(t_f) = X_f$$

Then, given one of the coordinates, we may establish a temporal path between P 0 and Pf to determine the kinematic behavior of
the other coordinates on the geometric route.

The position and orientation of the final frame are specified by a Cartesian path followed by the manipulator, as well as the
time profile along the path. Cartesian path planning issues include reaching a given objective from a starting point, avoiding
obstacles, and remaining within manipulator capabilities. A path is represented by n control points. Straight lines connect the
control locations.

The most natural application of path planning is Cartesian path planning. Given that pick and place motion is the primary
function of an industrial robot, we must establish a desirable geometric path for the end-effector in the base frame's 3-
dimensional Cartesian space. We can then establish a time path for one of the coordinates, say X, and use the geometric path to
deduce the time history of the other coordinates. To make this time route computation easier, we will assume that the robot
moves at a constant speed between the two points, and that this speed is the maximum speed at which each joint can move,
which is the maximum speed of the servomotors.

Because all servomotors have the same rated speed, we can consider it constant for each joint. For an MG90S servomotor, the operating speed is 0.1 s per 60°, so we convert this value to revolutions per second:

$$K_v = \left(\frac{0.1\ \mathrm{s}}{60^\circ}\right)^{-1} = \left(\frac{0.6\ \mathrm{s}}{360^\circ}\right)^{-1} = \left(\frac{0.6\ \mathrm{s}}{1\ \mathrm{rev}}\right)^{-1} = 1.66\ \mathrm{rps} \qquad (52)$$

This speed is $K_v = 1.66$ rps.

The easiest path to follow is a straight line. If we define a time function for the variation of a coordinate between two given values, we get

$$r(t) = \frac{1}{t_f}\,t \qquad (53)$$
To calculate the time it takes to reach the second coordinate, we have to consider the distance between both points and the velocity at which a joint can move:

$$t_f = \frac{|D_j|}{K_v} \qquad (54)$$
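In code, equations (53) and (54) reduce to a couple of lines (a sketch with an assumed example displacement):

Kv = 1.66;           % rev/s, from eq. (52)
Dj = 0.75;           % example joint displacement, in revolutions (illustrative value)
tf = abs(Dj)/Kv;     % eq. (54): total travel time
r  = @(t) t/tf;      % eq. (53): normalized interpolation variable, r(tf) = 1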


Now that we have found the rate of change for the position, we use that function to build an interpolation of the position and orientation in task space. The motion is decomposed into a translation and a rotation with these two equations:

$P(t) = P_i + r(t)\, D$    (55)

where

$D = P_f - P_i$    (56)
For the orientation, we construct an intermediate rotation axis between both orientations,

$\mathrm{rot}(u, \alpha) = A_i^{-1} A_f$    (57)

so that we can interpolate about this axis of rotation:

$A(t) = A_i\, \mathrm{rot}(u, r(t)\, \alpha)$    (58)

Note that at $t = t_f$ we have $r = 1$, so $A(t_f) = A_i\, \mathrm{rot}(u, \alpha) = A_i A_i^{-1} A_f = A_f$, as required.
Now that we have the position and orientation for any time t, we can apply the inverse geometric model to find the configuration vector that takes the robot to that pose. The following code performs this operation:

for t = 0:dt:tf

  % Position shift for the end effector
  Pdt = Pi + t*D/tf;

  % Small rotation of angle alpha_dt about axis u
  alpha_dt = t*alpha/tf;

  R_aux = [ux^2*(1-cos(alpha_dt))+cos(alpha_dt),     ux*uy*(1-cos(alpha_dt))-uz*sin(alpha_dt), ux*uz*(1-cos(alpha_dt))+uy*sin(alpha_dt);
           ux*uy*(1-cos(alpha_dt))+uz*sin(alpha_dt), uy^2*(1-cos(alpha_dt))+cos(alpha_dt),     uy*uz*(1-cos(alpha_dt))-ux*sin(alpha_dt);
           ux*uz*(1-cos(alpha_dt))-uy*sin(alpha_dt), uy*uz*(1-cos(alpha_dt))+ux*sin(alpha_dt), uz^2*(1-cos(alpha_dt))+cos(alpha_dt)];

  % Rotation shift for the end effector
  Rdt = Ri*R_aux;

  % Homogeneous transform of the desired pose at time t
  Udt = [Rdt, Pdt; 0, 0, 0, 1];

  qdt  = IGM(Udt);        % inverse geometric model: pose -> joint vector
  T0Tn = DGM(DH, qdt);    % direct geometric model, used for plotting
  q    = [q qdt];         % store the configuration
  plot_Robot(T0Tn);
endfor


A test is run to make sure the program works properly. In this scenario, we want to move from point A,

A = [10, 7.9, 8.3, 0, 0, 0]

to point B,

B = [5.5, 8.5, 12.6, 0, 0, 0]

However, because both points have the same orientation, the relative rotation about the u-axis is the identity matrix; the axis extraction then involves a division by zero and breaks the algorithm. As a result, we must specify a very small change in orientation, so that the computation works with a valid, if tiny, rotation.

Ui=[10,7.9,8.3,0,0,0];
Uf=[5.5,8.5,12.6,pi/400,0,pi/100];

drawPath(Ui,Uf);
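
For reference, a more defensive alternative to this small-angle workaround (not used in our implementation; the fragment below is only a sketch that reuses the variable names of the drawPath listing in the appendix) is to detect the near-zero rotation angle and pick an arbitrary unit axis, since rot(u, 0) = I for any u:

% Sketch of a guard for the pure-translation case (alpha close to 0).
% Variable names follow the drawPath listing in the appendix.
Ri = eye(3);  Rf = eye(3);     % example: identical initial and final orientations
rotU  = inv(Ri)*Rf;
cAl   = (1/2)*(rotU(1,1) + rotU(2,2) + rotU(3,3) - 1);
sAl   = (1/2)*sqrt((rotU(3,2)-rotU(2,3))^2 + (rotU(1,3)-rotU(3,1))^2 + (rotU(2,1)-rotU(1,2))^2);
alpha = atan2(sAl, cAl);

if abs(alpha) < 1e-9
  ux = 0; uy = 0; uz = 1;      % no net rotation: any unit axis works, rot(u,0) = I
else
  ux = sign(rotU(3,2)-rotU(2,3))*sqrt((rotU(1,1)-cAl)/(1-cAl));
  uy = sign(rotU(1,3)-rotU(3,1))*sqrt((rotU(2,2)-cAl)/(1-cAl));
  uz = sign(rotU(2,1)-rotU(1,2))*sqrt((rotU(3,3)-cAl)/(1-cAl));
end

With such a guard, identical initial and final orientations would no longer require an artificial orientation offset.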

The function drawPath will show every step of the algorithm, simulating the movement in real time. The code for the function is attached in the appendix.
The result is:

Figure 28. Result for "animation" of path between point A and point B.

We see a small glitch in the orientation of the first configuration vector. After that, however, it is easy to see how a straight line is traced from one point to the other. Next, we try two more points, this time with a noticeable change in orientation.

Ui=[15,7.9,10.9,0,0,0];
Uf=[10.5,8.5,17.6,pi/3,pi/6,0];


Figure 29. Movement between two points with translation and orientation.

Now, the orientation change is a bit clearer.

Simulation / Graphical Interface


For this interface we added another button group for the configurations. Within this group, 5 different configurations were defined in XYZ-RPY; each of them can be computed so that it is plotted and its q vector obtained.
In addition, we added two pop-up menus, one to choose an initial configuration and another for a final configuration of the robot, so that the path taken by the robot between them is displayed graphically.
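
Since the GUI code itself is not reproduced in this report, the fragment below is only a minimal Octave sketch of the idea: two pop-up menus select among stored XYZ-RPY configurations and a button launches drawPath between them. The handle names (configs, popI, popF) and the listed configurations (taken from the test points used earlier) are illustrative assumptions, not the actual interface code.

% Minimal sketch of the pop-up-menu-driven path animation (hypothetical names).
configs = {[10 7.9 8.3 0 0 0], [5.5 8.5 12.6 pi/400 0 pi/100], ...
           [15 7.9 10.9 0 0 0], [10.5 8.5 17.6 pi/3 pi/6 0]};

fig  = figure();
popI = uicontrol(fig, 'style', 'popupmenu', 'position', [20 60 120 25], ...
                 'string', {'Config 1', 'Config 2', 'Config 3', 'Config 4'});
popF = uicontrol(fig, 'style', 'popupmenu', 'position', [20 20 120 25], ...
                 'string', {'Config 1', 'Config 2', 'Config 3', 'Config 4'});
uicontrol(fig, 'style', 'pushbutton', 'string', 'Path', 'position', [160 20 60 25], ...
          'callback', @(h, ev) drawPath(configs{get(popI, 'value')}, ...
                                        configs{get(popF, 'value')}));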


Figure 30: Updated robot interface.

When a different configuration is selected, the position and orientation values are updated; when it is computed, the corresponding pose is solved and plotted.

Figure 31: Computing configuration 2.


Figure 32: Computing configuration 5.

For the Path Planning section, a start configuration and a final configuration can be set for the robot to perform the path.

Figure 33: Path from configuration 1 to configuration 2.


Figure 34: Path from configuration 3 to configuration 5.

Figure 35: Path from Configuration 4 to Configuration 1.


Conclusions
Path planning gives us a set of configurations for our robot that can be used in applications such as autonomy and automation, and it also informs a proper design of the robot; the same ideas apply in fields ranging from video games and character animation to architectural analysis and surgical robots.
As for the singularities, they tell us which configurations are not valid for optimal operation. If we implemented a robot without knowing its singularities, it would lose degrees of freedom at certain points without our understanding why.

Based on the results obtained, we can say that the analyses of the singularities and of the path planning were implemented adequately. In general, the most complicated part of this deliverable was implementing the path planning inside our GUI, because of problems when plotting the points the robot moves through. Due to the DH parameters used in the simulation, the robot had to adopt some very strange configurations in order to reach a desired point, which did not match our proposed mechanical design and its limitations at all. This inaccuracy could not be corrected, however, because doing so would have required re-solving the equations of the inverse geometric model for the new DH parameters, which would have taken a long time; nearly the entire project would have had to be rebuilt from scratch.
The error is particularly obvious when the end effector is rotated so that its z-axis points up or down, because combining a translational and a rotational movement produces small deviations in the target position. A movement was tested in which only the robot's orientation was modified, yet the robot still showed a tiny translation between orientations. Because it is not known with certainty what causes this movement in the simulation, we suggest testing the code thoroughly to look for flaws. In the variety of cases tested in the GUI, the robot succeeds in making the linear motion between the two desired points, indicating that the algorithm used has only minor flaws.

Finally, we can now add to the graphical interface of our robot the plots of the configurations in which singularities occur, as well as a visualization of the planned path.


Appendix
Program used to compute IGM:

function q = IGM(U)
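% IGM  Inverse geometric model of the robot.
%      Input  U: 4x4 homogeneous transform of the desired end-effector pose.
%      Output q: 6x1 joint configuration vector [t1 d2 d3 t4 t5 t6].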

L6 = 0.5;
L1 = 1.8;

SX = U(1,1);
SY = U(2,1);
SZ = U(3,1);

NX = U(1,2);


NY = U(2,2);
NZ = U(3,2);

AX = U(1,3);
AY = U(2,3);
AZ = U(3,3);

PX = U(1,4);
PY = U(2,4);
PZ = U(3,4);

q = zeros(6,1);

% solve d3
Y1 = PX-AX*L6;
Y2 = PY-AY*L6;
q(3) = sqrt((Y1)^2+(Y2)^2);

%solve t1
q(1) = atan2((Y1/-q(3)),(Y2/q(3)));

%solve t2
q(2) = PZ - L6*AZ - L1;

%solve t5
C5 = -AX*sin(q(1))+AY*cos(q(1));
q(5) = atan2(sqrt(1-C5^2),C5);

%solve t4
C4 = -(AZ/sin(q(5)));

q(4) = atan2(sqrt(1-C4^2),C4);

%solve t6
X = -sin(q(4));
Y = cos(q(4))*cos(q(5));
Z = SZ;
S6 = (X*Z+Y*sqrt(X^2+Y^2-Z^2))/(X^2+Y^2);
C6 = (Y*Z-X*sqrt(X^2+Y^2-Z^2))/(X^2+Y^2);
q(6) = atan2(S6,C6);

return
end

Program used to compute the path planning:

function drawPath(Ui,Uf)
%{
Uj = [X Y Z phi theta psi]
RPY angles are in radians
Ui = configuration vector for the first task
Uf = configuration vector for the second task
%}


% DH Parameters of the Robot
t1 = 0; d2 = 0; d3 = 0; t4 = 0; t5 = 0; t6 = 0;
L1 = 5.5; L6 = 5.0;

DH.sigma = [0 1 1 0 0 0];
DH.a     = [0 0 0 0 0 0];
DH.alpha = (pi/2)*[0 -1 0 1 -1 0];
DH.d     = [L1 d2 d3 0 0 L6];
DH.theta = [t1 0 -pi/2 t4 t5 t6];

Pi = Ui(1:3)';    % Get the position of the end effector at task 1
Pf = Uf(1:3)';    % Get the position of the end effector at task 2

RPYi = Ui(4:6);   % Get orientation of the end effector at task 1
RPYf = Uf(4:6);   % Get orientation of the end effector at task 2

% For Octave
Ri = rpy2r(RPYi);
Rf = rpy2r(RPYf);

% Difference between two points
D = Pf - Pi;

% Rotation on axis u
rotU = inv(Ri)*Rf;

cAl = (1/2)*(rotU(1,1) + rotU(2,2) + rotU(3,3) - 1);
sAl = (1/2)*sqrt((rotU(3,2)-rotU(2,3))^2 + (rotU(1,3)-rotU(3,1))^2 + (rotU(2,1)-rotU(1,2))^2);

alpha = atan2(sAl,cAl);

% Components for rotation in axis u
ux = sign(rotU(3,2)-rotU(2,3))*sqrt((rotU(1,1)-cAl)/(1-cAl));
uy = sign(rotU(1,3)-rotU(3,1))*sqrt((rotU(2,2)-cAl)/(1-cAl));
uz = sign(rotU(2,1)-rotU(1,2))*sqrt((rotU(3,3)-cAl)/(1-cAl));

% Minimum time required
Kv = 1.66;            % Speed for all servomotors
tf = max(abs(D)/Kv);
dt = tf/10;           % 10 intervals for the loop

% Stored configuration vectors
q = [];

for t = 0:dt:tf

  % Position shift for end effector
  Pdt = Pi + t*D/tf;

  % Small rotation on axis u
  alpha_dt = t*alpha/tf;

  R_aux = [ux^2*(1-cos(alpha_dt))+cos(alpha_dt),     ux*uy*(1-cos(alpha_dt))-uz*sin(alpha_dt), ux*uz*(1-cos(alpha_dt))+uy*sin(alpha_dt);
           ux*uy*(1-cos(alpha_dt))+uz*sin(alpha_dt), uy^2*(1-cos(alpha_dt))+cos(alpha_dt),     uy*uz*(1-cos(alpha_dt))-ux*sin(alpha_dt);
           ux*uz*(1-cos(alpha_dt))-uy*sin(alpha_dt), uy*uz*(1-cos(alpha_dt))+ux*sin(alpha_dt), uz^2*(1-cos(alpha_dt))+cos(alpha_dt)];

  % Rotation shift for end effector
  Rdt = Ri*R_aux;

  Udt = [Rdt, Pdt; 0, 0, 0, 1];

  qdt  = IGM(Udt);
  T0Tn = DGM(DH, qdt);
  q    = [q qdt];
  plot_Robot(T0Tn);
endfor
endfunction


Robot Design

Figure 36. Design implemented for the robot base.

Figure 37. Design implemented for the first revolute joint of the robot.


Figure 38. Design implemented for the first prismatic joint of the robot.

Figure 39. Design implemented for the first prismatic joint of the robot.


Figure 40. Design implemented for the second prismatic joint of the robot.

Figure 41. Design implemented for the second prismatic joint of the robot.


Figure 42. Design implemented for the second prismatic joint of the robot.

Figure 43. Design implemented for the second revolute joint of the robot.


Figure 44. Design implemented for the second revolute joint of the robot.

Figure 45. Design implemented for the third revolute joint of the robot.


Figure 46. Design implemented for the third revolute joint of the robot.

Figure 47. Design implemented for the last revolute joint of the robot and the end effector.


Figure 48. Design implemented for the last revolute joint of the robot and the end effector.
