Robotics Laboratory File

Delhi Technological University
Dept. of Electronics & Communication Engg.
VIII Semester

Under the Guidance of: Kumar Mayank

Name: SAURABH GUPTA
Roll No.: 2K11/EC/140
Batch: ECE D2
INDEX

Name of Experiment
1. To study various spatial descriptions and transformations using Peter Corke toolbox in Matlab.
2. To study concepts of forward kinematics using Peter Corke toolbox in Matlab.
3. To study concepts of inverse kinematics using Peter Corke toolbox in Matlab.
4. To perform forward kinematics of the 3 DOF planar robot, Stanford manipulator and PUMA robot using RoboAnalyzer and Matlab.
5. To perform inverse kinematics of the 3 DOF planar and articulated robots using RoboAnalyzer and Matlab.
6. To study velocity kinematics using Peter Corke toolbox.
7. To study forward dynamics of a 3 DOF 3R robot.
8. To study Peter Corke Computer Vision toolbox.
9. Object tracking using Kalman filter in Matlab.
10. To perform object tracking using optical flow.
February 6, 2015
EXPERIMENT 1

Aim (a)(i): To study various spatial descriptions and transformations using Peter Corke toolbox in Matlab.

>> T1 = se2(1, 2, 30*pi/180)   % se2: create a planar SE(2) homogeneous transformation
T1 =
    0.8660   -0.5000    1.0000
    0.5000    0.8660    2.0000
         0         0    1.0000
>> T2 = se2(2, 1, 0)
T2 =
     1     0     2
     0     1     1
     0     0     1
>> T4 = T1*T2
T4 =
    0.8660   -0.5000    2.2321
    0.5000    0.8660    3.8660
         0         0    1.0000
Aim (a)(ii): Using the ZYZ Euler angle convention, write a Matlab program that computes the rotation matrix when the user enters the angles, e.g. alpha = 10, beta = 20, gamma = 30 (degrees), and verify the orthonormality property of the rotation matrix.

dr = pi/180;
ang = input('enter alpha, beta and gamma in degrees');
alpha = ang(1)*dr;   % get alpha, beta and gamma in separate variables
beta = ang(2)*dr;    % degree to radian conversion
gamma = ang(3)*dr;
Tz = rotz(alpha);    % rotation by alpha about the Z axis
Ty = roty(beta);     % rotation by beta about the Y axis
Tz2 = rotz(gamma);   % rotation by gamma about the (new) Z axis
Rmult = Tz*Ty*Tz2;   % final ZYZ rotation matrix
Trpy = eul2tr(alpha, beta, gamma);   % does the same using the toolbox function
enter alpha, beta and gamma in degrees [10 20 30]
Rmult =
    0.7146   -0.6131    0.3368
    0.6337    0.7713    0.0594
   -0.2962    0.1710    0.9397

Trpy =
    0.7146   -0.6131    0.3368         0
    0.6337    0.7713    0.0594         0
   -0.2962    0.1710    0.9397         0
         0         0         0    1.0000
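The same ZYZ composition can be cross-checked outside the toolbox. A short Python/NumPy sketch (illustrative only; the lab session itself is in Matlab):

```python
import numpy as np

def rotz(t):
    """Rotation matrix for an angle t (rad) about the Z axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def roty(t):
    """Rotation matrix for an angle t (rad) about the Y axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

dr = np.pi/180
alpha, beta, gamma = 10*dr, 20*dr, 30*dr
R = rotz(alpha) @ roty(beta) @ rotz(gamma)   # ZYZ composition

# orthonormality check: R * R' must be the identity matrix
assert np.allclose(R @ R.T, np.eye(3))
print(np.round(R, 4))
```

The printed matrix matches Rmult above, and the assertion is the orthonormality property the aim asks to verify.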
Aim (a)(iii): Write a Matlab program to calculate the Euler angles, i.e. the rotation angles, from the transformation matrix ARB.

>> ang = tr2eul(Trpy)   % calculates the rotation angles from the transformation matrix
>> ang/dr               % convert the angles back to degrees
ans =
   10.0000   20.0000   30.0000

Aim (a)(iv): For a simple rotation of angle 20 degrees and a translation, find the final transformation matrix combining the rotational and translational components.
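The inverse mapping that tr2eul performs can be sketched explicitly; one standard way to recover ZYZ angles from a rotation matrix (assuming beta in (0, pi)) is:

```python
import numpy as np

def rotz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def roty(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def tr2eul(R):
    """Recover ZYZ Euler angles (alpha, beta, gamma) from R, beta in (0, pi)."""
    beta = np.arctan2(np.hypot(R[0, 2], R[1, 2]), R[2, 2])
    alpha = np.arctan2(R[1, 2], R[0, 2])
    gamma = np.arctan2(R[2, 1], -R[2, 0])
    return alpha, beta, gamma

dr = np.pi/180
R = rotz(10*dr) @ roty(20*dr) @ rotz(30*dr)
ang = np.array(tr2eul(R)) / dr   # back to degrees
print(np.round(ang, 4))          # [10. 20. 30.]
```

The atan2 form avoids the quadrant ambiguity of acos-based recovery.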
Trpy1 = rpy2tr(gamma, beta, alpha);   % get transformation matrix for the XYZ (roll-pitch-yaw) rotation
Ttrn = transl(1, 2, 3);               % transl creates a homogeneous transform (4x4) representing a pure translation
Tba = Ttrn*Trpy1;                     % final transformation matrix combining the rotational and translational components
Ttrn =
     1     0     0     1
     0     1     0     2
     0     0     1     3
     0     0     0     1

Trpy1 =
    0.9254   -0.1632    0.3420         0
    0.3188    0.8232   -0.4698         0
   -0.2049    0.5438    0.8138         0
         0         0         0    1.0000

Tba =
    0.9254   -0.1632    0.3420    1.0000
    0.3188    0.8232   -0.4698    2.0000
   -0.2049    0.5438    0.8138    3.0000
         0         0         0    1.0000
Aim (b)(ii): For ATB with translation [3 0 1]T and a rotation of 20 degrees about Y, find Ap for a given Bp.

>> Pb = input('enter Pb column wise')
>> Ttrn1 = transl(3, 0, 1)
Ttrn1 =
     1     0     0     3
     0     1     0     0
     0     0     1     1
     0     0     0     1
>> Tba1 = rpy2tr(0, 20*dr, 0);
>> Tba2 = Ttrn1*Tba1
Tba2 =
    0.9397         0    0.3420    3.0000
         0    1.0000         0         0
   -0.3420         0    0.9397    1.0000
         0         0         0    1.0000
>> Pa = Tba2*[Pb; 1]
Pa =
    4.2817
         0
    1.5977
    1.0000
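The mapping of a point between frames is a single matrix-vector product on homogeneous coordinates. A Python/NumPy sketch of the computation above (the sample point Bp = [1, 0, 1]T is an assumption; it is the value that reproduces the printed Pa):

```python
import numpy as np

dr = np.pi/180
th = 20*dr
# rotation about Y by 20 degrees plus translation (3, 0, 1), as in Tba2 above
T = np.array([
    [np.cos(th),  0.0, np.sin(th), 3.0],
    [0.0,         1.0, 0.0,        0.0],
    [-np.sin(th), 0.0, np.cos(th), 1.0],
    [0.0,         0.0, 0.0,        1.0],
])
pb = np.array([1.0, 0.0, 1.0, 1.0])   # homogeneous point in frame {B} (sample value)
pa = T @ pb
print(np.round(pa, 4))                # [4.2817 0. 1.5977 1.]
```

The trailing 1 in the homogeneous vector is what lets a single multiply apply both the rotation and the translation.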
Aim (b)(iii): Write a Matlab program to calculate the inverse homogeneous transformation matrix, i.e. (ATB)-1 = BTA. Also show that ATB . (ATB)-1 = I4.

>> Tba_inv = inv(Tba2);
>> eye(4) - Tba_inv*Tba2
ans =
   1.0e-15 *
    0.1110         0         0         0
         0         0         0         0
    0.0555         0         0         0
         0         0         0         0
>> eye(4) - Tba2*Tba_inv
ans =
   1.0e-15 *
    0.1110         0   -0.0555         0
         0         0         0         0
         0         0    0.1110         0
         0         0         0         0
>> aTc_inv = inv(aTc)
aTc_inv =
    0.7526    0.4603    0.4709   -2.7377
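For a rigid-body transform the inverse need not be computed numerically: with T = [R t; 0 1], the closed form is T^-1 = [R' -R't; 0 1]. A sketch verifying both the closed form and the identity ATB . (ATB)-1 = I4:

```python
import numpy as np

def inv_tform(T):
    """Closed-form inverse of a homogeneous transform: [R' -R'*t; 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

dr = np.pi/180
th = 20*dr
T = np.array([                 # the Tba2-style transform from above
    [np.cos(th),  0.0, np.sin(th), 3.0],
    [0.0,         1.0, 0.0,        0.0],
    [-np.sin(th), 0.0, np.cos(th), 1.0],
    [0.0,         0.0, 0.0,        1.0],
])
Ti = inv_tform(T)
assert np.allclose(T @ Ti, np.eye(4))   # T * T^-1 = I4
assert np.allclose(Ti @ T, np.eye(4))
```

This is why the Matlab residuals above are only floating-point noise on the order of 1e-15.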
Experiment 2
Aim (i): To study concepts of forward kinematics using Peter Corke toolbox in
Matlab.
Theory:
Forward Kinematics: To determine the position and orientation of the end-effector
given the values for the joint variables of the robot.
>> L = Link([0, 0.1, 0.2, pi/2, 0])   % a Link object holds all information related to a robot link
L =
 theta=q, d=0.1, a=0.2, alpha=1.571 (R,stdDH)
>> L.A(0.5)   % obtain the transformation matrix for theta = 0.5 rad
ans =
    0.8776   -0.0000    0.4794    0.1755
    0.4794    0.0000   -0.8776    0.0959
         0    1.0000    0.0000    0.1000
         0         0         0    1.0000
>> L.RP   % gets the type of joint
ans =
R
>> L.a
ans =
    0.2000
>> L.offset = 0.5;   % specify theta in advance, i.e. 0.5 rad
>> L.A(0)
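The link transform that L.A computes is the standard DH matrix Rz(theta) Tz(d) Tx(a) Rx(alpha). A small Python sketch of that formula, evaluated for the same link parameters:

```python
import numpy as np

def dh_std(theta, d, a, alpha):
    """Standard DH link transform: Rz(theta) Tz(d) Tx(a) Rx(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st*ca,  st*sa, a*ct],
        [st,  ct*ca, -ct*sa, a*st],
        [0.0,    sa,     ca,    d],
        [0.0,   0.0,    0.0,  1.0],
    ])

A = dh_std(0.5, 0.1, 0.2, np.pi/2)   # same link as L.A(0.5) above
print(np.round(A, 4))
```

The printed matrix reproduces the toolbox output (0.8776, 0.4794, 0.1755, 0.0959, ...).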
>> L(1) = Link([0 0 1 0])   % create L(1) Link object
L =
 theta=q, d=0, a=1, alpha=0 (R,stdDH)
>> L(2) = Link([0 0 1 0])   % create L(2) Link object
L =
 theta=q1, d=0, a=1, alpha=0 (R,stdDH)
 theta=q2, d=0, a=1, alpha=0 (R,stdDH)

February 13, 2015

+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          1|          0|
|  2|         q2|          0|          1|          0|
+---+-----------+-----------+-----------+-----------+

grav =    0     base = 1 0 0 0   tool = 1 0 0 0
          0            0 1 0 0          0 1 0 0
       9.81            0 0 1 0          0 0 1 0
                       0 0 0 1          0 0 0 1

% grav specifies that gravity is applied in the z direction
% base specifies the initial position of the robot
% tool specifies the end-effector direction
>> mdl_twolink
twolink =
two link (2 axis, RR, stdDH, slowRNE)
from Spong, Hutchinson, Vidyasagar;
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          1|          0|
|  2|         q2|          0|          1|          0|
+---+-----------+-----------+-----------+-----------+

>> twolink.fkine([0 0])
ans =
     1     0     0     2
     0     1     0     0
     0     0     1     0
     0     0     0     1
>> twolink.fkine([pi/4 -pi/4])
ans =
    1.0000         0         0    1.7071
         0    1.0000         0    0.7071
         0         0    1.0000         0
         0         0         0    1.0000

% plot displays the robot orientation in 3D
>> twolink.plot([0 0])
>> mdl_puma560;
>> p560.plot(qz)
>> p560.plot(qr)
>> p560.plot(qs)
>> p560.plot(qn)
Code:
Write down a Matlab program for the planar 3 DOF RRR robot with the following parameters:
L1 = 4 m, L2 = 3 m, L3 = 2 m
Derive the forward kinematics, i.e. 0T3, for the following input cases:
(1) [0 0 0]T
(2) [10 20 30]T
(3) [90 90 90]T
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          4|          0|
|  2|         q2|          0|          3|          0|
|  3|         q3|          0|          2|          0|
+---+-----------+-----------+-----------+-----------+

grav =    0     base = 1 0 0 0   tool = 1 0 0 0
          0            0 1 0 0          0 1 0 0
       9.81            0 0 1 0          0 0 1 0
                       0 0 0 1          0 0 0 1
>> dr = pi/180;
>> q1 = [0 0 0]*dr;
>> q2 = [10 20 30]*dr;
>> q3 = [90 90 90]*dr;
>> T30_1 = saurabh.fkine(q1)
T30_1 =
     1     0     0     9
     0     1     0     0
     0     0     1     0
     0     0     0     1
>> T30_3 = saurabh.fkine(q3)
T30_3 =
   -0.0000    1.0000         0   -3.0000
   -1.0000   -0.0000         0    2.0000
         0         0    1.0000         0
         0         0         0    1.0000
>> saurabh.plot([0 0 0], 'workspace', [-5 5 -5 5 -5 5])
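The fkine results for the planar arm can be checked by hand, since for a planar RRR chain the end-effector position is just a sum of link vectors. A minimal Python sketch with the same link lengths:

```python
import numpy as np

def fk_planar(q, l=(4.0, 3.0, 2.0)):
    """End-effector position and orientation of a planar 3R arm."""
    q1, q2, q3 = q
    x = l[0]*np.cos(q1) + l[1]*np.cos(q1+q2) + l[2]*np.cos(q1+q2+q3)
    y = l[0]*np.sin(q1) + l[1]*np.sin(q1+q2) + l[2]*np.sin(q1+q2+q3)
    return x, y, q1 + q2 + q3

dr = np.pi/180
print(np.round(fk_planar((0, 0, 0)), 4))             # fully stretched: x = 9
print(np.round(fk_planar((90*dr, 90*dr, 90*dr)), 4)) # x = -3, y = 2, as in T30_3
```

The second case reproduces the (-3, 2) translation and the 270-degree net rotation seen in T30_3.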
Result: Hence the robot simulation in Matlab was successfully performed for
different types of robots in different configurations.
EXPERIMENT 3
Aim: To study concepts of inverse kinematics using Peter Corke toolbox in Matlab.
Theory:
Inverse Kinematics: To determine the values of the joint variables given the end-effector position and orientation.
>> mdl_puma560
>> qn
qn =
         0    0.7854    3.1416         0    0.7854         0
>> T = p560.fkine(qn)   % fkine gives the pose (4x4) of the robot end-effector as a homogeneous transformation
T =
   -0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1501
   -1.0000   -0.0000   -0.0000   -0.0144
         0         0         0    1.0000
>> qi = p560.ikine6s(T, 'lu')
>> p560.plot(qi)
>> p560.ikine6s( transl(3, 0, 0) )
ans =
   NaN   NaN   NaN   NaN   NaN   NaN   % shows that the point is unreachable
>> qi = p560.ikine(T)   % ikine gives the joint coordinates corresponding to the robot end-effector with pose T
qi =
   -0.0000   -0.8335    0.0940   -0.0000   -0.8312   -0.0000
>> p560.fkine(qi)
ans =
    0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1501
   -1.0000   -0.0000    0.0000   -0.0144
         0         0         0    1.0000
>> p560.plot(qi)
>> qi = p560.ikine(T, [0 0 3 0 0 0])   % [0 0 3 0 0 0] gives the initial estimate of the robot configuration
qi =
   -0.0000    0.7854    3.1416    0.0000    0.7854   -0.0000
>> p560.plot(qi)
Code:
Write down a Matlab program to solve the planar RRR robot inverse kinematics problem, with the length parameters L1 = 4 m, L2 = 3 m, L3 = 2 m, for the following relative poses:
(a) 0TH = [1 0 0 9; 0 1 0 0; 0 0 1 0; 0 0 0 1]
(b) 0TH = [0.5 -0.866 0 7.5373; 0.866 0.6 0 3.9266; 0 0 1 0; 0 0 0 1]
(c) 0TH = [0 1 0 -3; -1 0 0 2; 0 0 1 0; 0 0 0 1]
(d) 0TH = [0.866 0.5 0 -3.1245; -0.5 0.866 0 9.1674; 0 0 1 0; 0 0 0 1]
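The iterative solver behind ikine can be sketched as a damped least-squares loop on this planar 3R arm. This is an illustrative re-implementation, not the toolbox algorithm; the target position is taken from case (b), with a reachable orientation of 60 degrees assumed:

```python
import numpy as np

L = (4.0, 3.0, 2.0)

def fk(q):
    q1, q12, q123 = q[0], q[0]+q[1], q[0]+q[1]+q[2]
    x = L[0]*np.cos(q1) + L[1]*np.cos(q12) + L[2]*np.cos(q123)
    y = L[0]*np.sin(q1) + L[1]*np.sin(q12) + L[2]*np.sin(q123)
    return np.array([x, y, q123])

def jac(q):
    q1, q12, q123 = q[0], q[0]+q[1], q[0]+q[1]+q[2]
    s = (L[0]*np.sin(q1)+L[1]*np.sin(q12)+L[2]*np.sin(q123),
         L[1]*np.sin(q12)+L[2]*np.sin(q123),
         L[2]*np.sin(q123))
    c = (L[0]*np.cos(q1)+L[1]*np.cos(q12)+L[2]*np.cos(q123),
         L[1]*np.cos(q12)+L[2]*np.cos(q123),
         L[2]*np.cos(q123))
    return np.array([[-s[0], -s[1], -s[2]], [c[0], c[1], c[2]], [1.0, 1.0, 1.0]])

def ikine(target, q0, iters=200, lam=1e-6):
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        e = target - fk(q)                                       # pose error (x, y, phi)
        J = jac(q)
        q += np.linalg.solve(J.T @ J + lam*np.eye(3), J.T @ e)   # damped step
    return q

dr = np.pi/180
target = np.array([7.5373, 3.9266, 60*dr])   # position of case (b); 60 deg assumed
q = ikine(target, [0.2, 0.3, 0.4])
print(np.round(fk(q), 4))
```

As with ikine, the answer found depends on the initial estimate q0; different starting points can land on different solution branches.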
(a) TH0 =
     1     0     0     9
     0     1     0     0
     0     0     1     0
     0     0     0     1

>> dr = pi/180;
>> q1 = saurabh.ikine(TH0, [0 0 0], [1 1 0 0 0 1])
q1 =
     0     0     0
>> q1 = saurabh.ikine(TH0, [90 -90 90]*dr, [1 1 0 0 0 1])
q1 =
   1.0e-03 *
    0.1934   -0.4512    0.2578
>> saurabh.plot(q1, 'workspace', [-8 8 -8 8 -8 8])
>> saurabh.fkine(q1)
ans =
    1.0000    0.0000         0    9.0000
    0.0000    1.0000         0    0.0000
         0         0    1.0000         0
         0         0         0    1.0000
(b) TH1 = [0.5 -0.866 0 7.5373; 0.866 0.6 0 3.9266; 0 0 1 0; 0 0 0 1]
TH1 =
    0.5000   -0.8660         0    7.5373
    0.8660    0.6000         0    3.9266
         0         0    1.0000         0
         0         0         0    1.0000
>> q1 = saurabh.ikine(TH1, [0 0 0], [1 1 0 0 0 1])

% The above can be proved as follows:
% for an orthonormal matrix the result should be the identity matrix
>> TH1(1:3,1:3)*TH1(1:3,1:3)'
ans =
    1.0000   -0.0866         0
   -0.0866    1.1100         0
         0         0    1.0000
(c) TH2 = [0 1 0 -3; -1 0 0 2; 0 0 1 0; 0 0 0 1]
TH2 =
     0     1     0    -3
    -1     0     0     2
     0     0     1     0
     0     0     0     1
>> q2 = saurabh.ikine(TH2, [0 0 0], [1 1 0 0 0 1])
q2 =
    1.5708    1.5708    1.5708
>> saurabh.plot(q2)
>> q2 = saurabh.ikine(TH2, [90 -90 90]*dr, [1 1 0 0 0 1])
q2 =
    2.8578   -1.5708   -2.8578
>> saurabh.plot(q2)
February 20, 2015

(d) TH3 =
    0.8660    0.5000         0   -3.1245
   -0.5000    0.8660         0    9.1674
         0         0    1.0000         0
         0         0         0    1.0000

>> q3 = saurabh.ikine(TH3, [0 0 0], [1 1 0 0 0 1])
q3 =
   1.0e+04 *
   -2.6746    6.4932   -3.7081
>> saurabh.fkine(q3)
ans =
    0.8769    0.4808         0   -1.6122
   -0.4808    0.8769         0    2.3772
         0         0    1.0000         0
         0         0         0    1.0000
>> q3 = saurabh.ikine(TH3, [0 0 0], [2 0 0 0 0 1])   % for a different mask
q3 =
   -2.0655    0.9075    2.4494
>> saurabh.fkine(q3)
ans =
    0.8660    0.5000         0   -3.1245
   -0.5000    0.8660         0    5.0238
         0         0    1.0000         0
         0         0         0    1.0000
>> saurabh.plot(q3)
The different results for the same pose 0TH are due to the different initial estimate (Q0) and mask (M) used.
Result: Hence the various concepts related to inverse kinematics were successfully studied.
Experiment - 4
Aim: To perform forward kinematics of the 3 DOF planar robot, Stanford manipulator and PUMA robot for the given DH parameters using RoboAnalyzer, calculate the end-effector position for a given set of joint values using Matlab, and verify the results with RoboAnalyzer.
Theory:
Forward Kinematics: To determine the position and orientation of the end-effector given the values for the joint variables of the robot.
Using RoboAnalyzer: RoboAnalyzer is a 3D model based robotics learning system developed using OpenTK and Visual C#.
The above figures have been suitably labeled for proper description. The UI may be explained as follows:
1. In this block, one can select the DOF and type of robot from the tabs provided on the far left. The center table shows the DH parameters for the robot. These can be changed as per the requirement.
2. Here 4 tabs are present. One can see a DH parameter animation for the robot in the Visualize DH tab. Link Config and EE Config give the transformation matrix for a particular link and from base to end-effector respectively.
3. Once the robot is configured, press the FKin button to complete the forward kinematic analysis. Then press the play button to view the trajectory of the robot with the initial and final positions as specified in block 1. One can also change the time duration and the number of steps taken by the robot to reach from the initial to the final position.
4. Here one can see the actual robot and the coordinate axes w.r.t. each link.
5. This screen is obtained by hitting the Graph tab on top of the screen. Here one can see the variation in position for each link separately with time, and the joint parameters such as joint value, velocity, acceleration etc.
6. This block shows the graph for the parameter selected in block 5.
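The forward-kinematics computation that both RoboAnalyzer and Matlab perform can be sketched by chaining standard DH transforms. A Python sketch for the planar 3 DOF arm of section (a) (link lengths a = 0.3, 0.25, 0.1 m; d = alpha = 0 for a planar revolute chain):

```python
import numpy as np

def dh_std(theta, d, a, alpha):
    """Standard DH link transform: Rz(theta) Tz(d) Tx(a) Rx(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st*ca,  st*sa, a*ct],
        [st,  ct*ca, -ct*sa, a*st],
        [0.0,    sa,     ca,    d],
        [0.0,   0.0,    0.0,  1.0],
    ])

def fkine(a_lengths, q):
    """Chain the link transforms of a planar revolute arm (d = alpha = 0)."""
    T = np.eye(4)
    for a, qi in zip(a_lengths, q):
        T = T @ dh_std(qi, 0.0, a, 0.0)
    return T

links = [0.3, 0.25, 0.1]            # planar 3 DOF arm
T = fkine(links, [0.0, 0.0, 0.0])
print(round(T[0, 3], 4))            # 0.65: fully stretched along x
```

At the zero configuration the reach is simply the sum of the link lengths, matching the 0.65 m in the T30_1 output below.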
Code:
(a) 3 DOF Planar robot

+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|        0.3|          0|
|  2|         q2|          0|       0.25|          0|
|  3|         q3|          0|        0.1|          0|
+---+-----------+-----------+-----------+-----------+

>> dr = pi/180
dr =
    0.0175
>> T30_1 = saurabh.fkine([0 0 0])
T30_1 =
    1.0000         0         0    0.6500
         0    1.0000         0         0
         0         0    1.0000         0
         0         0         0    1.0000

RoboAnalyzer output:
(b) PUMA robot

>> saurabh_puma = SerialLink(B, ...)   % B holds the PUMA DH parameters

+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|     0.7742|          0|     -1.571|
|  2|         q2|     0.1016|     0.5066|      3.142|
|  3|         q3|     0.0381|       0.02|     -1.571|
|  4|         q4|     0.2682|          0|      1.571|
|  5|         q5|          0|          0|     -1.571|
|  6|         q6|     0.0584|          0|          0|
+---+-----------+-----------+-----------+-----------+

grav =    0     base = 1 0 0 0   tool = 1 0 0 0
          0            0 1 0 0          0 1 0 0
       9.81            0 0 1 0          0 0 1 0
                       0 0 0 1          0 0 0 1

>> T30_1 = saurabh_puma.fkine([0 0 0 0 0 0])
T30_1 =
    1.0000         0         0    0.5266
         0    1.0000         0    0.1397
         0         0    1.0000    1.1006
         0         0         0    1.0000

RoboAnalyzer output:
(c) Stanford manipulator

>> saurabh_stanford = SerialLink(S, ...)   % S holds the Stanford manipulator DH parameters
% joint 3 is prismatic (d = q3); the other joint offsets are
% d1 = 0.762, d2 = 0.3934, d4 = 0.2268, d6 = 0.4318, with twist angles of +/-1.571
Result: Hence the various robots were successfully simulated in the RoboAnalyzer environment with different configurations. Further, the transformation matrices obtained using Matlab for the given joint values were verified using RoboAnalyzer.
Experiment 5
Aim: Perform the inverse kinematics of a 3 DOF planar and articulated robot of the given DH parameters using RoboAnalyzer. Derive the analytical expressions for the inverse kinematic problem of the 3 DOF planar robot and verify the solutions from Matlab.
Theory:
Inverse Kinematics: To determine the values of the joint variables given the end-effector position and orientation.
Using RoboAnalyzer: RoboAnalyzer is a 3D model based robotics learning system developed using OpenTK and Visual C#.
The above figures have been suitably labeled for proper description. The UI may be explained as follows:
1. To perform inverse kinematic analysis, press the IKin button, which opens up a new window.
2. Here, select the type of robot that is to be analysed, e.g. 2R, 3R, wrist etc.
3. This block is used to set up the various parameters used to configure the robot, such as link length (a), joint offset (b), twist angle (alpha). Now, in order to perform inverse kinematics, specify the end-effector coordinates w.r.t. the base frame.
4. Press the IKin button to start the analysis.
5. The corresponding results, i.e. the joint values, get displayed in block 5. Two solutions are provided: one for the left-hand configuration and the other for the right-hand configuration.
6. One may press the Show button to display the robot simulation for the obtained joint values.
7. Now the 2 obtained solutions may be used to plot the robot trajectory from one solution to the other. Select either of the solutions as the initial position and the other solution as the final position.
8. Press the OK button. This step changes the initial and final joint values in the DH parameter table.
9. Now press the FKin button to display the corresponding trajectory from solution 1 to solution 2.
Code:
The solution may be obtained from Matlab using trigonometric analysis or directly from RoboAnalyzer.
(a) 3 DOF planar robot

a1 = 2; a2 = 2; a3 = 1;
% Input: end-effector position (px, py) and orientation phi
wx = px - a3*cos(phi);               % wrist position
wy = py - a3*sin(phi);
del = wx*wx + wy*wy;
% Calculation for theta2
c2 = (del - a1*a1 - a2*a2)/(2*a1*a2);
s2 = sqrt(1 - c2*c2);
th21 = atan2(s2, c2); th22 = atan2(-s2, c2);
% Calculation for theta1
s11 = ((a1+a2*cos(th21))*wy - a2*sin(th21)*wx)/del;
c11 = ((a1+a2*cos(th21))*wx + a2*sin(th21)*wy)/del;
s12 = ((a1+a2*cos(th22))*wy - a2*sin(th22)*wx)/del;
c12 = ((a1+a2*cos(th22))*wx + a2*sin(th22)*wy)/del;
th11 = atan2(s11, c11); th12 = atan2(s12, c12);
RoboAnalyzer output:

>> th23 = pi - th21;
>> th24 = pi - th22;
>> r2d = 180/pi;
>> th11d = th11*r2d;
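The analytic solution above can be validated by a forward-kinematics round trip. A Python sketch mirroring the same formulas (elbow-up branch):

```python
import numpy as np

a1, a2, a3 = 2.0, 2.0, 1.0

def fk(th1, th2, th3):
    x = a1*np.cos(th1) + a2*np.cos(th1+th2) + a3*np.cos(th1+th2+th3)
    y = a1*np.sin(th1) + a2*np.sin(th1+th2) + a3*np.sin(th1+th2+th3)
    return x, y, th1 + th2 + th3

def ik(px, py, phi):
    """Elbow-up analytic solution mirroring the Matlab script above."""
    wx = px - a3*np.cos(phi)                  # wrist position
    wy = py - a3*np.sin(phi)
    dl = wx*wx + wy*wy
    c2 = (dl - a1*a1 - a2*a2) / (2*a1*a2)
    s2 = np.sqrt(1 - c2*c2)
    th2 = np.arctan2(s2, c2)
    th1 = np.arctan2((a1 + a2*c2)*wy - a2*s2*wx,
                     (a1 + a2*c2)*wx + a2*s2*wy)
    th3 = phi - th1 - th2
    return th1, th2, th3

px, py, phi = fk(0.3, 0.5, 0.2)               # a sample reachable pose
th = ik(px, py, phi)
print(np.round(th, 4))                         # recovers (0.3, 0.5, 0.2)
```

If the original pose used the elbow-down branch, ik returns the other solution, but its forward kinematics still reproduce the same pose.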
(b) 3R Articulated robot

>> a2 = 1; a3 = 1;
>> px = 1; py = 0; pz = 0;
>> delxy = px*px + py*py; del = delxy + pz*pz;
>> th11 = atan2(py, px);
>> th12 = pi + atan2(py, px);
>> c3 = (del - a2*a2 - a3*a3)/(2*a2*a3); s3 = sqrt(1 - c3*c3);
>> th31 = atan2(s3, c3); th32 = atan2(-s3, c3);
>> s21 = ((a2+a3*cos(th31))*pz - a3*s3*delxy)/del;
>> c21 = ((a2+a3*cos(th31))*delxy + a3*s3*pz)/del;
>> s22 = ((a2+a3*cos(th31))*pz + a3*s3*delxy)/del;
>> c22 = ((a2+a3*cos(th31))*delxy - a3*s3*pz)/del;
>> th21 = atan2(s21, c21);
>> th22 = atan2(s22, c22);
>> th11d = th11*r2d; th12d = th12*r2d;
>> th21d = th21*r2d; th22d = th22*r2d;
>> th23d = th23*r2d; th24d = th24*r2d;
>> th31d = th31*r2d; th32d = th32*r2d;
RoboAnalyzer output:
Result: Hence the inverse kinematic analysis for the various robots was successfully done using RoboAnalyzer. The results were then verified using Matlab too.
Experiment 6
Aim: To study velocity kinematics using Peter Corke toolbox.
Theory:
Velocity Kinematics: Velocity kinematics relates the linear and angular velocities of the end-effector to the joint velocities.
>> mdl_puma560
>> T0 = p560.fkine(qn)
T0 =
   -0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1501
   -1.0000   -0.0000   -0.0000   -0.0144
         0         0         0    1.0000
>> dq = 1e-6;
>> Tp = p560.fkine(qn + [dq 0 0 0 0 0])
Tp =
   -0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1500
   -1.0000   -0.0000   -0.0000   -0.0144
         0         0         0    1.0000
>> dTdq1 = (Tp - T0) / dq
dTdq1 =
         0   -1.0000   -0.0000    0.1500
    0.0000    0.0000    1.0000    0.5963
         0         0         0         0
         0         0         0         0
>> dRdq1 = dTdq1(1:3,1:3)
dRdq1 =
         0   -1.0000   -0.0000
    0.0000    0.0000    1.0000
         0         0         0
>> S = dRdq1 * T0(1:3,1:3)'   % S is a skew-symmetric matrix
S =
         0   -1.0000         0
    1.0000         0         0
         0         0         0
>> vex(S)   % get the vector from the skew-symmetric matrix
ans =
   -0.0000
    0.0000
    1.0000
>> J = p560.jacob0(qn)
J =
    0.1501    0.0144    0.3197         0         0         0
    0.5963    0.0000    0.0000         0         0         0
         0    0.5963    0.2910         0         0         0
    0.0000    0.0000    0.0000    0.7071    0.0000    0.0000
   -0.0000   -1.0000   -1.0000   -0.0000   -1.0000   -0.0000
    1.0000    0.0000    0.0000   -0.7071    0.0000   -1.0000
>> T = transl(1, 0, 0)*troty(pi/2)   % get a transformation matrix with rotation in the Y direction
T =
    0.0000         0    1.0000    1.0000
         0    1.0000         0         0
   -1.0000         0    0.0000         0
         0         0         0    1.0000
>> J = tr2jac(T)   % get the new jacobian matrix w.r.t. the new frame
J =
    0.0000         0   -1.0000         0    1.0000         0
         0    1.0000         0         0         0    1.0000
    1.0000         0    0.0000         0         0         0
         0         0         0    0.0000         0   -1.0000
         0         0         0         0    1.0000         0
         0         0         0    1.0000         0    0.0000

>> p560.jacobn(qn)   % get the jacobian matrix w.r.t. the end-effector
ans =
   -0.0000   -0.5963   -0.2910         0         0         0
    0.5963    0.0000    0.0000         0         0         0
    0.1500    0.0144    0.3197         0         0         0
   -1.0000   -0.0000   -0.0000    0.7071    0.0000    1.0000
         0   -1.0000   -1.0000    0.0000   -1.0000         0
         0         0         0    0.7071         0         0
>> p560.jacob0(qn, 'eul')   % get the jacobian matrix for Euler angle rates
ans =
    0.1501    0.0144    0.3197         0         0         0
    0.5963    0.0000    0.0000         0         0         0
         0    0.5963    0.2910         0         0         0
    1.0000    0.0000    0.0000   -0.7071    0.0000    0.0000
    0.0000   -1.0000   -1.0000   -0.0000   -1.0000   -0.0000
    1.0000    0.0000    0.0000    0.7071    0.0000    1.0000
Code:
To study velocity kinematics, develop a Matlab program using Peter Corke toolbox to calculate the jacobian matrix for the planar 3R robot, given the link lengths l1 = 4 m, l2 = 3 m, l3 = 2 m and the initial joint angles (theta1, theta2, theta3) = [10, 20, 30] degrees.

April 17, 2015
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          4|          0|
|  2|         q2|          0|          3|          0|
|  3|         q3|          0|          2|          0|
+---+-----------+-----------+-----------+-----------+

grav =    0     base = 1 0 0 0   tool = 1 0 0 0
          0            0 1 0 0          0 1 0 0
       9.81            0 0 1 0          0 0 1 0
                       0 0 0 1          0 0 0 1
>> dr = pi/180
dr =
    0.0175
>> jacobian = jacob0(two_link_jacob, q0)
jacobian =
   -3.9266   -3.2321   -1.7321
    7.5373    3.5981    1.0000
         0         0         0
         0         0         0
         0         0         0
    1.0000    1.0000    1.0000
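For a planar arm the Jacobian columns have a simple closed form, so the toolbox output can be checked directly. A Python sketch (only the vx, vy and wz rows are nonzero for a planar robot):

```python
import numpy as np

def jacobian_planar(q, l=(4.0, 3.0, 2.0)):
    """Geometric Jacobian (vx, vy, wz rows) of the planar 3R arm."""
    q1, q12, q123 = q[0], q[0]+q[1], q[0]+q[1]+q[2]
    s1, s12, s123 = np.sin(q1), np.sin(q12), np.sin(q123)
    c1, c12, c123 = np.cos(q1), np.cos(q12), np.cos(q123)
    return np.array([
        [-l[0]*s1 - l[1]*s12 - l[2]*s123, -l[1]*s12 - l[2]*s123, -l[2]*s123],
        [ l[0]*c1 + l[1]*c12 + l[2]*c123,  l[1]*c12 + l[2]*c123,  l[2]*c123],
        [1.0, 1.0, 1.0],
    ])

dr = np.pi/180
J = jacobian_planar(np.array([10, 20, 30])*dr)
print(np.round(J, 4))   # matches the nonzero rows of the toolbox jacobian
```

Each column is the velocity contribution of one joint: the cross product of its rotation axis with the vector from that joint to the end-effector.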
Result: Various concepts related to velocity kinematics were studied and the jacobian matrix was obtained successfully for the 3 DOF planar robot.
Experiment 7
Aim: To study forward dynamics of a 3 DOF (3R) robot.
Theory:
Dynamics: Robot dynamics is concerned with the relationship between the forces acting on a robot mechanism and the accelerations they produce. Forward dynamics works out the accelerations given the forces.
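The "accelerations from forces" relation can be illustrated on a toy one-link arm (the numbers here are hypothetical, not the PUMA's): the joint acceleration follows from I*qdd = tau - m*g*r*cos(q), which is then integrated forward in time.

```python
import numpy as np

# Forward dynamics of a single revolute link (toy 1R case):
# I*qdd = tau - m*g*r*cos(q), integrated with explicit Euler.
m, r, I, g = 10.0, 1.0, 3.4, 9.81      # mass, COM distance, inertia about the joint

def qdd(q, tau):
    """Joint acceleration for a given joint angle and applied torque."""
    return (tau - m*g*r*np.cos(q)) / I

q, qd = 0.5, 0.0                       # initial joint angle and rate
tau = m*g*r*np.cos(q)                  # torque that exactly balances gravity
dt = 0.001
for _ in range(4000):                  # simulate for 4 s
    acc = qdd(q, tau)
    qd += acc*dt
    q += qd*dt
```

With the torque chosen to balance gravity the arm stays at rest; with tau = 0 it would accelerate downward, which is the forward dynamics problem the experiment solves with rne/fdyn for the 3R arm.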
>> mdl_puma560
>> Q = p560.rne(qn, qz, qz)   % rne computes the joint torques for the specified joint position, velocity and acceleration
Q =
   -0.0000   31.6399    6.0351    0.0000    0.0283         0
>> Q = p560.rne(qn, qz, qz, [0 0 9.81]')   % [0 0 9.81] gives the gravitational vector
Q =
   -0.0000   31.6399    6.0351    0.0000    0.0283         0
>> [q, qd, qdd] = jtraj(qz, qr, 10)   % compute a joint space trajectory between two points
>> p560.links(1).dyn
>> p560.gravity'
ans =
         0         0    9.8100
>> p560.gravload(qn)
ans =
   -0.0000   31.6399    6.0351    0.0000    0.0283         0
>> p560.gravity = p560.gravity/6;
>> p560.gravload(qn)
ans =
    0.0000    5.2733    1.0059    0.0000    0.0047         0
>> p560.base = trotx(pi);
>> p560.gravload(qn)
ans =
    0.0000   -5.2733   -1.0059   -0.0000   -0.0047         0
>> Q = p560.gravload(qs)   % robot in the qs position
Q =
   -0.0000   46.0069    8.7722    0.0000    0.0283         0
>> Q = p560.gravload(qr)
Q =
         0   -0.7752    0.2489         0         0         0
Code:
Write a Matlab program for the planar 3R robot, with the link lengths as L1 = 4 m, L2 = 3 m, L3 = 2 m. Using Peter Corke toolbox and Matlab commands, solve the forward dynamics problem (i.e. obtain the joint accelerations from the available driving joint torques and the initial joint angles and joint rates). Perform the simulation for 4 sec.
L(1) = Link([0 0 4 0]);  L(1).m = 20;  L(1).r = [2 0 0];    L(1).I = [0 0 0.5];   % Link 1: a = 4 m, m = 20 kg, I = 0.5 kgm^2
L(2) = Link([0 0 3 0]);  L(2).m = 15;  L(2).r = [1.5 0 0];  L(2).I = [0 0 0.2];   % Link 2: a = 3 m, m = 15 kg, I = 0.2 kgm^2
L(3) = Link([0 0 2 0]);  L(3).m = 10;  L(3).r = [1 0 0];    L(3).I = [0 0 0.1];   % Link 3: a = 2 m, m = 10 kg, I = 0.1 kgm^2
two_link_sau = SerialLink(L, 'name', 'two link_sau');
two_link_sau =
two link_sau (3 axis, RRR, stdDH, slowRNE)
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          4|          0|
|  2|         q2|          0|          3|          0|
|  3|         q3|          0|          2|          0|
+---+-----------+-----------+-----------+-----------+

grav =    0     base = 1 0 0 0   tool = 1 0 0 0
          0            0 1 0 0          0 1 0 0
       9.81            0 0 1 0          0 0 1 0
                       0 0 0 1          0 0 0 1
t = [t0:dt:tf];      % time vector for the 4 s simulation
dr = pi/180;
Q0 = [60 90 30]*dr   % specify the initial robot position
Experiment 8
Aim: To study Peter Corke Computer Vision toolbox.
Theory:
% Spectral Representation of Light
% define a range of wavelengths
>> lambda = [300:10:1000]*1e-9;
% in this case from 300 to 1000 nm, and then compute the blackbody spectra
>> for T = 1000:1000:6000
>>     plot(lambda*1e9, blackbody(lambda, T)); hold all
>> end
as shown in Fig. 8.1a.
% The Sun's spectrum at ground level on the Earth has been measured and tabulated
>> sun_ground = loadspectrum(lambda, 'solar.dat');
>> plot(lambda*1e9, sun_ground)
and is shown in Fig. 8.3a.
Absorption
>> [A, lambda] = loadspectrum([400:10:700]*1e-9, 'water.dat');   % A is the absorption coefficient
Reflection
% the reflectivity of a red house brick
>> [R, lambda] = loadspectrum([100:10:10000]*1e-9, 'redbrick.dat');
>> plot(lambda*1e6, R);
which is plotted in Fig. 8.4. We see that it reflects red colors more than blue.
Color
% The luminosity function is provided by the Toolbox
>> human = luminos(lambda);
>> plot(lambda*1e9, human)
and is shown in Fig. 8.6a.

Reproducing Colors

Color Names
>> colorname('?burnt')
>> bs = colorname('burntsienna', 'xy')
bs =
    0.5258    0.3840
>> colorname('chocolate', 'xy')
ans =
    0.5092    0.4026
>> colorname([0.54 0.20 0.06])
ans =
burntsienna
% Image Formation
>> cam = CentralCamera('focal', 0.015);   % create a model of a central-perspective camera
>> P = [0.3, 0.4, 3.0]';
>> cam.project(P)
ans =
    0.0015
    0.0020
>> cam.project(P, 'Tcam', transl(-0.5, 0, 0))   % moving the camera 0.5 m to the left
ans =
    0.0040
    0.0020
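Central-perspective projection is just u = f*X/Z, v = f*Y/Z in the camera frame, which reproduces both results above. A Python sketch (cam_x is an illustrative parameter for the camera's x position):

```python
import numpy as np

f = 0.015                                  # focal length (m)

def project(P, cam_x=0.0):
    """Central-perspective projection of a world point P = (X, Y, Z)."""
    X, Y, Z = P[0] - cam_x, P[1], P[2]     # point relative to the camera
    return np.array([f*X/Z, f*Y/Z])

P = np.array([0.3, 0.4, 3.0])
print(project(P))                          # ~ (0.0015, 0.0020)
print(project(P, cam_x=-0.5))              # camera 0.5 m left -> ~ (0.0040, 0.0020)
```

Moving the camera left increases the point's relative X, so only the horizontal image coordinate changes.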
>> cam = CentralCamera('focal', 0.015, 'pixel', 10e-6, ...
         'resolution', [1280 1024], 'centre', [640 512])   % displays the parameters of the imaging model
% field of view (degrees)
>> cam.fov() * 180/pi
ans =
   46.2127   37.6930
% create a 3x3 grid of points in the xy-plane with overall side length 0.2 m, centred at (0, 0, 1); returns a 3x9 matrix
>> P = mkgrid(3, 0.2, 'T', transl(0, 0, 1.0));
% The image plane coordinates of the vertices are
>> cam.project(P)
>> cam.plot(P)
>> Tcam = transl(-1, 0, 0.5)*troty(0.8);   % oblique view
>> cam.project([1 0 0 0]', 'Tcam', Tcam)   % a point at infinity
ans =
   1.0e+03 *
    1.8303
    0.5120
>> p = cam.plot(P, 'Tcam', Tcam)
% in the oblique viewing case the image plane coordinates are
>> p(:,1:4)
ans =
  887.7638  887.7638  887.7638  955.2451
  364.3330  512.0000  659.6670  374.9050
>> cube = mkcube(0.2, 'T', transl([0, 0, 1]));   % projection of a cube
>> cam.plot(cube);
>> P = mkcube(0.2);
>> T_unknown = transl(0.1, 0.2, 1.5)
>> [X, Y, Z] = mkcube(0.2, 'edge', 'T', transl([0, 0, 1]));
>> cam.mesh(X, Y, Z)
>> cam.T = transl(-1, 0, 0.5)*troty(0.8);   % oblique view
>> cam.mesh(X, Y, Z, 'Tcam', Tcam);
% Decomposing the Camera Calibration Matrix
>> est = invcamcal(C)
est =
name: invcamcal [central-perspective]
  focal length:   1504
  pixel size:     (1, 0.9985)
  principal pt:   (518.6, 505)
  Tcam:
May 1, 2015
% Pose Estimation
>> cam = CentralCamera('focal', 0.015, 'pixel', 10e-6, ...
         'resolution', [1280 1024], 'centre', [640 512]);   % create a calibrated camera with known parameters
>> P = mkcube(0.2);   % the cube to be observed
>> T_unknown = transl(0, 0, 2)*trotx(0.1)*troty(0.2)   % the unknown pose w.r.t. the camera
T_unknown =
    0.9801         0    0.1987         0
    0.0198    0.9950   -0.0978         0
   -0.1977    0.0998    0.9752    2.0000
         0         0         0    1.0000
% Synthetic Perspective Images
>> W = 1000;
>> m = W/2 / tan(45/2*pi/180)   % 45 degree field of view
m =
   1.2071e+03
>> l = 0;
>> u0 = W/2; v0 = W/2;
>> [Uo,Vo] = meshgrid(0:W-1, 0:W-1);   % the domain of the output image
>> r = sqrt((Uo-u0).^2 + (Vo-v0).^2);  % polar coordinates
>> phi = atan2((Vo-v0), (Uo-u0));
>> Phi_o = phi;                        % spherical coordinates
>> Theta_o = pi - acos( ((l+m)*sqrt(r.^2*(1-l^2) + (l+m)^2) - l*r.^2) ./ (r.^2 + (l+m)^2) );
>> idisp(perspective)
>> spherical = sphere_rotate(spherical, troty(1.2)*trotz(-1.5));
1. Obtaining an image from a file
>> flowers = iread('flowers8.png');
>> about(flowers)
flowers [uint8] : 426x640x3 (817920 bytes)
>> pix = flowers(276, 318, :)   % pixel at (318, 276); one value per color plane
ans(:,:,1) =
    57
ans(:,:,2) =
    91
ans(:,:,3) =
   198
>> idisp( flowers(:,:,1) )
2. Images from an Attached Camera
Most laptop computers today have a built-in camera for video conferencing. For computers without a built-in camera an external camera can be easily attached via USB. The means of accessing a camera is operating system specific, and the Toolbox provides a simple interface to a camera for MacOS, Linux and Windows.
% A list of all attached cameras and their capabilities can be obtained by
>> VideoCamera('?')
% We open a particular camera
>> cam = VideoCamera('name')
% which returns an instance of a VideoCamera object that is a subclass of the ImageSource class.
% The dimensions of the image returned by the camera are given by the size method
>> cam.size()
% an image is obtained using the grab method
>> im = cam.grab();
% which waits until the next frame becomes available.
3. Images from a Movie File
720 x 576 @ 2.999970e+01 fps
350 frames
% The frame size within the movie is
>> cam.size()
ans =
   720   576
% and the next frame is read from the movie file by
>> im = cam.grab();
>> about(im)
im [uint8] : 720x576x3 (1244160 bytes)
% which is a 720x576 color image.
% With these few primitives we can write a very simple movie player:
1 while 1
2   im = cam.grab;
3   if isempty(im), break; end
4   image(im)
5 end
% where the test at line 3 is to detect the end of file, in which case grab returns an empty matrix.

% Test patterns
>> im = testpattern('rampx', 256, 2);
>> im = testpattern('siny', 256, 2);
>> im = testpattern('squares', 256, 50, 25);
>> im = testpattern('dots', 256, 256, 100);
We can also construct an image from simple graphical primitives. First we create a blank canvas containing all black pixels (pixel value of zero)
>> canvas = zeros(200, 200);
% then we create two squares
>> sq1 = 0.5 * ones(40, 40);
>> sq2 = 0.9 * ones(20, 20);
The first has pixel values of 0.5 (medium grey) and is 40x40. The second is smaller (just 20x20) but brighter with pixel values of 0.9. Now we can paste them onto the canvas
>> canvas = ipaste(canvas, sq1, [20, 40]);
>> canvas = ipaste(canvas, sq2, [60, 120]);
% we create a circle of radius 30 pixels with a grey value of 0.6
>> circle = 0.6 * kcircle(30);
>> size(circle)
ans =
    61    61
>> canvas = ipaste(canvas, circle, [100, 30]);
% Finally, we draw a line segment onto our canvas
>> canvas = iline(canvas, [30, 40], [150, 190], 0.8);
% which extends from (30, 40) to (150, 190) and its pixels are all set to 0.8
>> idisp(canvas)
% a color version of the grey image is
>> color = icolor(grey);
% the histogram of the street scene is computed and displayed by
>> ihist(street)
% The Toolbox function peak will automatically find the peaks
>> peak(n, v)'
ans =
     8    40    43    51   197    17    60    26    75   213    88    92   218   176   153   108   111   147   113   119   121   126   130   138   230   244
We can identify the pixels in
Experiment 9
Aim: Object tracking using Kalman filter in Matlab.
Theory:
The Kalman filter keeps track of the estimated state of the system and the variance
or uncertainty of the
estimate. The estimate is updated using a state transition model and
measurements.
Here x̂(k|k-1) denotes the estimate of the system's state at time step k before the k-th measurement zk has been taken into account.
The Kalman filter, also known as linear quadratic estimation (LQE), is an algorithm
that uses a series of measurements observed over time, containing noise (random
variations) and other inaccuracies, and produces estimates of unknown variables
that tend to be more precise than those based on a single measurement alone.
More formally, the Kalman filter operates recursively on streams of noisy input data
to produce a statistically optimal estimate of the underlying system state.
The algorithm works in a two-step process. In the prediction step, the Kalman filter
produces estimates of the current state variables, along with their uncertainties.
Once the outcome of the next measurement (necessarily corrupted with some
amount of error, including random noise) is observed, these estimates are updated
using a weighted average, with more weight being given to estimates with higher
certainty. Because of the algorithm's recursive nature, it can run in real time using
only the present input measurements and the previously calculated state and its
uncertainty matrix; no additional past information is required.
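The predict/update recursion described above can be sketched on a toy 1-D constant-velocity tracker. This is an illustrative Python sketch with made-up numbers (a deterministic stand-in replaces random noise so the run is repeatable), not the car-tracking code below:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity state transition
H = np.array([[1.0, 0.0]])                 # we only measure position
Q = 1e-4*np.eye(2)                         # process noise covariance
R = np.array([[0.05**2]])                  # measurement noise covariance

x = np.array([[0.0], [0.0]])               # state estimate [pos; vel]
P = np.eye(2)                              # estimate covariance

for k in range(50):
    truth = 2.0 + 1.0*(k*dt)               # target moving at 1 unit/s
    z = truth + 0.05*(-1)**k               # deterministic stand-in for noise
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - (H @ x)[0, 0]                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T / S[0, 0]                  # Kalman gain
    x = x + K*y
    P = (np.eye(2) - K @ H) @ P

print(float(x[0, 0]), float(x[1, 0]))      # ~ position 6.9, velocity 1.0
```

Note how the velocity is never measured directly, yet the filter infers it from the sequence of position measurements; the covariance P shrinks as evidence accumulates.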
% object tracking using Kalman filter
clear;                                   % clear workspace
% read a video
vidObject = VideoReader('car-2.avi');
% determine the no. of frames
nFrames = vidObject.NumberOfFrames;
% length and width of a frame
len = vidObject.Height;
wid = vidObject.Width;
sumFrame = zeros(len, wid);
dt = 0.1;                                % time between 2 frames
% initialize the state model and observation matrices
C = [1 0 0 0; 0 1 0 0];
% initialize the state model covariance equal to the state noise variance for the first iteration only
stateCov = prcNoiseVar;
statePredCov = zeros(4,4);               % state prediction covariance
obsNoise = [0.01; 0.01];                 % noise in the observation model (zero mean)
% cnd is zero till the object is detected for the first time
cnd = 0;
bboxPred = zeros(1,4);
bboxEst = zeros(1,4);
box = zeros(3,4);
absent = 0;
for k = 1 : nFrames
    % background subtraction
    object(abs(double(grayFrame) - background) > 10) = 1;
    object = imopen(object, strel('square',3));
    object = imclose(object, strel('square',12));
    object = imfill(object, 'holes');
    % object location
    blobAnalyser = vision.BlobAnalysis;
    blobAnalyser.MinimumBlobArea = 100;
    [area, centroid, bbox] = step(blobAnalyser, object);
    if (centroid ~= 0)
        cnd = cnd + 1;
    end
    % kalman algorithm
    if (cnd == 1)
        % state model prediction
        % prediction of the observation model covariance
        obsCov = C*statePredCov*C' + obsNoiseVar;
        % observation prediction vector
        zPred = C*XPred;
    end
    if (centroid ~= 0)
        z(1:2) = centroid(1:2);
        zError = z - zPred;
        absent = 0;                      % the object is present in the frame
    else
        absent = 1;                      % the object is absent in the frame
    end
    box(2,1:2) = bboxPred(1:2);
    box(3,1:2) = bboxEst(1:2);
    subplot(3,1,1);
    title('background');
    subplot(3,1,3);
    imshow(object);
    title('binary image of detected objects');
    pause(0.001);
end
close all
OUTPUT:
Experiment 10
Aim: To perform object tracking using optical flow.
Theory:
In optical flow, motion is estimated as instantaneous image velocities or discrete image displacements. The method calculates the motion between two video frames taken at time instances t and t+δt at every pixel position. It is based on a local Taylor series approximation of the image signal, i.e. it uses partial derivatives with respect to the spatial and temporal coordinates.
For a 2D-plus-time video sequence, a pixel at location (x, y, t) with intensity I(x, y, t) will have moved by δx, δy and δt between the two frames. Under the constraint that the brightness remains unaltered at the displaced position,
I(x, y, t) = I(x+δx, y+δy, t+δt)
Assuming the movement to be small, we expand the right-hand side as a Taylor series and neglect the higher-order terms. This results in the optical flow constraint equation
Ix·Vx + Iy·Vy + It = 0
where Ix, Iy and It are the partial derivatives of the image intensity and (Vx, Vy) is the optical flow vector.
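The constraint equation can be demonstrated on a synthetic pair of frames. A Python sketch (illustrative, not the Simulink model below): the pattern is shifted one pixel in x, the derivatives are estimated by finite differences, and a least-squares fit over all pixels recovers the flow.

```python
import numpy as np

# two synthetic frames: the pattern moves one pixel in +x between them
x = np.arange(64.0)
X, Y = np.meshgrid(x, x)
f1 = np.sin(0.1*X) + np.cos(0.15*Y)
f2 = np.sin(0.1*(X - 1.0)) + np.cos(0.15*Y)

Iy, Ix = np.gradient(f1)          # spatial derivatives (rows = y, cols = x)
It = f2 - f1                      # temporal derivative

# least-squares solution of Ix*Vx + Iy*Vy + It = 0 over the whole frame
A = np.column_stack([Ix.ravel(), Iy.ravel()])
b = -It.ravel()
(vx, vy), *_ = np.linalg.lstsq(A, b, rcond=None)
print(round(vx, 2), round(vy, 2))  # ~ 1.0 and 0.0
```

A single global fit works here because the whole frame moves together; the Simulink model instead thresholds and filters the per-pixel flow field to segment independently moving objects.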
The above equation is solved to determine the optical flow vectors in the two directions x and y.
Fig. 1 shows the Simulink model to track a moving object in a given video. The video used in this model is viptraffic.avi, consisting of moving cars in a road scenario.
1. The initial block is the source block, which is used to read a multimedia file.
2. The second block is used to convert the RGB values to grayscale intensity values.
3. The Optical Flow block is used to obtain the velocity vectors in the x and y directions in complex conjugate form.
4. The fourth block uses thresholding to obtain the segmented objects in the video and to obtain the bounding box.
5. The display box is used to display the original video, the optical flow vectors, the segmented objects and the bounding box on the original video.
As in Fig. 2, the Thresholding and Region Filtering sub-block initially obtains the magnitude of the velocity vector; a threshold is then computed by averaging all the velocity vectors of the frame, and using this threshold a binary image is obtained. The image is then passed to the median filter to remove the noise. The Blob Analysis block is used on this thresholded image to obtain the bounding box of the object, as given in Fig. 3.
As in Fig. 4, the display box consists of 4 inputs: the input video, the thresholded video, the optical flow vectors and the tracked output result.
The input video and the thresholded image are directly displayed. The optical flow vectors are passed through the optical flow line function block to give the dimensions of the optical flow vectors. These dimensions are then plotted on the original video and displayed. The bounding boxes are counted to obtain the number of cars, and the bounding box is displayed on the original video to obtain the result.
SIMULINK MODEL:
Fig. 1 Main Simulink Model
Fig. 2 Thresholding and Region Filtering Sub-block
Fig. 3 Region Filtering Sub-block
Fig. 4 Display Result Sub-block
OUTPUT:
Fig. 5 (a) Nth frame of the video, (b) (N+1)th frame of the video, (c) optical flow velocity vectors for frames (a) and (b), (d) thresholding for object detection, and (e) object detection
Result: Object tracking was successfully accomplished using optical flow for the given dataset.