
ROBOTICS & OBJECT TRACKING LABORATORY
Delhi Technological University
Dept. of Electronics & Communication Engg.
VIII Semester
Under the Guidance of Kumar Mayank

Name: SAURABH GUPTA
Roll No.: 2K11/EC/140
Batch: ECE D2

INDEX
S. No. | Name of Experiment | Date of Experiment | Attendance (out of 10 classes) | Internal Assessment Marks

February 6, 2015

EXPERIMENT 1
Aim (a)(i): To study various spatial descriptions and transformations using the Peter Corke toolbox in Matlab.

>> T1 = se2(1, 2, 30*pi/180)   % se2: create a planar translation and rotation transformation
T1 =
    0.8660   -0.5000    1.0000
    0.5000    0.8660    2.0000
         0         0    1.0000
>> T2 = se2(2, 1, 0)
T2 =
     1     0     2
     0     1     1
     0     0     1
>> T3 = T1*T2
T3 =
    0.8660   -0.5000    2.2321
    0.5000    0.8660    3.8660
         0         0    1.0000
>> T4 = T2*T1
T4 =
    0.8660   -0.5000    3.0000
    0.5000    0.8660    3.0000
         0         0    1.0000
As we can see, the T3 and T4 matrices are not equal. Hence we can say that homogeneous transforms are not commutative.

Aim (a)(ii): Using the ZYZ Euler convention, write a Matlab program to compute the rotation matrix ARB when the user enters the Euler angles, e.g. alpha = 10, beta = 20, gamma = 30. Also demonstrate the inverse property of the rotation matrix.

dr = pi/180;
ang = input('enter alpha, beta and gamma in degrees');
alpha = ang(1)*dr;
% get alpha, beta and gamma in separate variables
beta = ang(2)*dr;
% degree to radian conversion
gamma = ang(3)*dr;
Tz = rotz(alpha);
% rotate alpha degree along Z direction
Ty = roty(beta);
% rotate beta degree along Y direction
Tx = rotz(gamma);
% rotate gamma degree along Z direction
Rmult = Tz*Ty*Tx;
% to get final ZYZ rotation matrix
Trpy = eul2tr(gamma,beta,alpha); % does the same using toolbox function
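Before looking at the program output, the inverse property mentioned in the aim can be checked with a minimal sketch (it reuses the matrix Rmult computed above; nothing else is assumed): for a rotation matrix the transpose equals the inverse.

% Sketch: verify the inverse property of the rotation matrix Rmult computed above
Rinv   = inv(Rmult);                  % numerical inverse
Rtrans = Rmult';                      % transpose
err1 = norm(Rinv - Rtrans)            % expected ~0 (within floating-point error)
err2 = norm(Rmult'*Rmult - eye(3))    % expected ~0, i.e. R'*R = I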
Enter alpha, beta and gamma in degrees [10 20 30]
Rmult =
    0.7146   -0.6131    0.3368
    0.6337    0.7713    0.0594
   -0.2962    0.1710    0.9397
Trpy =
    0.7146   -0.6131    0.3368         0
    0.6337    0.7713    0.0594         0
   -0.2962    0.1710    0.9397         0
         0         0         0    1.0000

Aim (a)(iii): Write a Matlab program to calculate the Euler angles, i.e. the rotation angles, from the transformation matrix ARB.
ang = tr2eul(Trpy)   % calculates rotation angles from the transformation matrix
ang/dr               % convert angles back to degrees
   10.0000   20.0000   30.0000

Aim (a)(iv): For a simple rotation of angle beta = 20 about the Y axis, find the coordinates of a point Pb with respect to frame A.
beta1 = input('enter beta in degrees');
Pb = input('enter Pb column wise');
Ty = roty(beta1*dr);   % get rotation matrix for rotation about the Y axis
Pa = Ty*(Pb)           % get coordinates w.r.t. the A frame
enter beta in degrees 20
enter Pb column wise [1;0;1]
Pa =
    1.3210
         0
   -0.5049

Aim (b)(i): Write a Matlab program to calculate the homogeneous transformation ATB when the user enters ZYX Euler angles and the position vector ApB, e.g. alpha = 10, beta = 20, gamma = 30, ApB = [1 2 3]'.
dr = pi/180;
ang = input('enter alpha, beta and gamma in degrees');
alpha = ang(1)*dr;   % get alpha, beta and gamma in separate variables
beta  = ang(2)*dr;   % degree to radian conversion
gamma = ang(3)*dr;


Trpy1 = rpy2tr(gamma,beta,alpha);   % get transformation matrix for the XYZ rotation
Ttrn = transl(1,2,3);               % transl creates a transform (4x4) representing translational components
Tba = Ttrn*Trpy1;                   % final transformation matrix with rotational and translational components

Enter alpha, beta and gamma in degrees [10 20 30]
Ttrn =
     1     0     0     1
     0     1     0     2
     0     0     1     3
     0     0     0     1
Trpy1 =
    0.9254   -0.1632    0.3420         0
    0.3188    0.8232   -0.4698         0
   -0.2049    0.5438    0.8138         0
         0         0         0    1.0000
Tba =
    0.9254   -0.1632    0.3420    1.0000
    0.3188    0.8232   -0.4698    2.0000
   -0.2049    0.5438    0.8138    3.0000
         0         0         0    1.0000

Aim (b)(ii): For ApB = [3 0 1]' and beta = 20, find the coordinates of a point Pb with respect to frame A.
Pb = input('enter Pb column wise');
Ttrn1 = transl(3,0,1)       % get translational matrix having the translational components
Tba1 = rpy2tr(0,20*dr,0);   % get transformation matrix for the given rotation
Tba2 = Ttrn1*Tba1
Pa = Tba2*[Pb;1];
Enter Pb column wise [1;0;1]
Ttrn1 =
     1     0     0     3
     0     1     0     0
     0     0     1     1
     0     0     0     1
Tba2 =
    0.9397         0    0.3420    3.0000
         0    1.0000         0         0
   -0.3420         0    0.9397    1.0000
         0         0         0    1.0000
Pa =
    4.2817
         0
    1.5977
    1.0000
1.0000
Aim (b)(iii): Write a Matlab program to calculate the inverse homogeneous transformation matrix, i.e. (ATB)^-1 = BTA. Also show that ATB . (ATB)^-1 = I4.
Tba_inv = inv(Tba2);     % inv calculates the matrix inverse
eye(4) - Tba_inv*Tba2    % eye(N) is the N-by-N identity matrix
ans =
   1.0e-15 *
    0.1110         0         0         0
         0         0         0         0
    0.0555         0         0         0
         0         0         0         0
eye(4) - Tba2*Tba_inv
ans =
   1.0e-15 *
    0.1110         0   -0.0555         0
         0         0         0         0
         0         0         0    0.1110
         0         0         0         0
Both residuals are of the order of 1e-16, i.e. essentially zero, so ATB . (ATB)^-1 = I4.

Aim (b)(iv): Write a Matlab program to calculate ATC and CTA, given that ATB and BTC are the results obtained in the previous parts: ATB for [10 20 30], ApB = [1 2 3], and BTC for beta = 20, BpC = [3 0 1].
aTc = Tba*Tba2   % ATC = ATB*BTC
aTc =
    0.7526   -0.1632    0.6379    4.1183
    0.4603    0.8232   -0.3325    2.4865
   -0.4709    0.5438    0.6946    3.1992
         0         0         0    1.0000
aTc_inv = inv(aTc)
aTc_inv =
    0.7526    0.4603   -0.4709   -2.7377
   -0.1632    0.8232    0.5438   -3.1147
    0.6379   -0.3325    0.6946   -4.0227
         0         0         0    1.0000
Result: Hence the basics related to homogeneous transformation matrices for both Eulerian and Cardanian type rotations, along with their properties, were successfully studied.

February 13, 2015 SAURABH GUPTA (2K11/EC/140)

Experiment 2
Aim (i): To study concepts of forward kinematics using Peter Corke toolbox in
Matlab.
Theory:
Forward Kinematics: To determine the position and orientation of the end-effector
given the values for the joint variables of the robot.
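As a quick illustration of this definition before using the toolbox, a closed-form sketch for a planar two-link arm (link lengths l1, l2 and joint angles q1, q2 are assumed example values) is:

% Sketch: closed-form forward kinematics of a planar 2R arm
l1 = 1; l2 = 1;                           % assumed link lengths [m]
q  = [pi/4, -pi/4];                       % assumed joint angles [rad]
x  = l1*cos(q(1)) + l2*cos(q(1)+q(2));    % end-effector x position
y  = l1*sin(q(1)) + l2*sin(q(1)+q(2));    % end-effector y position
phi = q(1) + q(2);                        % end-effector orientation
[x y phi]                                 % gives [1.7071 0.7071 0]

For q = [pi/4, -pi/4] this reproduces the translation part of the twolink.fkine result shown later in this experiment.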
>> L = Link([0, 0.1, 0.2, pi/2, 0])   % a Link object holds all information related to a robot link
L =
 theta=q, d=0.1, a=0.2, alpha=1.571 (R,stdDH)
>> L.A(0.5)   % obtain transformation matrix for theta = 0.5 rad
ans =
    0.8776   -0.0000    0.4794    0.1755
    0.4794    0.0000   -0.8776    0.0959
         0    1.0000    0.0000    0.1000
         0         0         0    1.0000
>> L.RP       % gets type of joint
ans =
R
>> L.a        % gets link length
ans =
    0.2000
>> L.offset = 0.5;   % specify theta in advance i.e. 0.5 rad
>> L.A(0)            % get the transformation matrix now
ans =                % produces the same matrix as above as the link offset was specified initially
    0.8776   -0.0000    0.4794    0.1755
    0.4794    0.0000   -0.8776    0.0959
         0    1.0000    0.0000    0.1000
         0         0         0    1.0000
>> L(1) = Link([0 0 1 0])   % create L(1) Link object
L =
 theta=q, d=0, a=1, alpha=0 (R,stdDH)
>> L(2) = Link([0 0 1 0])   % create L(2) Link object
L =
 theta=q1, d=0, a=1, alpha=0 (R,stdDH)
 theta=q2, d=0, a=1, alpha=0 (R,stdDH)
>> two_link = SerialLink(L, 'name', 'two link');   % the SerialLink class represents a serial-link arm-type robot
two_link =
two link (2 axis, RR, stdDH, slowRNE)
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          1|          0|
|  2|         q2|          0|          1|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
% grav specifies that gravity will be applied in the z direction
% base specifies the initial position of the robot
% tool specifies the end-effector transform

>> mdl_twolink    % script to directly produce a two link robot
twolink =
two link (2 axis, RR, stdDH, slowRNE)
 from Spong, Hutchinson, Vidyasagar;
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          1|          0|
|  2|         q2|          0|          1|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> twolink.fkine([0 0])   % fkine gives the pose (4x4) of the robot end-effector as a homogeneous transformation
ans =
     1     0     0     2
     0     1     0     0
     0     0     1     0
     0     0     0     1
>> twolink.fkine([pi/4 -pi/4])
ans =
    1.0000         0         0    1.7071
         0    1.0000         0    0.7071
         0         0    1.0000         0
         0         0         0    1.0000
% plot displays the robot orientation in 3D
>> twolink.plot([0 0])
>> twolink.plot([pi/4 pi/4])
% robot with one revolute and one prismatic joint
L(1) = Link([0 0 1 0 0]);
L(2) = Link([0 0 1 0 1]);
Linking = SerialLink(L, 'name', 'myrobot');
Linking.plot([0 1]);
>> mdl_puma560;
>> p560.plot(qz)
>> p560.plot(qr)
>> p560.plot(qs)
>> p560.plot(qn)

Code:
Write down a Matlab program for the planar 3 DOF RRR robot with the following parameters:
L1 = 4 m, L2 = 3 m, L3 = 2 m
Derive the forward kinematics, i.e. 0T3, for the following input cases:

(1)
(2)
(3)

[0 0 0]T
[10 20 30]T
[90 90 90]T

>> B(1) = Link([0 0 4 0]);
>> B(2) = Link([0 0 3 0]);
>> B(3) = Link([0 0 2 0])
B =
 theta=q1, d=0, a=4, alpha=0 (R,stdDH)
 theta=q2, d=0, a=3, alpha=0 (R,stdDH)
 theta=q3, d=0, a=2, alpha=0 (R,stdDH)
>> saurabh = SerialLink(B, 'name', 'myBot')
saurabh =
myBot (3 axis, RRR, stdDH, slowRNE)
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          4|          0|
|  2|         q2|          0|          3|          0|
|  3|         q3|          0|          2|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> dr = pi/180;
>> q1 = [0 0 0]*dr;
>> q2 = [10 20 30]*dr;
>> q3 = [90 90 90]*dr;
>> T30_1 = saurabh.fkine(q1)
T30_1 =
     1     0     0     9
     0     1     0     0
     0     0     1     0
     0     0     0     1
>> T30_2 = saurabh.fkine(q2)
T30_2 =
    0.5000   -0.8660         0    7.5373
    0.8660    0.5000         0    3.9266
         0         0    1.0000         0
         0         0         0    1.0000
>> T30_3 = saurabh.fkine(q3)
T30_3 =
   -0.0000    1.0000         0   -3.0000
   -1.0000   -0.0000         0    2.0000
         0         0    1.0000         0
         0         0         0    1.0000
>> saurabh.plot([0 0 0], 'workspace', [-5 5 -5 5 -5 5])
>> saurabh.plot([10*dr, 20*dr, 30*dr], 'workspace', [-5 5 -5 5 -5 5])
>> saurabh.plot([90*dr, 90*dr, 90*dr], 'workspace', [-5 5 -5 5 -5 5])

Result: Hence the robot simulation in Matlab was successfully performed for
different types of robots in different configurations.

February 20, 2015 SAURABH GUPTA (2K11/EC/140)

EXPERIMENT 3
Aim: To study concepts of inverse kinematics using Peter Corke toolbox in Matlab.
Theory:
Inverse Kinematics: To determine the values of the joint variables given the end-effector position and orientation.
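For intuition before using the toolbox solvers, a minimal closed-form sketch for a planar two-link arm is given below (the link lengths l1, l2 and the target point (x, y) are assumed example values, not part of the experiment):

% Sketch: closed-form inverse kinematics of a planar 2R arm
l1 = 1; l2 = 1;                                  % assumed link lengths
x = 1.7071; y = 0.7071;                          % assumed target end-effector position
c2 = (x^2 + y^2 - l1^2 - l2^2)/(2*l1*l2);
q2 = atan2(sqrt(1 - c2^2), c2);                  % one of the two solutions; use -sqrt(...) for the other elbow
q1 = atan2(y, x) - atan2(l2*sin(q2), l1 + l2*cos(q2));
[q1 q2]*180/pi                                   % joint angles in degrees

The numerical solvers used below (ikine6s, ikine) do the same job for general serial-link robots, which is why the choice of initial estimate and mask matters.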

>> mdl_puma560
>> qn
qn =
         0    0.7854    3.1416         0    0.7854         0
>> T = p560.fkine(qn)   % fkine gives the pose (4x4) of the robot end-effector as a homogeneous transformation
T =
   -0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1501
   -1.0000    0.0000    0.0000   -0.0144
         0         0         0    1.0000
>> qi = p560.ikine6s(T)   % ikine6s gives the joint coordinates corresponding to the robot end-effector with pose T
qi =
   -0.0000    0.7854    3.1416    3.1416   -0.7854   -3.1416
>> p560.fkine(qi)
ans =
   -0.0000   -0.0000    1.0000    0.5963
   -0.0000    1.0000    0.0000   -0.1501
   -1.0000   -0.0000   -0.0000   -0.0144
         0         0         0    1.0000
>> qi = p560.ikine6s(T, 'lu')
qi =
   -0.0000    0.7854    3.1416    3.1416   -0.7854   -3.1416
>> p560.plot(qi)
>> qi = p560.ikine6s(T, 'ru')
qi =
    2.6486   -2.3081    3.1416   -2.4673   -0.8604   -0.4805
>> p560.plot(qi)
>> p560.ikine6s( transl(3, 0, 0) )
ans =
   NaN   NaN   NaN   NaN   NaN   NaN    % shows that the point is unreachable
>> qi = p560.ikine(T)   % ikine gives the joint coordinates corresponding to the robot end-effector with pose T
qi =
   -0.0000   -0.8335    0.0940   -0.0000   -0.8312   -0.0000
>> p560.fkine(qi)
ans =
    0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1501
   -1.0000   -0.0000    0.0000   -0.0144
         0         0         0    1.0000
>> p560.plot(qi)
>> qi = p560.ikine(T, [0 0 3 0 0 0])   % here [0 0 3 0 0 0] gives the initial estimate of the robot
qi =
   -0.0000    0.7854    3.1416    0.0000    0.7854   -0.0000
>> p560.plot(qi)

Code:
Write down a Matlab program to solve planar RRR robot inverse kinematics
problem having the length parameters L1=4m, L2=3m, L3=2m.
For the following relative pose
(a) 0TH = [1 0 0 9; 0 1 0 0; 0 0 1 0; 0 0 0 1]
(b) 0TH = [0.5 -0.866 0 7.5373; 0.866 0.6 0 3.9266; 0 0 1 0; 0 0 0 1]
(c) 0TH = [0 1 0 -3; -1 0 0 2; 0 0 1 0; 0 0 0 1]
(d) 0TH = [0.866 0.5 0 -3.1245; -0.5 0.866 0 9.1674; 0 0 1 0; 0 0 0 1]
(a) TH0 =
     1     0     0     9
     0     1     0     0
     0     0     1     0
     0     0     0     1
>> dr = pi/180;
>> q1 = saurabh.ikine(TH0, [0 0 0], [1 1 0 0 0 1])
q1 =
     0     0     0
>> q1 = saurabh.ikine(TH0, [90 -90 90]*dr, [1 1 0 0 0 1])
q1 =
   1.0e-03 *
    0.1934   -0.4512    0.2578
>> saurabh.plot(q1, 'workspace', [-8 8 -8 8 -8 8])
>> saurabh.fkine(q1)
ans =
    1.0000    0.0000         0    9.0000
    0.0000    1.0000         0    0.0000
         0         0    1.0000         0
         0         0         0    1.0000

(b) TH1 = [0.5 -0.866 0 7.5373; 0.866 0.6 0 3.9266; 0 0 1 0; 0 0 0 1]
TH1 =
    0.5000   -0.8660         0    7.5373
    0.8660    0.6000         0    3.9266
         0         0    1.0000         0
         0         0         0    1.0000
>> q1 = saurabh.ikine(TH1, [0 0 0], [1 1 0 0 0 1])
matrix not orthonormal rotation matrix
% The above can be proved as follows: for an orthonormal matrix the result should be the identity matrix
>> TH1(1:3,1:3)*TH1(1:3,1:3)'
ans =
    1.0000   -0.0866         0
   -0.0866    1.1100         0
         0         0    1.0000

(c) TH2 = [0 1 0 -3; -1 0 0 2; 0 0 1 0; 0 0 0 1]
TH2 =
     0     1     0    -3
    -1     0     0     2
     0     0     1     0
     0     0     0     1
>> q2 = saurabh.ikine(TH2, [0 0 0], [1 1 0 0 0 1])
q2 =
    1.5708    1.5708    1.5708
>> saurabh.plot(q2)
>> q2 = saurabh.ikine(TH2, [90 -90 90]*dr, [1 1 0 0 0 1])
q2 =
    2.8578   -1.5708   -2.8578
>> saurabh.plot(q2)

(d) TH3 =
    0.8660    0.5000         0   -3.1245
   -0.5000    0.8660         0    9.1674
         0         0    1.0000         0
         0         0         0    1.0000

>> q3 = saurabh.ikine(TH3, [0 0 0]*dr, [1 1 0 0 0 1])
q3 =
    1.6852    1.6649    2.4095
>> saurabh.fkine(q3)
ans =
    0.8660    0.5000         0   -1.6596
   -0.5000    0.8660         0    2.3529
         0         0    1.0000         0
         0         0         0    1.0000
>> q3 = saurabh.ikine(TH3, [90 -90 90]*dr, [1 1 0 0 0 1])   % for a different initial estimate
q3 =
   1.0e+04 *
   -2.6746    6.4932   -3.7081
>> saurabh.fkine(q3)
ans =
    0.8769    0.4808         0   -1.6122
   -0.4808    0.8769         0    2.3772
         0         0    1.0000         0
         0         0         0    1.0000
>> q3 = saurabh.ikine(R, [0 0 0], [2 0 0 0 0 1])   % for a different mask
q3 =
   -2.0655    0.9075    2.4494
>> saurabh.fkine(q3)
ans =
    0.8660    0.5000         0   -3.1245
   -0.5000    0.8660         0    5.0238
         0         0    1.0000         0
         0         0         0    1.0000

>> saurabh.plot(q3)
The different results for the same pose 0TH are due to the different choices of initial estimate (Q0) and mask (M).
Result: Hence the various concepts related to inverse kinematics were successfully studied for different robots in varying configurations.

March 20, 2015

SAURABH GUPTA (2K11/EC/140)

Experiment - 4
Aim: To perform the forward kinematics of a 3 DOF planar robot, Stanford manipulator and PUMA robot for the given DH parameters using RoboAnalyzer, calculate the end-effector position for a given set of joint values using Matlab, and verify the results with RoboAnalyzer.
Theory:
Forward Kinematics: To determine the position and orientation of the end-effector
given the values for the joint variables of the robot.
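Since both RoboAnalyzer and the Matlab verification below are driven by DH parameters, a small sketch of the standard DH link transform that each tool builds internally may help; the numeric values of theta, d, a and alpha are assumed examples only.

% Sketch: standard DH link transform A = Rz(theta)*Tz(d)*Tx(a)*Rx(alpha)
theta = 0.5; d = 0.1; a = 0.2; alpha = pi/2;    % assumed example values
A = [cos(theta) -sin(theta)*cos(alpha)  sin(theta)*sin(alpha)  a*cos(theta);
     sin(theta)  cos(theta)*cos(alpha) -cos(theta)*sin(alpha)  a*sin(theta);
     0           sin(alpha)             cos(alpha)             d;
     0           0                      0                      1]
% The same matrix is returned by the toolbox call Link([0 d a alpha]).A(theta),
% and the overall forward kinematics is the product of these per-link transforms.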
Using Roboanalyser: RoboAnalyzer is a 3D Model Based Robotics Learning System
developed using OpenTK and Visual C#.

The above figures have been suitably labeled for proper description. The UI may be
explained as follows:

1. In this block, one can select the DOF and type of robot from the tabs provided on the far left. The centre table shows the DH parameters for the robot. These can be changed as per the requirement.
2. Here 4 tabs are present. One can see a DH parameter animation for the robot in the Visualize DH tab. Link Config and EE Config give the transformation matrix for a particular link and from base to end-effector respectively.
3. Once the robot is configured, press the FKin button to complete the forward kinematic analysis. Then press the play button to view the trajectory of the robot with the initial and final positions as specified in block 1. One can also change the time duration and number of steps taken by the robot for reaching from the initial to the final position.
4. Here one can see the actual robot, the coordinate axes w.r.t. each link, and the trajectory plotted after hitting the play button.
5. This screen is obtained by hitting the Graph tab at the top of the screen. Here one can see the variation in position for each link separately with time and the joint parameters such as joint value, velocity, acceleration etc.
6. This block shows the graph for the parameter selected in block 5.


Code:
(a) 3 DOF Planar robot

>> L(1) = Link([0,0,0.3,0])
>> L(2) = Link([0,0,0.25,0])
>> L(3) = Link([0,0,0.1,0])
>> saurabh = SerialLink(L, 'name', 'myBot')
saurabh =
myBot (3 axis, RRR, stdDH, slowRNE)
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|        0.3|          0|
|  2|         q2|          0|       0.25|          0|
|  3|         q3|          0|        0.1|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> dr = pi/180
dr =
    0.0175
>> T30_1 = saurabh.fkine([0 0 0])
T30_1 =
    1.0000         0         0    0.6500
         0    1.0000         0         0
         0         0    1.0000         0
         0         0         0    1.0000
RoboAnalyzer output:


(b) Puma Robot


>> B(1) = Link([0 0.77421 0 -pi/2])
>> B(2) = Link([0 0.101592 0.506628 pi])
>> B(3) = Link([0 -0.0381 0.02 -pi/2])
>> B(4) = Link([0 0.267969 0 pi/2])
>> B(5) = Link([0 0 0 -pi/2])
>> B(6) = Link([0 0.05842 0 0])
>> saurabh_puma = SerialLink(B, 'name', 'myBot')
saurabh_puma =
myBot (6 axis, RRRRRR, stdDH, slowRNE)
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|     0.7742|          0|     -1.571|
|  2|         q2|     0.1016|     0.5066|      3.142|
|  3|         q3|    -0.0381|       0.02|     -1.571|
|  4|         q4|      0.268|          0|      1.571|
|  5|         q5|          0|          0|     -1.571|
|  6|         q6|     0.0584|          0|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> T30_1 = saurabh_puma.fkine([0 0 0 0 0 0])
T30_1 =
    1.0000         0         0    0.5266
         0    1.0000         0    0.1397
         0         0    1.0000    1.1006
         0         0         0    1.0000
RoboAnalyzer output:


(c) Stanford Robot

>> S(1) = Link([0 0.762 0 -pi/2])
>> S(2) = Link([0 0.393412 0 -pi/2])
>> S(3) = Link([-pi/2 0 0 0 1])
>> S(4) = Link([0 0.2268 0 -pi/2])
>> S(5) = Link([0 0 0 -pi/2])
>> S(6) = Link([0 0.4318 0 0])
>> saurabh_stanford = SerialLink(S, 'name', 'myBot')
saurabh_stanford =
myBot (6 axis, RRPRRR, stdDH, slowRNE)
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|      0.762|          0|     -1.571|
|  2|         q2|     0.3934|          0|     -1.571|
|  3|     -1.571|         q3|          0|          0|
|  4|         q4|     0.2268|          0|     -1.571|
|  5|         q5|          0|          0|     -1.571|
|  6|         q6|     0.4318|          0|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> T30_1 = saurabh_stanford.fkine([0 -pi/2 0.635 0 pi pi])
T30_1 =
    0.0000    0.0000    1.0000    1.2936
         0    1.0000   -0.0000    0.3934
   -1.0000    0.0000    0.0000    0.7620
         0         0         0    1.0000
RoboAnalyzer output:


Result: Hence the various robots were successfully simulated in the RoboAnalyzer environment in different configurations. Further, the transformation matrices obtained using Matlab for the given joint values were verified using RoboAnalyzer.

March 27, 2015

SAURABH GUPTA (2K11/EC/140)

Experiment 5
Aim: Perform the inverse kinematics of a 3DOF planar & articulated robot of the
given DH parameters using RoboAnalyser. Derive the analytical expressions for the
inverse kinematic problem of 3 DOF planar robot and verify the solutions from
Matlab.
Theory:
Inverse Kinematics: To determine the values of the joint variables given the end-effector position and orientation.
Using RoboAnalyzer: RoboAnalyzer is a 3D Model Based Robotics Learning System developed using OpenTK and Visual C#.

The above figures have been suitably labeled for proper description. The UI may be
explained as follows:

1. To perform Inverse Kinematic analysis, press the IKin button, which opens up a new window.
2. Here, select the type of robot that is to be analysed, e.g. 2R, 3R, wrist etc.
3. This block is used to set up the various parameters used to configure the robot such as link length (a), joint offset (b) and twist angle (alpha). Now, in order to perform Inverse Kinematics, specify the end-effector coordinates w.r.t. the base frame.
4. Press the IKin button to start the analysis.
5. The corresponding results, i.e. the joint values, will get displayed in block 5. Two solutions are provided, one for the left-hand configuration and the other for the right-hand configuration.
6. One may press the Show button to display the robot simulation for the obtained joint values.
7. Now the 2 obtained solutions may be used to plot the robot trajectory from one solution to the other. Select either of the solutions as the initial position and the other solution as the final position.
8. Press the OK button. This step changes the initial and final joint values in the DH parameter table.
9. Now press the FKin button to display the corresponding trajectory from solution 1 to solution 2.


Code:
The solution may be obtained from Matlab using trigonometric analysis or directly
from RoboAnalyzer.
(a) 3 DOF planar robot
a1=2; a2=2; a3=1;

% Non-zero constant DH parameters

phi=pi/3; px = 2.5 + sqrt(3); py = 1 + sqrt(3)/2;

% Input

wx = px - a3*cos(phi);
wy = py-a3*sin(phi); del=wx*wx + wy*wy;
c2 = (del-a1*a1-a2*a2)/(2*a1*a2);

% Calculation for theta2

s2 = sqrt(1-c2*c2);
th21=atan2(s2,c2); th22=atan2(-s2,c2);
%Calculation for theta1
s11 = ((a1+a2*cos(th21))*wy - a2*sin(th21)*wx)/del;
c11 = ((a1+a2*cos(th21))*wx + a2*sin(th21)*wy)/del;
s12 = ((a1+a2*cos(th22))*wy - a2*sin(th22)*wx)/del;
c12 = ((a1+a2*cos(th22))*wx + a2*sin(th22)*wy)/del;
th11 = atan2(s11,c11); th12 = atan2(s12,c12);
th31 = phi-th11-th21; th32 = phi-th12-th22;    % calculation for theta3
r2d = 180/pi;    % angle in degrees
th11d = th11*r2d; th12d = th12*r2d; th21d = th21*r2d;
th22d = th22*r2d; th31d = th31*r2d; th32d = th32*r2d;
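A quick forward-kinematics check of these analytical solutions (a sketch that only reuses the variables a1, a2, a3, phi, px, py and the angles computed above) confirms that both branches reach the commanded pose:

% Sketch: substitute solution 1 (th11, th21, th31) back into the forward kinematics
x_chk   = a1*cos(th11) + a2*cos(th11+th21) + a3*cos(th11+th21+th31);
y_chk   = a1*sin(th11) + a2*sin(th11+th21) + a3*sin(th11+th21+th31);
phi_chk = th11 + th21 + th31;
[x_chk y_chk phi_chk]      % should reproduce px, py and phi within rounding error
% Repeating the same check with th12, th22, th32 verifies the second branch.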

RoboAnalyzer Output


(2) 3R Articulated robot
>> a2=1; a3=1;
>> px=1; py=0; pz=0;
>> delxy = px*px + py*py; del = delxy + pz*pz;
>> th11 = atan2(py,px);
>> th12 = pi + atan2(py,px);
>> c3 = (del - a2*a2 - a3*a3)/(2*a2*a3); s3 = sqrt(1 - c3*c3);
>> th31 = atan2(s3,c3); th32 = atan2(-s3,c3);
>> s21 = (-(a2+a3*cos(th31))*pz - a3*s3*delxy)/del;
>> c21 = ((a2+a3*cos(th31))*delxy + a3*s3*pz)/del;
>> s22 = ((a2+a3*cos(th31))*pz + a3*s3*delxy)/del;
>> c22 = ((a2+a3*cos(th31))*delxy - a3*s3*pz)/del;
>> th21 = atan2(s21,c21);
>> th22 = atan2(s22,c22);
>> th23 = pi - th21;
>> th24 = pi - th22;
>> r2d = 180/pi;
>> th11d = th11*r2d;
>> th12d = th12*r2d;
>> th21d = th21*r2d;
>> th22d = th22*r2d;
>> th23d = th23*r2d;
>> th24d = th24*r2d;
>> th31d = th31*r2d;
>> th32d = th32*r2d;
RoboAnalyzer Output

Result: Hence the Inverse Kinematic Analysis for various robots is successfully
done using RoboAnalyzer.
The results are then verified using Matlab too.

April 17, 2015

SAURABH GUPTA (2K11/EC/140)

Experiment 6
Aim: To study velocity kinematics using Peter Corke tool box.

Theory:
Velocity Kinematics: Velocity kinematics relates the linear and angular velocities of the end-effector to the joint velocities.
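In toolbox terms this relation is a single matrix product between the joint rates and the end-effector spatial velocity; the sketch below assumes the PUMA 560 model p560 and configuration qn loaded by mdl_puma560, as used throughout this experiment, and an example joint-velocity vector.

% Sketch: velocity kinematics as xdot = J(q)*qdot
mdl_puma560                    % loads p560 and the configuration qn
qd   = [0.1 0 0 0 0 0]';       % assumed joint velocity vector [rad/s]
J    = p560.jacob0(qn);        % 6x6 Jacobian in the world frame
xdot = J * qd                  % [vx vy vz wx wy wz]' of the end-effector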
>> mdl_puma560
>> T0 = p560.fkine(qn)    % obtain transformation matrix
T0 =
    0.0000    0.0000    1.0000    0.5963
    0.0000    1.0000    0.0000   -0.1501
   -1.0000    0.0000    0.0000   -0.0144
         0         0         0    1.0000
>> dq = 1e-6;    % give a slight disturbance
>> Tp = p560.fkine(qn + [dq 0 0 0 0 0])    % obtain new transformation matrix
Tp =
    0.0000    0.0000    1.0000    0.5963
    0.0000    1.0000    0.0000   -0.1501
   -1.0000    0.0000    0.0000   -0.0144
         0         0         0    1.0000
>> dTdq1 = (Tp - T0) / dq    % obtain the differential
dTdq1 =
         0   -1.0000   -0.0000    0.1500
    0.0000    0.0000    1.0000    0.5963
         0         0         0         0
         0         0         0         0
>> dRdq1 = dTdq1(1:3,1:3)    % get the 3x3 differential rotation matrix
dRdq1 =
         0   -1.0000   -0.0000
    0.0000    0.0000    1.0000
         0         0         0


>> R = T0(1:3, 1:3)    % obtain the rotational matrix from T0
R =
    0.0000    0.0000    1.0000
    0.0000    1.0000    0.0000
   -1.0000    0.0000    0.0000
>> S = dRdq1 * R'      % obtain the skew-symmetric matrix
S =
   -0.0000   -1.0000    0.0000
    1.0000   -0.0000    0.0000
         0         0         0
>> vex(S)              % get the vector from the skew-symmetric matrix
ans =
   -0.0000
    0.0000
    1.0000
>> J = p560.jacob0(qn)    % get Jacobian w.r.t. world coordinates
J =
    0.1501    0.0144    0.3197         0         0         0
    0.5963    0.0000    0.0000         0         0         0
         0    0.5963    0.2910         0         0         0
         0   -0.0000   -0.0000    0.7071   -0.0000    1.0000
         0   -1.0000   -1.0000   -0.0000   -1.0000   -0.0000
    1.0000    0.0000    0.0000   -0.7071    0.0000    0.0000
>> T = transl(1, 0, 0)*troty(pi/2)    % get a transformation matrix with rotation about the Y direction
T =
    0.0000         0    1.0000    1.0000
         0    1.0000         0         0
   -1.0000         0    0.0000         0
         0         0         0    1.0000
>> J = tr2jac(T)    % get the new Jacobian matrix w.r.t. the new frame
J =
    0.0000         0   -1.0000         0    1.0000         0
         0    1.0000         0         0         0    1.0000
    1.0000         0    0.0000         0   -0.0000         0
         0         0         0    0.0000         0   -1.0000
         0         0         0         0    1.0000         0
         0         0         0    1.0000         0    0.0000

>> vB = J*[1 0 0 0 0 0]';
>> vB'
ans =
    0.0000         0    1.0000         0         0         0
>> p560.jacobn(qn)    % get Jacobian matrix w.r.t. the end-effector
ans =
   -0.0000   -0.5963   -0.2910         0         0         0
    0.5963    0.0000    0.0000         0         0         0
    0.1500    0.0144    0.3197         0         0         0
   -1.0000         0         0    0.7071         0         0
         0   -1.0000   -1.0000    0.0000   -1.0000         0
         0    0.0000    0.0000    0.7071    0.0000    1.0000
>> p560.jacob0(qn, 'eul')    % get Jacobian matrix for Euler angle rates
ans =
    0.1501    0.0144    0.3197         0         0         0
    0.5963    0.0000    0.0000         0         0         0
         0    0.5963    0.2910         0         0         0
    1.0000    0.0000    0.0000   -0.7071    0.0000    0.0000
    0.0000   -1.0000   -1.0000   -0.0000   -1.0000   -0.0000
    0.0000    0.0000    0.0000    0.7071    0.0000    1.0000
To study inverse kinematics and develop a MATLAB program using the Peter Corke toolbox to calculate the Jacobian matrix for a planar 3R robot, given the link lengths l1 = 4 m, l2 = 3 m, l3 = 2 m and initial joint angles (theta1, theta2, theta3) = [10, 20, 30] degrees.

>> L(1) = Link([0 0 4 0]);
>> L(2) = Link([0 0 3 0]);
>> L(3) = Link([0 0 2 0])
L =
 theta=q1, d=0, a=4, alpha=0 (R,stdDH)
 theta=q2, d=0, a=3, alpha=0 (R,stdDH)
 theta=q3, d=0, a=2, alpha=0 (R,stdDH)
>> two_link_jacob = SerialLink(L, 'name', 'two_link_jacob')
two_link_jacob =
two_link_jacob (3 axis, RRR, stdDH, slowRNE)
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          4|          0|
|  2|         q2|          0|          3|          0|
|  3|         q3|          0|          2|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> dr = pi/180
dr =
    0.0175
>> q0 = [10 20 30]*dr
q0 =
    0.1745    0.3491    0.5236
>> jacobian = jacob0(two_link_jacob, q0)
jacobian =
   -3.9266   -3.2321   -1.7321
    7.5373    3.5981    1.0000
         0         0         0
         0         0         0
         0         0         0
    1.0000    1.0000    1.0000


Result: Various concepts related to velocity kinematics were studied and the Jacobian matrix was obtained successfully for the 3 DOF planar robot.

April 17, 2015

SAURABH GUPTA (2K11/EC/140)

Experiment 7
Aim: To study the forward dynamics of a 3 DOF 3R robot.
Theory:
Dynamics: Robot dynamics is concerned with the relationship between the forces acting on a robot mechanism and the accelerations they produce. Forward dynamics works out the accelerations given the forces.
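In the toolbox the two directions of this relationship map onto two calls; the sketch below assumes the PUMA 560 model used in this experiment: rne for inverse dynamics (forces from motion) and accel for forward dynamics (motion from forces).

% Sketch: inverse vs. forward dynamics with the toolbox
mdl_puma560                        % loads p560, qn, qz
tau = p560.rne(qn, qz, qz);        % inverse dynamics: torque for given q, qd, qdd
qdd = p560.accel(qn, qz, tau);     % forward dynamics: acceleration for given q, qd, tau
% qdd should essentially reproduce the zero acceleration assumed when computing tau.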
>> mdl_puma560
>> Q = p560.rne(qn, qz, qz)    % joint torque required for the robot to achieve the specified joint position Q, velocity QD and acceleration QDD
Q =
   -0.0000   31.6399    6.0351    0.0000    0.0283         0
>> Q = p560.rne(qn, qz, qz, [0 0 9.81]')    % [0 0 9.81] gives the gravitational vector
Q =
   -0.0000   31.6399    6.0351    0.0000    0.0283         0
>> [q, q1, q2] = jtraj(qz, qr, 10)    % compute a joint space trajectory between two points


>> p560.links(1).dyn        % display the dynamic parameters of link 1
>> p560.gravity'            % default gravity assumed by the puma robot
ans =
         0         0    9.8100
>> p560.gravload(qn)        % torque required to maintain the robot in position
ans =
   -0.0000   31.6399    6.0351    0.0000    0.0283         0
>> p560.gravity = p560.gravity/6;
>> p560.gravload(qn)
ans =
    0.0000    5.2733    1.0059    0.0000    0.0047         0
>> p560.base = trotx(pi);   % rotate robot vertically
>> p560.gravload(qn)
ans =
    0.0000   -5.2733   -1.0059   -0.0000   -0.0047         0
>> Q = p560.gravload(qs)    % robot in qs position
Q =
   -0.0000   46.0069    8.7722    0.0000    0.0283         0
>> Q = p560.gravload(qr)
Q =
         0   -0.7752    0.2489         0         0         0

To study the forward dynamics of a 3 DOF 3R planar robot and develop a program using the Peter Corke toolbox, with L1 = 4 m, L2 = 3 m, L3 = 2 m and m1 = 20 kg, m2 = 15 kg, m3 = 10 kg. Ignore gravity by assuming that gravity acts normal to the plane of the robot. Using the Peter Corke toolbox and Matlab commands, solve the forward dynamics problem (i.e. obtain the joint motion from the available driving joint torques and the initial joint angles and joint rates). Perform the simulation for 4 sec.

L(1) = Link([0 0 4 0 0 20 2 0 0 0 0 0.5 0 0 0 0 1 0 0 0]);     % Link 1 with a = 4 m, m = 20 kg, I = 0.5 kg m^2
L(2) = Link([0 0 3 0 0 15 1.5 0 0 0 0 0.2 0 0 0 0 1 0 0 0]);   % Link 2 with a = 3 m, m = 15 kg, I = 0.2 kg m^2
L(3) = Link([0 0 2 0 0 10 1 0 0 0 0 0.1 0 0 0 0 1 0 0 0]);     % Link 3 with a = 2 m, m = 10 kg, I = 0.1 kg m^2
L(1).mdh = 1;    % use modified DH parameters
L(2).mdh = 1;
L(3).mdh = 1;
two_link_sau = SerialLink(L, 'name', 'two link_sau');
two_link_sau =
two link_sau (3 axis, RRR, stdDH, slowRNE)
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          4|          0|
|  2|         q2|          0|          3|          0|
|  3|         q3|          0|          2|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
t0 = 0; tf = 4; N = 40; dt = (tf-t0)/N;
>> dt
dt =
    0.1000
t = [t0:dt:tf];
dr = pi/180;
Q0 = [-60 90 30]*dr    % specify initial robot position
Q0 =
   -1.0472    1.5708    0.5236
[tf Q Qd] = two_link_sau.fdyn(1, [20;5;1], Q0, [0.5 0 0])    % do forward dynamics for 1 s with [20;5;1] torque

Conclusion: Hence forward dynamics of 3DOF planar robot is studied


successfully with the given link configurations.

Learning Outcome: In this experiment a clear understanding of the relationship between force/torque and object motion was developed. Both the torque required to keep the robot in a particular position and the object motion resulting from an applied force were studied.

May 1, 2015 SAURABH GUPTA (2K11/EC/140)

Experiment 8
Aim: To study Peter Corke Computer Vision toolbox.
Theory:
// Spectral Representation of Light //
// define a range of wavelengths //
>> lambda = [300:10:1000]*1e-9;
// in this case from 300 to 1000 nm, and then compute the blackbody spectra //
>> for T=1000:1000:6000
>>     plot( lambda*1e9, blackbody(lambda, T)); hold all
>> end
as shown in Fig. 8.1a.

// The filament of a tungsten lamp has a temperature of 2600 K and glows white hot. The Sun has a surface temperature of 6500 K. The spectra of these sources are compared in Fig. 8.1b. //
>> lamp = blackbody(lambda, 2600);
>> sun = blackbody(lambda, 6500);
>> plot(lambda*1e9, [lamp/max(lamp) sun/max(sun)])
// The Sun's spectrum at ground level on the Earth has been measured and tabulated //
>> sun_ground = loadspectrum(lambda, 'solar.dat');
>> plot(lambda*1e9, sun_ground)
and is shown in Fig. 8.3a.


Absorption
>> [A, lambda] = loadspectrum([400:10:700]*1e-9, 'water.dat');   // A is the absorption coefficient //
>> d = 5;            // d is the path length //
>> T = 10.^(-A*d);   // transmission T is the inverse of absorption //
>> plot(lambda*1e9, T);
which is plotted in Fig. 8.3b.


Reflection
// the reflectivity of a red house brick //
>> [R, lambda] = loadspectrum([100:10:10000]*1e-9, 'redbrick.dat');
>> plot(lambda*1e6, R);
which is plotted in Fig. 8.4. We see that it reflects red colors more than blue.
// The illuminance of the Sun in the visible region //
>> lambda = [400:10:700]*1e-9;   % visible spectrum
>> E = loadspectrum(lambda, 'solar.dat');
// at ground level. The reflectivity of the brick is //
>> R = loadspectrum(lambda, 'redbrick.dat');
// and the light reflected from the brick is //
>> L = E .* R;
>> plot(lambda*1e9, L);
// which is shown in Fig. 8.5. It is this spectrum that is interpreted by our eyes as the color red. //


Color
// The luminosity function is provided by the Toolbox //
>> human = luminos(lambda);
>> plot(lambda*1e9, human)
// and is shown in Fig. 8.6a. //
// The spectral responses of the cones can be loaded and are shown in Fig. 8.7b //
>> cones = loadspectrum(lambda, 'cones.dat');
>> plot(lambda*1e9, cones)

Reproducing Colors


// The cmfrgb function returns the color matching functions //
>> lambda = [400:10:700]*1e-9;
>> cmf = cmfrgb(lambda);
>> plot(lambda*1e9, cmf);
>> orange = cmfrgb(600e-9)   // create the sensation of light at 600 nm (orange) //
orange =
    2.8717    0.3007   -0.0043
>> green = cmfrgb(500e-9)
green =
   -0.2950    0.4906    0.1075
>> w = -green(1);
>> white = [w w w];
>> feasible_green = green + white
feasible_green =
         0    0.7856    0.4025
>> white
white =
    0.2950    0.2950    0.2950
// The Toolbox function cmfrgb can also compute the CIE tristimulus for an arbitrary spectral response //
>> RGB_brick = cmfrgb(lambda, L)
RGB_brick =
    0.6137    0.1416    0.0374
Chromaticity Space

// The Toolbox function lambda2rg computes the color matching function //



>> [r,g] = lambda2rg( [400:700]*1e-9 );
>> plot(r, g)
>> rg_addticks
>> primaries = cmfrgb( [700, 546.1, 435.8]*1e-9 );
>> plot(primaries(:,1), primaries(:,2), 'd')
// chromaticity of the spectral green color //
>> green_cc = lambda2rg(500e-9)
green_cc =
   -0.9733    1.6187
>> plot2(green_cc, 's')
>> white_cc = tristim2cc([1 1 1])   // of white color //
white_cc =
    0.3333    0.3333
>> plot2(white_cc, '*')

Color Names
>> colorname('?burnt')
ans =
    'burntsienna'
    'burntumber'
>> colorname('burntsienna')
ans =
    0.5412    0.2118    0.0588
>> bs = colorname('burntsienna', 'xy')
bs =
    0.5258    0.3840
>> colorname('chocolate', 'xy')
ans =
    0.5092    0.4026
>> colorname([0.54 0.20 0.06])
ans =
burntsienna



>> colorname(bs, 'xy')
ans =
'burntsienna'
Other Color Spaces
// The function colorspace can be used to perform conversions between different color spaces //
>> colorspace('RGB->HSV', [1, 0, 0])
ans =
     0     1     1
>> colorspace('RGB->HSV', [0, 1, 0])
ans =
   120     1     1
>> colorspace('RGB->HSV', [0, 0, 1])
ans =
   240     1     1
// the saturation is 1, the colors are pure, and the intensity is 1 //
>> colorspace('RGB->HSV', [0, 0.5, 0])
ans =
  120.0000    1.0000    0.5000
// intensity drops but hue and saturation are unchanged //

// Image Formation //
>> cam = CentralCamera('focal', 0.015);   // creating a model of a central-perspective camera //
>> P = [0.3, 0.4, 3.0]';
>> cam.project(P)
ans =
    0.0015
    0.0020
>> cam.project(P, 'Tcam', transl(-0.5, 0, 0) )   // moving the camera 0.5 m to the left //
ans =
    0.0040
    0.0020
>> cam = CentralCamera('focal', 0.015, 'pixel', 10e-6, ...
         'resolution', [1280 1024], 'centre', [640 512], 'name', 'mycamera')
// displays the parameters of the imaging model //
name: mycamera [central-perspective]
  focal length:   0.015
  pixel size:     (1e-05, 1e-05)
  principal pt:   (640, 512)
  number pixels:  1280 x 1024
  T:
  1 0 0 0
  0 1 0 0
  0 0 1 0
  0 0 0 1
>> cam.project(P)
ans =
   790
   712
>> cam.C
ans =
   1.0e+03 *
    1.5000         0    0.6400         0
         0    1.5000    0.5120         0
         0         0    0.0010         0
>> cam.fov() * 180/pi    // field of view //
ans =
   46.2127   37.6930
// we create a 3x3 grid of points in the xy-plane with overall side length 0.2 m and centred at (0, 0, 1); mkgrid returns a 3x9 matrix //
>> P = mkgrid(3, 0.2, 'T', transl(0, 0, 1.0));
>> P(:,1:4)
ans =
   -0.1000   -0.1000   -0.1000         0
   -0.1000         0    0.1000   -0.1000
    1.0000    1.0000    1.0000    1.0000
>> cam.project(P)    // the image plane coordinates of the vertices //
ans =
   490   490   490   640   640   640   790   790   790
   362   512   662   362   512   662   362   512   662
>> cam.plot(P)
h =
  173.0022


>> [X,Y,Z] = mkcube(0.2, 'T', transl([0, 0, 1.0]), 'edge');
>> Tcam = transl(1,0,0.5)*troty(0.9);   // oblique view of the plane //
>> cam.plot(P, 'Tcam', Tcam)
>> cam.project([1 0 0 0]', 'Tcam', Tcam)
ans =
   1.0e+03 *
    1.8303
    0.5120
>> p = cam.plot(P, 'Tcam', Tcam)
>> p(:,1:4)   // oblique viewing case: the image plane coordinates //
ans =
  887.7638  887.7638  887.7638  955.2451
  364.3330  512.0000  659.6670  374.9050
>> cube = mkcube(0.2, 'T', transl([0, 0, 1]) );   // projection of a cube //
>> cam.plot(cube);
>> cam.mesh(X, Y, Z)
>> cam.T = transl(-1,0,0.5)*troty(0.8);   // oblique view //
>> cam.mesh(X, Y, Z, 'Tcam', Tcam);

// Camera Calibration //
// Homogeneous Transformation Approach //
>> P = mkcube(0.2);
>> T_unknown = transl(0.1, 0.2, 1.5) * rpy2tr(0.1, 0.2, 0.3);
>> cam = CentralCamera('focal', 0.015, ...
         'pixel', 10e-6, 'resolution', [1280 1024], 'centre', [512 512], ...
         'noise', 0.05);
>> p = cam.project(P, 'Tobj', T_unknown);
>> C = camcald(P, p)
maxm residual 0.067393 pixels.
C =
  883.1620 -240.0720  531.4419  612.0432
  259.3786  994.1921  234.8182  712.0180
    0.1043    0.0985    0.6494    1.0000

// Decomposing the Camera Calibration Matrix //
>> est = invcamcal(C)
est =
name: invcamcal [central-perspective]
  focal length:   1504
  pixel size:     (1, 0.9985)
  principal pt:   (518.6, 505)
  Tcam:
   0.93695   -0.29037    0.19446    0.08339
   0.31233    0.94539   -0.093208  -0.39143
  -0.15677    0.14807    0.97647   -1.4671
   0          0          0          1
>> est.f/est.rho(1)    // ratio of estimated parameters of the camera //
ans =
   1.5044e+03
>> cam.f/cam.rho(2)    // ratio of true parameters of the camera //
ans =
   1.5000e+03
>> T_unknown*est.T     // relative pose between the true and estimated camera pose //
ans =
    0.7557   -0.5163    0.4031   -0.0000
    0.6037    0.7877   -0.1227   -0.0001
   -0.2541    0.3361    0.9069   -0.0041
         0         0         0    1.0000
>> plot_sphere(P, 0.03, 'r')
>> plot_frame(eye(4,4), 'frame', 'T', 'color', 'b', 'length', 0.3)
>> est.plot_camera()

// Pose Estimation //
>> cam = CentralCamera('focal', 0.015, 'pixel', 10e-6, ...
         'resolution', [1280 1024], 'centre', [640 512]);   // create a calibrated camera with known parameters //
>> P = mkcube(0.2);   // cube determination //
>> T_unknown = transl(0,0,2)*trotx(0.1)*troty(0.2)   // pose of the object w.r.t. the camera //
T_unknown =
    0.9801         0    0.1987         0
    0.0198    0.9950   -0.0978         0
   -0.1977    0.0998    0.9752    2.0000
         0         0         0    1.0000
>> p = cam.project(P, 'Tobj', T_unknown);
>> T_est = cam.estpose(P, p)   // estimate the relative pose of the object //
T_est =
    0.9801    0.0000    0.1987    0.0000
    0.0198    0.9950   -0.0978    0.0000
   -0.1977    0.0998    0.9752    2.0000
         0         0         0    1.0000

// Non-Perspective Imaging Models //
>> cam = FishEyeCamera('name', 'fisheye', ...
         'projection', 'equiangular', ...
         'pixel', 10e-6, ...
         'resolution', [1280 1024]);   // creating a fisheye camera //
>> [X,Y,Z] = mkcube(0.2, 'centre', [0.2, 0, 0.3], 'edge');   // creating an edge-based model //
>> cam.mesh(X, Y, Z)

// Catadioptric Camera //
>> cam = CatadioptricCamera('name', 'panocam', ...
         'projection', 'equiangular', ...
         'maxangle', pi/4, ...
         'pixel', 10e-6, ...
         'resolution', [1280 1024])
>> [X,Y,Z] = mkcube(1, 'centre', [1, 1, 0.8], 'edge');
>> cam.mesh(X, Y, Z)



// lMapping Wide-Angle
Images to the Sphere //
>> fisheye =
iread('fisheye_target.png',
'double', 'grey');
>>u0 = 528.1214; v0 =
384.0784; // camera calibration
parameters //
1> l=2.7899;
2> m=996.4617;
3> [Ui,Vi]
=
imeshgrid(fisheye); // domain
of input image //
// spherical camera //
4> n = 500;
1> cam = SphericalCamera('name',5> theta_range = (0:n)/n*pi/2 +
pi/2;
'spherical')
2> [X,Y,Z] = mkcube(1, 'centre', [2, 3, 6> phi_range = (-n:2:n)/n*pi;
7> [Phi,Theta]
=
1], 'edge');
meshgrid(phi_range,
3> cam.mesh(X, Y, Z)
theta_range); // n is no of grid
cells //
8> r = (l+m)*sin(Theta) ./
(l+cos(Theta));
>> U = r.*cos(Phi) + u0; //
Cartesian coordinates in the
input image //
1> V = r.*sin(Phi) + v0;
2> spherical = interp2(Ui, Vi,
fisheye, U, V); // applying
wrap //
3> idisp(spherical)

May 1, 2015 SAURABH GUPTA (2K11/EC/140)

// lSynthetic Perspective
Images //
>> W = 1000;
>> m =
W/2/
tan(45/2*
pi/180) //
45 fied of
view // m
=
1.2071e+03
1> l = 0;
2> u0 = W/2; v0 = W/2;
3> [Uo,Vo] = meshgrid(0:W-1, 0:W1); // The domain ofthe output
image //
>> r = sqrt((Uo-u0).^2 + (Vov0).^2); // polar coordinates //
1> phi = atan2((Vo-v0), (Uo-u0));
2> Phi_o = phi; // spherical
coordinates //
1> Theta_o
=
pi
acos( ((l+m)*sqrt(r.^2*(1-l^2) +
(l+m)^2) -l*r.^2) ./ (r.^2 +
(l+m)^2) );
2> idisp(perspective)
3> spherical
=
sphere_rotate(spherical,
troty(1.2)*trotz(-1.5));

1 Obtaing
an image
from a
file


The Toolbox provides the iread function to read an image
>> street = iread('street.png');   % which returns a matrix
>> about(street)                   % gives details related to street
street [uint8] : 851x1280 (1089280 bytes)
>> street(200, 300)                % the pixel at image coordinate (300,200)
ans =
    42
>> street_d = idouble(street);     % double precision numbers in the range [0, 1]
>> about(street_d)
street_d [double] : 851x1280 (8714240 bytes)
>> street_d = iread('street.png', 'double');   % specify this as an option while loading
>> idisp(street);                  % the function to display an image is idisp
>> flowers = iread('flowers8.png');
>> about(flowers)
flowers [uint8] : 426x640x3 (817920 bytes)
>> pix = flowers(276, 318, :)      % pixel at (318,276)
ans(:,:,1) =
    57
ans(:,:,2) =
    91
ans(:,:,3) =
   198
% The pixel value is a 3-vector
>> idisp( flowers(:,:,1) )
2. Images from an Attached Camera
Most laptop computers today have a built-in camera for video conferencing. For computers without a built-in camera an external camera can be easily attached via USB. The means of accessing a camera is operating-system specific, and the Toolbox provides a simple interface to a camera for MacOS, Linux and Windows.
% A list of all attached cameras and their capabilities can be obtained by
>> VideoCamera('?')
% We open a particular camera
>> cam = VideoCamera('name')
% which returns an instance of a VideoCamera object that is a subclass of the ImageSource class.
% The dimensions of the image returned by the camera are given by the size method
>> cam.size()
% an image is obtained using the grab method
>> im = cam.grab();
% which waits until the next frame becomes available.


3. Images from a Movie File
The Toolbox supports reading frames from a movie file stored in any of the popular formats such as AVI, MPEG and MPEG4. For example we can open a movie file
>> cam = Movie('traffic_sequence.mpg');
720 x 576 @ 2.999970e+01 fps
350 frames
% This movie has 350 frames and was captured at 30 frames per second.
% The size of each frame within the movie is
>> cam.size()
ans =
   720   576
% and the next frame is read from the movie file by
>> im = cam.grab();
>> about(im)
im [uint8] : 576x720x3 (1244160 bytes)
% which is a 720x576 color image.
% With these few primitives we can write a very simple movie player:
1  while 1
2      im = cam.grab;
3      if isempty(im), break; end
4      image(im)
5  end
% where the test at line 3 is to detect the end of file, in which case grab returns an empty matrix.

4. Images from Code
% The Toolbox function testpattern generates simple images with a variety of patterns including lines, grids of dots or squares, intensity ramps and intensity sinusoids.
>> im = testpattern('rampx', 256, 2);
>> im = testpattern('siny', 256, 2);
>> im = testpattern('squares', 256, 50, 25);
>> im = testpattern('dots', 256, 256, 100);



We can also construct an image from simple graphical primitives. First we create a blank canvas containing all black pixels (pixel value of zero)
>> canvas = zeros(200, 200);
% then we create two squares
>> sq1 = 0.5 * ones(40, 40);
>> sq2 = 0.9 * ones(20, 20);
The first has pixel values of 0.5 (medium grey) and is 40x40. The second is smaller (just 20x20) but brighter with pixel values of 0.9. Now we can paste these onto the canvas
>> canvas = ipaste(canvas, sq1, [20, 40]);
>> canvas = ipaste(canvas, sq2, [60, 120]);
>> circle = 0.6 * kcircle(30);   % a circle of radius 30 pixels with a grey value of 0.6
>> size(circle)
ans =
    61    61
>> canvas = ipaste(canvas, circle, [100, 30]);
% Finally, we draw a line segment onto our canvas
>> canvas = iline(canvas, [30, 40], [150, 190], 0.8);
% which extends from (30, 40) to (150, 190) and its pixels are all set to 0.8.
>> idisp(canvas)



5. Monadic Operations
Monadic image-processing operations result in an image of the same size W x H as the input image, and each output pixel is a function of the corresponding input pixel. Since an image is represented by a matrix, any MATLAB element-wise matrix function or operator can be applied, for example scalar multiplication or addition, or functions such as abs or sqrt.
The datatype of each pixel can be changed, for example from uint8 (integer pixels in the range [0, 255]) to double precision values in the range [0, 1]
>> imd = idouble(im);
>> im = iint(imd);
A color image has 3 dimensions, which we can also consider as a 2-dimensional image where each pixel value is a 3-vector.
A monadic operation can convert a color image to a greyscale image where each output pixel value is a scalar representing the luminance of the corresponding input pixel
>> grey = imono(flowers);
The inverse operation is
>> color = icolor(grey);
% the histogram of the street scene is computed and displayed by
>> ihist(street)


>> [n,v] = ihist(street);
% where the elements of n are the number of times pixels occur with the value of the corresponding element of v
>> ihist(street, 'cdf');
% The Toolbox function peak will automatically find the peaks
>> peak(n, v)'
ans =
     8    40    43    51   197    17    60    26    75   213    88    92   218   176   153   108   111   147   113   119   121   126   130   138   230   244
We can identify the pixels in this peak by a logical monadic operation
>> shadows = (street >= 30) & (street <= 80);
>> idisp(shadows)
>> im = istretch(street);
% ensures that pixel values span the full range, which is either [0, 1] or [0, 255]
% A more sophisticated version is histogram normalization or histogram equalization
>> im = inormhist(street);
% Such images can be gamma decoded by a non-linear monadic operation
>> im = igamma(street, 1/0.45);
which raises each pixel to the specified power, or
>> im = igamma(street, 'sRGB');
>> idisp( street/64 )

Conclusion: Hence the various functions in the computer vision toolbox and their applications were carefully studied.

May 1, 2015

SAURABH GUPTA (2K11/EC/140)

Experiment 9
Aim: Object tracking using a Kalman filter in Matlab.
Theory:
The Kalman filter keeps track of the estimated state of the system and the variance or uncertainty of the estimate. The estimate is updated using a state transition model and measurements. The predicted state x(k|k-1) denotes the estimate of the system's state at time step k before the k-th measurement z(k) has been taken into account; P(k|k-1) is the corresponding uncertainty.

The Kalman filter, also known as linear quadratic estimation (LQE), is an algorithm
that uses a series of measurements observed over time, containing noise (random
variations) and other inaccuracies, and produces estimates of unknown variables
that tend to be more precise than those based on a single measurement alone.
More formally, the Kalman filter operates recursively on streams of noisy input data
to produce a statistically optimal estimate of the underlying system state.
The algorithm works in a two-step process. In the prediction step, the Kalman filter
produces estimates of the current state variables, along with their uncertainties.
Once the outcome of the next measurement (necessarily corrupted with some
amount of error, including random noise) is observed, these estimates are updated
using a weighted average, with more weight being given to estimates with higher
certainty. Because of the algorithm's recursive nature, it can run in real time using
only the present input measurements and the previously calculated state and its
uncertainty matrix; no additional past information is required.

Kalman filter algorithm
1. State model prediction equation:
   XPred = A*X + B.*a
2. Prediction of state model covariance:
   statePredCov = A*stateCov*A' + prcNoiseVar
3. Prediction of observation model covariance:
   obsCov = C*statePredCov*C' + obsNoiseVar
4. Determine Kalman gain:
   kalGain = statePredCov * C' * (obsCov)^(-1)
5. Observation vector prediction:
   zPred = C*XPred
   zError = z - zPred
6. State estimation equation:
   XEst = XPred + kalGain * zError
7. Estimated state covariance:
   stateCov = (eye(4) - kalGain * C) * statePredCov
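The listing above can be wrapped into one reusable predict-update step. The sketch below assumes the same variable shapes as the experiment code that follows (a 4-state constant-velocity model and a 2-element position measurement); it is not part of the original script.

% Sketch: one predict-update cycle of the Kalman filter used in this experiment
function [XEst, stateCov] = kalmanStep(X, stateCov, z, a, A, B, C, prcNoiseVar, obsNoiseVar)
    XPred        = A*X + B.*a;                              % state prediction
    statePredCov = A*stateCov*A' + prcNoiseVar;             % state covariance prediction
    obsCov       = C*statePredCov*C' + obsNoiseVar;         % innovation covariance
    kalGain      = statePredCov * C' / obsCov;              % Kalman gain (equivalent to *inv(obsCov))
    zError       = z - C*XPred;                             % innovation (measurement residual)
    XEst         = XPred + kalGain*zError;                  % state update
    stateCov     = (eye(size(A)) - kalGain*C)*statePredCov; % covariance update
end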



CODE:
% object tracking using kalman filter
close all;    % clear workspace

% read a video
vidObject = VideoReader('car-2.avi');
% determine the number of frames
nFrames = vidObject.NumberOfFrames;
% determine length and width of a frame
len = vidObject.Height;
wid = vidObject.Width;
sumFrame = zeros(len, wid);

% kalman filter variables
dt = 0.1;    % time between 2 frames
% initialize state model matrix
A = [1 0 dt 0; 0 1 0 dt; 0 0 1 0; 0 0 0 1];
B = [dt^2/2; dt^2/2; dt; dt];
% initialize observation model matrix
C = [1 0 0 0; 0 1 0 0];
% initialize acceleration a (noise in state model)
prcNoise = [10.0; 10.0; 1.0; 1.0];
% variance of state model noise; noise is gaussian in nature and has zero mean
prcNoiseVar = [(dt^4/4)*(prcNoise(1)^2) 0 (dt^3/2)*...
(prcNoise(1)*prcNoise(3)) 0; 0 (dt^4/4)*(prcNoise(2)^2) 0 ...
(dt^3/2)*(prcNoise(2)*prcNoise(4));(dt^3/2)*(prcNoise(1)*prcNoise(3))...
0 (dt^2)*(prcNoise(3)^2) 0; 0 (dt^3/2)*(prcNoise(2)*prcNoise(4))...
0 (dt^2)*(prcNoise(4)^2)];

% initialize state model covariance equal to state noise variance for the first iteration only
stateCov = prcNoiseVar;
statePredCov = zeros(4,4);   % state prediction covariance
obsNoise = [0.01; 0.01];     % noise in observation model
% variance of observation model noise; noise is gaussian in nature and has zero mean
obsNoiseVar = [obsNoise(1)^2 0; 0 obsNoise(2)^2];
obsCov = [0 0; 0 0];         % initialize observation model covariance

% state vector consists of position and velocity of the object
X = [0; 0; 0; 0];
XPred = [0; 0; 0; 0];   % predicted state vector
XEst  = [0; 0; 0; 0];   % estimated state vector
z = [0; 0];       % observation vector, i.e. sensor data (position of object)
zPred = [0; 0];   % predicted observation vector
% cnd is zero until the object is detected for the first time
cnd = 0;
bboxPred = zeros(1,4);
bboxEst = zeros(1,4);
box = zeros(3,4);
absent = 0;
for k = 1 : nFrames


    % read frames from video
    frame = read(vidObject,k);
    grayFrame = rgb2gray(frame);
    sumFrame = sumFrame + double(grayFrame);
    % background obtained by averaging the sum of frames
    background = (sumFrame ./ k);
    object = false(len,wid);
    % binary image consisting of the detected object obtained by background subtraction
    object(abs(double(grayFrame) - background) > 10) = 1;
    % morphological operations to remove clutter from the binary image
    object = imopen(object,strel('square',3));
    object = imclose(object,strel('square',12));
    object = imfill(object,'holes');
    % determine detected object size and location
    blobAnalyser = vision.BlobAnalysis;
    blobAnalyser.MinimumBlobArea = 100;
    [area,centroid,bbox] = step(blobAnalyser, object);
    if(centroid ~= 0)
        cnd = cnd + 1;
    end

    % kalman algorithm
    if(cnd == 1)
        % for the first-time object detection, initialize the state vector
        % with the sensor observation data
        X(1) = centroid(1);
        X(2) = centroid(2);
    elseif(cnd > 1)
        if(centroid ~= 0)



            if(absent==1)
                X(1:2) = centroid(1:2);
            end
        end
        % state model prediction equation
        XPred = A*X + B.*a;
        % prediction of state model covariance
        statePredCov = A*stateCov*A' + prcNoiseVar;
        % prediction of observation model covariance
        obsCov = C*statePredCov*C' + obsNoiseVar;
        % determine kalman gain
        kalGain = statePredCov * C' * (obsCov)^(-1);
        % observation vector prediction
        zPred = C*XPred;
        if(centroid ~= 0)
            z(1:2) = centroid(1:2);
            zError = z - zPred;
            absent = 0;   % when object is present in the frame
        else
            absent = 1;   % when object is absent in the frame
        end
        % state estimation equation
        XEst = XPred + kalGain * zError;
        % estimated state covariance
        stateCov = (eye(4) - kalGain * C) * statePredCov;
        X = XEst;
        if(bbox ~= 0)
            bboxPred(1:2) = int32(XPred(1:2)) - (bbox(3:4)')./2;
            bboxEst(1:2)  = int32(XEst(1:2)) - (bbox(3:4)')./2;
            box(1,1:4) = bbox(1:4);
            box(2,3:4) = bbox(3:4);
            box(3,3:4) = bbox(3:4);
        else
            box(1,1:2) = z(1:2);
            bboxPred(1:2) = XPred(1:2);
            bboxEst(1:2) = XEst(1:2);
        end
        box(2,1:2) = bboxPred(1:2);
        box(3,1:2) = bboxEst(1:2);
end
    subplot(3,1,1);
    % blue for detected position of object, red for predicted position
    % of object, green for estimated position of object
    frame = insertShape(frame, 'rectangle', box, 'Color', {'blue','red','green'});
    imshow(frame);
    title('output: blue, red, green bounded box for observation, prediction, estimated state');
    subplot(3,1,2);
    imshow(uint8(background));
    title('background');
    subplot(3,1,3);
    imshow(object);
    title('binary image of detected objects');
    pause(0.001);
end
close all


OUTPUT:

Conclusion: Kalman filter was successfully implemented and applied to


images. It was successful in tracking the objects present within.

May 1, 2015

SAURABH GUPTA (2K11/EC/140)

Experiment 10
Aim: To perform object tracking using optical flow.

Theory:
In optical flow, motion is estimated as instantaneous image velocities or discrete image displacements. The method calculates the motion between two video frames taken at time instances t and t+Δt at every pixel position. It is based on a local Taylor series approximation of the image signal, i.e. it uses partial derivatives with respect to the spatial and temporal changes.
For a 2D-plus-time video sequence, a pixel at location (x, y, t) with intensity I(x, y, t) moves by Δx, Δy and Δt between the two frames. Under the constraint that the brightness remains unaltered at the new position,
I(x, y, t) = I(x+Δx, y+Δy, t+Δt)
Assuming the movement to be small, a first-order Taylor expansion gives
I(x+Δx, y+Δy, t+Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt + higher-order terms
Thus, from the above equations,
(∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt = 0
Dividing by Δt results in
(∂I/∂x)Vx + (∂I/∂y)Vy + ∂I/∂t = 0
where Vx and Vy are the x and y components of the velocity or optical flow of I(x, y, t), and ∂I/∂x, ∂I/∂y and ∂I/∂t are the derivatives of the image intensity at (x, y, t) in the corresponding directions. Writing the intensity derivatives as Ix, Iy and It, this can be written compactly as
Ix*Vx + Iy*Vy = -It
The above equation is solved to determine the optical flow vectors in the two directions x and y.
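A minimal dense estimate of these flow components can be sketched directly from the constraint above by solving it in a least-squares sense over a local window (a Lucas-Kanade style solution; the two grayscale double images I1 and I2 are assumed inputs and are not part of the Simulink model below):

% Sketch: optical flow in a window from the constraint Ix*Vx + Iy*Vy = -It
function [Vx, Vy] = flowInWindow(I1, I2)
    [Ix, Iy] = gradient(I1);    % spatial derivatives of the first frame
    It = I2 - I1;               % temporal derivative between the two frames
    A  = [Ix(:) Iy(:)];         % stack the constraint for every pixel in the window
    b  = -It(:);
    v  = A \ b;                 % least-squares solution for [Vx; Vy]
    Vx = v(1);  Vy = v(2);
end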
Fig. 1 shows the Simulink model used to track a moving object in a given video. The video used in this model is viptraffic.avi, consisting of moving cars in a road scenario.

1. The initial block is the source block, which is used to read a multimedia file.
2. The second block is used to convert the RGB values to grayscale intensity values.
3. The Optical Flow block is used to obtain the velocity vectors in the x and y directions in complex-conjugate form.
4. The fourth block applies thresholding to obtain the segmented objects in the video and to obtain the bounding box.
5. The display block is used to display the original video, the optical flow vectors, the segmented objects and the bounding box on the original video.
As in Fig. 2, the Thresholding and Region Filtering sub-block first obtains the magnitude of the velocity vector; a threshold is then computed by averaging all the velocity vectors of the frame, and using this threshold a binary image is obtained. The image is then passed to a median filter to remove the noise. The Blob Analysis block is used on this thresholded image to obtain the bounding box of the object, as given in Fig. 3.
As in Fig. 4, the display block has 4 inputs: the input video, the thresholded video, the optical flow vectors and the tracked output result. The input video and the thresholded image are displayed directly. The optical flow vectors are passed through an optical flow line function block to give the dimensions of the optical flow vectors; these are then plotted on the original video and displayed. The bounding boxes are counted to obtain the number of cars and the bounding box is displayed on the original video to obtain the result.
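The same thresholding and region-filtering logic can also be expressed outside Simulink as a short script sketch; here flowMag, the per-pixel optical-flow magnitude of the current frame, is an assumed input rather than something computed by the original model.

% Sketch of the Thresholding and Region Filtering sub-block in plain MATLAB
thresh  = mean(flowMag(:));                         % threshold = mean velocity magnitude of the frame
moving  = flowMag > thresh;                         % binary image of moving pixels
moving  = medfilt2(double(moving), [5 5]) > 0.5;    % median filter to remove noise
blobber = vision.BlobAnalysis('MinimumBlobArea', 100);
[~, ~, bbox] = step(blobber, moving);               % bounding boxes of the detected objects
numCars = size(bbox, 1);                            % count of tracked vehicles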


SIMULINK MODEL:

Fig. 1  Main Simulink model

Fig. 2  Thresholding and Region Filtering sub-block

Fig. 3  Region Filtering sub-block


Fig. 4  Display Result sub-block

OUTPUT:

Fig. 5  (a) Nth frame of the video, (b) (N+1)th frame of the video, (c) optical flow velocity vectors for frames (a) and (b), (d) thresholding for object detection, and (e) object detection

Result: Object tracking was successfully accomplished using optical flow for the given dataset.
