
Implementing visual perception tasks for the REEM robot

Visual pre-grasping pose

Author: Bence Magyar
Supervisors: Jordi Pagès, PhD; Dr. Zoltán Istenes, PhD

Barcelona & Budapest, 2012

Master Thesis in Computer Science
Eötvös Loránd University, Faculty of Informatics, Department of Software Technology and Methodology

Contents

List of Tables                                                          IV
List of Figures

1 Introduction and background                                            1
  1.1 Introduction                                                       1
  1.2 REEM introduction                                                  2
  1.3 Outline and goal of the thesis                                     3
  1.4 ROS Introduction                                                   4
  1.5 OpenCV                                                             5
  1.6 Computer Vision basics                                             5
  1.7 Grasping problem                                                   6
  1.8 Visual servoing                                                    9

2 Object detection survey and State of the Art                          10
  2.1 Introduction                                                      10
  2.2 Available sensors                                                 10
  2.3 Survey work                                                       12
  2.4 Brief summary of survey                                           16

3 Pose estimation of an object                                          17
  3.1 Introduction and overview                                         17
  3.2 CAD Model                                                         20
  3.3 Rendering                                                         20
  3.4 Edge detection on color image                                     21
  3.5 Particle filter                                                   25
  3.6 Feature detection (SIFT)                                          26
  3.7 kNN and RANSAC                                                    29
    3.7.1 kNN                                                           29
    3.7.2 RANSAC                                                        29
  3.8 Implemented application                                           31
    3.8.1 Learning module                                               31
    3.8.2 Detector module                                               31
    3.8.3 Tracker module                                                32
    3.8.4 State design pattern for node modes                           33
  3.9 Pose estimation results and ways for improvement                  35
  3.10 Published software, documentation and tutorials                  35
  3.11 Hardware requirements                                            37
  3.12 Future work                                                      37

4 Increasing the speed of pose estimation using image segmentation      38
  4.1 Image segmentation problem                                        38
  4.2 Segmentation using image processing                               39
  4.3 ROS node design                                                   40
  4.4 Stereo disparity-based segmentation                               41
    4.4.1 Computing depth information from stereo imaging               41
    4.4.2 Segmentation                                                  42
  4.5 Template-based segmentation                                       42
    4.5.1 Template matching                                             42
    4.5.2 Segmentation                                                  43
  4.6 Histogram backprojection-based segmentation                       44
    4.6.1 Histogram backprojection                                      44
    4.6.2 Segmentation                                                  44
  4.7 Combined results with BLORT                                       46
  4.8 Published software                                                47
  4.9 Hardware requirements                                             48
  4.10 Future work                                                      48

5 Tracking the hand of the robot                                        49
  5.1 Hand tracking problem                                             49
  5.2 AR Toolkit                                                        50
  5.3 ESM                                                               52
  5.4 Aruco                                                             53
  5.5 Application examples                                              54
  5.6 Software                                                          55
  5.7 Hardware requirements                                             55

6 Experimental results for visual pre-grasping                          56
  6.1 Putting it all together: visual servoing architecture             56
  6.2 Tests on the REEM RH2 robot                                       57
  6.3 Hardware requirements                                             59

7 Conclusion                                                            60
  7.1 Key results                                                       60
  7.2 Contribution                                                      61
  7.3 Future work                                                       62

Bibliography                                                            63

A Appendix 1: Deep survey tables                                        67

B Appendix 2: Shallow survey tables                                     71

List of Tables

2.1 Survey summary table                                                16
4.1 Effect of segmentation on detection                                 47
A.1 Filtered deep survey part 1                                         68
A.2 Filtered deep survey part 2                                         69
A.3 Filtered deep survey part 3                                         70
B.1 Wide shallow survey part 1                                          72
B.2 Wide shallow survey part 2                                          73
B.3 Wide shallow survey part 3                                          74
B.4 Wide shallow survey part 4                                          75
B.5 Wide shallow survey part 5                                          76
B.6 Wide shallow survey part 6                                          77
B.7 Wide shallow survey part 7                                          78
B.8 Wide shallow survey part 8                                          79
B.9 Wide shallow survey part 9                                          80

List of Figures

1.1 Willow Garage's PR2 finishing a search task                          2
1.2 PAL Robotics' REEM robot                                             2
1.3 Two nodes in the ROS graph connected through topics                  4
1.4 Real scene with the REEM robot                                       7
1.5 Examples for grasping                                                8
1.6 REEM grasping a juicebox                                             8
1.7 Closed loop architecture                                             9
2.1 Most common sensor types                                            11
2.2 Stereo camera theory                                                11
2.3 LINE-Mod                                                            13
2.4 ODUFinder                                                           13
2.5 RoboEarth Detector                                                  14
2.6 ViSP Tracker                                                        15
2.7 ESM Tracker                                                         15
3.1 CAD model of a Pringles box in MeshLab                              20
3.2 Examples of rendering in case of BLORT                              21
3.3 Image convolution example                                           22
3.4 Steps of image processing                                           24
3.5 Particle filter used for localization                               25
3.6 Particles visualized on the detected object of BLORT. Greens are
    valid, reds are invalid particles.                                  26
3.7 Extracted SIFT and ORB feature points of the same scene             27
3.8 SIFT orientation histogram example                                  28
3.9 Extracted SIFTs. Red SIFTs are not in the codebook; yellow and
    green ones are considered object points; green ones are inliers
    of the model and yellow ones are outliers.                          29
3.10 Detection result                                                   32
3.11 Tracking result, the object visible in the image is rendered       32
3.12 Diagram of the tracking mode                                       33
3.13 Diagram of the singleshot mode                                     34
3.14 Screenshots of the ROS wiki documentation                          36
4.1 Example of erosion where the black pixel class was eroded           39
4.2 Example of dilation where the black pixel class was dilated         40
4.3 ROS node design of segmentation nodes                               40
4.4 Parameters exposed through dynamic reconfigure                      41
4.5 Example of stereo vision                                            41
4.6 Masked, input and disparity images                                  42
4.7 Template-based segmentation                                         43
4.8 The process of histogram backprojection-based segmentation          44
4.9 Histogram segmentation using a template of the target orange ball   45
4.10 The segmentation process and BLORT                                 46
4.11 Test scenes                                                        47
4.12 Screenshot of the ROS wiki documentation                           48
5.1 ARToolkit markers                                                   50
5.2 ARToolkit in action                                                 51
5.3 Tests using the ESM ROS wrapper                                     52
5.4 Example markers of Aruco                                            53
5.5 Otsu thresholding example                                           54
5.6 Video of Aruco used for visual servoing. Markers are attached to
    the hand and to the target object in the Gazebo simulator.          54
5.7 Tests done with Aruco                                               55
6.1 Putting it all together: visual servoing architecture               56
6.2 A perfect result with the juicebox                                  57
6.3 An experiment gone wrong                                            58
6.4 An experiment where tracking was tested                             58

Acknowledgements

First I would like to thank my family and my girlfriend for their support and patience during my 5-month journey in science and technology in Barcelona. The same appreciation goes to my friends. I would also like to thank Eötvös Loránd University and PAL Robotics for providing the opportunity, as an Erasmus internship, to conduct such research in a foreign country. Many thanks to my advisors, Jordi Pagès, who mentored me at PAL, and Zoltán Istenes, who both helped me shape this manuscript and organize my work so that it could be presented. Thumbs up to Thomas Mörwald, who was always willing to answer my questions about BLORT. The conversations and emails exchanged with Ferran Rigual, Julius Adorf and Dejan Pangercic helped a great deal with my research.

I really enjoyed the friendly environment created by the co-workers and interns of PAL Robotics, especially Laszlo Szabados, Jordi Pagès, Don Joven Agravante, Adolfo Rodriguez, Enrico Mingo, Hilario Tome, Carmen Lopera and everyone else.

I would also like to give credit to everyone whose work served as a basis for my thesis: the members of the open source community and the developers of Ubuntu, C++, OpenCV, ROS, Texmaker, LaTeX, Qt Creator, GIMP, Inkscape and many more.

Thank you.


Chapter 1
Introduction and background

1.1 Introduction

Even though we are often not aware of it, we are already surrounded by robots. The most accepted definition of a robot is that it is some kind of machine that is automated in order to help its owner by completing tasks. Robots need not have the human form one might assume; humanoid robots are simply bigger and more complex than their simpler counterparts. A humanoid robot could replace humans in various hazardous situations where a human form is still required, for example when the tools available for a rescue mission are hand-tools designed for humans. Although popular science fiction and sometimes even scientists like to paint a highly developed and idealized picture of robotics, the field is still only maturing.

Despite its initial football-oriented goal, even RoboCup - one of the most respected robotics competitions - has a special league called RoboCup@Home, where humanoid robots compete in well-defined common tasks in home environments. [1] Also, the DARPA Grand Challenge - among the most well-funded competitions - has announced its latest challenge centered around a humanoid robot. [2]

[1] http://www.ai.rug.nl/robocupathome/
[2] http://spectrum.ieee.org/automaton/robotics/humanoids/darpa-robotics-challenge-here-are-the-official-details

Figure 1.1: Willow Garage's PR2 finishing a search task


At the current time, however, there is no generally accepted solution even for manipulating simple objects like boxes and glasses, although this field has seen a lot of development lately. It is still an open problem, but promising works such as [15] have been published. Finding and identifying an object to be grasped highly depends on the type and number of sensors a robot has.

1.2 REEM introduction

The latest creation of PAL Robotics is the robot named REEM.

Figure 1.2: PAL Robotics' REEM robot


With its 22 degrees of freedom, 8 hours of battery time, 30 kg payload and 4 km/h speed it is one of the top humanoid service robots. Each arm has 7 degrees of freedom, with 2 more in the torso and 2 in the head. The head unit of REEM holds a pair of cameras as well as a microphone and speaker system, while a touch screen is available on the chest for multimedia applications such as map navigation.
The main goal of this thesis work was to develop applications for this specific robot while making sure that the end result is still general enough to allow usage on other robot platforms.

1.3 Outline and goal of the thesis

Before going into more detail, the basic problems need to be defined.
The goal of this thesis was to implement computer vision modules for grasping tabletop objects with the REEM robot. To be more precise, it consisted of implementing solutions for the sub-problems of visual servoing in order to solve the grasping problem. This covers the following two tasks from the topic statement: detection of textured and non-textured objects, and detection of tabletop objects for robot grasping.
The first and primary problem encountered is the pose estimation problem, which was the main task of this thesis work. There are several examples in the scientific literature solving slightly different problems related to pose estimation. One of them is the object detection problem and the other is the object tracking problem. It is crucial to always keep these problems in mind when dealing with objects through vision.
The pose estimation problem is to compute an estimated pose of an object given some input image(s) and, possibly, additional background knowledge. Ways of defining a pose can be found in Section 1.6.
An object detection problem can be identified by its desired answer type. One is dealing with object detection if the desired answer for an image or image sequence is whether an object is present, or its number of appearances. This problem is typically solved using features.
Numerous examples and articles can also be found for the object tracking problem. Usually these types of methods are specialized to provide real-time speed. To do so they require an initialization stage before starting the tracker. Concretely, the target object has to be set to an initial pose, or the tracker has to be initialized with the pose of the object.
The secondary task of this thesis was to provide a solution for tracking the hand of the REEM robot during the grasping process, so that the manipulator position and the target position can be inserted into a visual servoing architecture.


A necessary overall objective was to provide all results at a speed that makes these applications eligible for deployment in a real scene on the REEM robot.

1.4 ROS Introduction

ROS [29] (Robot Operating System) is a meta operating system designed to help and enhance the work of robotics scientists. ROS is not an operating system in the traditional sense of process management and scheduling; rather, it provides a structured communications layer above the host operating systems of a heterogeneous computer cluster.

At its very core, ROS provides an implementation of the Observer design pattern [14, p. 293] and additional software tools to organize the system well. A ROS system is made up of nodes, which serve as computational processes in the system. ROS nodes communicate via typed messages through topics, which are registered using simple strings as names. A node can publish and/or subscribe to a topic.

A ROS system is completely modular: each node can be dynamically started or stopped, and all nodes are independent components that depend on each other only for data input. Topics provide continuous, dataflow-style processing of messages, but they are limited if one would like to use a node's functionality through a blocking call. Interfaces of this kind can be created for nodes and are called services. To support a dynamic way to store and modify commonly used global or local parameters, ROS also has a Parameter Server through which nodes can read, create and modify parameters.
The link below provides more information about ROS:
http://ros.org/wiki

Figure 1.3: Two nodes in the ROS graph connected through topics
Among others, ROS also provides:
- a build system: rosbuild, rosmake
- a launch system: roslaunch
- monitoring tools: rxgraph, rosinfo, rosservice, rostopic
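The publish/subscribe mechanism described above can be illustrated with a minimal, self-contained sketch of the Observer pattern that topics are built on (plain Python; this is not the actual ROS API, and the Topic class and topic name below are invented for illustration):

```python
# Minimal publish/subscribe sketch in the spirit of ROS topics.
# NOT the rospy/roscpp API; names here are illustrative only.

class Topic:
    """A named channel that forwards typed messages to all subscribers."""
    def __init__(self, name, msg_type):
        self.name = name
        self.msg_type = msg_type
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, msg):
        if not isinstance(msg, self.msg_type):
            raise TypeError(f"topic {self.name} expects {self.msg_type.__name__}")
        for callback in self._subscribers:
            callback(msg)

# Two "nodes" connected through one topic, as in Figure 1.3:
# a camera node publishes, a detector node subscribes.
camera_topic = Topic("/stereo/left/image", str)
received = []
camera_topic.subscribe(received.append)   # detector node subscribes
camera_topic.publish("frame_0001")        # camera node publishes
print(received)  # ['frame_0001']
```

The type check mirrors the "typed messages" property of ROS topics: a publisher cannot push a message of the wrong type onto a channel.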

1.5 OpenCV

OpenCV [9] stands for Open Source Computer Vision; it is a programming library developed for real-time computer vision tasks. OpenCV is released under a BSD license, so it is free for both academic and commercial use. It has C++, C and Python interfaces running on Windows, Linux, Android and Mac. It provides implementations of several image processing and computer vision algorithms, classic and state-of-the-art alike, and a great amount of supplementary material is available on the internet, such as [22]. It is developed by Willow Garage along with ROS and is widely used for vision-oriented applications on all platforms. All image-processing tasks in this thesis work were solved using OpenCV.

1.6 Computer Vision basics

This section goes through the very basic definitions of Computer Vision.
A rigid body in 3D space is defined by its position and orientation, which together are commonly referred to as its pose. Such a pose is always defined with respect to an orthonormal reference frame, where x, y, z are the unit vectors of the frame axes.
The position of a point O' on the rigid body with respect to the coordinate frame O-xyz is expressed by the relation

    o' = o'_x x + o'_y y + o'_z z                                    (1.1)

where o'_x, o'_y, o'_z denote the components of the vector o' in R^3 along the frame axes. The position of O' can therefore be defined by the vector

    o' = [o'_x  o'_y  o'_z]^T                                        (1.2)

So far we have covered the position element of the object's pose.

The orientation of O' can be defined w.r.t. its reference frame as follows:

    x' = x'_x x + x'_y y + x'_z z
    y' = y'_x x + y'_y y + y'_z z                                    (1.3)
    z' = z'_x x + z'_y y + z'_z z

A more practical form is the following, usually called a rotation matrix R = [x' y' z']:

        [ x'_x  y'_x  z'_x ]   [ x'^T x  y'^T x  z'^T x ]
    R = [ x'_y  y'_y  z'_y ] = [ x'^T y  y'^T y  z'^T y ]            (1.4)
        [ x'_z  y'_z  z'_z ]   [ x'^T z  y'^T z  z'^T z ]

The columns of the matrix R are mutually orthogonal unit vectors, so as a consequence

    R^T R = I_3                                                      (1.5)

where I_3 denotes the (3 x 3) identity matrix.

It is clear that the rotation matrix is a redundant representation. In some cases a unit quaternion representation is used instead. Given a unit quaternion q = (w, x, y, z), the equivalent rotation matrix can be computed as:

        [ 1 - 2y^2 - 2z^2    2xy - 2zw          2xz + 2yw       ]
    Q = [ 2xy + 2zw          1 - 2x^2 - 2z^2    2yz - 2xw       ]    (1.6)
        [ 2xz - 2yw          2yz + 2xw          1 - 2x^2 - 2y^2 ]

Note: when talking about transformations, the components of a pose are usually called translation and rotation instead of position and orientation.
[35] provided great help for writing this section.
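As a quick sanity check of (1.6), the following sketch (plain Python, not code from the thesis software) builds the rotation matrix from a unit quaternion and verifies the orthonormality property (1.5):

```python
import math

def quat_to_rot(w, x, y, z):
    """Rotation matrix of eq. (1.6) from a unit quaternion q = (w, x, y, z)."""
    n = math.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n   # normalize defensively
    return [
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ]

# 90-degree rotation about the z axis: q = (cos 45deg, 0, 0, sin 45deg)
R = quat_to_rot(math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4))

# Check R^T R = I_3 (eq. 1.5): columns are orthonormal.
for i in range(3):
    for j in range(3):
        dot = sum(R[k][i] * R[k][j] for k in range(3))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-9

print(R[0][0], R[0][1])  # ~0.0 and ~-1.0 for a 90-degree z rotation
```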

1.7 Grasping problem

A grasping problem has several definitions depending on specific parameters. Since the goal of this thesis was not visual servoing itself, the grasping problem presented here is a simplified version.

Figure 1.4: Real scene with the REEM robot


Given an object frame in 3D space, the task is to find an appropriate sequence of operations that brings the robot manipulator into a pose that meets the desired definition of the goal frame.
Let F_o^c denote the object frame w.r.t. the camera frame and define the goal frame as

    F_g^c = T_off F_o^c                                              (1.7)

where T_off is a transformation that defines a desired offset on the object frame. Also let F_m^c denote the manipulator frame w.r.t. the camera frame.
The next task is to find the sequence T_1, T_2, ..., T_n for which

    || T_1 T_2 ... T_n F_m^c - F_g^c || < epsilon                    (1.8)

holds, where epsilon is a pre-defined error. The transformations T_1, T_2, ..., T_n are applied to a kinematic chain describing the robot's current state.

Algorithm 1: General grasping algorithm
1 Initialize robot manipulator;
2 Detect object(s) and determine pose;
3 Compute goal frame;
4 while || T_1 T_2 ... T_n F_m^c - F_g^c || >= epsilon do
5     Manipulate/modify T_1, T_2, ..., T_n to minimize the error;
6 end
7 Grasp object - close hand/gripper;

Figure 1.5: Examples for grasping

Figure 1.6: REEM grasping a juicebox

1.8 Visual servoing

Following the same principles as motor control, visual servoing stands for controlling a robot manipulator based on feedback; in this particular case, the feedback is obtained using computer vision. It is also referred to as Vision-Based Control and has three main types:

Image Based (IBVS): the feedback is the error between the current and the desired image points on the image plane. It does not involve the 3D pose at all and is therefore often referred to as 2D visual servoing.

Position Based (PBVS): the main feedback is the 3D pose error between the current pose and the goal pose. Usually referred to as 3D visual servoing.

Hybrid: 2D-3D visual servoing approaches take image features as well as 3D pose information, combining the two servoing methods above.

Visual servoing is categorized as closed-loop control. Figure 1.7 shows the general architecture of visual servoing.

Figure 1.7: Closed loop architecture
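As a toy illustration of the closed loop in Figure 1.7, the sketch below runs a proportional controller on a 2D image-point error until it drops below a threshold, in the spirit of IBVS (plain Python; the gain, points and threshold are invented example values, not a real control law):

```python
# Toy IBVS-style closed loop: drive the current image point toward the
# desired one with a proportional controller. Values are illustrative only.
current = [40.0, 120.0]     # detected feature position (pixels)
desired = [100.0, 100.0]    # target feature position (pixels)
gain = 0.5                  # proportional gain
eps = 0.5                   # stop threshold (pixels)

iterations = 0
while max(abs(desired[0] - current[0]), abs(desired[1] - current[1])) >= eps:
    # "control law": close a fixed fraction of the remaining error per cycle
    current[0] += gain * (desired[0] - current[0])
    current[1] += gain * (desired[1] - current[1])
    iterations += 1

print(iterations)  # 7
```

Each cycle plays the role of one pass around the loop of Figure 1.7: measure the error from the image, compute a correction, apply it, and measure again.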

Chapter 2
Object detection survey and State of the Art

2.1 Introduction

As a precursor to this project, a wide survey of existing software packages and techniques needed to be done. The survey consisted of two stages:

1. A wider survey with shallow testing and research to classify possible subjects. The table of results can be found in Appendix B.
2. A filtered survey based on those attributes, previous results and experiences, with more detailed tests and research, also taking the available sensors into account. The table of results can be found in Appendix A.

This chapter introduces the most relevant software packages and techniques from the above surveys, describing the benefits and drawbacks experienced.

2.2 Available sensors

There are several ways to address the task of digitally recording the world. While there is a wide variety of sensors suitable for image-based applications, when building an actual humanoid robot one has to choose the type best fitting the application, and one that can fit into a robot body or - more preferably - into a robot head.


Figure 2.1: Most common sensor types: (a) monocular camera, (b) stereo cameras, (c) RGB-D devices, (d) laser scanner


Monocular cameras usually provide accurate colors and can be considerably faster than the other sensor types.
Stereo cameras usually require extra processing time, since such a system consists of two calibrated monocular cameras with a predefined distance between them. They are also used for digital and analog 3D filming and photography.

Figure 2.2: Stereo camera theory

RGB-D sensors operate with structured-light or time-of-flight techniques and have become quite popular thanks to the Microsoft Kinect and the Asus Xtion. These sensors are cheaper than a medium-quality stereo camera system and also require little or no calibration, but their image quality is fixed at the level of standard webcams. They have special hardware for processing the data into RGB images with depth information, hence RGB-D. A really common use case for these sensors is human-PC virtual reality interaction interfaces.
Laser scanners are more industrial and usually substantially more expensive than the others. Due to their primarily industrial design, laser scanners have an extremely low error rate and high resolution. They are mostly used on mobile robots for mapping tasks, or for 3D object scanning in graphical or medical applications.
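For reference, the depth recovered by a calibrated stereo pair (Figure 2.2) follows the standard pinhole relation Z = f * B / d, where f is the focal length in pixels, B the baseline and d the disparity. A minimal sketch with made-up example numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example values only: 700 px focal length, 12 cm baseline, 28 px disparity.
z = depth_from_disparity(700.0, 0.12, 28.0)
print(z)  # 3.0 (metres)
```

The relation also shows why stereo depth degrades with distance: a one-pixel disparity error costs far more depth accuracy at small disparities (far objects) than at large ones.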

2.3 Survey work

This section summarizes the research conducted for this thesis, mentioning test experiences where there were any.
Holzer et al. defined so-called distance templates and applied them using regular template
matching methods.
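Regular template matching of the kind mentioned above slides a small patch over the image and scores every position; a minimal sum-of-squared-differences version (plain Python on toy single-channel images, not the OpenCV implementation) looks like:

```python
def match_template(image, template):
    """Return (row, col) of the best match by sum of squared differences."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), None
    for r in range(ih - th + 1):          # every placement of the template
        for c in range(iw - tw + 1):
            score = sum((image[r + i][c + j] - template[i][j]) ** 2
                        for i in range(th) for j in range(tw))
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy 4x5 "image" containing the 2x2 template at row 1, column 1.
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0],
    [0, 7, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 8],
            [7, 9]]
print(match_template(image, template))  # (1, 1)
```

Distance templates replace raw intensities with distance-transform values, but the sliding-window scoring scheme stays the same.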
Hinterstoisser et al. introduced a method using Dominant Orientation Templates to identify textureless objects and estimate their pose in real time. In their very recent work, Hinterstoisser et al. engineered the LINE-Mod method for detecting textureless objects using color gradients and surface normals. The advantage of their approach is that even though an RGB-D sensor is required in the learning stage, a simple webcam is enough for detection - of course the error will increase, since no surface points are available from a webcam. A compact implementation has been available since OpenCV 2.4.
Experiments done with LINE-Mod showed that this method cannot be applied to textured objects, although it is a reasonably nice alternative for textureless ones. An experience gained by using this method is that the false detection rate was extremely high and no applicable 3D pose result could be obtained; it only reported whether an object was detected or not. The first implementation was released at the time of this thesis work, so it is possible that future versions will improve the results. The product of this thesis work could be expanded to textureless objects using this technique.
Test videos prepared for this thesis:
http://www.youtube.com/watch?v=2cCsYfwQGxI
http://www.youtube.com/watch?v=3e3Wola4EWA


Figure 2.3: LINE-Mod


Nister and Stewenius sped up the classic feature-based matching approach by utilizing a tree-based, search-optimized data structure. They also ran experiments on a PR2 robot and released the ROS package named Objects of Daily Use Finder (ODUFinder).

Figure 2.4: ODUFinder


However solid the theoretical basis of this method is, the conducted experiments showed that its practical results were not applicable for a mobile robot working in a human environment at the time of this work.
Muja et al. implemented the general Recognition Infrastructure to host and coordinate the modules of recognition pipelines, while Rusu et al. provided an example use case applying Binarized Gradient Grids and Viewpoint Feature Histograms.


The OpenCV group, Rublee et al., defined a new type of feature detector/extractor, ORB (Oriented FAST and Rotated BRIEF), to provide a BSD-licensed alternative to SURF [5]. The work of [4] was to experiment and create benchmarks for TOD [34] using ORB as its main feature detection/extraction method.
Experimental work was done for this thesis to see whether SIFT could be replaced with ORB in Chapter 3, but due to deadlines it was not possible to implement it. As future work it would, however, be a nice addition to the final software.
The work of [38], RoboEarth, is a general communication platform for robots; it has a ROS package which contains a database client and a detector module. Even though the detector module of RoboEarth was not precise enough for the task of this thesis, it is still exemplary as robotics software.
The tests of the RoboEarth package were really smooth and easy to carry out, since the authors provide tutorials and convenient interfaces for their software. The requirements of the system, however, did not exactly match the available hardware: the detector of RoboEarth needs an RGB-D sensor, and with REEM we only had a stereo camera. Experiments showed that obtaining a precise pose is hard due to its high variance, and the false detection rate was also high.

Figure 2.5: RoboEarth Detector


The published library of Eric Marchand et al. called ViSP contains tools for visual servoing tasks, along with image processing and other fields as well. ViSP is also available as a ROS package, and it contains a ready-to-use model-based tracker which tracks the edges of the object model starting from a known initial position.


Figure 2.6: ViSP Tracker


Thanks to the ROS package provided for it, the ViSP tracker was easy to test. The good results almost made it the primary choice of tracker; however, it still requires a known initial position to start, and it solely does tracking, which alone does not solve the pose estimation problem. Although it provided good results, problems occurred due to its limitation of only using greyscale images. The tracker finally chosen (Section 3.4) also takes colors into account.
A remarkably unique approach for tracking 2D planar patterns is ESM [7]. It has a free version for demoing, and also a licensed version which is highly optimized. It did not prove reliable enough, and its output format also raised problems for this task. During the tracking process, the searched template is continually modified to adapt to small changes over time. Because of this it can only work when there is a tiny difference between two consecutive images and, more importantly, the target pattern must not travel large distances between such images. It is also worth mentioning that, since this technique is also a tracker, an initial pose is required. The implementation provided for this technique did not make testing easier, shipping only as a dynamically linked C library with a C header file; no open-source version is provided.

Figure 2.7: ESM Tracker



Mixing techniques proved to be successful in markerless object detection. Feature detection and edge-tracking based methods were presented and discussed in [26], [37], [30], [25], [31], [8], [19], [20] and [21], leading to the birth of a software package called The Blocks World Robotic Vision Toolbox. The basic idea is to use a feature-based pose estimation module to acquire a rough estimate, and then use this result to initialize a real-time tracker based on edge detection, making the system much faster and more dynamic. As a result of this survey work, BLORT was chosen to be tested further and integrated into the software of the REEM robot.

2.4 Brief summary of survey

Table 2.1 summarizes the previous section in table form, highlighting the most relevant attributes.

Name         | Tracker | Detector | Hybrid | Sensor                                     | Texture     | Speed   | Output          | Keywords
ViSP tracker | Yes     | No       | No     | Monocular                                  | Only edges  | 30 Hz   | Pose            | edge tracking, grayscale, particle filter
RoboEarth    | No      | Yes      | No     | RGB-D (train, detect), monocular (detect)  | Needed      | 11 Hz   | Pattern matched | Kinect, point cloud matching, texture matching
LINE-Mod     | No      | Yes      | No     | RGB-D (train, detect), monocular (detect)  | Low texture | 30 Hz   | Pattern matched | surface normals, color gradients, Kinect
ESM          | Yes     | No       | No     | Monocular                                  | Needed      | 30 Hz   | Homography      | custom minimization, pattern matching
ODUFinder    | No      | Yes      | No     | Monocular                                  | Needed      | 4-6 Hz  | Matched SIFTs   | SIFT, vocabulary tree
BLORT        | No      | No       | Yes    | Monocular                                  | Needed      | 30 Hz+  | 3D pose         | SIFT, edge, CAD, RANSAC, OpenGL

Table 2.1: Survey summary table


Chapter 3
Pose estimation of an object
3.1 Introduction and overview

As a result of the wide and then the deep survey, one software package was chosen to be integrated with the REEM robot. A crucial factor for all surveyed techniques was the type of sensor required, because the REEM robot does not have a visual depth sensor in its head. Early experiments with the BLORT system showed that it could be capable of serving as a pose estimator on REEM for the grasping task. It provided correct results with a low ratio of false detections, especially when compared to the others, along with reasonably good speed.

BLORT - The Blocks World Robotic Vision Toolbox

The vision and robotics communities have developed a large number of increasingly successful methods for tracking, recognizing and online learning of objects,
all of which have their particular strengths and weaknesses. A researcher aiming
to provide a robot with the ability to handle objects will typically have to pick
amongst these and engineer a system that works for her particular setting. The
toolbox is aimed at robotics research and as such we have in mind objects typically
of interest for robotic manipulation scenarios, e.g. mugs, boxes and packaging of
various sorts. We are not aiming to cover articulated objects (such as walking humans), highly irregular objects (such as potted plants) or deformable objects (such
as cables). The system does not require specialized hardware and simply uses a single camera allowing usage on about any robot. The toolbox integrates state-of-the
art methods for detection and learning of novel objects, and recognition and tracking of learned models. Integration is currently done via our own modular robotics
framework, but of course the libraries making up the modules can also be separately
integrated into own projects.


Source: http://www.acin.tuwien.ac.at/?id=290 (Last accessed: 2012.10.16.)


For the core of BLORT, credit goes to Thomas Morwald, Johann Prankl, Michael Zillich, Andreas Richtsfeld and Markus Vincze at the Vision for Robotics (V4R) lab at the Automation and Control Institute (ACIN) of the Vienna University of Technology (TU Wien). BLORT was originally designed to provide a toolkit for robotics, hence its full name: Blocks World Robotic Vision Toolbox.
For a better understanding of this chapter it is required to read Section 1.3.
The list of papers connected to BLORT:
1. BLORT - The Blocks World Robotic Vision Toolbox Best Practice in 3D Perception and
Modeling for Mobile Manipulation [26]
2. Anytimeness Avoids Parameters in Detecting Closed Convex Polygons [37]
3. Basic Object Shape Detection and Tracking using Perceptual Organization [30]
4. Edge Tracking of Textured Objects with a Recursive Particle Filter [25]
5. Taking in Shape: Detection and Tracking of Basic 3D Shapes in a Robotics Context
[31]
Since no ROS package was provided, the integration had to start at that level. The system itself is composed of the separate works of the above authors but performs reasonably well when integrated. A positive aspect of BLORT is that it was designed to be used with a single webcam; this way no extra sensor is required on most robots, and it still performs well. Of course, like most scientific software, BLORT was developed indoors without ever leaving the lab. The step to take with BLORT was to integrate it into a system running ROS and tune it so that it would be able to operate on a real robot outside a laboratory environment.
For the above objectives to work out, the software had to be thoroughly tested while also discovering the regions where most of the computation is done. The code had to be refactored in order to provide more convenient interfaces and to eliminate bugs such as small memory leaks and other problems caused by incorrect memory usage. All the components and algorithms used by BLORT also had to be inspected and their parameters either exposed to end-users for deploy-time configuration or modified internally for better results.


The algorithmic design of BLORT is a sequence of the detector and tracker modules.

initialization;
while object not detected or (object detected and confidence < threshold_detector) do
    // Run object detector
    Extract SIFT features;
    Match extracted SIFTs to the codebook using kNN;
    Estimate object pose from matched SIFTs using RANSAC;
    Validate confidence;
    Publish object pose for the tracker;
end
while object tracking confidence is high do
    // Run object tracker
    Copy the input image and render the textured object into the scene at its known location;
    Run colored edge detection on both (input and rendered) images;
    Use a particle filter to match the images around the estimated pose;
    Average particle guesses and compute the confidence rate;
    Smooth the confidence values (edge, color) to avoid unrealistically fast changes;
    if confidence > threshold_tracker then
        Publish the pose of the object;
    end
end

Algorithm 2: BLORT algorithmic overview

3.2 CAD Model

CAD models are commonly used in Computer-Aided Design software, mainly by different types of engineers. These models define simple 3D objects as well as more complex ones.
Object trackers often rely on CAD models of the target object(s) to perform edge-detection-based tracking.
Related articles of BLORT: [26], [25].
MeshLab [11] proved to be a great tool for handling simple objects and generating convex hulls of complex meshes.
A demonstration video about the process of creating a simple juicebox brick can be found at the following link: http://www.youtube.com/watch?v=OtduI5MWVag

Figure 3.1: CAD model of a Pringles box in MeshLab

3.3 Rendering

Rendering is commonly known from computer games and scientific visualization. It loads or generates 3D shapes and (usually) projects them onto a 2D surface: the screen. The OpenGL and DirectX libraries are often used to utilize the computational power of the GPU (video card) for rendering tasks through their APIs. Unlike CUDA, which is rather young compared to the other two, these libraries were not designed for scientific computation, but they are still being used for it.


Figure 3.2: Examples of rendering in BLORT: (a) visualizer of TomGine (part of BLORT); (b) rendering a 3D model onto a camera image


In the case of BLORT, rendering is used in the tracker module to validate the current pose guess. An intuitive description of this step is that the tracker module imagines (renders) how the object should look given the current pose guess and validates the guess using a comparison method.

3.4 Edge detection on color image

To validate a pose guess, the tracker module compares the original input image with the one that has the 3D object rendered onto it. Such a comparison can be done in several ways. In the case of object tracking it is reasonable to use the edges of the object, which can be extracted by detecting the edges of the image.
The following steps were implemented using OpenGL shaders, a technique highly optimized for computing image convolution. The procedure takes an input image I and a convolution kernel K and outputs O. A simplified definition could be

    O[x, y] = Σ I[f(x, y)] · K[g(x, y)]    (3.1)

where f(x, y) and g(x, y) are the corresponding indexer functions. The result, however, is often required to be normalized. This can be arranged by adding a normalizing factor to Equation 3.1, namely the sum of the multiplication factors, more concretely the elements of kernel K. The final formula for convolution then looks like the following:

    O[x, y] = (1 / Σ_{a,b} K[a, b]) · Σ I[f(x, y)] · K[g(x, y)]    (3.2)

Figure 3.3: Image convolution example
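The normalized convolution of Equation 3.2 can be sketched directly. The following is an illustrative Python version (the thesis implementation runs as an OpenGL shader on the GPU); the kernel and image values are made up for the example, and border pixels where the kernel does not fit are simply left at zero.

```python
def convolve_normalized(image, kernel):
    """Apply Equation 3.2: convolution normalized by the sum of kernel elements.

    `image` and `kernel` are lists of lists (rows of pixel values).
    Border pixels where the kernel does not fit are left at zero.
    """
    kh, kw = len(kernel), len(kernel[0])
    norm = sum(sum(row) for row in kernel)  # sum of kernel elements (Eq. 3.2)
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(kh // 2, h - kh // 2):
        for x in range(kw // 2, w - kw // 2):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    # f and g of Eq. 3.1: index the image around (x, y),
                    # the kernel at (a, b)
                    acc += image[y + a - kh // 2][x + b - kw // 2] * kernel[a][b]
            out[y][x] = acc / norm
    return out

# A constant image stays constant under a normalized smoothing kernel.
flat = [[10.0] * 5 for _ in range(5)]
box = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
smoothed = convolve_normalized(flat, box)
print(smoothed[2][2])  # 10.0
```

The same loop shape applies to the Gaussian, Scharr and spreading kernels listed below; only the kernel values change.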


Steps of image processing in the tracker module of BLORT:

1. A blurring operator is applied to the input image to minimize point-like errors such as isolated black and white pixels. This is usually an important pre-filtering step for edge-detection methods. For this purpose a 5x5 Gaussian operator was chosen.

    K = (1/159) ·
        |  2   4   5   4   2 |
        |  4   9  12   9   4 |
        |  5  12  15  12   5 |    (3.3)
        |  4   9  12   9   4 |
        |  2   4   5   4   2 |

2. Edge detection using a Scharr operator. By applying Kx and Ky as convolutions to the input image, the corresponding estimated derivatives can be computed.

    Kx = (1/22) ·
         |  -3   0   3 |
         | -10   0  10 |    (3.4)
         |  -3   0   3 |

    Ky = (1/22) ·
         | -3  -10  -3 |
         |  0    0   0 |    (3.5)
         |  3   10   3 |

3. Nonmaxima suppression to keep only the strongest edges found by the edge detection.

    Kx = | 0  0  0 |
         | 1  0  1 |    (3.6)
         | 0  0  0 |

    Ky = | 0  1  0 |
         | 0  0  0 |    (3.7)
         | 0  1  0 |

In this step the above convolutions serve as indicators of whether the current pixel is a maximal edge compared to its neighborhood. If it is not, the pixel is discarded and an extremal element is returned (RGB(0, 127, 128)).

4. Spreading operation to grow the edges remaining after the previous step.

    K = | 1/2   1  1/2 |
        |  1    0   1  |    (3.8)
        | 1/2   1  1/2 |

This step enlarges the previously determined strongest edges, which is important for removing the small errors caused by falsely detected edges.


Figure 3.4: Steps of image processing: (a) input image; (b) Gaussian blur; (c) Scharr operator; (d) nonmaxima suppression; (e) spreading


The above method is implemented as an OpenGL shader (which exploits the parallelizable nature of image processing techniques), but a pure CPU version using OpenCV was also implemented during the work of this thesis, although it proved considerably slower than the shader version.


3.5 Particle filter

As a technique well grounded in statistical methods, particle filters are often used in robotics for localization tasks. For object detection tasks they are utilized for tracking objects in real time.
At its very core, particle filtering is a model estimation technique based on simulation. In such a system a particle can be seen as an elementary guess about one possible estimate of the model, while simulation stands for continuously validating and resampling these particles to adapt the model to new information given by measurements or additional data.

Figure 3.5: Particle filter used for localization


Figure 3.5 shows a particle filter used in localization. In the initial situation, where no information is given, the particles are spread evenly over the map. As the robot moves and uses its sensors to measure the environment, the particles begin to concentrate around the areas most likely to contain the robot.
The design of particle filters makes it possible to utilize parallel computing techniques in the implementation, such as using multiple processor threads or the graphics card. This is an important feature which makes the algorithm suitable for real-time tracking.

Generate initial particles;
while Error > Threshold do
    Wait for additional information;
    Calculate normalized particle importance weights;
    Resample particles based on importance weights to generate a new particle set;
    Calculate Error;
end

Algorithm 3: Particle filter algorithm


The tracker module uses a particle filter to track and refine the pose of an object. One particle in this specific case holds a pose value, which is evaluated by running the edge-detection-based comparison method described in Section 3.4.

Figure 3.6: Particles visualized on the detected object of BLORT. Green particles are valid, red ones are invalid.
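The weight-resample cycle described above can be sketched on a toy problem. This is a hypothetical one-dimensional Python example where each particle is a single scalar pose scored against a measurement; BLORT instead scores full 6-DoF poses with the edge comparison of Section 3.4, and the weighting function below is made up for illustration.

```python
import random

def resample(particles, weights):
    """Draw a new particle set with probability proportional to weight."""
    total = sum(weights)
    cumulative, acc = [], 0.0
    for w in weights:
        acc += w / total
        cumulative.append(acc)
    new_set = []
    for _ in particles:
        r = random.random()
        # keep the first particle whose cumulative weight covers r
        for p, c in zip(particles, cumulative):
            if r <= c:
                new_set.append(p)
                break
        else:
            new_set.append(particles[-1])  # guard against float rounding
    return new_set

def step(particles, measurement, noise=0.05):
    # importance weight: particles close to the measurement score high
    weights = [1.0 / (1e-6 + abs(p - measurement)) for p in particles]
    survivors = resample(particles, weights)
    # diffuse the survivors so the filter keeps exploring
    return [p + random.gauss(0.0, noise) for p in survivors]

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(200)]
for _ in range(10):
    particles = step(particles, measurement=4.2)
print(sum(particles) / len(particles))  # clusters near the measurement, 4.2
```

Because each particle is weighted and resampled independently, this loop parallelizes naturally, which is what makes the GPU implementation in BLORT's tracker feasible.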

3.6 Feature detection (SIFT)

Image processing is often only the first step towards further goals such as image analysis or pattern matching. The term image processing refers to operations done on the pixel level, where the information gained is also often pixel-level information; the features used here are the individual pixels. However, it is necessary to define higher-level features in order to increase complexity, robustness or speed, or all of these at the same time. Though a successfully extracted line in an image is also considered a feature, when speaking of feature detection it usually refers to feature types centered around a point. Such feature detectors are, for example:


FAST
BRIEF
SIFT [23]
ORB [32]
SURF [5]
FREAK

Figure 3.7: Extracted SIFT and ORB feature points of the same scene

The SIFT detector proved to be one of the strongest in the literature and in existing applications, and was therefore chosen as the main feature detector of BLORT. The SIFTs extracted from the surface of the current object in the learning stage are saved in a data structure which will be referred to as the codebook (or object SIFTs) from now on. Later this codebook is used to match image SIFTs: features extracted from the current image.
SIFT details:
- invariant to scaling and orientation
- partially invariant to affine distortion and illumination changes

SIFT procedure:
- Convolve the image using a Laplacian of Gaussian (LoG) filter at different scales (scale pyramid)
- Compute the difference between the neighboring filtered images
- Keypoints: local maxima/minima of the difference of LoG
  - compare to the 8 neighbors on the same scale
  - compare to the 9 corresponding pixels on each neighboring level
- Keypoint localization
  - problem: too many (unstable) keypoints
  - discard the low-contrast points
  - eliminate the weak edge points
- Orientation assignment
  - invariant to rotation
  - each keypoint is assigned one or more orientations from local gradient features:

    m(x, y) = sqrt( (L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2 )    (3.9)

    θ(x, y) = arctan( (L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)) )    (3.10)

  - calculate these for every pixel in a neighboring region to create an orientation histogram
  - determine the dominant orientation based on the histogram

Figure 3.8: SIFT orientation histogram example
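Equations 3.9 and 3.10 translate almost directly into code. This is a minimal Python sketch computing one pixel's gradient magnitude and orientation from its four neighbors; real SIFT evaluates these on the Gaussian-smoothed image L at the keypoint's scale, and atan2 is used here instead of a bare arctangent to keep the full angle range.

```python
import math

def gradient(L, x, y):
    """Gradient magnitude and orientation at (x, y), Equations 3.9 and 3.10.

    L is a row-major grid of image intensities; x indexes columns, y rows.
    """
    dx = L[y][x + 1] - L[y][x - 1]
    dy = L[y + 1][x] - L[y - 1][x]
    m = math.sqrt(dx ** 2 + dy ** 2)
    theta = math.atan2(dy, dx)  # atan2 keeps the full [-pi, pi] range
    return m, theta

# A simple horizontal ramp: intensity grows left to right.
ramp = [[float(x) for x in range(5)] for _ in range(5)]
m, theta = gradient(ramp, 2, 2)
print(m, theta)  # 2.0 0.0 -- the gradient points along +x
```

Accumulating (m, theta) pairs over a neighborhood into angle bins gives exactly the orientation histogram of Figure 3.8.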


On the implementation level, the feature detection step again utilizes the graphics card, using the SiftGPU [36] library to extract image SIFTs. As part of this thesis work, a ROS wrapper package for this library was also created:
http://ros.org/wiki/siftgpu (Last accessed: 2012.11.05.)


3.7 kNN and RANSAC

3.7.1 kNN

k-Nearest Neighbors [12] is an algorithm used for solving classification problems. Given a distance measure for the data type of the dataset at hand, it classifies the current element based on the attributes and classes of its nearest neighbors. It is also often used for clustering tasks.
In BLORT, kNN is used during the detection stage to select a fixed-size set (of size k) of features from the codebook that are most similar to the feature currently being matched.
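The codebook lookup can be sketched as a brute-force kNN in Python over hypothetical low-dimensional descriptors; real SIFT descriptors are 128-dimensional, and an optimized nearest-neighbor structure would replace the linear scan.

```python
def k_nearest(codebook, query, k):
    """Return the k codebook entries closest to `query` (Euclidean distance).

    `codebook` is a list of (descriptor, label) pairs, where a descriptor is
    a tuple of floats. BLORT uses such matches to pair image SIFTs with
    object SIFTs before running RANSAC.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(codebook, key=lambda entry: dist(entry[0], query))
    return ranked[:k]

# Made-up 2D descriptors standing in for 128-dimensional SIFT vectors.
codebook = [((0.0, 0.0), "corner A"),
            ((1.0, 0.0), "corner B"),
            ((5.0, 5.0), "logo"),
            ((5.2, 4.9), "logo")]
matches = k_nearest(codebook, query=(5.1, 5.0), k=2)
print([label for _, label in matches])  # ['logo', 'logo']
```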

3.7.2 RANSAC

The RANSAC [13] algorithm is possibly the most widely used robust estimator in the field of computer vision. The abbreviation stands for Random Sample Consensus. RANSAC is an iterative model estimation algorithm which operates by assuming that the input data set contains outliers, elements not inside the validation range of the estimated mathematical model, and minimizes the outlier/inlier ratio. It is a non-deterministic algorithm, since random number generation is used in the sampling stage.
In BLORT, RANSAC is used to estimate the pose of the object using image features (SIFTs in this case) in order to initialize the tracker module; the RANSAC method can therefore be found in the detector module.

Figure 3.9: Extracted SIFTs. Red SIFTs are not in the codebook, yellow and green ones are
considered as object points, green ones are inliers of the model and yellow ones are outliers.


Data: dataset;
    model - whose parameters need to be estimated;
    n-points-to-sample - number of points used to give a new estimate;
    max-ransac-trials - maximum number of iterations;
    t - a threshold for the maximal error when fitting a model;
    n-points-to-match - the number of dataset elements required to set up a valid model;
    ε0 - an optional, tolerable error limit
Result: best-model;
    best-inliers;
    best-error

iterations = 0;
idx = NIL;
ε = n-points-to-match / dataset.size;
while iterations < max-ransac-trials and (1.0 - ε^(n-points-to-match))^iterations >= ε0 do
    idx = random indices from dataset;
    model = Compute_model(idx);
    inliers = Get_inliers(model, dataset);
    if inliers.size >= n-points-to-match then
        error = Compute_error(model, dataset, idx);
        if error < best-error then
            best-model = model;
            best-inliers = inliers;
            best-error = error;
        end
    end
    increment iterations;
end

Algorithm 4: RANSAC algorithm in BLORT
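The shape of Algorithm 4 can be tried out on a toy problem. This Python sketch fits a 2D line instead of a 6-DoF pose: Compute_model, Get_inliers and Compute_error are replaced by made-up line-fitting equivalents, and the adaptive stopping condition is simplified to a fixed trial count.

```python
import random

def ransac_line(points, n_trials=200, t=0.1, n_to_match=8):
    """Fit y = a*x + b to `points` while tolerating outliers (Algorithm 4 shape)."""
    best_model, best_inliers, best_error = None, [], float("inf")
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = random.sample(points, 2)  # minimal sample
        if x1 == x2:
            continue  # vertical line, cannot express as y = a*x + b
        a = (y2 - y1) / (x2 - x1)          # Compute_model
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) < t]        # Get_inliers
        if len(inliers) >= n_to_match:
            error = sum(abs(y - (a * x + b))
                        for x, y in inliers) / len(inliers)  # Compute_error
            if error < best_error:
                best_model, best_inliers, best_error = (a, b), inliers, error
    return best_model, best_inliers

random.seed(1)
# 20 points on y = 2x + 1 plus 10 random outliers
points = [(x * 0.1, 2 * x * 0.1 + 1) for x in range(20)]
points += [(random.uniform(0, 2), random.uniform(-5, 5)) for _ in range(10)]
(a, b), inliers = ransac_line(points)
print(round(a, 2), round(b, 2))  # close to 2.0 and 1.0
```

In BLORT the minimal sample is a set of SIFT correspondences, the model is the object pose, and an inlier is a correspondence whose reprojection error stays under the threshold.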


3.8 Implemented application

All implementation work was done using ROS 1.4 and C++.

Training and detecting a Pringles box: http://www.youtube.com/watch?v=HoMSHIzhERI
Training and detecting a juicebox: http://www.youtube.com/watch?v=0QVc9x3ZRx8

3.8.1 Learning module

Like similar applications, BLORT requires a learning stage before any detection can be done. In order to start the process, a CAD model of the object is needed. This model gets textured during the learning process, and SIFTs are registered onto surface points of the model. The software runs the tracker module, which is able to operate without texture, based only on the peripheral edges of the object (i.e., the outline of the object). The learning stage is operated manually.
After the operator starts the tracker in an initial pose displayed on the screen, the tracker will follow the object. By pressing a single button, both texture and SIFT descriptors are registered for the most dominant face of the object (i.e., the one that is most orthogonal to the camera). All captured information is used on-the-fly from the moment of recording during the learning stage. As the tracker gets more information by registering textures to different faces of the object, the task of the operator becomes more convenient.¹
To make this step easier for new users of BLORT, demonstration videos were recorded:
Training a Pringles container: http://www.youtube.com/watch?v=pp6RwxbUwrI
Training with a juicebox: http://www.youtube.com/watch?v=Hfg7spaPmY0

3.8.2 Detector module

The detector module, unlike its name implies, does both object detection and pose estimation; however, the resulting pose is often not completely precise. The detection stage starts with the extraction of SIFTs (Section 3.6), continues with a kNN method (Section 3.7.1) to determine the best matches from the codebook, which are then further used by a RANSAC method (Section 3.7.2) approximating the pose of the object.

¹ Cylindrical objects tend to keep rotating when there is no texture, due to their completely symmetric form.


Figure 3.10: Detection result


To overcome the imprecision of the feature-based approach and to further validate the result, an object tracker is initialized with this pose.

3.8.3 Tracker module

As mentioned before, the tracker module runs a particle-filter-based tracker (Section 3.5) that uses 3D rendering to imagine the result and then validates it with edge detection and matching running on the GPU to achieve real-time speed. This step is best summarized in the overview of BLORT, Algorithm 2, at the beginning of this chapter.

Figure 3.11: Tracking result; the object visible in the image is rendered


3.8.4 State design pattern for node modes

Although BLORT was designed to provide real-time tracking (tracker module) after feature-based initialization (detector module), it yields a different possible use-case which is more desirable for this thesis than the original functionality.
When used in an almost-still scene to determine the pose of an object to be grabbed, tracking provides the refinement and validation of the pose acquired by the detector. Defining a timeout for the tracker in these cases results in significant resource savings, which is important on a real robot. After the timeout has passed and the confidence is sufficient, the last determined pose can be used. This way the robot, for example, does not have to run all the costly algorithms until it reaches the table where it needs to grab an object.
The above behaviour, however, is not always an option; therefore it is also required to have a full-featured tracker which can recover when the object is lost.
Even though the mode is a launch-time parameter of BLORT, the run-time design pattern called State [14, p. 305] brings convenience to the implementation and future use.

tracking

The full-featured version of BLORT. When BLORT is launched in tracking mode, it will recover (or initialize) when needed and track continuously.

Figure 3.12: Diagram of the tracking mode



singleshot

When launched in singleshot mode, BLORT will initialize using the detector module and then refine the gained pose by launching the tracker module, but only when queried for this service through a ROS service interface. The result of one service call is one pose, or an empty answer if the pose estimation failed due to an absent object or a bad detection.

Figure 3.13: Diagram of the singleshot mode
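The two modes can be sketched with the State pattern [14]. This is a hypothetical Python skeleton; the real blort_ros node is written in C++, and the calls below stand in for ROS image callbacks and service calls.

```python
class TrackerNode:
    """Context: delegates each camera frame to its current mode (the State)."""
    def __init__(self, mode):
        self.mode = mode  # swapping this object switches behaviour at run-time
    def on_image(self, image):
        return self.mode.handle(self, image)

class TrackingMode:
    """Full-featured mode: recovers/initializes when needed, tracks continuously."""
    def handle(self, node, image):
        return "track"

class SingleShotMode:
    """Runs detector + tracker refinement only while a query is pending."""
    def __init__(self):
        self.pending_query = False
    def handle(self, node, image):
        if not self.pending_query:
            return "idle"           # costly pipeline stays off between queries
        self.pending_query = False  # one answer per service call
        return "detect+refine"

node = TrackerNode(SingleShotMode())
print(node.on_image("frame0"))  # idle
node.mode.pending_query = True  # a ROS service call would set this flag
print(node.on_image("frame1"))  # detect+refine
print(node.on_image("frame2"))  # idle
```

The payoff of the pattern is that the per-frame callback never branches on the mode; each mode object encapsulates its own behaviour, matching the diagrams in Figures 3.12 and 3.13.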


3.9 Pose estimation results and ways for improvement

The goal of this thesis was to find a way to start with tabletop object grasping on the REEM robot and to provide an initial solution to it.
Table 4.1 shows detection statistics of BLORT in a few given scenes. The average pose estimation time was between 3 and 5 seconds.
Since the part which takes most of the CPU time is the RANSAC algorithm inside the detector module, it is desirable to decrease the number of extracted SIFT (or any other) features.
Most of the failed attempts were caused by matching the bottom or the top of the boxes to the wall or some other untextured surface. In these cases the detector made a mistake by initializing the tracker with a wrong pose, but the tracker was satisfied with it because the edge-based matching (which does not require texture) was perfect. It would be useful to provide a way to exclude specific faces of the object in case they are poorly textured.

3.10 Published software, documentation and tutorials

All software developed for BLORT was published open-source on the ROS wiki and can be found at the following link:
http://www.ros.org/wiki/perception_blort
It is a ROS stack which consists of 3 packages:

blort: holds the modified version of the original BLORT sources, used as a library.
blort_ros: contains the nodes using the BLORT library, completely separate from it.
siftgpu: a necessary dependency of the blort package.

The code is hosted on PAL Robotics' public GitHub account at https://github.com/pal-robotics.


Figure 3.14: Screenshots of the ROS wiki documentation


Links to documentation and tutorials:
BLORT stack: http://ros.org/wiki/perception_blort
blort package: http://ros.org/wiki/blort
blort_ros package: http://ros.org/wiki/blort_ros
siftgpu package: http://ros.org/wiki/siftgpu
Training tutorial: http://www.ros.org/wiki/blort_ros/Tutorials/Training
Track and detect tutorial: http://www.ros.org/wiki/blort_ros/Tutorials/TrackAndDetect
"How to tune?" tutorial: http://www.ros.org/wiki/blort_ros/Tutorials/Tune

3.11 Hardware requirements

BLORT requires a graphics card with OpenGL support and GLSL >= 2.0 (OpenGL Shading Language) for running the parallelized image processing in the tracker module, and also in the detector module, where SiftGPU is used to extract SIFTs quickly.

3.12 Future work

Use a database to store learned object models. This could also be used to interface with other object detection systems.
SIFT dependency: remove the mandatory usage of SiftGPU and SIFT in general. Provide a way to use different keypoint extractor/detector techniques.
OpenGL dependency: it would be elegant to have build options which also support CUDA or non-GPU modes.


Chapter 4
Increasing the speed of pose estimation
using image segmentation
4.1 Image segmentation problem

Image processing operators such as feature extractors are usually quite expensive in terms of computation time; therefore it is often beneficial to limit their operating space. Much faster speed can be achieved by limiting the space of frequently called, costly image processing operators. The question is how to do so.
Trying to copy nature is usually a good way to start in engineering, so let us consider how human image processing works. Human perception tries to keep things simple and fast while the brain devotes only a tiny part of itself to the task. Most of the information we receive through our eyes is disposed of by the time it reaches the brain. The information that actually arrives is centered around a certain area of our vision with high detail, called the focus point, while we only get highly sparse information about other areas. In this chapter the same approach is followed to increase the speed of image-based systems, in this case focused on boosting BLORT.
In order to limit the operating space, the input image needs to be segmented. Segmentation can be done via direct masking, by painting the masked regions of the image to some color, or by assigning a matrix of 0s and 1s to the image as a mask, marking the valid and invalid pixels, and carrying this mask along with the image.
In general, a priori knowledge is required to know which areas of the input are interesting for a specific costly operator. Most of the time this depends on the actual application environment, which is defined by the hardware, software, camera and physical surroundings. The result of the segmentation is a mask which in the end will be used to indicate which pixels are valid for further analysis and which are invalid. Formally it could be written as


    O[i, j] = image_operator(I, i, j),   where M[i, j] == valid
    O[i, j] = extremal_element,          where M[i, j] == invalid    (4.1)

where O is the output image, I is the input image, i and j are the current indices, M is the mask, while image_operator and extremal_element depend on the current use-case.
As it plays an important role in Computer Vision, image segmentation is a strong tool in medical imaging, face and iris recognition and agricultural imaging, as well as in image operator optimization.
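Equation 4.1 amounts to a guarded per-pixel loop. The following is an illustrative Python sketch in which the costly operator is a made-up placeholder and invalid pixels receive the extremal element.

```python
def apply_masked(image, mask, image_operator, extremal_element=0):
    """Equation 4.1: run the costly operator only where the mask is valid (1)."""
    h, w = len(image), len(image[0])
    out = [[extremal_element] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] == 1:  # valid pixel
                out[i][j] = image_operator(image, i, j)
    return out

# Hypothetical "costly" operator: here just a per-pixel intensity doubling.
double = lambda img, i, j: 2 * img[i][j]
image = [[1, 2], [3, 4]]
mask = [[1, 0], [0, 1]]
print(apply_masked(image, mask, double))  # [[2, 0], [0, 8]]
```

The saving comes from the fact that the expensive call simply never runs on masked-out pixels, which is exactly what limiting the operating space of a feature extractor achieves.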

4.2 Segmentation using image processing

After creating a mask with a specific method, point-like errors should be eliminated. This step is done by running an erode operator, commonly used in image processing. Erode works on a binary image; during this step, all pixels of the target color (or class) are trialed for survival. Figure 4.1 shows an example and the way the erosion trial works on the pixel level.

Figure 4.1: Example of erosion; the black pixel class was eroded

In order to make sure that the mask was not shrunk too much, a dilate step may follow. Like erode, the dilate operator works on a binary image, but it trials all non-target pixels for survival. Figure 4.2 shows an example and the way the dilation trial works on the pixel level.¹

¹ Figures in this section were taken from http://docs.opencv.org/doc/tutorials/imgproc/erosion_dilatation/erosion_dilatation.html and [10]


Figure 4.2: Example of dilation, where the black pixel class was dilated

By combining erode and dilate, point-like noise can be eliminated and masking errors can be fixed in an adaptive way, reducing mask noise. The parameters of the two operators are exposed to the end-user and can be tuned at run-time.
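The two morphological operators can be sketched in a few lines. This is an illustrative pure-Python version with a fixed 3x3 structuring element; the implementation described here uses OpenCV's erode and dilate, with iteration counts and kernel sizes tunable at run-time.

```python
def erode(mask):
    """A pixel survives only if its whole 3x3 neighbourhood is set (binary erosion)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    """A pixel is set if any pixel of its 3x3 neighbourhood is set (binary dilation)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(any(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

# A lone noise pixel disappears under erosion; a solid 3x3 block keeps its centre.
noise = [[0] * 5 for _ in range(5)]
noise[2][2] = 1
block = [[0] * 5 for _ in range(5)]
for y in (1, 2, 3):
    for x in (1, 2, 3):
        block[y][x] = 1
print(sum(map(sum, erode(noise))))  # 0 -- the noise pixel is removed
print(sum(map(sum, erode(block))))  # 1 -- only the centre survives
```

Running erode first and dilate afterwards (a morphological opening) is what removes point-like noise while roughly restoring the size of the surviving regions.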

4.3 ROS node design

Since the segmentation task is well defined, the same ROS node skeleton can be used to implement all segmentation methods. This node has two primary input topics: image and camera info. The latter holds the camera parameters and is published by the ROS node responsible for capturing images. The output topics of the node are: a debug topic which holds information on the inner workings (e.g., a correlation map), a masked version of the input image, and a mask.
For efficiency, the node is designed so that messages on these topics are only published when there is at least one node subscribed to them. For this reason the debug topic is usually empty.

Figure 4.3: ROS node design of segmentation nodes



It was mentioned before that the parameters of the erode and dilate operators need to be exposed. This is solved through a dynamic reconfigure² interface provided by ROS. For each of erode and dilate, the number of iterations and the kernel size can be set. An extra threshold parameter was included because segmentation methods often use at least one thresholding operator internally.

Figure 4.4: Parameters exposed through dynamic reconfigure

4.4 Stereo disparity-based segmentation

4.4.1 Computing depth information from stereo imaging

Figure 4.5: Example of stereo vision: (a) left camera image; (b) right camera image; (c) computed disparity image; (d) computed depth map matching the dimensions of the left camera image

² http://ros.org/wiki/dynamic_reconfigure

In Figure 4.5 the disparity image is computed by matching image patches between the images captured by the two cameras. Subfigure (d) of Figure 4.5 shows a depth map with each pixel colored according to its estimated depth value; black regions have unknown depth. The major drawback of stereo cameras compared to RGB-D sensors is that while depth images acquired from RGB-D sensors are continuous, stereo systems tend to have holes in the depth map where no depth information is available. This effect is mainly due to the fact that stereo cameras operate using feature detection and matching, while most RGB-D cameras use light-emitting techniques. Depth map holes occur in regions where no features could be extracted because of texturelessness. RGB-D techniques solve the texturelessness problem by emitting a light pattern onto the surface and measuring its distortion.
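The depth map of Figure 4.5 follows from the standard stereo relation depth = focal_length x baseline / disparity. Below is a small Python sketch with made-up camera numbers, where zero disparity (no stereo match found) yields an unknown-depth hole.

```python
def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a disparity map (in pixels) to a depth map (in metres).

    Pixels with zero disparity (no stereo match found) become None,
    corresponding to the black holes of Figure 4.5.
    """
    return [[(focal_px * baseline_m / d) if d > 0 else None for d in row]
            for row in disparity]

# Hypothetical camera: 500 px focal length, 10 cm baseline.
disparity = [[50.0, 25.0, 0.0]]
depth = depth_from_disparity(disparity, focal_px=500.0, baseline_m=0.1)
print(depth)  # depths of about 1 m and 2 m; None marks a hole
```

Nearer objects produce larger disparities, which is why halving the disparity doubles the estimated depth in the example.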

4.4.2 Segmentation

After obtaining a depth image, it is not enough to create a mask based on the distance values of single pixels. Such masks reflect the raw result of the segmentation, and further steps can be done to refine them.
Distance-based segmentation is good but not good enough in itself. Even though some parts of the input image are usually omitted, it can still forward too much unwanted information to a costly image-processing system. Images of experiments are shown in Figure 4.6. Segmentation steps can be organized in a pipeline fashion, so the obtained result is an aggregate of masks computed using different techniques.

Figure 4.6: Masked, input and disparity images

4.5 Template-based segmentation

4.5.1 Template matching

Template matching is a common way to start with object detection but rarely yields success as a standalone solution. It is well suited to searching for a subimage in a larger image, but the matching often fails when the pattern comes from a different source.

The most straightforward approach to template matching is image correlation. Its output is a correlation map, usually represented as a floating-point single-channel image of the same size as the scanned image, where the value of each pixel holds the result of the image-subimage correlation centered around that position. OpenCV has a highly optimized implementation of template matching where several different correlation methods can be chosen.³

Figure 4.7: Template-based segmentation. (a) Debug image showing the window around the target; (b) template; (c) masked image; (d) mask.

4.5.2 Segmentation

Irrelevant regions can be masked by thresholding the correlation map at a certain limit and using the result as the final mask. For tuning convenience and to deal with noise, erode and dilate operations can also be used.
³ OpenCV documentation: http://docs.opencv.org/modules/imgproc/doc/object_detection.html?highlight=matchtemplate

4.6 Histogram backprojection-based segmentation

4.6.1 Histogram backprojection

Calculating the histogram of an image is a fast operation and provides pixel-level statistical information. This type of information is also often used to solve pattern matching in a relatively simple way, based on the assumption that similar images or sub-images often have similar histograms, especially when these are normalized.
OpenCV provides an implementation of histogram backprojection where the target histogram (that of the pattern in this case) is backprojected onto the scanned image and a correlation image is computed. This result image indicates how well the target and sub-image histograms match, so a maximum search will find the best matching region.

Figure 4.8: The process of histogram backprojection-based segmentation. (a) Input camera image; (b) histogram backprojection result; (c) masked image.


Figure 4.8 shows the results of experiments using texture information captured during the training of BLORT. At startup the segmentation node reads the pattern image and computes its histogram. Later, whenever an input image is received, the node backprojects the stored histogram onto it.

4.6.2 Segmentation

The noise level of these results is negligible, so erode steps are not necessary here, but the dilate operator can still be used to enlarge the valid regions of the mask. The parameters are, as before, exposed through configuration files.
Experiments have shown that histogram backprojection works far more precisely and faster than the pixel correlation-based template matching approach. Figure 4.9 shows an experiment where the pattern was the orange ball visible in the upper-right corner; on the left is an image masked according to the result of the histogram backprojection. Image correlation-based matching usually fails under light conditions different from those the pattern was captured with. It can be seen that histogram backprojection is more robust to changes in light conditions.

Figure 4.9: Histogram segmentation using a template of the target orange ball


4.7 Combined results with BLORT

By masking the input image of the BLORT nodes the overall success rate was increased. The key to this success was controlling the ratio of inliers and outliers fed into the RANSAC method inside the detector module. By manipulating the features handled by RANSAC so that the Inliers/Outliers ratio increases, both the overall success rate and the speed can be improved. This ratio cannot be increased directly, but it can be influenced by decreasing the overall number of extracted features while trying to keep the ones coming from the object. A good indicator is the ratio of Object SIFTs (the features matching the codebook) to All SIFTs extracted from the image.

Figure 4.10: The segmentation process and BLORT. (a) Left camera image; (b) after stereo segmentation; (c) after histogram segmentation; (d) detector result; (e) tracker result.


This approach proved useful when BLORT is deployed in a noisy environment. To demonstrate this, measurements were taken on a sample of 6 scenes, 100 times each. Table 4.1 shows the effectiveness of each segmentation method averaged over all scenes. The timeout parameter of BLORT singleshot mode was set to 120 seconds. It can be seen that the speed and success rate of BLORT were dramatically increased by segmenting the input, especially when combining different techniques.


Method used               Extracted features   ObjectSIFTs/AllSIFTs   Success rate   Worst detection time
Nothing                   4106                 52/4106                14%            101s
Stereo-based              2287                 53/2287                41%            64s
Matching-based            3406                 32/3406                31%            74s
Histogram-based           1220                 50/1220                50%            32s
Stereo+histogram hybrid   600                  52/600                 82%            20s

Table 4.1: Effect of segmentation on detection


For the test scenes depicted in Figure 4.11 the pose of the object was estimated using an
Augmented Reality marker and its detector.

Figure 4.11: Test scenes (a)-(f)

4.8 Published software

All software developed for BLORT was published open-source on the ROS wiki and can be found at the following link:
http://www.ros.org/wiki/pal_vision_segmentation


Figure 4.12: Screenshot of the ROS wiki documentation

4.9 Hardware requirements

There are no special hardware requirements for these nodes.

4.10 Future work

Future work on this topic may include the introduction of other pattern-matching techniques or even new sensors. Most works marked as detectors in Chapter 2, such as LINE-MOD, could also be used for segmentation as long as their computation time remains reasonable.


Chapter 5
Tracking the hand of the robot

5.1 Hand tracking problem

The previous chapters of this thesis dealt with estimating the pose of the target objects, which is necessary for grasping. However, when considering the grasping problem (Section 1.7) in full detail together with the visual servoing problem (Section 1.8), it is also necessary to track the robot manipulator - the hand in this case.
A reasonable approach could be to use a textured CAD model to track the hand, but the design of REEM does not have any textures on the body by default. To overcome this problem the marker-based approach was selected. Augmented Reality applications already feature marker-based detectors and trackers, so it is worthwhile to test them for tracking a robot manipulator.


5.2 AR Toolkit

As its name indicates, the Augmented Reality Toolkit [1] was designed to support applications implementing augmented reality. It is a widely known and supported open-source project. It provides marker designs and software to detect these markers and estimate their pose in 3D space, or to compute the viewpoint of the user.

Figure 5.1: ARToolkit markers


The functionality is implemented using edge- and corner-detection techniques. A marker is defined by its black frame, while the inside of the frame serves as the identifier of the marker and as the primary indicator of orientation. Detection speed is increased beyond that of a usual CPU-based implementation by the use of OpenGL and the GPU. Despite being faster than the usual CPU-based implementations, using the GPU can also cause problems when the target platform does not have such a unit or when it is being used exclusively by other components.
The AR Toolkit is already available as a ROS wrapper, so it is straightforward to integrate with a robot running ROS.


Figure 5.2: ARToolkit in action. (a) Using an ARToolkit marker attached to an object in the Gazebo simulator to test against BLORT; (b) ARToolkit marker on the hand of REEM.


AR Toolkit gives the user freedom by using template matching (Section 4.5.1) for the center of the markers, meaning that the inside of the marker can be customized. This flexibility hurts detection, because pre-defined patterns can be better optimized both for speed and for detection quality (precision, success rate, ambiguity). The minimum size of the printed marker that still worked on REEM was 7x7 cm, which does not fit into a smooth design.
ARToolkit was the first library used for tracking the hand of the REEM robot, but it soon turned out that it has major problems with light changes. Because of the above reasons the need for a different approach emerged.


5.3 ESM

ESM is a completely custom pattern matching technique that stands on solid theoretical ground thanks to a minimization method developed specifically for it. Unfortunately it can only track the target, but under the right circumstances it achieves high precision while remaining adaptive to light-source changes.
ESM was tested in the Gazebo simulator and also with a real webcam, following the company logo of PAL Robotics. Figure 5.3 shows screenshots of the tracking tests and the target pattern.

Figure 5.3: Tests using the ESM ROS wrapper. (a) Target pattern: the logo of PAL Robotics; (b) used in the Gazebo simulator; (c) used with a webcamera.


Unfortunately, during these tests it turned out that the circumstances mentioned before are really strict regarding continuity: the tracked marker can only move tiny distances between two consecutive images, so a camera with a very high frame rate is required. While such frame rates are supported by the cameras of REEM, it is undesirable to spend a large portion of the computation time on capturing images from them.
Another problem is that ESM only provides a homography relative to the previous pattern, which lives in the space of image points, not in 3D.

Web page: [3]
Article: Benhimane and Malis [6]
Video: http://www.youtube.com/watch?v=oN3sVTwNCBg

5.4 Aruco

Even though the Aruco library matches most Augmented Reality libraries in functionality, it differs in implementation and application aspects. The markers used by Aruco might seem similar to the ones seen previously, but they differ in the definition of the inner side of the marker: it is a 5x5 grid made up of black and white squares. These patterns encode 1024 different numbers using a modified Hamming code, which provides error detection as well as a way to measure the distance between marker codes: the Hamming distance. Knowing the distance between markers makes it possible to select the markers most distant from each other, minimizing the number of false or uncertain detections.
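The selection idea can be sketched in a few lines. The 25-bit strings stand for the 5x5 marker grids; these particular bit patterns are made up for the example and are not Aruco's actual codes for ids 300 and 582:

```python
# Marker selection by Hamming distance (illustrative codes, not Aruco's).
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

codes = {
    300: "1010001110101010100110010",
    301: "1010001110101010100110011",  # one bit away from 300: easy to confuse
    582: "0101110001010101011001101",
}
# Select the pair of markers that differ in the most bits.
pairs = [(i, j) for i in codes for j in codes if i < j]
best = max(pairs, key=lambda p: hamming(codes[p[0]], codes[p[1]]))
```

Markers 300 and 582 differ in every bit, so confusing one for the other would require many simultaneous detection errors; 300 and 301 differ in a single bit and should not be used together.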

Figure 5.4: Example markers of Aruco. (a) id 300; (b) id 582.


A significant addition of Aruco compared to other Augmented Reality libraries is the support of marker boards, allowing users to use several markers that define the same pose, bringing redundancy and with it more robustness and precision to the detection system. When marker boards are used, the unsuccessful detection of single markers (even several of them) is no problem.
Numerous configurable techniques are used during the detection process. The most promising among them is the one introduced by Otsu, which provides adaptive binary thresholding, making the detection more robust to changes in light. Otsu's thresholding is applied to the grey-converted input image to speed up the process and also to increase precision. The core of the method is an optimization problem: given an image histogram, the two most separable classes have to be found. After the classes have been found, classic thresholding takes place, where the elements belonging to the bottom class are turned black while the other elements are turned white. This procedure is highly advantageous with the black and white markers used by Aruco.


Figure 5.5: Otsu thresholding example. (a) Original image; (b) after thresholding; (c) histogram with the computed dividing line.


More about ArUco can be found on the web page: [2]

Figure 5.6: Video of Aruco used for visual servoing. Markers are attached to the hand and to
the target object in the Gazebo simulator.
http://www.youtube.com/watch?v=sI2mD9zRRw4

5.5 Application examples

The implemented Aruco ROS package can from now on be used for different robot tasks that depend on 3D pose input. Not only the robot hand but also its other parts can be equipped with Aruco markers, so robots can locate each other by their marked regions; markers could also serve on self-charging stations, helping the robot to execute a safe approach. Since feature-based pose estimation will always be slower than the marker-based approach, a robot-prepared kitchen could be made by marking all important objects with Aruco markers whose IDs are linked to the robot's database. A faster system could be implemented this way.

Figure 5.7: Tests done with Aruco. (a) Aruco in the Gazebo simulator attached to the hand of the REEM model; (b) real scene with the real REEM robot and an Aruco marker on the hand.

5.6 Software

The Aruco ROS nodes are planned to be released after verification by the Aruco authors.

5.7 Hardware requirements

There are no special hardware requirements for these nodes.


Chapter 6
Experimental results for visual pre-grasping

6.1 Putting it all together: visual servoing architecture

All components were designed with the visual servoing architecture in mind. Figure 6.1 shows the final implemented structure of the general architecture presented in Section 1.8. All the software produced in Chapters 3, 4 and 5 was integrated into this architecture.

Figure 6.1: Putting it all together: visual servoing architecture


Grasping can now be done given a visual servoing controller. At the time of this work the visual servoing controller was being developed by another intern; it was this controller that drove the manipulator of the robot during the experiments.


6.2 Tests on the REEM RH2 robot

At the end of my internship at PAL Robotics several experiments were done to validate this approach integrated with the visual servoing controller. Results showed that the system is capable of running at a relatively fast speed given that - except for minimal ones - no deep optimizations were done. The detection time averaged between 3 and 5 seconds in a cluttered environment with the object often partially occluded. All final experiments were repeated with the Pringles container and the juicebox.

Figure 6.2: A perfect result with the juicebox (a)-(e)


Some experiments failed because the controller moved the arm to a position where the marker was no longer visible or detectable. In other cases the depth component of the pose estimate was not precise enough, causing the manipulator to push the object instead of positioning it inside the gripper.


Figure 6.3: An experiment gone wrong (a)-(e)


The different launch modes of BLORT (tracking, singleshot) allow for different use-cases. In the experiment shown in Figure 6.4 I tested how well the tracker behaves with the visual servoing controller. The last sub-figure shows that even after the object was moved when the hand was almost at the goal, the controller moved the hand to the new goal pose.

Figure 6.4: An experiment where tracking was tested (a)-(f)


6.3 Hardware requirements

The hardware requirements of the integrated solution are the sum of the requirements of the ROS nodes introduced in Chapters 3, 4 and 5.
Experiments were done on several different machines.

Desktop:
Intel Xeon Quad-Core E5620 2.4GHz
4 GB DDR3 memory
NVidia GeForce GTX560
Ubuntu Linux 10.04 Lucid Lynx

Laptop1:
Intel Core2Duo 2.2GHz
4 GB DDR2 memory
Intel Graphics Media Accelerator X4500MHD
Ubuntu Linux 11.04 Natty Narwhal

Laptop2:
Intel Core i7 2.6 GHz
8 GB DDR2 memory
NVidia GeForce GT 650M
Ubuntu Linux 10.04 Lucid Lynx

Inner computer of the REEM robot:
Intel Core2Duo 2.2 GHz
Ubuntu Linux 10.04 Lucid Lynx


Chapter 7
Conclusion

7.1 Key results

The goals defined at the beginning of the work were all reached by the end of my internship at PAL Robotics.
First I gathered information about existing object detection techniques and software and classified them by their principal attributes. After the first survey was done I selected the best candidates for the task and further analyzed them by running demos and tests. I integrated one chosen software package with the REEM robot and ran experiments on it. In order to increase the speed of the software, parameters had to be fine-tuned, and I also introduced a new way to increase the speed of the system by segmenting the images. These segmentation nodes were implemented in a general way so that other packages relying on image processing can also benefit from them. As a result, REEM is now able to estimate the pose of a trained object in common kitchen or home scenes.
Given a working pose estimation node, only a reliable hand tracker was needed. Using the information gathered during the survey work and additional advice from Jordi, I tested 3 different packages to see which one is best for tracking the hand of the REEM robot. It turned out that the Aruco library is capable of doing this job reasonably fast and accurately. After consulting with the author of Aruco I created a ROS package for it and used it in experiments to accomplish visual pre-grasping poses. REEM is now able to track its own hand using vision and, by using it in a visual servoing architecture, it is able to move its hand into a grasping position.
While some tasks, such as tracking the hand, were easier to solve with existing software, the parts regarding object detection were much harder to deal with. The survey work was really interesting and I learned a lot about the field in general during those weeks. Choosing BLORT was the best choice at that time. I consulted several times with fellow MSc students


from the Universitat Politècnica de Catalunya's Institute of Robotics and Industrial Informatics working on similar tasks for the RoboCup@Home competition (http://www.ai.rug.nl/robocupathome/, last accessed 2012.12.), only to find out that they also had a hard time finding an approach fulfilling all of the requirements. They used different software but a broadly similar approach to solve their task; however, they had to attach a Kinect sensor to the head of REEM, which in my case was not possible.

7.2 Contribution

Most of the work done during this thesis was given back to the community. This section summarizes the contributions to the field.
I kept a daily blog of my work, which can be accessed at http://bmagyaratpal.wordpress.com/ (last accessed 2012.12.). It can be useful to anyone working on similar problems.
All released software can be found in the github repository of PAL Robotics: https://github.com/pal-robotics (last accessed 2012.12.).
BLORT links to documentation and tutorials:
BLORT stack: http://ros.org/wiki/perception_blort
blort package: http://ros.org/wiki/blort
blort_ros package: http://ros.org/wiki/blort_ros
siftgpu package: http://ros.org/wiki/siftgpu
Training tutorial: http://www.ros.org/wiki/blort_ros/Tutorials/Training
Track and detect tutorial: http://www.ros.org/wiki/blort_ros/Tutorials/TrackAndDetect
How to tune? tutorial: http://www.ros.org/wiki/blort_ros/Tutorials/Tune
Image segmentation nodes: http://www.ros.org/wiki/pal_vision_segmentation
As an additional result, several bug reports and suggestions were submitted during the thesis work. These are:
dynamic reconfigure ROS package bug report
odufinder ROS package bug report
BLORT library bug report and implementation details
ROS wiki bug report
Gazebo simulator image encoding bug fix
research on TOD and its current state
research on WillowGarage ECTO and its current state
several questions asked and answered at http://answers.ros.org/ (last accessed 2012.12.)

7.3 Future work

As with most software developed for a thesis, this work could also be further expanded in several directions.
One major feature could be to provide more flexible GPU usage with OpenGL, CUDA, or OpenCL implementations.
Further decrease the number of features extracted by the detector module of BLORT to gain speed, increase detection confidence and decrease ambiguity.
A smart addition would be to block the detection of textureless object faces, such as the bottoms of juice boxes; this was the reason for most of the failed detections.
The BLORT library itself could be further optimized and refactored to provide a convenient way for future expansion.
Use a database to store learned object models of BLORT. This could also be used to interface with other object detection systems.
Use a database designed for grasping to bring this work forward. The database entries should have grasping points marked for each object so the robot can grasp it where it is best.


Chapter 8
Bibliography
[1] AR Toolkit. http://www.hitl.washington.edu/artoolkit/. Accessed: 22/08/2012.
[2] ArUco. http://www.uco.es/investiga/grupos/ava/node/26. Accessed: 22/08/2012.
[3] ESM Software Development Kit. http://esm.gforge.inria.fr/. Accessed: 22/08/2012.

[4] Julius Adorf. Object detection and segmentation in cluttered scenes through perception
and manipulation, 2011.
[5] Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. Surf: Speeded up robust features. 3951:404-417, 2006.
[6] S. Benhimane and E. Malis. Homography-based 2d visual tracking and servoing. International Journal of Robotic Research (Special Issue on Vision and Robotics joint with the International Journal of Computer Vision), 2007.
[7] S. Benhimane and E. Malis. Homography-based 2d visual tracking and servoing. International Journal of Robotic Research (Special Issue on Vision and Robotics joint with the International Journal of Computer Vision), 2007.
[8] S. Benhimane, A. Ladikos, V. Lepetit, and N. Navab. Linear and quadratic subsets for
template-based tracking. IEEE Computer Society Conference on Computer Vision and
Pattern Recognition, 2007.
[9] G. Bradski. The OpenCV Library. Dr. Dobb's Journal of Software Tools, 2000.
[10] Dmitry Chetverikov. Basic algorithms of digital image processing, slides of course. ELTE.


[11] Paolo Cignoni, Massimiliano Corsini, and Guido Ranzuglia. Meshlab: an open-source 3d mesh processing system. ERCIM News, 2008(73), 2008. URL http://ercim-news.ercim.eu/meshlab-an-open-source-3d-mesh-processing-system.
[12] S.A. Dudani. The distance-weighted k-nearest-neighbor rule. Systems, Man and Cybernetics, IEEE Transactions on, (4):325-327, 1976.
[13] Martin A. Fischler and Robert C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381-395, 1981.
[14] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design patterns: elements of reusable object-oriented software. Addison-Wesley Professional, 1995.
[15] Corey Goldfeder, Matei Ciocarlie, Hao Dang, and Peter K. Allen. The columbia grasp
database. In IEEE Intl. Conf. on Robotics and Automation, 2009.
[16] S. Hinterstoisser, V. Lepetit, S. Ilic, P. Fua, and N. Navab. Dominant orientation templates
for real-time detection of texture-less objects. 2010.
[17] S. Hinterstoisser, S. Holzer, C. Cagniart, S. Ilic, K. Konolige, N. Navab, and V. Lepetit.
Multimodal templates for real-time detection of texture-less objects in heavily cluttered
scenes. 2011.
[18] S. Holzer, S. Hinterstoisser, S. Ilic, and N. Navab. Distance transform templates for object
detection and pose estimation. 2009.
[19] A. Ladikos, S. Benhimane, and N. Navab. A real-time tracking system combining
template-based and feature-based approaches. In International Conference on Computer
Vision Theory and Applications, 2007.
[20] A. Ladikos, S. Benhimane, M. Appel, and N. Navab. Model-free markerless tracking
for remote support in unknown environments. In International Conference on Computer
Vision Theory and Applications, 2008.
[21] A. Ladikos, S. Benhimane, and N. Navab. High performance model-based object detection
and tracking. In Computer Vision and Computer Graphics. Theory and Applications,
volume 21 of Communications in Computer and Information Science. Springer, 2008.
ISBN 978-3-540-89681-4.
[22] Robert Laganière. OpenCV 2 Computer Vision Application Programming Cookbook. Packt Publishing, 2011. ISBN 1849513244.


[23] David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60:91-110, 2004. ISSN 0920-5691. URL http://dx.doi.org/10.1023/B:VISI.0000029664.99615.94.
[24] Marius Muja, Radu Bogdan Rusu, Gary Bradski, and David Lowe. Rein - a fast, robust,
scalable recognition infrastructure. In ICRA, Shanghai, China, 09/2011 2011.
[25] T. Mörwald, M. Zillich, and M. Vincze. Edge tracking of textured objects with a recursive particle filter. In 19th International Conference on Computer Graphics and Vision (Graphicon), Moscow, pages 96-103, 2009.
[26] T. Mörwald, J. Prankl, A. Richtsfeld, M. Zillich, and M. Vincze. Blort - the blocks world robotic vision toolbox: best practice in 3d perception and modeling for mobile manipulation. In conjunction with ICRA 2010, 2010.
[27] David Nister and Henrik Stewenius. Scalable recognition with a vocabulary tree. In CVPR - Computer Vision and Pattern Recognition, pages 2161-2168. IEEE, 2006.
[28] N. Otsu. A threshold selection method from gray-level histograms. Automatica, 11(285-296):23-27, 1975.
[29] Morgan Quigley, Ken Conley, Brian P. Gerkey, Josh Faust, Tully Foote, Jeremy Leibs,
Rob Wheeler, and Andrew Y. Ng. Ros: an open-source robot operating system. In ICRA
Workshop on Open Source Software, 2009.
[30] A. Richtsfeld and M. Vincze. Basic object shape detection and tracking using perceptual organization. In International Conference on Advanced Robotics (ICAR), pages 1-6, 2009.
[31] A. Richtsfeld, T. Mörwald, M. Zillich, and M. Vincze. Taking in shape: Detection and tracking of basic 3d shapes in a robotics context. In Computer Vision Winter Workshop (CVWW), pages 91-98, 2010.
[32] Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. Orb: An efficient
alternative to sift or surf. In International Conference on Computer Vision, Barcelona,
11/2011 2011.
[33] Radu Bogdan Rusu, Gary Bradski, Romain Thibaux, and John Hsu. Fast 3d recognition and pose using the viewpoint feature histogram. In Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 10/2010 2010.
[34] Alexander Shishkov and Victor Eruhimov. Textured object detection. URL http://www.ros.org/wiki/textured_object_detection.

[35] Bruno Siciliano, Lorenzo Sciavicco, Luigi Villani, and Giuseppe Oriolo. Robotics: Modelling, Planning and Control. 2009.
[36] Changchang Wu. SiftGPU: A GPU implementation of scale invariant feature transform
(SIFT). http://cs.unc.edu/ccwu/siftgpu, 2007.
[37] M. Zillich and M. Vincze. Anytimeness avoids parameters in detecting closed convex
polygons. In The Sixth IEEE Computer Society Workshop on Perceptual Grouping in
Computer Vision (POCV 2008), 2008.
[38] Oliver Zweigle, Rene van de Molengraft, Raffaello d'Andrea, and Kai Haussermann. Roboearth: connecting robots worldwide. In Proceedings of the 2nd International Conference on Interaction Sciences: Information Technology, Culture and Human, ICIS '09, pages 184-191, New York, NY, USA, 2009. ACM. ISBN 978-1-60558-710-3. doi: 10.1145/1655925.1655958. URL http://doi.acm.org/10.1145/1655925.1655958.
[39] Éric Marchand, Fabien Spindler, and François Chaumette. ViSP: A generic software platform for visual servoing.


Appendix A
Appendix 1: Deep survey tables

These tables were prepared based on the tables in Appendix B. The techniques listed here were further analyzed and evaluated.


RoboEarth
packages

fast template detector


Object of daily use
finder

BIGG detector
VHF + ReIn

3
4

Stefan Hinterstoisser,
Holzer: LINEMOD

BLORT - Blocks
World
Robotic
Vision Toolbox

Object recognition,
TOD, Ecto

ROS

Name
ViSP Model based
tracker

Test
1

Yes

Yes

Yes

Yes
Yes

Yes

Detection
Yes

Yes

Yes

Yes

No
No

Yes

Pose
Yes

OpenCV implementation (work


in progress)
standalone native
C++, OpenGL,
lots of legacy
code
...

ROS package

ROS package
ROS package

ROS stack

Implementation
ViSP ros package

...

monocular

monocular

stereo (or RGB-D)

monocular
monocular

kinect(detect, seems
compulsory
for
recording), monocular (detect)

Sensor required
monocular

Table A.1: Filtered deep survey part 1

No relevant documen- none


Yes
tation.
Summary:
http://answers.ros.org/question/29357/searchingfor-tod-ending-up-at-ecto

It looks promising, produces


quite valuable output unlike
the others.

Experiences
Good and fast but will require some further work.
Tends to get stuck on strong
edges.
The detection rate is too
poor. 2D detection is as
good as 3D for textured objects. For untextured neither
works.
none
False detection is too high,
the code is incomplete, unoptimized, and has memory
leaks. No relevant output
published.
Ferran: dropped it because
of very high false detection
rate, seems almost random
Still waiting for it.

...

10-20 HZ (sift, gpu)

none

none

none
4-6
Hz
(republishing
the
input image topic on
object found

10-11 Hz

Speed during test


30 Hz

...

edge tracking, sift,


gpu, CAD

ReIn,
BiGG(monocular),
VHF(point cloud)
related to DOT,
LINE-2D, LINE-3D

DOT
vocabulary tree, sift,
local image regions,
DOT

recognition, kinect

Technique keywords
CAD, edge tracking


http://www.irisa.
fr/lagadic/visp/
computer-vision.html
http://www.ros.org/
wiki/roboearth

http://ros.org/
wiki/fast_template_
detector
http://ros.org/wiki/
objects_of_daily_use_
finder

(meeting notes:
look for
Maria from 2011.07.) http:
//pr.willowgarage.
com/wiki/
OpenCVMeetingNotes

http://www.acin.
tuwien.ac.at/?id=290
...

http://www.ros.org/
wiki/bigg_detector

Link

Test

...

printed
plate

tem-

Extendable
how
Adding different CAD
models
Using
RoboEarths
database

...

online

...

CAD

Adding new
images
to
image data
folder, offline
training, 5Hz
640x480
offline(manually training
selected 3D
bounding box
or segmented
point cloud)
, models can
be stored in
database
online
Printed template

offline

online

online (record
mode
using
printed
templates)

Type of learning
offline

Table A.2: Filtered deep survey part 2

Short desc:
- LINE-MOD: Currently being refactored (or reimplemented) into OpenCV by Willow Garage. This technique can detect multiple objects that have been learned beforehand. To detect an object under a full coverage of viewpoints (360 degrees of tilt rotation, 90 degrees of inclination and in-plane rotations of +/- 80 degrees, with scale changes from 1.0 to 2.0), usually fewer than 2000 templates are needed. Weak spot: motion blur.
- BiGG detector: BiGG stands for Binary Gradient Grid. A faster implementation is BiGGPy, where the matching step at the end is replaced by a pyramid matching method. In the related paper a combination of BiGG and VFH, built with ReIn, yields reliable results.
- Objects of daily use finder: A general framework for detecting objects; its database is pre-built with often-used kitchen objects.
- ViSP model-based tracker: This tracker works with CAD models (VRML format) and provides the location and pose of the followed object. A tracker tracks one object at a time.
- RoboEarth: RoboEarth aims at creating a rich database of knowledge useful for robots. It uses ROS as the platform of its work and provides packages with which models can be downloaded from the database, or recorded using a printable marker template and uploaded. Relies on the online database.
- fast template detector: An implementation of DOT by Holzer, without pose estimation.
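The LINE-MOD/DOT descriptions above hinge on binarized gradient-orientation templates that are compared with bitwise operations, which is why thousands of templates stay cheap to match. A minimal sketch of that idea, with an assumed 8-bin orientation quantization and a toy 4-cell grid (not the actual TUM or OpenCV implementation):

```python
# Sketch of DOT-style matching: each grid cell stores a bitmask of the
# quantized gradient orientations present in that cell; a template is
# compared against the image with a bitwise AND per cell, so no
# floating-point work is needed.  Encoding details are illustrative.

def encode_cell(orientations, bins=8):
    """OR together one bit per quantized orientation (0..bins-1)."""
    mask = 0
    for o in orientations:
        mask |= 1 << (o % bins)
    return mask

def similarity(template, image_cells):
    """Count grid cells whose orientation sets overlap (bitwise AND)."""
    return sum(1 for t, i in zip(template, image_cells) if t & i)

template = [encode_cell(c) for c in ([0, 1], [4], [2, 3], [7])]
scene    = [encode_cell(c) for c in ([1], [4, 5], [6], [7, 0])]
print(similarity(template, scene))  # cells 0, 1 and 3 overlap -> 3
```

Because each cell is a single integer, a detector can slide a template over the image and score it with a handful of AND operations, which is what makes the "fewer than 2000 templates" regime real-time.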

Table A.3: Filtered deep survey part 3

Related papers: http://campar.cs.tum.edu/pub/hinterstoisser2011linemod/hinterstoisser2011linemod.pdf and http://campar.cs.tum.edu/pub/hinterstoisser2011pami/hinterstoisser2011pami.pdf (LINE-MOD); http://www.willowgarage.com/sites/default/files/icra11.pdf, http://www.ais.uni-bonn.de/holz/spme/talks/01_Bradski_SemanticPerception_2011.pdf and http://www.cs.ubc.ca/lowe/papers/11muja.pdf (BiGG detector); http://www.vis.uky.edu/dnister/Publications/2006/VocTree/nister_stewenius_cvpr2006.pdf (objects of daily use finder); http://www.irisa.fr/lagadic/publi/all/all-eng.html (ViSP model-based tracker); same as DOT (fast template detector); none (RoboEarth)

Data required: CAD, initial pose, 2D (ViSP); 3D (record), 2D (detect) (RoboEarth); 2D + 3D; 2D; 3D; 2D, 3D

Test: 1; 3; 4; 5; 6

Textured / Textureless: Yes, Yes, No, Yes (trees), Yes, Yes, No, Yes, Yes, No, Yes

Library used: OpenCV, ViSP; OpenCV, SiftGPU; ReIn, OpenCV; OpenCV; Qt interface for database apps, rviz for topics, PCL visualization; OpenCV, PCL (with a Java dependency in the ontology part)

Visualization: Yes

Output: model (record), matched point cloud (Kinect detect), detected object name and pose; best matching template on a ROS topic; partial detection ROI and pose; pose; pose

License: BSD; BSD; BSD; LGPL; BSD; BSD, GPL, LGPL; GPL, BSD

Last activity: 2010; 2011; 2012; 2010; 2012; 2011; 2011

Video: http://www.youtube.com/watch?v=UK10KMMJFCI

Appendix B
Appendix 2: Shallow survey tables
These tables were prepared to help evaluate the available scientific works related to grasping and tabletop manipulation. The assessments here are not final; they served as a first pass for the deep survey.


Table B.1: Wide shallow survey part 1

Name: Objects of daily use finder
Link: http://ros.org/wiki/objects_of_daily_use_finder; http://answers.ros.org/question/388/household_objects_database-new-entries
Type: Framework for recognition
Technique keywords: vocabulary tree, SIFT, local image regions
Short desc: A general framework for detecting objects; its database is pre-built with often-used kitchen objects.

Name: RoboEarth ROS packages
Link: http://www.ros.org/wiki/roboearth
Type: Framework for recognition and object model creation
Technique keywords: recognition, Kinect
Short desc: RoboEarth aims at creating a rich database of knowledge useful for robots. It uses ROS as the platform of its work and provides packages with which models can be downloaded from the database, or recorded using a printable marker template and uploaded. Relies on the online database.

Name: BLORT
Link: http://users.acin.tuwien.ac.at/mzillich/?site=4
Type: Framework for simplified recognition
Technique keywords: 3D SIFT, RANSAC, edges as features, GPU processing, particle filtering
Short desc: A framework for recognizing objects that can be described as types of blocks. It can track objects and learn texture by fitting a simple block (cuboid, cylinder, cone, sphere) to an object. A strong constraint is that the objects must be simple; no irregularly formed or deformable objects are considered. The method seems robust against occlusion and background clutter.

Name: Willow Garage Ecto
Link: http://ros.org/wiki/object_recognition; http://ecto.willowgarage.com; https://github.com/wg-perception/object_recognition_ros_server (does not really seem stable, no docs)
Type: Framework for perception; seems to be much like a general framework for processing
Technique keywords: cells, non-cyclic graph, typed edges; object recognition: bag-of-features representation
Short desc: An Ecto system is made of cells with shared memory which form a non-cyclic graph; the computation proceeds along the graph. An Ecto graph can be compiled into threaded code. Ecto is quite abstract and the ROS package is undocumented; development has also been in beta since August 2011. More: http://ecto.willowgarage.com/releases/bleedingedge/ecto/motivation.html

Name: Hinterstoisser: DOT
Link: http://campar.in.tum.de/Main/StefanHinterstoisser
Type: Algorithm
Technique keywords: similar to a HoG-based representation, template matching, low-texture objects
Short desc: Needs strong gradients. Can learn 3D appearances using a printed pattern on which the target object is placed. Really fast, uses bitwise operators. Has a really low false-positive rate and is robust to occlusion. Tracking is not continuous.

Name: ViSP model-based tracker
Link: http://www.irisa.fr/lagadic/visp/computer-vision.html; http://tw.myblog.yahoo.com/stevegigijoe/article?mid=275&prev=277&next=264 (how to get it to work on Linux)
Type: Algorithm
Technique keywords: CAD, edge tracking
Short desc: This tracker works with CAD models (VRML format) and provides the location and pose of the followed object. A tracker tracks one object at a time.

Name: Hinterstoisser: Vision-targeted CAD models
Link: http://campar.in.tum.de/Chair/ProjectComputerVisionCADModel
Type: Algorithm
Technique keywords: natural 3D markers (N3M)
Short desc: This technique requires a CAD model of the target object and does offline training on it to choose the best N3Ms that will be used during tracking.
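The cell-graph idea behind Ecto, cells with shared data forming a non-cyclic graph along which the computation proceeds, can be imitated in a few lines. The scheduler and cell names below are invented for illustration and are not the Ecto API:

```python
# Toy acyclic cell graph in the spirit of Ecto: each cell is a function
# with named inputs; the scheduler runs the cells in topological order
# and passes outputs along the edges.  All names here are illustrative.
from graphlib import TopologicalSorter

def run_graph(cells, edges, seed):
    """cells: name -> fn(inputs dict) -> value; edges: name -> input names."""
    values = dict(seed)
    for name in TopologicalSorter(edges).static_order():
        if name in cells:
            values[name] = cells[name]({d: values[d] for d in edges.get(name, ())})
    return values

cells = {
    "gray":   lambda ins: [sum(px) // 3 for px in ins["image"]],   # RGB -> intensity
    "thresh": lambda ins: [1 if v > 100 else 0 for v in ins["gray"]],
}
edges = {"gray": ["image"], "thresh": ["gray"]}
out = run_graph(cells, edges, {"image": [(30, 30, 30), (200, 180, 190)]})
print(out["thresh"])  # -> [0, 1]
```

Real Ecto additionally types the edges and can compile the graph into threaded code; the point here is only the dataflow structure.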

Table B.2: Wide shallow survey part 2

Name: Objects of daily use finder. Extendable: Yes (adding new images to the image-data folder; offline training, 5 Hz at 640x480). Implementation: ROS package. Type of learning: offline. Related papers: http://www.vis.uky.edu/dnister/Publications/2006/VocTree/nister_stewenius_cvpr2006.pdf. Data required: 2D.

Name: RoboEarth ROS packages. Extendable: Yes (using RoboEarth's database). Implementation: ROS stack. Type of learning: online (record mode using printed templates). Data required: 3D (record, detect), 2D (detect).

Name: BLORT. Speed: 10 FPS (30 FPS on GPU, GeForce GTX 285). Extendable: Yes (has learning features built in). Implementation: C++; in theory ready to use with ROS. Type of learning: online. Related papers: http://users.acin.tuwien.ac.at/mzillich/files/zillich2008anytimeness.pdf. Data required: 2D.

Name: Willow Garage Ecto. Speed: depends on the size of the graph and the computational cost of the cells. Extendable: Yes (you can build the graph of cells and let it work). Related papers: http://ecto.willowgarage.com/releases/bleedingedge/ecto/overview/cells.html. Data required: depends.

Name: Hinterstoisser: DOT. Speed: 12 FPS on an ordinary laptop (real-time detection). Extendable: Yes (has fast online training features). Implementation: native; originally Windows, but works on Linux. Type of learning: online (record mode using printed templates). Related papers: http://campar.in.tum.de/personal/hinterst/index/project/CVPR10.pdf. Data required: 2D.

Name: ViSP model-based tracker. Extendable: Yes (adding different CAD models). Implementation: C++, ViSP ROS package. Type of learning: none. Related papers: http://www.irisa.fr/lagadic/publi/all/all-eng.html. Data required: CAD, initial pose, 2D.

Name: Hinterstoisser: Vision-targeted CAD models. Speed: 15 FPS (tested on a 1 GHz Centrino). Extendable: Yes (adding different CAD models). Implementation: native OpenGL. Type of learning: offline. Related papers: http://wwwnavab.in.tum.de/Chair/PublicationDetail?pub=hinterstoisser2007N3M. Data required: CAD, 2D.
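The "online (record mode)" entries in the learning column all follow the same loop: match the current view against the stored templates and learn the view as a new template when nothing matches well enough. A hedged sketch of that loop (the similarity measure and the 0.8 threshold are made-up stand-ins):

```python
# Sketch of online template learning as used by record-mode detectors:
# keep a template library; when the best match for the current view
# falls below a threshold, store the view as a new template.  The
# distance function and threshold are illustrative assumptions.

def match(a, b):
    """Similarity in [0, 1]: fraction of equal entries."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def observe(library, view, threshold=0.8):
    best = max((match(t, view) for t in library), default=0.0)
    if best < threshold:          # nothing close enough: learn it
        library.append(list(view))
    return best

library = []
observe(library, [1, 1, 0, 0])    # first view is always learned
observe(library, [1, 1, 0, 1])    # 0.75 < 0.8 -> learned as well
observe(library, [1, 1, 0, 0])    # exact repeat: matched, not learned
print(len(library))  # -> 2
```

Offline learners differ only in when this loop runs: over a recorded dataset (images, bounding boxes, point clouds) instead of the live sensor stream.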

Table B.3: Wide shallow survey part 3

Name: Objects of daily use finder. Textured: Yes (trees). Detection: Yes. Pose: Yes. Library used: OpenCV. Visualization: Yes. Output: name and pose estimate. Sensor required: monocular.

Name: RoboEarth ROS packages. Textured: Yes. Textureless: Yes. Detection: Yes. Pose: Yes, if 3D information is available. Library used: OpenCV, ROS, PCL. Visualization: Yes. Output: location and pose (if 3D information is available). Sensor required: Kinect (detect, record), monocular (detect).

Name: BLORT. Textured: Yes. Textureless: Yes. Detection: Yes. Pose: Yes. Library used: own library. Visualization: own visualization using OpenGL. Output: pose. Sensor required: monocular. Additional comments: dependencies had to be installed manually.

Name: Willow Garage Ecto. Textured: depends. Textureless: depends. Detection: depends. Pose: depends. Visualization: depends (highgui). Output: output of cells. Sensor required: any. Additional comments: both SIFT and DOT, for textured and textureless objects.

Name: Hinterstoisser: DOT. Textured: No. Textureless: Yes (DOT). Detection: Yes. Pose: available. Library used: OpenCV, ESM. Output: best matching template. Sensor required: monocular.

Name: ViSP model-based tracker. Textured: Yes. Textureless: Yes. Detection: Yes. Pose: Yes. Library used: OpenCV, ViSP. Visualization: Yes. Output: pose. Sensor required: monocular.

Name: Hinterstoisser: Vision-targeted CAD models. Textured: Yes. Textureless: Yes. Detection: Yes. Pose: available. Library used: OpenCV, Intel IPP, ESM. Output: pose. Sensor required: monocular.

Licenses across these rows: BSD, modified BSD, GPL and LGPL variants; last activity between 2007 and 2012.

Table B.4: Wide shallow survey part 4

Name: ReIn
Link: http://www.ros.org/wiki/rein
Type: Framework for recognition
Technique keywords: nodelet, computational graph, modular
Short desc: The framework is modular; Attention operator, Detector, Pose estimator and Model filter modules define a computational graph. Replaced with Ecto.

Name: Hinterstoisser: LINEMOD
Link: http://campar.in.tum.de/Main/StefanHinterstoisser
Type: Algorithm
Technique keywords: DOT
Short desc: Currently being refactored (or reimplemented) into OpenCV by Willow Garage. This technique can detect multiple objects that have been learned beforehand.

Name: BIGG detector
Link: http://www.ros.org/wiki/bigg_detector
Type: Algorithm
Technique keywords: ReIn, BiGG, VFH
Short desc: BiGG stands for Binary Gradient Grid. A faster implementation is BiGGPy, where the matching step at the end is replaced by a pyramid matching method. In the related paper a combination of BiGG and VFH, built with ReIn, yields reliable results.

Name: Hae Jong Seo: LARKS
Link: http://www.ros.org/wiki/larks
Type: Algorithm
Short desc: Using locally adaptive regression kernels (LARKs) makes it easy to recognize objects of interest from a single example. It provides efficiency similar to other state-of-the-art techniques but requires no training at all. The emphasis is on generic object detection; it seems more like an information-retrieval tool.

Name: ORB
Type: Algorithm
Short desc: No scale invariance right now. Could be used with BOWImgDescriptorExtractor.

Name: OpenCV BOWImgDescriptorExtractor
Link: http://opencv.itseez.com/modules/features2d/doc/object_categorization.html; http://pr.willowgarage.com/wiki/OpenCVMeetingNotes/Minutes2011-12-06
Type: Simple framework
Technique keywords: descriptor (feature) extraction, descriptor matching, bag of words
Short desc: The descriptor and matching framework provided by OpenCV.

Name: textured object detection
Link: http://ros.org/wiki/textured_object_detection
Type: Algorithm
Technique keywords: detection, training, TOD
Short desc: Training needs pre-given pictures and point-cloud files of each object. Makes use of rosbag to train.

Name: stereo object recognition
Link: http://ros.org/wiki/stereo_object_recognition
Type: Algorithm
Technique keywords: class framework for 3D feature extraction
Short desc: No documentation; used by the textured object detector. Seems dead, no new papers listed since 2009.
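The BOWImgDescriptorExtractor row above (descriptor extraction, descriptor matching, bag of words) reduces to quantizing each local descriptor to its nearest vocabulary word and histogramming the word counts. The 2-D toy descriptors below stand in for real SIFT or ORB output:

```python
# Bag-of-words image description in miniature: assign every local
# descriptor to its nearest "visual word", then describe the image by
# the normalized histogram of word counts.  Vocabulary and descriptors
# are toy values, not real feature output.

def nearest_word(desc, vocab):
    """Index of the vocabulary word with the smallest squared distance."""
    return min(range(len(vocab)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(desc, vocab[i])))

def bow_histogram(descriptors, vocab):
    hist = [0] * len(vocab)
    for d in descriptors:
        hist[nearest_word(d, vocab)] += 1
    total = sum(hist)
    return [c / total for c in hist]

vocab = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]        # 3 "visual words"
descs = [(0.1, 0.0), (0.9, 1.1), (0.2, 0.1), (0.1, 0.9)]
print(bow_histogram(descs, vocab))  # -> [0.5, 0.25, 0.25]
```

The resulting fixed-length histogram is what gets fed to a matcher or classifier, regardless of how many keypoints the image produced.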

Table B.5: Wide shallow survey part 5

Name: Hinterstoisser: LINEMOD. Speed: 10 FPS. Extendable: Yes (printed template). Implementation: OpenCV implementation (work in progress). Type of learning: online. Related papers: http://campar.cs.tum.edu/pub/hinterstoisser2011linemod/hinterstoisser2011linemod.pdf. Data required: 2D, 3D (optional).

Name: OpenCV BOWImgDescriptorExtractor. Extendable: Yes (implementing more extractors and matchers). Implementation: implemented in OpenCV. Type of learning: offline, needs manual interaction. Data required: 2D.

Name: Hae Jong Seo: LARKS. Extendable: one example at a time. Implementation: ROS package. Type of learning: without (training-free). Related papers: http://www.soe.ucsc.edu/milanfar/publications/journal/TrainingFreeDetection_Final.pdf. Data required: 2D.

Name: BIGG detector. Speed: no info. Extendable: Yes (training). Implementation: ROS package. Type of learning: offline (bounding box). Related papers: http://www.willowgarage.com/sites/default/files/icra11.pdf and http://www.cs.ubc.ca/lowe/papers/11muja.pdf. Data required: 2D, 3D.

Name: ORB. Speed: 30 FPS. Implementation: OpenCV implementation available. Related papers: https://willowgarage.com/sites/default/files/orb_final.pdf. Data required: 2D.

Name: textured object detection. Speed: no info. Extendable: Yes (training). Implementation: ROS package. Type of learning: offline, database. Data required: 3D.

Name: stereo object recognition. Implementation: ROS package. Data required: 3D.

Name: ReIn. Speed: depends. Extendable: Yes (implementing ReIn components and plugging them together). Implementation: ROS package. Data required: 2D, 3D.

Table B.6: Wide shallow survey part 6

Name: Hinterstoisser: LINEMOD. Textured: Yes. Textureless: Yes. Detection: Yes. Pose: Yes. Library used: OpenCV. Output: detection, ROI, pose. Sensor required: monocular, stereo (required for pose).

Name: OpenCV BOWImgDescriptorExtractor. Textured: Yes. Textureless: No. Detection: Yes. Pose: No. Library used: OpenCV. Sensor required: monocular.

Name: Hae Jong Seo: LARKS. Textured: Yes. Textureless: No. Detection: Yes. Pose: No. Library used: OpenCV. Sensor required: monocular.

Name: BIGG detector. Textured: Yes. Textureless: Yes. Detection: Yes. Pose: Yes. Library used: ReIn. Output: detections, ROIs. Sensor required: monocular.

Name: ORB. Textured: Yes. Detection: Yes. Library used: OpenCV. Sensor required: monocular.

Name: textured object detection. Textured: Yes. Detection: Yes. Pose: Yes. Library used: OpenCV, PCL. Output: detection, ROI, pose. Sensor required: stereo.

Name: stereo object recognition. Detection: supposed to. Pose: supposed to. Sensor required: stereo.

Name: ReIn. Textured: depends. Textureless: depends. Library used: ReIn. Sensor required: monocular.

License: BSD for every row; last activity between 2010 and 2012.

Table B.7: Wide shallow survey part 7

Name: Viewpoint Feature Histogram cluster classifier
Link: http://ros.org/wiki/vfh_cluster_classifier; http://www.pointclouds.org/documentation/tutorials/vfh_recognition.php
Type: Algorithm
Technique keywords: PFH, detection, pose estimation, tabletop manipulation, mobile, kNN, kd-trees (FLANN), point cloud
Short desc: Designed specifically for tabletop manipulation with one robot hand. Works with clusters of points and does recognition and pose estimation by defining meta-local descriptors. No reflective or transparent objects.

Name: Holzer and Hinterstoisser: Distance transform templates
Type: Algorithm
Technique keywords: distance transform, edge-based templates, template matching, ferns, Lucas-Kanade
Short desc: Uses the Ferns classifier on templates extracted from an object. Applies a distance transform to the images and matches templates in a Lucas-Kanade-like way. Templates are normalized, circle-ized contour patches. Edge-based; needs closed contours. Scales fine. Claims to be better than N3Ms, RANSAC and Ferns.

Name: fast template detector
Link: http://ros.org/wiki/fast_template_detector
Type: Algorithm
Technique keywords: DOT
Short desc: An implementation of DOT by Holzer.

Name: deformable objects detector
Link: http://ros.org/wiki/dpm_detector
Type: Algorithm

Name: tabletop object perception
Link: http://www.ros.org/wiki/fast_plane_detection; http://www.ros.org/wiki/tabletop_object_detector
Type: Pipeline
Technique keywords: tabletop perception, plane detection, object detection
Short desc: Was used with the PR2 for tabletop manipulation.

Name: OpenRAVE
Link: http://openrave.programmingvision.com/en/main/index.html
Type: Framework
Technique keywords: motion planning
Short desc: Old sources can be misleading; OpenRAVE concentrates only on motion planning now.

Name: ecto: object recognition
Link: http://ecto.willowgarage.com/recognition/release/latest/object_recognition/index.html
Type: Framework
Short desc: A collection of Ecto cells that can be used for object-recognition tasks.

Name: BOR3D
Link: http://sourceforge.net/projects/bor3d/
Technique keywords: object recognition in 3D data
Short desc: Work in progress.
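The distance-transform template idea in Table B.7 scores an edge template against a distance transform of the image edges rather than the edges themselves, so small misalignments degrade the score gracefully. A 1-D sketch with toy edge positions (chamfer-style scoring only; the real method adds Ferns and Lucas-Kanade-style refinement):

```python
# Sketch of distance-transform matching: precompute, for every pixel,
# the distance to the nearest image edge; a shifted template then
# scores as the mean distance under its edge pixels, so slightly
# misaligned edges still score well.  1-D toy data for brevity.

def distance_transform(edges, size):
    return [min(abs(i - e) for e in edges) for i in range(size)]

def chamfer_score(template_edges, dt, offset):
    """Lower is better: average distance of the shifted template edges."""
    return sum(dt[e + offset] for e in template_edges) / len(template_edges)

dt = distance_transform(edges={4, 9}, size=12)
template = [0, 5]                      # edge pattern spaced like 4 and 9
scores = {off: chamfer_score(template, dt, off) for off in range(7)}
best = min(scores, key=scores.get)
print(best)  # -> 4 (template edges land exactly on image edges)
```

Note how offsets 3 and 5 still score well (mean distance 1) instead of failing outright, which is the property that makes gradient-descent-style alignment possible.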

Table B.8: Wide shallow survey part 8

Name: Viewpoint Feature Histogram cluster classifier. Extendable: Yes (training). Implementation: ROS-PCL package. Type of learning: offline. Related papers: http://www.willowgarage.com/papers/fast-3d-recognition-and-pose-using-viewpoint-f. Data required: 3D.

Name: Holzer and Hinterstoisser: Distance transform templates. Speed: well below 30 FPS. Related papers: http://ar.in.tum.de/pub/holzerst2009distancetemplates/holzerst2009distancetemplates.pdf. Data required: 2D.

Name: fast template detector. Speed: 6 FPS. Extendable: Yes (training). Implementation: ROS package. Type of learning: offline. Data required: same as DOT.

Name: tabletop object perception. Implementation: ROS stack.

Name: ecto: object recognition. Implementation: Ecto package.
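The VFH cluster classifier above reduces recognition to k-nearest-neighbour search over per-cluster histogram descriptors, with kd-trees (FLANN) for speed. A brute-force stand-in with toy histograms and labels (real VFH descriptors have 308 bins, and the library choice here is illustrative, not the classifier's actual code):

```python
# k-nearest-neighbour classification over descriptor histograms, as a
# stand-in for the FLANN-backed search used by vfh_cluster_classifier.
# Descriptors and labels are toy values.
from collections import Counter

def l1(a, b):
    """L1 (Manhattan) distance between two histograms."""
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_classify(query, training, k=3):
    """training: list of (histogram, label); majority vote of k nearest."""
    nearest = sorted(training, key=lambda t: l1(query, t[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

training = [
    ([0.9, 0.1, 0.0], "mug"), ([0.8, 0.2, 0.0], "mug"),
    ([0.1, 0.8, 0.1], "bottle"), ([0.0, 0.9, 0.1], "bottle"),
    ([0.2, 0.7, 0.1], "bottle"),
]
print(knn_classify([0.85, 0.15, 0.0], training))  # -> mug
```

Swapping the linear scan for a kd-tree changes only the lookup cost, not the result, which is why the classifier can quote FLANN as an implementation detail.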

Table B.9: Wide shallow survey part 9

Name: Viewpoint Feature Histogram cluster classifier. Textured: Yes. Textureless: Yes. Detection: Yes. Pose: Yes. Library used: PCL, OpenCV. Sensor required: RGB-D.

Name: Holzer and Hinterstoisser: Distance transform templates. Textureless: Yes. Sensor required: monocular. Last activity: 2009.

Name: fast template detector. Sensor required: monocular. Last activity: 2010.

Name: BOR3D. Sensor required: stereo. Additional comments: under work to be released.

Licenses across these rows: BSD, LGPL and GPL; last activity between 2009 and 2012.