
Final Year Thesis

Title:
Camera Guided Robot for Real Time Tracking Of Objects in the Real World

BE Electronic & Computer Engineering


College of Engineering and Informatics,
National University of Ireland, Galway

Student:
Joseph Fleury

March 2013

Project Supervisor:
Dr. Martin Glavin

Co-Supervisor:
Dr. Fearghal Morgan

Abstract
There is increasing demand for smarter weeding systems in organic farming. As
people become more concerned with what they are eating, they are looking towards organic food as
a healthy option. This puts pressure on organic farmers to increase their output without pesticides
and chemicals being used in the protection of their crops. Since pesticides can't be used, farmers
must weed their fields manually, which is inefficient, time consuming and expensive (if hiring help).
The solution to this problem is a machine that is capable of identifying plants and their real world
location, so that a mechanical arm can weed around them without harming the plants. The aim of
this project is to demonstrate the use of image processing in the identification, location and tracking
of objects and their real world location in real time.

Statement of Originality
I declare that this thesis is my original work except where stated.
Date:
___________________________
Signature:
___________________________


Acknowledgements
I would like to thank everyone who helped me in this project; without them I would not
have made as much progress. Firstly I would like to thank Dr. Martin Glavin, my project supervisor.
Dr. Glavin kept the project on time and always helped point me in the right direction by giving me
useful advice throughout the duration of the project. I am also very grateful to Dr. Fearghal Morgan
(co-supervisor) for all the advice on the direction to take with my project. I would especially like to
thank the technicians Martin Burke and Myles Meehan, without whose help assembling the robot
would have proved very difficult. They designed the shelving for holding the batteries and boards.


Table of Contents

Abstract
Statement of Originality
Acknowledgements
Table of Contents
Table of Figures
Table of Equations
Glossary
Chapter 1 Background
  1.1 Formation of an Image
  1.2 Image Processing
  1.3 Object Detection Algorithms
  1.4 OpenCV
    1.4.1 OpenCV Libraries
  1.5 cvBlob Library
  1.6 Background of Project
  1.7 Project Objective
Chapter 2 Literature Review
  2.1 Image Processing Techniques
    2.1.1 Object Detection and Extraction
    2.1.2 Kalman Filters
    2.1.3 Calibrating the Camera
    2.1.4 Real World Conversion
  2.2 Robot
    2.2.1 Servo Motors
    2.2.2 Encoders
    2.2.3 Using the Arduino
Chapter 3 Implementation
  3.1 Application
    3.1.1 Background
    3.1.2 Object Detection
    3.1.3 Tracking Objects
    3.1.4 Conversion of Centre Point to Real World Location
    3.1.5 Application & Arduino
    3.1.6 Summary
  3.2 Robot
    3.2.1 Dagu 4 Channel Motor Controller
    3.2.2 Dagu 2 Degrees of Freedom Robotic Arm
    3.2.3 Arduino & Robot
    3.2.4 Summary
Chapter 4 Testing
  4.1 Advantage of Using Kalman Filter Estimates over Measured Points
    4.1.1 Test One - Track Four Circles' Centre Points at 139mm
    4.1.2 Test Two - Track Four Circles' Centre Points at 145mm
    4.1.3 Test Three - Track Four Circles' Centre Points at 189mm
Chapter 5 Conclusion of Results
  5.1 Application & Robot
  5.2 Testing
Chapter 6 Recommendations
  6.1 Frame Rate
  6.2 Camera Vibrations
  6.3 Robotic Movement Accuracy
Bibliography
Appendix A
Appendix B
Appendix C

Table of Figures

Figure 1 - How a digital camera generates a digital picture [18]
Figure 2 - Camera in centre and only weeds the rows [4]
Figure 3 - Manual weed removal [19]
Figure 4 - Circle with path for robot to take around the circumference
Figure 5 - Sequence of objects
Figure 6 - The 8 corners of the cube are associated with red, green and blue [6]
Figure 7 - HSV colour space (Jonathan Sachs, 1996-1999) [6]. Chroma is saturation, value is brightness
Figure 8 - Image converted to HSV colours
Figure 9 - Binary image
Figure 10 - Morphological kernel B demonstrates how a shape is morphed to fit the kernel inside it [5]
Figure 11 - Erosion of dark blue square by a sphere results in the inner aqua square [20]
Figure 12 - The inside dark blue square is dilated by a spherical structuring element. The result is a larger aqua square with rounded edges [21]
Figure 13 - Opening operation on a square using a spherical structuring element. The result is an inside square with rounded edges [22]
Figure 14 - Square polygon with angles shown
Figure 15 - Ellipse with a width A and a height B
Figure 16 - Kalman filter. P is the probability that the measurement is correct, Xt−1 is the original (prior) measurement, U is the change in measurement according to the system, X̄t is the state prediction, Z is a measurement obtained by a different method and Xest is the combination of X̄t and Z (t refers to time)
Figure 17 - Shows the relationship between the focal length of a camera and the error in measurement that a wrong focal length can cause [9]
Figure 18 - Barrel view
Figure 19 - Tangential distortion
Figure 20 - Explains the lens and image plane distortions [23]. Shows the relationship between a pinhole camera model and the digital camera model
Figure 21 - Calibration of camera [10]
Figure 22 - Chessboard movement in the real world in relation to the camera [24]
Figure 23 - Chessboard stationary and camera moving [24]
Figure 24 - Each green point in the image relates to a measured real world coordinate and a measured image coordinate. The origin is located where the x, y and z axes are shown [25]
Figure 25 - Graph of voltage vs. time. It demonstrates the relationship between the average voltage supplied and the pulse width [26]
Figure 26 - An encoder for the Dagu Rover 5 motor
Figure 27 - Quadrature encoder on the left has gaps for light to hit a detector. The right has gaps to break the reflection [11]
Figure 28 - When A is high B is low and vice versa [11]
Figure 29 - Arduino Uno
Figure 30 - Binary image after morphological operations have been carried out
Figure 31 - Bounding box placed around the object. It is also clear that the circle appears elliptical
Figure 32 - Dagu 4 channel motor controller [17]
Figure 33 - Quadrature encoder inputs A and B go through an XOR gate to produce an interrupt for each input. There is one interrupt output pin for each encoder [17]
Figure 34 - Dagu 2 DOF arm
Figure 35 - Robot used to track circles on the ground. Test setup was similar to this
Figure 36 - Measured results for distances of 139mm
Figure 37 - Kalman filter results for distances of 139mm
Figure 38 - Measured results for distances of 145mm
Figure 39 - Kalman filter results for distances of 145mm
Figure 40 - Kalman filter results for distances of 189mm
Figure 41 - Measured results for distances of 189mm
Figure 42 - Type of aluminium bracket. In the case of this project the bracket was much longer [27]
Figure 43 - Table of results
Figure 44 - Final robot assembled and used in testing
Figure 45 - System flow
Figure 46 - System block diagram


Table of Equations

Equation 2-1 - Erosion of A by B
Equation 2-2 - Translation of B by z
Equation 2-3 - Dilation of A by B
Equation 2-4 - Circumference of a circle, where r is the radius
Equation 2-5 - Area of a circle, where r is the radius and A is the area
Equation 2-6 - Formula for the area of an ellipse
Equation 2-7 - Linear function of the prior state Xt−1 and the action But
Equation 2-8 - εt is the Gaussian error
Equation 2-9 - Z̄t is the measurement prediction, C is a function of the prediction, X̄t is the state prediction and εt is the error
Equation 2-10 - K is the Kalman gain
Equation 2-11 - Equations for the radial factors
Equation 2-12 - Equations for the tangential factors [15]
Equation 2-13 - Equation for converting 3D "real world" points into image points [25]
Equation 2-14 - Equation for converting 2D image points into 3D real world points [25]
Equation 3-1 - Where v is the velocity in mm/s, r is the radius of the wheel in mm, RPM is the motor speed in revolutions per minute and 0.10472 ≈ 2π/60 converts revolutions per minute into radians per second


Glossary

FYP - Final Year Project.
CD - Circle Detection.
DSP - Digital Signal Processing.
HSV - Hue, Saturation, Value.
ROI - Region of interest.
DOF - Degrees of freedom.
CPU - Central processing unit.
MHz - Unit of frequency, megahertz.
GHz - Unit of frequency, gigahertz.
XML - Extensible markup language.
PWM - Pulse width modulation.
FET - Field-effect transistor.
V - Volts, a unit of measure for voltage.
RPM - Revolutions per minute.


Chapter 1 Background
Over the years image processing has become a very powerful tool. This chapter provides the
background to the project and outlines the goals that were set and achieved.

1.1 Formation of an Image


A camera works by focusing rays of light onto the camera's sensors by means of a lens. This
induces a voltage on each sensor (see Figure 1) and generates an image matrix of numbers (pixels).
This matrix contains information about the levels of red, green and blue in the image, forming an
RGB image. The colour seen at a point in an image is the resulting combination of these three
colours. To digitise an image, each pixel value is defined by a finite number of bits, which a
computer then processes in order to display it as an interpretable image.

Figure 1 - How a digital camera generates a digital picture [18]

1.2 Image Processing


Image processing is a method of extracting information from an image so that it can be
analysed. Image processing is usually carried out in order to detect an object within an image or
understand patterns of objects that are found within images. It is a form of signal processing using
an image as the input into that processing technique. It can be done using an image or a frame of a
video. The result of the signal processing on that image is either another image, or it can be a set of
characteristics describing the image. The basis for most image processing techniques is treating the
image as a 2-dimensional signal and applying standard signal processing techniques to it.
The term image processing usually refers to digital images [1], but it is possible to carry
out image processing on analogue images as well. An image is the result of capturing the reflected
light from objects in the real world and translating it into two dimensional points on an image
plane. An image can be considered to have a sub-image within itself called a region of interest (ROI).
ROIs reflect the fact that images can be made up of groups of objects, each of which can be a ROI,
making it possible to perform image processing operations on these regions.

1.3 Object Detection Algorithms


Object detection can be done in a large number of ways. The method used depends
on the type of information available about the object being looked for. This project detects objects
based on the colour, size and shape of the object.

1.4 OpenCV
OpenCV is an open source computer vision library developed by Intel. It is a cross platform
library with interfaces in languages such as C, C++ and Python. Its major focus is on real time
applications related to computer vision [2]. As OpenCV is dedicated solely to image processing, it
provides excellent functionality compared to most other tools. OpenCV is a large set of open source
libraries, so a good deal of time must be spent reading the documentation in order to use it
effectively.
1.4.1 OpenCV Libraries
CXCORE
This library contains data structures, matrix algebra, data transforms, object persistence,
memory management, error handling, and dynamic loading of code as well as drawing, text and
basic math [2].
CV
This library contains image processing, image structure analysis, motion and tracking,
pattern recognition and camera calibration [2].
HighGUI
This library contains the user interface (GUI) and image/video storage and recall functions [2].

1.5 cvBlob Library


cvBlob is a computer vision library for detecting connected regions in binary digital images.
cvBlob performs connected component analysis (also known as labelling) and feature extraction [3].
The cvBlob library has features for labelling binary images, feature extraction and basic tracking.
It is easily integrated with OpenCV applications and uses OpenCV at its core.

1.6 Background of Project


The idea of an automatic weeder is not new, and there are a variety of machines
that do this. This project follows on from Robocrop, a vision guidance device [4] which
eliminates weeds between rows of crops (Figure 2). This leaves behind weeds between each crop in
a row, which then have to be removed manually (Figure 3). This is just a sample of the power of
using image processing techniques in weeding. It was used as a reference rather than a starting
point, as the code is not available to the public.

Figure 2 - Camera in centre and only weeds the rows [4]

Figure 3 - Manual weed removal [19]

An article that deals with object identification in an image [5] was chosen as the starting
point for this project, as it deals with object recognition in 3-dimensional and 2-dimensional spaces.
The article goes through the process of deciding the easiest methods to extract the object being
looked for, as well as methods for categorising the characteristics of the objects.

1.7 Project Objective


The objective of this project is to demonstrate the power of image processing in the
identification, tracking and real world location of an object. To achieve this, a robot will be designed
to detect, track and locate objects in a sequence in the real world. The robot should be able to
extract multiple objects in a given frame of video and decide which object is closest. It should then
be able to calculate the real world position of that object and accurately identify the object's centre
point by moving toward the object's real world centre and dropping a marker on the centre.

Figure 4 - Circle with path for robot to take around the circumference.

Initially the robot is to be designed to track just one object. This will be achieved by calibrating the
camera to account for any distortions in the images received from the camera. The frames will then
be processed on a DSP board to give an accurate account of the environment the robot is in. Then
the number of objects is to be increased, placed in a straight line, identified and tracked. The robot
will have to move towards the centre of each object and drop a robotic arm on the object's centre
point. This will be done using a tracking algorithm to prevent the robot from stopping in the case of
a missing object in a sequence (shown in Figure 5).

Figure 5 - sequence of objects

For this project a Pandaboard ES was chosen as the DSP board. For the robot, a Dagu
Rover 5 chassis was chosen, along with a Dagu 4 Channel Motor Controller for driving the motors
and reading the encoders, a Dagu 2 DOF robotic arm for holding the marker and an Arduino for
controlling the robotic movements.

Chapter 2 Literature Review


This chapter reviews image processing techniques and how real world objects can be
detected, tracked and located.

2.1 Image Processing Techniques


Object detection involves a number of different techniques, each with its own
advantages and disadvantages.
2.1.1 Object Detection and Extraction
RGB Colour Detection
A consideration when trying to distinguish between objects in an image is the colour of
the object. To carry on from the discussion in 1.1 Formation of an Image, an RGB image is a
3-dimensional space that has a finite number of bits for representing a pixel in an image. Within
the 3-dimensional space are a fixed number of bits representing the red, green and blue
values of a pixel respectively. It can be pictured as a cube with every possible combination of
colour situated in the cube space (see Figure 6) [6].

Figure 6 - The 8 corners of the cube are associated with red, green and blue. [6]

This means that if a certain colour is being looked for in an image (such as the colour of
an object), a threshold value can be determined for that colour. This would allow an image
containing only objects of the specified colour to be obtained. However there are limitations to
thresholding an RGB image for particular colours, one being the difference in colours between an
indoor image and an outdoor image [6].
Outdoor images contain different levels of RGB to indoor images due to lighting
conditions and the different sources of light. This means that the colour being looked for in an
image can vary by a large amount. Because of this, the threshold range would need to be
calibrated for each possible scene, which is not practical. A better method is to use a HSV
image.

HSV Colour Detection


To make it easier to extract an object, the image can be converted to a HSV image
(Figure 8) and then put through a colour detection algorithm that can deal with different
shades of colour. A HSV image is an image that has been converted from RGB into hue,
saturation and brightness values. As the brightness decreases the colour gets darker, and
eventually becomes black. This results in a cone shaped space for the HSV colours to lie within.

Figure 7 - HSV colour space (Jonathan Sachs, 1996-1999) [6]. Chroma is saturation, value is brightness.

The algorithm for thresholding a HSV image works by defining an upper and lower limit
that the HSV values can lie within. The values that lie inside the range are given a value of 1,
and the values that lie outside that range are given a value of 0. The result is a binary image,
which is a black and white image (1 = white, 0 = black) (Figure 9).

Figure 8 - Image Converted To HSV Colours

Figure 9 - Binary image
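As a rough illustration of the conversion and thresholding steps, a minimal sketch using the 2013-era OpenCV C API might look as follows. The HSV bounds shown are placeholder values, not the calibrated ones used in the project, and in OpenCV the in-range pixels are actually set to 255 rather than 1:

```cpp
#include <opencv/cv.h>
#include <opencv/highgui.h>

// Convert a BGR frame to HSV and threshold it into a binary image.
// The scalar bounds below are hypothetical; real values come from tuning.
IplImage* thresholdFrame(IplImage* frame)
{
    IplImage* hsv    = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 3);
    IplImage* binary = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);

    cvCvtColor(frame, hsv, CV_BGR2HSV);       // RGB (BGR) -> HSV
    cvInRangeS(hsv,
               cvScalar(160, 100, 100, 0),    // lower H, S, V limits (placeholder)
               cvScalar(179, 255, 255, 0),    // upper H, S, V limits (placeholder)
               binary);                       // pixels in range -> 255, else 0

    cvReleaseImage(&hsv);
    return binary;                            // caller releases the binary image
}
```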

Morphological Kernels
For images that are noisy, object enhancement and noise elimination can be achieved
using mathematical morphological kernels, otherwise known as structuring elements [7]. These
kernels are created and then passed over a binary image. They act as a template to judge whether a
white blob of pixels in an image fits a certain shape defined by the kernel. If the morphological
kernel fits the shape of the blob, the object's noisy edges are smoothed out; however if the object
is smaller than or does not fit the kernel, it is discarded by converting the white pixels to black
(Figure 10).

Figure 10 - Morphological kernel B demonstrates how a shape is morphed to fit the kernel inside it. [5]

Morphological Operations
The result produced by carrying out morphological operations depends on the type of
morphological operation carried out on the binary image. There are two fundamental operations in
morphological image processing from which all morphological operations are defined: erosion
and dilation. Since a binary image is an image containing 1s and 0s, it can be viewed as a subset of a
Euclidean space.
Erosion
Erosion of an image by a morphological kernel can be described by Equation 2-1, where A is
a binary image eroded by B, the structuring element,

$$A \ominus B = \{\, z \in E \mid B_{z} \subseteq A \,\}$$
Equation 2-1 - Erosion of A by B

and where Bz is the translation of B by the vector z,

$$B_{z} = \{\, b + z \mid b \in B \,\}$$
Equation 2-2 - Translation of B by z

When the structuring element B has some form of centre, and this centre is located at the
origin of the Euclidean space E, the erosion of A by B can be defined as the locus points reached by
the centre of B as B moves inside A.

Figure 11 - Erosion of dark blue square by a sphere results in the inner aqua square [20]

Dilation
Dilation of an image A by a morphological kernel B can be described by Equation 2-3,

$$A \oplus B = \bigcup_{b \in B} A_{b}$$
Equation 2-3 - Dilation of A by B

If B's centre is at the origin, the dilation of A by B can be defined as the locus of points covered by B
when the centre of B moves inside A. Figure 12 shows the original image A being dilated by B.

Figure 12 - The inside dark blue square is dilated by a spherical structuring element. The result is a larger aqua square with rounded edges. [21]

Opening
An opening operation in terms of image processing is an erosion followed by a dilation. In
other words an opening is a dilation of the erosion of a binary image. This is done to remove noise in
the image. Opening operations remove small objects in the foreground of a binary image and places
then in the background.

Figure 13 - Opening operation on a square using a spherical structuring element. The result is an inside square with rounded edges. [22]


Object Extraction Using the cvBlob Library

Once the morphological operations are complete, the binary image contains only the
objects of concern. This means that all white groups of pixels (blobs) in the binary image are
the objects to extract. The application extracts these blobs using a set of functions from the
cvBlob library.
Blob detection in the cvBlob library is based on a paper by Fu Chang, Chun-Jen Chen and
Chi-Jen Lu called "A linear-time component-labelling algorithm using contour tracing technique" [8].
The algorithm uses a contour tracing technique and identifies and labels the interior of each
component. The library also supports filtering of blobs based on the area of the blob (in pixels). The
centroid of the blob can be found after placing a bounding box around the blob.
Shape Determination
In order to verify the shape of the object, simple geometrical formulas are used.
For example, a square has 4 sides and is a regular polygon. It is regular because a square is a
quadrilateral polygon whose sides are equal in length and whose intersecting sides all meet at
90° (Figure 14). It is therefore simple to determine whether a shape is square.

Figure 14 - Square polygon with angles shown

The same concept is useful when dealing with regular polygons. However a circle is not a
polygon because it is a closed curve. To determine if a shape is circular, the circumference (the
number of pixels in the closed contour) is used. There is a relationship between the circumference
and area of a circle, shown in Equations 2-4 and 2-5.


$$C = 2\pi r$$
Equation 2-4 - Circumference of a circle, where r is the radius

$$A = \pi r^{2}$$
Equation 2-5 - Area of a circle, where r is the radius and A is the area

If a circle's radius is known, it is possible to determine whether a shape is circular by finding the
area of its closed contour and dividing it by πr²; a result close to one indicates a circle. For an
ellipse there is a similar relationship between the area and the two axes (Figure 15). The
relationship is shown in Equation 2-6.

Figure 15 - Ellipse with a width A and a height B.

$$A_{ellipse} = \pi \,\frac{A}{2}\,\frac{B}{2}$$
Equation 2-6 - Formula for the area of an ellipse with width A and height B.

2.1.2 Kalman Filters


During the lifetime of the application, noisy data from the camera can result in
inaccurate measurements. This can be caused by external factors such as scene lighting and
obstruction of the view. To solve this problem and to smooth out the measurements, a Kalman
filter is used. A Kalman filter [9] is a recursive algorithm which starts with initial measurements
and noise coefficients chosen at random.
During each loop, a prediction is made for the current measurement based on the
previous measurement. The Kalman filter is then updated with the current measurements and
makes an estimate of what the next measurement will be.

The Kalman filter's estimated measurement can be considered a weighted combination
of a predicted measurement and some other form of measurement. After this the
filter's error and noise coefficients are updated. Figure 16 describes how the Kalman filter works.

Figure 16 - Kalman filter. P is the probability that the measurement is correct, Xt−1 is the original (prior) measurement, U is the change in measurement according to the system, X̄t is the probability that the measurement is correct after the change by U (the state prediction), Z is a measurement obtained by a different method, and Xest is the combination of X̄t and Z (t refers to time).

Prediction State
The first state in using a Kalman filter for estimating a measurement is the predicted
state X̄t. We start by assuming a linear system of equations. This can be seen in
Equation 2-7, where A and B define the system.

$$\bar{X}_{t} = A X_{t-1} + B u_{t}$$
Equation 2-7 - Linear function of the prior state Xt−1 and the action But

There is also error to account for in this system (Equation 2-8). We treat this error as Gaussian
noise because the system is linear.

$$\bar{X}_{t} = A X_{t-1} + B u_{t} + \varepsilon_{t}$$
Equation 2-8 - εt is the Gaussian error


Measurement State
The second state is the measurement prediction. A measurement can be incorrect due
to the robot slipping on the surface it is on, or due to a distortion in the image itself caused by
the angle of the camera in relation to the ground. The prediction from above needs to be
transformed into a measurement prediction, which will also have some error, again Gaussian in
nature (Equation 2-9).

$$\bar{Z}_{t} = C \bar{X}_{t} + \varepsilon_{t}$$
Equation 2-9 - Z̄t is the measurement prediction, C is a function of the prediction, X̄t is the state prediction and εt is the error.

Estimation State
The final state is the estimated state Xest, which is a linear function of the
predicted state X̄t and the real measurement minus the predicted measurement, multiplied by
the Kalman gain K (Equation 2-10).

$$X_{est} = \bar{X}_{t} + K \,( Z_{t} - \bar{Z}_{t} )$$
Equation 2-10 - K is the Kalman gain

So if the measured estimate is different from the actual measurement, there may have
been an error. This error is corrected by the Kalman gain to get a more accurate estimate.
2.1.3 Calibrating the Camera
Camera calibration is the process of determining the internal factors that affect the
imaging process. These factors can be the focal length of the lens, the skew factor or the lens
distortion of the camera. The motivation for calibrating the camera is to eliminate distortions so
that more accurate results can be obtained.
It is essential to know these factors when converting an image point into a real world
point (discussed in 2.1.4 Real World Conversion). For example, in Figure 17 a line is being viewed by
a camera. If the assumed field of view is wrong, the measured size of the line will be wrong.
Calibration with OpenCV works by taking into account tangential and radial factors [10].


Figure 17 - Shows the relationship between the focal length of a camera and the error in measurement that a wrong focal length can cause. [9]

Radial Factors
For an old pixel at coordinates (x, y) in the input image, its position in the corrected output
image will be (xcorrected, ycorrected). Radial distortion occurs because of the barrel effect shown
in Figure 18.

$$x_{corrected} = x \,(1 + k_{1} r^{2} + k_{2} r^{4} + k_{3} r^{6})$$
$$y_{corrected} = y \,(1 + k_{1} r^{2} + k_{2} r^{4} + k_{3} r^{6})$$
Equation 2-11 - Equations for the radial factors

Figure 18 - Barrel view


Tangential Factors
The tangential equations are needed because of the lenses on the cameras not being parallel
to the image plane. Figure 19 demonstrates this.

$$x_{corrected} = x + [\, 2 p_{1} x y + p_{2} (r^{2} + 2 x^{2}) \,]$$
$$y_{corrected} = y + [\, p_{1} (r^{2} + 2 y^{2}) + 2 p_{2} x y \,]$$
Equation 2-12 - Equations for the tangential factors [15]

Figure 19 - Tangential distortion

Pinhole Camera vs. Digital Camera

Figure 20 - Explains the lens and image plane distortions [23]. Shows the relationship between a pinhole camera model and the digital camera model.

Figure 20 describes the difference between the camera image plane and the equivalent pinhole
model. As can be seen, the pinhole model can be used to model the characteristics of the camera.


Calibration Procedure
OpenCV provides a calibration application for calibration. The calibration procedure is as
follows: The application reads in video frames that contain a chess board (Figure 21). The number of
inner corners per row and column on the chess board is pre-determined and the size of each square
is also known. Then the type of calibration is chosen to be of type Chessboard.

Figure 21 - Calibration of Camera [10]

First, lists of object points and image points are created (the object points are determined
by the size of each square, which is set in the In_VID5 file), with each one of the object points
relating to a different vertex. Once this is complete, a list of corners needs to be created. This
temporarily holds the chessboard's corners for the given frame.

Figure 22 - Chessboard movement in the real world in relation to the camera [24]


Ideally the object points would be the physical points, but measuring the distance from the
camera lens and using the camera as the origin is good enough (see Figure 22). However the OpenCV
application treats the chessboard as stationary and the camera as moving (see Figure 23).

Figure 23 - Chessboard stationary and camera moving [24]

The number of images is the next concern. OpenCV suggests that between ten and twenty
successful frames be used. Each frame is converted to a grey scale image and run
through a chessboard corner detection algorithm. If the corners of the chessboard are detected, the
image coordinates are stored and the corner points are refined: sub-pixel corner positions are
calculated from the grey scale image.

Figure 24 - Each green point in the image relates to a measured real world coordinate and a measured image coordinate. The origin is located where the x, y and z axes are shown. [25]


Once all this is complete, the calibration is carried out. The intrinsic matrix is modified with
the known information. When the object and image points from the application are run through the
camera calibration algorithm, the intrinsic parameters, distortion coefficients and the rotation and
translation matrices are populated. The intrinsic parameters and the distortion coefficients relate to
the camera and lens characteristics. All of this information is stored in an XML file by the application
for reuse in other applications.
2.1.4 Real World Conversion
Once the measurements have been smoothed out by the Kalman filter, and a good
estimate of the camera distortion has been obtained, the final step is to translate the
coordinates to the real world. This is done using the equations relating the 3-dimensional
coordinates to the 2-dimensional image plane and vice versa.

$$s \, m = M \,( R \, P + t ), \qquad m = \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}, \quad P = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$
Equation 2-13 - Equation for converting 3D "real world" points into image points [25]

$$P = R^{-1} \,( s \, M^{-1} m - t )$$
Equation 2-14 - Equation for converting 2D image points into 3D real world points [25]

M is the camera matrix (intrinsic matrix), R is the rotation matrix, t is the translation vector
and s is the unknown scaling factor. Zconst represents the height of the object off the ground; for
this project Zconst is 1, since the circles will not sit perfectly flat on the ground. All of these are
obtained by the calibration in 2.1.3 Calibrating the Camera. Zconst is the only fixed variable.
In order to make this equation work, the real world object points are mapped to the image
points. This helps to calculate the real world relationship. For example if the real world location of
the object is at the origin (0,0), the image coordinates matching that real world point need to be
determined. This is done by setting up the camera similarly to Figure 24.
In the case of this project the camera is much lower to the ground and the units of
concern are millimetres. Even though the robot moves, the camera's view should not change, so
the image point relating to the object point remains unchanged.
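For completeness, the unknown scale s can be recovered because Zconst is known: taking the z (third) component of both sides of Equation 2-14 and rearranging gives

$$s = \frac{Z_{const} + \left[ R^{-1} t \right]_{z}}{\left[ R^{-1} M^{-1} m \right]_{z}}$$

where [·]z denotes the third component of a vector; substituting s back into Equation 2-14 yields the full real world point. (This is the standard rearrangement for back-projecting onto a plane of known height, not a formula quoted from the original text.)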


2.2 Robot
In order to demonstrate how well the application works, a robot is used to locate and mark
the centre of the object. The robot consists of 4 components, the Dagu Rover 5, Dagu 2 DOF arm,
Dagu 4 channel motor controller and an Arduino.
2.2.1 Servo Motors
Servo motors are controlled using pulse width modulation (PWM), a method of controlling
the value of the voltage (and current). The voltage fed to the load (the motor) is controlled by
turning the switch between the supply and the load on and off at a fast rate (see Figure 25). This
determines the speed at which the motors turn.

Figure 25 - Graph of voltage vs. time. It demonstrates the relationship between the average voltage supplied and the pulse width. [26]

2.2.2 Encoders
Quadrature encoders are attached to the motors (Figure 26) in order to determine the speed
of the motors. The encoders count the number of state transitions the motors go through. There are
roughly 333 state transitions in one revolution of the wheel (1000 for 3 revolutions). From this the
RPM of the motors can be determined.

Figure 26 - An encoder for the Dagu Rover 5 motor


They work on the basis that a beam of light hitting a detector gets broken when the motor is
moving (see Figure 27). For every high to low transition, a state has been entered and completed.
Figure 28 shows how using the two light sources and detectors can determine the direction
of movement of the motors. When detector A encounters the beginning of a white strip,
detector B will be in the middle of a white strip. Hence the direction of motion when detector A
gets a pulse can be determined by looking at the status of detector B. [11]

Figure 27 - Quadrature encoder on the left has gaps for light to hit a
detector. The right has gaps to break the reflection. [11]

Figure 28 - When A is high B is low and vice versa [11]


2.2.3 Using the Arduino


The Arduino (Figure 29) consists of an ATmega328 microcontroller, 14 digital I/O pins (2 of
which are interruptible and 6 of which can be used for PWM), 6 analogue input pins and a resonator
that clocks at 16MHz.

Figure 29 - Arduino Uno

The Arduino generates the PWM for the motors depending on the voltage being written
using the analogWrite function. The duty cycle depends on the voltage being output. This can be
calculated using the fact that 255 is the maximum output value and 0 is the lowest; the duty cycle
is then equal to the output voltage divided by the maximum output voltage.
The robotic arm is controlled by the Arduino in two ways: up and down movement. When
the robot is started the arm is placed at a 135° angle so that the marker is not touching the
ground. When the robot has reached its destination (the centre of the circle), the arm is set to a
180° angle so that the marker touches the centre of the circle. This is all done by small increments
in the angle (see the robotic arm controls function on the CD) followed by a time delay, to allow the
servo motor in the arm to reach that angle smoothly (the smoother the change in angle, the smaller
the current spikes).
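A minimal Arduino sketch of this incremental arm movement might look as follows; the servo pin and per-step delay are assumptions, with the real values in the robotic arm controls function on the CD:

```cpp
#include <Servo.h>

Servo arm;                 // servo that raises and lowers the marker arm

void setup() {
    arm.attach(9);         // arm servo signal pin (assumed)
    arm.write(135);        // start with the marker off the ground
}

// Lower the marker onto the circle centre in small increments; the
// delay lets the servo settle, keeping current spikes small.
void lowerArm() {
    for (int angle = 135; angle <= 180; angle++) {
        arm.write(angle);
        delay(15);         // settling time per degree (assumed)
    }
}
```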


Chapter 3 Implementation
This chapter explains how the methods discussed in the literature review are implemented in this
project.

3.1 Application
3.1.1 Background
From previous research into object tracking, it was clear that colour detection is a
key method for detecting an object. Research also showed that Kalman filters are essential for the
project to work, as robots are noisy, inaccurate machines that do not always move in the same
way every time.
The application designed for this project took frames of video from a webcam situated
at the front of a robot. The frames were then processed and a decision was made as to the
location of a red coloured circle on the ground. This information was then sent over a USB
cable to the robot. The application was run on a Pandaboard ES [12] (digital signal processing
board).
3.1.2 Object Detection
HSV Colour Detection
This project was concerned with red circles arranged in sequences on the ground. In order to
detect these circles efficiently it was decided that the images would be converted to HSV. The reason
for using a HSV image was that it was easier to threshold under difficult lighting
conditions than a conventional RGB image.
The hue, saturation and brightness values were found using a separate application written
for the purpose. This application allowed the upper and lower HSV limits to be changed while it ran,
using slider bars. Once a satisfactory result had been obtained, the HSV values were recorded.
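A sketch of such a tuning application using OpenCV C API trackbars is shown below; the window name, camera index and initial values are placeholders, not details from the original application:

```cpp
#include <opencv/cv.h>
#include <opencv/highgui.h>

int hMin = 0,   sMin = 0,   vMin = 0;    // lower HSV limits, adjusted live
int hMax = 179, sMax = 255, vMax = 255;  // upper HSV limits

int main() {
    cvNamedWindow("HSV Tuner", CV_WINDOW_AUTOSIZE);
    cvCreateTrackbar("H min", "HSV Tuner", &hMin, 179, NULL);
    cvCreateTrackbar("S min", "HSV Tuner", &sMin, 255, NULL);
    cvCreateTrackbar("V min", "HSV Tuner", &vMin, 255, NULL);
    cvCreateTrackbar("H max", "HSV Tuner", &hMax, 179, NULL);
    cvCreateTrackbar("S max", "HSV Tuner", &sMax, 255, NULL);
    cvCreateTrackbar("V max", "HSV Tuner", &vMax, 255, NULL);

    CvCapture* cap = cvCaptureFromCAM(0);  // webcam index assumed
    IplImage* frame;
    while ((frame = cvQueryFrame(cap)) != NULL) {
        IplImage* hsv = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 3);
        IplImage* bin = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
        cvCvtColor(frame, hsv, CV_BGR2HSV);
        cvInRangeS(hsv, cvScalar(hMin, sMin, vMin, 0),
                        cvScalar(hMax, sMax, vMax, 0), bin);
        cvShowImage("HSV Tuner", bin);     // watch the binary result live
        cvReleaseImage(&hsv);
        cvReleaseImage(&bin);
        if (cvWaitKey(30) == 27) break;    // Esc quits; record the values shown
    }
    cvReleaseCapture(&cap);
    return 0;
}
```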
These results were used as the inputs into the OpenCV colour detection function cvInRangeS
[13]. This function takes the HSV image and searches for all HSV values that lie within the range. The
result was a binary image in which the red circles of the HSV image appear as white blobs of pixels
(Figure 9).


Choosing a Morphological Kernel


For this project it was clear that morphological kernels would be needed. The reason was
that the image being read from the camera was noisy and false detections of small objects were
occurring. Using morphological operations it was possible to clean up the image and remove the
noise. There were a number of issues to consider when choosing a kernel for performing
morphological techniques, such as the shape and size of the object.
The shape being looked for was a circle on the ground. In an image, however, the
circle appeared elliptical due to the camera's orientation to the ground. The further the circle
was from the robot, the smaller and more elliptical it appeared, so the size of the kernel was
very important. Two different sizes were chosen: a 5x5 elliptical kernel and a 9x9 elliptical kernel.
The 5x5 kernel cleaned up the ellipse's edges, giving the ellipse a more defined shape, while the
9x9 kernel was there to remove as much noise as possible from the image.
Morphological Operations
The morphological operation carried out on each frame was an opening operation. This
operation cleaned up the edges of the ellipse and then enhanced the inside of the blob so that any
black pixels inside the blob were changed to white. This ensured that when the image was passed
through the blob detection algorithm, the entire object was passed successfully. See Figure 30 for
the resulting image.

Figure 30 - Binary image after morphological operations have been carried out.
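A minimal sketch of these opening operations with the OpenCV C API is shown below; the order in which the project applied the two kernels is an assumption here:

```cpp
#include <opencv/cv.h>

// Apply an opening (erosion then dilation) with the two elliptical kernels
// described above: 9x9 to remove noise, 5x5 to tidy the ellipse edges.
void cleanBinaryImage(IplImage* binary)
{
    IplConvKernel* k9 = cvCreateStructuringElementEx(9, 9, 4, 4,
                                                     CV_SHAPE_ELLIPSE, NULL);
    IplConvKernel* k5 = cvCreateStructuringElementEx(5, 5, 2, 2,
                                                     CV_SHAPE_ELLIPSE, NULL);

    cvMorphologyEx(binary, binary, NULL, k9, CV_MOP_OPEN, 1); // remove noise
    cvMorphologyEx(binary, binary, NULL, k5, CV_MOP_OPEN, 1); // smooth edges

    cvReleaseStructuringElement(&k9);
    cvReleaseStructuringElement(&k5);
}
```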


Object Extraction Using cvBlob


The cvBlob library was used to extract all blobs from the image. Each blob was deemed to be
an object of interest. Once the binary image had undergone morphological operations, it was passed
to the cvLabel function [14]. This function looped through the image, extracting every blob and
placing it in a container of blobs.
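A minimal sketch of this extraction step with cvBlob is shown below; the area limits are placeholder values, not the thresholds used in the project:

```cpp
#include <opencv/cv.h>
#include <cvblob.h>

using namespace cvb;

// Label the connected white regions of a binary image and keep only
// plausibly sized blobs.
CvBlobs extractBlobs(IplImage* binary)
{
    IplImage* labelImg = cvCreateImage(cvGetSize(binary), IPL_DEPTH_LABEL, 1);
    CvBlobs blobs;
    cvLabel(binary, labelImg, blobs);     // connected component labelling
    cvFilterByArea(blobs, 500, 1000000);  // drop tiny blobs (limits assumed)
    cvReleaseImage(&labelImg);
    return blobs;                         // caller releases the blobs when done
}
```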
Shape Determination for a Circle & Object Selection
As stated in Morphological Kernels, the shape of the object being looked for in
the image is not circular but elliptical. To ensure that any false objects not removed by the
morphological operations were not tracked, a decision was made as to the shape of each blob.
A bounding box was rendered around the blob using cvRenderBlob, passing
CV_RENDER_BOUNDING_BOX to the function (Figure 31). The box's width and height were then
found, and from these the minor and major axes of the ellipse were found. The contour area was
then found using the cvContourArea function [15]. The area was then divided by the product of the
semi-minor axis, the semi-major axis and π; if the result differed from one by less than a calibrated
error, the blob was determined to be an ellipse.

Figure 31 - Bounding box placed around the object. It is also clear that the circle appears elliptical.

Once a bounding box was drawn and the object was determined to be elliptical, the centre
of the ellipse was found by using the cvRenderBlob function again, this time passing
CV_RENDER_CENTROID. The x and y values for each blob were compared, and the centre closest to
the bottom of the frame was determined to be the closest object to the robot.
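The shape test might be sketched as below, following the area relationship in Equation 2-6; the tolerance is a hypothetical calibrated value and the helper name is illustrative:

```cpp
#include <cmath>
#include <opencv/cv.h>
#include <cvblob.h>

using namespace cvb;

// True if the contour area matches the area of an ellipse whose full axes
// are the bounding box width and height, within a relative tolerance.
bool isEllipse(const CvBlob* blob, double contourArea)
{
    double w = blob->maxx - blob->minx;             // major axis (full width)
    double h = blob->maxy - blob->miny;             // minor axis (full height)
    double expected = CV_PI * (w / 2.0) * (h / 2.0);
    return std::fabs(contourArea - expected) / expected < 0.15; // tolerance assumed
}

// The closest object is then the elliptical blob whose centroid
// (blob->centroid) has the largest y value in image coordinates, i.e. the
// one nearest the bottom of the frame.
```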


3.1.3 Tracking Objects


For tracking objects in real time a Kalman filter was used. This project dealt with a camera
mounted on the front of a robot. When the robot moved, noisy measurements were recorded and
the measured centre point of the circle was inaccurate. The solution to this problem was the Kalman
filter, which was used to track the measurements and estimate where the centre point actually was.
The first step was setting up the Kalman filter. The camera on the robot was treated as the
fixed point and the object as the moving point. The x and y coordinates were the only
measurements of concern, so a 4x2 Kalman filter was created: the 4 referred to the size of the state
transition matrix and the 2 to the number of measurements input into the system, the (x, y) values.
Random values were picked initially for the noise coefficients; after each iteration these coefficients
were updated.
The Kalman filter worked as follows: the circle's centre had a current position denoted by Xt−1,
and that position changed by an unknown amount ut. The robot moved to the next state and
the centre point was at the predicted state X̄t. Assume now that the position the robot thinks the
centre point is at is wrong, and that the location the application says it is at is the point Xapp. It is
then safe to assume that the centre was located somewhere between X̄t and Xapp. This is known as
Xest and was the estimated point from the filter (formulas for calculating all the states are covered
in section 2.1.2 Kalman Filters).
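A minimal sketch of that 4-state, 2-measurement filter with the OpenCV C API is shown below; the noise values stand in for the randomly initialised coefficients mentioned above and are placeholders:

```cpp
#include <cstring>
#include <opencv/cv.h>

// Build a 4-state (x, y, dx, dy), 2-measurement (x, y) Kalman filter.
CvKalman* createTracker()
{
    CvKalman* kf = cvCreateKalman(4, 2, 0);
    // Constant-velocity transition matrix:
    const float F[16] = { 1,0,1,0,  0,1,0,1,  0,0,1,0,  0,0,0,1 };
    std::memcpy(kf->transition_matrix->data.fl, F, sizeof(F));
    cvSetIdentity(kf->measurement_matrix,    cvRealScalar(1));
    cvSetIdentity(kf->process_noise_cov,     cvRealScalar(1e-4)); // placeholder
    cvSetIdentity(kf->measurement_noise_cov, cvRealScalar(1e-1)); // placeholder
    cvSetIdentity(kf->error_cov_post,        cvRealScalar(1));
    return kf;
}

// One iteration: predict, then correct with the measured blob centroid.
CvPoint2D32f track(CvKalman* kf, float mx, float my)
{
    cvKalmanPredict(kf, NULL);
    float m[2] = { mx, my };
    CvMat z = cvMat(2, 1, CV_32FC1, m);
    const CvMat* est = cvKalmanCorrect(kf, &z);
    return cvPoint2D32f(est->data.fl[0], est->data.fl[1]); // smoothed centre
}
```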
3.1.4 Conversion of Centre Point to Real World Location
Once a centre point was estimated by the Kalman filter, the real world coordinates were
calculated. This was done by setting up the camera on the front of the robot and placing objects
in the camera's view. An origin was set up and all object points were measured from the
origin. The object points were then found in the image (image points), and both object and image
points were stored in an XML file for use in the application.
The XML file was used to map an image point to a real world point. When a centre point was
detected, it was passed to the equation for converting 2-dimensional coordinates into 3-dimensional
coordinates (Equation 2-14). The output of that equation was taken to be the real world centre
point of the circle. For this project Zconst was set to 1, because the circles do not sit perfectly flat
on the ground (all real world points are in millimetres).


3.1.5 Application & Arduino


Because the development board on which the application ran did not have a real time
kernel, it was not possible to guarantee that the Pandaboard ES could control the robot reliably.
Because of this, an Arduino was chosen as a device solely dedicated to controlling the robot's
movement.
After the application found the real world centre point of the circle, it sent the
distance to the Arduino over a serial connection. First the serial port was opened using the open
function, and the baud rate was set to match that of the Arduino. The write function was used to
write data to the serial port.
When the robot reached its destination (the centre of the circle), a 'd' command was sent
and the application waited for the Arduino to send back an 's'. The 's' was also sent over the serial
connection and was received by the application using the stringstream class [16] in the sstream
library, where it was stored until the application could check it. This indicated that the Arduino had
finished with the arm. The application then continued running as normal.
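A sketch of the serial link on the Pandaboard side with standard POSIX calls is shown below; the device path, baud rate and function names are assumptions, not details from the project code:

```cpp
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

// Open the Arduino's serial port and match its baud rate.
int openArduino(const char* dev)           // e.g. "/dev/ttyACM0" (assumed)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;

    termios tty;
    tcgetattr(fd, &tty);
    cfsetispeed(&tty, B9600);              // baud rate assumed
    cfsetospeed(&tty, B9600);
    tty.c_cflag |= (CLOCAL | CREAD);       // enable receiver, ignore modem lines
    tcsetattr(fd, TCSANOW, &tty);
    return fd;
}

// Send a single-character command such as 'f', 'l', 'r' or 'd'.
void sendCommand(int fd, char cmd)
{
    write(fd, &cmd, 1);
}
```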
3.1.6 Summary
Object detection was done by HSV colour detection, and noisy images were cleaned up using
elliptical morphological kernels. The blobs in the cleaned up image were then found and stored
using the cvBlob library (hosted on Google Code). The blobs were iterated through and
checked for being elliptical in shape. If a blob was elliptical, its centre point in the image was
determined, and the blob closest to the bottom of the image was deemed to be the closest to the robot.
The centre point for each blob was tracked using a Kalman filter. The Kalman filter smoothed
out noisy measurements of the centre point and produced an estimate that was considered more
accurate than the measured point. The estimated point was then translated to real world
coordinates. The application then decided where in the image the object was, and based on this
decision a command was sent over serial to the Arduino.


3.2 Robot
3.2.1 Dagu 4 Channel Motor Controller
The Dagu 4 channel motor controller [17] is designed by Dagu for the Dagu Rover 5 with four
motors and four quadrature encoders. It has four channels dedicated to the motors and sixteen
pins dedicated to the encoders, of which four are used for ground, four for VCC (5 volt input
from the Arduino) and eight for the state transitions. Each channel has a low resistance FET H-bridge
and is rated for a 4 amp stall current. The direction of the motors is controlled using the direction
pins. The output voltage and current of a channel depend on the duty cycle of the PWM signal fed
to that channel's PWM input pin.

Figure 32 - Dagu 4 channel motor controller [17]

For this project, channels three and four were used. The two encoder inputs for each
motor were used to determine the direction the robot was moving. This was done by looking at
which encoder input went high first (see Figure 33), which depends on which input pin was attached
to which output of the encoder. There is also a current sensing circuit for each channel, which
outputs approximately 1V for each amp of current.


Figure 33 - Quadrature encoder inputs A and B go through an XOR gate to produce an interrupt for each input. There is one interrupt output pin for each encoder. [17]

3.2.2 Dagu 2 Degrees of Freedom Robotic Arm


The Dagu 2 DOF arm consists of two servo motors; one is attached to the arm and one to
the gripper. It was chosen as the robotic arm because of its low price, easy integration with the
Arduino and its strong aluminium frame, which meant it was unlikely to break during testing.
The marker was held in the jaws of the gripper.

Figure 34 - Dagu 2 DOF Arm
3.2.3 Arduino & Robot
Once the Arduino had received a command from the application running on the
Pandaboard ES, it carried out an operation to control the robot. There were four main commands.
The first was the 'f' command: if this was received, the Arduino set the direction of the motors to
forward and generated PWM at a 50% duty cycle. The second and third were the 'l' and 'r'
commands: if the Arduino received one of these, it set the motors in opposite directions, where 'l'
turned the robot left and 'r' turned it right. The fourth was the distance 'd' command. The distance
referred to the real-world distance in the y direction from the robot to the centre of the circle; the
'd' then triggered the arm control (down 45° to a 180° angle and then up 45° to a 135° angle) and set
the motors to forward.
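A hedged sketch of such a command handler is shown below. The pin assignments, baud rate and the exact wire format of the distance command are assumptions, and driveForward() and sweepArm() are hypothetical helpers standing in for the project's own routines.

const int dirLeft = 7, dirRight = 8;   // direction pins on the motor controller (assumed)
const int pwmLeft = 5, pwmRight = 6;   // PWM input pins for channels 3 and 4 (assumed)

void driveMotors(bool leftFwd, bool rightFwd) {
    digitalWrite(dirLeft,  leftFwd  ? HIGH : LOW);
    digitalWrite(dirRight, rightFwd ? HIGH : LOW);
    analogWrite(pwmLeft, 128);         // 128/255 is approximately a 50% duty cycle
    analogWrite(pwmRight, 128);
}

void driveForward(long mm) { /* hypothetical: generate PWM for distance/speed ms */ }
void sweepArm()            { /* hypothetical: servo down 45 deg to 180, back up to 135 */ }

void setup() {
    Serial.begin(9600);
    pinMode(dirLeft, OUTPUT);
    pinMode(dirRight, OUTPUT);
}

void loop() {
    if (!Serial.available()) return;
    char cmd = Serial.read();
    if (cmd == 'f')      driveMotors(true, true);    // both motors forward
    else if (cmd == 'l') driveMotors(false, true);   // opposite directions: turn left
    else if (cmd == 'r') driveMotors(true, false);   // opposite directions: turn right
    else if (cmd == 'd') {                           // distance + arm command
        long mm = Serial.parseInt();                 // distance assumed to follow the 'd'
        driveForward(mm);
        sweepArm();
        Serial.print('s');                           // signal the application: finished
    }
}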
The PWM duration was calculated by connecting the encoder interrupt output pins of the
Dagu 4 channel motor driver to the Arduino. The number of interrupts was counted and then
divided by two, since there is an interrupt for both A and B (discussed in 2.2.2 Encoders). The
number of revolutions was then found by multiplying the result by three and dividing by one
thousand, since the encoders produce one thousand interrupts over three revolutions. The RPM
(revolutions per minute) value was calculated from the number of revolutions and the elapsed
time, and was then converted to a linear speed using Equation 3-1. The distance was then divided
by v, which gave the length of time for which PWM should be generated.

v = r × RPM × 0.10472
Equation 3-1 - where v is the linear velocity in mm/s, r is the radius of the wheel in mm, RPM is the rotational speed in revolutions per minute, and 0.10472 (≈ 2π/60) is the constant that converts revolutions per minute into radians per second
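The following Arduino C++ fragment sketches this calculation under the stated assumptions (interrupt counts halved because both A and B interrupt, one thousand interrupts per three revolutions, and Equation 3-1). The wheel radius is a placeholder value, not the measured radius of the Rover 5 wheels.

volatile unsigned long interruptCount = 0;   // incremented in the encoder ISRs

unsigned long pwmDurationMs(float distanceMm, float elapsedMs) {
    const float wheelRadiusMm = 30.0f;                        // assumed radius
    float pulses      = interruptCount / 2.0f;                // A and B each interrupt
    float revolutions = pulses * 3.0f / 1000.0f;              // 1000 interrupts per 3 revs
    float rpm         = revolutions / (elapsedMs / 60000.0f); // revolutions per minute
    float v           = wheelRadiusMm * rpm * 0.10472f;       // Equation 3-1: mm/s
    return (unsigned long)(1000.0f * distanceMm / v);         // ms of PWM to generate
}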

3.2.4 Summary
The robot was controlled using the Dagu 4 channel motor controller and an Arduino. The
Arduino controlled the motor controller by generating the required PWM based on the commands
received from the application. The direction pin on the motor controller was set by the Arduino
depending on the command; this determined the directions in which the motors turned. The motor
controller then output the required current and voltage for the motors based on the input from the
Arduino.
Chapter 4 Testing
4.1 Advantage of Using Kalman Filter Estimates over Measured Points
To explain why a Kalman filter was used in the project, and to demonstrate the advantages and
accuracy of a Kalman filter, the following tests were carried out. The tests were run a number
of times using only the measured points, and then using the Kalman filter estimated points. The
following histograms show the results of each test; each test was run ten times. Each of the
measurement values (bins) in the histograms was the average measurement of where the robot
determined the centre point of a circle to be in each test. The frequency axis shows how often the
robot determined the circle's centre to be at that distance. Figure 35 shows the setup.

Figure 35 - Robot used to track circles on the ground. Test setup was similar to this.

4.1.1 Test One - Track Four Circles' Centre Points at 139mm
The robot was positioned 139mm from the first circle's centre point. Each circle was
positioned 139mm apart (centre to centre).

[Histogram: Frequency vs. Bin (mm); bins from 138.00 to 145.00 and above]
Figure 36 - Measured results for distances of 139mm
[Histogram: Frequency vs. Bin (mm); bins from 135.00 to 140.00 and above]
Figure 37 - Kalman filter results for distances of 139mm

4.1.2 Test Two - Track Four Circles' Centre Points at 145mm
The robot was positioned 145mm from the first circle's centre point. Each circle was
positioned 145mm apart (centre to centre).

[Histogram: Frequency vs. Bin (mm); bins from 140.00 to 156.00 and above]
Figure 38 - Measured results for distances of 145mm

[Histogram: Frequency vs. Bin (mm)]
Figure 39 - Kalman filter results for distances of 145mm
4.1.3 Test Three - Track Four Circles' Centre Points at 189mm
The robot was positioned 189mm from the first circle's centre point. Each circle was
positioned 189mm apart (centre to centre).

[Histogram: Frequency vs. Bin (mm); bins from 182.00 to 197.00 and above]
Figure 40 - Measured results for distances of 189mm

[Histogram: Frequency vs. Bin (mm); bins from 186.00 to 191.00 and above]
Figure 41 - Kalman filter results for distances of 189mm
Chapter 5 Conclusion of Results
5.1 Application & Robot
The use of HSV images in colour detection works exceptionally well under most normal
lighting conditions. The objects can be detected, and noisy images can be cleaned up very well using
elliptical morphological kernels. The remaining objects can then be stored in arrays and filtered by
comparing their shapes to that of an ellipse. Each remaining object's distance from the robot can then
be checked using the bottom of the image as a reference point. The object closest to the robot can be
accurately located in the real world using the equations that relate the camera image plane to the
3-dimensional point.

5.2 Testing
The results in section 4.1 (Advantage of Using Kalman Filter Estimates over Measured Points)
showed that, when using only the measured results from an image, the robot was off by 5mm or
more. They also showed that the further the circles' centres were from each other, the bigger the
error in measurement was, supporting the expectation that robot sensor measurements are noisy.
The Kalman filter results showed that a smoother, more accurate result could be obtained
by using a Kalman filter. The further the centre points were from each other, the better the
estimates were. This was because of the convergence of the filter over time, due to the noise and
other coefficients being updated with each measurement (i.e. the more measurements and updates
the system goes through, the more accurate it becomes).
Chapter 6 Recommendations
6.1 Frame Rate
One of the biggest issues with the application was the frame rate, which was between 3 and
5 frames per second when fully operational. This affected the speed at which the robot could move
and the overall results in section 4.1, since the robot's movements were hard to synchronise with
the application. An attempt was made to solve the problem by slowing the robot down to the speed
of the application, but this caused other issues, such as extra noise in the system from the vibrations
of the camera every time the robot moved, so the overall accuracy of the system was only improved
by a relatively small factor. It is important to state that multiple factors accounted for the low frame
rate, the most important being the cap on the clock frequency of the Pandaboard ES: the board has
a 1.2GHz dual-core processor, but it was capped at 900MHz because of kernel stability issues above
that frequency.
Investigating the use of CPU (central processing unit) threads would be the best option for
improving the frame rate without purchasing a more powerful DSP board. Threads provide a
powerful method of multi-tasking on multicore processors. A thread could be created to handle
incoming frames from the camera, with each frame being placed into a buffer. Two further threads
could be created to handle the object detection and extraction. This can be done by having blocking
queues of images and having each thread wait for images to be placed into the queue. Once the
first two images have been buffered and the object detection threads have started processing, the
image capturing thread will keep the buffers filled, allowing twice the number of images to be
processed at any one time. A fourth thread could be created to handle the Kalman tracking and
real-world conversion of the points found by the object detection threads. This could theoretically
increase the speed of the system by up to twice the current frame rate. A sketch of this pipeline is
given below.
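The following is a minimal sketch of the proposed pipeline (which was not implemented in the project), using C++11 threads and a blocking queue; detectCircleCentre() is a hypothetical stand-in for the detection stage, and the thread counts are illustrative only.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <opencv2/opencv.hpp>

template <typename T>
class BlockingQueue {
    std::queue<T> q;
    std::mutex m;
    std::condition_variable cv;
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(v)); }
        cv.notify_one();
    }
    T pop() {                                   // blocks until an item arrives
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !q.empty(); });
        T v = std::move(q.front());
        q.pop();
        return v;
    }
};

BlockingQueue<cv::Mat> frames;                  // capture thread -> detection threads
BlockingQueue<cv::Point2f> centres;             // detection threads -> tracker thread

cv::Point2f detectCircleCentre(const cv::Mat& frame) {
    (void)frame;                                // hypothetical HSV/blob detection stage
    return cv::Point2f(0.f, 0.f);
}

void captureLoop(cv::VideoCapture& cap) {
    for (;;) { cv::Mat f; cap >> f; frames.push(f.clone()); }
}

void detectLoop() {                             // two of these run in parallel
    for (;;) { centres.push(detectCircleCentre(frames.pop())); }
}

void trackLoop() {
    for (;;) { cv::Point2f c = centres.pop(); (void)c; /* Kalman update + real-world conversion */ }
}

int main() {
    cv::VideoCapture cap(0);
    std::thread t1(captureLoop, std::ref(cap));
    std::thread t2(detectLoop), t3(detectLoop), t4(trackLoop);
    t1.join(); t2.join(); t3.join(); t4.join(); // loops run until the process is killed
    return 0;
}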

6.2 Camera Vibrations
A very important note here is that the neck the camera was mounted on was an aluminium
bracket (Figure 42). It was a weak structure that was susceptible to strong vibrations caused by the
robot. In this author's opinion, if a stronger neck had been chosen for the camera, the accuracy of
the robot could have been increased, as fewer vibrations lead to less noisy data.

Figure 42 - Type of aluminium bracket. In the case of this project the bracket was much longer [27]
6.3 Robotic Movement Accuracy
Another method of improving the accuracy of the system would be to calibrate the
robot's speed and movement for different surfaces. Even though a motor is commanded to spin
at a given speed, it is not possible to guarantee that the robot is moving at that speed; this
uncertainty is caused by the robot sliding and the wheels spinning on slippery surfaces. The
calibration could be done in the application layer or in the Arduino code using different methods,
and both have their advantages and disadvantages. A further method of improving the accuracy of
the robot's movement would be to introduce speed and acceleration as factors in the Kalman filter.
The acceleration and velocity could be measured by the Arduino and sent back to the
application. The application could also monitor the acceleration and velocity of the robot by
measuring the rate at which a circle's centre moved over a time period. These two variables
could then be added to the Kalman filter to gain a better estimate of how far the robot has
moved, which would help filter out uncertainties about the robot's position in the real world. In
this author's opinion, by combining the Kalman filter estimates of the real-world position of the
circle's centre point with estimates of the robot's position, the accuracy could be increased from
40% up to 90%. A sketch of such a filter is given below.
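As an illustration of this proposal, the following is a hedged sketch of a 1-D constant-acceleration Kalman filter built with OpenCV's cv::KalmanFilter, whose state vector holds position, velocity and acceleration. The covariance values, the time step and the readDistanceMm() helper are placeholders, not values from this project.

#include <opencv2/video/tracking.hpp>

float readDistanceMm() { return 139.0f; }   // hypothetical noisy measurement source

int main() {
    const float dt = 0.25f;                 // ~4 fps frame period (see section 6.1)
    cv::KalmanFilter kf(3, 1, 0);           // state: [position, velocity, acceleration]
    kf.transitionMatrix = (cv::Mat_<float>(3, 3) <<
        1, dt, 0.5f * dt * dt,              // p' = p + v*dt + 0.5*a*dt^2
        0, 1,  dt,                          // v' = v + a*dt
        0, 0,  1);                          // a' = a (constant-acceleration model)
    cv::setIdentity(kf.measurementMatrix);  // only position is measured: H = [1 0 0]
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));

    cv::Mat measurement(1, 1, CV_32F);
    for (int i = 0; i < 10; ++i) {
        kf.predict();                                    // a priori estimate
        measurement.at<float>(0) = readDistanceMm();     // noisy measured distance
        cv::Mat estimate = kf.correct(measurement);      // smoothed [p, v, a]
        (void)estimate;
    }
    return 0;
}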
Bibliography

[1] Image processing, [Online]. Available: http://en.wikipedia.org/wiki/Image_processing. [Accessed February 2013].
[2] G. Bradski and A. Kaehler, Learning OpenCV, O'Reilly, 2006.
[3] cvblob, Google, [Online]. Available: https://code.google.com/p/cvblob/.
[4] Robocrop, [Online]. Available: http://www.thtechnology.co.uk/Robocrop.html.
[5] Rutgers, [Online]. Available: http://www.cs.rutgers.edu/~elgammal/classes/cs534/lectures/3D_modelbasedvision.pdf.
[6] A. Mane, Indoor/Outdoor Scene, 01 09 2010. [Online]. Available: http://www.dcs.shef.ac.uk/intranet/teaching/public/projects/archive/msc0910/pdf/AMane_Mane_090117275.pdf.
[7] immateriel, [Online]. Available: http://livre.immateriel.fr/fr/read_book/9780596516130/ch05s03.
[8] A linear-time component-labelling algorithm using a contour tracing technique, [Online]. Available: http://www.iis.sinica.edu.tw/papers/fchang/1362-F.pdf.
[9] Kalman filter, [Online]. Available: http://www.emgu.com/wiki/index.php/Kalman_Filter.
[10] Science Direct, [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0165027002001917.
[11] AB Robotics, [Online]. Available: http://abrobotics.tripod.com/Ebot/using_encoder.htm.
[12] Pandaboard, [Online]. Available: http://www.pandaboard.org/content/platform.
[13] Willow Garage - cvInRangeS, [Online]. Available: http://opencv.willowgarage.com/documentation/operations_on_arrays.html.
[14] Google/cvBlob, [Online]. Available: https://code.google.com/p/cvblob/wiki/FAQ.
[15] Willow Garage - contour area, [Online]. Available: http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#cvContourArea.
[16] stringstream, C++ library, [Online]. Available: http://www.cplusplus.com/reference/sstream/stringstream/.
[17] RoboSavvy, [Online]. Available: http://robosavvy.com/RoboSavvyPages/Dagu/4ChannelDCMotorControllerManual.pdf.
[18] mydigitalcamera, [Online]. Available: http://mydigitalcamera.us/relationship-between-the-number-of-pixels-on-the-sensor-and-in-the-digital-image/.
[19] Garden Organic, [Online]. Available: http://www.gardenorganic.org.uk/organicweeds/weed_management/show_wman.php?id=16.
[20] Erosion, [Online]. Available: http://en.wikipedia.org/wiki/File:Erosion.png.
[21] Dilation, [Online]. Available: http://en.wikipedia.org/wiki/File:Dilation.png.
[22] Opening, [Online]. Available: http://en.wikipedia.org/wiki/File:Opening.png.
[23] umiacs, [Online]. Available: http://www.umiacs.umd.edu/~ramani/cmsc828d/lecture9.pdf.
[24] Calibration and understanding with OpenCV, [Online]. Available: http://www.aishack.in/2010/07/calibrating-undistorting-with-opencv-in-c-oh-yeah/.
[25] StackOverflow, [Online]. Available: http://stackoverflow.com/questions/12299870/computing-x-y-coordinate-3d-from-image-point.
[26] Society of Robots, [Online]. Available: http://www.societyofrobots.com/microcontroller_tutorial.shtml.
[27] RobotShop, [Online]. Available: http://www.robotshop.com/Images/big/en/lynxmotion-aluminium-interconnect-bracket-pair-asb-25.jpg.
Appendix A
Appendix A is on a CD included in the report. It contains all the code used in the project as well as
any configuration files used.
Appendix B
This appendix contains all the tables of measurements used in testing. Each value in a row is the
average over the four circles in that test run, and each test was run 10 times to produce the
following table.

Test Set 1
Actual Distance      Measured Distance     Kalman Filter Estimated
From Centre (mm)     From Centre (mm)      Distance From Centre (mm)
139.00               143.00                136.00
139.00               142.00                137.00
139.00               144.00                138.00
139.00               143.00                138.00
139.00               143.00                136.00
139.00               140.00                138.00
139.00               142.00                136.00
139.00               142.00                138.00
139.00               142.00                139.00
139.00               144.00                137.00

Test Set 2
Actual Distance      Measured Distance     Kalman Filter Estimated
From Centre (mm)     From Centre (mm)      Distance From Centre (mm)
145.00               154.00                147.00
145.00               153.00                145.00
145.00               153.00                145.00
145.00               140.00                147.00
145.00               149.00                143.00
145.00               149.00                145.00
145.00               155.00                148.00
145.00               151.00                148.00
145.00               149.00                146.00
145.00               154.00                148.00

Test Set 3
Actual Distance      Measured Distance     Kalman Filter Estimated
From Centre (mm)     From Centre (mm)      Distance From Centre (mm)
189.00               184.00                189.00
189.00               170.00                188.00
189.00               195.00                188.00
189.00               196.00                190.00
189.00               195.00                189.00
189.00               183.00                190.00
189.00               194.00                188.00
189.00               183.00                189.00
189.00               195.00                187.00
189.00               184.00                189.00

Figure 43 - Table of results
Appendix C
This appendix contains all diagrams not shown in the report.

Figure 44 - Final robot assembled and used in testing. Labelled components: Logitech C310 HD webcam, Pandaboard ES, Dagu 2 DOF arm with marker, Dagu 4 channel motor controller, Arduino Uno and Dagu Rover 5 chassis
Figure 45 - System flow
Figure 46 - System block diagram