

2012 American Control Conference
Fairmont Queen Elizabeth, Montréal, Canada
June 27-June 29, 2012

Laser Based Rangefinder for Underwater Applications

Chris Cain
Mechanical Engineering Department
Virginia Polytechnic Institute and State University
Blacksburg, VA 24061-0238
Telephone: (864) 238-4779
Fax: (540) 231-8836
Email: ccain@vt.edu

Alexander Leonessa
Mechanical Engineering Department
Virginia Polytechnic Institute and State University
Blacksburg, VA 24061-0238
Telephone: (540) 231-3268
Fax: (540) 231-8836
Email: leonessa@vt.edu

Abstract—Autonomous vehicles working in unknown environments require tools that allow them to learn about their surroundings. Once a vehicle knows about the environment in which it is working, it is able to move about the environment freely and in an optimized pattern to fulfill the required tasks while staying out of harm's way. While various sensors have been developed for vehicles operating out of the water, the number of sensors available for use by underwater vehicles is smaller. A low cost sensor prototype for cluster distance measurements is developed for use in underwater environments. The proposed sensor makes use of a camera and two laser line generators. Using an algorithm based on the pinhole camera model, the sensor is able to determine distance information about the surrounding environment. The prototype produces distance measurements with an error of 10% of the distance, for tested distances up to 5 m.

Index Terms—laser range finder, computer vision, autonomous vehicles

I. INTRODUCTION

In autonomous vehicle applications, the ability of the vehicle to know where the objects that surround it are located is crucial. This information is needed for high level applications such as mapping and vehicle localization, as well as low level applications such as obstacle avoidance. Light Detection and Ranging (LiDAR) sensors do an extremely good job of providing a vehicle with information about the surrounding environment. However, they have a high cost that is prohibitive for low budget applications, and they do not work correctly in underwater environments. Sound Navigation and Ranging (SONAR) works quite well for determining distances in underwater environments. However, the underwater environments that our sensor is designed for are small and enclosed by structures. In these types of environments, acoustic based solutions such as SONAR are extremely difficult to use due to the high number of multiple returns caused by reflections in the enclosed environment.

In this paper a prototype for a low cost distance measuring sensor based upon a simple camera and parallel laser line setup is proposed. While similar sensor designs have been proposed, there are some main differences with the proposed sensor. In ([1],[2]) laser pointers projecting a single dot are used as opposed to laser lines. Because of their design, they are only able to give distance information related to a single location directly in front of the camera. Also, both of these designs rely heavily on calibration routines that map the pointer's location in the image frame to a distance, whereas our approach removes the need for this calibration step. Other, similar approaches have been developed for different tasks. For example, in ([3],[4]) a single laser line is used to generate full 3D maps of underwater objects. However, it can be challenging to find the entire laser line in environments that are not extremely dark, so this approach cannot be used in operating environments where large amounts of natural and artificial light may be present. Because of the downfalls of the previous designs, a new sensor was designed.

The remainder of this paper is organized as follows. Section II discusses the design of the sensor prototype, including the physical layout and the theory behind the design. Section III discusses all of the steps of the image processing algorithm used to extract the laser lines' location from the image. Section IV discusses the experimental results, and finally Section V concludes the presentation of the proposed design.

II. RANGEFINDER DESIGN

A. Physical Layout

The physical layout of the rangefinder can be seen in Fig. 1. The rangefinder consists of a color CCD camera (C) and a pair of laser line generators (A) and (B), which are vertically mounted on top of each other and parallel to the camera's viewing axis. The line generators are mounted so that their lines are parallel to the horizontal axis of the camera's focal plane. The result of the sensor's physical layout is a pair of horizontal lines running across the frame captured by the camera. The camera chosen for the prototype was a Sony FCB-EX11D, which has a 1/4" sensor and a 10x optical zoom.



The laser line generators are Apinex IL-532-5L units: green lasers with a 532 nm wavelength and a 60° fan angle in air. A green laser was chosen as opposed to the standard red laser used in many laser pointers because the absorption coefficient of green light in water is lower than the coefficient for red light ([5]). This results in water absorbing approximately 50 times more of the red light than of the green light.

Fig. 1. Sensor Physical Layout: design on the left and photograph on the right

B. Pinhole Camera Model

The distance calculation is based on the pinhole camera model ([6]). According to the classic pinhole model, any point in the world, P = (x_w, y_w, z_w), seen by a camera whose aperture is located at O = (0, 0, 0), is projected onto the camera's focal plane at Q = (-x_f, -y_f, f). The relationship between P and Q is described by

    \frac{x_w}{z_w} = -\frac{x_f}{f}, \qquad \frac{y_w}{z_w} = -\frac{y_f}{f},    (1)

where x_w, y_w, and z_w are the components of P, the point in world coordinates, and x_f, y_f, and f are the corresponding components of Q, the object's projection. The negative signs in the projected point Q are a consequence of the camera's focal plane being located behind the aperture, as seen in Fig. 2.

Fig. 2. Classical Pinhole Model

In order to remove any confusion caused by the negative signs, a modified version of the pinhole model has been chosen. In particular, by moving the focal plane in front of the camera's aperture, as seen in Fig. 3, the signs of the X and Y components of the object projected onto the focal plane match those of the object's real world coordinates. From the modified pinhole model, the relation between the object in the world coordinate frame and the camera coordinate frame is described by

    \frac{x_w}{z_w} = \frac{x_f}{f}, \qquad \frac{y_w}{z_w} = \frac{y_f}{f},    (2)

where the corresponding components of P and Q define the relationship.

Fig. 3. Modified Pinhole Model

C. Distance Measurement Theory

Based on the physical layout of the sensor (Fig. 1) along with the modified pinhole camera model, a side view of the sensor can be seen in Fig. 4.

Fig. 4. Sensor Side View

As seen in the side view of the sensor, two similar triangles are created between the sensor's camera aperture and i) the object (OCD) and ii) the object's projection on the focal plane (OAB). By equating the two triangles, the relationship between the world coordinates of the object and the location of the laser lines in the image is given as

    \frac{\tilde{y}_w}{z_w} = \frac{\tilde{y}_f}{f},    (3)

where ỹ_w ≜ y_{w,1} - y_{w,2} is the physical distance between the laser line generators, ỹ_f ≜ y_{f,1} - y_{f,2} is the distance between the lines in the image, z_w is the distance between the camera's aperture and the object, and f is the focal length of the camera.

Since ỹ_w is known from the physical setup of the sensor, f is known for the camera being used, and ỹ_f can be found through the image processing algorithm discussed in Section III, the distance to the object can be calculated as

    z_w = \tilde{y}_w \left( \frac{f}{\tilde{y}_f} \right).    (4)
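As a worked example of (3) and (4) with assumed numbers (illustrative values only, not the prototype's actual parameters): suppose the laser lines are mounted ỹ_w = 0.10 m apart and the calibrated focal length corresponds to f = 800 pixels. If the two lines are detected ỹ_f = 40 pixels apart in the undistorted image, then

    z_w = \tilde{y}_w \left( \frac{f}{\tilde{y}_f} \right) = 0.10\,\text{m} \times \frac{800\ \text{px}}{40\ \text{px}} = 2.0\ \text{m}.

If the measured separation shrinks to 20 pixels, the estimate doubles to 4.0 m; the sensitivity of z_w to small errors in ỹ_f at long range is examined in Section IV-B.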

III. IMAGE PROCESSING ALGORITHM

A. Algorithm Overview

As seen in Section II, in order to determine how far away an object is from the sensor it is necessary to determine the distance between the two laser lines in the image frame. To accomplish this task, an algorithm was developed to extract the laser lines from the image and then calculate the distance to the object based on their spacing. An overview of the algorithm is shown in Fig. 5.

Fig. 5. Image Processing Algorithm Overview

B. Removal of Lens Distortion

In the development of the sensor model, the assumption was made that the camera behaves like the ideal pinhole camera model. However, in practice most low cost cameras suffer from distortion introduced by the lens and other manufacturing defects. A model for camera distortion ([7],[6]) describes two different types of distortion existing in cameras. Radial distortion is described as

    x_{corrected,radial} \triangleq x \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right),    (5)
    y_{corrected,radial} \triangleq y \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right),    (6)

where x and y are the horizontal and vertical distances from the center of the camera aperture for a point in the image, r \triangleq \sqrt{x^2 + y^2} is the distance of the point from the center of the camera's aperture, and k_i > 0, i = 1, 2, 3, are unique constants describing the radial distortion for a given camera.

The second component of distortion, tangential distortion, is described as

    x_{corrected,tangential} \triangleq x + \left[ 2 p_1 y + p_2 \left( r^2 + 2 x^2 \right) \right],    (7)
    y_{corrected,tangential} \triangleq y + \left[ p_1 \left( r^2 + 2 y^2 \right) + 2 p_2 x \right],    (8)

where p_i > 0, i = 1, 2, are camera specific constants that describe the tangential distortion.

The first step of the image processing algorithm is to remove the distortion from the image, which can be achieved by determining the two sets of distortion parameters, k_i, i = 1, 2, 3, and p_i, i = 1, 2. This is a one time operation that must be performed for the camera used in the sensor. In the prototype, the Camera Calibration Toolbox for Matlab ([7]) was used to determine the coefficients. The toolbox is a Matlab implementation of [8]. The calibration process examines a set of images of a standard checkerboard training pattern that is placed around the working space of the camera. Since the dimensions and layout of the training pattern are known, the toolbox uses this information to solve for the camera's distortion parameters. Along with the distortion parameters, the Camera Calibration Toolbox also determines the focal length of the camera and the location of the center point of the aperture in the image. With the distortion removed, the image is assumed to match that of an ideal pinhole camera model.
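For readers who prefer a code-level view of this one-time step, the sketch below performs an equivalent checkerboard calibration and per-frame undistortion with OpenCV rather than the Matlab toolbox used in the prototype; the checkerboard size and file paths are hypothetical placeholders.

```python
# Sketch only: an OpenCV equivalent of the one-time calibration and the
# per-frame distortion removal. The prototype used the Camera Calibration
# Toolbox for Matlab; pattern size and paths below are assumptions.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)  # inner-corner count of the hypothetical checkerboard

# Planar reference corners of the checkerboard (z = 0), in "square" units.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calibration_images/*.png"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the camera matrix (focal length and principal point) and the
# distortion coefficients (k1, k2, p1, p2, k3) used in (5)-(8).
_, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)

def undistort(frame):
    """Remove lens distortion so the frame approximates a pinhole camera."""
    return cv2.undistort(frame, camera_matrix, dist_coeffs)
```

As noted above, the calibration also yields the focal length f that appears in (4).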

C. Image Segmentation

Several other sensors use a similar approach ([2],[1]), but they project a single point onto an object as opposed to a line across an object. By projecting a line across the image, the distance to an object can be determined at multiple points rather than at the single point obtained when using a single laser dot. The ability to know the distance to multiple objects, or to multiple locations on a single object, aids the vehicle's ability to map the surrounding environment. In order to determine the distance at multiple locations, the image is broken down into segments, as seen in Fig. 6.

Fig. 6. Image segmented into seven segments 50 pixels wide with an angular offset of 4°

While the primary advantage of segmenting the image is that distances can be calculated at multiple locations, a secondary advantage is also achieved. In particular, by segmenting the image, the image processing algorithm is run on smaller images as opposed to the entire image. This provides a computational advantage in that the processing time is shortened compared to the time that would be required to process the entire image.
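A minimal sketch of the segmentation step is shown below, assuming an undistorted frame stored as a NumPy array; the seven 50-pixel-wide strips mirror Fig. 6, but the exact centers are illustrative rather than taken from the prototype.

```python
# Sketch only: split an undistorted frame into narrow vertical strips so the
# line extraction of Section III-D can run on each strip independently.
import numpy as np

SEGMENT_WIDTH = 50   # pixels, as in Fig. 6
NUM_SEGMENTS = 7     # as in Fig. 6

def segment_frame(frame: np.ndarray):
    """Yield (center_column, strip) pairs for evenly spaced vertical strips."""
    width = frame.shape[1]
    centers = np.linspace(SEGMENT_WIDTH, width - SEGMENT_WIDTH, NUM_SEGMENTS)
    for cx in centers.astype(int):
        left = max(cx - SEGMENT_WIDTH // 2, 0)
        right = min(cx + SEGMENT_WIDTH // 2, width)
        yield cx, frame[:, left:right]
```

Because each strip is only 50 pixels wide, the later processing touches a small fraction of the frame, which is the computational saving described above.
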
D. Line Extraction
With the image broken down into smaller segments, each segment is then processed to extract the location of the laser line in the image. First, the image is converted from a full color image to a black and white image. Then, a threshold is applied in order to extract the brightest portion of the image. Finally, all of the line segments are extracted from the image using the Hough Transform ([9]). The Hough Transform takes as input an image that has been processed by an edge detection algorithm, which extracts all of the edges that could be lines. Each point in the image, located at (x, y), that is a member of an extracted edge can be represented in slope-intercept form,

    y = m x + b,    (9)

where m is the slope of a given line and b is the point where the line intercepts the vertical axis. Any point in the x-y coordinate system can therefore be represented as a line in the m-b coordinate system, as seen in Fig. 7. If, for any two points, the corresponding lines in the m-b coordinate system intersect, those two points lie on the same line segment in the x-y coordinate system.

Fig. 7. Points on a given line in the x-y coordinate system (A) and the m-b coordinate system (B)

The result after each step of the line extraction algorithm can be seen in Fig. 8, where (A) is the original image segment captured by the camera, (B) is the image after it has been converted from the full color space to the grey scale color space, (C) shows the image once the threshold has been applied to extract the brightest image components, and (D) shows the line segments extracted by the Hough Transform line identification.

Fig. 8. Steps for Line Extraction: (A) Original Image, (B) Converted to Grey Scale, (C) Threshold Applied, and (D) Extracted Lines

Once all of the line segments have been extracted from the image, there is a chance that multiple line segments are used to represent each laser line. The line segments are therefore grouped together based on a user defined pixel separation parameter: the grouping procedure takes each of the extracted line segments, and segments that fall within some p pixel distance of each other are assumed to represent the same laser line. Once the line segments corresponding to each laser line are grouped together, each line segment is evaluated at the midpoint of the image segment and the results are averaged to estimate the exact middle of the laser line in the frame. Finally, the pixel difference between the two laser lines is calculated from these averages, so that the distance to the object at the center of the image segment can be calculated according to (4).
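A hedged end-to-end sketch of this per-segment processing is given below: grayscale conversion, thresholding, edge detection, Hough line extraction, grouping of nearby segments, and the distance computation of (4). The threshold, Canny parameters, grouping distance p, and the constants standing in for ỹ_w and f are all assumed values, and OpenCV's HoughLinesP is used in place of whichever Hough implementation the prototype relied on.

```python
# Sketch only: per-segment line extraction and ranging (Section III-D).
# All numeric parameters below are illustrative, not the prototype's.
import cv2
import numpy as np

Y_W_TILDE = 0.10   # assumed physical laser-line spacing, y~_w [m]
FOCAL_PX = 800.0   # assumed focal length in pixels, f (from calibration)
P_PIXELS = 5.0     # assumed grouping distance between Hough segments [px]

def laser_line_rows(strip_bgr):
    """Estimate the image row of each laser line at the strip's middle column."""
    gray = cv2.cvtColor(strip_bgr, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(bright, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=20,
                            minLineLength=20, maxLineGap=5)
    if lines is None:
        return []
    mid = strip_bgr.shape[1] / 2.0
    rows = []
    for x1, y1, x2, y2 in lines[:, 0]:
        # Evaluate each detected segment at the strip's midpoint column.
        if x2 != x1:
            rows.append(y1 + (y2 - y1) * (mid - x1) / (x2 - x1))
        else:
            rows.append((y1 + y2) / 2.0)
    rows.sort()
    # Group segments whose rows fall within P_PIXELS of each other; each
    # group is assumed to be one laser line and its rows are averaged.
    groups = [[rows[0]]]
    for r in rows[1:]:
        if r - groups[-1][-1] <= P_PIXELS:
            groups[-1].append(r)
        else:
            groups.append([r])
    return [float(np.mean(g)) for g in groups]

def distance_for_strip(strip_bgr):
    """Apply (4): z_w = y~_w * (f / y~_f), with y~_f the pixel separation."""
    rows = laser_line_rows(strip_bgr)
    if len(rows) != 2:
        return None                    # expect exactly the two laser lines
    y_f_tilde = abs(rows[1] - rows[0])
    return Y_W_TILDE * FOCAL_PX / y_f_tilde
```

Running distance_for_strip on each strip produced by the segmentation sketch yields one range estimate per image segment.
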
IV. RESULTS

A. Results Overview

To test the accuracy of the sensor an experiment was performed. A test environment was constructed out of plywood in the laboratory. To evaluate the proposed sensor's accuracy, distances were measured with a Hokuyo UTM-30LX scanning laser rangefinder as a reference. The Hokuyo LiDAR system has an accuracy of ±30 mm over the range 0.1–10 m and ±50 mm over 10–30 m ([10]). Both the sensor prototype and the LiDAR system were attached to a wheeled cart and were moved around the test environment. The image was segmented into seven frames during the test. A comparison of the distances calculated with the sensor prototype and the corresponding distances measured with the LiDAR, for two of the seven segments, can be seen in Fig. 9.

Fig. 9. Sensor Results vs LIDAR over time for two of the seven image segments

To better understand how well the sensor prototype tracks the actual distance to the object, an error analysis was performed. The results for two of the seven segments can be seen in Fig. 10. From the error analysis, the absolute error in distance is proportional to the distance from the object, approximately 10% of the actual distance. The sensor provides a more accurate distance measurement when the sensor is

closer to the object, which was to be expected from the use of a camera as the sensor's basis.

Fig. 10. Error vs. Distance for two of the seven image segments

To show the benefits of this design, a set of frames is shown in Fig. 11. The advantage that results from using laser lines instead of a single laser pointer, and from segmenting the image into multiple frames, can be seen there: when corners, or obstacles that are not flat and perpendicular to the camera's viewing axis, are encountered, more information about the object is obtained than just the distance to the object directly in front of the sensor.
Once the sensor was tested in the lab environment, a waterproof enclosure was constructed to allow for underwater testing. An outdoor, unfiltered test tank was used to test the ability of the sensor to measure distances underwater. The sensor was held underwater facing one end of the tank at distances of approximately 0.61 m (2 ft), 1.529 m (5 ft), 2.14 m (7 ft), 3.058 m (10 ft), and 3.975 m (13 ft). The results of the underwater distance test are shown in Fig. 12, where the sensor is located at the origin, pointed down the horizontal axis looking into the tank. Before the sensor was tested, the calibration routine was run for the camera in the underwater environment, because the water changes the distortion coefficients. As seen in the results, the sensor is capable of determining the distance to a given object in the underwater environment, with an estimated error near the previously calculated 10%. The results also show that the sensor is able to determine the shape of objects in the underwater environment: in the measurement at 3.975 m the prototype captured the sides of the test tank as well as the end of the tank, and the corners of the tank are easily seen in the results. However, since the sensor is based on vision, any water conditions that negatively affect visibility will also negatively affect the sensor's performance.

B. Error Analysis

Examining (4) under the assumption that the laser line generators can be properly mounted parallel to each other, the primary source of error in the distance measurement comes from the approximations made in determining the actual center of the laser line in the camera image. To see how this error affects the distance measurement, a simple analysis was performed. By differentiating (4) we obtain

    \delta z_w = -\tilde{y}_w \left( \frac{f}{\tilde{y}_f^2} \right) \delta \tilde{y}_f,    (10)

where δz_w represents the distance error corresponding to the error δỹ_f affecting the line separation measurement in the camera image. Using (3) to substitute ỹ_f = f ỹ_w / z_w, equation (10) can be rewritten as

    |\delta z_w| = \frac{f \tilde{y}_w}{\tilde{y}_f^2} |\delta \tilde{y}_f| \qquad \text{or} \qquad |\delta z_w| = \frac{z_w^2}{f \tilde{y}_w} |\delta \tilde{y}_f|.    (11)

From this result it is seen that the error grows quadratically with respect to the distance from the target object: for the same δỹ_f, the error at 4 m is four times the error at 2 m.

V. CONCLUSION

In this paper a low cost distance measuring sensor was designed and tested both in air, where the accuracy of the sensor could be easily verified, and underwater. The low cost and the availability of waterproof equipment (lasers and cameras) make the proposed sensor useful for underwater robotics applications where a given level of visibility can be assured. The resulting sensor provides distance data with an error of 10% of the distance to the object, which is accurate enough to be used on underwater vehicles moving at slow speeds. The sensor prototype also provides shape information about objects that can be useful for mapping and localization as well as simple obstacle avoidance.

REFERENCES

[1] K. Muljowidodo, M. Rasyid, N. SaptoAdi, and A. Budiyono, "Vision based distance measurement system using single laser pointer design for underwater vehicle," IJMS, vol. 38, no. September, pp. 324–331, 2009. [Online]. Available: http://nopr.niscair.res.in/handle/123456789/6209
[2] G. Karras, D. Panagou, and K. Kyriakopoulos, "Target-referenced localization of an underwater vehicle using a laser-based vision system," Oceans 2006, pp. 1–6, Sept. 2006. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4098908
[3] A. Bodenmann, B. Thornton, T. Ura, M. Sangekar, T. Nakatani, and T. Sakamaki, "Pixel based mapping using a sheet laser and camera for generation of coloured 3D seafloor reconstructions," in OCEANS 2010, Sept. 2010, pp. 1–5.
[4] M. Sangekar, B. Thornton, T. Nakatani, and T. Ura, "Development of a landing algorithm for autonomous underwater vehicles using laser profiling," in OCEANS 2010 IEEE - Sydney, May 2010, pp. 1–7.
[5] M. Stomp, J. Huisman, L. J. Stal, and H. C. P. Matthijs, "Colorful niches of phototrophic microorganisms shaped by vibrations of the water molecule," ISME J, 2007.
[6] G. Bradski and A. Kaehler, Learning OpenCV. O'Reilly, 2008.
[7] J.-Y. Bouguet, "Camera calibration toolbox for Matlab," http://www.vision.caltech.edu/bouguetj/calib_doc/index.html
[8] J. Heikkilä and O. Silvén, "A four-step camera calibration procedure with implicit image correction."
[9] R. O. Duda and P. E. Hart, "Use of the Hough transformation to detect lines and curves in pictures," Communications of the ACM, vol. 15, pp. 11–15, 1972.
[10] Hokuyo, "Scanning laser range finder UTM-30LX/LN specification," http://www.hokuyo-aut.jp/02sensor/07scanner/download/data/UTM-30LX_spec.pdf

Fig. 11. Corner Plots

Fig. 12. Tank Testing Results