

Visual-Based Navigation of an Autonomous
Tugboat
M. H. Tall, P. F. Rynne, J. M. Lorio, & K. D. von Ellenrieder
Department of Ocean Engineering
Florida Atlantic University
Dania Beach, FL 33004-3023 USA

Abstract- This paper presents the work of a team of undergraduate and graduate students at Florida Atlantic University (FAU)
who compete in the annual Autonomous Surface Vehicle (ASV) competition held by the Association for Unmanned Vehicle
Systems International (AUVSI). The theoretical concept of the vehicle, the design modification and fabrication process, and the
results of both preliminary testing and the final competition are presented. The initial configuration of a stereoscopic vision system
and navigation algorithm is explored through testing in a controlled environment. With this approach, the vehicle is shown to be
capable of navigating precisely through various courses of colored buoys; approximately 25% of the attempts result in successful
navigation of all buoy pairs, while 75% of the attempts result in successful navigation of at least half of the buoy pairs.

I. INTRODUCTION

Since 2007, the Association for Unmanned Vehicle Systems International (AUVSI) has held an annual autonomous
surface vehicle (ASV) competition for students [1]. Recent efforts to advance ASV technology exemplify the broad spectrum
of applications for which they are appropriate [2]-[5]. The requirements of the vehicle presented here are defined with
respect to the competition missions, with the primary focus placed on navigating autonomously between a series of buoys,
obstacles, and docking stations. Specifically, the obstacle course consists of the following sub-missions: a static thrust test, a
high speed run between two gates, navigation through eight sets of orange and green buoys, avoidance of two yellow buoys,
docking with a small floating structure, shooting two “hostile” targets, acquiring a floating life ring, and returning home. The
position (latitude and longitude) of the buoys is not provided. Although the specific geometry of the course is changed
between each round of the competition, the ranges of the buoy spacing, gate width, and offset angle are defined in Fig. 1. All
tasks are to be completed in a single run while operating autonomously. A conventional navigation approach, one that relies
solely on input from the Global Positioning System (GPS) and a compass, is inadequate due to the limited precision it can
achieve and because the positions of the obstacles are not fixed. A visual-based navigation approach is required.

Figure 1. Obstacle course buoy configuration [1].

0-933957-38-1 ©2009 MTS


II. PLATFORM SELECTION

After considering many different hull forms, both mono-hull and multi-hull, it was determined that an escort tugboat type
design would lend itself well to the areas of the mission deemed most important, namely maneuverability, stability, and
towing ability. Escort tugboat designs have evolved from those of harbor tugboats, and as a result have maintained a very
high level of maneuverability. These vessels are required to provide high thrust over 360 degrees and to change the direction of
that thrust precisely and instantaneously, a feature that translates well to the AUVSI ASV competition's missions of buoy
navigation, docking, and towing. Drawbacks of this design include higher wave-making resistance and less dynamic straight-
line stability as compared to a slender hull design such as a catamaran.
Escort tugs are unique in that they rarely operate in “design” mode [6]. An escort tugboat will often have a line from its
stern to the stern of the vessel being escorted. In this configuration the tugboat will head in the direction considered astern by
convention, while the vessel in-tow heads forward. Since this is the main mode of operation, escort tugs are designed with a
considerable amount of fore/aft symmetry at the waterline, similar to harbor tugs, which are generally fully symmetric. Since
escort tugs are required to operate at speeds in excess of 10 knots ahead and astern, which is considerably higher than that of
normal harbor tugs, they do have a lower length-to-beam ratio (L/B) and slightly more of a distinction between the bow and
stern. Other features of the hull that make it favorable to the competition are the large interior volume, transverse bulkhead
and sponsons, and deeply submersed propulsion systems. Although a Voith-Schneider drive unit is typically used for this hull
form, two model Schottel drive units are utilized to provide the 45 N of thrust required by the static thrust sub-mission.
Control is achieved using differential thrust and a proportional-integral-derivative (PID) controller. The interior volume of a
monohull with a low L/B was felt to be advantageous over that of a multi-hull vessel. The large interior volume allows nearly
all of the equipment to be placed below deck, offering increased protection and a low vertical center of gravity (VCG). A
transverse watertight bulkhead is placed amidships to separate the hull penetrations for the Schottel drives from the CPU
and navigation electronics. The sponsons allow for a lower L/B at the waterline, lowering the hydrodynamic resistance, while
providing a large increase in stability if the vessel were to heel over. The scaled-down hull form, seen in Fig. 2, is similar to a
high performance escort tugboat design by Robert Allan Ltd. of Vancouver, B.C. [7]. The deck height is 22.8 cm to
accommodate the limits of the CNC machine used to fabricate the hull. The original design weight of the vessel was 34 N,
and the lines are modified for that displacement, which gives a design waterline of 10 cm. The final vessel weighs only 24 N.
This 30% weight reduction gives the vessel lower drag and requires much less power to maneuver than originally estimated.
Various parameters of the hull design are presented in Table 1.

Figure 2. CAD representation of the ASV hull.

TABLE I. HULL DESIGN PARAMETERS


PARAMETER VALUE
Length overall 1.37 [m]
Length at waterline 1.257 [m]
Beam 0.508 [m]
Maximum beam 0.577 [m]
Draft 0.23 [m]
Molded depth of hull 0.10 [m]
Displacement 24 [N]
Length to beam ratio 2.7
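The differential-thrust heading control mentioned above can be sketched in a few lines. This is a minimal illustration, not the vehicle's actual controller: the gains, the heading-wrap convention, and the 22.5 N per-drive limit (half of the 45 N total required by the static thrust sub-mission) are assumptions.

```python
# Sketch of heading control via differential thrust with a PID controller,
# in the spirit of the scheme described above. Gains, time step, and the
# 22.5 N per-drive thrust limit are illustrative assumptions.

class HeadingPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, desired_deg, actual_deg):
        # Wrap the heading error into [-180, 180) degrees.
        error = (desired_deg - actual_deg + 180.0) % 360.0 - 180.0
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def differential_thrust(base_thrust, correction, max_thrust=22.5):
    # Split the PID output into port/starboard thrust commands [N],
    # clamped to the per-drive limit.
    port = max(0.0, min(max_thrust, base_thrust + correction))
    stbd = max(0.0, min(max_thrust, base_thrust - correction))
    return port, stbd
```

The wrap into [-180, 180) ensures the controller always turns the short way around, e.g. a desired heading of -170° with an actual heading of 170° yields a +20° error rather than -340°.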

III. VISION SYSTEM HARDWARE

The C-programmable Carnegie Mellon University (CMU) CMUCam3 digital camera is the vision sensor for the ASV (Fig. 3a).
The CMUCam3 is based on a complementary metal oxide semiconductor (CMOS) imager, a silicon chip that contains a grid of
photosites that are each sensitive to different colors of light. After light passes through the lens and onto the chip,
voltage levels are generated proportional to the intensity of the light. The voltage is then converted into a numerical value
representing the level of that color: red, green, or blue [8]. To navigate through the buoys, a color-tracking scheme is
implemented. Color tracking is the ability of the camera to recognize a desired object and return information about its size and
position relative to the camera's view. The CMUCam3 uses a simple one-pass algorithm to analyze the color of each pixel
and determine whether it falls within a user-defined range. If the pixel is in the range, its location is stored and the
bounding box (Fig. 3b) of the blob is expanded to include it. The centroid is then calculated from the locations of the matching
pixels within the bounding box (Fig. 3c).

Figure 3a, 3b & 3c. CMUCam3 (left) and example of color tracking bounding box (middle), and corresponding centroid (right).
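The one-pass color-tracking scheme described above can be sketched as follows. This is an illustrative reimplementation, not the CMUCam3 firmware; the function name and the list-of-rows image representation are assumptions.

```python
def track_color(image, lo, hi):
    """One-pass color blob tracker in the spirit of the CMUCam3 scheme.

    image: list of rows of (r, g, b) tuples; lo/hi: inclusive RGB bounds.
    Returns (bounding_box, centroid, pixel_count), or None if no pixel
    falls inside the user-defined color range.
    """
    min_x = min_y = max_x = max_y = None
    sum_x = sum_y = count = 0
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            # Test the pixel against the user-defined color range.
            if lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1] and lo[2] <= b <= hi[2]:
                if min_x is None:
                    min_x, max_x, min_y, max_y = x, x, y, y
                else:
                    # Grow the bounding box to include this pixel.
                    min_x, max_x = min(min_x, x), max(max_x, x)
                    min_y, max_y = min(min_y, y), max(max_y, y)
                sum_x += x
                sum_y += y
                count += 1
    if count == 0:
        return None
    # Centroid from the locations of all matching pixels.
    return (min_x, min_y, max_x, max_y), (sum_x / count, sum_y / count), count
```

A single pass over the frame suffices because both the bounding box and the centroid sums can be updated incrementally per pixel, which suits the limited memory and processing power of the camera.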

A GWS S03N STD standard servomotor is used to rotate the camera about its vertical (yaw) axis. It connects directly to
the CMUCam3, such that the camera can command the servo directly, eliminating the need for a separate
servo controller. The servo is capable of a rotation rate of 230 °/s and is limited to ±50°. While color tracking, the servo is
programmed to turn such that the centroid of a target remains in the center of the field of view. The CMUCam's closed-loop
controller is based on a "bang bang" control approach [9]; if the target is off center, a command is sent to the servo to move a
predetermined number of steps in the direction that will center the target. If this predetermined number of steps is not enough,
the bang bang control is applied again on the next frame. To make this approach more capable, a two-stage method is
used (Fig. 4). Two centroid deviation thresholds are defined, a high and a low. If the centroid deviation is greater than the upper
threshold, the servo moves by the full predetermined step size; if it falls between the two thresholds, the servo moves by half
of the predetermined step size. Early tests of the tracking system have been conducted in both low-resolution (88 x 72 pixel)
and high-resolution (176 x 143 pixel) modes. The maximum angular velocity ω_max that the cameras can track is related to the
target range r and its relative velocity v in Equations 1a, 1b, and 1c. The angles θ are for both the left
and right camera, as illustrated in Fig. 6.

ω_max = v / r (1a)
low resolution: ω_max ≈ 0.88 rad/s (1b)
high resolution: ω_max ≈ 0.34 rad/s (1c)

The maximum color-tracking yawing rates are appropriate for the ASV; specifically the cameras are capable of tracking
targets at an appropriate range and velocity equivalent to the desired operating speed of the vehicle.

Figure 4. Flow chart of modified “bang bang” control algorithm.
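The two-stage "bang bang" correction of Fig. 4 can be sketched as a single function. Only the full-step/half-step structure comes from the text; the pixel thresholds, step size, turn-direction sign convention, and the dead band below the lower threshold are hypothetical.

```python
def servo_step(centroid_x, frame_center, lo_dev, hi_dev, step):
    """Two-stage 'bang bang' pan correction for the camera servo.

    centroid_x: horizontal pixel position of the tracked blob's centroid;
    frame_center: pixel column of the image center; lo_dev/hi_dev: the two
    deviation thresholds; step: predetermined servo step size.
    Returns the signed servo step that moves the camera toward the target.
    Thresholds and sign convention are illustrative assumptions.
    """
    dev = centroid_x - frame_center           # pixel deviation of the centroid
    direction = -1.0 if dev > 0 else 1.0      # turn toward the target (assumed sign)
    if abs(dev) > hi_dev:
        return direction * step               # large deviation: full step
    if abs(dev) > lo_dev:
        return direction * step / 2.0         # moderate deviation: half step
    return 0.0                                # target near center: hold position
```

Halving the step near the center reduces the oscillation a plain bang-bang controller exhibits once the target is almost centered, at the cost of slightly slower convergence from small deviations.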


IV. VISUAL NAVIGATION ALGORITHM

A system of two cameras is presented. The primary motivation for a multi-camera configuration is redundancy; if one
camera fails, the backup unit could allow the ASV to proceed with its mission. Additionally, it is anticipated that a long-
term goal of the vision system will be to support stereoscopic imaging and stereopsis. Unfortunately, the processing power
required to perform stereopsis exceeds the capabilities of the CMUCam3. Therefore, a new algorithm is required. The
formulation of the dual camera algorithm begins by analyzing the ideal situation (Fig. 5a) of the vehicle approaching a set of
buoys. In this case, the vehicle approaches from a position equidistant to both buoys, with an initial heading perpendicular to
the gate.

Figure 5a & 5b. Ideal navigation conditions.

From Fig. 5a it is clear that the camera angles are of opposite sign, but equal in magnitude. Many scenarios similar
to Fig. 5b are analyzed, and a general algorithm is developed. The result is a novel navigation algorithm (Fig. 6) called the
chameleon-eyed navigation system (CENS), inspired by the independent eye motion of the reptile. The CENS algorithm
provides a desired heading and estimated distance to targets of interest. As shown in Fig. 7, the camera angles are combined
and converted to the vehicle’s heading coordinates to derive a desired heading, which is used as the set point for the vehicle’s
PID controller. For example, when both the returned angles are near zero, the target (in this case, the midpoint between the
buoys) is far away and straight ahead. Alternatively, when both angles are equal and large, the target is nearby and straight
ahead.

Figure 6. CENS conceptual diagram.


Figure 7. CENS algorithm flow chart.
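A minimal sketch of how the two pan angles might be fused into a desired heading offset and a range estimate, consistent with the behavior described above (angles near zero imply a distant target dead ahead; equal, large angles imply a nearby target dead ahead). The specific formulas, the inward-positive sign convention, and the camera baseline parameter are assumptions for illustration, not the published CENS algorithm.

```python
import math

def cens_estimate(theta_l, theta_r, baseline):
    """Hypothetical sketch of CENS-style fusion of two camera pan angles.

    theta_l, theta_r: pan angles in degrees, assumed positive when each
    camera turns inward toward the centerline; baseline: camera
    separation [m]. Returns (heading_offset_deg, range_m).
    """
    # Heading correction is zero when the angles balance (target dead ahead);
    # an imbalance steers the vehicle back toward the gate midpoint.
    heading_offset = (theta_r - theta_l) / 2.0
    # Simple triangulation for range to the point where the lines of
    # sight converge; as both angles -> 0 the estimated range -> infinity.
    tl, tr = math.radians(theta_l), math.radians(theta_r)
    denom = math.tan(tl) + math.tan(tr)
    rng = baseline / denom if denom > 1e-9 else float("inf")
    return heading_offset, rng
```

With equal 45° angles the estimate places the target half a baseline ahead, while equal near-zero angles report a far target straight ahead, matching the qualitative cases given in the text.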

V. POOL TESTING

A series of closed-water experiments was conducted to test the CENS approach. An outdoor Olympic swimming pool was
selected as the testing venue since it provided sufficient space, no tidal currents or waves, and minimal wind [10]. Four
sets of red and white buoys were used to mark the course boundaries. The three course configurations are shown in Figs. 8a,
8b, and 8c. Note that the distance between the red and white buoys of the same pair is referred to as the gate width
(GW), while the distance between successive pairs is referred to as the buoy spacing (BS). The first course was a
simple straight line. The second and third courses contained a slight "S" pattern with varying gate widths and buoy spacings.

TABLE II. COURSE RESULTS


Course Number  Course Description  GW [m]  BS [m]  Perfect Runs  2-3 Buoy Pairs  0-1 Buoy Pairs
1              Straight Line       2.5     5       30%           60%             10%
2              S Curve             2-3     5       25%           50%             25%
3              Narrow S Curve      1-4     4-5     25%           50%             25%

Figure 8a, 8b & 8c. Three course configurations.

Some course statistics are presented in Table 2. Each course was attempted approximately ten times. The lighting
conditions varied for each run due to partial cloud cover at the testing venue. The quality of light significantly affects the
color tracking capabilities of the CMUCam3, despite the addition of an infrared cutoff filter and a polarizing lens.
Additionally, many of the lowest percentage runs occurred during periods of changing conditions (a moment of shadow
followed by sun or vice versa). Two GPS course plots are presented in Fig. 9 and Fig. 10. Since the GPS is only accurate to
within approximately two meters (depending on the number of satellites available), these figures serve only as illustrations of
the observed results.
Examples of raw camera data from a perfect run through course #3 (Fig. 8c) are presented in Fig. 11 and Fig. 12. These
plots confirm that each camera records statistically similar data. A polynomial curve fit illustrates the general trend, namely
three distinct peaks. Each peak represents a maximum magnitude of θL & θR and correlates to passing through a pair of buoys.
It was anticipated, however, that four peaks, one for each pair of buoys, would be visible. The three-peak trend is also
consistent across different data sets, and is possibly the result of a false target acquisition that occurred during the tests. As seen in
Fig. 9 and Fig. 10, in both cases the ASV continues to track straight beyond the final buoy pair. This is likely the result of the
cameras acquiring a new target at the end of the pool, such as a red flag or the white diving platform (both present at the venue).
While color tracking such a far-field target, the cameras would not turn as expected through the final set of buoys, which
could account for the missing fourth maximum.

Figure 9 & 10. GPS track of course #1 and #3 superimposed over a satellite image. Note that the port buoys used in the test are white, not green.

Figure 11. Left camera time series (course #3).

Figure 12. Right camera time series (course #3).

Another notable trend in these time series is the difference in the mean values for both the left and right cameras. The mean
value for the left camera is approximately θL = 15° while the mean of the right camera is approximately θR = 18°. The
discrepancy is accounted for by the non-symmetric nature of course #3. As seen in Fig. 8c, both the second and third buoy
pairs are shifted to the right (as compared to course #1), which could account for a slight rightward bias in the data. Finally, these
time series depict the "noisy" nature of color tracking, and the importance of filtering out rogue instructions before they are
passed down to the PID controller.
To better understand the performance of the ASV while executing CENS, the cross-correlation between the instructions
coming from the CMUCam3 (desired heading) and the dynamic response of the vehicle (actual heading) is examined. Fig. 13
shows a time series plot of the actual and desired heading of the ASV while navigating a perfect run through course #3. A
consistent time shift between the local maxima can be observed. In Fig. 14, a normalized cross-correlation estimate is
presented using the built-in MATLAB function xcorr. The two series are highly correlated (86%) at a lag time of 1.96
seconds. This suggests that the vehicle consistently takes approximately 2 seconds to achieve the desired heading once the
instructions have been given. To confirm this value, a lag-shifted time series is presented in Fig. 15. This plot illustrates that
shifting the desired heading forward in time by the lag associated with the maximum cross-correlation brings the two series
into alignment.

Figure 13. Heading time series (course #3).

Figure 14. Cross-correlation between vehicle heading and desired heading (course #3).

Figure 15. Lag shifted heading time series (course #3).
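The cross-correlation analysis above used MATLAB's xcorr; an equivalent normalized lag estimate can be sketched in NumPy. The function below is a reimplementation for illustration, assuming mean removal and 'coeff'-style normalization; a positive lag means the actual heading trails the desired heading.

```python
import numpy as np

def peak_lag(desired, actual, dt):
    """Estimate the response lag between desired and actual heading series
    via normalized cross-correlation (analogous to MATLAB's xcorr with the
    'coeff' option). dt: sample interval [s]; series must be equal length.
    Returns (lag_seconds, peak_correlation).
    """
    d = np.asarray(desired, dtype=float) - np.mean(desired)
    a = np.asarray(actual, dtype=float) - np.mean(actual)
    corr = np.correlate(a, d, mode="full")
    # Normalize so a perfect match at some lag gives a coefficient of 1.
    corr = corr / np.sqrt(np.sum(d * d) * np.sum(a * a))
    lags = np.arange(-len(d) + 1, len(d))  # sample lags for the 'full' output
    k = int(np.argmax(corr))
    return lags[k] * dt, float(corr[k])
```

Applied to the course #3 data, this is the computation that yielded the 86% correlation at a 1.96 s lag reported above.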


VI. THE AUVSI COMPETITION

The 2009 AUVSI ASV competition was held in Virginia Beach from June 18-21. The format of the event consisted of a
series of 30-minute qualification rounds, followed by a final 30-minute run on the last day for the top teams. The obstacle
course was broken down into the following missions: a static thrust test, a high speed run between two gates, navigation
through eight sets of orange and green buoys, avoidance of two yellow buoys, docking with a small floating structure,
shooting two “hostile” targets, acquiring a floating life ring, and returning home. All tasks were to be completed in a single
run while operating autonomously.
Two additions were made to the FAU ASV in preparation for these challenges, as seen in Fig. 16. First, an electric water
cannon with a range of five meters was installed on the stern. Wired directly to the backplane of the control system, firing
the cannon autonomously required little modification to the overall mission code. The second device was a "buoy grabber":
a drop-down hook made of plastic tubing that was deployed using a standard servo and release-pin mechanism.

Figure 16. The ASV outfitted for the AUVSI competition.

Early in the qualification rounds it became clear that the lighting conditions were significantly different from those in a
controlled environment such as a pool. Along with varying cloud cover and occasional rain, the weather conditions in
Virginia Beach brought 15-30 knot wind gusts, which caused the floating targets (gates, buoys, docks) to drift back and
forth significantly. The immediate challenge of the course was autonomous navigation of the red and green buoys. As shown
in Fig. 17, the area surrounding the course was abundant with green vegetation. During a preliminary qualification run, it
became clear that false target acquisitions would be a source of navigation error. In response, the navigation code was
modified to be biased toward orange buoys. Specifically, unless the left camera could track a green buoy with a high
confidence level, the navigation would rely on a heading estimate based solely on the right camera (orange buoy). The
algorithm adjustment was not perfect, but the ASV was still able to navigate 50% of the buoy pairs in the qualification
rounds (Fig. 18).
Another interesting observation was made during this phase of the competition. For safety and recovery purposes, a
manned kayak was used to follow each ASV through the obstacle course. During one run, our ASV mistakenly “locked on”
to the orange lifejacket of the kayak skipper, and proceeded to aggressively track it. Not wanting to interfere with the run, the
skipper executed some evasive paddling maneuvers, to no avail. Like a stubborn torpedo, our ASV followed the kayak in
circles and eventually struck it. The run was abandoned, a lesson was learned, and the competition audience had a good laugh.
At the end of the qualification rounds, we sat comfortably in 2nd place behind the University of Central Florida (UCF). UCF
won the 2008 edition of the event (we finished 2nd), and we were eager to dethrone the champions. With an aggressive
mission plan that we hoped would result in the first water cannon marksmanship of the event, our ASV took to the water for
its final run. Unfortunately, we saved our worst for last. A bug in the code related to clearing the GPS string buffers
caused the algorithm to shut down halfway through the mission. Without enough time to diagnose the problem, our
low final score dropped us to 5th place. Although the end result was disappointing, we left the competition having learned
many important lessons toward building a better ASV for 2010!
Figure 17. Approaching a gate during the competition.

Figure 18. The modified CENS algorithm was biased toward orange buoys.

VII. CONCLUSIONS

Autonomous visual-based navigation of an ASV is suitable for steering an accurate course through an uncharted body of
water littered with obstacles. Although the CMUCam3 camera provides sufficient resolution for a closed water environment
with good lighting such as a pool, a more powerful unit that is capable of auto-focusing and auto-filtering is likely required
for equivalent performance in a broader range of conditions. It is also pertinent to note how a series of
closed-water experiments can provide useful information regarding the dynamic response of a newly developed ASV. Due to
the constraints of time, money, and personnel, it is often unfeasible to develop dynamic models of a proposed system prior to
fabrication. Fortunately, the 2009 FAU ASV will serve as a building platform for the 2010 competition, allowing more
thorough research to be pursued.

ACKNOWLEDGMENT

The authors would like to acknowledge the sponsors of this effort: Lockheed Martin Undersea Systems, FAU Club Sports,
FAU SNAME Student Club, Advanced Circuits, and the FAU Department of Ocean Engineering. Special thanks to Tom
Pantelakis, John Kielbasa, Ed Henderson, and Beatriz Gomez of the FAU SeaTech electronics shop and to Luis Padilla of the
FAU SeaTech machine shop for their guidance and expertise.

REFERENCES

[1] (2009, Aug.). AUVSI & ONR's International Autonomous Surface Vehicle Competition [Online]. Available:
http://www.auvsi.org/competitions/surface.cfm
[2] J. Wang, W. Gu, J. Zhu, "Design of an Autonomous Surface Vehicle Used for Marine Environment Monitoring," International Conference on Advanced
Computer Control, icacc, pp. 405-409, 2009.
[3] J. Curcio, J. Leonard, A. Patrikalakis, “SCOUT – a low cost autonomous surface platform for research in cooperative autonomy,” Proceedings of the
MTS/IEEE OCEANS Conference, vol. 1, pp. 725-729, 2005.
[4] P. F. Rynne, K. D. von Ellenrieder, “Unmanned Autonomous Sailing: Current Status and Future Role in Sustained Ocean Observations,” MTS Journal,
vol. 43, NO. 1. 2009.
[5] P. F. Rynne, K. D. von Ellenrieder, “Development and preliminary experimental validation of a wind and solar-powered autonomous surface vehicle,”
under review IEEE J. Oceanic Eng. (JOE).
[6] W. D. Molyneux, J. Xu, and N. Bose, "Flow vectors around an escort tug model at a large yaw angle," Proceedings of the Canadian Marine
Hydrodynamics and Structures Conference, St. John's, Newfoundland, October 2007.
[7] R. G. Allan, J-E. Bartells, and W. D. Molyneux, "The development of a new generation of high performance escort tugs," Proceedings of the International
Towage and Salvage Conference, Jersey, May 2000.
[8] (2009, Aug.). CMUCam2 Manual [Online]. Available: http://cmucam.org/attachment/wiki/Documentation/CMUcam2_manual.pdf
[9] L. M. Hocking, Optimal Control – An Introduction to the Theory with Applications, Oxford: Clarendon, 1991, pp. 62-74.
[10] (2009, Aug.). GUSS – The Autonomous Boat [Online]. Available: http://www.youtube.com/watch?v=Ug55ZbMLCus
