
International Journal of Conceptions on Computing and Information Technology

Vol. 2, Issue. 4, June 2014; ISSN: 2345 - 9808

Android-based Mobile Framework for Navigating an Ultrasound and Vision Guided Autonomous Robotic Vehicle
K S Dasun and R G N Meegama
Dept. of Statistics and Computer Science,
Faculty of Applied Sciences, University of Sri Jayewardenepura,
Gangodawila, Nugegoda, Sri Lanka
sachindradasun@gmail.com and rgn@sci.sjp.ac.lk
Abstract— An autonomous vehicle must be capable of navigating to a pre-specified destination by making the decisions required to move the vehicle while avoiding obstacles in its path, without depending on human intervention. Presently, most autonomous vehicles depend on a large number of sensors and powerful processing devices. The high cost and size of these systems prevent their use in small to medium-scale autonomous navigation tasks such as robotics, environment monitoring and workplace automation. This paper presents the development of a prototype autonomous vehicle using a mobile communication device that runs on the Android platform. The proposed vehicle uses ultrasound sensors and computer vision to detect and avoid obstacles along its path. Results obtained by operating the vehicle in a simulated environment indicate the performance of the vehicle in avoiding obstacles.
Keywords- Android, autonomous vehicle, robots

I. INTRODUCTION

Although a robotic vehicle with fully autonomous capabilities has still not been developed, the topic has gained ground among researchers in the last few decades. In recent times, however, continuous developments in technology and the increasing processing power of computers have proven the feasibility of developing an automated vehicle [10, 11].
Though autonomous vehicles are not currently in widespread use, experimental-scale production may offer advantages in various fields such as traveling, environment exploration, space travel, agriculture, etc. However, several challenges must be addressed before an autonomous vehicle can be used as a means of transportation, especially in situations where the vehicle must travel in a previously unknown or dynamic environment [13].
Autonomous vehicles use various devices such as GPS
units, range finders, accelerometers, cameras, etc. to sense the
surrounding environment. The accuracy of data collected from
these sensors plays an important role in the decision making
process of an autonomous vehicle [16]. A computer attached to
the sensor system can process feedback information obtained
through the sensors and make required decisions for safe
navigation. A program running in the on-board computer

produces a sequence of instructions that can be used to reach


the specified destination safely by avoiding obstacles in the
path [14].
Finding a suitable path on a map and obstacle avoidance are
the two major functions to be considered when developing an
autonomous vehicle. Moreover, some emergency situations
may arise where a human must intervene to navigate the
vehicle along the correct path [9]. In such a situation, it is very
useful if the vehicle can provide data collected about its
environment to the remote operator and perform commands
received in real time. The aim of this project is to develop a
mobile autonomous vehicle with the aforementioned
capabilities.
II. RECENT WORK
Modi [1] has compared three different obstacle detection and avoidance techniques: fixed-mount ultrasonic sensors, a rotating ultrasonic sensor and a laser scanner. The comparison reveals that a stationary ultrasonic sensor with a rule-based approach to obstacle avoidance performs poorly in filtering noise and reducing errors.
Ohya et al. [2] have constructed two ultrasonic ranging systems that differ in their wave patterns and have examined the performance of obstacle detection by measuring the correlation of the maximum measurable distance to the width of the reflecting object. The results have led them to the conclusion that the directivity of the sensor depends not only on the directivity of the transducer but also on its sensitivity.
An autonomous ground vehicle that can be used in farming is presented in [3]; it combines global and local obstacle avoidance techniques to deal with known and unknown obstacles, respectively. The global obstacle
avoidance subsystem is used to pre-plan the paths around all
known obstacles while the local obstacle avoidance subsystem
is used to safely avoid previously unknown obstacles that
reside in the environment.
Borenstein et al. [4] have proposed an obstacle avoidance system for mobile robots traveling through the narrow aisles of a warehouse. In this setup, ultrasonic sensors are placed at optimal locations around the robotic vehicle to allow the navigation algorithm to obtain the required distance measurements with minimum error.

Figure 1. Topology of the proposed system.
Rahman [5] has compared the performance of three computer-vision-based obstacle detection techniques under different scenarios: planar homography with image warping, planar homography with image segmentation, and a combination of epipolar geometry, planar homography and edge detection.
III. METHODOLOGY
Most current obstacle detection methods employ only a single technique that focuses on improving the accuracy of detecting a single obstacle [7]. In contrast, the proposed methodology combines two obstacle detection techniques to improve the navigation process. More specifically, this project combines digital image processing algorithms with an ultrasonic distance measuring technique to identify and measure the distance to each obstacle along the path of the vehicle. The ultrasonic distance measurement technique is used due to its low cost and its accuracy in measuring distances to objects even in poor visibility conditions. On the other hand, the image processing technique is used because of its ability to identify the exact shape of an obstacle [8]. In addition to obstacle detection, the proposed solution makes use of various hardware devices and technologies, as listed in Table 1, for navigation purposes. The system topology of the communication links of the framework is depicted in Figure 1.

The proposed framework consists of six modules as shown in Figure 2. The module that reports the status of the vehicle captures a variety of information about the surrounding environment. The vehicle utilizes an array of sensors built into the Android mobile device, as well as additional hardware sensors connected to the Android device via the microcontroller, to achieve the required tasks.
The web control module allows an operator to control the vehicle remotely in two ways. The remote operator can either manually control the speed, direction and steering of the vehicle or provide step-by-step instructions to the vehicle using the step control feature.

Figure 2. Individual modules of the proposed framework.

TABLE 1. HARDWARE AND SOFTWARE TECHNOLOGIES USED IN THE DESIGN OF THE ROBOTIC VEHICLE.

Component         Technology
Sensors           GPS, accelerometer, compass, proximity sensor, rotary encoders
Control system    Android, microcontrollers, DC motors, servo motors
Communication     Bluetooth, USB, Wi-Fi
Web controlling   Node.js, MongoDB, Google Maps API

Two kinds of step instructions can be provided to the vehicle: a MOVE instruction to move the vehicle a specific distance, and a TURN instruction to turn the vehicle in a specific direction. The path of the vehicle can be controlled using these two instructions. As the vehicle follows the steps provided by the operator, the step control panel shows the progress of each step in real time until the vehicle reaches the final destination. The operator has the ability to edit the step list and reorder the steps using drag-and-drop operations.
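The MOVE/TURN step list described above can be sketched as a simple queue of instructions whose effect is tracked by dead reckoning. The paper does not publish its instruction format, so the `Step` structure, units (cm and degrees) and field names below are illustrative assumptions:

```python
import math
from dataclasses import dataclass

@dataclass
class Step:
    kind: str      # "MOVE" or "TURN" (assumed encoding)
    value: float   # distance in cm for MOVE, angle in degrees for TURN

def execute(steps, report=print):
    """Simulate following a step list by dead reckoning (x, y, heading)."""
    x = y = 0.0
    heading = 0.0  # degrees; 0 = facing +x
    for i, s in enumerate(steps, start=1):
        if s.kind == "TURN":
            heading = (heading + s.value) % 360
        elif s.kind == "MOVE":
            x += s.value * math.cos(math.radians(heading))
            y += s.value * math.sin(math.radians(heading))
        report(f"step {i}/{len(steps)} done")  # mirrors the real-time progress panel
    return x, y, heading

# Reordering steps, as the drag-and-drop panel allows, is a list operation:
steps = [Step("MOVE", 100), Step("TURN", 90), Step("MOVE", 50)]
steps.insert(1, steps.pop(2))  # move the second MOVE before the TURN
```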


A. Map navigation
Map navigation allows the operator to mark a specific location on a map in order to navigate the vehicle automatically to the specified destination. In addition, the location of the vehicle is displayed to the operator on the same map. There are two steps involved in the process of map navigation. First, the web application sends the destination information to the mobile application, and the mobile application in turn calculates the path that should be taken to reach the destination. The output of this path planning process is a list of MOVE and TURN instructions to be executed along the shortest and most appropriate path from the current location to the destination. Then, the control program executes each step in the list sequentially until the vehicle reaches the final destination. While the vehicle moves along the path, the current location and the distance travelled are displayed in the web control panel in real time.
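The conversion of a planned path into a MOVE/TURN instruction list can be sketched as follows. This is a simplified illustration, not the authors' planner: it assumes planar (x, y) coordinates in metres rather than GPS latitude/longitude, and an initial heading of 0 degrees facing +x:

```python
import math

def plan_steps(waypoints, heading=0.0):
    """Convert a list of (x, y) waypoints into ("TURN", deg) / ("MOVE", m)
    instructions of the kind the framework executes. Assumes planar
    coordinates; a GPS-based version would use great-circle distance
    and bearing instead."""
    steps = []
    x, y = waypoints[0]
    for nx, ny in waypoints[1:]:
        bearing = math.degrees(math.atan2(ny - y, nx - x))
        turn = (bearing - heading + 180) % 360 - 180  # shortest turn, in (-180, 180]
        if abs(turn) > 1e-9:
            steps.append(("TURN", round(turn, 2)))
        steps.append(("MOVE", round(math.hypot(nx - x, ny - y), 2)))
        heading, x, y = bearing, nx, ny
    return steps
```

For example, the path (0, 0) → (10, 0) → (10, 5) yields a 10 m MOVE, a 90 degree TURN, then a 5 m MOVE.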
B. Avoiding obstacles
The obstacle avoidance system consists of an obstacle
detection module and path planning module. The obstacle
detection module utilizes ultrasonic sensors and image
processing techniques for the detection of objects in the
surrounding environment of the vehicle [12, 15].
The vehicle is equipped with two ultrasonic sensors, one placed at the front of the vehicle and one at the rear. The front ultrasonic sensor is attached to a servo motor that rotates from 0 to 180 degrees. It measures the distance to an obstacle, and the captured distances are used to generate an obstacle map for the obstacle avoidance process. The static rear ultrasonic sensor is used to detect obstacles when the vehicle moves in the reverse direction.
In addition to the ultrasonic sensors, the proposed solution utilizes image processing techniques to identify the shapes of objects located in close proximity to the vehicle. For this task, the control program captures still images from the mobile device's camera at a fixed frame rate and processes them to generate an obstacle map.
Once the two obstacle maps, generated from the ultrasound sensors and the image processing routines, are available, a complete obstacle map that clearly shows the positions of the obstacles near the vehicle is formed. This final obstacle map is used as the input to the obstacle avoidance process to determine the correct path to be travelled to reach the destination while avoiding obstacles.
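The paper does not define the fusion rule used to merge the two maps, so the sketch below assumes a conservative one: represent each map as {angle in degrees: distance in cm} and keep, for every scanned direction, the nearer of the two readings. The 450 cm default is the HC-SR04's stated maximum range, used where a source has no reading:

```python
def combine_maps(ultra, vision, default=450):
    """Merge two polar obstacle maps {angle_deg: distance_cm} into one.

    Assumed fusion rule (not from the paper): per direction, trust the
    nearer reading, since underestimating clearance is the safe error.
    `default` (450 cm, the HC-SR04 maximum range) stands in for
    directions one source did not cover."""
    angles = set(ultra) | set(vision)
    return {a: min(ultra.get(a, default), vision.get(a, default)) for a in angles}
```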
Three different communication links exist between the
peripherals used in the proposed technique. The link between
the web application and the vehicle control program that runs
on the mobile device is established using Wi-Fi whereas the
mobile device and the microcontroller are connected using
Bluetooth.

The position of the shaft of the servo motor is determined by a pulse width modulated (PWM) signal provided to the control wire of the motor. The servo expects a pulse every 20 ms, in which the pulse width, i.e., the duration of the high time within a single period of the signal, varies between 500 μs and 1700 μs. The pulse width of the PWM signal is controlled using the setPulseWidth(int pulseWidthUs) method.
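The value passed to setPulseWidth can be derived from the desired shaft angle. Assuming a linear mapping of the 0-180 degree sweep onto the stated 500-1700 μs range (the paper gives the range but not the mapping), the conversion is:

```python
def angle_to_pulse_us(angle_deg, min_us=500, max_us=1700):
    """Map a servo angle in [0, 180] degrees onto the 500-1700 us pulse
    width range the paper states. The linear mapping is an assumption."""
    if not 0 <= angle_deg <= 180:
        raise ValueError("angle out of range")
    return round(min_us + (max_us - min_us) * angle_deg / 180)
```

So the sweep midpoint (90 degrees) corresponds to a 1100 μs pulse.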
The distances to the objects in front of the vehicle are obtained using the HC-SR04 ultrasonic sensor, which has an effective range of 0 to 450 cm. The timing diagram of the HC-SR04 is shown in Figure 3.

Figure 3. Timing diagram of the ultrasonic sensor.

Measuring the distance is initiated by providing a high (5 V) pulse for at least 10 μs to the trigger pin of the ultrasonic sensor. The module then automatically sends eight 40 kHz ultrasonic bursts and waits for a reflection. When the reflection is received, the module sets its echo pin high (5 V), and the distance can then be calculated by measuring the pulse width of the echo pin. From the timing diagram, it can be seen that the 40 kHz burst is transmitted just after the 10 μs trigger pulse, while the output on the echo pin is obtained after a further time delay.
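The echo pulse width converts to distance by noting that the sound travels to the obstacle and back, so the one-way distance is half of speed times time. With the speed of sound at roughly 343 m/s this reduces to the familiar "divide microseconds by 58" rule of thumb:

```python
def echo_to_distance_cm(echo_high_us, speed_of_sound=343.0):
    """Convert the HC-SR04 echo pulse width (microseconds) to distance (cm).

    The echo pin stays high for the round-trip flight time, so the
    one-way distance is speed * time / 2. With v = 343 m/s this is
    approximately echo_high_us / 58 cm."""
    seconds = echo_high_us * 1e-6
    return speed_of_sound * 100 * seconds / 2  # m/s -> cm/s, halve for one way
```

For example, a 580 μs echo pulse corresponds to roughly 10 cm.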
Distance readings are obtained at a fixed interval while the ultrasonic sensor sweeps between 0 and 180 degrees. The distances are stored together with the servo shaft angle to create an ultrasonic obstacle map, which contains accurate distance measurements to the objects in front of the vehicle along with their directions. Figure 4 shows the ultrasonic obstacle map as graphically seen by the vehicle operator in the web control panel.
The image processing obstacle map shown in Figure 5 is produced by processing images acquired from the mobile device's camera at regular intervals. The solution uses a contour-based obstacle detection technique implemented with the OpenCV library, as in the following steps.

i. Create a grayscale image from the color image
ii. Apply a fixed threshold value to each pixel in the image
iii. Create memory storage
iv. Find contours in the binary image

C. Ultrasonic Obstacle Detection
The ultrasonic obstacle detection task is implemented using an HC-SR04 ultrasonic sensor attached to a servo motor.
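Contour-based detection steps i-iv can be sketched as follows. This is a dependency-free illustration, not the authors' OpenCV code: the grayscale weights are the common BT.601 luminance formula, and a simple connected-component bounding-box pass stands in for OpenCV's contour finder:

```python
def grayscale(rgb):
    """Step i: luminance grayscale from an RGB image given as a list of
    rows of (r, g, b) tuples. Weights follow the ITU-R BT.601 formula."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for r, g, b in row] for row in rgb]

def threshold(gray, level=128):
    """Step ii: fixed-level binary threshold (1 = candidate obstacle)."""
    return [[1 if px >= level else 0 for px in row] for row in gray]

def bounding_boxes(binary):
    """Steps iii-iv, much simplified: return the bounding box of each
    4-connected foreground blob as (x_min, y_min, x_max, y_max). A crude
    stand-in for OpenCV contour extraction, for illustration only."""
    seen = set()
    boxes = []
    for y0, row in enumerate(binary):
        for x0, v in enumerate(row):
            if v and (y0, x0) not in seen:
                stack, ys, xs = [(y0, x0)], [], []
                seen.add((y0, x0))
                while stack:  # flood fill one blob
                    y, x = stack.pop()
                    ys.append(y)
                    xs.append(x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < len(binary) and 0 <= nx < len(row)
                                and binary[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```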

D. Microcontroller
The main hardware component of the control system is the
microcontroller that controls the motor driver, servo motors
and ultrasonic sensors. This microcontroller communicates
with the mobile application and controls the attached hardware
components according to the instructions received from the
mobile application.

Figure 4. Ultrasound obstacle map for a sensor sweep between 0 and 180 degrees.

The 16-bit microcontroller used in the proposed solution is a PIC24FJ256 with 96 KB of RAM and a built-in USB On-The-Go module that provides on-chip functionality as a target device [6]. The motor controller circuit interprets the signals transmitted from the microcontroller and controls the direction and speed of each motor. The main quadruple high-current half-H drivers used in the motor controller are two L293D ICs designed to provide bidirectional currents of up to 600 mA at voltages ranging from 4.5 V to 36 V. Figure 7 shows the pin configuration of the L293D IC connecting two DC motors.

Figure 5. Image processing obstacle map.

Figure 7. Pin configuration of L293D IC.

Figure 6. Combined obstacle maps of ultrasound sensors and image processing routines.

After the ultrasonic and image processing obstacle maps are generated, they are combined to form the final obstacle map, which is presented on the web control panel as in Figure 6.
The obstacle avoidance process is carried out using the combined obstacle map created in the previous step. Once the obstacle detection system finds an object in front of the vehicle, the obstacle avoidance system selects the most suitable direction in which to move the vehicle to avoid the object. If the distance to the obstacle is not sufficient to make a turn, the vehicle reverses and tries again to find a better path.
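The avoidance behaviour described above can be sketched as a rule on the combined polar map: steer toward the most open direction, or reverse when no direction leaves enough room to turn. The 50 cm clearance threshold is an assumed tuning value, not a figure from the paper:

```python
def choose_heading(obstacle_map, clearance_cm=50):
    """Pick a steering action from a combined polar obstacle map
    {angle_deg: distance_cm}. Sketch of the described behaviour:
    turn toward the direction with the most free space, or reverse
    and rescan when even the best direction is too tight to turn.
    clearance_cm is an assumed minimum turning clearance."""
    best_angle = max(obstacle_map, key=obstacle_map.get)
    if obstacle_map[best_angle] < clearance_cm:
        return ("REVERSE", None)   # too tight to turn: back up, then retry
    return ("TURN_TO", best_angle)
```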


Figure 8. Prototype vehicle as seen from underneath. (1) L293D IC, (2)
motor, (3) power input, (4) motor outputs, (5) control input signals (6)
power LED and (7) microcontroller power output.


Figure 9. Prototype vehicle as seen from above. (1) PIC24FJ256 microcontroller, (2) front ultrasonic sensor, (3) servo motor used to rotate the front ultrasonic sensor, (4) rear ultrasonic sensor, (5) DC motors, (6) batteries, (7) wheels, (8) vehicle chassis, (9) Bluetooth module and (10) motor controller circuit.

Figure 10. Maximum detection distance of objects based on object width.

Figures 8 and 9 show the actual implementation of the prototype vehicle as seen from underneath and from above, respectively.

The specifications of the mobile phone used to test the developed solution are given below. Because all the processing tasks are performed by the mobile application, a reasonably high-powered mobile device is required.
Chipset:   Qualcomm APQ8064 Snapdragon
CPU:       Quad-core 1.5 GHz Krait
GPU:       Adreno 320
RAM:       2 GB
Sensors:   GPS, accelerometer, gyro, proximity, compass, barometer
Camera:    8 MP, 3264 x 2448 pixels
Display:   768 x 1280 pixels, 4.7 inches
Network:   Bluetooth v4.0, Wi-Fi 802.11
OS:        Android OS, v4.4.2 (KitKat)
Services:  Google Play Services 3.2, Google Maps API
IV. RESULTS AND DISCUSSION

A. Maximum Detection Distance and Object Width
Under this test, the maximum detection distance of an object of a particular width is measured. As seen in Figure 10, wider objects are detected at larger distances from the vehicle.

B. Ultrasonic Beam Width
The effect of the pulse width of the ultrasound sensor in detecting objects located at different distances is tested as given in Figure 11, where it is evident that the pulse width has a profound effect on the distance measured. The HC-SR04 ultrasonic sensor used in the solution is classified as a low-cost, entry-level ultrasonic sensor. Advanced ultrasonic sensors with faster response times and longer range emit a narrow beam that can be directed accurately at miniature obstacles. The proposed solution can be improved using such an ultrasonic sensor.

Figure 11. Pulse width of the ultrasound sensor and the distance to the detected object.

C. Power consumption
Consumption of power is critical in a robotic vehicle controlled by a mobile device.

Figure 12. Power consumption of the robotic vehicle.

As seen in Figure 12, the power usage of the autonomous vehicle increases with speed. A 7.4 V, 4200 mAh Li-ion battery is able to power the vehicle for 2-3 hours depending on the speed. More experiments are required to verify the actual power requirement of the vehicle for a specific application.
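The reported runtime is consistent with a first-order capacity-over-draw estimate. The average current figures below are assumed for illustration; the paper only reports the 2-3 hour runtime for the 4200 mAh pack:

```python
def runtime_hours(capacity_mah, avg_current_ma):
    """First-order battery life estimate: capacity divided by average
    draw, ignoring discharge efficiency and voltage sag. The current
    values used below are illustrative assumptions."""
    return capacity_mah / avg_current_ma

# A 4200 mAh pack drawing an average of 1.4-2.1 A yields 2-3 hours,
# matching the figure observed for the prototype.
```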


This capacity can be increased by using lightweight LiPo (lithium polymer) batteries.

D. Network utilization
A real-time communication link is established between the vehicle and the web control panel to transmit status information of the vehicle. Table 2 gives the network utilization of the prototype, where the relationship between the data rate and the type of audio/video transmission is clearly seen.
TABLE 2. NETWORK UTILIZATION OF THE PROPOSED VEHICLE.

Scenario                                               Bandwidth
No audio/video                                         3 kbps
Audio/video stream, video refresh rate at 1 fps        150 kbps
Audio/video stream, video refresh rate at 5 fps        1 Mbps
Audio/video stream, video refresh rate at 10 fps       1.5 Mbps
Audio/video stream, video refresh rate at 15 fps       2 Mbps

V. CONCLUSION
This paper presents a technique based on mobile
technology to control and navigate a robotic vehicle while
avoiding obstacles detected along the path of travel. The
vehicle supports transmitting audio and video to a remote
operator who can view the path on a web browser. The
prototype design, whose navigation is controlled by an
Android-based smart phone, combines both ultrasound
technology and image processing routines to avoid obstacles.
Although the present design utilizes basic sensors, ultrasound emitters and a low-cost smartphone, the performance can be enhanced further if advanced electronics are integrated into the vehicle.


REFERENCES
[1] S. Modi, "Comparison of three obstacle avoidance methods for an autonomous guided vehicle," PhD diss., University of Cincinnati, 1998.
[2] A. Ohya, T. Ohno, and S. Yuta, "Obstacle detectability of ultrasonic ranging system and sonar map understanding," Robotics and Autonomous Systems 18, no. 1 (1996): 251-257.
[3] K. W. Gray, "Obstacle detection and avoidance for an autonomous farm tractor," PhD diss., Utah State University, 2000.
[4] J. Borenstein, D. Wehe, L. Feng, and Y. Koren, "Mobile robot navigation in narrow aisles with ultrasonic sensors," in Proceedings of the ANS 6th Topical Meeting on Robotics and Remote Systems, 1995.
[5] S. Rahman, "Obstacle detection for mobile robots using computer vision," Final Year Project, Department of Computer Science, University of York, 2005.
[6] Microchip Technology Inc., "PIC24FJ256GA110 Family Data Sheet," 2009. [Online]. Available: http://ww1.microchip.com/downloads/en/DeviceDoc/39905d.pdf. [Accessed 5 February 2014].
[7] N. D. Kelkar, "A fuzzy controller for three dimensional line following of an unmanned autonomous mobile robot," PhD diss., Government College of Engineering, 1997.
[8] S. Han, Q. Zhang, B. Ni, and J. F. Reid, "A guidance directrix approach to vision-based vehicle guidance systems," Computers and Electronics in Agriculture 43, no. 3 (2004): 179-195.
[9] Y. Q. Chen and K. L. Moore, "A practical iterative learning path following control of an omni directional vehicle," Asian Journal of Control 4, no. 1 (2002): 90-98.
[10] J. Minguez, L. Montano, and J. Santos-Victor, "Abstracting vehicle shape and kinematic constraints from obstacle avoidance methods," Autonomous Robots 20, no. 1 (2006): 43-59.
[11] A. Broggi and S. Cattani, "An agent based evolutionary approach to path detection for off-road vehicle guidance," Pattern Recognition Letters 27, no. 11 (2006): 1164-1173.
[12] R. Thamma, L. M. Kesireddy, and H. Wang, "Centralized vision-based controller for multi unmanned guided vehicle detection," in IAJC-ASEE International Conference, 2011.
[13] G. A. Borges, A. M. N. Lima, and G. S. Deep, "Characterization of a neural network-based trajectory recognition optical sensor for an automated guided vehicle," in Instrumentation and Measurement Technology Conference, IMTC/98, Conference Proceedings, vol. 2, pp. 1179-1184, IEEE, 1998.
[14] R. Lenain, B. Thuilot, C. Cariou, and P. Martinet, "High accuracy path tracking for vehicles in presence of sliding: Application to farm vehicle automatic guidance for agricultural tasks," Autonomous Robots 21, no. 1 (2006): 79-97.
[15] T. Bucher, C. Curio, J. Edelbrunner, C. Igel, D. Kastrup, I. Leefken, G. Lorenz, A. Steinhage, and W. von Seelen, "Image processing and behavior planning for intelligent vehicles," IEEE Transactions on Industrial Electronics 50, no. 1 (2003): 62-75.
[16] F. Cuesta, A. Ollero, B. C. Arrue, and R. Braunstingl, "Intelligent control of nonholonomic mobile robots with fuzzy perception," Fuzzy Sets and Systems 134, no. 1 (2003): 47-64.