
Collaborative Quadricopter-Mobile Robots Ground Scan Using AR-Tag Visual Pose Estimation


Alvaro R. Cantieri¹, Ronnier F. Rohrich², André S. Oliveira³, João A. Fabro⁴ and Marco A. Wehrmeister⁵

*This work was supported by UTFPR and CNPq.
¹ Alvaro Cantieri is with the Department of Telecommunications, Federal Institute of Parana, Curitiba, Parana, Brazil. alvaro.cantieri@ifpr.edu.br
² Ronnier F. Rohrich is with the Department of Electrical Engineering, Federal University of Technology - Parana, Curitiba, Parana, Brazil. rohrich@utfpr.edu.br
³ André S. Oliveira is with the Department of Electrical Engineering, Federal University of Technology - Parana, Curitiba, Parana, Brazil. andreoliveira@utfpr.edu.br
⁴ João A. Fabro is with the Department of Computer Engineering, Federal University of Technology - Parana, Curitiba, Parana, Brazil. fabro@dainf.ct.utfpr.edu.br
⁵ Marco A. Wehrmeister is with the Department of Computer Engineering, Federal University of Technology - Parana, Curitiba, Parana, Brazil. wehrmeister@utfpr.edu.br

Abstract— The use of collaborative robot systems to perform specific tasks is a strong research area in robotics. Robot platforms are becoming cheaper, increasing the number of applications and task variations explored in research labs. A collaborative system that joins small mobile robots that scan a specific area with a drone that establishes their positions on the ground is a good example of this kind of application. This work proposes the use of an autonomous quadricopter and "dummy" small ground robots in a collaborative task that uses augmented reality tags to estimate the robots' positions and orientations in a simulation scene and to provide information for their control. The robots are "dummy" because they do not have any embedded odometry. The position and orientation data are acquired by the drone and transmitted via ROS to a computational system that runs a PID controller for each individual robot. The simulation runs in a V-REP scene and uses C++ code to control the mobile robots' movements.

I. INTRODUCTION

The robotics research area is responsible for some of the most important technologies developed in the past years, with a large number of scientific and technical articles and new applications all over the world. Decreasing platform costs and the new high-performance hardware available on the global market have brought powerful developments to several application areas, such as military, medical, transport, surveillance, public control and education. Mobile robots are becoming common in human life, even performing home tasks.

The correct placement and movement control of mobile robots around the environment are still a challenge, due to the great variability of obstacles and the difficulty of obtaining a precise location during displacements. Embedded sensors, GPS and camera systems are the most common tools used to acquire position and displacement data on this kind of robot. Data fusion and pose estimation are not simple tasks, and large positioning errors are common in these cases. The major difficulty found in this kind of system is the lack of a fixed reference frame in the environment. In outdoor applications GPS offers a good solution when high-precision positioning is not necessary, which is not the case for a great number of tasks. Real Time Kinematic GPS (RTK GPS), a technique used to enhance the precision of GPS positioning, can reach centimeter-level accuracy, but it demands a fixed base station and a powerful communication link to work properly. Unfortunately, this kind of solution is not suitable for indoor applications.

In the last years the use of imaging systems has become easier and cheaper, with a large amount of high-resolution hardware available. The decrease of image processing technology costs allows new image-based applications on mobile robotic systems, including smart drones.

A visual tag is a tool that allows body position and angle estimation by using a special printed tag, a camera and image processing hardware. The tag is designed to present an easily recognizable pattern, decreasing the processing time and providing a good position estimation.

In this work a number of small augmented reality tags (AR-Tags) are attached to the top of the mobile robots to identify them, estimate their positions and allow the creation of a visual pose control system. A quadricopter with a ground-pointed camera computes the relative position and orientation between the robots and the world coordinate system.

The main objective of this work is to evaluate the viability of using a quadricopter to properly acquire the pose of each mobile ground robot, based on its AR-tag, and to perform displacement control without any other position information or additional hardware.

To perform the system evaluation, a scan task was proposed. All the mobile robots reach a specific position relative to the quadricopter and, after that, begin to follow the aircraft along the simulation space, searching for a special tag placed on the ground. When a robot stops over this tag, a message is published on ROS and the quadricopter stops the scan work.

The main interest in using this kind of architecture is the possibility of creating a cheap and robust cooperative drone-robot system for applications that cannot use traditional sensors and localization hardware. This kind of architecture can be used as an auxiliary control for ground tractors in high-precision agriculture, for example, as shown in [1]. Another possible application is the creation of cooperative drone-robot tasks in indoor environments, where the correct positioning of a group of small robots is difficult to achieve. Other possible application areas are morphology robot arrangements, cooperative transport tasks, automatic SLAM, etc.
II. RELATED WORKS AND TECHNOLOGIES

Visual mobile robot positioning and control tasks are common in some applications, like robot soccer or cooperative robot tasks.

The work of [2] shows a multi-robot cooperative system where each individual small car-like robot gets images from a front camera, allowing each one to locate itself among the others. This work does not use an external camera to identify the robots, but it is one of the first examples of cooperative systems of this kind.

A survey of visual perception for soccer applications can be found in [3]. The authors present some of the state of the art in the use of visual systems to control soccer robots, with their advantages and characteristics.

The Robot Soccer Small Size League (SSL) [4] maintains an open-source project that provides visual detection and other tools for robot soccer applications. The visual detection software uses a group of colored circles on the top of each robot to estimate the position and orientation of each one in the arena. This software allows beginner groups to easily start in the SSL Soccer competition.

The work [5] describes an application that controls a group of small robots to create morphologies cooperatively. A quadricopter is used to evaluate the ground robots' positions and to send information about them, helping them plan the moves necessary to create the desired morphology figure.

The use of tag markers to provide information about a robot's position and displacement is not new, but the high-cost hardware required made real applications difficult until recent years. A large number of similar AR tag architectures have been proposed, with some small differences among them. The ones most commonly found in robotics works are cited below.

A popular visual tracking tool is ARToolkit [6]. This tool provides an easy way to create the visual tags and to implement recognition software that computes the tags' positions.

Another commonly used visual tracking tool, called ARTag, was developed by [7]. The tool is very similar to the ARToolkit software, with some innovations.

The ALVAR AR tracking markers were created by the VTT Technical Research Centre of Finland [8]. This tool provides position and orientation estimation for a special tag image and allows the use of a group of tags simultaneously in the environment. A package developed for ROS is available, which eases robotics applications.

Several works using AR tags for robot positioning and control can be found in scientific and technical databases, covering applications in varied environments. The set of applications is large, and interesting ideas on how to use this schema in practice have been proposed, as follows.

A system described in [9] applies ARToolkit markers detected by a top-view video camera to provide position information and control a small robot. The camera is fixed at the top of an arena and sends the images to a processing computer that calculates the position of each robot and sends control commands to it. The results presented in this article show the viability of using a visual positioning system to control a set of small robots. Similar implementations can be found in the literature in great quantity, mostly in "robot soccer" applications, since this kind of competition demands the positioning of the player robots using a camera fixed at the top of the arena.

The work described in [10] presents a visual simultaneous localization and mapping (vSLAM) algorithm assisted by artificial landmarks to improve the positioning of the robot during navigation. AR tracking markers fixed at specific locations of the robot's running space provide additional position and orientation information, making the calculation of the robot's position easier and more precise. This data is used to support a SLAM task along the whole place.

In the article [11] the authors use a set of bar codes fixed on the environment walls to provide position and orientation information to a NAO humanoid robot. The navigation is performed by using the information of the group of bars detected at each instant, and shows good results in the practical application.

The work of [12] is based on the development of complementary image data processing software that provides position and orientation data retrieved from an enriched bar code tag. This tag is used to guide a small robot with an embedded camera along a route. The results show a good performance for the proposed schema.

The work [13] implements a visual tag navigation system based on the AprilTag algorithm to provide position information to an AR-Drone 2.0. The drone's camera captures the frames, and off-board software performs the computations and sends navigation commands back to the drone, providing good position feedback. This is an interesting work because using a drone's camera to acquire and process this position feedback imposes many demands, like noise treatment, drone stabilization and adequate image capture. The fact that this work succeeds in a real application offers a good clue that the use of a drone to provide position feedback to ground robots, as proposed in this paper, is possible and technically viable in real-world implementations.

The development shown in [14] implements a new landmark tag and uses it to perform mobile robot positioning. An upward-pointed camera captures the images and the robot's embedded hardware performs the processing. The landmarks are fixed on the ceiling with small spacing between each other in a specific order. The results show successful robot displacements through the environment using this technique.

The robot navigation system presented in [15] makes an interesting use of AR tracking marker tags. The robot navigates through the environment and, when a tag is found and recognized, its ID is used to search for and download specific map information of the current robot space. In this case the tag does not provide direct position or orientation information, but works as an auxiliary system providing information about the robot's surroundings.
III. SYSTEM DESCRIPTION AND TOOLS

This work proposes the use of a simulated autonomous quadricopter with an integrated camera to estimate the position and orientation of a small group of mobile robots on the ground. The system is simulated in the Virtual Robot Experimentation Platform (V-REP) software [16] to validate the proposal. V-REP is a powerful robotic simulation platform that offers a set of real robot models and sensors for assembling and evaluating complex robotic systems. The educational version of the software is used for the simulations. V-REP was chosen for this application due to a personal preference of the working group, but other simulation software like Gazebo can also be used to assemble the system with only small changes.

A small mobile robot called BOB was assembled for this application. The BOB robot is simple, built as a disk with two independently motorized wheels and two other spherical non-motorized ones. A LUA script controls the robot's movements by subscribing to a "cmd_vel" topic published by the PID controller software. When a message is received, the script calculates the desired velocity for each wheel and starts rotating them. To move forward, the script engages the two wheels with the same angular velocity. When a rotation message is received, the script engages the two wheels in opposite directions and the robot rotates around its own Z axis.
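The cmd_vel-to-wheel-speed conversion described above is standard differential-drive kinematics. The sketch below illustrates it in C++ (the language of the paper's controller side); the wheel radius and axle length values are illustrative assumptions, since the paper does not give BOB's dimensions.

    // Illustrative BOB-like parameters (assumed; not given in the paper).
    const double WHEEL_RADIUS = 0.03; // meters
    const double AXLE_LENGTH  = 0.10; // distance between the wheels, meters

    struct WheelSpeeds { double left; double right; }; // rad/s

    // Convert a cmd_vel pair (linear v in m/s, angular w in rad/s) into
    // wheel angular velocities. Equal speeds drive the robot forward;
    // opposite speeds rotate it in place around its own Z axis.
    WheelSpeeds cmdVelToWheels(double v, double w) {
        WheelSpeeds s;
        s.left  = (v - w * AXLE_LENGTH / 2.0) / WHEEL_RADIUS;
        s.right = (v + w * AXLE_LENGTH / 2.0) / WHEEL_RADIUS;
        return s;
    }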
The odometry information of each robot is read from V-REP and published on a ROS topic. This odometry is used only as the ground-truth reference against which the position data obtained from the visual tag estimation is compared.

A PID control algorithm uses the estimated data to perform the robots' displacement over the ground space. The mobile robots are "dummy", which means that they have no embedded sensors or odometry system: all the pose information comes from the data sent by the quadricopter. Figure 1 shows the environment and its components.

Fig. 1. V-REP environment

The PID software architecture is simple. A C++ program running on a PC receives the distance and orientation error values from V-REP at each new simulation step and calculates the velocity correction for the robot wheels, keeping the displacement inside the tolerance values. Matlab PID Tuner was used to evaluate the PID gains on a simple robot model. The chosen gains are P = 0.5, I = 0.01 and D = 0.01. A stronger PID architecture, or perhaps fuzzy control of the robots, would probably offer better results for this kind of application, and should be implemented in future works.
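A minimal per-axis PID update consistent with the description above, using the reported gains, could look like the following; the structure and names are an illustrative sketch, not the authors' actual code.

    // Minimal PID sketch with the gains reported above (P = 0.5, I = 0.01, D = 0.01).
    struct Pid {
        double kp = 0.5, ki = 0.01, kd = 0.01;
        double integral = 0.0;
        double prevError = 0.0;

        // error: distance (or angle) error received from V-REP at this step;
        // dt: simulation step length in seconds. Returns the velocity correction.
        double update(double error, double dt) {
            integral += error * dt;
            const double derivative = (error - prevError) / dt;
            prevError = error;
            return kp * error + ki * integral + kd * derivative;
        }
    };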
A scan area task was created to provide some challenges for the positioning system and the PID controllers in the simulation. As a first step, all the robots are placed at random positions within the visual range of the camera. When the simulation starts, the quadricopter reaches the target position and stays fixed on it. After a 15-second wait, necessary for the quadricopter to become relatively stable in position, the robots start to move, reaching previously defined set points relative to the quadricopter coordinate system. They stop at a 0.5-meter distance from their neighbors and rotate to align with the Y axis of the coordinate system. For each new quadricopter position change, the robot control loop moves them to the new relative positions, as sketched below.
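Since the set points are expressed in the quadricopter's coordinate frame, each robot's world-frame goal follows from a planar rigid-body transform. The helper below is an illustrative sketch under assumed names; the paper only states that the set points are relative to the quadricopter.

    #include <cmath>

    struct Vec2 { double x, y; };

    // World-frame set point for a robot, given the quadricopter's planar pose
    // (position plus yaw) and the robot's fixed offset in the quadricopter frame.
    Vec2 worldSetPoint(const Vec2& quadPos, double quadYaw, const Vec2& offset) {
        return { quadPos.x + std::cos(quadYaw) * offset.x - std::sin(quadYaw) * offset.y,
                 quadPos.y + std::sin(quadYaw) * offset.x + std::cos(quadYaw) * offset.y };
    }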
The quadcopter used in the application is a model provided in V-REP by Eric Rohmer, with propellers courtesy of Lyall Randell, as stated in the V-REP component credits. The model has horizontal and vertical stabilization performed by a LUA script. A target object can be moved through the space and is pursued by the quadricopter in a closed loop: for each new displacement of the target, the quadricopter's position controller issues a set of commands that leads it to the new position and stabilizes it at that static point.

A ground-pointed camera fixed on the base of the quadricopter captures the tag images and publishes them on a ROS node that feeds the visual tag pose estimation code. The camera specifications are a 120-degree perspective angle and a 1024 x 1024 pixel resolution.

The quadricopter propellers generate a spray of small visible particles as a visual effect, and this spray acts as image noise for the camera. This is an easy way to evaluate the performance of the visual tag software under conditions similar to the real world, where visual noise is common.

A. Visual tag position estimation tool

The AR tracking marker tool chosen for the application was the ALVAR Tracker [8]. It is a software library for creating virtual and augmented reality (AR) applications, developed by the VTT Technical Research Centre of Finland and released under the terms of the GNU Lesser General Public License. It provides all the calculation tools and image processing necessary to obtain the position and orientation of the tags distributed within the visual range of the camera.
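In a ROS setup, the tag poses are typically consumed through the ar_track_alvar wrapper around this library. The paper does not name the exact package it uses, so the topic and message types below are an assumption based on that wrapper; the callback body is an illustrative sketch.

    #include <ros/ros.h>
    #include <ar_track_alvar_msgs/AlvarMarkers.h>

    // Sketch: read each detected tag's ID and pose as published by the
    // ar_track_alvar wrapper (assumed here), and hand them to the controller.
    void markersCallback(const ar_track_alvar_msgs::AlvarMarkers::ConstPtr& msg) {
        for (const auto& marker : msg->markers) {
            const geometry_msgs::Pose& p = marker.pose.pose;
            ROS_INFO("tag %u: x=%.3f y=%.3f", marker.id, p.position.x, p.position.y);
            // Feed p into the corresponding robot's PID controller here.
        }
    }

    int main(int argc, char** argv) {
        ros::init(argc, argv, "tag_pose_listener");
        ros::NodeHandle nh;
        ros::Subscriber sub = nh.subscribe("ar_pose_marker", 10, markersCallback);
        ros::spin();
        return 0;
    }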
The ALVAR software provides a tool that generates the tags in ".png" image format; the size of each tag is set at generation time. Figure 2 shows three of the tags used in this work.

Fig. 2. Example of tags
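If the ar_track_alvar ROS wrapper is used for generation (an assumption; the stand-alone ALVAR SDK ships an equivalent SampleMarkerCreator utility), a tag image can be produced with a command of the form:

    rosrun ar_track_alvar createMarker 0    # writes a PNG (e.g. MarkerData_0.png) for tag ID 0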

The tag size was set to 10 x 10 cm for all performed tests. This size was chosen because in future works some small robots with Arduino-compatible frames will be used to test this architecture in the real world, and this is a good size to fix on top of those frames, bringing the simulation close to the intended practical solution.
IV. EXPERIMENTS

A group of experiments was executed to obtain the data necessary for the correct assembly of the architecture. The experiments were planned to provide answers to a set of important questions related to the correct operation of the architecture, such as the best quadricopter height, the pose estimation errors and the PID control performance.

A. Pose and orientation estimation at different heights using the visual tags

The first experiment evaluates the quality of the position and orientation estimation provided by the camera fixed on the mobile quadricopter at different heights. This is important to verify whether the schema can provide stable and accurate readings that allow the PID control to work correctly, and to define for which heights these readings respond well enough.

To perform this evaluation, five tags were placed on the ground within the quadricopter camera's visual range, and a group of readings was stored. Figure 3 shows the schema. The quadricopter was positioned, for the first evaluation, at a height of 1 meter, and a group of 100 pose estimations was stored. The process was repeated for the heights of 1.25 m, 1.50 m, 1.75 m, 2.00 m, 2.25 m, 2.50 m and 2.75 m. Figure 4 shows the mean position estimation error and figure 5 the angle estimation error for each height test.

Fig. 3. Position estimation schema on V-REP

Fig. 4. Position estimation error for different heights

Fig. 5. Angle estimation error for different heights

The control of all the small robots demands an accurate position reading to achieve good performance. This is the major difficulty found when using visual tags in the proposed application.

The vertical position of the quadricopter defines the visual area available for finding the robots in the environment. If the quadricopter is too high, a large visual area is obtained but the precision of the position readings decreases. If the quadricopter is low, the precision increases but the visual area becomes too small, restricting the possible poses of the ground robots.
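This tradeoff can be made concrete with the pinhole footprint relation. Assuming the 120-degree perspective angle given earlier is the camera's full opening angle (an interpretation of the V-REP setting, not a statement from the experiments), the side of the square ground region visible from height h is at most

    w = 2 h tan(θ/2), so for h = 2 m and θ = 120°: w = 4 tan 60° ≈ 6.9 m.

The usable control area reported below (2 x 2 meters at a 2-meter height) is much smaller than this geometric footprint, which is consistent with the pose estimates degrading as the tags shrink and approach the image borders.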
The best height found for the proposed application is 2 meters, where the position error does not cause loss of PID control and the visual area is adequate for viewing all three robots placed to perform the area scan. The pose graph shows that for heights greater than two meters the error becomes too large to provide adequate data for the controllers. In fact, for real PID control tests the height had to be limited to 2.1 meters for the code to work correctly.

B. PID control of the ground robots using the visual tags

The second experimentation step was necessary to evaluate the accuracy of the ground robot control using the PID control software and the visual tag position estimation. To perform this evaluation, three robots were distributed on the ground within the visual range of the quadricopter camera. The quadricopter stays at a fixed 2-meter height, and the control software leads the robots to a group of five set points in the environment.
The robots were placed at random points on the ground and the control software led them to the defined points, performing the PID angle and position control during the whole displacement. When a displacement begins, the robots rotate to point toward the final set point, until they reach a 0.01 radian tolerance error. After that the robots run in a straight line to the final point, until they reach a 0.05 m position tolerance error. Finally, the robots rotate one more time to get aligned with the Y axis of the coordinate system.
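Each displacement therefore follows a simple rotate-translate-rotate state machine with the two tolerances above. A compact sketch of that sequencing logic (function and variable names are illustrative, not the authors'):

    #include <cmath>

    enum class Phase { AlignToGoal, Translate, AlignToYAxis, Done };

    const double ANGLE_TOL = 0.01; // radians, as reported above
    const double DIST_TOL  = 0.05; // meters, as reported above

    // One control-loop step: advance the phase when its tolerance is met.
    // angleToGoal, distToGoal and yawError come from the tag-based pose estimate.
    Phase nextPhase(Phase phase, double angleToGoal, double distToGoal, double yawError) {
        switch (phase) {
            case Phase::AlignToGoal:  // rotate in place toward the set point
                return std::fabs(angleToGoal) < ANGLE_TOL ? Phase::Translate : phase;
            case Phase::Translate:    // run in a straight line to the set point
                return distToGoal < DIST_TOL ? Phase::AlignToYAxis : phase;
            case Phase::AlignToYAxis: // final rotation to align with the Y axis
                return std::fabs(yawError) < ANGLE_TOL ? Phase::Done : phase;
            default:
                return Phase::Done;
        }
    }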
The process repeats 20 times for each point and the final position error is calculated. The final pose error evaluation for all the estimations is shown in figure 6, which presents the mean, maximum and minimum position and angle errors for each group of readings. Table I shows the mean and standard deviation of the variables X, Y, Z and angle over all point data.

TABLE I
MEAN AND STANDARD DEVIATION FOR THE POSITION VARIABLE DATA

Variable   Mean    Standard Deviation
X          1.534   0.1846076921
Y          1.986   0.1877086084
Z          1.612   0.1357203006
Angle      1.546   0.0151657509

Fig. 6. Mean, maximum and minimum pose error for five evaluated points
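The table entries are plain sample statistics over the stored readings; a small helper of the kind presumably used to produce them (an illustrative sketch, not the authors' code) is:

    #include <cmath>
    #include <vector>

    struct Stats { double mean; double stdDev; };

    // Sample mean and (n-1) standard deviation of a series of readings,
    // e.g. the 20 final X positions recorded for one set point.
    Stats computeStats(const std::vector<double>& samples) {
        double sum = 0.0;
        for (double s : samples) sum += s;
        const double mean = sum / samples.size();

        double sqDiff = 0.0;
        for (double s : samples) sqDiff += (s - mean) * (s - mean);
        const double stdDev = std::sqrt(sqDiff / (samples.size() - 1));

        return {mean, stdDev};
    }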

The data analysis shows that the error increases for farther final points of the robots' displacement. The growth of the error limits the visual estimation accuracy of the drone camera to small distances. For longer displacements the error becomes too large to allow correct PID control of the robots. This means that the visual control area is not big, and must be set to an adequate value. The height set for this experiment allows a visualization area of 2 x 2 meters, good enough for all the tests.

A visual estimation of the position error is shown in figure 7. This graph shows the comparison between the real position and the visual-tag-estimated position of a robot during a displacement from point (0.5, 0.0) meters to point (0.1, 0.0) meters in the simulated environment. The horizontal axis (X) shows the time and the vertical axis (Y) shows the displacement along the Y axis of the simulation space. At first the robot rotates to achieve the correct orientation and point toward the final set point, which lasts approximately eight seconds, so the graph shows a constant Y value. After that the robot starts the displacement to the desired set point, as shown in the second part of the graph. In the final part of the displacement the velocity decreases because of the PID velocity control correction, so the resulting graph lines are not straight. The difference between the real position obtained from the V-REP odometry and the visual tag position estimate can be evaluated over the whole displacement by looking at the distance between the colored lines on the graph. It stays small the whole time, showing that the visual tag position estimation offers good position readings for the proposed application.

Fig. 7. Pose estimation error for a robot displacement

The cumulative error is another important variable to consider in this application, since the proposed ground scan task demands that the robots move several times along the environment. Figure 8 shows the cumulative error for a set of 10 displacements of a robot. The final robot position error is equal to 11.88 centimeters, a considerable value. This kind of error makes practical applications difficult and demands some kind of correction algorithm for long-distance displacement tasks.

Fig. 8. Cumulative error for 10 displacements of the robot

V. CONCLUSIONS

The use of visual tags and a drone camera to estimate the pose and orientation of a small group of mobile robots and to provide information for a PID controller was tested. The simulation results show that the position estimation errors are small enough to allow adequate positioning and control of the robots in the environment. The system was able to successfully move a group of robots between points separated by small distances.

The data analysis shows that the position estimation values of the proposed schema are good enough to allow correct performance of the PID controllers in the simulations. As the works in the literature show, this kind of visual tag offers a good tool for robot positioning in indoor conditions, including for autonomous flying robots.
The experiments show that it is possible to create an architecture that joins a drone and a group of small ground robots in a collaborative task, using the visual tracking marker tags to provide valid position information to a displacement control code. The PID control code works correctly when receiving position and orientation data from the visual tags, as long as the visual area does not exceed the error limit found. For this experiment, the limit area was set at 2 x 2 meters, which is achieved when the quadricopter stays at a 2-meter height.

An evaluation of the total error for a group of displacements shows that, within some reasonable limits, the correct pose of the robots can be reached. As expected, the position estimation error grows as the camera-robot distance increases, due to the decrease of the tag image size captured by the drone camera. For practical applications, the drone should perform a series of small displacements, to keep the robot displacements under correct control conditions. The determination of an adequate drone height is also critical for the correct working of the schema.

Similar works were found in the literature, showing the viability of the application. No quadricopter-robot collaborative application using visual tags or similar was found in the review of articles. The advantage of this kind of architecture is the possibility of good indoor positioning for mobile cooperative robot systems and the simplicity of the hardware and firmware requirements. A practical evaluation of this schema is proposed as future work, to verify the influence of the real environment on the system and its viability.

REFERENCES

[1] R. I. INC. (2017) I AM ROBO: Intelligent control solution built for industries. [Online]. Available: http://www.aee.us.com/#
[2] D. Aveek et al., "A Framework and Architecture for Multi-Robot Coordination," The International Journal of Robotics Research, vol. 21, pp. 977–995, 2002.
[3] X. Li, H. Lu, D. Xiong, H. Zhang, and Z. Zheng, "A survey on visual perception for RoboCup MSL soccer robots," International Journal of Advanced Robotic Systems, pp. 1–10, 2013.
[4] S. Zickler, T. Laue, O. Birbach, and M. Wongphati, "SSL-Vision: The Shared Vision System for the RoboCup Small Size League," Tech. Rep., 2010. [Online]. Available: http://robocupssl.cpe.ku.ac.th/sslvision
[5] N. Mathews, A. Christensen, R. Grady, and M. Dorigo, "Spatially Targeted Communication and Self-Assembly," 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2678–2679, 2012.
[6] H. Kato, M. Billinghurst, and I. Poupyrev, "ARToolKit version 2.33: A software library for Augmented Reality Applications," Tech. Rep., 2000.
[7] M. Fiala, "ARTag, a fiducial marker system using digital techniques," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 590–596, 2005.
[8] VTT Technical Research Centre of Finland Ltd. (2017) Augmented Reality / 3D Tracking. [Online]. Available: http://virtual.vtt.fi/virtual/proj2/multimedia/
[9] M. Fiala, "Vision guided control of multiple robots," First Canadian Conference on Computer and Robot Vision, 2004. Proceedings., 2004.
[10] K. Okuyama, T. Kawasaki, and V. Kroumov, "Localization and Position Correction for Mobile Robot Using Artificial Visual Landmarks," in Proceedings of the 2011 International Conference on Advanced Mechatronic Systems, Zhengzhou, China, Aug. 11–13, 2011.
[11] L. George and A. Mazel, "Humanoid robot indoor navigation based on 2D bar codes: application to the NAO robot," 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), pp. 329–335, 2013.
[12] M. F. D. Alcantara, M. S. Hounsell, and A. G. Silva, "Enriched Barcodes Applied in Mobile Robotics and Augmented Reality," IEEE Latin America Transactions, vol. 13, no. 12, pp. 3913–3921, 2015.
[13] T. H. Shuyuan Wang, "ROS-Gazebo Supported Platform for Tag-in-Loop Indoor Localization of Quadrocopter," in Intelligent Autonomous Systems 14. IAS 2016. Advances in Intelligent Systems and Computing, D. Derickson, Ed. Springer, Cham, 2017.
[14] X. Zhong, Y. Zhou, and H. Liu, "Design and recognition of artificial landmarks for reliable indoor self-localization of mobile robots," International Journal of Advanced Robotic Systems, vol. 14, no. 1, 2017. [Online]. Available: http://journals.sagepub.com/doi/10.1177/1729881417693489
[15] R. Limosani, A. Manzi, L. Fiorini, F. Cavallo, and P. Dario, "Enabling Global Robot Navigation Based on a Cloud Robotics Approach," International Journal of Social Robotics, vol. 8, no. 3, pp. 371–380, 2016.
[16] Coppelia Robotics. (2017) Virtual Robot Experimentation Platform. [Online]. Available: http://www.coppeliarobotics.com/
