
SICE Annual Conference 2008 August 20-22, 2008, The University Electro-Communications, Japan

A semi-autonomous tracked robot system for rescue missions

Daniele Calisi1, Daniele Nardi1, Kazunori Ohno2 and Satoshi Tadokoro2

1 Department of Computer and Systems Science, Sapienza University of Rome, Italy
2 Graduate School of Information Science, Tohoku University, Sendai, Japan

Abstract: In this paper we describe our work on integrating the results of two previous research efforts carried out on two different robots. Both robots are designed to help human operators in rescue missions. In the first case, a wheeled robot was used to develop a set of software modules that provide the capability to autonomously explore an unknown environment and build a metric map while navigating. The second system was designed to face more challenging scenarios: the robot has tracks and flippers, and features a semi-autonomous behavior to overcome small obstacles and avoid rollovers while moving on debris and unstructured ground. We show how the software modules of the first system have been installed on the tracked robot and merged with its semi-autonomous behavior algorithm, and we discuss the results of this integration with a set of experiments.

Keywords: tracked robot, rescue mission, autonomy

1. INTRODUCTION

In recent years, increasing attention has been devoted to rescue robotics, both from the research community and from rescue operators. Robots can help humans considerably in dangerous tasks during rescue operations. A rescue scenario is usually unstructured and unstable, thus requiring complex mechanical and locomotion designs for the robots involved. On the other hand, communication unreliability and the difficulty of directly controlling complex robots require some degree of autonomy. The robotic system described in this paper is the integration of two previous works developed in our laboratories: a wheeled robot system that is able to autonomously explore a flat environment, and a tracked robot that features a semi-autonomous behavior to overcome small obstacles, debris and stairs. The software modules of the first system have been installed on the tracked robot and merged with its semi-autonomous behavior. In this way, the robot is able to autonomously explore more challenging scenarios than the wheeled robot could. Experiments have been conducted both on the real tracked robot and in a simulator, and show that the resulting system is able to explore an unknown environment and to deal with debris and small obstacles with very limited operator support.

2. SYSTEM DESCRIPTION

The whole system can be divided into two conceptual parts: the hardware part, which comprises the mechanical design and realization of the robot and the sensors that have been used, and the software part, which includes the software implementation of the algorithms. The overall goal of the system is to show effective autonomous and semi-autonomous behaviors in a rescue environment. In particular, we want to provide the robot with:

- 2D mapping capability;
- autonomous and semi-autonomous navigation and exploration capabilities;
- detection of motion hazards and the ability to address them (we will call this ability the semi-autonomous obstacle overcoming behavior);
- a communication infrastructure and human-robot interface that make it possible for a human operator to supervise the robot and take control of it in critical situations.

2.1 Hardware

The unstructured and unstable environment of a rescue scenario makes the mechanical and locomotion design of the robot critical. Our robot is a 6-DOF crawler robot, currently under development, called Aladdin (see Figure 1). The robot has four flippers connected to the main body and a system of tracks that covers both the flippers and the main body, allowing forward/backward movements and turning. The high mobility of this kind of crawler robot in complex environments is mainly due to the moving flippers, which allow multi-degree-of-freedom maneuvers. A 2D laser scanner (Hokuyo URG-04LX) is mounted on top of the robot and provides a 2D scan of the surroundings, enabling autonomous mapping and localization in the environment. The sensors used for the semi-autonomous obstacle overcoming behavior include: 32 touch sensors mounted inside the main body, a set of current sensors (URD Corp. HPS-10-AS, used to measure the torque on the front and rear flippers), and a gravity sensor (Crossbow Corp. CXL04LP3). Other details of this robot system can be found in [7].

Fig. 1 The Aladdin crawler robot

2.2 Software

A set of software modules provides the robot with the required autonomous capabilities. Among them, the most important are: two 2D mapping and localization modules, a ground contact detection mechanism, a three-level navigation algorithm, and an autonomous exploration method based on frontiers. A previous version of this system, which includes the same exploration method, the 2D localization and mapping capabilities, and an autonomous navigation module for flat grounds, can be found in [1]. These modules will be described in detail in the following subsections.

The communication between the robot and the rescue operators can be extremely unreliable in a rescue scenario. This requires some degree of autonomy for the robots to accomplish their mission. The exact level of autonomy is determined by many factors, such as the current communication reliability, the number of robots in the team, and the need for operator intervention in critical situations that the robot is not able to deal with autonomously. As we will describe in detail in Section 2.3, the different levels of autonomy are obtained by activating or deactivating software modules, giving the operator the ability to take more or less control over the robot's actions.

2.2.1 Mapping and localization

For mapping and localization purposes, we use the 2D laser range finder. Some laser beams have been disabled because they are directed towards the flippers when these are raised. The localization method is based on scan matching and does not need odometry data (which, indeed, is very noisy on tracked robots). The mapping module integrates the current laser scan readings into the map, using an occupancy grid approach. The localization has proven to be robust to slight changes in the pitch or roll of the robot pose, and able to realign the robot to the right position in the map when it returns to horizontal ground. However, during such an event, the resulting map can be damaged. For this reason, when an excessive roll or pitch is detected (thanks to the gravity sensor), the mapping module is immediately disabled to avoid artifacts in the map, while the localization module is kept running.
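The attitude-gated map update described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold values, the `localizer`/`grid` interfaces and the function names are all hypothetical.

```python
import math

# Hypothetical thresholds; the paper does not report the exact values used.
MAX_ROLL = math.radians(15)
MAX_PITCH = math.radians(15)

def process_scan(scan, pose, roll, pitch, grid, localizer):
    """Integrate a laser scan, gating the map update on robot attitude.

    Localization (scan matching, no odometry) always runs; the
    occupancy-grid update is skipped while roll or pitch exceeds a
    threshold, so that a tilted scan does not write artifacts into
    the 2D map.
    """
    pose = localizer.match(scan, pose)      # scan-matching localization
    if abs(roll) <= MAX_ROLL and abs(pitch) <= MAX_PITCH:
        grid.integrate(scan, pose)          # occupancy-grid map update
    return pose
```

The key design point is that only the mapping step is disabled on excessive tilt: the localizer keeps tracking the pose, so the robot realigns in the map once it returns to horizontal ground.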
2.2.2 Exploration

When enabled, this module sends target positions to the autonomous navigation algorithm. These target positions lie on the nearest frontier computed on the current map, i.e., the border between free and unknown space. This method has been proven able to guide the robot to a full exploration of an unknown environment (see [8]).

2.2.3 Navigation

The navigation subsystem has three levels. In the first level, a topological path is computed on the global 2D map, in order to guide the next levels and to avoid local minima. In the next level, an RRT-based (Rapidly-exploring Random Trees, see [6]) algorithm builds a tree of plans in order to steer the robot toward the target position; each step of each plan takes into account the robot shape, collisions with obstacles, and the kinematic (and, possibly, also the dynamic) constraints of the robot's movements. Moreover, the plan building process is interleaved with plan execution and correction, the latter being necessary to allow for uncertainty in the effects of motion commands and for newly appearing obstacles. Details of these two levels can be found in [3]. In the third level of the navigation system, a reactive behavior to overcome obstacles is provided. While the previous two levels control the speeds of the tracks (i.e., steer the robot in 2D), this behavior uses a set of rules and thresholds to decide the position of the flippers in order to avoid rollovers and to climb over obstacles. The sensors used for these decisions are the touch sensors (which detect contact of the robot body with the ground), the current sensors (which detect flipper contact with obstacles) and the gravity sensor (which detects roll and pitch changes). The rules for the front flippers are shown in Figure 2; an analogous set of rules has been developed for the rear flippers.
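The nearest-frontier selection used by the exploration module can be sketched as a breadth-first search over the free cells of the occupancy grid. This is an illustrative sketch, not the code from [8]; the cell labels and function name are assumptions.

```python
from collections import deque

# Hypothetical occupancy-grid cell labels.
FREE, OCC, UNKNOWN = 0, 1, 2

def nearest_frontier(grid, start):
    """BFS over free space from the robot cell `start`.

    Returns the first free cell adjacent to unknown space (i.e., the
    nearest frontier cell), or None when the map is fully explored.
    `grid` is a 2D list of cell labels; `start` is a (row, col) tuple.
    """
    rows, cols = len(grid), len(grid[0])

    def neighbors(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield nr, nc

    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        # A frontier cell is a free cell bordering unknown space.
        if any(grid[nr][nc] == UNKNOWN for nr, nc in neighbors(r, c)):
            return (r, c)
        for nr, nc in neighbors(r, c):
            if grid[nr][nc] == FREE and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None  # no frontier left: exploration complete
```

Because the BFS expands outward from the robot, the first frontier cell found is also the nearest one in free-space distance, and a `None` result doubles as the termination condition for the mission.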
2.3 Autonomy levels and HRI

The system can operate at four different degrees of autonomy, and the human operator can switch among them during the mission:

Full autonomy. In this mode, the exploration module periodically sends target positions (which lie on a frontier) to the navigation subsystem. If the communication with the operator is working, the system sends warnings to the operator, but keeps behaving autonomously.

Partial autonomy. In this case, the exploration module is disabled. The operator, through a graphical interface, can send target positions to the robot. The navigation subsystem autonomously guides the robot to the targets and builds the map.

Partial tele-operated mode. The exploration module is disabled, as are the first two modules of the navigation subsystem. The semi-autonomous obstacle overcoming


FlipperContact   RobotPitch   BodyContact = Contact   BodyContact = NotContact
Contact          Upward       DOWN                    DOWN
Contact          Downward     UP                      UP
NotContact       Upward       UP                      DOWN
NotContact       Downward     UP                      DOWN

IF (FlipperContact == Contact AND RobotPitch == Upward)
   OR (BodyContact == NotContact AND FlipperContact == NotContact)
THEN FrontFlipper = DOWN
ELSE FrontFlipper = UP
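The rule above maps directly to a small decision function. The following sketch uses string labels for the sensor states; these names and the function name are illustrative, not taken from the robot's code.

```python
# Symbolic sensor states, following the names used in Figure 2.
CONTACT, NOT_CONTACT = "Contact", "NotContact"
UPWARD, DOWNWARD = "Upward", "Downward"
UP, DOWN = "UP", "DOWN"

def front_flipper_command(body_contact, flipper_contact, robot_pitch):
    """Front-flipper rule of Figure 2.

    Lower the flippers when they press against an obstacle while the
    robot pitches upward (climbing), or when neither body nor flippers
    touch anything (reaching down for the ground); raise them otherwise.
    """
    if (flipper_contact == CONTACT and robot_pitch == UPWARD) or \
       (body_contact == NOT_CONTACT and flipper_contact == NOT_CONTACT):
        return DOWN
    return UP
```

Evaluating the function over all eight input combinations reproduces the truth table of Figure 2.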

Fig. 2 Control rules for the front flippers of the Aladdin robot

behavior is kept running; in this way the operator can set the speeds of the tracks (i.e., the linear and turning speeds in 2D), while the module chooses the flipper positions.

Full tele-operated mode. In this case, the operator also has full control over the flipper positions. This is the most difficult and time-consuming mode, and in our experiments it has been used only to recover from a stuck flipper (see below), an operation that usually takes a very short time (less than 5 seconds).

If the communication channel goes down, the robot automatically switches to full autonomy from any other operational mode.
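The mode-switching policy, including the automatic fallback on communication loss, can be sketched as a small supervisor. The enum members mirror the four modes listed above; the function name and its interface are hypothetical.

```python
from enum import Enum

class Mode(Enum):
    FULL_AUTONOMY = 1
    PARTIAL_AUTONOMY = 2
    PARTIAL_TELEOP = 3
    FULL_TELEOP = 4

def next_mode(current, comm_ok, operator_request=None):
    """Mode supervisor for one control cycle.

    Honor the operator's requested mode while the link is up;
    fall back to full autonomy whenever the communication
    channel goes down, regardless of the current mode.
    """
    if not comm_ok:
        return Mode.FULL_AUTONOMY
    return operator_request if operator_request is not None else current
```

Called once per control cycle, this guarantees that a robot left in any tele-operated mode does not sit idle when the link drops.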

3. EXPERIMENTS

Experiments have been performed both in a real scenario, using the Aladdin robot described above, and in the USARSim simulator [2]. In all experiments, the robot's task is to autonomously explore the whole arena; if the robot gets into a critical situation, the operator can intervene and resume the mission. Experiments in the real environment have been conducted in a small rescue arena [5] and show the effective exploration behavior of the robotic system, also in the presence of small obstacles and debris. The robot autonomously explores the environment and detects loss of contact with the ground and other kinds of mobility hazards, being able to address them. In Figure 3 we show a map built during a mission: in the position marked with 1, the robot starts its exploration; the small purple box labeled with 2 is the mark put by the hazard detection module (it is the place where the mobility hazard was detected; in that place, a 20 cm wooden bar, not visible to the laser scanner, lies on the ground); finally, the robot's last position is marked with 3. Another set of experiments has been performed using the USARSim simulator and a simulated model of a robot similar to Aladdin (i.e., a tracked robot with four independent flippers). A rescue arena, measuring 7x5 m, has been built inside the simulated world and contains obstacles and debris that the laser scanner cannot detect (see Figure 5). Figure 4 shows the results of the experiments conducted in the simulated arena. In only one case did a robot flipper get blocked in one of the wooden frames in such a way that operator intervention could not restore the robot's functionality and continue the mission. Figure 6 shows the resulting map of one of the missions. On the left side of

Fig. 3 The map built during the real experiment: 1) robot start position, 2) the obstacle detected using the touch/torque sensors, 3) robot final position

Fig. 5 The simulated scenario used in the experiments

the figure, a partial map is shown: purple bars show detected obstacles that lie outside the detection plane of the 2D laser scanner, a green cross shows the current target of the robot navigation subsystem (the target is on a frontier), and a tree of plans is shown starting from the robot position. On the right side of the figure, the full map of the environment is shown; the places marked with "op" are where operator intervention was necessary for the robot to continue the mission.


4. CONCLUSIONS

The robot system presented in this paper has proven to be effective in scenarios where small obstacles and mobility hazards are present. The set of experiments shows that operator intervention is very limited and is mainly due to the flippers, which can get blocked within


Completed missions                  90%
Mission duration                    7.8 min (std. dev. 0.6 min)
Operator interventions per mission  1.8
Tele-operated time                  11.4 s

Fig. 4 Results of the simulated missions (mean over 10 missions)

Fig. 6 On the left: the tree of motion plans, built while the robot is moving. On the right: the final map built during the autonomous exploration of the simulated scenario

obstacles. For this reason, the obstacle overcoming behavior has recently been improved to avoid stuck flippers [9]. Indeed, this happened often during the experiments and is the main reason for operator interventions. As future work, we want to integrate this improvement into the presented system. The major drawback of the presented approach is the use of 2D-based localization and mapping methods. These are stable enough to be used in simple situations, but eventually fail in more unstructured environments. In order to overcome these limitations, two improvements are proposed: on the one hand, a hardware mechanism able to keep the laser range finder horizontal would improve localization (this method is currently being investigated in other research works by the scientific community); on the other hand, the use of an additional tilted (i.e., pointing toward the ground) laser can make it possible to measure elevation maps (so-called 2.5D maps) and detect possible mobility hazards before the robot reaches them. The latter method has already been developed and applied on a simulated robot, and was successfully demonstrated during the RoboCupRescue Virtual Robots competition in 2007 [4].

REFERENCES

[1] S. Bahadori, D. Calisi, A. Censi, A. Farinelli, G. Grisetti, L. Iocchi, and D. Nardi. Intelligent systems for search and rescue. In Proc. of the IROS Workshop "Urban Search and Rescue: from RoboCup to real world applications", Sendai, Japan, 2004.
[2] S. Balakirsky, C. Scrapper, S. Carpin, and M. Lewis. USARSim: providing a framework for multi-robot performance evaluation. In Proc. of the International Workshop on Performance Metrics for Intelligent Systems (PerMIS), 2006.
[3] D. Calisi, A. Farinelli, L. Iocchi, and D. Nardi. Autonomous navigation and exploration in a rescue environment. In Proc. of the IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR), pages 54-59, Kobe, Japan, June 2005. ISBN: 0-7803-8946-8.
[4] D. Calisi, A. Farinelli, and S. Pellegrini. RoboCupRescue - Virtual Robots League, Team SPQR-Virtual, Italy, 2007.
[5] A. Jacoff and E. Messina. DHS/NIST response robot evaluation exercises. In Proc. of the IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR), Gaithersburg, MD, USA, 2006.
[6] J. Kuffner and S. LaValle. RRT-Connect: an efficient approach to single-query path planning. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2000.
[7] K. Ohno, S. Morimura, S. Tadokoro, E. Koyanagi, and T. Yoshida. Semi-autonomous control of a 6-DOF crawler robot having flippers for getting over unknown steps. In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pages 2559-2560, 2007.
[8] B. Yamauchi. A frontier-based approach for autonomous exploration. In Proc. of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), 1997.
[9] T. Yuzawa, K. Ohno, S. Tadokoro, and E. Koyanagi. Development of sensor reflexive flipper control rule for getting over unknown 3D terrain. Submitted.

