Insect Inspired Robots

John Lim¹,², Chris McCarthy¹,², David Shaw¹,², Nick Barnes¹,², Luke Cole²

¹ Department of Information Engineering, RSISE, Australian National University
² National ICT Australia, Australia

{johnlim, cdmcc, davids}@rsise.anu.edu.au, Nick.Barnes@nicta.com.au, cole@lc.homedns.org

Abstract
Insect behaviour has been a rich source of inspiration for the field of robotics, as the perception and navigation problems encountered by autonomous robots are also faced by insects. We work towards faster and more robust biologically-inspired, vision-based algorithms for tasks such as navigation and control. In this paper, we introduce a research platform, the InsectBot, built as a testbed for new algorithms and systems. Current progress and the future direction of our work are also presented.

1 Introduction

Insects and other arthropods effortlessly navigate complex environments despite having relatively simple nervous systems. This is often attributed to robust and efficient motion control strategies in which action and perception are closely coupled. Well-known visuo-motor behaviours include the use of optical flow for flight stabilization, corridor-centring, flight odometry and the execution of smooth, grazing landings [Srinivasan and Zhang, 2004], some of which have already been successfully implemented in robots [Argyros et al., 2004; Iida, 2003]. These motion strategies exploit the advantages of specialized eyes that provide sight over nearly the entire viewsphere. It has been shown that a spherical field of view is advantageous for the task of estimating self-motion [Fermuller and Aloimonos, 1998], which may be a contributing factor as to why flying creatures often have near-panoramic vision. Insect navigation and control is also of interest to robotics since it is believed that computationally cheap strategies are used, as opposed to methods involving reconstruction and map-building. A classic example is bee-inspired corridor-centring, which balances the optical flow between the left and right walls [Srinivasan et al., 1999].

Furthermore, given the limited processing power of the insect brain, higher-level behaviours such as perception and scene understanding are also of interest: how does a fly differentiate between a hand moving in quickly to swat it and a wall rushing towards it as it attempts to land on that wall? It is clear that this area of bio-inspired visual navigation is rich with potential research questions.

The goal of our research is to develop bio-inspired systems and algorithms that solve fundamental navigation problems more efficiently and robustly. Presently, we focus on a subset of problems with a smaller scope than the stated goal: visual egomotion estimation, visual servoing, docking, feature detection and corridor-centring. These are basic subsystems that are integral to any autonomously moving robot, and the benefits of improved algorithms in these areas are many.

The organization of this work is as follows. Section 2 introduces the robot platform that has been developed as a testbed for experimenting with new research ideas in a real-world environment. Section 3 then details several navigational subsystems developed on the platform, including flow-based corridor-centring behaviour and a flow-based docking strategy. Future work and ongoing research are discussed in Section 4 before concluding.

2 Research Platform: InsectBot

To facilitate the research interests and goals of insect-inspired robotics, a novel mobile robot platform has been developed and appropriately named the InsectBot. Two primary design features distinguish the InsectBot from many of the more common robotics research platforms [ActivMedia, 2006; iRobot, 2006; Acroname, 2006]. The first is horizontal omni-directional motion. The second is vertical motion for a stereo camera system that uses a pair of hemispherical-view ("fish-eye") camera lenses, each providing a 190° field of view. Together, these two design features allow four degrees of motion, enabling, for example, the simulation of the aerial landing of a honeybee.

The horizontal omni-directional motion is performed using four omni-directional wheels, spaced 90° apart. A small number of mobile robots use omni-directional wheels: the Palm Pilot Robot Kit [Carnegie-Mellon, 2001] was an early three-wheeled omni-directional robot, and the Cornell RoboCup [Cornell, 2006] team used four-wheeled omni-directional robots, which may have contributed to their overwhelming success at the international RoboCup championships. The use of four omni-directional wheels provides more stability for heavy payloads, which will enable large payload capabilities for the InsectBot. Furthermore, the four omni-directional wheels provide an improved and simplified motion control system [Ashmore and Barnes, 2002].

The stereo cameras and their fish-eye lenses have been mounted so that a small overlap exists between the cameras' visual fields. This overlap, coupled with an almost 360° field of view, provides a vision system that is quite common among insects in the natural world [Wehner, 1981]. The vision system is attached to a lift mechanism allowing 300 mm of vertical motion. Mounting this novel vision system on the omni-directional base of the InsectBot provides a mechanism for simulating some key aspects of complex motions such as flying. Helicopters and, more recently, the CSIRO Cable-Array robot [Usher et al., 2004] are well-known approaches to studying flight. However, helicopters are naturally unstable and extremely hard to control, and the CSIRO Cable-Array robot can only operate within its own environment; furthermore, using cables to move the cameras introduces wave-like effects which must be accounted for in software. In contrast, the InsectBot can operate in any environment with a flat surface and has acute control over its degrees of motion, allowing the simulation of some key aspects of flight.

The InsectBot is designed to fit any ATX-style motherboard and supports most sensors, such as a SICK laser range finder. Currently, a Mini-ITX VIA EPIA SP1300 motherboard with 1GB of DDR400 RAM is used for communication with the motors and sensors, along with handling the motion control of the drive system and lift mechanism. Visual processing tasks are handled off-board by one or more high-end computers. Communication between these computers and the InsectBot is performed through a 56kbps wireless connection or a 100Mbps wired connection (if required). Approximately 30 Amp-hours at 24V is available, which gives the current InsectBot a few hours of run time. Future technical developments on this robot will include increased communication bandwidth through the use of multiple wireless and/or wired Network Interface Cards (NICs), and more degrees of motion, such as a tilt and/or rotation mechanism for the vision system.

Figure 1 shows a picture of the current InsectBot; please refer to http://robot.anu.edu.au/∼insert for the full collection of InsectBot photos, videos and research status.

Figure 1: The InsectBot
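The simplified motion control afforded by the four-wheel omni-directional drive can be illustrated with the standard inverse-kinematic mapping from a desired body velocity to individual wheel speeds. The following is a minimal sketch only, not the InsectBot's actual controller; the wheel mounting radius, wheel spacing and sign conventions are assumed for illustration.

```python
import math

# Assumed geometry: four omni-directional wheels mounted a distance R from
# the centre, spaced 90 degrees apart, each driven tangentially to the body.
WHEEL_ANGLES = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]  # radians
R = 0.20  # wheel mounting radius in metres (illustrative value)

def body_to_wheel_speeds(vx, vy, omega):
    """Map a desired body velocity (vx, vy in m/s, omega in rad/s)
    to the four tangential wheel speeds (m/s)."""
    return [
        -math.sin(a) * vx + math.cos(a) * vy + R * omega
        for a in WHEEL_ANGLES
    ]

# Example: translate diagonally while slowly rotating.
print(body_to_wheel_speeds(0.1, 0.1, 0.2))
```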

3 Navigational Subsystems

3.1 Flow-based Corridor Centring

Corridor centring, inspired by observations in honeybees [Srinivasan et al., 1999], can be achieved by adjusting the robot's heading so as to maintain a balance of flow magnitude in the outer regions of the robot's view. We have implemented a corridor-centring subsystem that uses full optical flow estimation over images from a single forward-facing camera to achieve this. We emphasise the need to consider such navigation subsystems in the context of the broader system they inhabit, and the diverse range of tasks the system needs to perform. For this reason, we use full optical flow estimation in the implementation of all flow-based subsystems, rather than cheap approximations such as normal flow or planar models. This raises the important question of which optical flow method to choose for flow-based navigation subsystems such as corridor centring.

While previous optical flow comparisons assess the accuracy and efficiency of optical flow techniques, they do not adequately support a systematic choice of technique for a real-time navigation system. To address this shortcoming, we have conducted a comparison of optical flow techniques specifically for flow-based navigation. A preliminary study considered the choice of spatio-temporal filter for use with gradient-based optical flow estimation for tasks such as corridor centring and visual odometry [McCarthy and Barnes, 2003]. Subsequent work extended this to an examination of several well-cited optical flow methods [McCarthy and Barnes, 2004]. Unlike previous comparisons, we emphasise the need for in-system comparisons of optical flow techniques, in the context of the tasks they need to perform.

As part of this comparison, each optical flow method was integrated into the control loop of the corridor-centring subsystem of a mobile robot. Directional control for corridor centring was achieved using the simple control law:

    θ = K(τ_l − τ_r),    (1)

where τ_l and τ_r are the average flow magnitudes in the left and right peripheral views respectively, and K is a tuned proportional gain. θ is then used directly for directional control.

Multiple centring trials were conducted using a straight 2.5 metre long corridor with heavily textured walls (see Figure 2). An overhead camera was used to track the robot's path, and from this an assessment of directional control stability was conducted. Results indicated that, in addition to the choice of optical flow method, the choice of spatio-temporal filter (for gradient-based methods) also had a significant effect on the overall performance of the system. Most notably, the temporal delay introduced through frame buffering was particularly problematic. The best overall performance was achieved using Lucas and Kanade's gradient-based flow method [Lucas and Kanade, 1984] in conjunction with Simoncelli's multi-dimensional paired filters [Simoncelli, 1994].

Figure 2: Corridor centring workspace.
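A minimal sketch of the balance-of-flow control law in Equation (1) is given below. It assumes a dense flow field has already been estimated for the current frame (for example, by a gradient-based method such as Lucas and Kanade's); the peripheral band width and gain value are illustrative, not the tuned parameters used on the robot.

```python
import numpy as np

def centring_steer(flow, gain=0.5, peripheral_frac=0.25):
    """Balance-of-flow steering (Equation 1).

    flow : H x W x 2 array of per-pixel optical flow (u, v) for the
           current frame, assumed already estimated by some flow method.
    Returns theta = K * (tau_l - tau_r), where tau_l and tau_r are the
    mean flow magnitudes in the left and right peripheral image regions.
    """
    h, w, _ = flow.shape
    mag = np.linalg.norm(flow, axis=2)          # per-pixel flow magnitude
    band = int(w * peripheral_frac)             # width of each peripheral band
    tau_l = mag[:, :band].mean()                # average flow, left periphery
    tau_r = mag[:, w - band:].mean()            # average flow, right periphery
    return gain * (tau_l - tau_r)

# Example with a synthetic flow field: stronger flow on the left implies the
# robot is closer to the left wall, so theta > 0 steers it back towards centre.
flow = np.zeros((120, 160, 2))
flow[:, :40, 0] = 2.0   # larger horizontal flow in the left band
flow[:, 120:, 0] = 0.5  # smaller flow in the right band
print(centring_steer(flow))
```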

3.2 Flow-based Docking

For a mobile robot to interact with an object in its environment, it must be capable of docking in close proximity to the object's surface. Of particular importance is the control of the robot's deceleration to an eventual halt, close enough to the object for the interaction to take place. To achieve this, the robot must acquire a robust estimate of time-to-contact and control its velocity accordingly. For a single camera approaching an upright surface, a common method of estimating time-to-contact is to measure the image expansion induced by the apparent motion of the surface towards the camera; this can be obtained from the divergence of the optical flow field.

The use of visual motion to gauge time-to-contact is well supported by observations in biological vision. Honeybees have been observed to use visual motion to decelerate and perform smooth grazing landings [Srinivasan et al., 2000], and Lee [Lee, 1976] theorised that a human driver may visually control vehicle braking based on time-to-contact estimates obtained from image expansion.

In much of the previous work on divergence-based time-to-contact estimation, divergence is measured at the same image location in each frame. Such strategies ignore the effect of focus of expansion (FOE) shifts on the divergence measure across the image. Robot egomotion is rarely precise, and even where only translation is intended, rotations will be present due to steering control adjustments, differing motor outputs, bumps, and noisy flow estimates. We have developed a robust strategy for docking a mobile robot in close proximity to an upright surface that accounts for small rotational effects through constant tracking of the FOE [McCarthy and Barnes, 2006]. Through theoretical derivation and experimental results, we show that computing the time-to-contact with respect to the location of the FOE, rather than the image centre, accounts for shifts of the optical axis due to instantaneous rotations.
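The relationship between flow-field divergence and time-to-contact can be made concrete with a small sketch. This is not the algorithm of [McCarthy and Barnes, 2006]; it is a simplified illustration, assuming a dense flow field and an FOE location supplied by some external tracker, of how divergence measured about the FOE yields a time-to-contact estimate for a fronto-parallel approach.

```python
import numpy as np

def time_to_contact(flow, foe, window=15):
    """Estimate time-to-contact (in frames) from flow-field divergence
    measured in a window centred on the focus of expansion (FOE).

    For pure translation towards a fronto-parallel surface the flow is
    radial about the FOE, u = (x - x_foe)/tau and v = (y - y_foe)/tau,
    so div(flow) = 2/tau and tau = 2/div.

    flow : H x W x 2 dense optical flow for the current frame.
    foe  : (row, col) of the tracked focus of expansion.
    """
    r, c = foe
    h, w, _ = flow.shape
    r0, r1 = max(r - window, 1), min(r + window, h - 1)
    c0, c1 = max(c - window, 1), min(c + window, w - 1)

    # Finite-difference divergence: du/dx + dv/dy over the FOE window.
    du_dx = np.gradient(flow[r0:r1, c0:c1, 0], axis=1)
    dv_dy = np.gradient(flow[r0:r1, c0:c1, 1], axis=0)
    div = float(np.mean(du_dx + dv_dy))

    return np.inf if div <= 0 else 2.0 / div

# Example: synthetic radial flow about an off-centre FOE with tau = 50 frames.
h, w, tau = 120, 160, 50.0
ys, xs = np.mgrid[0:h, 0:w].astype(float)
foe = (70, 90)
flow = np.dstack(((xs - foe[1]) / tau, (ys - foe[0]) / tau))
print(time_to_contact(flow, foe))   # approximately 50
```

In a docking controller, the forward velocity would then be reduced as the estimated time-to-contact falls, bringing the robot to a halt close to the surface.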

To test the FOE-based docking strategy, it was implemented on-board a mobile robot with a single forward-facing camera. Over multiple trials, the robot approached a heavily textured, roughly fronto-parallel wall, attempting to stop safely as close to the wall as possible without collision (see Figure 3). Figure 4 shows velocity-distance profiles for four of the trials conducted. In all four trials, the FOE-based strategy enabled the robot to dock in close proximity to the surface without collision.
Figure 3: Setup for on-board docking tests.

Figure 4: On-board velocity-distance profiles for docking trials (robot forward velocity versus distance to wall in cm, Trials 1–4, L&K recursive flow).

3.3 Perspective Rectangle Detection

Robust feature detection is an important subsystem for many navigational tasks. In many indoor scenes, the environment consists predominantly of rectilinear structures. To capitalise on this for navigation, we have developed a detector for finding these perspective rectangle structural features that runs in real time. This detector is expected to be an important component for the bio-inspired systems being investigated. The detector locates the likely 2D projections of rectangles in the environment by finding quadrilaterals whose sides align with detected vanishing lines in the image, with a complexity suitable for real-time applications (refer to Figure 5). The detector has been designed primarily for robot visual mapping.

Figure 5: Perspective rectangle features in a corridor sequence.

A brief outline of the stages of the perspective rectangle detection algorithm is given below [Shaw and Barnes, 2006]; an illustrative sketch follows at the end of this subsection.

1. Vanishing Point Detection: Estimate the position and type of vanishing points and lines within the image.

2. Perspective Oriented Edge Detection: Determine the directional components of the gradient aligned with the vanishing points.

3. Line Segment Detection: Estimate the line segments that are potential quadrilateral sides based on the perspective-oriented edges.

4. Quadrilateral Detection: Determine the quadrilaterals from the intersections of detected line segments.

A benefit of using rectangular features is that they carry implicit structural information (such as the size of wall panels and doors) that is useful for robotic applications. Even with a simple matching metric, significant structural information about a scene can be derived.
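The sketch below is not the detector of [Shaw and Barnes, 2006]; it only illustrates, under simplified assumptions, the kind of geometric test underlying stages 2–4: a line segment is treated as a candidate quadrilateral side for a given vanishing point when its supporting line approximately passes through that vanishing point. All names, coordinates and tolerances are hypothetical.

```python
import math

def aligned_with_vanishing_point(p0, p1, vp, tol_deg=3.0):
    """Return True if segment p0->p1 is aligned with vanishing point vp.

    A segment supports a vanishing point when its direction is (nearly)
    parallel to the direction from the segment's midpoint to vp, i.e. the
    segment's supporting line approximately passes through vp.
    """
    mx, my = (p0[0] + p1[0]) / 2.0, (p0[1] + p1[1]) / 2.0
    seg = (p1[0] - p0[0], p1[1] - p0[1])
    to_vp = (vp[0] - mx, vp[1] - my)

    cross = seg[0] * to_vp[1] - seg[1] * to_vp[0]
    dot = seg[0] * to_vp[0] + seg[1] * to_vp[1]
    angle = abs(math.degrees(math.atan2(cross, dot)))
    angle = min(angle, 180.0 - angle)          # segment orientation sign is irrelevant
    return angle <= tol_deg

def candidate_sides(segments, vanishing_points, tol_deg=3.0):
    """Group detected line segments by the vanishing point they support
    (cf. stage 3); quadrilaterals would then be formed from intersections
    of segments assigned to two different vanishing points (cf. stage 4)."""
    groups = {i: [] for i in range(len(vanishing_points))}
    for seg in segments:
        for i, vp in enumerate(vanishing_points):
            if aligned_with_vanishing_point(seg[0], seg[1], vp, tol_deg):
                groups[i].append(seg)
    return groups

# Example: two segments and two hypothetical vanishing points.
vps = [(1000.0, 240.0), (320.0, -800.0)]
segs = [((100.0, 200.0), (300.0, 209.0)),   # lies on a line through vps[0]
        ((420.0, 200.0), (440.0, 400.0))]   # lies on a line through vps[1]
print(candidate_sides(segs, vps))
```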

4 Future Work

In this section, we identify key areas of interest for future investigation. Common motion control algorithms in robots are often computationally expensive because they involve reconstruction, map-building, geometric models, estimation of scene depth or computation of the homography between views. These methods are also less robust to errors in camera and robot calibration. In contrast, the limited processing power of the neural structure of insects suggests that insects abandon such methods in favor of simpler and more robust strategies for solving the same problems. Motion control in bees [Srinivasan et al., 1999] and the use of landmarks for visual piloting in desert ants [Lambrinos et al., 2000] are oft-cited examples. It has also been shown that any position on the ground plane can be reached using a navigational algorithm based purely on the angles between image features [Bekris et al., 2004]. We believe that there remains potential for many other computationally cheap yet powerful control algorithms to be derived from biological solutions.

Furthermore, we intend to integrate these various algorithms into a single, unified navigational system with the capabilities for a large range of navigational tasks. This will enable us to study the behaviour of the system as one or more motion strategies work in concert, and may lead to the discovery of new synergies between existing algorithms.

5 Conclusion

There are often many solutions to the same fundamental problems of robot vision and motion. Traditional solutions work well in general but become difficult to implement when real-time performance is required. Since autonomous robots and living creatures encounter parallel problems as they tackle similar tasks, researchers have sought to derive new insights from the biological solutions to those problems in order to develop new solutions that work robustly in real time. We have presented several visual navigation strategies implemented on our research platform. As highlighted in Section 4, various other research areas will also be investigated. All of these represent subsystems within an autonomous robot. As they come together, research will then concentrate on the development of higher-level perception and cognition systems that will be able to identify different scenarios and environments and engage the corresponding subsystems in order to complete the necessary tasks.


References
[Acroname, 2006] Acroname, 2006. http://www.acroname.com.

[ActivMedia, 2006] ActivMedia, 2006. http://www.activrobots.com/ROBOTS.

[Argyros et al., 2004] A. A. Argyros, D. P. Tsakiris, and C. Groyer. Bio-mimetic centering behavior: Mobile robots with panoramic sensors. IEEE Robotics and Automation Magazine, Special Issue on Panoramic Robotics, 11(2):21–30, 2004.

[Ashmore and Barnes, 2002] M. Ashmore and N. Barnes. Omni-drive robot motion on curved paths: The fastest path between two points is not a straight line. In Australian Joint Conference on Artificial Intelligence, pages 225–236, December 2002.

[Bekris et al., 2004] K. E. Bekris, A. A. Argyros, and L. E. Kavraki. Angle-based methods for mobile robot navigation: Reaching the entire plane. In International Conference on Robotics and Automation (ICRA 2004), pages 2373–2378, 2004.

[Carnegie-Mellon, 2001] Robotics Institute of Carnegie Mellon. Palm Pilot Robot Kit (PPRK), 2001. http://www.cs.cmu.edu/ pprk/.

[Cornell, 2006] RoboCup Team at Cornell, 2006. http://robocup.mae.cornell.edu.

[Fermuller and Aloimonos, 1998] C. Fermuller and Y. Aloimonos. Ambiguity in structure from motion: Sphere versus plane. International Journal of Computer Vision, 28:137–154, 1998.

[Iida, 2003] F. Iida. Biologically inspired visual odometer for navigation of a flying robot. Robotics and Autonomous Systems, 44(3-4):201–208, 2003.

[iRobot, 2006] iRobot, 2006. http://irobot.com.

[Lambrinos et al., 2000] D. Lambrinos, R. Moller, T. Labhart, R. Pfeifer, and R. Wehner. A mobile robot employing insect strategies for navigation. IEEE Transactions on Robotics and Autonomous Systems, 30(1):39–64, 2000.

[Lee, 1976] D. N. Lee. A theory of visual control of braking based on information about time to collision. Perception, 5(4):437–459, 1976.

[Lucas and Kanade, 1984] B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the DARPA Image Understanding Workshop, pages 121–130, 1984.

[McCarthy and Barnes, 2003] C. McCarthy and N. Barnes. Performance of temporal filters for optical flow estimation in mobile robot corridor centring and visual odometry. In Proceedings of the 2003 Australasian Conference on Robotics and Automation, 2003.

[McCarthy and Barnes, 2004] C. McCarthy and N. Barnes. Performance of optical flow techniques for indoor navigation with a mobile robot. In International Conference on Robotics and Automation (ICRA 2004), pages 5093–5098, 2004.

[McCarthy and Barnes, 2006] C. McCarthy and N. Barnes. A robust docking strategy for a mobile robot using flow field divergence. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), in press, 2006.

[Shaw and Barnes, 2006] D. Shaw and N. Barnes. Perspective rectangle detection. In Proceedings of the Workshop on the Application of Computer Vision, in conjunction with ECCV 2006, pages 119–127, 2006.

[Simoncelli, 1994] E. P. Simoncelli. Design of multi-dimensional derivative filters. In First International Conference on Image Processing, volume I, pages 790–793, Austin, TX, USA, 1994. IEEE Signal Processing Society.

[Srinivasan and Zhang, 2004] M. V. Srinivasan and S. W. Zhang. Visual motor computations in insects. Annual Review of Neuroscience, 27:679–696, 2004.

[Srinivasan et al., 1999] M. V. Srinivasan, J. S. Chahl, K. Weber, S. Venkatesh, M. G. Nagle, and S. W. Zhang. Robot navigation inspired by principles of insect vision. Robotics and Autonomous Systems, 26(2-3):203–216, 1999.

[Srinivasan et al., 2000] M. V. Srinivasan, S. W. Zhang, J. S. Chahl, E. Barth, and S. Venkatesh. How honeybees make grazing landings on flat surfaces. Biological Cybernetics, 83:171–183, 2000.

[Usher et al., 2004] K. Usher, G. Winstanley, P. Corke, D. Stauffacher, and R. Carnie. A cable-array robot for air vehicle simulation. In Australasian Conference on Robotics and Automation (ACRA), December 2004.

[Wehner, 1981] R. Wehner. Handbook of Sensory Physiology. Springer-Verlag, Berlin/Heidelberg, 1981.
