
WESIC’98 Workshop on European Scientific and Industrial Collaboration on Promoting Advanced Technologies in Manufacturing. Girona, June 1998.

Vision-guided Intelligent Robots
for Automating Manufacturing, Materials Handling and Services

Rainer Bischoff and Volker Graefe

Bundeswehr University Munich
Institute of Measurement Science
Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany
Tel.: +49-89-6004-3589, Fax: +49-89-6004-3074
E-Mail: {Rainer.Bischoff | Graefe}@unibw-muenchen.de

Abstract
"Seeing" machines and "intelligent" robots have been the focus of research conducted by the
Institute of Measurement Science since 1977. Our goal is to gain a basic understanding of vision,
autonomy and intelligence of technical systems, and to construct seeing intelligent robots. These
should be able to operate robustly and at an acceptable speed in the real world, to survive in a
dynamically changing natural environment, and to perform autonomously a wide variety of tasks.
In this paper we report on three autonomous robots that have been developed during recent
research projects for automating manufacturing, materials handling, and services. In the order of
commissioning we have set up an autonomous vehicle, a stationary manipulator and a humanoid
robot with omnidirectional motion capability, a sensor head and two arms. We use standard video
cameras on all robots as the main sensing modality. We focused our research on navigation in
known and unknown environments, machine learning, and manipulator control without any
knowledge of quantitative models.

1 Introduction
As demands for automating manufacturing processes and services with greater flexibility increase, intelligent robots that can adapt to new environments and varying circumstances become key factors for success. Developing such robots requires manifold competencies in disciplines such as mechanical engineering, electrical engineering, computer science and mathematics.
Our expertise lies in building modular robotic systems with various kinematic chains that use vision sensors to perceive their environment and to perform user-defined tasks efficiently. Putting the robots into operation requires no or only minor modification of the infrastructure, because our approach uses vision as the main sensing modality and does not depend on any a priori knowledge of quantitative models. We have developed powerful image processing hardware, as well as software and control algorithms, to enable robots to operate autonomously (section 2). Our AGV ATHENE II is able to navigate in partly structured environments, e.g., in factories and office buildings, making it suitable for all kinds of transportation tasks that are required to automate manufacturing and services (section 3). Our stationary articulated manipulator is equipped with an uncalibrated stereo vision system that enables it to handle diverse objects without calculating its inverse kinematics (section 4). In our current research project we have developed a prototype of

a future service robot, a mobile manipulator with 18 degrees of freedom. Because of its modularity in both hardware and software it can be adapted to customers’ requirements, e.g., to meet their needs for tasks like transporting and handling of goods, surveillance, inspection, or maintenance (section 5).
Applied Research and Practical Relevance
From the very beginning our research work has been guided by the rule that every result must be proved and demonstrated in practical experiments in the real world. While this approach is rather demanding, it has the great advantage over mere computer simulations of yielding far more reliable and valuable results. The fact that most of our research has been conducted in cooperation with industrial partners has greatly helped us in directing our work towards results that lend themselves to practical applications in the real world.

2 Digital Image Processing and Real-Time Vision Systems


All our robots use digital image processing as a powerful means of perception. Standard video cameras provide images to a multi-processor system that evaluates them in real time. Such a multi-processor system may consist of simple microprocessors, digital signal processors, transputers, or a combination of those. Communication bottlenecks are avoided by using high-bandwidth video busses and high-performance data links between the processors. Together with controlled correlation, an exceptionally fast and robust feature extraction algorithm developed at our institute, this makes fast and reliable image processing possible.
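To illustrate the general principle of correlation-based feature extraction (not the institute’s controlled-correlation algorithm itself, which is a faster and more robust refinement of it), the following Python sketch locates a feature by searching a small window for the best normalized cross-correlation match of a grey-value template; all names are our own.

```python
import numpy as np

def find_feature(image, template, center, radius):
    """Locate a feature near 'center' = (row, col) by correlating a small
    grey-value template over a search window of the given radius.
    Returns the best-match position and its correlation score.
    Pass float arrays; plain NCC shown for illustration only."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -1.0, center
    for r in range(center[0] - radius, center[0] + radius + 1):
        for c in range(center[1] - radius, center[1] + radius + 1):
            r0, c0 = r - th // 2, c - tw // 2
            if (r0 < 0 or c0 < 0 or
                    r0 + th > image.shape[0] or c0 + tw > image.shape[1]):
                continue  # candidate window partly outside the image
            p = image[r0:r0 + th, c0:c0 + tw]
            p = p - p.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom > 0 else -1.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Restricting the search to a small window around the feature’s predicted position is what makes such trackers fast enough for real-time operation on modest hardware.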
Autonomous Road Vehicles
A first major demonstration experiment that caught much international attention in 1987 was a road vehicle, equipped with our real-time vision system BVV 2, that drove autonomously on the German Autobahn. Although at that time no other traffic was allowed on the road, the achieved speed of 96 km/h constituted a world record for autonomous road vehicles. Notably, in contrast to all other autonomous vehicles known at that time, the driving speed was limited only by the performance of the vehicle’s engine and not by the vision system. Key to this success was the BVV 2, which anticipated in its architecture the concept of object-oriented vision that was only formulated explicitly later. Its two successors, BVV 3 and BVV 4, with their 100 times higher performance, enabled us to fully implement object-oriented vision algorithms [Graefe 1993]. Thus, the simultaneous recognition of various objects in complex dynamic scenes became possible. This constituted the basis for an accurate perception of normal traffic situations (Figure 1).

Figure 1: Typical traffic situation with various objects in a complex dynamic scene, recognized in real time by the vision system BVV 3
Obstacle Avoidance
Obstacle avoidance is a major concern for all mobile robots. We have developed an obstacle detection and classification system suitable for high-speed driving, and a motion stereo algorithm that allows an accurate distance measurement from a moving vehicle to an obstacle or other stationary target without knowing the size of the target object or any parameter of the camera used.
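The following sketch shows the general geometric principle that makes such size- and calibration-independent distance measurement possible (our own simplified formulation, not the published algorithm): when the camera has translated a known distance b straight towards a stationary feature, the feature’s image coordinate grows from y1 to y2, and by similar triangles the remaining distance is b·y1/(y2 − y1); both the focal length and the object size cancel out.

```python
def motion_stereo_distance(y1, y2, baseline):
    """Distance to a stationary feature after the camera has moved
    'baseline' metres straight towards it (simplified illustration).
    y1, y2: image coordinates of the feature (e.g., pixels off the
    optical axis) before and after the motion. With a pinhole model
    y = f*Y/Z and Z1 = Z2 + baseline, so Z2 = baseline*y1/(y2 - y1);
    neither the focal length f nor the object size Y appears in the
    result, which is why no camera parameter needs to be known."""
    if y2 <= y1:
        raise ValueError("feature must expand as the camera approaches")
    return baseline * y1 / (y2 - y1)

# Example: after driving 2 m towards a target, a feature moved from
# 50 to 60 pixels off the optical axis -> remaining distance 10 m.
print(motion_stereo_distance(50.0, 60.0, 2.0))  # 10.0
```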

3 Mobile Robots ATHENE I and II


Navigation concepts for factory buildings and office environments have been investigated with our vision-guided mobile robots ATHENE I and II (Figure 2). These robots are able to perform various transportation tasks in extensive environments.

Figure 2: ATHENE II, an intelligent mobile robot, mainly used for studying indoor navigation and machine learning

We developed the concept of object-oriented and behavior-based navigation. Its main characteristic is that the selection of the behaviors to be executed at each moment is based on a continuous recognition and evaluation of the robot’s dynamically changing situation. This situation essentially depends on the perceived states of relevant objects, the robot’s repertoire of available behaviors and the actual goals to be accomplished. The navigation system relies on topological maps that the robot learns during exploration runs. An operator informs the robot of the names of relevant mission locations, e.g. “copy machine” or “laboratory”. Other users may then use those common location names in communicating with the robot [Bischoff et al. 1996].
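As a schematic illustration of such situation-dependent behavior selection (our own hypothetical rule set, not the ATHENE software), a selector might map the currently recognized situation, the robot’s repertoire and the active goal onto the behavior to execute:

```python
def select_behavior(situation, repertoire, goal):
    """Pick the behavior to execute from the currently recognized
    situation (hypothetical rules for illustration only)."""
    if situation.get("obstacle_ahead"):
        candidate = "avoid_obstacle"
    elif situation.get("at_landmark") == goal:
        candidate = "stop"          # goal location reached
    elif situation.get("corridor_visible"):
        candidate = "follow_corridor"
    else:
        candidate = "explore"
    # only behaviors the robot actually possesses may be selected
    return candidate if candidate in repertoire else "stop"

repertoire = {"follow_corridor", "avoid_obstacle", "explore", "stop"}
print(select_behavior({"corridor_visible": True}, repertoire, "e-lab"))
```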
Executing a complex navigation task
Figure 3 shows, as an example, the mission description that the robot was given in an experiment, and the resulting course followed by the robot. To make the task more complex for demonstration purposes the robot was instructed to pass a rather large number of intermediate locations on its way to its final destination, the e-lab. The mission description is simply a list of all the locations that should be passed by the robot, and it ends with the final destination.
At the start of the experiment the robot knew that it was somewhere between the e-lab and the kitchen, facing the kitchen. It had a map of the environment that it had acquired in previous experiments, and that did not contain any gross errors in its metric attributes. (In other experiments the robot completed similar missions with maps into which errors of several meters for the lengths of some corridors had been introduced.)

Figure 3: The course traveled by ATHENE II according to the mission description given to the robot (a name list of locations such as mechanics, exit 1, rob-lab, mani-lab, stairwell, xerox, workshop and exit 2, ending at the e-lab)
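To make the mission format concrete, here is a minimal Python sketch (our own illustration, not the ATHENE II software) of a topological map stored as a graph of named places, together with a planner that expands a mission description, i.e. a list of location names ending with the final destination, into a route; the adjacency data is hypothetical.

```python
from collections import deque

# Hypothetical topological map: named places and their corridor links.
TOPO_MAP = {
    "kitchen":   ["e-lab"],
    "e-lab":     ["kitchen", "rob-lab", "workshop"],
    "rob-lab":   ["e-lab", "mani-lab", "exit 1"],
    "mani-lab":  ["rob-lab", "stairwell"],
    "stairwell": ["mani-lab", "xerox"],
    "xerox":     ["stairwell", "workshop"],
    "workshop":  ["xerox", "e-lab", "exit 2"],
    "exit 1":    ["rob-lab"],
    "exit 2":    ["workshop"],
}

def shortest_path(topo_map, start, goal):
    """Breadth-first search over the topological map: fewest corridor
    segments, so no metrically accurate map is required."""
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in topo_map[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    raise ValueError(f"no route from {start} to {goal}")

def plan_mission(topo_map, position, mission):
    """Chain shortest paths through every listed location; the mission
    ends with the final destination."""
    route = [position]
    for waypoint in mission:
        route += shortest_path(topo_map, route[-1], waypoint)[1:]
    return route

print(plan_mission(TOPO_MAP, "kitchen", ["exit 1", "stairwell", "e-lab"]))
```

Because the map is topological, gross metric errors of the kind mentioned above do not invalidate the planned route; only the connectivity of the named places matters.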

4 Calibration-Free Manipulator
We have realized a calibration-free manipulator robot that consists of an articulated arm and a stereo vision system (Figure 4). For this robot we have developed a manipulation method that, in sharp contrast to conventional methods, does not rely on any prior calibration of any parameters of the system. Our method requires no knowledge of the parameters of the manipulator (e.g., the lengths of its links or the relationship between commanded control words and actual movements of the arm) or of the cameras (e.g., focal length, distortion characteristics, position relative to the manipulator). Even severe disturbances, such as arbitrary changes of the cameras’ orientations, that would make other robots fail are tolerated while the robot is operating. Key to the system’s extraordinary robustness are the renunciation of model knowledge and a direct transition from image data to motor control words. Because no calibration is needed, such a robot is well suited for environments like homes or offices that require a high degree of robustness in dealing with unexpected situations and where maintenance personnel is not readily available [Graefe, Ta 1995].

Figure 4: Calibration-free manipulator with five degrees of freedom and a stereo vision system

Currently we are studying methods of knowledge representation suitable for machine learning. The goal is a robot that accumulates experience in the course of its normal operations and thus continuously improves its skills (learning by doing). Moreover, whenever changing conditions invalidate past experience, the robot should automatically modify what it has learned.
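The published method learns the mapping from image data to motor control words directly [Graefe, Ta 1995]; the sketch below is a generic, hypothetical illustration of one standard way a manipulator can be servoed without any calibration: the image Jacobian is estimated from small exploratory joint motions and refined with a Broyden-style update after every executed motion, so the controller keeps working after disturbances such as a bumped camera. The callables `move_joints` and `observe_features` are stand-ins for the robot interface, not real APIs.

```python
import numpy as np

def estimate_jacobian(move_joints, observe_features, q, n_joints, eps=0.02):
    """Estimate d(image features)/d(joint angles) by executing small
    test motions and watching how the gripper moves in both camera
    images. No kinematic or camera model enters the computation."""
    f0 = observe_features()
    J = np.zeros((len(f0), n_joints))
    for j in range(n_joints):
        dq = np.zeros(n_joints)
        dq[j] = eps
        move_joints(q + dq)
        J[:, j] = (observe_features() - f0) / eps
        move_joints(q)  # return to the reference posture
    return J

def servo_step(J, gripper_feat, target_feat, gain=0.3):
    """One control step: joint increment that reduces the image-space
    error between gripper and target (least-squares solution)."""
    error = target_feat - gripper_feat
    dq, *_ = np.linalg.lstsq(J, gain * error, rcond=None)
    return dq

def broyden_update(J, dq, df):
    """Refine the Jacobian from each executed motion (dq) and the
    observed feature change (df), so the estimate tracks disturbances
    such as a camera whose orientation has been changed."""
    dq = dq.reshape(-1, 1)
    return J + ((df.reshape(-1, 1) - J @ dq) @ dq.T) / (dq.T @ dq)
```

Because the Jacobian is measured and continuously re-measured rather than derived from a model, nothing needs to be known about link lengths, gear ratios or camera placement, which is the essence of calibration-free operation.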

5 Service Robot HERMES

The humanoid service robot HERMES with its two arms and two “eyes” resembles a human in size and shape (Figure 5). It
already possesses many characteristics that are needed by future service robots. HERMES’ two arms are attached to a bendable body. This manipulation system enables the robot to open drawers and doors, and to pick up objects both from the ground and from tables. HERMES perceives its environment with two video cameras that are mounted on a movable sensor head. The cameras’ images are processed by a multi-processor system in real time. Visual feedback enables HERMES to carry out various transportation, manipulation and supervision tasks. A user-friendly and situation-sensitive human interface allows even inexperienced users to communicate with the robot effectively and in a natural way. A specially designed drive system with two powered and steered wheels guarantees free manoeuvrability in all directions [Bischoff 1997].
Central building blocks of the robot are compact drive modules that incorporate, in double cubes, powerful motor-gear combinations, the necessary power electronics, various sensors (angle encoder, current converter, temperature sensor), a microcontroller for motion control and state supervision, and an intelligent bus interface (CAN). With these modules and various mechanical links and adapters many different kinematic structures can be built. The electrical links for power and communication lines are realized by uniform cables and connectors along the kinematic chain of the robot structure.

Figure 5: HERMES, a humanoid service robot with two arms, an omnidirectionally mobile base and a stereo vision system
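As a sketch of how such a CAN-based drive module might be commanded and supervised (the actual identifiers and payload layout of the HERMES modules are not published, so everything below is a hypothetical illustration), set-point and status frames could be packed into the 8-byte CAN payload like this:

```python
import struct

# Hypothetical payload layout for a drive module (illustration only):
# one command byte, a 16-bit signed velocity set-point and a 32-bit
# signed position set-point, padded to the 8-byte CAN payload.
CMD_SET_VELOCITY = 0x01
CMD_SET_POSITION = 0x02

def encode_setpoint(command, velocity, position):
    """Pack a set-point frame payload (big-endian, 8 bytes)."""
    return struct.pack(">Bhix", command, velocity, position)

def decode_status(payload):
    """Unpack a status frame as the module's microcontroller might
    report it: joint angle, motor current and temperature, matching
    the sensors listed above. The layout is again hypothetical."""
    angle, current, temperature = struct.unpack(">ihBx", payload)
    return {"angle": angle, "current": current, "temperature": temperature}

# Example: command joint position 123456 encoder counts at zero velocity.
frame = encode_setpoint(CMD_SET_POSITION, 0, 123456)
```

Keeping motion control and state supervision local to each module and exchanging only such compact frames over the bus is what allows many different kinematic structures to be assembled from identical building blocks.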

6 Conclusions
The ultimate goal of our research work is the development and construction of a robot that has
a practical intelligence similar to that of animals. We are convinced that in the future such robots
will have a great significance for society by performing many and diverse services for humans.
Towards this goal we have developed, and presented here, three of our robots:
• a vision-guided mobile robot that navigates in structured environments based on the recognition of its current situation,
• a completely uncalibrated manipulator that handles various objects by using an uncalibrated stereo vision system, and
• a humanoid service robot that combines the abilities of the aforementioned robots and can be used for transporting and handling goods at different locations in extensive environments.
Main Research Topics
The following list gives an overview of the principal working areas of the Institute of Measurement Science at the Bundeswehr University Munich:
• architecture and design of real-time vision systems
• recognition, classification and tracking of objects in dynamic scenes
• motion stereo for distance measurement and spatial interpretation of image sequences
• calibration-free robots (i.e., robots not requiring quantitatively correct models)
• object- and behavior-oriented stereo vision as a basis for the control of such robots
• recognition of dynamically changing situations in real time as the basis for behavior selection
by robots and for man-machine communication
• system architectures for behavior-based mobile robots
• machine learning, e.g., for object recognition, motion control and knowledge acquisition for
navigation
Offer of Cooperation and Services
We offer services and cooperation in our principal working areas, e.g., expert reports, studies, medium-term development cooperation and scientific project support. We welcome tasks that enable us to put new scientific discoveries into practice. We have extensive knowledge in the areas of machine vision and the development of intelligent robot control. We possess powerful computer systems, state-of-the-art laboratories, experimental fields and workshops that we can provide for joint research and development purposes. We address our offer above all to technologically ambitious small and medium-sized companies. We are eager to continue contributing to an effective technology transfer from science to industry, as we have done in the past.

References
Bischoff, R.; Graefe, V.; Wershofen, K. P. (1996). Combining Object-Oriented Vision and Behavior-Based Robot Control. Robotics, Vision and Parallel Processing for Industrial Automation. Ipoh, pp. 222-227.

Bischoff, R. (1997). HERMES - A Humanoid Mobile Manipulator for Service Tasks. International Conference on Field and Service Robotics. Canberra, pp. 508-515.

Graefe, V. (1993). Vision for Intelligent Road Vehicles. Proceedings, IEEE Symposium on Intelligent Vehicles. Tokyo, pp. 135-140.

Graefe, V.; Ta, Q. (1995). An Approach to Self-learning Manipulator Control Based on Vision. IMEKO International Symposium on Measurement and Control in Robotics, ISMCR '95. Smolenice, pp. 409-414.
