
ICAR '97

Monterey, CA, July 7-9, 1997

How to Control a Multi-Robot System by Means of


Projective Virtual Reality
E. Freund and J. Rossmann
Institute of Robotics Research (IRF)
University of Dortmund, Otto-Hahn-Str. 8, 44227 Dortmund
freund@damon.irf.uni-dortmund.de / rossmann@damon.irf.uni-dortmund.de
http://www.irf.uni-dortmund.de/

Abstract

Smart man-machine interfaces turn out to be a key technology for robot applications in industrial environments as well as in future scenarios for robot applications in space for internal and external servicing. For either field, the use of virtual reality techniques has shown great potential. At the IRF, a virtual reality system was developed and implemented during the last two years which allows the intuitive control of a multi-robot system. The general aim of the development was to provide the general framework for Projective Virtual Reality, which allows actions that are carried out by users in the virtual world to be projected into the real world with the help of robots.

Keywords: virtual reality, multi-robot systems, robot control, workcell modeling

1 Introduction

In the robotics industry today there is a great demand for worker-oriented programming concepts, which should be graphically oriented, icon- or at least menu-based, and intuitively comprehensible in order to allow robot programming by workers on the shopfloor. All major robot manufacturers are working on the development of such techniques in order to enlarge their market to medium-sized companies which do not have the "human resources" and the engineering capacity that today is still necessary to realize a sophisticated robotic application. We believe that the next step in the development of intuitively operable man-machine interfaces is the use of virtual reality techniques, not only in robotics. [6] gives an interesting overview of newly developed applications and envisaged application fields for virtual reality techniques and new VR interaction tools.

In this paper the emphasis will be laid on the explanation of our latest development in this field, a PC-based virtual reality system which is used to control a multi-robot system. This paper will cover three main aspects of the realization:

- The principal idea of "projective virtual reality", that is, how to generate commands in the "virtual world" that are then carried out by physical automation systems.
- Different elements, concepts and metaphors realized in the virtual environment to simplify the control of the physical automation systems.
- Different visual aids that allow the user who has immersed into the VR to supervise the robots and the automation systems ("agents") under control.

The paper will start with a short description of the most challenging practical application we have had so far, the control of the multi-robot system in the CIROS testbed (chapter 2.1), a multi-robot system originally developed for space-laboratory servicing. The VR system was first utilized for this application because various tasks that are executed by robots in a space laboratory environment in order to conduct experiments have to be programmed and supervised by researchers who do not have robot-specific knowledge. With the help of the developed VR system, these researchers can now perform their experiments in a graphically simulated virtual environment. Practical tests showed that the learning curves of experimenters commanding the multi-robot system with the help of gestures and actions in the simulated virtual environment were extremely steep. After only 15 to 30 minutes of training, they were able to conduct their experiments with the help of the multi-robot system in the CIROS spacelab environment successfully, quickly and safely.

2 Applications of the Virtual Reality System

The control of the CIROS testbed was the most comprehensive application for the newly developed VR system as a supervision and control system.

2.1 The CIROS multi-robot testbed

CIROS stands for Control of Intelligent Robots in Space. The testbed developed in the CIROS project is equipped with two redundant robots with six revolute and one prismatic joint each.



Fig. 1: The CIROS multi-robot testbed

The layout of the laboratory is similar to that of the German Spacelab and is built in a modular manner. Six racks, switches and other operating elements of the experiments were reproduced and arranged so as to permit realistic operational sequences. A tool exchange capability and force-torque sensors have been included to allow the robots to operate autonomously under the multi-robot control system IRCS, developed at the Institute of Robotics Research.

Fig. 2: Two robots handling an object cooperatively

The redundant two-armed robot configuration with the force-torque sensors at the robots' wrists permits fully coordinated operation, similar to the cooperation capabilities of two human arms (fig. 2), as well as synchronized or independent action of the two robots, working together like a team. Furthermore, the robots are equipped with hand cameras, and the whole laboratory can be supervised by a scene camera.

While designing the virtual environment for the CIROS testbed, emphasis was laid on providing "a familiar environment" to an experimenter who conducts experiments in the space laboratory from the ground with the help of the CIROS VR system. In order to "immerse" into the virtual reality [4][5], the experimenter wears a head-mounted display (HMD) and a data-glove.

Fig. 3: A partial view of the CIROS environment in the virtual reality representation

Both devices are equipped with position and orientation sensors, so that the locations of the HMD and the data-glove are known to a Pentium Pro PC, which generates the virtual, graphical image of the environment with respect to the operator's position and viewing direction. Furthermore, a graphical image of the operator's hand is shown, which allows the user to operate in the virtual environment. Fig. 3 shows a part of this environment, including the image of the experimenter's hand, which he uses to manipulate e.g. the drawers, the flaps, the levers or experiment containers. One really challenging practical test of the VR system is depicted below.

Fig. 4: Control of the CIROS testbed by means of "projective virtual reality" over a long distance via INTERNET

The CIROS multi-robot testbed of the IRF in Germany was fully controlled by colleagues of the University of Southern California over a distance of more than 12000 km. The VR system was connected to the CIROS testbed via Internet and thus had to cope with time delays varying between 1.5 and 30 seconds.
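As an aside, the viewpoint-dependent rendering described above can be made concrete with a short sketch. This is our own illustration, not the CIROS implementation; the function name, the 4x4-matrix convention and the example values are assumptions:

```python
import numpy as np

def view_matrix(hmd_position, hmd_rotation):
    """Build a 4x4 world-to-camera matrix from the HMD's tracked
    position (3-vector) and orientation (3x3 rotation).  The
    renderer draws the virtual environment from this viewpoint
    on every frame."""
    world_to_cam = np.eye(4)
    world_to_cam[:3, :3] = hmd_rotation.T                      # inverse rotation
    world_to_cam[:3, 3] = -hmd_rotation.T @ hmd_position       # inverse translation
    return world_to_cam

# Each frame: read the tracker, rebuild the camera, then render the
# environment and the image of the operator's hand at the data-glove's pose.
hmd_position = np.array([0.5, 1.7, 2.0])   # metres, example values
hmd_rotation = np.eye(3)                   # looking along the world axis
camera = view_matrix(hmd_position, hmd_rotation)
```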

Fig. 5: Commanding of an assembly task in the virtual environment

Fig. 5 shows another, less sophisticated application, which was developed for teaching purposes, to make students familiar with VR applications. Here the operator's hand is shown while assembling the "Cranfield Assembly Set".

3 The Idea of Projective Virtual Reality

When we started to control the robots via VR, we immediately found that the standard teleoperation or "hand-tracking" approach would not work for most of our applications, which contain assembly tasks. The following problems arose:

- Time delays between the display of a robot's movement in the VR and its physical movements are critical for the stability of the process because, similar to standard teleoperation approaches, the user is still "in a realtime control loop".
- The graphical model has to be very precise.
- The measurement of the position and orientation of the data-glove has to be very precise.
- Measures have to be taken to reduce "trembling" of the operator's hand.
- Online collision avoidance for the robots is necessary to guarantee safe operation.
- A versatile sensor control is necessary to compensate for unwanted tensions when objects are inserted into tight fittings.

To cope with the problems mentioned above, another mode of operation of the VR system was developed: the task deduction mode. The solution was to enhance the VR system in such a way that, while the user is working, the different subtasks carried out by him are recognized, and task descriptions for the IRCS, the multi-robot control system of the CIROS environment, are deduced. These task descriptions are then sent to the action planning component of the IRCS. The action planning component can "understand" task descriptions on a high level of abstraction like "open drawer", "insert sample 1 into heater slot 1" etc. and thus is the ideal counterpart for the task deduction component of the VR system. Using this task deduction mode is almost ideal, because:

- The required communication bandwidth is low, because only subtasks like "open flap", "move part A to location B" or "close drawer" are sent over the communication channel (a sketch of such task messages follows this list).
- The different subtasks are carried out safely by the robot control system with its inherent capabilities like task planning, robot coordination and sensor supervision.
- The user is no longer in the "realtime control loop". Complete subtasks are recognized and carried out as a whole, without the necessity for immediate feedback to the user.
- For physical assembly tasks, inaccuracies of the environment model can be compensated for by automatic sensor-supported strategies.
- The accuracy of the data-glove tracking device is not as important as in the direct tracking mode. The allowable tolerances when the user is gripping an object or inserting a peg into a hole can be adjusted in the VR software.
- If a heavy or fragile object is moved in the VR, the planning component is capable of automatically using the two robots in a coordinated manner to conduct the transport in reality.
- Different users working at different VR systems can do different tasks that are sent to the planning component of the IRCS, which can then compute an adequate sequence of the tasks to be carried out, depending on the available resources. Thus one robotic system can serve e.g. multiple experimenters in a space laboratory environment.
- If the robot control is versatile enough, there is no longer a need to even show a robot in the virtual environment displayed to the user; so the user more and more gets the impression of carrying out a task "himself", which is the highest level of intuitivity that can be achieved.
- If the planning component is versatile enough, it can control not only the robots but also other kinds of automated devices. The action planning component in the CIROS environment "knows" that to open the leftmost one of the three drawers, it doesn't need to employ a robot. This drawer is equipped with a motor, so it just has to control the motor to open this drawer. Robot-automated and hard-automated tasks are thus controlled under one unified framework.
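To illustrate the low-bandwidth task messages referred to in the first item of the list above, a minimal sketch follows. It is our own illustration, not the authors' wire format; the JSON encoding and the field names are assumptions:

```python
import json

def make_task(operation, obj, target=None):
    """Build a high-level task description as deduced from the user's
    actions in the VR.  Only this short message - not a stream of
    joint set-points - crosses the network, which is why delays of
    many seconds are tolerable."""
    task = {"operation": operation, "object": obj}
    if target is not None:
        task["target"] = target
    return json.dumps(task)

# Examples of the abstraction level the action planner understands:
print(make_task("open", "flap_1"))
print(make_task("move", "part_A", target="location_B"))
print(make_task("insert", "sample_1", target="heater_slot_1"))
```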

The list of advantages above shows the strengths of the presented approach, but it also shows that the robot control system used must have several features that are by far not state of the art. However, as this paper focusses on VR, we have to direct the reader interested e.g. in the planning component to [3], where the action planning component and the underlying methods are described in detail.

The key issue is splitting the job between the task deduction in the VR and the task "projection" onto the physical automation components by the automatic action planning component. This hides from the user the typical robot-related difficulties like singularity treatment, collision avoidance, sensor-based guidance of the robot and the problem of coordinating different robots, but still allows the capabilities of the robot system to be exploited. The necessary expertise to e.g. conduct an experiment in a space laboratory environment like CIROS is thus shared between the user, with the necessary knowledge about the experiment, and the robot control, with the necessary "knowledge" about how to control the robots.

3.1 Task Deduction in the VR-Environment

Compared to the resource-oriented action planning system that is part of the CIROS multi-robot control system IRCS [3], the recognition of tasks that are carried out by the user in the virtual environment is rather simple. The task-deduction module relies on messages from inside the VR system. Messages are generated and sent to the task deduction module, for example, when an object is gripped by the user, when an object is released, or when the user's data-glove enters a certain region of the environment displayed in the VR. These messages are interpreted by means of finite state machines which can be visualized as petri-net-like structures. These structures determine whether certain actions performed by the user "make sense" and whether the actions can be combined into a task description for the robotic system. Fig. 6 shows an example of such a petri-net, which allows tasks like "open Flap" or "close Flap" to be deduced from the actions a user is performing in the VR. The types of petri-nets that are used for the task-deduction component of the virtual reality system are "state/transition-nets with named marks", a special class of petri-nets. The basic symbols of the petri-nets used are given below.

(Figure: basic symbols of the petri-nets used - transition, state, action)

(Figure: the task-deduction nets send task-descriptions to, and receive state-information from, the action-planning system)

Fig. 6: Cooperation between the petri-nets for task-deduction and the action-planning system of the IRCS


To decide about the next transition, the actual angle of the flap's joint has to be evaluated. If, for example, the user opened the flap, the angle is approximately 90 degrees, so that the mark is to be moved to "Flap open". On the way from "Flap released" to "Flap open" in fig. 6, we passed the six-edged "communication" symbol, which indicates that the task description "open Flap" is to be sent to the action planning component of the robot control system at this time, to have the robot perform this task physically.
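To make the deduction mechanism concrete, the following minimal sketch shows a state/transition net with a named mark for the flap example. It is our own reconstruction, not the IRF source code; the state names follow fig. 6, while the event names, the 45-degree threshold and everything else are assumptions:

```python
class FlapNet:
    """Tiny state/transition net with a named mark, in the spirit of
    the task-deduction nets of fig. 6.  VR events move the mark, and
    passing a 'communication' transition emits a task description
    for the action planning component."""

    def __init__(self, send_task):
        self.mark = "Flap closed"      # the named mark starts here
        self.prev = None               # stable state before gripping
        self.send_task = send_task    # channel to the action planner

    def on_event(self, event, joint_angle_deg=0.0):
        if event == "flap gripped" and self.mark in ("Flap closed", "Flap open"):
            self.prev = self.mark
            self.mark = "Flap gripped"
        elif event == "flap released" and self.mark == "Flap gripped":
            # Evaluate the flap joint angle to decide the next transition.
            if joint_angle_deg > 45.0:
                self.mark = "Flap open"
                if self.prev == "Flap closed":
                    self.send_task("open Flap")   # 'communication' symbol passed
            else:
                self.mark = "Flap closed"
                if self.prev == "Flap open":
                    self.send_task("close Flap")

net = FlapNet(send_task=print)
net.on_event("flap gripped")
net.on_event("flap released", joint_angle_deg=88.0)   # emits: open Flap
```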

In order to be able to flexibly adapt the petri-nets to new types of tasks, a description language was defined that allows the correct set of rules for different applications to be loaded into the VR system. As the necessary set of rules is formulated in this description language, the mechanism that works on these rules could be kept generic; no source-code changes in the VR system are necessary.

4 Images and Metaphors in the Virtual World

The VR system used is based on the PC-based robot simulation system COSIMIR (Cell Oriented Simulation of Industrial Robots), which was developed at the IRF [2]. COSIMIR does the rendering and also generates the messages needed to make the transitions in the task deduction nets.

Fig. 8: Experts want to see the robots

COSIMIR is very flexible and is also a good basis for introducing the new ideas for robot supervision, teleoperation and object placement aids into the VR. Fig. 8 shows the same view of the virtual world as fig. 3, except that here the robots are shown as wireframe objects to give the user a supervision capability.

Fig. 9: The robots are handling an object cooperatively

Fig. 9, again with the supervision mode activated, shows that the automatic action planning component of the IRCS automatically employed both CIROS robots in a cooperative manner, like two human arms, to move the sample container. Although the container in the virtual world was handled by one user's hand, the action planning component, with its knowledge about the multi-robot system, automatically made this choice to handle the container properly.

Fig. 10: Visual aids for object placement and supervision

Fig. 10 shows another two visual aids for the user in the virtual world; the image of the user's hand, which has gripped an open sample container, is displayed in the center of fig. 10. The first visual aid is the wireframe representation of the container displayed above the gripped container box. This wireframe - displayed in red on a color display - shows up in the virtual world if the user approaches a sensible deposit for the object that is currently being grasped.
The user may then release the object and it will snap to the wireframe position, so that minor trembling or errors of the data-glove position sensor can be compensated for automatically. The wireframe on the right of the container, the second type of visual aid, indicates the actual position of the physical container in the CIROS testbed, which has not yet been moved by the robots.
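The snapping aid just described amounts to a simple nearest-deposit test. The following minimal sketch is our own illustration; the snap radius and all names are assumptions:

```python
import numpy as np

SNAP_RADIUS = 0.10   # metres; assumed tolerance, adjustable in the VR software

def place_released_object(release_pos, deposits):
    """Snap a released object to the closest 'sensible deposit' (the
    red wireframe pose) if it lies within the snap radius, so that
    trembling and data-glove sensor errors are compensated.
    Otherwise the object stays where the user released it."""
    release_pos = np.asarray(release_pos)
    if not deposits:
        return release_pos
    best = min(deposits, key=lambda d: np.linalg.norm(np.asarray(d) - release_pos))
    if np.linalg.norm(np.asarray(best) - release_pos) <= SNAP_RADIUS:
        return np.asarray(best)      # snap to the wireframe position
    return release_pos               # no deposit nearby: free placement

deposits = [np.array([0.40, 0.00, 0.80]), np.array([0.40, 0.30, 0.80])]
print(place_released_object([0.43, 0.02, 0.78], deposits))  # snaps to first deposit
```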

Fig. 11: Teleoperation commanded in the virtual world

Fig. 11 gives an example of how teleoperation of a robot's hand camera can be commanded in the VR. The user just grips a virtual camera as a metaphor for the teleoperation mode and positions it so that the desired object can be inspected. This action makes the action planning component switch the multi-robot control system to teleoperation mode and guide to the desired position a currently available robot that is equipped with a hand camera - as depicted by the wireframe robot in fig. 11.

Last but not least, the clock that is shown in figs. 3 and 8 is used as a metaphor for "virtual time". An application is e.g. the heating of a sample. The user might want to move a sample into a heater and would normally have to wait for e.g. 2 hours to remove it again. In the realized VR system the user will insert the sample into the heater, but then he will adjust the image of the clock to two hours later and immediately remove the sample again. Both tasks, insertion and removal of the sample, will immediately be sent to the planning system; but as the tasks contain a time stamp for their execution, the insertion task will be carried out immediately, whereas the action planning system will not start to remove the sample until the two hours have passed.
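A minimal sketch of this time-stamping idea, as we understand it (our own illustration, not the IRCS implementation; all names are assumptions): every deduced task carries an execution time derived from the virtual clock, and the planning side defers tasks whose time has not yet come:

```python
import heapq
import time

class TimedTaskQueue:
    """Toy scheduler in the spirit of the 'virtual time' metaphor:
    tasks deduced in the VR carry an execution time stamp, and the
    planning side only starts a task once its time has arrived."""

    def __init__(self):
        self._heap = []

    def submit(self, execute_at, task):
        # Both 'insert' and 'remove' arrive immediately; the stamp
        # decides when each task is actually carried out.
        heapq.heappush(self._heap, (execute_at, task))

    def due_tasks(self, now):
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[1])
        return ready

queue = TimedTaskQueue()
now = time.time()
queue.submit(now, "insert sample 1 into heater slot 1")
queue.submit(now + 2 * 3600, "remove sample 1 from heater slot 1")  # clock set 2 h ahead
print(queue.due_tasks(now))   # only the insertion is due immediately
```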
5 Conclusion

Virtual reality technology has developed the potential to become a key technology for the design of modern man-machine interfaces in different fields. With intuitively operable interaction media like the data-glove and head-mounted stereo displays getting better and cheaper, the display quality and operability of the systems have reached a standard where automation applications should be envisaged. Whereas most VR applications aim at "plain VR", that is, the improvement of the virtual worlds that are displayed to the user, this paper showed the application of new ideas related to projective virtual reality, where the aim is to use VR technology as an intuitively operable man-machine interface for robotic systems. The presented new task deduction approach was developed to "project" virtual actions onto robotic systems, that is, to make physical robots do the same tasks in the physical environment that have been carried out by the user in the virtual environment. This mode of operation holds great promise for future programming and remote control applications of robotic systems.

6 References

[1] Ashmore, L.; Lara Ashmore's Home Page: Excellent starting point for virtual reality related topics: http://curry.edschool.virginia.edu/~lha5w/vr/.

[2] Freund, E.; Rossmann, J.; Uthoff, J.; van der Valk, U.; "Towards Realistic Simulation of Industrial Robots", Proceedings of the IEEE/RSJ/GI International Conference on Intelligent Robots and Systems (IROS), Munich, Germany, September 1994.

[3] Freund, E.; Rossmann, J.; Hoffmann, K.; "Automatic Action Planning as a Key Technique for Virtual Reality based Man Machine Interfaces", Proceedings of the Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI'96), Washington D.C., USA, 1996.

[4] Helsel, K. (Editor); "Virtual Reality: Theory, Practice and Promise", Meckler, London, 1991, ISBN 0-88736-728-3.

[5] Krueger, M.; "Artificial Reality II", Addison-Wesley Publishing Company, 1991, ISBN 0-201-52260-8.

[6] Kammerer, B.; Maggioni, C.; "Gesturecomputer - Research and Practice", Proceedings of the 4th International Conference: Interface to Real and Virtual Worlds, pp. 251-260, Montpellier, France, 1995.

