Fig. 5: Commanding of an Assembly Task in the Virtual Environment

Fig. 5 shows another, less sophisticated application, which was developed for teaching purposes, to make students familiar with VR applications. Here the operator's hand is shown while assembling the "Cranfield Assembly Set".

3 The Idea of Projective Virtual Reality

When we started to control the robots via VR, we immediately found that the standard teleoperation or "hand-tracking" approach would not work for most of our applications, which contain assembly tasks. The following problems arose:
- Time delays between the display of a robot's movement in the VR and its physical movement are critical for the stability of the process because, similar to standard teleoperation approaches, the user is still "in a realtime control loop".
- The graphical model has to be very precise.
- The measurement of the position and orientation of the data-glove has to be very precise.
- Measures have to be taken to reduce "trembling" of the operator's hand.
- Online collision avoidance for the robots is necessary to guarantee safe operation.
- A versatile sensor control is necessary to compensate for unwanted tensions when objects are inserted into tight fittings.

To cope with the problems mentioned above, another mode of operation of the VR-system was developed: the task deduction mode. The solution was to enhance the VR-system in such a way that, while the user is working, the different subtasks he carries out are recognized, and task descriptions for the IRCS, the multi-robot control system of the CIROS environment, are deduced. These task descriptions are then sent to the action planning component of the IRCS. The action planning component can "understand" task descriptions on a high level of abstraction like "open drawer", "insert sample 1 into heater slot 1" etc., and thus is the ideal counterpart for the task deduction component of the VR-system. Using this task deduction mode is almost ideal, because:
- The required communication bandwidth is low, because only subtasks like "open flap", "move part A to location B" or "close drawer" are sent over the communication channel.
- The different subtasks are carried out safely by the robot control system with its inherent capabilities like task planning, robot coordination and sensor supervision.
- The user is no longer in the "realtime control loop": complete subtasks are recognized and carried out as a whole, without the necessity for immediate feedback to the user.
- For physical assembly tasks, limited accuracy of the environment model can be compensated for by automatic sensor-supported strategies.
- The accuracy of the data-glove tracking device is not as important as for the direct tracking mode; the allowable tolerances when the user is gripping an object or inserting a peg into a hole can be adjusted in the VR-software.
- If a heavy or fragile object is moved in the VR, the planning component is capable of automatically using the two robots in a coordinated manner to conduct the transport in reality.
- Different users working at different VR-systems can do different tasks that are sent to the planning component of the IRCS, which can then compute an adequate sequence for the tasks to be carried out, depending on the available resources. Thus one robotic system can serve e.g. multiple experimenters in a space laboratory environment.
- If the robot control is versatile enough, there is no longer a need to even show a robot in the virtual environment displayed to the user; the user thus increasingly gets the impression of carrying out a task "himself", which is the highest level of intuitiveness that can be achieved.
- If the planning component is versatile enough, it can control not only the robots but also other kinds of automated devices. The action planning component in the CIROS environment "knows" that to open the leftmost of the three drawers, it does not need to employ a robot: this drawer is equipped with a motor, so the component just has to control the motor to open it. Robot-automated and hard-automated tasks are thus controlled under one unified framework.
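Such high-level task descriptions keep the communication channel narrow because only a short symbolic message per subtask crosses it. A minimal sketch, assuming a hypothetical `TaskDescription` class and message syntax (the paper does not specify the IRCS wire format):

```python
from dataclasses import dataclass, field

@dataclass
class TaskDescription:
    """One deduced subtask, e.g. "open drawer" or "move part A to location B"."""
    action: str                           # high-level verb the planner understands
    params: dict = field(default_factory=dict)

    def encode(self) -> str:
        # A whole subtask fits in a few dozen bytes -- this is why the required
        # bandwidth stays low compared to streaming hand poses in realtime.
        args = " ".join(f"{k}={v}" for k, v in sorted(self.params.items()))
        return f"{self.action} {args}".strip()

# Subtasks deduced while the user works in the virtual environment:
queue = [
    TaskDescription("open", {"object": "drawer1"}),
    TaskDescription("insert", {"part": "sample1", "target": "heater_slot1"}),
    TaskDescription("close", {"object": "drawer1"}),
]
for task in queue:
    print(task.encode())   # e.g. "open object=drawer1"
```

Each message names only the goal; how the robots achieve it (paths, sensors, coordination) is left entirely to the action planning component.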
The list above gives the advantages of the presented approach, but it also shows that the robot control system used must have several features that are by far not state of the art. As this paper focuses on VR, however, we have to direct the reader interested e.g. in the planning component to [3], where the action planning component and the underlying methods are described in detail.

The key issue is splitting the job between the task deduction in the VR and the task "projection" onto the physical automation components by the automatic action planning component. This hides from the user the typical robot-related difficulties like singularity treatment, collision avoidance, sensor-based guidance of the robot and the problem of coordinating different robots, but still allows the capabilities of the robot system to be exploited. The necessary expertise to e.g. conduct an experiment in a space laboratory environment like CIROS is thus shared between the user, who has the necessary knowledge about the experiment, and the robot control, which has the necessary "knowledge" about how to control the robots.

[…] performed by the user "make sense" and whether the actions can be combined into a task description for the robotic system. Fig. 6 shows an example of such a petri-net, which allows tasks like "open Flap" or "close Flap" to be deduced from the actions a user is performing in the VR. The types of petri-nets that are used for the task-deduction component of the virtual-reality system are "state/transition-nets with named marks", a special class of petri-nets. The basic symbols of the petri-nets used are given below.

[Figure: basic symbols of the petri-nets used (transition, state, action)]

[Fig. 6 diagram: the task-deduction petri-nets send task descriptions to the action-planning system of the IRCS and receive state information in return]

Fig. 6: Cooperation between the petri-nets for task-deduction and the action-planning system of the IRCS
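The checking behaviour described above can be illustrated with a toy state/transition-net with named marks. The class, the action names and the marks below are illustrative assumptions, not the actual nets of the task-deduction component:

```python
# Toy state/transition-net with named marks: the net holds a set of named
# marks, and a transition fires when its input marks are present and the
# observed user action matches; firing may emit a task description for
# the IRCS. Actions that do not fit the current marking are rejected,
# i.e. they do not "make sense" in the current situation.
class PetriNet:
    def __init__(self, marks):
        self.marks = set(marks)            # e.g. {("flap", "closed")}
        self.transitions = []              # (pre-marks, action, post-marks, task)

    def add_transition(self, pre, action, post, task=None):
        self.transitions.append((pre, action, post, task))

    def observe(self, action):
        """Feed one user action; return a deduced task description or None."""
        for pre, act, post, task in self.transitions:
            if act == action and pre <= self.marks:
                self.marks = (self.marks - pre) | post
                return task
        return None                        # action makes no sense here

net = PetriNet([("flap", "closed"), ("hand", "free")])
net.add_transition({("flap", "closed"), ("hand", "free")}, "grip_flap",
                   {("flap", "gripped")})
net.add_transition({("flap", "gripped")}, "pull_flap",
                   {("flap", "open"), ("hand", "free")}, task="open Flap")

print(net.observe("pull_flap"))   # None: pulling before gripping makes no sense
print(net.observe("grip_flap"))   # None: part of a task, nothing deduced yet
print(net.observe("pull_flap"))   # "open Flap" can now be sent to the IRCS
```

Only when a complete, sensible sequence of actions has been observed does the net emit a task description, which matches the paper's point that complete subtasks are recognized and carried out as a whole.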
[…] position sensor can be compensated for automatically. The wireframe on the right of the container, the second type of visual aid, indicates the actual position of the physical container in the CIROS testbed, which has not yet been moved by the robots.

[…] cheaper, the display quality and operability of the systems have reached a standard where automation applications should be envisaged. Whereas most VR-applications aim at the "plain VR", that is the improvement of the virtual worlds that are displayed to the user,
this paper showed the application of new ideas related to
projective virtual reality, where the aim is to use VR-
technology as an intuitively operable man-machine
interface for robotic systems. The presented new task
deduction approach was developed to ,,project" virtual
actions onto robotic systems, that is to make physical
robots do the same tasks in the physical environment
that have been carried out by the user in the virtual envi-
ronment. This mode of operation holds great promises
for future programming and remote control applications
of robotic systems.
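As a rough illustration of this "projection" of deduced tasks onto physical automation components, a dispatcher might route each task either to a robot or to a hard-automated device, in the spirit of the motorized-drawer example earlier in the paper. The device names and the dispatch rule below are assumptions for illustration only:

```python
# Route a deduced task to the right automation component: a motorized
# drawer is driven directly (hard-automated), everything else falls back
# to a robot with its planning and sensor capabilities. Both kinds of
# tasks are handled in one unified framework.
MOTORIZED = {"drawer_left"}     # assumption: the leftmost drawer has a motor

def project(action, obj):
    if obj in MOTORIZED and action in ("open", "close"):
        return f"motor({obj}).{action}()"             # hard-automated device
    return f"robot.execute('{action}', '{obj}')"      # robot-automated task

print(project("open", "drawer_left"))   # motor(drawer_left).open()
print(project("open", "flap"))          # robot.execute('open', 'flap')
```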
6 References