The Virtual Round Table - a Collaborative Augmented Multi-User Environment

Wolfgang Broll, Eckhard Meier, Thomas Schardt
Institute for Applied Information Technology (FIT), GMD - German National Research Center for Information Technology, Sankt Augustin, Germany

wolfgang.broll@gmd.de, eckhard.meier@gmd.de, thomas.schardt@gmd.de

ABSTRACT
Existing immersive virtual reality and in particular augmented reality systems are usually based on dedicated and expensive hardware and I/O devices. Nevertheless, they often provide only limited support for collaboration between multiple users and can hardly be used in arbitrary, location-independent application environments. Additionally, navigation and interaction within such virtual environments are not very natural or intuitive, making them difficult to use. This paper introduces a new collaborative augmented reality environment - the Virtual Round Table. This environment is designed to support location-independent mixed reality applications, overcoming the collaboration and interaction limitations of existing approaches. Moreover, it extends the physical workplace of the users into the virtual environment, while preserving traditional verbal and non-verbal communication and cooperation mechanisms.

Keywords
Augmented reality, CSCW, collaborative virtual environments, multi-user.

1 INTRODUCTION
Imagine yourself sitting at a table in a restaurant, giving the person opposite you directions to the railway station. You use salt and pepper shakers or a glass to identify major buildings, while drawing streets on the table cloth with your knife. Another person sitting at the table next to you has watched your explanation and spontaneously leans forward to correct you. This is a very intuitive as well as powerful way to convey complex information. Now, what if your counterpart would not only listen to your explanations, but additionally see the appropriate virtual 3D objects (streets, buildings, the railway station) exactly where you put the placeholder objects? He would obtain a much richer and better understanding of your explanation, while for you it would still be the same simple and intuitive interface, allowing you to interact with the objects or even collaborate with other persons very spontaneously.

This simple example already shows most of the features the Virtual Round Table (VRT) environment is based on:

• augmentation of the current (working) environment
• collaboration between multiple users
• intuitive interaction with 3D objects

The Virtual Round Table is an interactive, task-oriented cooperation environment based on augmented reality technology [8]. A prototype of the Virtual Round Table is currently being developed and evaluated within the CAMELOT (Collaborative Augmented Multi-User Environment with Live Object Tracking) project. The Virtual Round Table environment enables the participants of a work group to share a 3D application within their regular working environment. Dynamic communication processes are supported by the VRT environment in a task-oriented approach. Beside common face-to-face communication, the system particularly encourages non-verbal communication, visual association, and the sensorimotor abilities of the work group members. The basic idea of the Virtual Round Table is the perspectively correct 3D stereo visualization of a synthetic scene within the real-world working environment of the user, using see-through projection glasses (see figure 1). Augmented reality is used as a key technology to enhance the real world with virtual objects. In our approach this technology is exploited to facilitate intuitive multi-user interactions, which are realized by combining real and virtual objects into a conceptual unit. By superimposing physical items with virtual objects, they become symbolic representatives of their attached virtual counterparts. In this way, manipulations can be directly mapped onto the appropriate virtual objects and hence enable users to interact with these objects in a straightforward manner. This concept of an intuitive tangible user interface provides the basis for the Virtual Round Table to become a collaborative work group environment. Complex structures represented by spatial arrangements of different objects can be manipulated arbitrarily by a group of users. By using the user's hand as the primary manipulator of objects, the Virtual Round Table system can take advantage of the human senses and manual dexterity with respect to virtual object manipulation.

In contrast to other technical scenarios based on shutter glasses, such as the Responsive Workbench [19] or a CAVE [12], the Virtual Round Table is location independent and provides an individually adapted stereo view of the virtual world artifacts for each user. The use of conventional, non-sensor-attached items as placeholder objects supports flexibility, extendability, and local independence. In contrast to traditional purely virtual environments or video conferencing systems, familiar verbal and non-verbal communication is not restricted by the system. In fact, our approach emphasizes the use of common collaboration and cooperation mechanisms used in regular meeting situations and extends them into the virtual environment.

In this paper we will present the basic components of the Virtual Round Table environment and show how they are applied to our sample application scenario. In the second section we introduce the Virtual Round Table architecture. In the third section we present our current application scenario, used for testing the approach with user groups. The fourth section gives a short overview of the results we obtained as well as of the problems we faced. The final section compares our approach to existing augmented reality systems.

Figure 1.• Basic Virtual Round Table architecture (real table, projected virtual objects, superimposed real-world objects, light-weight stereo display, 6 DOF tracker, wireless ethernet, laptop computer)

Figure 2.• Stereo projection glasses and light-weight camera

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. CVE 2000, San Francisco, CA USA. © 2000 ACM 1-58113-303-0/00/09...$5.00

2 VIRTUAL ROUND TABLE ARCHITECTURE
In this section we describe the basic hardware and software components of the current Virtual Round Table prototype.

2.1 Visualization
The visualization of 3D objects is based on the multi-user virtual reality toolkit SmallTool [10]. While this toolkit is currently available on IRIX, Solaris, Windows 9x/NT, and Linux, we focus on a PC-based solution for Virtual Round Table applications in order to be able to use mobile laptops as soon as sophisticated 3D acceleration becomes available for them. Additionally, a PC-based solution is a precondition for the intended low-cost approach. We currently use a Windows NT based environment for testing and evaluation purposes. Augmentation is realized using semi-transparent stereo projection glasses (see figure 2). While the semi-transparent projection of virtual objects into the real scene does not allow for complete superimposition or covering of real objects, it provides a more accurate view than augmentation based on video mixing. We currently use the Sony light-weight projection glasses LDI-D100, providing a resolution of 800x600 (SVGA) at 85 Hz. Due to the SmallTool rendering engine, 3D objects are based on the VRML'97 ISO standard. The system requires a stereo-capable OpenGL compliant graphics accelerator.

2.2 Tracking and Registration
The basic problems which have to be solved by all augmented or mixed reality environments are the tracking of the users' viewpoints, to allow the perspectively correct visualization of virtual objects projected into the real environment, and the registration of real-world objects or landmarks. To keep the visualization of the virtual scene permanently synchronized with the movements of the individual user, the real-world location and viewing direction of each participant has to be tracked continuously and in real time by an appropriate tracking device. Because of the high accuracy of the human visual system, the device's position and orientation detection mechanisms have to be highly accurate. To give the user the freedom to move around and manipulate objects arbitrarily, the ideal system would be wireless as well as sourceless. Moreover, the tracking device should be insensitive to external influences, to avoid temporal discrepancies in the synchronization of the real and virtual scene. Existing magnetic or ultrasonic tracking systems have to deal with problems such as metal disturbances or occlusion, which are unacceptable for the intended application areas. Within the first prototype of the Virtual Round Table environment we use the InterSense IS-600 inertial tracking system for viewpoint tracking of the participating users. In order to use this system for accurate six-degree-of-freedom tracking, a rather large local installation of ultrasonic-based emitters is required. This limits the portability and location independence of the overall system significantly. Moreover, the ultrasonic sensors are susceptible to occlusion. Additionally, wired sensors mounted to the user's head restrict her possibilities to move and interact freely.
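A common way to meet these tracking requirements - fast inertial updates whose drift is bounded by an absolute reference such as a magnetic sensor - is a complementary filter. The following single-axis sketch uses made-up data and is not the algorithm of the IS-600 or MOVY trackers:

```python
# Single-axis complementary filter: fuse fast-but-drifting gyroscope
# rates with noisy-but-absolute magnetometer headings. Illustrative
# only -- all data are invented; not the actual IS-600/MOVY algorithm.

def complementary_filter(gyro_rates, mag_headings, dt, alpha=0.98):
    """Return one fused heading estimate (degrees) per sample.

    alpha weights the integrated gyro heading; (1 - alpha) pulls the
    estimate toward the absolute magnetometer reading, which bounds
    the accumulated drift.
    """
    heading = mag_headings[0]            # initialize from the absolute sensor
    fused = []
    for rate, mag in zip(gyro_rates, mag_headings):
        predicted = heading + rate * dt  # dead-reckoning step (integration)
        heading = alpha * predicted + (1 - alpha) * mag
        fused.append(heading)
    return fused

# True heading: constant 90 degrees. The gyro reports a +5 deg/s bias
# (pure drift); the magnetometer is correct on average.
fused = complementary_filter([5.0] * 500, [90.0] * 500, dt=0.01)
# Raw integration alone would drift to 115 degrees after 5 s; the
# fused estimate stays within a few degrees of the truth.
print(round(fused[-1], 1))
```

With alpha = 0.98 the steady-state error caused by a constant gyro bias stays small instead of growing without bound, which is the essential property such a tracker needs.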

2.2.1 Inertial Tracking Device
To overcome these problems, we are currently developing a six-degree-of-freedom inertial tracking device based on the existing (three-degree-of-freedom) MOVY prototype [15]. The main features of the advanced six-degree-of-freedom MOVY are:


• wireless operation
• sourceless operation (no stationary components)
• simple external initialization and re-calibration
• low-cost device (less than one tenth of the cost of the tracking system currently used)

The MOVY six-degree-of-freedom tracker is based on three basically independent types of sensors (see figure 3):

• inertial sensors to detect linear acceleration along three perpendicular axes
• gyroscopic sensors to detect rotational acceleration around three rotation axes
• magnetic sensors for additional calibration of the rotational values (inclination and declination)

Figure 3.• MOVY sensor types (inertial, gyroscopic, and magnetic sensors, each measuring along the x, y, and z axes)

In addition to these sensors, an external initialization and re-calibration of the inertial sensors is required to prevent drift. We are currently evaluating different methods to address this problem. Possible solutions include temporarily fixed radio-based emitters or video-based approaches (see below).

2.2.2 Video-based Tracking
In order to realize the registration of real-world objects as tangible interfaces to the virtual environment, continuous and accurate tracking of these objects is required in addition to the detection of the users' viewpoints. Since our goal is to use arbitrary objects available in the current environment of the user, our approach does not rely on sensor-based tracking or on objects with particular markers attached to them. To achieve this, we uniquely register each physical object and track its movements within the real environment by a camera-driven image recognition process. Two cameras are attached to the stereo glasses of each user (see figure 2). By using two cameras, a stereo image can be obtained, which will allow us to extract additional depth information based on image disparity. In our current prototype, object recognition is limited to the position of the objects on the table. This is realized by simple pattern matching algorithms. However, the realization of more complex manipulations than simple movements of the objects requires the recognition of additional object features using advanced mechanisms. Most problems caused by partial occlusion, leading to inaccurate or even missing position data, could be solved satisfactorily by combining the camera tracking information of all local participants. As long as at least one participant (i.e. her head-mounted camera) can see an object, its position can be calculated (obviously, it is not necessary to detect positions of objects not used by any user). In addition to the real objects, the round table itself is tracked by the camera. This provides the tracking system with additional information on the viewpoint of the user. The calculated viewpoint data can be used for external re-calibration of the viewpoint tracking system.

2.3 Object Manipulation
The main interaction paradigm used within the Virtual Round Table scenario is the manipulation of virtual objects by tangible real-world placeholder objects. This requires real-world objects (such as cups, books, salt and pepper) to be introduced to the system as placeholder objects. These objects are then associated with a synthetic virtual 3D object. Thus real and virtual objects form a conceptual unit - an interaction unit (see figure 4). By superimposing physical items with virtual objects and using them as placeholder objects, they become symbolic representatives of their attached virtual counterparts. In this way, manipulations can be directly mapped onto the appropriate virtual objects and hence enable users to interact with these objects in a straightforward manner.

Figure 4.• Interaction units as tangible interfaces (a real placeholder object and its associated virtual artifact form an interaction unit)
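This coupling can be sketched as a thin "interaction unit" object that copies the tracked pose of the placeholder onto its virtual counterpart. All names below are hypothetical; the paper does not expose the actual VRT or SmallTool interfaces:

```python
# Minimal sketch of an "interaction unit": a real placeholder object
# rigidly coupled to a virtual artifact. Names are hypothetical; the
# actual VRT/SmallTool interfaces are not described in the paper.

from dataclasses import dataclass, field

@dataclass
class Pose:
    x: float = 0.0          # position on the table plane (metres)
    y: float = 0.0
    heading: float = 0.0    # rotation about the table normal (degrees)

@dataclass
class VirtualArtifact:
    name: str
    pose: Pose = field(default_factory=Pose)

@dataclass
class InteractionUnit:
    placeholder_id: str     # e.g. the registered "salt shaker"
    artifact: VirtualArtifact

    def on_tracker_update(self, pose: Pose) -> None:
        # Manipulating the placeholder directly manipulates the
        # artifact: the tracked pose is copied one-to-one.
        self.artifact.pose = pose

unit = InteractionUnit("salt_shaker", VirtualArtifact("railway_station"))
unit.on_tracker_update(Pose(x=0.30, y=0.15, heading=45.0))
print(unit.artifact.pose)
```

Because the mapping is a plain copy, moving or turning the placeholder moves or turns the virtual object with no extra interaction technique to learn.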

3 APPLICATION SCENARIO
Application scenarios for the Virtual Round Table are based on three major components: collaboration, augmentation, and interactivity. Potential applications include, but are not limited to, spatial or urban planning and architecture, education and simulation, catastrophe management, as well as stage and fair planning. We are currently exploring the possibilities of the Virtual Round Table environment in the area of building construction and urban planning. An augmented round table with 3D construction models on it is a powerful tool to fill the gap between an architectural design and the final physical building. It serves as a virtual 3D scale model with high mobility and flexibility of use. Figure 5 shows a simple sample scenario (in this example a single fixed-mounted camera is used rather than head-mounted cameras). The VRT is used as a major management tool for a construction project director and her team.


Figure 5.• Simple Virtual Round Table scenario.

Project preparation consumes months of work for a medium-size project, and there are still many misunderstandings and confusions when the project is realized. Before the actual construction, the Virtual Round Table environment can be of great benefit during the phase of digesting the design and preparing it for the large number of experts and teams, who need to obtain a three-dimensional perception of the buildings and their environment. In addition to general urban planning tasks, it can also be used for the preparation of construction material, work planning, etc. for individual buildings. During the construction, it can be used for troubleshooting, detail comprehension, progress control, and modification of the design due to new constraints. The combination of augmented reality and the unification of real and virtual objects into a manipulable entity emphasizes the communicative potential of an active modeling process through a detailed but still flexible representation of the objects of interest. Assuming a collaborative discussion process among different kinds of experts and project members, the Virtual Round Table environment enables all participants to easily develop a number of varying design concepts. In comparison to purely physical models, visually improved and changeable virtual objects advance the participants' conception of the construction. Additionally, abstract information such as stress and loading can be visualized and taken into account. Thus, optimizations of producibility and expense can be evaluated dynamically. Diagnostics of different constellations can easily be verified to ensure an optimal utilization of all available capacities.
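The camera-based position detection used in this scenario is described in section 2.2.2 only as "simple pattern matching". A toy stand-in - template matching by sum of absolute differences over a grayscale image - can illustrate the principle:

```python
# Toy template matching by sum of absolute differences (SAD): slide a
# small grayscale template over an image and return the offset with
# the lowest difference. The paper's actual recognition pipeline is
# not specified; this only illustrates the principle.

def find_object(image, template):
    """image, template: 2D lists of grayscale values.
    Returns (row, col) of the best-matching top-left position."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(th) for j in range(tw))
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

# A 6x6 "camera image" with a bright 2x2 blob at row 3, col 2.
img = [[0] * 6 for _ in range(6)]
for r, c in [(3, 2), (3, 3), (4, 2), (4, 3)]:
    img[r][c] = 255
tmpl = [[255, 255], [255, 255]]
print(find_object(img, tmpl))   # -> (3, 2), the blob's top-left corner
```

Real table-top tracking must additionally cope with lighting changes, rotation, and partial occlusion, which is why the paper combines the camera views of all participants.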

4 RESULTS AND RESTRICTIONS
Based on experiences made with the first prototype of the Virtual Round Table, we obtained the following results: The limited field of view (28 degrees horizontal) of the light-weight glasses currently used disturbs the user's view significantly. Another particular problem is that the size of the see-through area is wider than the actual display. We hope to overcome these problems with next-generation projection glasses. Nevertheless, most users would prefer more advanced displays such as virtual retinal displays (e.g. [26]) or holographic displays (e.g. [16]) as soon as such devices become available. We discovered that it is particularly important to consider real objects covering virtual objects further away. Otherwise an effect occurs which instantly destroys the 3D stereo impression of the viewer, since the brain seems to perceive a paradox. The problem can only be solved by displaying unlit (black) virtual objects representing the real objects at the corresponding locations. This approach, however, is not feasible in general, since the user should be able to use arbitrary real-world objects (not only those already known to the system). The augmentation of the real environment with virtual objects using semi-transparent displays has a number of inherent disadvantages: first, complete occlusion of real objects is not possible; lit parts of the environment will always be visible, even when covered by virtual objects; second, the darker a virtual object is, the more it tends to appear transparent, since darkness and transparency are both realized by a lack of light emitted from the projection glasses (see figure 6).
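The unlit-phantom workaround mentioned above can be illustrated per pixel: geometry for known real objects contributes only depth and blackness, so nearer real objects suppress the virtual pixels behind them. A simplified sketch with assumed data, not the actual renderer:

```python
# Per-pixel sketch of the "phantom object" occlusion fix: real objects
# known to the system are rendered as black, depth-only phantoms, so a
# virtual object behind a real one is suppressed instead of shining
# through the see-through display. Assumed data, not the VRT renderer.

def composite(virtual_px, phantom_depth, virtual_depth):
    """Return the color the see-through display should emit per pixel.

    virtual_px: (r, g, b) of the virtual scene at this pixel, or None.
    phantom_depth / virtual_depth: distances from the eye (None = empty).
    Black (0, 0, 0) means "emit no light": the real world stays visible.
    """
    if virtual_px is None:
        return (0, 0, 0)                 # nothing virtual at this pixel
    if phantom_depth is not None and phantom_depth < virtual_depth:
        return (0, 0, 0)                 # real object is nearer: occlude
    return virtual_px                    # virtual object visible

# Virtual cube at 2 m behind a real cup phantom at 1 m: the cup wins.
assert composite((200, 0, 0), 1.0, 2.0) == (0, 0, 0)
# Phantom behind the virtual object: the virtual object is drawn.
assert composite((200, 0, 0), 3.0, 2.0) == (200, 0, 0)
```

This is exactly why the approach fails for arbitrary objects: a real object with no registered phantom never occludes, and the virtual object shines through it.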

Figure 6.• Dark object areas appear transparent when displayed by see-through devices.

While the projection glasses currently used already allow the user to adjust the level of transparency (from see-through to almost immersive) by an additional LCD layer, this layer currently covers the whole display. Providing such a layer on a per-pixel basis (one LCD cell for each pixel) would allow us to overcome most of the problems of current see-through devices. While this approach would require only minor changes to the current display technology, it would also require appropriate graphics accelerators providing not only an RGB but rather an RGBA video signal. We are currently also evaluating video-based augmentation techniques, where the video image captured by a head-mounted camera is used as the background image within the display, while the projection glasses operate in immersive (non-see-through) mode. However, due to the missing depth information within the video image, occlusion problems are even more severe. While the quality of the six-degree-of-freedom trackers has improved significantly, their repeat accuracy is still insufficient for augmented reality applications. Thus, additional methods are required for proper registration. Similar to other existing approaches, we currently try to overcome this problem by an additional calibration based on the camera images. This approach, however, cannot easily be extended to arbitrary environments, where significant physical properties to be tracked by the camera recognition system are hardly known in advance. Thus, the further improvement of the tracked six-degree-of-freedom data by redundant tracking components seems to be the only feasible solution. Spontaneous interaction with (arbitrary) real-world objects is very important: cumbersome introduction processes and learning cycles for new placeholder objects discourage participants from using them. In particular application areas - especially construction - fully immersive walk-throughs may be required in addition to a pure augmented reality environment.

5 RELATED WORK
Augmented Reality (AR) as used for the Virtual Round Table approach is not a fundamentally new technology, but it was often restricted to expensive high-performance hardware and thereby limited to a small set of application areas [13]. Existing approaches can be subdivided into those based on semi-transparent see-through systems and video-based systems to combine the virtual environment with the current view of the user [20]. The Bricks prototype [14] uses physical items for the manipulation of synthetic objects. These artifacts can be temporarily attached to system-internal entities and hence can act as specialized, space-multiplexed input devices. The realized prototype application of the "Graspable User Interface" uses two Ascension Flock of Birds receivers as graspable bricks, operating on a rear-projected desktop surface called "Active Desk". The BUILD-IT [21] system is based upon the concept of a "Natural User Interface", that is, to empower computer interaction in a natural way using all of the user's body parts. The BUILD-IT application supports engineers in designing assembly lines and building plants. The technical infrastructure is based upon a table-top interaction area, enhanced by the projection of a 2D computer scene onto the table top. Additionally, a video camera is used to track manipulations of a small, specialized brick that can be used as a "universal interaction handler". DigitalDesk [27] is one of the first augmented reality environment systems. Its main approach is to shift functionality from a workstation onto a desk, instead of giving a workstation additional desktop properties as is done in traditional graphical user interfaces. The DigitalDesk application is focused on direct computer-based interaction with selected regions of paper documents. The TransVision system [22] was developed at the Sony Computer Science Laboratory. In contrast to other augmented reality systems mixing the real and virtual world, it is not based on head-mounted displays. The approach uses palmtop computers as display units. The user perceives the impression of a see-through screen at the palmtop computer. Actually, a small camera is attached to each palmtop computer. The position and orientation of this camera is captured by a 6 DOF tracker. The grabbed video image is superimposed with the virtual 3D objects and displayed on the palmtop screen. The system allows an arrangement for collaborative work similar to the one used by the Virtual Round Table environment. User interactions, however, are very restricted due to the overall architecture. The Real Reality [11] approach introduced at the University of Bremen is focused on the synchronous modeling of physical and virtual systems. So-called "twin-objects", consisting of a real artifact and a corresponding virtual representation, can be manipulated by the users at will. This kind of "complex construction kit" enables users to create dynamic physical and virtual models in parallel. The Studierstube system [25] developed at the Technical University of Vienna uses light-weight head-mounted displays to project artificial 3D objects into the real world. In contrast to the Virtual Round Table environment, the system uses a stationary setup. The approach introduces the Personal Interaction Panel (PIP) [24] as a new input device for augmented reality. Virtual objects such as maps, buttons, or sliders are projected onto the panel and can be selected with a pen. While the use of the PIP shows that AR can be used to realize new interaction mechanisms, the interaction techniques are basically the same as those used within conventional immersive VR environments. The MIT Media Lab [17] introduced the vision of Tangible Bits [18], which allows users to grasp and manipulate digital information by coupling it to everyday physical objects and environments. The design of Tangible User Interfaces and ambient media is focused on foreground activities of the users as well as peripheral background information. Several prototype systems such as metaDESK, transBOARD, and ambientROOM have been developed that take the concept of a Tangible User Interface into account. Important application areas of augmented reality also include medical applications [9] and mechanical assembly line applications [23].

6 CONCLUSION AND FUTURE WORK
In this paper we presented our approach to a collaborative virtual environment based on augmented reality technology. The Virtual Round Table provides an interactive, location-independent, 3D-enhanced working environment for multiple users. By the use of new interaction techniques based on the combination of real-world objects and virtual world artifacts, we provide an intuitive and natural way to interact with virtual objects. Thereby the Virtual Round Table environment extends the user's workplace into time and (3D) space, providing the basis for new types of collaborative applications. In our future work we will continue our work on the six-degree-of-freedom MOVY tracker to provide a high-quality, low-cost viewpoint registration solution. Object tracking will be enhanced in order to provide information on object orientation in addition to pure location information. Video-based augmentation will be evaluated further. The impact on health risks when using head-mounted projection displays in regular working situations requires further investigation. Initial application scenarios will be tested with selected user groups to guide the further development of the user interface metaphor and to ensure the overall intuitiveness of the approach.

7 REFERENCES
[8] Azuma, R., "A Survey of Augmented Reality", Presence, Vol. 6, No. 4, ed. W. Barfield, S. Feiner, T. Furness III, and M. Hirose, MIT Press, Cambridge, MA, pp. 355-385 (1997).
[9] Berlage, T., "Augmented reality for diagnosis based on ultrasound images", in: Troccaz, J., Grimson, E., Mösges, R. (eds.), Proceedings CVRMed-MRCAS '97 (Lecture Notes in Computer Science No. 1205), pp. 253-262, Berlin: Springer (1997).
[10] Broll, W., "SmallTool - A Toolkit for Realizing Shared Virtual Environments on the Internet", Distributed Systems Engineering, Special Issue on Distributed Virtual Environments, No. 5, pp. 118-128 (1998).
[11] Bruns, F. W., "Integrated Real and Virtual Prototyping", in: Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society (IECON 98), Aachen, Germany (1998).
[12] Cruz-Neira, C., Sandin, D., and DeFanti, T., "Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE", in Proceedings of SIGGRAPH '93, ACM SIGGRAPH, pp. 135-142 (1993).
[13] Feiner, S., MacIntyre, B., and Seligmann, D., "Annotating the Real World with Knowledge-Based Graphics on a See-Through Head-Mounted Display", in Proceedings of Graphics Interface 1992, pp. 78-95 (1992).
[14] Fitzmaurice, G. W., Ishii, H., Buxton, W., "Bricks: Laying the Foundations for Graspable User Interfaces", in Proceedings of CHI '95 Conference on Human Factors in Computing Systems, pp. 442-449, ACM, New York (1995). [www] http://www.dgp.toronto.edu/people/GeorgeFitzmaurice/papers/gwf_bdy.html
[15] Henne, Peter, "MOVY - Wireless Sensor for Gestures, Rotation, and Movement", ERCIM NEWS, No. 36, p. 42 (Jan. 1999).
[16] Holographic displays, Musion GmbH. [www] http://www.musion.de/
[17] Ishii, H., Tangible Media Group Project List, 1996. [www] media.mit.edu
[18] Ishii, H., Ullmer, B., "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms", CHI '97, Atlanta, Georgia (1997). [www] http://tangible.www.media.mit.edu/groups/tangible/papers/Tangible_Bits_CHI97/Tangible_Bits_CHI97.pdf
[19] Krüger, W., Bohn, C., Fröhlich, B., Schüth, H., Strauß, W., and Wesche, G., "The Responsive Workbench: A Virtual Work Environment", IEEE Computer 28 (7), pp. 42-48 (1995).
[20] Milgram, P. and Kishino, F., "Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum", SPIE Vol. 2351-34, Telemanipulator and Telepresence Technologies, pp. 282-292 (1994).
[21] Rauterberg, M., Fjeld, M., Krueger, H., Bichsel, M., Leonhardt, U., and Meier, M., "BUILD-IT: a video-based interaction technique of a planning tool for construction and design", in H. Miyamoto, S. Saito, M. Kajiyama, and N. Koizumi (eds.), Proceedings of Work With Display Units (WWDU '97), pp. 175-176, Tokorozawa: NORO Ergonomic Lab (1997). [www] http://www.ifap.bepr.ethz.ch/~rauter/publications/WWDU97paper.pdf
[22] Rekimoto, J. and Nagao, K., "The world through the computer: Computer augmented interaction with real world environments", in Proceedings of UIST '95, ACM Symposium on User Interface Software and Technology, pp. 29-36 (November 1995).
[23] Sharma, R. and Molineros, J., "Interactive Visualization and Augmentation of Mechanical Assembly Sequences", in Proceedings of Graphics Interface '96, pp. 230-237 (1996).
[24] Szalavári, Zs. and Gervautz, M., "The Personal Interaction Panel - A Two-handed Interface for Augmented Reality", in Proceedings of EUROGRAPHICS 1997 (1997).
[25] Szalavári, Zs., Schmalstieg, D., Fuhrmann, A., Gervautz, M., "Studierstube - An Environment for Collaboration in Augmented Reality", Virtual Reality Journal, Springer (1998).
[26] Virtual Retinal Displays, Microvision Inc. [www] http://www.mvis.com/
[27] Wellner, P., "Interacting with Paper on the DigitalDesk", Communications of the ACM, 36 (7), pp. 87-96 (1993).

