
Architectures for Shared Haptic Virtual Environments, special issue of Computer & Graphics

Architectures for Shared Haptic Virtual Environments


Pietro Buttolo, Roberto Oboe** and Blake Hannaford*

Ford Motor Company, Scientific Research Laboratory, Dearborn, MI (pbuttolo@ford.com, corresponding author)
*University of Washington, Dept. of Electrical Engineering, Seattle, WA (blake@ee.washington.edu)
**University of Padua, Dept. of Elettronica ed Informatica, Padua, Italy (oboe@zen.dei.unipd.it)

1 Abstract
The lack of force feedback in visual-only simulations may seriously hamper user proprioception, effectiveness and sense of immersion while manipulating virtual environments. Haptic rendering, the process of feeding back force to the user in response to interaction with the environment, is sensitive to delay and can become unstable. In this paper we describe various techniques to integrate force feedback into shared virtual simulations in the presence of significant and unpredictable delays. Three different implementations are investigated: static, collaborative and cooperative haptic virtual environments.
2 Acknowledgments
This work was supported by the National Science Foundation (grant #BCS-9058408), and partially by the
Allen Innovation Award from the University of Washington Libraries.
3 Introduction
In the last few years, work on shared virtual environments has been particularly active. Distributed, multi-user simulations have been implemented for training [1], education [2], concurrent engineering
[3], entertainment [4], and battle simulation [5]. However, the lack of force feedback in visual-only simula-
tions seriously hampers user proprioception, effectiveness and sense of immersion while manipulating
objects in virtual environments. Virtual simulations in which haptic devices apply force feedback to the
user are receiving growing attention from both industry and universities [6], [7]. Haptic research has so far focused on the design of devices, human perception studies, and haptic rendering of virtual environments, leaving little room for the integration of haptics into shared virtual environments [8]. Spidar
was the first successful implementation of a multi-user haptic simulation. In Spidar, two users can simulta-
neously grasp and manipulate the same virtual object [9]. In the dual user configuration described in [10]
the joint action of two hands is necessary to successfully complete an assembly task.
There are two major problems in implementing a shared virtual simulation:
1) The manipulation, and therefore the modification of the same shared virtual environment by users at dif-
ferent sites might result in diverging representations. Coherency of the virtual environment state must be
guaranteed.
2) The need to communicate over large distances may introduce a significant latency. Moreover, latency
can be unpredictable when communication throughput is not guaranteed, as with the internet [11].
Both communication latency and system architecture influence the overall delay in processing haptic infor-
mation. In a graphic-only simulation with significant delay, the user adopts a “move and wait” strategy to
restore hand-eye coordination. In a haptic simulation, delay in processing information can easily bring the
system composed of user and device to instability, since haptic displays are active devices which exchange
energy with the user.
The focus in implementing a shared haptic simulation must therefore be to reduce delay in processing force information while satisfying application requirements and constraints. Moreover, because of limited development resources, it is often necessary to "plug in" force feedback into existing
visual simulations. In some cases limitations inherent in running a simulation over large distances make it
physically impossible to meet the initial requirements. Let's consider the task of simulating two users moving an engine block across a room, feeling each other pushing and pulling on the shared engine. A high quality simulation is impossible if communication latency is on the order of hundreds of milliseconds, as often happens across the internet.
We identified three major classes of shared interaction: 1) browsing static environments, such as feeling
haptic information in documents, databases, web pages; 2) sharing collaborative environments, in which
users alternate in manipulating a common environment; 3) interacting in cooperative environments, in
which the task requires the simultaneous action of more than one user.
The system implementation should also depend on how users interact with virtual objects. We will consider two modalities of rendering haptic information. The first, impulsive haptic rendering, models impulsive collisions such as kicking a ball or hammering a nail. The second, continuous haptic rendering, models extended contacts such as pushing against a wall or lifting an object.

In the next section the process of feeding back force as a result of user actions will be briefly described. In
particular we will outline its requirements in terms of refresh rate and stability. Then we will analyze the
three different system implementations for shared haptic virtual environments, pointing out limitations and advantages. A practical example will be described in more detail.
4 Haptic Rendering
Haptic rendering is the process of computing and applying force feedback to the user in response to his/her
interaction with the virtual environment. How haptic rendering is implemented should depend on the appli-
cation requirements, since there is no unique or best solution. In this section we will describe two different
approaches: impulsive haptic rendering and continuous haptic rendering.
4.1 Impulsive Haptic Rendering
In some applications we might be interested in haptically rendering only sharp collisions between objects, such as a tennis racket hitting a ball. We will call the module that detects collisions and calculates impulsive forces the Collision Detection Engine (CDE). For example, consider a discrete time computer simulation with sampling time T, modeling a ball of radius r_2 sliding on a frictionless surface and colliding with a virtual racket, as shown in Figure 4.1. We will call the module that computes the motion of the virtual objects the Dynamic Engine (DE). In this simple case it consists of the following equation:

x_2(t_1 + T) = x_2(t_1) + \dot{x}_2(t_1) T    (4.1)

Figure 4.1: A virtual racket and a tennis ball collide at t = t_c. The ball motion after the collision (drawing on the right) is calculated by equating energy and momentum before and after the collision.

If at time steps t_1 and t_2 the positions of racket and ball swap, as in the figure, a collision must have happened somewhere in between. Modeling motion with parametric functions, and enforcing contact between racket and ball, we can calculate the time of impact t_1 <= t_c <= t_2:

x_1(t) = x_1(t_1) + \dot{x}_1 (t - t_1)
x_2(t) = x_2(t_1) + \dot{x}_2 (t - t_1),   with   x_1(t_c) - x_2(t_c) = l_1/2 + r_2    (4.2)

In an elastic collision, kinetic energy and momentum must be equal before and after the collision, therefore:

m_1 \dot{x}_1 + m_2 \dot{x}_2 = m_1 \dot{x}_{1c} + m_2 \dot{x}_{2c}
(1/2) m_1 \dot{x}_1^2 + (1/2) m_2 \dot{x}_2^2 = (1/2) m_1 \dot{x}_{1c}^2 + (1/2) m_2 \dot{x}_{2c}^2    (4.3)
To simplify the implementation, the force applied to the racket to simulate the impulsive collision can be rendered as a uniform force pulse of duration T and intensity \Delta Q / T, where \Delta Q is given by

\int F(t) dt = \int m dv = m_1 \dot{x}_{1c} - m_1 \dot{x}_1 = \Delta Q    (4.4)


More realistic impulsive force profiles are described in [12]. This haptic rendering implementation does not need to run at a high sampling rate, since force feedback is applied open loop. The major limitation is that the impulsive rendering method does not work for extended contact with virtual objects. Continuous haptic rendering is a different approach that models this type of collision.
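The impulsive rendering steps above can be sketched in a few lines of code. This is a minimal illustration of eq. (4.2)-(4.4), not the paper's actual implementation; the function names are ours.

```python
def time_of_impact(t1, x1, v1, x2, v2, l1, r2):
    """Eq. (4.2): solve for t_c, the instant when the gap between the
    racket face (half-length l1/2) and the ball surface (radius r2)
    closes. Assumes the racket at x1 approaches the ball at x2, v1 > v2."""
    gap = (x2 - x1) - (l1 / 2.0 + r2)
    return t1 + gap / (v1 - v2)

def elastic_collision_1d(m1, v1, m2, v2):
    """Eq. (4.3): post-collision velocities conserving momentum and
    kinetic energy in a 1-D elastic collision."""
    v1c = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    v2c = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return v1c, v2c

def impulse_force(m1, v1, v1c, T):
    """Eq. (4.4): render the racket's momentum change dQ as a uniform
    force pulse of duration T and intensity dQ / T."""
    dQ = m1 * (v1c - v1)
    return dQ / T
```

For equal masses the velocities simply swap, which is a convenient sanity check on the closed-form solution.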
4.2 Continuous Haptic Rendering
When pushing a finger against a rigid wall, the amount of displacement induced in the wall and the force
applied by the finger are continuous variables satisfying a physical relation imposed by the wall imped-
ance, i.e. the transfer function between force and displacement, as shown in Figure 4.2:

Figure 4.2: A stiff virtual wall modelled as a mechanical impedance. The stiffness component (spring) models elastic collisions while the damper (shock absorber) models energy dissipation.
The CDE continuously estimates the force to be fed back to the user's finger from

F(t) = K x(t) + B \dot{x}(t)   if x > 0;   F(t) = 0   otherwise    (4.5)
In the case of moving objects, the force F computed by the CDE is used by the DE to simulate motion:

F(t) = m \ddot{x}(t) + b \dot{x}(t)    (4.6)
where m and b are the mass and damping associated with the object motion. The end-effector of the master manipulator is modeled in the virtual environment as a rigid body with zero inertia and damping and infinite stiffness. In general, objects of complex shape can be modeled as a collection of springs and shock absorbers normal to the surface.
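As a sketch (our names, not the paper's), eq. (4.5) and (4.6) translate into a spring-damper force law for the wall and a simple explicit Euler integration step for the object dynamics:

```python
def wall_force(x, x_dot, K=2000.0, B=5.0):
    """Eq. (4.5): spring-damper wall impedance. x > 0 is the
    penetration depth into the wall; outside the wall, no force."""
    if x > 0:
        return K * x + B * x_dot
    return 0.0

def integrate_object(x, v, F, m, b, T):
    """Eq. (4.6): one explicit Euler step of m*x'' + b*x' = F,
    returning the updated position and velocity after time T."""
    a = (F - b * v) / m
    v_new = v + a * T
    x_new = x + v_new * T
    return x_new, v_new
```

In a real haptic loop this pair of functions runs once per sample, with the gains K and B reduced as sampling time or delay grows, as discussed below.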
4.3 Integrating Graphics and Haptics
In practice, the continuous haptic renderer is implemented as a discrete time process, as shown in Figure
4.3. See [13] for a detailed discussion on computer controlled systems. In a shared virtual environment
communication between different components introduces delay into the system. Latency might be present
between the CDE and the DE, or between the Haptic Display position and force signals and the CDE.
How fast should we sample? And how will delay affect performance? Some authors suggest a threshold based on human perception, resulting in a requirement for a force reflection bandwidth of at least 30-50 Hz [14]. We found that, to realistically simulate collisions with rigid objects, a stiffness of at least 1000-10000 N/m must be simulated. However, how this relates to the sampling time T and communication delay depends on the haptic device itself and on the CDE-DE implementation.


Figure 4.3. Block diagram of a haptic virtual environment. An operator holding a haptic device interacts with a virtual object. The region outside the dashed box is a discrete time process with sampling time T. In a shared virtual environment, communication between different components introduces delay into the system. If the CDE and DE run on different sites, then delay3 and delay4 are present. If the CDE and the haptic device controller run on different sites, then delay1 and delay2 are present.

We experimentally measured stability maps “virtual object stiffness versus communication delay” and
“virtual object stiffness versus sampling time” for our system [15] and continuous haptic rendering. The
results are not only quantitatively valid for our setup, but also qualitatively of general significance. When
the sampling rate is decreased or delay is increased, the maximum stiffness that can be simulated without
bringing the system to instability drops dramatically.

Figure 4.4. Stability maps “virtual object stiffness versus communication delay” [left] and
“virtual object stiffness versus sampling time” [right]. To guarantee system stability with signif-
icant delay or slow refresh rate, it is necessary to reduce stiffness.

In our system, to simulate objects with 1000 N/m stiffness, the sampling rate must be at least 200 Hz and the delay less than 5 msec. Many state-of-the-art haptic systems use 1000 Hz sampling rates [6]. The real-time requirements for graphic rendering and haptic rendering are therefore quite different, depending on the haptic rendering implementation (see Figure 4.5).

In the following section we will analyze in detail three different architectures for Shared Haptic Virtual
Environments (SHVE). We will give implementation examples using both impulsive and continuous ren-
dering.


[Diagram: with impulsive haptic rendering, haptic and graphic rendering of the VE run together at 10-50 Hz; with continuous haptic rendering, a dedicated haptic loop runs at more than 200 Hz alongside the 10-50 Hz graphic loop.]
Figure 4.5: A virtual simulation with impulsive haptic rendering [left] and continuous haptic rendering [right] implementations. Note the different computational update constraints for the two sensory channels.

5 Architectures for Shared Haptic Virtual Environments


We can group architectures for SHVE in three major classes: static, collaborative and cooperative shared
virtual simulations [16], [17], [10]. In a static virtual environment each user can explore by looking and
touching, but not modify the environment. Users may or may not see each other during the simulation, but
cannot touch each other. Examples are browsing geometrical or shared databases on the net. In a collabo-
rative virtual environment users can modify the environment but may not simultaneously shape or move
the same virtual object. This scheme can be applied for surgical or professional training, co-located CAD
and entertainment. In a cooperative virtual environment users can simultaneously modify the same virtual object. Users can see and touch each other, directly or indirectly through a common object. Possible applications are similar to those for collaborative environments. In the following paragraphs we will analyze the three architectures in detail.
5.1 Static Virtual Environment: Browsing a Database
In a static virtual environment users cannot modify the environment. The restriction of not letting users
edit the environment greatly simplifies the implementation. Each user connects to a central database in a
Client-Server fashion. If the application requires awareness of others, status information, such as position, can be exchanged by communicating all information to the server, where it can be accessed by all participants, or directly from user to user, in a Peer-to-Peer fashion [18]. The simulation can be partitioned into multiple servers, each managing a different region. Clients migrate during the simulation from one server to another, and can interact only with clients connected to the same server.

Figure 5.1: Client-Server and Peer-to-Peer connectivities

After connecting to the server, the user locally replicates the virtual environment by either downloading


the full Virtual Environment (VE) database or periodically requesting information pertinent to the immedi-
ate neighborhood. At the user level this is analogous to a single user simulation since there is no interaction
with others, as shown for one of the users by the shaded area in Figure 5.2. Therefore, this scheme works
well for any delay in communicating information. If the application requires continuous contact with the
environment the graphics and haptic loops must be decoupled, as shown on the right in Figure 4.5, to guar-
antee a minimum delay in processing haptic information.
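Decoupling the two loops typically means running the high-rate haptic loop against a thread-safe local replica that a slower network or graphics thread refreshes. A minimal sketch (the class and method names are ours):

```python
import threading

class LocalReplica:
    """Local copy of the VE state shared between a slow update thread
    (network/graphics, 10-50 Hz) and the fast haptic loop (> 200 Hz)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._state = {}

    def update(self, state):
        """Called by the network thread when new server data arrives."""
        with self._lock:
            self._state = dict(state)

    def snapshot(self):
        """Called every haptic cycle; copies state and never waits on
        the network, so a slow server only makes the data stale, not
        the force loop late."""
        with self._lock:
            return dict(self._state)
```

The haptic loop computes forces from snapshot() at its own rate; communication delay then degrades freshness of the replica rather than the timing of the force loop.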
The overall layout is shown in Figure 5.2. Note that information pertinent to the VE flows from the server
to the users, and not vice versa. At the user sites information is visually processed (Graphic Renderer, GR)
and displayed (Visual Display, VD), and forces are computed (CDE) and applied (Haptic Display, HD).

Figure 5.2: A haptic browser allows different users to access a common database. The database is
stored at the server site, but upon connection is replicated at each site. Each user can view and touch
the environment, but not modify it.

5.2 Collaborative Virtual Environment: “One at a time”


In a collaborative virtual environment only one user at a time can edit a given virtual object. Replication of the VE at each user site is still convenient to reduce delay in the haptic rendering
loop. However, since these local copies can be modified during object manipulation special care must be
taken in enforcing a coherent representation of the VE. There are two alternatives to synchronize users
access to the VE:
1) Central Server: a central server acts as a scheduler and keeps the only official copy of the VE. When-
ever a user gets close enough to an object, a request to edit is sent to the server. The server processes
these requests on a first-come-first-served basis, granting ownership and locking the object to prevent modification from other users. Other users are still allowed to touch and modify their own copy; however, these changes will not be copied at the server site. After the user owning the object has finished editing
and moves away, the object representation modified at the user site is sent and updated at the server site.
A request to release object ownership is then sent to the server. The server sends the new object repre-
sentation to all other users and then assigns the object ownership to the next user (see Figure 5.2). Cli-
ent-Server connectivity fits well with this implementation (see Figure 5.1).
2) Token Ring: users own the right to edit an object according to some predefined rules. In a token ring
implementation users are sequentially given permission to edit. After the client owning permission to
edit has completed the task, it sends a message passing ownership to the next user. There is no official copy of the VE at a server site; instead, it is passed around from user to user. Since there is no need for a central server, Peer-to-Peer connectivity fits well with this implementation (see Figure 5.1).
Note that with Client-Server connectivity all communications are directed to and from a server. The server must process all incoming data, and then broadcast them. This scheme clearly does not scale well as the


number of users increases, since the server becomes overloaded. With Peer-to-Peer connectivity the computational load is distributed more homogeneously in the system, and exchange of information between different users is faster since it does not need to pass through the server [19]. Multicasting can further reduce the
communication load [20].
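The Central Server scheduling of alternative 1) can be sketched as a small first-come-first-served lock manager. This is a hypothetical sketch of the scheme, with names of our own choosing:

```python
class ObjectScheduler:
    """Central Server scheme: grants ownership of an object to one
    user at a time, queueing later requests first-come-first-served."""
    def __init__(self):
        self.owner = {}   # object id -> owning user
        self.queue = {}   # object id -> waiting users, in arrival order

    def request_edit(self, obj, user):
        """Returns True if ownership is granted immediately."""
        if obj not in self.owner:
            self.owner[obj] = user
            return True
        self.queue.setdefault(obj, []).append(user)
        return False      # object locked; edits stay on the local copy

    def release(self, obj, user):
        """Owner is done editing: pass ownership to the next in line."""
        assert self.owner.get(obj) == user
        waiting = self.queue.get(obj, [])
        if waiting:
            self.owner[obj] = waiting.pop(0)
        else:
            del self.owner[obj]
        return self.owner.get(obj)   # new owner, or None
```

In the full scheme, release() would also carry the updated object representation, which the server broadcasts to all other users before reassigning ownership.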


Figure 5.3: Collaborative virtual environment, Central Server configuration. The server acts as a scheduler, assigning ownership on a first-come-first-served basis, and it keeps the only official VE copy. The Finite Element Engine (FEE) is introduced to allow shape manipulation.

In both Central Server and Token Ring schemes the haptic rendering loop is executed at each user site on a
local replica of the virtual environment (see Figure 5.2). Local simulation is therefore decoupled from that
of other participants, and it is possible to cope with large delay, as we will see in the next section, where we
will describe a practical implementation of Collaborative Token Ring architecture.
Force Feedback Multi-player Squash (FFMS)
“Force Feedback Multi-player Squash” (FFMS) is a practical implementation of a Token Ring Collabora-
tive architecture [21]. A similar, visual-only simulation, "multi-player handball", has been implemented at the University of Alberta [4]. In real squash, two players alternate in hitting the ball. At a specific time, a
player is the designated hitter, the others are waiting, trying to anticipate the next move. In FFMS, more
than two players can play together, but as in real squash, only one player, the Active Player (AP) is allowed
to hit the ball. Using a haptic device, as in real squash, players feel the collision with the ball. The speed of the ball after the collision and the intensity of the impact reflected to the operator are determined using the impulsive haptic rendering method described by eq. (4.3) and eq. (4.4).
The system connectivity is Peer-to-Peer to reduce communication delay (see Figure 5.4). All players send
their positions to the other players and the server every 50 msec. These messages also serve as heartbeats, confirming that all connections are still alive. After hitting the ball, the AP broadcasts the new position and speed of the ball to the other players. Non-APs update the position of the ball on a local replica. Once a packet from the AP signals a collision, the new data is used to adjust the estimated position. This technique is called dead reckoning [22]. Graphics and force rendering are synchronous with communication. Force impulses calculated using eq. (4.4) are applied in response to collisions with the
ball.
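The dead-reckoning step can be sketched in two small functions (names ours, not from the paper): non-APs extrapolate the ball from the last known state, and snap to the authoritative state when an AP packet arrives, advancing it by the estimated latency.

```python
def dead_reckon(x, v, dt):
    """Extrapolate the ball position between authoritative updates."""
    return x + v * dt

def on_ap_packet(packet_x, packet_v, latency):
    """An AP collision packet carries the authoritative position and
    speed; advance it by the estimated one-way latency so the local
    replica catches up to where the ball should be now."""
    return dead_reckon(packet_x, packet_v, latency), packet_v
```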
We implemented an Adaptive-Dynamic-Bounder to ensure that the "dynamics" of the simulation is compatible with the communication delay. For this purpose, each client estimates the round trip delay between peers, and sends the data to the server. The server determines the maximum round trip delay, and sends back to all the clients the maximum absolute speed and maximum change in speed allowed after


a collision of the ball with a virtual racket. These parameters are calculated so that, in the worst case, the
ball will not travel more than a distance equal to the length of the squash-court in a time equal to the com-
munication delay. This is necessary because the local estimated position of the ball is not necessarily the
position of the ball of the AP.
When the AP hits the ball, and passes the role of AP to the next player, the ball could have already traveled
out of the court. The longer the delay, the higher the probability of such an occurrence. A similar technique, an Adaptive-User-Motion-Bounder, not yet implemented, could be used to limit the maximum speed achievable by the user in moving the racket, introducing force feedback damping proportional to the maximum delay. In this way, not only the dynamics of the ball but also that of the players could be limited.
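Under the rule stated above, the Adaptive-Dynamic-Bounder speed cap reduces to a single expression: the ball must not be able to cross one court length within the worst-case round trip delay. A sketch (the function name is ours):

```python
def max_ball_speed(court_length, client_round_trips):
    """Adaptive-Dynamic-Bounder: the server collects the round trip
    delay estimated by each client, takes the worst case, and caps the
    ball speed so that, in that time, the ball cannot travel farther
    than the length of the squash-court."""
    worst = max(client_round_trips)
    return court_length / worst
```

For a 10 m court and a worst-case round trip of 200 msec, the cap is 50 m/s; as latency grows, the allowed dynamics of the game slow down accordingly.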

Figure 5.4. Force Feedback Multi-player Squash. A server is used to handle connection and disconnection requests, to synchronize the start and end of the game, and to monitor the correct functioning of the system. Multiple clients, one per player, contain the complete model of the system (replication), consisting of the dimensions of the squash-court, the positions of all players, and the position and speed of the ball.

The protocol used for communication is an enhanced UDP built on top of UDP/IP. Standard UDP allows faster round trip communication, but packets may be lost or received out of order. Our enhanced UDP implementation provides a reliable flow for single-event packets, such as connection and disconnection requests. For continuous flows of information, such as the positions of the players, packets are sent unreliably, but packets arriving out of order are discarded.
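The out-of-order discard rule of the unreliable channel can be sketched with a sequence counter (a sketch, with names of our own choosing):

```python
class StreamReceiver:
    """Continuous-flow side of the enhanced UDP scheme: each packet
    carries a sequence number; late or duplicate packets are dropped
    rather than reordered, since only the newest state matters."""
    def __init__(self):
        self.last_seq = -1

    def accept(self, seq, payload):
        if seq <= self.last_seq:
            return None          # stale or duplicate packet: discard
        self.last_seq = seq
        return payload           # fresh packet: deliver
```

Dropping stale packets is cheaper than reordering them, and for state streams like player positions it is also more correct: an old position is worse than no update at all.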
5.3 Cooperative Virtual Environment: "Feeling each other"
In a Haptically Cooperative Environment (HCE) users can simultaneously manipulate and haptically feel
the same object. This also involves the ability to feel and push other users while moving in the simulation1. The possibility of kinesthetically interacting with other users makes the simulation truly realistic.
On the other hand, this also poses stringent constraints on the system layout and the maximum allowable
communication latency. Is it really worth it?
We believe that HCEs are potentially beneficial, in some cases even indispensable, for:
1) Training of a team of professionals: force feedback has already been integrated into virtual surgical sim-
ulators. Let's imagine a team operating on the same real patient. Each team member's interaction with the patient is perceived by all the other members, indirectly when pulling the patient tissue, or directly because of collisions in the limited workspace. These factors might need to be reproduced in a realistic simulation.

1. In [10] exoskeleton devices are used in a cooperative assembly simulation. Since these devices are not grounded, forces are not transmitted from user to user while pushing on the same object. This means that if a user is passively holding an object moved by a different user, hand-eye coordination will be compromised, since the passive user's position is changing in the VE but not in reality.
2) Entertainment: adding force feedback, thus allowing participants to kinesthetically interact with each
other, adds a new dimension of fun.
3) Telerobotics: a telemanipulation system shares many aspects with a shared virtual environment [23]. In current implementations, the mix of virtual fixtures and real manipulators enhances the quality and safety of remote manipulation. Remote manipulation could be shared among multiple users.
Because multiple users are simultaneously interacting with the same object, it is necessary to allow only one DE to modify the object position, and only one FEE to modify its shape. These are in fact the only two modules that change the state of the virtual environment.
A possible solution is to perform all the computation (CDE, DE, and FEE) at the server site, while the clients simply send their haptic display (or pointing device) positions to the server, and receive the force-torque vectors to apply back to the users. This system architecture can be improved by distributing the CDE among the clients (see Figure 5.5). Each client performs its own collision detection, calculates its own interaction forces with the virtual environment, and then sends the information to the server, which will
update positions and shapes. The CDE on the server site is responsible for calculating collisions between virtual objects.

Figure 5.5: Cooperative Virtual Environment. Each user computes its own interaction forces, but manipulation of the VE is centralized at the server site. This scheme was successfully implemented and tested at the University of Washington for communication latencies of less than 30 msec.
The problem with allowing simultaneous manipulation is that to enforce coherency it is not possible to run
the dynamic engine on a local replica of the virtual environment. This means that the local copy is updated
with a certain delay after manipulation occurs, since data processing (DE, FFE) is performed at the server
site (see Figure 4.3 and Figure 5.5). Kinesthetically linking remote users is therefore particularly challenging. The latency in the haptic rendering loop should not exceed 5-10 msec for stable interaction with stiff virtual objects. Delays of up to 50-100 msec could be manageable if we accept a reduction in object stiffness or the introduction of artificial damping to keep the haptic device stable (see Figure 4.4).
delay_max = 5-10 msec for stiff objects;  30 msec for soft objects    (5.1)
If, during the simulation, one of the users is performing a delicate manipulation that requires high quality force feedback, it is possible to move the DE and FEE to that user's site. In this way the client becomes the server for the required amount of time, and the delay in this client's force feedback loop does not include any transmission time. However, the other participants are still affected. An adaptive dynamic bounder can be used to change object stiffness depending on a round trip communication latency estimate, to keep the simulation stable.
6 Conclusions
In this paper we showed how delay in the haptic rendering process can cause instability, and how different approaches suit specific groups of applications. We laid out three system architectures that support shared interaction in static, collaborative and cooperative environments. The focus was on reducing delay in processing force information while satisfying application requirements and constraints.
Practical implementations were tested for all three architectures. However, more tests and further development are needed to assess these schemes for larger numbers of users and for different communication protocols and media.
7 Bibliography
1 S.Stansfield, N.Miner, D.Shawver, D.Rogers, An application of shared virtual reality to situational
training, Proceedings Virtual Reality Annual International Symposium, 156-161, (1995).
2 C.E.Loeffler, Distributed Virtual Reality: Applications for Education, Entertainment and Industry, Telektronikk, vol. 89, no. 4, (1992).
3 J.Maxfield, T.Fernando, P.Dew, A distributed virtual environment for concurrent engineering, Proceed-
ings Virtual Reality Annual International Symposium, 162-171, (1995).
4 C.Shaw, M.Green, The MR Toolkit Peers Package and Experiment, Proceedings IEEE Virtual Reality
Annual International Symposium, 463-470, (1993).
5 J.Calvin, A.Dickens, B.Gaines, P.Metzger, D.Miller, D.Owen, The Simnet Virtual World Architecture, Proceedings IEEE Virtual Reality Annual International Symposium, 450-455, (1993).
6 T.H.Massie, J.K.Salisbury, Probing Virtual Objects With the PHANToM Haptic Interface, Proceedings ASME Winter Annual Meeting, Session on Haptic Interfaces for Virtual Environment and Teleoperator Systems, (1994).
7 W.A.McNeely, Robotic Graphics: A New Approach to Force Feedback for Virtual Reality, Proceedings
IEEE Virtual Reality Annual International Symposium, 336-341, (1993).
8 P.Buttolo, Shared Virtual Environments with Haptic and Visual Feedback, http://rcs.ee.washington.edu/
BRL/project/shared/
9 M.Ishii, M.Nakata, M.Sato, Networked SPIDAR: A Networked Virtual Environment with Visual,
Auditory, and Haptic Interactions, PRESENCE, vol.3, no. 4, 351-359, (1994).
10 E.Pere, D.Gomez, G.Burdea, N.Langrana, PC-Based Virtual Reality System with Dextrous Force-
Feedback, Proceedings of the ASME Dynamics Systems and Control Division, (1996).
11 K.C.Claffy, Internet Traffic Characterization, Dissertation for the degree of Doctor of Philosophy, CSE,
University of California, San Diego, (1994).
12 R.M. Brach, Mechanical Impact Dynamics, Wiley, New York, (1991).
13 G.F.Franklin, J.D.Powell, Feedback Control of Dynamic Systems, Addison-Wesley, (1991).
14 K.B.Shimoga, A Survey of Perceptual Feedback Issues in Dexterous Telemanipulation: Part I. Finger
Force Feedback, Proceedings IEEE Virtual Reality Annual International Symposium, 263-270, (1993).
15 P.Buttolo, B.Hannaford, Pen-Based Force Display for Precision Manipulation in Virtual Environment,
Proceedings IEEE Virtual Reality Annual International Symposium, (1995).
16 H.W.Broll, Interacting in distributed collaborative virtual environments, Proceedings Virtual Reality
Annual International Symposium, 148-155, (1995).
17 P.Buttolo, B.Hannaford, B.McNeely, An Introduction to Haptic Simulation, Tutorial notes IEEE Virtual
Reality Annual International Symposium, (1996).
18 G.Singh, L. Serra, W.Png, A.Wong, H.Ng, BrickNet: sharing object behaviors on the Net, Proceedings
IEEE Virtual Reality Annual International Symposium, 19-27, (1995).
19 T.Funkhouser, Network Topologies for Scalable Multi-User Environments, Proceedings IEEE Virtual
Reality Annual International Symposium, 222-228, (1996).
20 M.R.Macedonia, M.J. Zyda, D.R.Pratt, D.P.Brutzman, P.T.Barham, Exploiting reality with multicast
groups, Proceedings IEEE Virtual Reality Annual International Symposium, 2-10, (1995).
21 P.Buttolo, R.Oboe, B.Hannaford, B.McNeely, Force Feedback in Virtual and Shared Environments, Proceedings MICAD, Paris, (1996).
22 R.Gossweiler, R.J.Laferriere, M.L.Keller, R.Pausch, An Introductory Tutorial for Developing Multiuser
Virtual Environments, PRESENCE, vol 3, no 4, 225-264, (1994).
23 P.Buttolo, D.Kung, B.Hannaford, Manipulation in Real, Remote and Virtual Environments, Proceed-
ings IEEE Conference on System, Man and Cybernetics, (1995).
