Single-Point Interaction for 3D Object Handling in Augmented Reality
Abstract— Augmented Reality (AR) combines two different worlds, merging the motions of real and virtual elements in a single display. AR offers many advantages for providing natural and intuitive interaction. This paper discusses single-point interaction in AR, which allows the user to handle and manipulate 3D objects in real time. The interaction is developed within the limitations of vision-based tracking, since tracking is one of the most fundamental issues in developing AR applications. We explore the implementation of single-point interaction on top of robust tracking, which is required so that real and virtual elements can be combined correctly relative to the user's viewpoint, and which reduces system lag and increases tracking speed so that 3D objects are drawn on the ground marker without delay.
Index Terms— Interaction techniques, Image Processing, Computer Vision, Real-time systems.
1 INTRODUCTION
In many AR applications, tracking is one of the problematic factors that contribute to ineffective AR presentation [9]. For an AR system to provide the user with correctly combined real and virtual images, it needs to know the user's location virtually. Tracking provides the input needed for the correct registration of virtual objects with respect to the real world. Current AR tracking approaches can be divided into sensor-based [10], vision-based [10, 11], and hybrid tracking techniques [12].
3 SETTING UP THE TRACKING SYSTEM
The tracking system in this project uses vision-based tracking, which involves image processing to calculate the camera pose so that virtual objects are aligned to the real world
correctly. Tracking suffers from both static and dynamic registration errors, but researchers have found ways to address them [13]. The ARToolKit library [5], developed for vision-based tracking, is very well known in the AR field, while other work has described efficient approaches to minimizing registration errors in tracking [10]. Marker-based tracking is the traditional form of vision-based tracking; with ARToolKit, tracking multiple distinct markers works best when each marker is attached relative to another marker. Model-based tracking, by contrast, is useful for establishing correspondences between 2D image entities and their 3D world coordinates: it renders 3D objects once the marker's world coordinates are tracked. Markerless tracking, such as detecting the user's hand, is a current trend in improving vision-based tracking [14].
Vision-based tracking uses a camera to capture the real scene and displays virtual objects on top of real objects through a few processing steps. First, marker detection locates the marker in the real world; then the viewpoint of the virtual object is calculated; finally, the rendering process draws the object.
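Since the tracking system is built on ARToolKit [5], the capture, detect, calculate, and render steps above can be sketched against that library's C API. The function and field names (arVideoGetImage, arDetectMarker, arGetTransMat, cf) are ARToolKit's; the threshold constant, pattern setup, and the drawObject callback are assumptions for illustration, not the paper's exact code.

    #include <stdlib.h>
    #include <AR/ar.h>
    #include <AR/gsub.h>
    #include <AR/video.h>

    static int    patt_id;                  /* pattern ID from arLoadPatt()       */
    static double patt_width = 80.0;        /* marker side length in mm (assumed) */
    static double patt_center[2] = {0.0, 0.0};
    static double patt_trans[3][4];         /* camera-to-marker transform (Tcm)   */

    static void mainLoop(void)
    {
        ARUint8      *dataPtr;
        ARMarkerInfo *marker_info;
        int           marker_num, j, k;

        if ((dataPtr = (ARUint8 *)arVideoGetImage()) == NULL) return;

        /* 1. Marker detection: threshold the frame, extract candidate squares. */
        if (arDetectMarker(dataPtr, 100, &marker_info, &marker_num) < 0) exit(1);
        arVideoCapNext();

        /* 2. Keep the best-confidence match for our pattern. */
        k = -1;
        for (j = 0; j < marker_num; j++) {
            if (marker_info[j].id == patt_id &&
                (k == -1 || marker_info[j].cf > marker_info[k].cf)) k = j;
        }
        if (k == -1) { argSwapBuffers(); return; }

        /* 3. Viewpoint calculation: recover the transformation matrix Tcm. */
        arGetTransMat(&marker_info[k], patt_center, patt_width, patt_trans);

        /* 4. Render: draw the virtual object relative to Tcm (see Section 3.3). */
        /* drawObject(patt_trans);  -- hypothetical application callback */
        argSwapBuffers();
    }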
3.1 Marker Detection
In this process the captured frames go through image processing: they are thresholded at a constant value, and a contouring process extracts the outline regions, which are fitted by extracting four line segments. This set of lines produces the four vertex coordinates: the vertices are found where the fitted line segments intersect, and the vertices and line segments are calculated and stored in the system. Fiala [3] improved standard tracking with digital processing by storing the pattern values in an array and representing them as IDs. Storing all marker patterns by ID, known as user-defined markers, improves tracking accuracy and lets markers be used on many objects without running out of patterns. When a single marker is detected, its ID is passed as an integer to the first parameter of the marker configuration, and the optional marker value is passed as a double to the second parameter. The tracking involves both orientation and position; sometimes tracking involves only the orientation.
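Each detected square arrives as an ARToolKit ARMarkerInfo record, whose vertex, id, and cf fields correspond directly to the four intersected corners, the user-defined pattern ID, and the match confidence described above. A minimal sketch of dispatching on those IDs, with the tool/ground split assumed for illustration:

    #include <AR/ar.h>

    /* Inspect what arDetectMarker() reports per candidate square. The field
     * names (vertex, id, cf) are from ARToolKit's real ARMarkerInfo struct;
     * the tool/ground dispatch is an illustrative sketch. */
    void handleMarkers(ARMarkerInfo *marker_info, int marker_num,
                       int tool_id, int ground_id)
    {
        for (int j = 0; j < marker_num; j++) {
            if (marker_info[j].cf < 0.5) continue;      /* weak pattern match */

            /* The four corners recovered from the intersected line segments,
             * in ideal screen coordinates. */
            double (*corners)[2] = marker_info[j].vertex;

            if (marker_info[j].id == tool_id) {
                /* single (tool) marker: its ID was the first parameter */
            } else if (marker_info[j].id == ground_id) {
                /* ground (fiducial) marker handled separately */
            }
            (void)corners;
        }
    }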
3.2 Viewpoints Calculation
The aim of this process is to obtain Tcm, the transformation matrix shown in figure 2. It is found from marker detection and relates the marker and camera coordinate systems. Using the perspective transformation matrix, the values of the rotation and translation matrices are found, and the perspective transformation maps them into the camera viewpoint as shown in figure 2. This perspective transformation matrix is obtained from the default initialization of the calibration process. Calibration is necessary to define the world-coordinate matrix for the camera: camera calibration recovers the camera's position and orientation in the world, known as the extrinsic parameters.

Fig. 2: Calculating viewpoints in the coordinate system.

All of this involves viewpoint calculation and its estimation process. When two markers are detected in parallel on the screen, the points drawn on the tool marker and the fiducial marker are tracked in ideal screen coordinates. When the program runs, the IDs that define the markers are assigned to an array list, from which the parameter values are obtained. Representing markers in an array list has the useful property that a 3D position can be computed from a single marker; the same result can be obtained using multi-marker arrays. Variables declare the near point and far point positions, and the two positions are detected relative to each other. The user can move freely over a wide area, since we define how far the points can spread as well as how near they can be viewed. The tracking method proposed in this paper therefore uses system-defined IDs for the marker arrays. The patterns for the marker arrays are defined in a Marker node in the scene graph; the computer captures and recognizes the relationships among all the markers, which are described in world coordinates.
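When both markers are visible, the two Tcm matrices returned by arGetTransMat can be chained to express a point on the tool marker in the ground marker's coordinate frame: T(ground<-tool) = inv(T(cam<-ground)) * T(cam<-tool). arUtilMatInv and arUtilMatMul are ARToolKit utilities; the wrapper function below is an assumed sketch, not the paper's code.

    #include <AR/ar.h>

    /* Express a point given in tool-marker coordinates in the ground
     * marker's frame, by chaining the two camera-to-marker transforms. */
    void toolPointOnGround(double tool_trans[3][4],   /* Tcm for tool marker   */
                           double ground_trans[3][4], /* Tcm for ground marker */
                           const double p_tool[3],    /* point in tool coords  */
                           double p_ground[3])        /* out: in ground coords */
    {
        double ground_inv[3][4], rel[3][4];

        arUtilMatInv(ground_trans, ground_inv);
        arUtilMatMul(ground_inv, tool_trans, rel);    /* ground <- tool */

        for (int r = 0; r < 3; r++) {
            p_ground[r] = rel[r][0] * p_tool[0] + rel[r][1] * p_tool[1]
                        + rel[r][2] * p_tool[2] + rel[r][3];
        }
    }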
3.3 Render and Display
Figure 3 illustrates a 3D object drawn on top of the fiducial ground markers. The viewpoint position is known to the computer, so the camera tracks the virtual object correctly and the augmentation stays relative to the user. When the marker is recognized by the camera, the rendering process is executed to display the 3D objects: the system tracks their position and angle and draws them over the camera entities. The result therefore appears natural on the display, and the user observes it through a single viewpoint.
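A minimal sketch of this render step with ARToolKit's OpenGL helpers: the 3x4 Tcm is converted to a column-major 4x4 matrix by argConvGlpara, loaded as the modelview matrix, and the object is drawn on the marker ground. drawCube() and the 25 mm lift are hypothetical stand-ins for the application's 3D object.

    #include <AR/gsub.h>
    #include <GL/gl.h>

    void renderObject(double patt_trans[3][4])
    {
        double gl_para[16];

        argDrawMode3D();                    /* switch to the 3D projection      */
        argDraw3dCamera(0, 0);              /* load the calibrated camera view  */
        argConvGlpara(patt_trans, gl_para); /* Tcm -> column-major 4x4          */

        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixd(gl_para);

        glTranslatef(0.0f, 0.0f, 25.0f);    /* lift the object onto the marker  */
        /* drawCube();  -- hypothetical: draw the application's 3D object       */
    }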
4 SINGLE-POINT INTERACTION
An AR system provides the user with the opportunity to interact with virtual elements in many ways [15]. However, a number of issues have been encountered. On the technical side, there are tracking and registration issues concerning accuracy, robustness, and the system configuration itself. On the usability side, AR interaction should offer a natural manner of working and intuitive interfaces. Occlusion is one such usability issue: we need to resolve the boundaries at which real elements occlude virtual elements.
Many researchers have explored interaction techniques for AR. Techniques such as Go-Go interaction [16] have encouraged others to continue studying interaction issues. The VOMAR application [17] was developed as a 3D object handling method using a marker that represents a paddle in a tangible setting. FingARtips [15] and the MagicCup [18] are examples of multimodal interaction using gestures and speech. Mobile phones and wearable technologies have also been explored as interaction devices for object manipulation in AR [19].
With feature-based tracking, a single marker serves as a tool that lets the user interact with and manipulate the desired object. Globally defined, this single marker contains points, as shown in figure 4; each point connects to the geometry node of a 3D object so that the object can be picked and released. The intersection between these points and the selected 3D object is what we call single-point interaction.
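One plausible way to represent what this paragraph describes, tool-marker points linked to pickable 3D objects, is a pair of small structs; all names here are illustrative assumptions, not taken from the paper.

    /* A point defined on the tool marker and the objects it can pick. */
    typedef struct {
        double pos[3];     /* point position in tool-marker coordinates    */
    } ToolPoint;

    typedef struct {
        double pos[3];     /* object position in ground-marker coordinates */
        double radius;     /* bounding-sphere radius for intersection test */
        int    picked;     /* currently attached to the tool point?        */
        int    active;     /* still updated/drawn in the object list?      */
    } Object3D;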
This proposed method allows the points and objects to collide with each other in order to apply any changes made by the user. The points contained in the single marker must be located relative to the ground fiducial markers for the collision detection to be confirmed, so that objects can be placed on and removed from the ground; this is done by updating the transformation matrix for each function called by the single-marker node and the ground-marker node. The implementation flow is described in figure 5. When the multiple markers belonging to the container are detected, the list of 3D objects is passed to the draw function; the same condition applies to tool-marker detection. When both markers are successfully detected, the system checks for a collision, that is, whether the single-point tool has touched the desired 3D object. The intersection process then decides whether the 3D object is picked or deleted. The delete function does not remove objects from the object list; instead, it prevents them from being updated in the object list.

Fig. 5: Flow of single-point interaction for object handling.

Fig. 6: Logic of single-point intersection for 3D object handling.

Figure 6 shows how the points drawn on the tool marker interact with the ground marker for object translation. The single-point interaction method lets the user move around freely, maintaining his own viewpoint over the ground, to copy the 3D object onto the tool marker. The running system needs to detect both markers; the intersection between the near point and the desired object then allows the user to manipulate the picked object by changing its position.
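Putting the pieces together, a hedged sketch of the single-point intersection test: the tool point, already expressed in ground coordinates as in Section 3.2, is checked against each active object, picking on contact and "deleting" by marking the object inactive so it is skipped on update rather than erased from the list, as described above. Object3D is the illustrative struct introduced earlier.

    #include <math.h>

    /* Test the tool point against the object list. Returns the index of the
     * touched object, or -1 if nothing was hit. */
    int updateInteraction(const double p_ground[3],  /* tool point, ground frame */
                          Object3D *objects, int n, int delete_mode)
    {
        for (int i = 0; i < n; i++) {
            if (!objects[i].active) continue;        /* skipped, never erased    */
            double dx = p_ground[0] - objects[i].pos[0];
            double dy = p_ground[1] - objects[i].pos[1];
            double dz = p_ground[2] - objects[i].pos[2];
            if (sqrt(dx*dx + dy*dy + dz*dz) > objects[i].radius) continue;

            if (delete_mode) objects[i].active = 0;  /* drop from updates only   */
            else             objects[i].picked = 1;  /* follow the tool point    */
            return i;
        }
        return -1;
    }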
Users rated the overall effectiveness positively. For the drop behavior, both novice and non-novice users were satisfied with the effectiveness of placing objects on the ground; for the rotation task, however, the non-novices were less enthusiastic.
6 CONCLUSION
In conclusion, this paper contributes to the development of interaction techniques for AR applications. The issue of user tracking for 3D object handling in AR is investigated. To develop an interaction technique for AR, we seek robustness in our tracking system while providing the user with an intuitive method for 3D object handling.
This paper describes the implementation of a new interaction method based on feature-based tracking, called single-point interaction. In single-point interaction, each point is assigned coordinate axes belonging to the tool marker and intersects with the geometric model coordinates for collision detection. This method speeds up the tracking process, and the single-point interaction helps accuracy and robustness in dealing with virtual objects while the camera viewpoint is continuously recalculated in real time.
We also discuss user tracking techniques for vision-based tracking systems, as well as interaction techniques proposed by other researchers using multimodal interaction and mobile devices. The 3D manipulation behaviors, such as changing position, rotating, and removing objects, have been implemented, and the ability to add and remove objects intuitively has also been incorporated into the interaction technique. This single-point interaction technique overcomes the 3D object manipulation problems in AR and brings effective and intuitive user interaction to AR applications. It also provides seamless interaction between real and virtual elements.
ACKNOWLEDGMENT
We would like to express our appreciation to the Research Management Center, Universiti Teknologi Malaysia (UTM), and the Malaysian Government for providing financial support for this research through the FRGS grant scheme, Vote 78599.
REFERENCES
[1] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, and B. MacIntyre, "Recent advances in augmented reality," IEEE Computer Graphics and Applications, vol. 21, no. 6, pp. 34-47, 2001.
[2] S. White, D. Feng, and S. Feiner, "Poster: Shake Menus: Activation and Placement Techniques for Prop-Based 3D Graphical Menus," in Proc. IEEE 3DUI, Lafayette, LA, 2009, pp. 129-130.
[3] M. Fiala, "ARTag, a fiducial marker system using digital techniques," in Proc. CVPR 2005, San Diego, CA, 2005, pp. 590-596.
[4] F. Zhou, H. B.-L. Duh, and M. Billinghurst, "Trends in Augmented Reality Tracking, Interaction and Display: A Review of Ten Years of ISMAR," in Proc. IEEE ISMAR, Cambridge, UK, 2008, pp. 193-202.
[5] M. Billinghurst, H. Kato, and I. Poupyrev, "The MagicBook: Moving Seamlessly between Reality and Virtuality," IEEE Computer Graphics and Applications, vol. 21, no. 3, pp. 6-8, 2001.
[6] H. Kato, M. Billinghurst, I. Poupyrev, N. Tetsutani, and K. Tachibana, "Tangible Augmented Reality for Human Computer Interaction," in Proc. Nicograph 2001.
[7] M. Billinghurst, I. Poupyrev, H. Kato, and R. May, "Mixing Realities in Shared Space: An Augmented Reality Interface for Collaborative Computing," in Proc. IEEE International Conference on Multimedia and Expo (ICME 2000), New York, July 30-August 2, 2000.
[8] W. Ajune and S. M. Sunar, "Survey on Collaborative AR for Multi-user in Urban Studies and Planning," in Edutainment 2009, Lecture Notes in Computer Science, LNCS 5093, Springer-Verlag, Berlin Heidelberg, 2009.
[9] D. Stricker, G. Klinker, and D. Reiners, "A fast and robust line-based optical tracker for augmented reality applications," in Proc. IWAR '98, 1998, pp. 31-46.
[10] J. P. Rolland, L. Davis, and Y. Baillot, "A survey of tracking technology for virtual environments," in Fundamentals of Wearable Computers and Augmented Reality, 1st ed., W. Barfield and T. Caudell, Eds. Mahwah, NJ: CRC, 2001, pp. 67-112.
[11] J. Park, B. Jiang, and U. Neumann, "Vision-based pose computation: robust and accurate augmented reality tracking," in Proc. IWAR '99, 1999, pp. 3-12.
[12] G. Klein and T. Drummond, "Robust visual tracking for non-instrumented augmented reality," in Proc. ISMAR '03, 2003, pp. 113-122.
[13] M. Bajura and U. Neumann, "Dynamic registration correction in video-based augmented reality systems," IEEE Computer Graphics and Applications, vol. 15, no. 5, pp. 52-60, September 1995.
[14] A. Comport, E. Marchand, and F. Chaumette, "A real-time tracker for markerless augmented reality," in Proc. ISMAR '03, 2003, pp. 36-45.
[15] V. Buchmann, S. Violich, M. Billinghurst, and A. Cockburn, "FingARtips: gesture based direct manipulation in Augmented Reality," in Proc. 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, 2004, pp. 212-221.
[16] I. Poupyrev, M. Billinghurst, S. Weghorst, and T. Ichikawa, "The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR," in Proc. ACM Symposium on User Interface Software and Technology (UIST '96), 1996, pp. 79-80.
[17] H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto, and K. Tachibana, "Virtual Object Manipulation on a Table-Top AR Environment," in Proc. International Symposium on Augmented Reality (ISAR 2000), October 2000, pp. 111-119.
[18] S. Irawati, S. Green, M. Billinghurst, A. Duenser, and H. Ko, "An evaluation of an augmented reality multimodal interface using speech and paddle gestures," in ICAT 2006, LNCS 4282, Springer, Heidelberg, 2006, pp. 272-283.
[19] A. Henrysson, M. Billinghurst, and M. Ollila, "Virtual object manipulation using a mobile phone," in Proc. 2005 International Conference on Augmented Tele-existence, Christchurch, New Zealand, December 5-8, 2005.