
The IEEE International Conference on Industrial Informatics (INDIN 2008)
DCC, Daejeon, Korea, July 13-16, 2008

Intelligent User Interface for Human-Robot Interaction

T. H. Song, J. H. Park, S. M. Chung, K. H. Kwon and J. W. Jeon


School of Information and Communication Engineering
Sungkyunkwan University
Suwon, Korea
{ thsong, fellens, kuni80 }@ece.skku.ac.kr, { khkwon, jwjeon }@yurim.skku.ac.kr

Abstract - Human-Robot Interaction technology, used to command a robot or to acquire information from a robot, is defined as the communication method between humans and the robot. Human-robot interaction consists of the input facility, environment display, intuitive command and reaction, and the architecture of the interface program. This research focuses on the framework of the interface program that contains the controls, communication architecture and robot mark-up languages. The application that builds the Intelligent User Interface uses the functions: Plug-and-Play for robot peripherals; self-diagnosis; and multi-modal sensor fusion. This research proposes a robot architecture that makes decisions based on the robot's level of autonomy. Our Intelligent User Interface system consists of ultrasonic sensors, position sensitive detectors (PSD), and DC motors. We verify operation of the Intelligent User Interface for Human-Robot Interaction by evaluating a scenario: the remote control of a mobile robot's navigation in an environment containing obstacles.

I. INTRODUCTION

Much research in robot technology focuses on human-robot interaction to enable robots to perform services instead of humans. The development of robot technology is becoming more complex, and many fields of endeavor are required: electronic, electrical, communication, semiconductor, and industrial engineering. Among these, artificial intelligence and human engineering are important technologies for human-robot interaction. A robot's working paradigm depends on the robot's type of work. In the past a robot was designed to perform repetitive simple actions to carry out standard tasks, and robots were designed to rapidly carry out instructions without mistakes. There is now a trend towards designing robots to perform more varied, complex services requiring advanced 'intelligence'. Current robot developments require them to perform services replacing humans, especially in situations that would otherwise compromise a human's quality of life. Examples where robots might be employed include welfare services, medical treatment, mechanized agriculture, atomic power plants, and electric/gas work. Nanotechnology and component minimization techniques allow robots to improve a human's quality of life in the human's living space. Robots can take various forms as well as being miniaturized. Living in the same space as a person opens up the possibility of personal service robots that offer convenience to humans and can be easily utilized.

Any interface should be based on as natural and intuitive an interaction as possible. Therefore, such a personal service robot must be based on embedding the robot with high intelligence and designed with safety in mind [1, 2]. Zenn Z.'s research discusses the necessary information exchange between operator and service robot and proposes a human-friendly man-machine interaction. The resulting intelligent human-robot system should be able to solve many service tasks even in unknown situations using a natural form of communication between human and service robot [3]. Perzanowski D.'s research focused on a multimodal interface for mobile robots. He assumed a model of communication and interaction that, in a sense, mimics how people communicate; it incorporates both natural language understanding and gesture recognition as communication modes [4]. Other human-robot interface research uses a combination of methods to guide the design of service robots. The work is multi-disciplinary and draws on methods and theories from user-centric system design, industrial design, natural language processing and robotics. The design process is iterative, applying results from user trials to refine the design of a prototype or component [5].

Human-robot interaction consists of four types of technique. First, the interaction method contains an input device and an input method. The basic input mechanism uses keyboard and mouse [6, 7]. A haptic interface device is more sensitive and accurate than either keyboard or mouse. Haptic devices and wearable input devices may be specially designed for robot control of, for instance, mobile or humanoid robots [8, 9, 10, 11]. Second, technology is configured to build maps of and display unknown environments. Cognition can be built into the robot to sense and estimate the environment using ultrasonic sensors, position sensitive detectors, cameras, and laser range finders. A robot can simultaneously build 2D or 3D environment maps, and it can remotely monitor the area around the robot [12, 13]. The third technology of human-robot interaction is cognition: sensitivity to the user's commands and reaction to them. The commands for interaction between human and robot use the senses of voice, gesture, emotion, touch, and force [14, 15, 16, 17]. Last of all, the technique for efficient control of the interaction method is the framework for the user interface program. The newest interface framework and robot middleware architecture methods are the semantic web and multimodal sensor interfaces [20, 21].

In this paper, the focus is on the user interface program framework. We propose a new user interface program architecture for interaction with a robot. The special components of the interactive framework solution are the Plug-and-Play function, the self-diagnosis function, and the multi-modal sensor fusion algorithm. These proposed functions of the user interface framework can improve the ease and efficiency of robot control. The number of mistakes made by the robot is decreased by the self-diagnosis function, which detects when a robot sensor is out of order.

The paper is organized as follows: Section II describes the components of the intelligent user interface. Section III presents the decision level of robot autonomy. Section IV defines the intelligent user interface for each level of autonomy. The experimental results of the intelligent user interface for human-robot interaction are shown in Section V. Section VI presents a brief conclusion.

II. COMPONENTS OF INTELLIGENT USER INTERFACE

A. Plug-and-Play

Generally, the robot recognizes its surrounding environment using various sensors, and it recognizes the user's commands and behavior. The robot's sensors are important elements; they act like human senses, and many kinds of robot sensors are in fact imitations of human senses. Therefore, sensor initialization (e.g. sensor set-up and the start-up process) is important, and research on sensor plug-in, sensor initialization, and sensor calibration has been published by many researchers. "Toward a practical robot programming framework" presents a set of design goals that an ideal robot framework should achieve using Player 2.0. As a step toward the goal of building a useful robot framework, this research developed Player to meet those goals. It provides compatibility with the Player 1.x code base developed by the existing Player user community [22]. However, it lacks hard real-time performance. The "Prototype design of the plug-and-play desktop robotic system" presents a prototype of a desktop robotic system that enables the plug-and-play function via the internet through a Universal Serial Bus (USB) port of a Personal Computer (PC). A distinct feature of the design is to bring plug-and-play functionality into the system using a high-speed USB interface device. An internal distributed control system is also adopted, which enables improved performance by including advanced control schemes; in addition, versatility, stability and system cost are taken into consideration [23]. Previous "plug-in" methods based on the USB interface cannot use a low-cost processing unit or microprocessor that does not support USB functionality, and other "Plug-and-Play" implementations are PC-based only. Therefore these two methods cannot be adapted to low-cost robotic systems.

Our proposed "Plug-and-Play" functionality can utilize a low-cost processor. It uses a polled input port on the processor. Polling is an efficient way of continuously checking the signal and is preferable here to using an interrupt. The polling period does not need to be fast; if a fast polling period is used, the processor's performance decreases, so a slow polling period saves the processor's resources. If the slave processor detects a signal from the plug-in sensor, then the slave processor transfers the sensor information to the master processor through the Serial Peripheral Interface (SPI) protocol (Fig. 1).

Fig. 1. Plug-and-play algorithm by polling method.
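A minimal sketch of this polling mechanism is given below. The helper functions (read_plug_port, read_sensor_info, spi_transfer) are hypothetical stand-ins for the slave firmware's port and SPI access, which the paper does not specify; only the poll-and-forward structure follows the description above.

import time

POLL_PERIOD_S = 0.05  # slow polling period; a faster period would waste processor time

def slave_poll_loop(read_plug_port, read_sensor_info, spi_transfer):
    """Slave-processor loop: poll the plug-in port and forward new sensor info over SPI.

    read_plug_port()   -> set of currently plugged sensor IDs (hypothetical helper)
    read_sensor_info() -> dict describing a newly plugged sensor (hypothetical helper)
    spi_transfer(msg)  -> send a message to the master processor over SPI (hypothetical helper)
    """
    known_sensors = set()
    while True:
        plugged = read_plug_port()              # poll the input port instead of using an interrupt
        for sensor_id in plugged - known_sensors:
            info = read_sensor_info(sensor_id)  # e.g. type, address, calibration defaults
            spi_transfer({"event": "plug", "sensor": sensor_id, "info": info})
        for sensor_id in known_sensors - plugged:
            spi_transfer({"event": "unplug", "sensor": sensor_id})
        known_sensors = plugged
        time.sleep(POLL_PERIOD_S)               # keep the polling rate modest to save resources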
"Plug-in" methods, based on the USB interface, cannot use a
low cost processing unit or microprocessor if they do not TABLE I
support USB functionality. Other “Plug-and-Play” PENDING_FAULT_VALUE BY USER COMMAND
implementations are PC-based only. Therefore these two USER COMMAND PENDING_FAULT_VALUE
methods cannot be adapted to low cost robotic systems. MOVE GO 5 sec
MOVE BACK 5 sec
This proposed “Plug-and-Play” functionality can utilize a TURN LEFT 5 sec
low cost processor. It uses a polling input port in the TURN RIGHT 5 sec
processor. The polling method is efficient in continuously REQUEST SENSOR DATA 3 sec
checking the signal. This is better than the alternative of using REQUEST ROBOT STATUS 3 sec
an interrupt function. But the polling period is not fast. If a MOVE POINT TO POINT 30 sec

In this paper, the proposed self-diagnosis for the robot consists of three parameters. First, a check parameter measures how faithfully the robot executes user commands. The user can order the robot to move or to perform a task: the movement command controls the robot's navigation, while the task command performs some service for the human. Our self-diagnosis only checks the move commands. The fault check procedure measures the difference between the intent of the user's command and the actual repositioning of the robot. If the difference is above the POSITION_FAULT_VALUE threshold then the FAULT_POSITION_COUNT is incremented. The second parameter is the time taken to perform the user command. If the robot does not respond within the expected time after it receives the user's command then the FAULT_PENDING_COUNT is incremented; the expected time for each command is specified in Table I. The third parameter is whether or not each sensor is operating normally. The sensor check is complex, so our proposed sensor checking is defined per sensor type. We define the sensor types and the number of sensors as: one CMOS camera, 5 ultrasonic sensors, and 5 position sensitive detectors. The fault checking of the CMOS camera uses the mean of the color image (RGB): if the mean value of the color image is larger than FAULT_PIXEL_HIGH_VALUE or smaller than FAULT_PIXEL_LOW_VALUE, the check fails and the FAULT_CAMERA_COUNT is incremented. The fault decision values for the camera can be defined by the user monitoring it. Faults in the ultrasonic sonar and position sensitive detectors are detected using the sensors' settings in the robot: the ultrasonic sonar sensors and position sensitive detectors are cross-checked against each other's distance data. If a sensor fault is detected then the FAULT_SONAR_COUNT or FAULT_PSD_COUNT is incremented.
C. Multi-modal Sensor Fusion

Research in multi-modal sensing focuses on the acquisition of useful sensor data, similar to the human senses, through sensor fusion, and special technologies for sensor data fusion have been developed by many researchers. "Decision-Theoretic Multisensor Planning and Integration for Mobile Robot Navigation" presents a decision-theoretic framework that allows rational decision making under uncertainty; it is a highly modular system that facilitates easy system integration [27]. Other multi-modal sensor fusion research creates a simple and intuitive user interface and tele-operation to control a slave robot easily. It provides multiple modalities such as visual, auditory and haptic senses, and it enables an operator to easily control every function of a robot deployed in the field, ROBHAZ-DT2 [28].

This paper presents a sensor fusion algorithm using a CMOS camera, ultrasonic sensors, and position sensitive detectors. The position sensitive detector (GP2D120) has a cone-type characteristic of about 15 degrees and detects nearby obstacles (about 4 cm ~ 40 cm from the robot). The ultrasonic sensor (SRF08) has a cone-type characteristic of about 60 degrees and a detection range of 30 ~ 1100 cm; in our system the ultrasonic sensor's range is limited to 30 cm ~ 300 cm from the robot. As a result, the position sensitive detector measures obstacles in the near proximity, whereas the ultrasonic sensor measures longer distances. The CMOS camera senses the shape of obstacles detected by the ultrasonic sensor and position sensitive detector arrays. We used a low-cost CMOS camera in this prototype (Fig. 2).

Fig. 2. The detection range of Position sensitive detector and Ultrasonic.
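The complementary ranges above suggest a simple per-direction fusion rule, sketched below. The selection/averaging rule in the overlap band is an assumption for illustration; the paper only states that the PSD covers near obstacles and the ultrasonic sensor covers longer distances.

def fuse_distance(psd_cm, sonar_cm):
    """Combine one PSD (GP2D120, ~4-40 cm) and one ultrasonic (SRF08, used at 30-300 cm) reading."""
    psd_valid = 4.0 <= psd_cm <= 40.0
    sonar_valid = 30.0 <= sonar_cm <= 300.0
    if psd_valid and sonar_valid:
        return 0.5 * (psd_cm + sonar_cm)   # overlap band (30-40 cm): simple average (assumed rule)
    if psd_valid:
        return psd_cm                      # near obstacle: trust the PSD
    if sonar_valid:
        return sonar_cm                    # far obstacle: trust the ultrasonic sensor
    return None                            # no reliable reading in range

# Example: readings of 35 cm from both sensors fuse to 35 cm; 120 cm is taken from the sonar only.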
III. DECISION LEVEL OF AUTONOMY

The decision level of autonomy for a robot is important in human-robot interaction. The level of autonomy of human-robot interaction consists of environment recognition, action decision, and intelligent behavior. Research into autonomy levels for robots is in its developmental stages. "A Model for Types and Levels of Human Interaction with Automation" outlines a model for types and levels of automation, providing a framework and an objective basis for making automation choices. It proposes that automation can be applied to four broad classes of function: 1) information acquisition, 2) information analysis, 3) decision and action selection, and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high [29]. Other research on the level of autonomy designs a robot user interface for mobile devices; human-robot interaction (HRI) as well as the limited input and output functions of mobile devices must be considered [30].

This paper follows this existing research, based on the ten steps in the level of automation of decision and action selection. Our level of autonomy is decided by the proposed cognition algorithm and self-diagnosis mechanism. The cognition algorithm includes the plug-and-play function and the multi-sensor fusion algorithm.

The cognition level used for decision-making is based on the number of plug-in sensors and on the result of multi-sensor fusion; the cognitive ability is higher when the number of sensors is greater. The self-diagnosis function reflects the robot's present ability to perform the desired action: if the result of self-diagnosis is a fault, then the robot cannot operate and does not perform the user's mission. The robot's level of autonomy is therefore derived from two parameters. One is the ability of cognition, given by plug-and-play and multi-modal sensor fusion; the other is the self-diagnosis result. The level of autonomy for the robot is decided by the output of fuzzy control logic: the inputs of the fuzzy control logic are the cognition ability and the self-diagnosis result, and the fuzzy controller output, the level of autonomy, is generated by a fuzzy rule set (Tables II-IV and Figs. 3-5).

TABLE II
COGNITIVE VARIABLE

  VARIABLE   MEANING
  Z          There is no sensor fusion ability
  S          Sensor fusion ability is low
  M          Sensor fusion ability is normal
  B          Sensor fusion ability is high
  VB         Sensor fusion ability is very high

TABLE III
SELF-DIAGNOSIS VARIABLE

  VARIABLE   MEANING
  Z          There is no problem with the robot
  S          There is some problem with the robot
  M          The problem with the robot is moderate
  B          There is a serious problem with the robot
  VB         There is a very serious problem with the robot

TABLE IV
LEVEL OF AUTONOMY VARIABLE

  VARIABLE   MEANING
  L1         Level of Autonomy 1
  L2         Level of Autonomy 2
  L3         Level of Autonomy 3
  L4         Level of Autonomy 4
  L5         Level of Autonomy 5
  L6         Level of Autonomy 6
  L7         Level of Autonomy 7
  L8         Level of Autonomy 8
  L9         Level of Autonomy 9
  L10        Level of Autonomy 10

Fig. 3. Ability of cognition membership function.

Fig. 4. Self-diagnosis membership function.

Fig. 5. Level of autonomy membership function.
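A toy version of this fuzzy inference is sketched below. The membership shapes, the normalization of the two inputs to [0, 1], the rule table, and the weighted-average defuzzification are illustrative assumptions; the paper's actual membership functions and rule set are those of Tables II-IV and Figs. 3-5.

# Toy fuzzy inference for the level of autonomy (all constants are assumed, not the paper's).
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Cognition ability and self-diagnosis fault level, both normalized to [0, 1].
SETS = {"Z": (-0.25, 0.0, 0.25), "S": (0.0, 0.25, 0.5), "M": (0.25, 0.5, 0.75),
        "B": (0.5, 0.75, 1.0), "VB": (0.75, 1.0, 1.25)}

# Rule table: (cognition set, self-diagnosis set) -> crisp level of autonomy (assumed values).
RULES = {("VB", "Z"): 10, ("B", "Z"): 8, ("M", "Z"): 6, ("S", "Z"): 4, ("Z", "Z"): 2,
         ("VB", "S"): 8, ("B", "S"): 7, ("M", "S"): 5, ("S", "S"): 3, ("Z", "S"): 2,
         ("VB", "M"): 6, ("B", "M"): 5, ("M", "M"): 4, ("S", "M"): 2, ("Z", "M"): 1,
         ("VB", "B"): 4, ("B", "B"): 3, ("M", "B"): 2, ("S", "B"): 1, ("Z", "B"): 1,
         ("VB", "VB"): 2, ("B", "VB"): 1, ("M", "VB"): 1, ("S", "VB"): 1, ("Z", "VB"): 1}

def level_of_autonomy(cognition, fault_level):
    num = den = 0.0
    for (cog_set, diag_set), loa in RULES.items():
        w = min(tri(cognition, *SETS[cog_set]), tri(fault_level, *SETS[diag_set]))
        num += w * loa
        den += w
    return round(num / den) if den else 1   # weighted-average defuzzification

# Example: high cognition (0.9) and almost no faults (0.05) yields a level of autonomy of about 9.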
IV. INTELLIGENT USER INTERFACE

An intelligent user interface manages the user commands and the display of sensor data autonomously. If the robot has a low level of autonomy, then the intelligent user interface focuses on the sensor display. At a high level of autonomy the robot's display is more detailed and visual than for a low level of autonomy, while the user interface for low-level autonomy uses simpler commands; at a high level of autonomy the user can control the robot through intuitive commands. Therefore, an intelligent user interface supports a high level of human-robot interaction by processing intuitive user commands. Sequencing the user interface display screens according to the robot's level of autonomy is the key point. Our intelligent user interface is shown in Fig. 6. The user interface for levels of autonomy 2~4 is a text-based display. The user can judge the robot's environment using the CMOS camera, ultrasonic sensors, and position sensitive detectors, so the robot's ability can be upgraded by the user's judgment.

The intelligent user interface for levels of autonomy 5~7 has a more complex visual sensor display than the user interface for levels of autonomy 2~4, but the user can still control the robot easily (Fig. 7).

Fig. 6. Intelligent User Interface (IUI1) for LOA 2 ~ 4.

Fig. 7. Intelligent User Interface (IUI2) for LOA 5 ~ 7.
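The screen sequencing described above can be summarized as a simple mapping from the fuzzy level of autonomy (LOA) to the interface screen that is shown; the three screens correspond to Figs. 6, 7 and 10, and the boundary values follow the text (2~4, 5~7, 8~10). The fallback for LOA 1 is an assumption, since the paper does not describe a screen for that level.

def select_interface(loa: int) -> str:
    if 2 <= loa <= 4:
        return "IUI1"   # text-based sensor display, simple commands (Fig. 6)
    if 5 <= loa <= 7:
        return "IUI2"   # graphical sensor display with obstacle indicator (Fig. 7)
    if 8 <= loa <= 10:
        return "IUI3"   # full graphical display, autonomous navigation available (Fig. 10)
    return "MANUAL"     # LOA 1: assumed fallback, direct manual control only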

The interface display screen for levels of autonomy 5~7 is filled with a graphical sensor display and an indicator. The sensor display presents three colors; the display of the virtual stick is shown in Fig. 8. The user can simultaneously see the recommended direction for obstacle avoidance from the indicator in the user interface for levels of autonomy 5~7; the indicator's operating mechanism is shown in Fig. 9.

Fig. 8. The obstacle distance display using color.

Fig. 9. Robot indicator.

The intelligent user interface for levels of autonomy 8~10 has the most complex visual graphical display. However, the user can operate the robot through intuitive commands, and this user interface gives the best support for human-robot interaction. The robot can navigate autonomously, without human command and control, through the robot's auto-navigation algorithm. Our high level of autonomy robot contains a virtual force field intuitive navigation algorithm (Fig. 10).

Fig. 10. Intelligent User Interface (IUI3) for LOA 8 ~ 10.
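For orientation, a minimal sketch of a virtual force field steering rule is given below: obstacles detected by the fused range sensors push the robot away, while the goal pulls it forward. The gains and the force model are illustrative assumptions; the paper does not give its VFF parameters.

import math

def vff_direction(target_bearing_rad, obstacles, k_att=1.0, k_rep=500.0):
    """Return a steering direction (radians) from a virtual force field.

    obstacles: list of (bearing_rad, distance_cm) pairs from the fused PSD/ultrasonic readings.
    """
    fx = k_att * math.cos(target_bearing_rad)          # attractive force toward the goal
    fy = k_att * math.sin(target_bearing_rad)
    for bearing, dist in obstacles:
        if dist <= 0:
            continue
        rep = k_rep / (dist * dist)                     # repulsion grows as obstacles get closer
        fx -= rep * math.cos(bearing)                   # push away from the obstacle bearing
        fy -= rep * math.sin(bearing)
    return math.atan2(fy, fx)

# Example: goal straight ahead (0 rad) and an obstacle 30 cm away at +0.3 rad
# steers the robot toward a negative bearing, away from the obstacle.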

V. EXPERIMENT: INTELLIGENT USER INTERFACE FOR HUMAN-ROBOT INTERACTION

The experiment evaluating our proposal for an intelligent user interface, accounting for the robot's level of autonomy, is as follows (Fig. 11). The robot used in the experiment consists of a motor part, a sensor part, and a control part. The motors are powered by DC 12 V, the maximum torque is 2 kg·cm, and the rated speed is 102 rpm. The motor part consists of four motors in all, and it also includes two encoders; the encoders are mounted at the front of the robot on the left and right motors. There are three types of sensor: one 1.3 Mega-pixel CMOS camera module, 5 ultrasonic sensors (SRF08), and 5 position sensing devices (GP2D120). All of the sensors and motors are controlled by low-cost microcontrollers on the sensor interface board. The sensor interface board communicates with the robot server through the RS-232C protocol.

Fig. 11. Test environment.
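A minimal sketch of the robot-server side of this serial link is shown below, using the pyserial library. The message framing (newline-terminated ASCII) and the command strings, modeled on the user commands of Table I, as well as the baud rate and port name, are assumptions, since the paper does not specify the wire format.

import serial  # pyserial

def send_command(port_name, command, baudrate=115200, timeout_s=1.0):
    """Send one user command over RS-232C and return the board's reply line."""
    with serial.Serial(port_name, baudrate, timeout=timeout_s) as link:
        link.write((command + "\n").encode("ascii"))     # e.g. "MOVE GO" or "REQUEST SENSOR DATA"
        reply = link.readline().decode("ascii").strip()  # sensor data or robot status line
    return reply

# Example (assumed port name): send_command("/dev/ttyS0", "REQUEST ROBOT STATUS")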

The experiment with the intelligent user interface demonstrates the plug-and-play function, self-diagnosis, and multi-modal sensor fusion. It also exercises the robot's navigation at each level of autonomy via user commands. The results of our proposed functions are evaluated by determining their usability (Figs. 12-13).

Fig. 12. Test of user performance.

Fig. 13. Test of user error.

VI. CONCLUSION

Our proposed intelligent user interface provides easy and intuitive control based on the autonomy level of the robot. Our functions, plug-and-play, self-diagnosis, and multi-modal sensor fusion, are shown to help upgrade the human-robot interaction. Our features are embedded in the software of a functionally operating robot for novice users.

ACKNOWLEDGMENT

This research was supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center).

REFERENCES

[1] IFR UN-ECE, World Robotics 2000.
[2] J. Heinzmann and A. Zelinsky, "A safe-control paradigm for human-robot interaction", Journal of Intelligent and Robotic Systems, 25(4), pp. 295-310, 1999.
[3] Z. Zenn, J. Jung, and K. Park, "Human-friendly Man-Machine Interaction in Smart Home", IEEE International Workshop on Robot and Human Communication, pp. 177-182, 1996.
[4] D. Perzanowski, A. Schultz, W. Adams, E. Marsh, and M. Bugajska, "Building a Multimodal Human-Robot Interface", IEEE Intelligent Systems, vol. 16, no. 1, pp. 16-21, Jan-Feb 2001.
[5] A. Green, H. Huttenrauch, and M. Norman, "User Centered Design for Intelligent Service Robots", Proceedings of the 2000 IEEE International Workshop on Robot and Human Interactive Communication, Osaka, Japan, pp. 161-166, September 27-29, 2000.
[6] H. Yanco, J. Drury, and J. Scholtz, "Analysis of Human-Robot Interaction at a Major Robotics Competition", Journal of Human-Computer Interaction, 2004.
[7] B. Maxwell, N. Ward, and F. Heckel, "Game-Based Design of Human-Robot Interfaces for Urban Search and Rescue", CHI 2004 Fringe, 2004.
[8] S. Zhai, "User Performance in Relation to 3D Input Device Design", Computer Graphics, 32(4), ACM, pp. 50-54, November 1998.
[9] S. Zhai, E. Kandogan, B. A. Smith, and T. Selker, "Design and Experimentation of a Bimanual 3D Navigation Interface", Journal of Visual Languages and Computing, pp. 3-17, Oct. 1999.
[10] J. Lapointe and N. Vinson, "Effects of joystick mapping and field-of-view on human performance in virtual walkthroughs", Proceedings of the 1st International Symposium on 3D Data Processing Visualization and Transmission, Padova, Italy, June 18-21, 2002.
[11] H. Hasunuma, M. Kobayashi, H. Moriyama, T. Itoko, Y. Yanagihara, T. Ueno, K. Ohya, and K. Yokoi, "A Tele-operated Humanoid Robot Drives a Lift Truck", IEEE International Conference on Robotics and Automation, Washington, D.C., pp. 2246-2252, May 2002.
[12] J. Park, Y. Lee, and J. Song, "Intelligent Update of a Visual Map Based on Pose Reliability of Visual Features", International Conference on Advanced Robotics, Jeju, Korea, August 21-24, 2007.
[13] H. Ahn, I. Sa, and J. Choi, "3D Remote Home Viewer for Home Automation Using Intelligent Mobile Robot", International Joint Conference 2006, Busan, Korea, pp. 3011-3016, Oct. 18-21, 2006.
[14] J. Trafton, N. Cassimatis, M. Bugajska, D. Brock, F. Mintz, and A. Schultz, "Enabling Effective Human-Robot Interaction Using Perspective-Taking in Robots", IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 35, no. 4, pp. 460-470, July 2005.
[15] K. Kim, K. Kwak, and S. Chi, "Gesture Analysis for Human-Robot Interaction", ICACT, pp. 1824-1827, Feb 2006.
[16] R. Mourant and P. Sadhu, "Evaluation of Force Feedback Steering in a Fixed Based Driving Simulator", Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting, pp. 2202-2205, 2002.
[17] M. Scheutz, P. Schermerhorn, C. Middendorff, J. Kramer, D. Anderson, and A. Dingler, "Toward Affective Cognitive Robots for Human-Robot Interaction", American Association for Artificial Intelligence (www.aaai.org), 2006.
[18] Y. Ha, J. Sohn, Y. Cho, and H. Yoon, "Design and Implementation of Ubiquitous Robotics Service Framework", ETRI Journal, vol. 27, no. 6, pp. 666-676, December 2005.
[19] H. Yanco and J. Drury, "A Taxonomy for Human-Robot Interaction", AAAI Technical Report FS-02-03, pp. 111-119, November 2002.
[20] D. Ryu, S. Kang, and M. Kim, "Multi-modal User Interface for Teleoperation of ROBHAZ-DT2 Field Robot System", Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, pp. 168-173, Sep. 28 - Oct. 2, 2004.
[21] R. Sharma, V. Pavlovic, and T. Huang, "Toward multimodal human-computer interface", Proc. IEEE, vol. 86, pp. 853-869, May 1998.
[22] T. Collett, B. MacDonald, and B. Gerkey, "Toward a Practical Robot Programming Framework", Proceedings of the Australasian Conference on Robotics and Automation, 2005.
[23] Y. Zuang and J. Su, "Prototype Design of the Plug-and-Play Desktop Robotic System", Proceedings of the Third International Conference on Machine Learning and Cybernetics, Shanghai, pp. 26-29, August 2004.
[24] K. Kawabata, S. Okina, T. Fujii, and H. Asama, "A Study of Self-diagnosis System for an Autonomous Mobile Robot", The 27th Annual Conference of the IEEE Industrial Electronics Society, pp. 381-386, 2001.
[25] K. Su, T. Chien, and C. Liang, "Develop a Self-diagnosis Function Auto-recharging Device for Mobile Robot", Proceedings of the 2005 IEEE International Workshop on Safety, Security and Rescue Robotics, Kobe, Japan, pp. 1-6, June 2005.
[26] H. Polenta, A. Ray, and J. Bernard, "Microcomputer-Based Fault Detection Using Redundant Sensors", IEEE Transactions on Industry Applications, vol. 24, no. 5, pp. 905-912, September/October 1988.
[27] S. Kristensen and H. Christensen, "Decision-Theoretic Multisensor Planning and Integration for Mobile Robot Navigation", Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 517-524, 1996.
[28] D. Ryu, S. Kang, M. Kim, and J. Song, "Multi-modal User Interface for Teleoperation of ROBHAZ-DT2 Field Robot System", Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, pp. 168-173, Sep. 28 - Oct. 2, 2004.
[29] R. Parasuraman, T. Sheridan, and C. Wickens, "A Model for Types and Levels of Human Interaction with Automation", IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 30, no. 3, pp. 286-297, May 2000.
[30] J. Park, T. Song, P. Xuan, K. Kwon, G. Kim, S. Hong, and J. Jeon, "A Mobile Terminal User Interface for Intelligent Robots", Proceedings of the International Conference on Human-Computer Interaction, Beijing, China, pp. 903-911, July 2007.

