
Industrial Robot: An International Journal

Head gesture recognition for hands-free control of an intelligent wheelchair

Pei Jia Huosheng H. Hu Tao Lu Kui Yuan

Article information:

To cite this document: Pei Jia, Huosheng H. Hu, Tao Lu and Kui Yuan (2007), "Head gesture recognition for hands-free control of an intelligent wheelchair", Industrial Robot: An International Journal, Vol. 34 Iss 1, pp. 60-68.


Research article

Head gesture recognition for hands-free control

of an intelligent wheelchair
Pei Jia and Huosheng H. Hu
Department of Computer Science, University of Essex, Colchester, UK, and

Tao Lu and Kui Yuan


Institute of Automation, Chinese Academy of Sciences, Beijing, People's Republic of China

Purpose – This paper presents a novel hands-free control system for intelligent wheelchairs (IWs) based on visual recognition of head gestures.
Design/methodology/approach – A robust head gesture-based interface (HGI) is designed for head gesture recognition of the RoboChair user. The recognised gestures are used to generate motion control commands to the low-level DSP motion controller so that it can control the motion of the RoboChair according to the user's intention. The Adaboost face detection algorithm and the Camshift object tracking algorithm are combined in our system to achieve accurate face detection, tracking and gesture recognition in real time. The system is intended as a human-friendly interface that allows elderly and disabled people to operate our intelligent wheelchair with their head gestures rather than their hands.
Findings – The system is extremely useful for users who have restricted limb movements caused by conditions such as Parkinson's disease or quadriplegia.
Practical implications – A novel integrated approach to real-time face detection, tracking and gesture recognition is proposed, namely the HGI.
Originality/value – It is a useful human-robot interface for IWs.
Keywords – Wheelchairs, Automation
Paper type – Research paper

The successful deployment of IWs requires high performance and low cost. Like all other intelligent service robots, the key performance requirements of IWs include:
the autonomous navigation capability for safety, flexibility, mobility and obstacle avoidance; and
the intelligent interface between users and IWs, including hand-based control (joystick, keyboard, mouse, touch screen), voice-based control (audio), vision-based control (cameras), and other sensor-based control (infrared sensors, sonar sensors, pressure sensors).
1. Introduction
To improve quality of life for elderly and disabled people, electric-powered wheelchairs (EPWs) have been rapidly deployed over the last 20 years (Ding and Cooper, 1995; Simpson et al., 2004; Galindo et al., 2005). Up to now, most of these EPWs are controlled by users' hands via joysticks, and are very difficult to use for elderly and disabled users who have restricted limb movements caused by Parkinson's disease or quadriplegia. As cheap computers and sensors are embedded into EPWs, they become more intelligent, and are named intelligent wheelchairs (IWs). Various research and development on IWs has been carried out in the last decade, such as the TAO projects (Gomi and Griffith, 1998), Rolland (Rofer and Lankenau, 1998), MAid (Prassler et al., 1998), NavChair (Levine et al., 1999), the UPenn Smart Wheelchair (Rao et al., 2002) and SIAMO (Mazo et al., 2002).

As a hands-free interface, head gestures and EMG signals have already been applied in some existing IWs, such as WATSON (Matsumoto and Zelinsky, 1999; Matsumoto et al., 2001), the OSAKA wheelchair (Nakanishi et al., 1999; Kuno et al., 2001), the NLPR wheelchair (Wei, 2004) and the EMG wheelchair (Moon et al., 2005). However, these systems are not robust enough to be deployed in the real world and much improvement is necessary. The new generation of head gesture-based control of wheelchairs should be able to deal with the following uncertainties in the practical applications of IWs:
the background may be cluttered and dynamically changing when IWs move in the real world;

The current issue and full text archive of this journal is available at

Industrial Robot: An International Journal
34/1 (2007) 60-68
© Emerald Group Publishing Limited [ISSN 0143-991X]
[DOI 10.1108/01439910710718469]

This research project has been jointly funded by the Royal Society in the
UK and the Chinese Academy of Sciences in China.



excellent processing capabilities (30 MIPS) and compact peripheral integration, so that the control system is able to achieve both real-time signal processing and high-performance driving control. Note that an obstacle avoidance module is embedded in the DSP motion controller for safe operation. Our RoboChair has two control modes, namely intelligent control and manual control.
the user may have different facial appearances at different times, such as a moustache or glasses;
the face color may change dramatically under varying illumination conditions; and
the user's head may move around to look at the surroundings rather than to issue control commands.
In this paper, a novel head gesture-based interface (HGI) is developed for our IW, namely RoboChair, based on the integration of the Adaboost face detection algorithm (Viola and Jones, 2004) and the Camshift object tracking algorithm (Bradski, 1998). In our approach, head gesture recognition is conducted by means of real-time face detection and tracking. The developed HGI aims to solve the problems listed above and provide a useful human-robot interface for our RoboChair.
The rest of the paper is organised as follows. Section 2
presents the system hardware structure of our RoboChair. In
section 3, we propose a new HGI for the control of our
RoboChair, which is based on the combination of both
Adaboost and Camshift algorithms. Experimental results are
presented in section 4 to show the feasibility and performance
of our new algorithm. Finally, a brief conclusion and future
work are presented in section 5.

2.1 Intelligent control mode

This mode is currently being developed in this project. Under this control mode, our RoboChair is controlled by the proposed HGI. A Logitech webcam is used to acquire facial images of the user. After the image data is sent to the laptop, the head gesture analysis and decision-making stages are executed. Finally, the laptop sends control decisions to the DSP motion controller that actuates two DC motors.

2.2 Manual control mode
This mode has already been built into our RoboChair. Under this control mode, our RoboChair is controlled by the joystick that is directly connected to an A/D converter of the DSP motion controller.
Note that no matter which control mode our RoboChair is working under, in order to deal with uncertainties in the real world, sonar readings are sent directly to the DSP motion controller for real-time obstacle avoidance and emergency handling.

2. System hardware structure

Figure 1 shows a picture of our RoboChair, which was built in 2004 and has the following components:
six ultrasonic sensors at a height of 50 cm for obstacle avoidance (four at the front and two at the back);
a DSP TMS320LF2407-based controller for motion control of two differentially-driven wheels;
a local joystick controller connected to an A/D converter of the DSP-based controller;
a Logitech 4000 Pro webcam for recognising the user's head gestures; and
an Intel Pentium-M 1.6 GHz Centrino laptop with Windows XP installed to analyse the head gestures.

3. HGI for RoboChair

3.1 Adaboost and Camshift algorithms
Adaboost is a state-of-the-art face detection method with both high accuracy and fast speed (Viola and Jones, 2004). It extracts the Haar-like features of images, which contain image frequency information. This process is very fast since only integer calculations are involved. A set of key features is then selected from these extracted features. After being sorted according to importance, this set of features can be used as a cascade of classifiers that are very robust and able to detect various faces under varying illumination conditions and different face colors. Adaboost is also able to detect profile faces.
Figure 3 shows the block diagram of Adaboost face
detection algorithm. It consists of a sequence of stages:
1 data acquisition (image capturing);
2 pre-processing (filtering);
3 feature extraction (rectangular features); and
4 a parallel stage: classifiers design (boosted cascade design)
and classification.
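The speed of the feature-extraction stage comes from the integral-image trick: any rectangular (Haar-like) feature can be evaluated from a handful of table lookups using integers only. A minimal numpy sketch of this idea (illustrative only, not the authors' implementation):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of pixels in the h-by-w rectangle with top-left corner (y, x),
    computed from just four lookups in the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, y, x, h, w):
    """Horizontal two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

Once the table is built, every feature costs a constant number of integer additions regardless of rectangle size, which is what makes evaluating a large boosted cascade feasible in real time.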

The control architecture of our RoboChair is shown in Figure 2. The TI DSP chip TMS320LF2407 is used as the core processor of the motion control module. It offers

Figure 1 Prototype of the RoboChair used in this research

In the classifier design, supervised learning is adopted to select the Adaboost features.
Camshift is a very efficient color tracking method based on image hue (Bradski, 1998), and is in fact a classical optimization algorithm. It uses a robust non-parametric technique for climbing density gradients to find the mode (peak) of a probability distribution, called the mean shift algorithm. In each iteration, Camshift aims to find the mean window center using a fixed window size. If either the window center or the window size is unstable, both values are adjusted accordingly until convergence.
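The mean shift core described above can be sketched in a few lines of numpy. This is a schematic mode-seeking loop with a fixed window, not the full Camshift (which additionally adapts the window size to the tracked distribution):

```python
import numpy as np

def mean_shift(prob, center, win=5, iters=20):
    """Repeatedly move a fixed-size window toward the centroid of a 2D
    probability map until its center converges on a mode (peak)."""
    cy, cx = center
    h, w = prob.shape
    for _ in range(iters):
        # Clip the window to the image borders.
        y0, y1 = max(0, cy - win), min(h, cy + win + 1)
        x0, x1 = max(0, cx - win), min(w, cx + win + 1)
        patch = prob[y0:y1, x0:x1]
        total = patch.sum()
        if total == 0:
            break  # no probability mass under the window
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * patch).sum() / total))  # weighted centroid row
        nx = int(round((xs * patch).sum() / total))  # weighted centroid col
        if (ny, nx) == (cy, cx):
            break  # converged: centroid coincides with window center
        cy, cx = ny, nx
    return cy, cx
```

In the real tracker the probability map is the hue back-projection of the face color model, and the converged window becomes the starting window for the next frame.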
Figure 4 shows the flowchart of the Camshift object
tracking process, which consists of four stages:


Figure 2 Block diagram of the control architecture for our RoboChair


window size adjustment (control); and
target search (the remaining stages are shown in Figure 4).

Note that Camshift has some limitations: it cannot accurately track the face when the illumination conditions change, and it does not work well against a cluttered background.

can be obtained rapidly. Further direction judgement is needed when a frontal face is detected. Here, simple scaled nose template matching is used to calculate the nose position in the frontal face window, through which the face direction can be determined very accurately.
If the user's face is lost from the camera view, Adaboost face detection is applied to the whole captured image frame so that the user's face will be tracked once again.
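The detect-or-track cycle described above can be outlined as follows. Here `detect_faces` and `track_face` are assumed stand-ins for the Adaboost and Camshift routines, and the tuple formats are illustrative, not the authors' code:

```python
def hgi_step(frame, detect_faces, track_face, state):
    """One cycle of the combined Adaboost + Camshift loop.

    detect_faces(image) -> list of (x, y, w, h, gesture) candidates;
    track_face(frame, window) -> updated (x, y, w, h) window or None.
    state carries the tracking window between frames.
    """
    if state.get("window") is not None:
        window = track_face(frame, state["window"])  # fast Camshift update
        if window is not None:
            # Run the (slow) Adaboost detector only inside the small
            # tracked window, which keeps the loop real-time.
            x, y, w, h = window
            hits = detect_faces(frame[y:y + h, x:x + w])
            if hits:
                dx, dy, dw, dh, gesture = hits[0]
                state["window"] = (x + dx, y + dy, dw, dh)
                return gesture
    # Track lost (or first frame): fall back to full-frame detection.
    hits = detect_faces(frame)
    if hits:
        x, y, w, h, gesture = hits[0]
        state["window"] = (x, y, w, h)
        return gesture
    state["window"] = None
    return None
```

The design point is that the expensive detector only ever sees either a small sub-window or, on a tracking failure, a single full frame, rather than the full frame every cycle.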

3.3 Head gesture recognition

Intel OpenCV already contains trained frontal and right profile face classifiers. After simply flipping the captured image, the left profile face can be detected as well. In order to recognise head gestures robustly under any special situation, Adaboost frontal, left profile and right profile head gesture classifiers are adopted in our research.
Since Adaboost is an appearance-based face detection method, and the left face appearance is quite similar to the right face appearance, it may occasionally have difficulty deciding whether a face is a left profile or a right profile. Therefore, a simple strategy is used to resolve this situation: the profile face with the bigger detection window dominates the head gesture. When both left profile and right profile detection windows are of the same size, the wheelchair keeps the status of the previous cycle.
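This tie-breaking rule is simple enough to state directly in code. A sketch, where the `(x, y, width, height)` window format and the gesture names are assumptions for illustration:

```python
def resolve_profile(left_win, right_win, previous):
    """Decide between left/right profile detections by window size;
    keep the previous cycle's gesture on a tie (or when neither fires)."""
    def area(win):
        # win is (x, y, width, height) or None when that profile was not detected
        return win[2] * win[3] if win else 0
    left_area, right_area = area(left_win), area(right_win)
    if left_area > right_area:
        return "left_profile"
    if right_area > left_area:
        return "right_profile"
    return previous  # equal window sizes: keep the previous state
```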
If a profile face is detected, our RoboChair is going to turn left or right. If a frontal face is detected, the particular frontal head gesture (left, right, up, down or center) is further recognised.
3.2 Integration of Adaboost and Camshift algorithms

Since low-cost IWs have limited onboard computing power, the Adaboost face detection algorithm alone cannot achieve real-time performance. On the other hand, the Camshift face tracking algorithm runs very fast, but is not robust to varying illumination conditions and noisy backgrounds. In order to obtain both fast speed and high accuracy, it is necessary to integrate both algorithms in a unified framework, as shown in Figure 5 (Jia and Hu, 2005).
In every frame, the system tries to keep tracking the user's face, which is always in front of the webcam in our RoboChair. If the user's face is tracked successfully, Adaboost face detection is applied in a comparatively small Camshift tracking window so that the face position and size, as well as the head gesture (frontal, left or right profile),

Figure 3 Block diagram of the Adaboost face detection process

Figure 4 Flowchart of the Camshift object tracking process

Here, we use 100 × 100 pixels as the standard face window size. Then, the classical template matching method is applied in this small window to calculate the precise nose position, which determines the exact frontal head gesture.
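The template matching step can be sketched as a sum-of-squared-differences search over the scaled face window (a numpy-only illustration; the paper does not specify its exact matching criterion):

```python
import numpy as np

def match_template_ssd(window, template):
    """Locate a template inside a (scaled) face window by minimising the
    sum of squared differences; returns the best top-left (row, col)."""
    wh, ww = window.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for y in range(wh - th + 1):
        for x in range(ww - tw + 1):
            patch = window[y:y + th, x:x + tw].astype(np.int64)
            ssd = int(((patch - template) ** 2).sum())
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

The recovered nose position relative to the center of the standard face window is then what distinguishes the up, down, left, right and center frontal gestures.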

left frontal, right frontal, up frontal, down frontal or center frontal head gesture is to be recognised. Because of the varying distance from the face to the webcam, the detected face windows in different frames are not of the same size; thus, it is necessary to scale each detected face window to a standard size.


Figure 5 Flowchart of the HGI (integration of Adaboost and Camshift algorithms)

user has no intention to control the wheelchair using head gestures. Also, we assume that useful head gestures have a range of 45° turning angles on each side (up, down, left and right). If the turning angle of a detected head gesture is out of this range, our HGI will detect this and no motion control command will be generated. If head gestures are detected within the specified range, RoboChair will act according to the following rules:
Speed up, if an up frontal face is recognised.
Slow down until stopping, if a down frontal face is recognised.

3.4 Motion control commands

Owing to the big difference between the frontal face and the profile face, Adaboost detects frontal and profile head gestures separately. Also, when the user just moves his/her head to look at something, but does not want to move the wheelchair, our HGI should be able to distinguish this situation and avoid generating any unnecessary action. To achieve this, we restrict the onboard camera to focus on the face of the user, who sits right in the center of the wheelchair. If the user's head and face are outside the central position, our HGI will treat this situation as the


Turn left, if left frontal/profile face is recognised.

Turn right, if right frontal/profile face is recognised.
Keep speed, if central face is recognised.
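Taken together with the gating conditions of section 3.4, the five control rules amount to a small lookup table. A minimal sketch in which the gesture names, command names and function signature are illustrative rather than the paper's actual interface:

```python
# Map recognised head gestures to motion commands for the DSP controller.
GESTURE_COMMANDS = {
    "up_frontal":     "speed_up",
    "down_frontal":   "slow_down",    # slow down until stopping
    "left_frontal":   "turn_left",
    "left_profile":   "turn_left",
    "right_frontal":  "turn_right",
    "right_profile":  "turn_right",
    "center_frontal": "keep_speed",
}

MAX_TURN_DEG = 45  # useful gestures lie within 45 degrees on each side

def command_for(gesture, turn_angle_deg, head_centered):
    """Return a motion command, or None when the gesture must be ignored
    (head off-center, or turning angle outside the useful range)."""
    if not head_centered or abs(turn_angle_deg) > MAX_TURN_DEG:
        return None
    return GESTURE_COMMANDS.get(gesture)
```

Returning `None` for off-center or out-of-range gestures is what prevents the wheelchair from reacting when the user merely looks around.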

Table I Speed of Adaboost and Camshift



4. Experimental results

4.1 Adaboost face detection vs Camshift face tracking

In this experiment, we tested the performance of the Adaboost and Camshift algorithms separately under ten special conditions:
1 normal;
2 bright;
3 darkness;
4 face color noise;
5 occlusion;
6 different face color;
7 multiple faces with different colors;
8 multiple faces with similar colors;
9 profile; and
10 profile with bright lighting.

(Table I columns: minimum face size; time cost per frame (s).)

The results are statistically calculated during a period of five

4.3 Nose template matching for recognition of frontal
face posture
Unlike the left profile and right profile head gestures, frontal head gestures require further processing to be recognised. Figure 7 shows clearly that our proposed head gesture recognition method is quite feasible. There are five frontal head gestures to be recognised, namely:
1 center frontal;
2 up frontal;
3 down frontal;
4 left frontal; and
5 right frontal.

As shown in Figure 6, the upper image for each condition shows the performance of the Adaboost face detection algorithm, and the lower image shows the performance of the Camshift face tracking algorithm. It is clear that Adaboost is very robust under different lighting and noise conditions, and faces can be detected very accurately. However, Camshift is not robust, although it is very fast: in some cases faces cannot be tracked accurately, as in Figure 6(c), (d), (g) and (j).

Each head gesture has two images: the upper image shows the detected face and the lower image shows the recognised head gesture. As can be seen, the small rectangle indicates the face posture based on its relation to the big rectangular box.
4.4 RoboChair demonstration
Finally, RoboChair demonstrations are presented to show the feasibility and performance of the proposed HGI framework. Figure 8 shows a sequence of images in which our RoboChair enters and passes through a gate:
start moving forward;
turning right;

4.2 Speed issues for Adaboost and Camshift

Table I shows that, under the same conditions, the proposed method is much faster than applying Adaboost alone, without taking the image capturing time into consideration. The experiments were performed under Windows XP on the Intel Pentium-M 1.6 GHz Centrino laptop, with an image resolution of 640 × 480 and a minimum face size of 20 × 20 or 40 × 40.

Figure 6 Robustness of Adaboost face detection (row 1, 3) and inaccuracy of Camshift face tracking (row 2, 4)



Figure 7 Frontal face posture recognition by nose template matching

Figure 8 An image sequence showing our RoboChair passing through a doorway


The HGI ignores head gestures when the user's head is not located in the center of the image, or when the user is looking around the surroundings without the intention of moving.

entering the gate;

at the gate;
passing the gate; and
turning right again.

It is clear that the proposed HGI is able to recognise the user's head gestures for hands-free control of our RoboChair. No manual operation is needed, which makes the user's life much easier and more comfortable.
Figure 9 shows a sequence of images demonstrating our RoboChair under head gesture control:
turn right;
continue turning right with head posture right up;
turn left;
turn left with hand color noise; and
continue turning left with hand color noise.

5. Conclusion and future work

This paper describes the design and implementation of a novel hands-free control system for IWs. The developed system provides enhanced mobility for elderly and disabled people who have very restricted limb movements or severe handicaps. A robust HGI is designed for vision-based head gesture recognition of the RoboChair user. The recognised gestures are used to generate motion control commands so that the RoboChair can be controlled according to the user's intention. To avoid unnecessary movements caused by the user looking around randomly, our HGI focuses on the central position of the wheelchair to identify useful head gestures.
Our future research will focus on extensive experiments and evaluation of our HGI in both indoor and outdoor environments, where cluttered backgrounds, changing lighting conditions, sunshine and shadows may complicate head gesture recognition.

In this demonstration, some uncertainty, such as hand color noise, has been deliberately added to the images. When head gestures are not in normal postures, i.e. the non-vertical head gestures in Figure 9(d) and (e), the HGI is still able to identify the user's intention and control the RoboChair very well, which demonstrates its robustness. However, the HGI will ignore head gestures if the user's head is off-center or merely looking around, as noted above.


Figure 9 An image sequence showing that our RoboChair functions well even when some uncertainties were added


References
Mazo, M., Garcia, J.C., Rodriguez, F.J., Urena, J., Lazaro,
J.L. and Espinosa, F. (2002), Experiences in assisted
mobility: the SIAMO project, Proceedings of IEEE
International Conference on Control Applications, Vol. 2,
September 18-20, pp. 766-71.
Moon, I., Lee, M., Chu, J. and Mun, M. (2005), Wearable
EMG-based HCI for electric-powered Wheelchair users
with motor disabilities, Proceedings of IEEE International
Conference on Robotics and Automation, Barcelona, Spain,
pp. 2660-5.
Nakanishi, S., Kuno, Y., Shimada, N. and Shirai, Y. (1999),
Robotic Wheelchair based on observations of both user
and environment, Proceedings of IEEE/RSJ Conference on
Intelligent Robots and Systems (IROS 1999), Kyongju, Korea,
pp. 912-7.
Prassler, E., Scholz, J., Strobel, M. and Fiorini, P. (1998),
MAid: a robotic wheelchair operating in public environments,
Lecture Notes in Computer Science, Vol. 1724, Springer-Verlag, Berlin, pp. 68-95.
Rao, R.S., Conn, K., Jung, S.H., Katupitiya, J., Kientz, T.,
Kumar, V., Ostrowski, J., Patel, S. and Taylor, C.J. (2002),
Human robot interaction: application to smart
Wheelchairs, Proceedings of IEEE International Conference
on Robotics and Automation (ICRA 2002), Washington, DC,
USA, May, pp. 3583-8.
Rofer, T. and Lankenau, A. (1998), Architecture and
applications of the Bremen autonomous Wheelchair, in
Wang, P.P. (Ed.), Proceedings of the 4th Joint Conference on
Information Systems,Vol. 1, Association for Intelligent
Machinery, pp. 365-8.
Simpson, R., LoPresti, E., Hayashi, S., Nourbakhsh, I. and
Miller, D. (2004), The smart Wheelchair component
system, Journal of Rehabilitation Research and Development
(JRRD), Vol. 41 No. 3B, pp. 429-42.
Viola, P. and Jones, M.J. (2004), Robust real-time face
detection, International Journal of Computer Vision, Vol. 57
No. 2, pp. 137-54.
Wei, Y. (2004), Vision-based human-robot interaction and
navigation of intelligent service robots, PhD dissertation,
Institute of Automation, Chinese Academy of Sciences, Beijing.
Bradski, G. (1998), Real-time face and object tracking as a

component of a perceptual user interface, Proceedings of the
4th IEEE Workshop on Applications of Computer Vision,
Princeton, NJ, USA, October, pp. 214-9.
Ding, D. and Cooper, R.A. (1995), Electric powered
wheelchairs, IEEE Control Systems Magazine, Vol. 25
No. 2, pp. 22-34.
Galindo, C., Gonzalez, J. and Fernandez-Madrigal, J.A.
(2005), An architecture for cognitive human-robot
integration. Application to rehabilitation robotics,
Proceedings of IEEE International Conference on Mechatronics
and Automation, Niagara Falls, Canada, pp. 329-34.
Gomi, T. and Griffith, A. (1998), Developing intelligent
wheelchairs for the handicapped, Assistive Technology and
Artificial Intelligence, Applications in Robotics, User Interfaces
and Natural Language Processing, Lecture Notes In
Computer Science, Vol. 1458, Springer-Verlag, Berlin,
pp. 150-78.
Jia, P. and Hu, H. (2005), Head gesture based control of an
intelligent wheelchair, Proceedings of the 11th Annual
Conference of the Chinese Automation and Computing Society
in the UK (CACSUK05), Sheffield, UK, September 10,
pp. 85-90.
Kuno, Y., Murakami, Y. and Shimada, N. (2001), User and
social interfaces by observing human faces for intelligent
wheelchairs, ACM Int. Conf. Proceeding Series archive, Proc.
of the 2001 workshop on Perceptive user interfaces, Orlando,
Florida, USA, pp. 1-4.
Levine, S.P., Bell, D.A., Jaros, L.A., Simpson, R.C., Koren,
Y. and Borenstein, J. (1999), The NavChair assistive
wheelchair navigation system, IEEE Transactions on
Rehabilitation Engineering, Vol. 7 No. 4, pp. 443-51.
Matsumoto, Y. and Zelinsky, A. (1999), Real-time face
tracking system for human-robot interaction, Proceedings of
the 1999 IEEE International Conference on Systems, Man and
Cybernetics (SMC99), Vol. 2, pp. 830-5.
Matsumoto, Y., Ino, T. and Ogasawara, T. (2001),
Development of intelligent wheelchair system with face
and gaze based interface, Proceedings of the 10th IEEE
International Workshop on Robot and Human Communication
(ROMAN 2001), pp. 262-7.


Yang, M.H., Kriegman, D.J. and Ahuja, N. (2002),

Detecting faces in images: a survey, IEEE Transactions
on Pattern Analysis and Machine Intelligence, Vol. 24 No. 1,
pp. 34-58.


Further reading
Texas Instruments (2003), TMS320LF2407, SPRS094,
April 1999, revised September 2003, Texas Instruments,
Dallas, TX.

Corresponding author

Huosheng H. Hu can be contacted at:
