Analyzing User Emotions via Physiology Signals

Bohdan Myroniv, Cheng-Wei Wu, Yi Ren, Albert Budi Christian, Ensa Bajo, and Yu-Chee Tseng
∗ Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan, ROC (cww0403@nctu.edu.tw)
† School of Computing Science, University of East Anglia, Norwich, UK
Abstract. Recent studies have shown that our health is influenced not only by physical activity but also by the emotional states we experience in daily life, which continuously shape our behavior and significantly affect our physical health. Emotion recognition has therefore drawn increasing attention from researchers in recent years. In this paper, we propose a system that uses off-the-shelf wearable sensors, including heart-rate, galvanic skin response, and body temperature sensors, to read physiological signals from the user and applies machine learning techniques to recognize the user's emotional states. Recognizing these states is a key step toward improving not only physical health but also emotional intelligence in advanced human-machine interaction. In this work, we consider three types of emotional states and conduct experiments on real-life scenarios, achieving a highest recognition accuracy of 97.31%.
1. Introduction. The Internet of Things (IoT) [30] has rapidly become one of the most popular research topics due to its wide range of application scenarios. In general, these applications can be divided into three categories: (1) smart home [32], (2) smart transportation [33], and (3) e-health [34], which relies on wearable sensors. Wearable technology [31] is often touted as one of the most important applications of the IoT and builds on low-cost, low-power sensors available on the market. Existing wearable applications are in their early phase and are currently dominated by physical activity tracking devices. So-called smart-bands can monitor some of our daily activities, such as walking, running, cycling, and swimming; they can also track when and how long we sleep. Some act as personal trainers and count how many calories were burned. Although physical health monitoring applications are booming, only a few studies have addressed emotional health.
Emotion is an integral part of our health and has a huge impact on our bodies, mental health, and behavior. Poor emotional health can weaken our immune system, making us more likely to catch colds and other minor infections. According to the American Psychological Association (APA) [10], more than 53% of Americans report personal health problems as a source of stress. Stress that is left unchecked can contribute to many health problems, such as high blood pressure, heart disease, obesity, and diabetes. According to a study conducted by the American College Health Association (ACHA) [11], a considerable proportion of students said that mental health problems affected their academic
performance, causing them to drop courses or receive low grades in their classes.
Most existing studies of emotion recognition are based on visual camera-based approaches [2, 3, 6], audio speech-based approaches [1, 4, 5, 7], or physiology-based emotion recognition [27, 28]. Visual camera-based approaches use image processing technologies to detect users' emotions. However, they require the users to be clearly visible to the cameras; otherwise, recognition accuracy may fall significantly. With regard to the audio speech-based approaches [1, 4, 5, 7], the main idea is to use the speakers' loudness, tone, and intonation patterns in speech to detect their emotions. However, conversational properties vary across cultures and nationalities, which may lead to worse recognition results. In comparison, much less attention has been paid to analyzing physiological signals for emotion recognition using wearable devices [27, 28]. Yet physiological signals are a reliable data source for detecting users' emotions, in the sense that the autonomic nervous system cannot be consciously controlled by the users themselves. Such data are not affected by factors such as the lighting requirements of video-based approaches or the cultural peculiarities of audio-based approaches.
In this paper, we use three non-invasive sensors, as shown in Fig. 1: a heart rate sensor to measure the user's pulse in beats per minute (BPM), a galvanic skin response (GSR) sensor to measure skin conductivity, which is modulated by the amount of sweat secreted from the sweat glands, and a body temperature sensor (T) to measure high and low temperature values of the user's body. Then, we apply a sliding window-based segmentation method to segment the collected data and extract candidate features from the segments. After that, we feed the extracted features to classifiers to identify the user's emotional state. We implement a prototype of the proposed method on the Arduino platform and evaluate the performance of six classification algorithms for training emotion classification models: Random Tree, J48, Naive Bayes, SVM, KNN, and Multilayer Perceptron Neural Network.
We collected data from 10 participants using the Geneva Affective Picture Database (GAPED) and the International Affective Picture System (IAPS) as the triggering mechanism for experimental investigations of emotion. These databases consist of sets of labeled pictures designed to stimulate particular emotions. The results show recognition accuracy above 97% for users' emotional conditions using low-cost sensors that are available in current wearable devices. In the recognition stage, we consider three general emotional states: negative, neutral, and positive.
The rest of this paper is organized as follows: Section 2 reviews the related work. Section 3 introduces the details of the proposed method and the experimental methodology. Data analysis, performance evaluation, and the prototype implementation of the proposed system are described in Section 4. Finally, Section 5 gives conclusions and future work.
2. Related Works. In this section, we discuss existing studies, which fall into three categories: emotion recognition based on video, audio, and physiological approaches.
2.1. Video-based emotion recognition. The approaches in this category can be divided into two types: emotion recognition (ER) based on facial expressions [18, 19, 20] and on body movements [17, 21]. Both rely on cameras and image processing technologies to detect emotional states. However, to detect emotional states, the user has to be within the camera's field of view and clearly visible. In other words, such a system will output unreliable results, or will not work at all, in a poorly lit environment. Besides, the user can consciously control facial expressions to produce false classification results. In addition, camera-based emotion recognition systems are stationary and cannot be properly used as a day-long wearable solution.
we are breathing. However, such sensors are costly and cannot be properly integrated into smart-watches and fitness bands.
3. The Proposed Method. In this section, we introduce the proposed method and explain the procedure of physiological signal processing for obtaining the user's emotional states.
The collected sensor data are sent to a server via Bluetooth for further processing. Fig. 2 shows the Input-Process-Output diagram of the proposed system. In this work, we consider the recognition of three states of emotion: positive, neutral, and negative. Data generated from the sensors go through four phases before the system finally recognizes an individual user's emotional state: pre-processing, segmentation, feature extraction, and emotion classification.
Pre-processing Phase: In this phase, we observe that the collected data tend to have missing values during a short period (i.e., one to two seconds) just after the connection between the devices is established, likely due to the ongoing establishment of the Bluetooth connection. To address this problem, we cut off the first two seconds of the collected data to avoid the interference of the missing data. This step, together with the sliding window segmentation that follows it, is sketched below.
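As a minimal illustration of these two steps, assuming the samples arrive at a fixed sampling rate and are held in NumPy arrays (the function names and parameters below are ours, not the authors' implementation):

    import numpy as np

    def trim_connection_noise(samples, timestamps, cut_seconds=2.0):
        """Drop the first cut_seconds of data, where missing values tend to
        occur while the Bluetooth connection is still stabilizing."""
        keep = timestamps >= timestamps[0] + cut_seconds
        return samples[keep], timestamps[keep]

    def sliding_windows(samples, rate_hz, theta=5.0, delta=4.5):
        """Segment a signal into windows of theta seconds that overlap by
        delta seconds (step = theta - delta), as in Section 4."""
        win = int(theta * rate_hz)
        step = max(1, int(round((theta - delta) * rate_hz)))
        return [samples[i:i + win]
                for i in range(0, len(samples) - win + 1, step)]

With theta = 5 and delta = 4.5 at a 10 Hz sampling rate, each window holds 50 samples and consecutive windows start 5 samples apart.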
Table 1. The extracted candidate features

ID  Feature                       Formula
1   Mean                          $\mu(A) = \frac{1}{n} \sum_{i=1}^{n} a_i$
2   Variance                      $Var(A) = \frac{1}{n} \sum_{i=1}^{n} (a_i - \mu(A))^2$
3   Energy                        $Eng(A) = \frac{1}{n} \sum_{i=1}^{n} a_i^2$
4   Average Absolute Difference   $Aad(A) = \frac{1}{n-1} \sum_{i=2}^{n} |a_{i-1} - a_i|$
5   Average Absolute Value        $Aav(A) = \frac{1}{n-1} \sum_{i=1}^{n} |a_i|$
6   Skewness                      $Skew(A) = \frac{1}{n} \sum_{i=1}^{n} (a_i - \mu(A))^3 \big/ \big[ \frac{1}{n} \sum_{i=1}^{n} (a_i - \mu(A))^2 \big]^{3/2}$
7   Kurtosis                      $Kurt(A) = \frac{1}{n} \sum_{i=1}^{n} (a_i - \mu(A))^4 \big/ \big[ \frac{1}{n} \sum_{i=1}^{n} (a_i - \mu(A))^2 \big]^2$
8   Zero Crossing Rate            $ZCR = \frac{1}{2} \sum_{i=2}^{n} |\mathrm{sign}(a_i) - \mathrm{sign}(a_{i-1})|$
9   Mean Crossing Rate            $MCR = \frac{1}{2} \sum_{i=2}^{n} |\mathrm{sign}(a_i - \mu(A)) - \mathrm{sign}(a_{i-1} - \mu(A))|$
10  Root Mean Square              $Rms(A) = \sqrt{\frac{1}{n} \sum_{i=1}^{n} a_i^2}$
Feature Extraction Phase: In this phase, we extract candidate features from the sensor signals in each segment. Let $A = \langle a_1, a_2, ..., a_n \rangle$ be the time series of a sensor in a segment, where $a_i$ ($1 \le i \le n$) is the $i$-th sample in $A$. The extracted candidate features are shown in Table 1; a sketch of their computation is given below.
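The following sketch computes the ten Table 1 features for one segment; NumPy/SciPy and the function name are our assumptions, since the paper does not specify its implementation:

    import numpy as np
    from scipy.stats import skew, kurtosis

    def extract_features(a):
        """Compute the Table 1 candidate features for one segment A."""
        a = np.asarray(a, dtype=float)
        n, mu = len(a), a.mean()
        return {
            "mean": mu,
            "variance": a.var(),                         # (1/n) sum (a_i - mu)^2
            "energy": np.mean(a ** 2),
            "avg_abs_diff": np.abs(np.diff(a)).sum() / (n - 1),
            "avg_abs_value": np.abs(a).sum() / (n - 1),  # 1/(n-1), as printed in Table 1
            "skewness": skew(a),                         # biased moment estimator, as in Table 1
            "kurtosis": kurtosis(a, fisher=False),       # non-excess kurtosis
            "zcr": np.abs(np.diff(np.sign(a))).sum() / 2,
            "mcr": np.abs(np.diff(np.sign(a - mu))).sum() / 2,
            "rms": np.sqrt(np.mean(a ** 2)),
        }

Applying this to each sensor's segment and concatenating the dictionaries yields one feature vector per window.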
Emotion Classification Phase: In this phase, the extracted candidate features are used to build classifiers using the classification algorithms in Weka [12]. We consider the following six types of classification algorithms: k-Nearest Neighbor (abbr. KNN), J48 Decision Tree (abbr. J48), Naive Bayes (abbr. NB), Random Tree (abbr. RT), Support Vector Machine (abbr. SVM), and Multilayer Perceptron Neural Networks (abbr. MP).
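The paper trains these models in Weka; purely as an illustrative stand-in, rough scikit-learn analogues of the same six classifier families could be assembled like this (the paper's exact Weka hyper-parameters are not specified):

    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    # Approximate counterparts of the six Weka classifiers used in the paper.
    classifiers = {
        "KNN": KNeighborsClassifier(),
        "J48": DecisionTreeClassifier(),   # C4.5-style decision tree
        "NB":  GaussianNB(),
        "RT":  ExtraTreeClassifier(),      # single randomized tree
        "SVM": SVC(),
        "MP":  MLPClassifier(max_iter=1000),
    }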
3.2. Experiment Methodology. In the experiments, we recruited ten participants: seven males and three females, aged between twenty-two and thirty. Fig. 3 shows how we label the sensor data using the emotion-triggering databases.
Table 3. The accuracy of the group models under different sizes of overlapping windows
All the experiments were conducted in a separate room, and we requested the participants to leave their phones outside during the experiments. Emotion triggering is a very important part of the experiment. For emotion triggering, we use two affective photo databases in our experiments. The first one is the Geneva Affective Picture Database (GAPED) [8], provided by the Swiss Center for Affective Sciences, and the second one is the International Affective Picture System (IAPS) [9], provided by the Center for the Study of Emotion and Attention. We chose twenty photos for each type of emotional state (i.e., positive, neutral, and negative) from the above-mentioned databases. Each photo was displayed for five seconds. We tried different orders of data collection, i.e., different emotion stimuli were shown on different days and in different sequences. This approach was applied to minimize the dependency of one emotional condition on another.
We held our experiments three times. The data collected from the first experiment were never used in the processing. This is because participants commented that they felt nervous and strange due to the bunch of sensors and cables connected to their arms, which would influence their emotions and lead to noisy data. After the first experiment, however, the participants were more familiar with the sensor setup, and wearing the sensors no longer disturbed them in the subsequent data collection rounds. Therefore, only the data collected from the second and third experiments are used in the processing.
4. Performance Evaluation.
4.1. Experimental Evaluation. In this subsection, we test the recognition performance of the proposed sensing system using different types of classifiers, including KNN, J48, NB, RT, SVM, and MP. During the testing process, we apply 10-fold cross-validation for the performance evaluation, as sketched below. We consider two kinds of models, called the group model and the personalized model. In the group model, the emotion classifier is constructed from all the participants' physiological signals in the training phase, while in the personalized model a separate classifier is trained for each participant using only that participant's signals.
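A minimal sketch of the 10-fold evaluation on the group dataset follows; the feature-matrix layout and the placeholder data are our assumptions, not the authors' dataset:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    # X: one row of Table 1 features per segment (e.g., 3 sensors x 10 features);
    # y: the emotion label of each segment (negative / neutral / positive).
    # Random placeholder data stands in for the real feature matrix here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 30))
    y = rng.integers(0, 3, size=600)

    scores = cross_val_score(KNeighborsClassifier(), X, y, cv=10)
    print(f"KNN 10-fold accuracy: {scores.mean():.2%}")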
Table 2. Accuracy of the group models for each segment size (group dataset, overlapping size δ = 0)

Classifier   1 sec.    2 sec.    3 sec.    4 sec.    5 sec.
RT           83.69%    82.90%    82.58%    73.29%    80.52%
J48          90.35%    84.49%    86.48%    87.25%    80.55%
NB           39.46%    39.16%    45.34%    41.62%    34%
KNN          72.76%    74.75%    76.57%    80.45%    79.56%
SVM          47.11%    42.94%    42.64%    44.84%    49%
MP           86.28%    82.30%    79.87%    77.28%    72%
4.2. Data Analysis. Table 2 shows the accuracy of the group models under different segment sizes. In this experiment, the overlapping size δ is set to 0 seconds. As Table 2 shows, the J48 classifier performs better than the other five classifiers; when the window size θ is set to 1 second, the J48 classifier achieves 90.35% recognition accuracy. The SVM and NB classifiers are not very effective: their average accuracies are 45.59% and 39.91%, respectively. Then, we test the effect of δ on the overall accuracy of the proposed system. In this experiment, the window size θ is set to 5 seconds. Table 3 shows the accuracy of the group models when the overlapping size is varied from 0.5 to 4.5 seconds. As shown in Table 3, when the overlapping size is set to 4.5 seconds, the KNN classifier has the best recognition accuracy, up to 97.31%. In the next experiment, we fix the parameters θ and δ to 5 seconds and 4.5 seconds, respectively, and evaluate the precision, recall, and F-measure of the proposed system using the KNN classifier for each class. Table 4 shows the experimental results; the system using the KNN classifier has good recognition results for each class. Furthermore, we train a model for each participant to test the recognition accuracy of personalized models. Because there are ten participants, ten personalized models were built. In this experiment, we again set θ and δ to 5 seconds and 4.5 seconds, respectively, and consider the KNN classifier. Fig. 4 shows the classification results for each user's personalized model. As shown in Fig. 4, the average accuracy of all the personalized models is 97.78%, and the maximum and minimum accuracies are 98.2% and 96.8%, respectively. A sketch of this per-participant evaluation follows.
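A hedged sketch of the personalized evaluation, assuming each feature row is tagged with a participant id (the arrays and names are ours):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def personalized_accuracies(X, y, participant_ids):
        """10-fold cross-validation of one KNN model per participant.
        X: feature rows, y: emotion labels, participant_ids: id per row."""
        results = {}
        for p in np.unique(participant_ids):
            mask = participant_ids == p
            scores = cross_val_score(KNeighborsClassifier(),
                                     X[mask], y[mask], cv=10)
            results[p] = scores.mean()
        return results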
Fig. 7 shows the pattern for emotion classification: accuracy grows as the overlapping size increases. Hence, we combine larger segments with larger overlapping sizes, as this yields higher accuracy among the classifiers. Table 5 shows the improvement of accuracy for a 10-second segment in relation to the overlapping size. When the dataset is small, predictions tend to be inaccurate; conversely, when the dataset is very large, inaccurate predictions are also observed due to the long transitions between emotional states. Therefore, in the presented system, the optimal combination is a 5-second segment with a 4.5-second overlapping size. Fig. 8 compares the accuracy of the four best classification algorithms, Random Tree (RT), Decision Tree (J48), k-Nearest Neighbor (KNN), and Multilayer Perceptron (MP), with and without the sliding window (SW) method. The sliding window method increases accuracy among all classifiers by 5.5% on average, as shown in Table 6. The performance of the classifiers for each of the five segment and overlapping window sizes is shown in Table 7 for segment sizes θ = 1, 2, 3, 4, and 5 seconds, respectively.
4.3. Prototype Implementation. In this subsection, we introduce the prototype of the proposed system. Fig. 6 shows the details of the system prototype components. We developed an Android application to recognize users' emotional states online. The prototype consists of an Arduino side and an Android side. On the Arduino side, the heart rate, body temperature, and GSR sensors continuously collect the user's physiological signals and send them to the smart-phone via Bluetooth. On the Android side, the collected data go through the processing phases described in Section 3.
When the Bluetooth connection is established, the App moves to the second screen, where the user can see three boxes showing the detected heart beats per minute (shown as Heart rate in the interface), GSR (abbr. as Skin resp. in the interface), and temperature (abbr. as Tempr. in the interface) in a real-time fashion. The detected emotional state is shown in the upper right corner of the App. Online recognition is done using the pre-trained classifier inside the smart-phone, with the sensor stream parsed as sketched below.
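As an illustration of the receiving side, the streamed samples might be parsed as follows (a sketch using pyserial; the port path, baud rate, and the "bpm,gsr,temp" line format are our assumptions, not the authors' exact protocol):

    import serial  # pyserial

    # Read sensor samples from the Bluetooth serial link. The device path,
    # baud rate, and comma-separated line format are placeholders.
    with serial.Serial("/dev/rfcomm0", 9600, timeout=1) as port:
        while True:
            line = port.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                bpm, gsr, temp = (float(v) for v in line.split(","))
            except ValueError:
                continue  # skip malformed lines
            print(f"HR={bpm:.0f} BPM, GSR={gsr:.1f}, T={temp:.1f} C")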
4.4. System Performance Testing. We test our system in real-life scenarios. We consider different actions to see which emotions are triggered and whether they match our expected results. Fig. 10 demonstrates a user performing different actions, such as reading a paper, scrolling Facebook, watching a funny video, reading an article, and viewing negative pictures. In our performance testing, the user reads an article about a newly released laptop, and the left side of the screen shows the screen cast from the prototype application in an on-line fashion. Here the output of the App shows "Positive", which means the user's emotional state is positive.
Furthermore, we would like to point out that the transition between different emotional states usually takes approximately 45-60 seconds before the system becomes steady and accurate on the new emotional state. This is because heart rate, temperature, and other physiological signals cannot change quickly; it takes some time for our bodies to increase or decrease these characteristics.
5. Conclusions and Future Work. In this work, we use heart rate, body temperature, and galvanic skin response sensors to design a wearable sensing system for effective recognition of users' emotional states. The proposed system recognizes three types of emotions, namely positive, neutral, and negative, in an on-line fashion. We apply machine learning techniques to process the physiological signals collected from the user. The process consists of four main phases: data pre-processing, data segmentation, feature extraction, and emotion classification. We implement a prototype of the system on the Arduino platform with an Android smart-phone. Extensive experiments on real-life scenarios show that the proposed system achieves up to 97% recognition accuracy when it adopts the k-nearest neighbor classifier. In future work, we will consider more types of emotional states and look for correlations between these emotions and physical activities to enable more diverse and novel applications. The demo video, source code, and dataset of our system are available at https://goo.gl/KhhcyA.
Acknowledgment. This work was supported in part by the Ministry of Science and Technology of Taiwan, ROC, under grants MOST 104-2221-E-009-113-MY3, 105-2221-E-009-101-MY3, and 105-2218-E-009-004.
REFERENCES
[1] Amin S., Andriluka M., Bulling A., Müller M. P., Verma P., "Emotion recognition from embedded bodily expressions and speech during dyadic interactions," IEEE International Conference on Affective Computing and Intelligent Interaction, pp. 663-669, 2015.
[2] Bulut M., Busso C., Deng Z., Kazemzadeh A., Lee C. M., Lee S., Narayanan S., Neumann U., Yildirim S., "Analysis of emotion recognition using facial expressions, speech and multimodal information," International Conference on Multimodal Interfaces, pp. 205-211, 2004.
[3] Bilakhia S., Cowie R., Eyben F., Jiang B., Pantic M., Schnieder S., Schuller B., Smith K., Valstar M., "The continuous audio/visual emotion and depression recognition challenge," International Workshop on Audio/Visual Emotion Challenge, pp. 3-10, 2013.
[4] Aswathi E., Deepa T. M., Rajan S., Shameema C. P., Sinith M. S.,“Emotion recognition from
audio signals using Support Vector Machine,” IEEE Recent Advances in Intelligent Computational
Systems, pp. 139–144, 2015.
[5] Dai K., Fell J. H., MacAuslan J., "Recognizing emotion in speech using neural networks," International Conference on Telehealth/Assistive Technologies, pp. 31-36, 2008.
[6] Chakraborty A., Chakraborty U. K., Chatterjee A., Konar A., "Emotion Recognition From Facial Expressions and Its Control Using Fuzzy Logic," IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 39, No. 4, pp. 726-743, 2009.
[7] Pao T. L., Tsai Y. W., Yeh J. H.,“Recognition and analysis of emotion transition in Mandarin speech
signal,” IEEE International Conference on Systems Man and Cybernetics, pp. 3326–3332, 2010.
[8] Dan-Glauser E. S., Scherer K. R.,“The Geneva affective picture database (GAPED): a new 730-
picture database focusing on valence and normative significance,” Behavior Research Methods, Vol.
43, pp. 468-477, 2011.
[9] Bradley M. M., Cuthbert B. N., Lang P. J., "International Affective Picture System (IAPS): Technical Manual and Affective Ratings," NIMH Center for the Study of Emotion and Attention, 1997, http://csea.phhp.ufl.edu/media.html.
[10] American Psychological Association, http://www.apa.org/index.aspx
[11] Consequences of Poor Mental Health, www.campushealthandsafety.org/mentalhealth/consequences/
[12] Weka 3: Data Mining Software in Java, http://www.cs.waikato.ac.nz/ml/weka/
14 Bohdan Myroniv, Cheng-Wei Wu, Yi Ren, Albert Budi Christian, Ensa Bajo, and Yu-Chee Tseng
[13] Ralf Köbele, Mandy Koschke, Steffen Schulz, Gerd Wagner, Shravya Yeragani, Chaitra T. Ramachandraiah, Andreas Voss, Vikram K. Yeragani, Karl-Jürgen Bär, "The influence of negative mood on heart rate complexity measures and baroreflex sensitivity in healthy subjects," Indian Journal of Psychiatry, Vol. 52, No. 1, pp. 42-47, 2010.
[14] Gregory D. Webster, Catherine G. Weir, "Emotional Responses to Music: Interactive Effects of Mode, Texture, and Tempo," Motivation and Emotion, Vol. 29, No. 1, pp. 19-39, 2005.
[15] Wolfram Boucsein, "Electrodermal Activity," Springer, 2012.
[16] Lauri Nummenmaa, Enrico Glerean, Riitta Hari, Jari K. Hietanen, "Bodily maps of emotions," Proceedings of the National Academy of Sciences, Vol. 111, No. 2, pp. 646-651, 2014.
[17] Ginevra Castellano, Santiago D. Villalba, Antonio Camurri,“Recognizing Human Emotions from
Body Movement and Gesture Dynamics,” ACII, pp. 71-82, 2007.
[18] Carlos Busso, Zhigang Deng, Serdar Yildirim, Murtaza Bulut, Chul Min Lee, Abe Kazemzadeh, Sungbok Lee, Ulrich Neumann, Shrikanth Narayanan, "Analysis of emotion recognition using facial expressions, speech and multimodal information," ICMI, pp. 205-211, 2004.
[19] G. U. Kharat, S. V. Dudul, "Neural network classifier for human emotion recognition from facial expressions using discrete cosine transform," First International Conference on Emerging Trends in Engineering and Technology, pp. 653-658, 2008.
[20] R. Cowie, E. D. Cowie, J. G. Taylor, S. Ioannou, M. Wallace, S. Kollias, "An intelligent system for facial emotion recognition," IEEE International Conference on Multimedia and Expo, 2005.
[21] A. Camurri, I. Lagerlöf, G. Volpe, "Recognizing emotion from dance movement: comparison of spectator recognition and automated techniques," International Journal of Human-Computer Studies, Vol. 59, pp. 213-225, 2003.
[22] J. H. Jeon, R. Xia, Y. Liu, "Sentence level emotion recognition based on decisions from subsentence segments," IEEE ICASSP, pp. 4940-4943, 2011.
[23] A. A. Razak, M. I. Zainal Abidin, R. Komiya, "Comparison between fuzzy and NN method for speech emotion recognition," International Conference on Information Technology and Applications, 2015.
[24] S. G. Koolagudi, R. Reddy, K. S. Rao, "Emotion recognition from speech signal using epoch parameters," SPCOM, 2010.
[25] T. L. Pao, J. H. Yeh, Y. W. Tsai, "Recognition and analysis of emotion transition in Mandarin speech signal," IEEE SMC, pp. 3326-3332, 2010.
[26] Andreas Haag, Silke Goronzy, Peter Schaich, Jason Williams, "Emotion Recognition Using Biosensors: First Steps towards an Automatic System," Affective Dialogue Systems, Tutorial and Research Workshop, ADS, Kloster Irsee, 2004.
[27] Eun-Hye Jang, Byoung-Jun Park, Sang-Hyeob Kim, Myoung-Ae Chung, Mi-Sook Park, Jin-Hun Sohn, "Classification of Human Emotions from Physiological Signals using Machine Learning Algorithms," IARIA, pp. 395-400, 2013.
[28] Eun-Hye Jang, Byoung-Jun Park, Sang-Hyeob Kim, "Emotion Classification based on Physiological Signals induced by Negative Emotions," IEEE ICNSC, 2012.
[29] Bohdan Myroniv, Cheng-Wei Wu, Yi Ren, and Yu-Chee Tseng, "Analysis of Users' Emotions Through Physiology," The 11th International Conference on Genetic and Evolutionary Computing, 2017.
[30] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash,“Internet of Things: A
Survey on Enabling Technologies, Protocols, and Applications,” IEEE Communications Surveys and
Tutorials, Vol. 17, No.4, pp.2347-2376, 2015.
[31] Oscar D. Lara and Miguel A. Labrador, "A Survey on Human Activity Recognition Using Wearable Sensors," IEEE Communications Surveys and Tutorials, Vol. 15, No. 3, 2013.
[32] Smart Home, https://en.wikipedia.org/wiki/Home_automation
[33] Smart Transportation, https://en.wikipedia.org/wiki/Vehicular_automation
[34] Wearables / e-Health, https://en.wikipedia.org/wiki/EHealth
Ensa Bajo received his B.Sc. degree in Electrical Engineering from the Republic of China Military Academy, Taiwan. Currently, he is pursuing an M.Sc. in the Department of Computer Science at National Chiao Tung University. His research interests include smart-phone sensing and data mining.