1 Introduction
Many studies have been carried out over the last decade to understand the influence of emotion in education [1–6]. Along with traditional research methods based on questionnaires, methods that analyze human activities (e.g. facial expression [7], posture [9], human-smartphone interaction [10][12]) to detect emotion are also used to find the link between classroom emotional climate and academic achievement. Nevertheless, using physiological data to detect emotion can be considered a highly accurate approach, since these data directly reflect the biological reactions that accompany emotions. To collect physiological data, wearable sensors need to be
2 Bao-Lan Le-Quang et al.
attached to the human body. Many systems using wired wearable sensors to detect emotion have been proposed [11][19][24]. Unfortunately, wired sensors do not let people move freely, and with a web of wires it is difficult to apply these systems in practice. Other systems use wireless wearable sensors, but they lack online data transmission protocols and are inconvenient to wear for a whole day [13][28]. Recently, wristbands have emerged as flexible wearable sensors that can measure physiological data, transfer them to a cloud, monitor health status online, and even give recommendations for healthcare [25][26]. Being light, touch-enabled, accurate, cheap, fashionable, and easy to use are significant benefits that make these wristbands popular. Besides, smartphones are integrated with many sensors; among them, the gyroscope and accelerometer can help to understand human physical activities [12][14][25].
Leveraging these advantages of smartphones and wristbands, we build Wemotion, a smart-sensing system that uses smartphones and wristbands to detect students' emotions, towards improving the learning-teaching experience in education. A smartphone is used to collect emotional tags and transmit data to a cloud. A wristband is utilized to collect physiological data (e.g. heartbeat, walking steps) and send such data to the smartphone. Wemotion has two types of model for detecting emotion: (1) a general model that works for any user who has just joined the system, and (2) a personal model that adapts the general model to a specific user. The general model is first trained on data collected from a certain number of users. Then, via an interactive learning process, this model is personalized into a personal model that fits an individual user. Periodically, the general model is re-trained on training data updated with data collected from the personal models. Thus, the recognition accuracy, as well as the diversity and balance of the training data, improves over time. The main contribution of this paper is a method that personalizes a model to increase the accuracy of the system. Moreover, our model continues to evolve over its whole life cycle: the longer people use our system, the more accurate the results are. In contrast, other methods that use wristbands and/or smartphones for emotion detection build either a general model for all users or an individual model for a selected user.
The paper is organized as follows: Section 2 gives a brief review of existing methods that detect emotion using wristbands and/or smartphones; Section 3 describes our system; Section 4 reports the evaluation of our system as well as the comparison with others; and the last section concludes the paper and raises future directions of our research.
2 Related Work
In the last decade, we have witnessed many studies on emotion/mood recognition [15–18]. In [8], the authors propose a formula that can recognize students' mood during an online self-assessment process. The system gets feedback from each student to build a personal formula that is applied to recognize his/her mood. Unfortunately, due to its heavy dependence on individual data, this personal
Wemotion: A system to detect emotion using wristbands and smartphones 3
3 Wemotion
In this section, we discuss the overview of the Wemotion framework including
the working environment setup, algorithms and methods used to detect emotion.
[21]), to activate an app on their smartphone, and to start a Bluetooth Low Energy (BLE) connection so that the device and the smartphone can communicate. The app is built to display and receive tags from a student whenever his/her emotion changes. Second, the heart-beat and step-count data captured by the wristband are sent to the smartphone and forwarded to the backend. Third, at the backend, previously trained models are used to detect emotion. The result is sent back to the previously registered smartphones for display. At this stage, the student can re-tag the emotion if he/she feels that the result does not match his/her emotion. This cycle can be repeated an unlimited number of times. Occasionally, new versions of the models are re-trained based on the number of re-tags generated by students.
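The detect/re-tag cycle above can be sketched as follows. This is a minimal illustration only; `Sample`, `Backend`, and the placeholder classifier are hypothetical names, not the actual Wemotion implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Sample:
    """One reading forwarded from the wristband via the smartphone."""
    student_id: str
    heart_beat: int   # bpm
    step_count: int

@dataclass
class Backend:
    """Sketch of the backend side of the cycle: classify each sample with
    the current model and collect re-tags, which later drive the periodic
    re-training of new model versions."""
    classify: Callable[[Sample], str]
    retags: List[Tuple[Sample, str]] = field(default_factory=list)

    def handle(self, sample: Sample) -> str:
        # The predicted label is sent back to the registered smartphone.
        return self.classify(sample)

    def retag(self, sample: Sample, correct_label: str) -> None:
        # Called when the student feels the displayed label is wrong.
        self.retags.append((sample, correct_label))
```

In this sketch the number of accumulated `retags` is what would trigger re-training a new model version.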
system usages. Nevertheless, most cheap wristbands on the current market can only measure heart-beat and step-count. In order to adapt this method to our system, we modify the original one as follows:
– Calculating the Inter-beat Interval (IBI) in milliseconds from a heart-beat value (HBV) using equation (3)
– Applying a sliding window of size W seconds along the time axis. At each sliding step, the following values are calculated:
• meanIBI: the mean of the IBIs inside the sliding window
• SDNN: the standard deviation of the NN intervals, where an NN interval is the interval between normal heart-beats (beat-to-beat); in our case, it is the IBI.
SDNN = \sqrt{ \frac{1}{N-1} \sum_{n=2}^{N} \left[ I(n) - \bar{I} \right]^2 }    (4)
In other words, after every sliding step, one feature vector fv is created using equation (6) and sent to the next stage for processing. Table 1 shows a sample of the feature vectors in the training set.
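The feature-extraction step above can be sketched as follows. Equation (3) is not reproduced in this excerpt, so the sketch assumes the standard 60000/HBV conversion from beats per minute to milliseconds, and it assumes one HBV sample per second (the wristband's actual sampling rate is not stated here).

```python
import math

def ibi_from_hbv(hbv):
    """Inter-beat interval (ms) from a heart-beat value (bpm); assumed
    standard conversion, as equation (3) is not shown in this excerpt."""
    return 60000.0 / hbv

def sdnn(ibis):
    """SDNN per equation (4): the sum runs over n = 2..N, with the mean
    taken over the whole window and an N-1 divisor."""
    n = len(ibis)
    mean = sum(ibis) / n
    return math.sqrt(sum((x - mean) ** 2 for x in ibis[1:]) / (n - 1))

def sliding_features(hbv_series, w=60, step=1):
    """Yield one (meanIBI, SDNN) pair per sliding step of a W-second
    window, assuming one HBV sample per second."""
    ibis = [ibi_from_hbv(h) for h in hbv_series]
    for start in range(0, len(ibis) - w + 1, step):
        win = ibis[start:start + w]
        yield sum(win) / len(win), sdnn(win)
```

Each yielded pair corresponds to one fv sent on to the classification stage (the full fv of equation (6) may carry additional fields, e.g. step-count).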
In order to detect emotion, we create general and personal models that receive the features generated in the previous step as input and classify each input into a suitable label. To display the results of emotion detection (i.e. for the testing stage) as well as to manually label unknown/misclassified emotions (i.e. for the training and feedback stages), we develop an app running on a smartphone (an Android phone in our case).
General Model: We use a Support Vector Machine (SVM) to build the general model (Alg. 1). Feature vectors {fv} are fed to the SVM for training, with all labels manually annotated by volunteers. In the testing stage, unlabeled {fv} are fed to the SVM to predict their labels.
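A minimal sketch of this train-then-predict flow with scikit-learn (the library named in Section 4) follows; the feature values are made-up numbers for illustration, not real training data.

```python
from sklearn.svm import SVC

# Toy feature vectors (meanIBI in ms, SDNN in ms, step-count) -- made-up
# numbers for illustration; real {fv} come from the sliding-window stage
# and are labeled by volunteers.
X_train = [
    [1000.0, 20.0, 0],   # neutral
    [850.0, 55.0, 15],   # positive
    [700.0, 80.0, 2],    # negative
    [990.0, 22.0, 1],    # neutral
    [860.0, 50.0, 12],   # positive
    [710.0, 85.0, 3],    # negative
]
y_train = ["neutral", "positive", "negative",
           "neutral", "positive", "negative"]

clf = SVC()                      # default parameters, as in Section 4
clf.fit(X_train, y_train)

# Testing stage: an unlabeled fv is fed to the SVM to predict its label.
label = clf.predict([[995.0, 21.0, 0]])[0]
```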
Fig. 2: (a) The workflow of the Personal Model; (b) The workflow of the Fusion
Model
personal model. Figure 2b and Algorithm 3 describe how the fusion model works.
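Algorithm 3 itself is not reproduced in this excerpt. Purely as an illustration of how a fusion of the two models could arbitrate, consider the following sketch, in which both predictors are assumed to return a (label, confidence) pair:

```python
def fuse(personal_predict, general_predict, fv, threshold=0.6):
    """Illustrative arbitration between the two models (NOT the actual
    Algorithm 3): trust the personal model when it is confident enough,
    otherwise fall back to the general model."""
    label_p, conf_p = personal_predict(fv)
    if conf_p >= threshold:
        return label_p
    label_g, _ = general_predict(fv)
    return label_g
```

The threshold and the confidence-based fallback are assumptions for this sketch; the actual fusion rule is given by Algorithm 3 in the full paper.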
emotion, the teacher could adjust his/her classroom emotional climate (e.g. the teaching methods, classmate interaction, quizzes, teacher-student interaction) to decrease the proportion of negative and neutral emotions in the class.
We build a website that a teacher can access with his/her account. Each class is assigned a classID. Teachers and students are required to log in to a virtual classroom identified by the classID with their IDs (i.e. teacherID and studentID). The virtual classroom classID displays the percentages of positive, negative, and neutral emotions in the class during class time.
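The per-class aggregation shown on that page can be sketched as follows; `class_emotion_summary` is a hypothetical helper, not the website's actual code.

```python
from collections import Counter

def class_emotion_summary(labels):
    """Percentages of positive/negative/neutral detections shown in the
    virtual classroom page for one classID. `labels` is the list of
    emotions detected for the class during class time."""
    counts = Counter(labels)
    total = sum(counts.values()) or 1   # avoid division by zero
    return {label: 100.0 * counts[label] / total
            for label in ("positive", "negative", "neutral")}
```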
4 Experimental Results
To evaluate Wemotion, four volunteers, three male students (user 1, user 2, and user 3) and one female pupil (user 4), are invited to test the system. The Xiaomi Miband 1S (figure 1b) and different Android smartphones are used. User 1 joins the system for one and a half months; user 2 uses the system for one day; user 3 tests the system for two months; and user 4 uses the system for one month.
We investigate three types of emotion: positive (e.g. fun, desire, happiness, joy, interest, and compassion), negative (e.g. depression, sadness, anger, stress, and anxiety), and neutral. Each user is asked to manually label "positive", "negative", or "neutral" whenever he/she feels his/her emotion changes. This task is repeated during the feedback process (i.e. Wemotion displays a label when the user's emotion changes and asks the user to correct the label).
We utilize Google Cloud with an Intel Haswell CPU and 3.75 GB of RAM to set up the working environment. The parameters of the SVM are set to their defaults, using the scikit-learn library. The sliding window is set to W = 60 seconds.
Tables 2, 3, and 4 describe the results of emotion detection using the general model, the personal model, and the fusion model, respectively. Table 5 shows the comparison among these three models.
In general, the fusion model generates the best results compared with the others. This confirms our hypothesis about common sense and individual reactions to a certain situation mentioned in sub-section 3.4.
We compare our proposed method with the methods introduced in [26] and in [12]. In [26], Heart Rate Variability (HRV) and Electrodermal Activity (EDA) are used to create features. These data come from expensive devices that are not affordable for pupils. On the other hand, we use the Xiaomi Miband 1S, a cheap device that only measures heart-beat and step-count, which are available on most wristbands sold in the current market. In [26], k-NN, Decision Tree (DT), and Bagged Ensembles of Decision Trees (BE-DT) are leveraged to classify emotion, while we use an SVM. In [12], data from smartphones are used to build a "sensor" with personalized, all-user, and hybrid models. The personalized model requires some training data for each person; this causes a problem when a new user joins their system, and it can affect the result of their hybrid model. So we test our data only with their all-user model. Table 6 shows the comparison between our method and the method introduced in [26]. Clearly, our method can detect emotion with a higher accuracy rate than the methods introduced in [26] and [12], even though we use a cheap device with poorer information.
To evaluate the satisfaction of teachers and students when using Wemotion for educational purposes, we conduct a survey to collect their opinions after they have used our system for a while. Generally, we received positive feedback on how useful our system can be in education. Figure 3 illustrates the feedback results from teachers and students.
Fig. 3: Feedback from teachers and students when using Wemotion. (a) Is Wemotion useful in education? (b) Does Wemotion help to promote academic achievement?
5 Conclusions
In this paper, we introduce Wemotion, an emotion detection system using smartphones and wristbands. The general, personal, and fusion models are introduced, evaluated, and compared with other methods. We also conducted a survey to see how well our system works in education. Our system differs from other methods that either build a general model for all users or build several personal
models for individuals. We build a fusion model that integrates both the general and personal models to overcome overfitting and the differences in common sense and emotion levels among people. Moreover, our system has the ability to evolve over its life cycle, gradually increasing its accuracy rate. In the future, we will develop the education sub-system to better understand the link between students' emotions during class time, as well as student-student and teacher-student interactions, towards proposing a good policy for the emotional classroom climate.
References
1. Reinhard Pekrun (1992). “The Impact of Emotions on Learning and Achievement:
Towards a Theory of Cognitive/Motivational Mediators”. Applied Psychology: An
International Review, vol. 41, n. 4, pp.359-376.
2. Tanis Bryan, Sarup Mathur, and Karen Sullivan (1996). “The Impact of Positive
Mood on Learning”. Learning Disability Quarterly, vol. 19, no. 3, pp. 153-162.
3. Ika Febrilia and Ari Warokka (2011). “The Effects of Positive and Negative Mood
on University Students’ Learning and Academic Performance: Evidence from In-
donesia”. The 3rd Int. Conf. on Humanities and Social Sciences, pp.1-12.
4. Sottilare, Robert A., and Michael Proctor. ”Passively classifying student mood and
performance within intelligent tutors.” Journal of Educational Technology & Society
15, no. 2 (2012): 101.
5. Rich Lewine, Alison Sommers, Rachel Waford, and Catherine Robertson (2015).
”Setting the Mood for Critical Thinking in the Classroom”. Int. Journal for the
Scholarship of Teaching and Learning, vol. 9, no. 2, Article 5, pp. 1-4.
6. Tze Wei Liew and Su-Mae Tan (2016). “The Effects of Positive and Negative Mood
on Cognition and Motivation in Multimedia Learning Environment”. Educational
Technology & Society, vol. 19, no. 2, pp. 104–115.
7. Saket S Kulkarni, Narender P Reddy and SI Hariharan (2009). ”Facial expression
(mood) recognition from facial images using committee neural networks”. BioMed-
ical Engineering OnLine 2009 8:16.
8. Christos N. Moridis and Anastasios A. Economides (2009). ”Mood Recognition
during Online Self-Assessment Tests”. IEEE TRANSACTIONS ON LEARNING
TECHNOLOGIES, VOL. 2, NO. 1, pp. 50-61.
9. Thrasher, Michelle, Marjolein D. Van der Zwaag, Nadia Bianchi-Berthouze, and
Joyce HDM Westerink (2011). ”Mood recognition based on upper body posture
and movement features”. In International Conference on Affective Computing and
Intelligent Interaction, pp. 377-386. Springer Berlin Heidelberg.
10. Gjoreski, Martin, Hristijan Gjoreski, Mitja Luštrek, and Matjaž Gams (2015). "Automatic detection of perceived stress in campus students using smartphones". In Intelligent Environments (IE), 2015 International Conference on, pp. 132-135. IEEE.
11. Maaoui, Choubeila, and Alain Pruski (2010). ”Emotion recognition through phys-
iological signals for human-machine communication”. Cutting Edge Robotics. pp.
317-333. INTECH Open Access Publisher.
12. LiKamWa, R., Liu, Y., Lane, N.D. and Zhong, L.. (2013). ”MoodScope: building
a mood sensor from smartphone usage patterns”. Proceeding of the 11th annual
international conference on Mobile systems, applications, and services. pp. 389-402.
ACM.
13. Odinaka, Ikenna, et al. (2012). ”ECG biometric recognition: A comparative anal-
ysis.” IEEE Transactions on Information Forensics and Security 7.6: 1812-1824.
14. Tapia, Emmanuel Munguia, et al. (2007). ”Real-time recognition of physical activ-
ities and their intensities using wireless accelerometers and a heart rate monitor.”
Wearable Computers, 11th IEEE International Symposium on. IEEE
15. Sebe, Nicu, et al. (2005). ”Multimodal approaches for emotion recognition: a sur-
vey.” Internet Imaging VI. Vol. 5670. International Society for Optics and Photonics.
16. Vinola, C., and K. Vimaladevi. (2015). ”A survey on human emotion recognition
approaches, databases and applications.” ELCVIA Electronic Letters on Computer
Vision and Image Analysis 14.2: 24-44.
17. Sharma, Nandita, and Tom Gedeon. (2012). ”Objective measures, sensors and com-
putational techniques for stress recognition and classification: A survey.” Computer
methods and programs in biomedicine 108.3 (2012): 1287-1301.
18. Basu, Saikat, et al. (2017). ”A review on emotion recognition using speech.” Inven-
tive Communication and Computational Technologies (ICICCT), 2017 International
Conference on. IEEE.
19. Wioleta, S. (2013). ”Using physiological signals for emotion recognition”. 6th In-
ternational Conference on Human System Interactions (HSI) (pp. 556-561). IEEE.
20. McDuff, Daniel, Sarah Gontarek, and Rosalind Picard. (2014). ”Remote measure-
ment of cognitive stress via heart rate variability.” Engineering in Medicine and
Biology Society (EMBC), 36th Annual International Conference of the IEEE.
21. https://www.amazon.com/Original-Xiaomi-Monitor-Wristband-
Display/dp/B01GMQ4Y3O
22. Alexandratos, V. ”Mobile Real-Time Stress Detection.” (2014).
23. Leys, Christophe, et al. ”Detecting outliers: Do not use standard deviation around
the mean, use absolute deviation around the median.” Journal of Experimental
Social Psychology 49.4 (2013): 764-766
24. Healey, Jennifer A., and Rosalind W. Picard. ”Detecting stress during real-world
driving tasks using physiological sensors.” IEEE Transactions on intelligent trans-
portation systems 6.2 (2005): 156-166.
25. Sano, Akane, and Rosalind W. Picard. ”Stress recognition using wearable sensors
and mobile phones.” Affective Computing and Intelligent Interaction (ACII), 2013
Humaine Association Conference on. IEEE, 2013.
26. Zenonos, Alexandros, et al. ”HealthyOffice: Mood recognition at work using smart-
phones and wearable sensors.” Pervasive Computing and Communication Work-
shops (PerCom Workshops), 2016 IEEE International Conference on. IEEE, 2016.
27. Wijsman, Jacqueline, et al. ”Wearable physiological sensors reflect mental stress
state in office-like situations.” Affective Computing and Intelligent Interaction
(ACII), 2013 Humaine Association Conference on. IEEE, 2013.
28. Valenza, Gaetano, et al. ”Wearable monitoring for mood recognition in bipolar
disorder based on history-dependent long-term heart rate variability analysis.” IEEE
Journal of Biomedical and Health Informatics 18.5 (2014): 1625-1635.
29. Maria R. Reyes, et al. (2012). ”Classroom Emotional Climate, Student Engagement, and Academic Achievement.” Journal of Educational Psychology, 104(3): 700-712.