
Accepted Manuscript

An Affective and Web 3.0-Based Learning Environment for a Programming Language

Ramón Zataraín Cabada, María Lucía Barrón Estrada, Francisco González Hernández, Raúl Oramas Bustillos, Carlos Alberto Reyes-García

PII: S0736-5853(16)30490-7
DOI: http://dx.doi.org/10.1016/j.tele.2017.03.005
Reference: TELE 925

To appear in: Telematics and Informatics

Received Date: 30 September 2016


Revised Date: 24 March 2017
Accepted Date: 29 March 2017

Please cite this article as: Cabada, R.Z., Estrada, M.L.B., Hernández, F.G., Bustillos, R.O., Reyes-García, C.A., An
Affective and Web 3.0-Based Learning Environment for a Programming Language, Telematics and Informatics
(2017), doi: http://dx.doi.org/10.1016/j.tele.2017.03.005

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers
we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and
review of the resulting proof before it is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
An Affective and Web 3.0-Based Learning Environment for a
Programming Language
Ramón Zataraín Cabada 1,* rzatarain@itculiacan.edu.mx
María Lucía Barrón Estrada 1 lbarron@itculiacan.edu.mx
Francisco González Hernández 1 francisco.gonzalez@itculiacan.edu
Raúl Oramas Bustillos 1 raul.oramas@udo.mx
Carlos Alberto Reyes-García 2 kargaxxi@inaoep.mx

1 Instituto Tecnológico de Culiacán, Division of Research and Postgraduate Studies, Juan de Dios Bátiz 310 Pte. Col. Guadalupe, Culiacán, Sinaloa, CP 80220, México
2 Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Luis Enrique Erro No. 1, Sta. Ma. Tonanzintla, Puebla, 72840, México

* Corresponding Author
Abstract.
We present a Web-based environment for learning Java programming that aims to provide adapted and individualized programming instruction to students by using modern learning technologies such as a recommender and mining system, an affect recognizer, a sentiment analyzer, and an authoring tool. All these components interact in real time to provide an educational setting where students learn to develop Java programs. The recommender system is an e-learning 3.0 software component that recommends new exercises to a student based on the actions (ratings) of previous learners. The affect recognizer analyzes pictures of the student to recognize learning-centered emotions (frustration, boredom, engagement, and excitement) that are used to provide personalized instruction. Sentiment text analysis determines the quality of the programming exercises based on the opinions of the students. The authoring tool is used to create new exercises with no programming work. We conducted two evaluations: the first used the Technology Acceptance Model to assess the impact of our software tool on student behavior; the second applied Student's t-test to assess the learning gain after a student used the tool. The results show that students perceived enjoyment and are willing to use the tool. The study also shows that students using the tool achieve a greater learning gain than those who learn with a traditional method.

Keywords: Web 3.0; Intelligent Learning Environment; Educational Applications

1. Introduction
Learning a first programming language is a major challenge for a student of computer science or a related field (Soloway & Spohrer, 2013; Hoc, 2014). Some factors that contribute to this challenge are the teaching methods employed by the instructor, the study methods employed by the student together with his/her abilities and attitudes, and the nature of programming itself. At the same time, new advanced learning technologies related to what is called the Web 3.0 are actively used in the teaching scene (Hendler, 2009; Morris, 2011; Barassi & Treré, 2012; Vásquez-Ramírez et al. 2016). The combination and integration of these new web technologies, such as Linked Data (Bizer et al. 2009; Heath & Bizer, 2011), Big Data (Manyika et al. 2011), the Semantic Web (Berners-Lee et al. 2001), and Educational Web Mining (Romero & Ventura, 2010), with fields and applications of artificial intelligence, affective computing, and pedagogy, has created powerful new learning environments with modern interfaces for many knowledge fields. This new area has been called E-Learning 3.0 in many works (Rubens et al. 2012; Dominic et al. 2014; Hussain, 2013), distinguishing it from earlier Web 2.0 educational systems, where social networks played a very important role.

In this paper, we present Java Sensei, a Web-based Intelligent Learning Environment (ILE) designed for learning Java programming (Cabada et al. 2015). Our learning system has an Affective and Intelligent Tutoring System (AITS) working in a web-based environment. The AITS adapts its content individually according to the preferences of students and a student model, either changing its graphic user interface or presenting different learning material. To achieve this goal, Java Sensei implements a recommendation system that uses a rating method where each rating represents the user's preference. The rating method uses a general algorithm that compares the current user with other users in the database. The rated elements are exercises and help items, which are saved in a data server; both kinds of items receive ratings from all users in order to record their preferences. Java Sensei uses collaborative filtering (Schafer et al. 2007), taking advantage of the real-time nature of a Web system. The learning environment also contains a facial expression recognizer to identify emotions in students, a sentiment analyzer to determine the quality of the exercises based on the dialogues (texts) of the students, and a fuzzy system that performs the reasoning and inference of the learning (expert) system.

The paper is organized as follows: Section 2 describes related work on the main topics. Section 3 presents the research methodology, and Section 4 shows and explains the software architecture of the Web-based system. Section 5 describes the implementation of affect and intelligence in the system. Section 6 presents the evaluation of tool acceptance. Section 7 shows the assessment of academic performance. Finally, conclusions and future work are discussed in Section 8.
2. Related Work

This section describes research work in areas related to our investigation.

2.1. The Web 3.0 and Learning

E-Learning 1.0 made learning or educational objects available online. It allowed students to read learning material stored on the Web; this material was created by authors of web pages (Richardson, 2005). E-Learning 2.0 allowed students not only to read learning material but also to write/create it (Richardson, 2005; Downes, 2005; Vásquez-Ramírez et al. 2014a). Users/students also became the authors of the collective intelligence on the Web, with social networks as the main tools allowing them to participate as authors of the knowledge in the different learning environments. In addition to allowing students to read and write learning material, an E-Learning 3.0 environment lets students collaborate with each other (Wheeler, 2011). Artificial intelligence plays a crucial role in this type of learning environment by providing new methods and techniques to organize web pages so they can be structured as a semantic web. Users (students) can see the Web not only as data and information but as knowledge that can be accessed from a learning environment.
To date, there have been many implementations of learning environments that use technologies related to the concept of the Web 3.0. Kurilovas et al. (2014) present an analysis of the interconnections between students' learning styles, their preferred learning activities, relevant teaching/learning methods, and learning-object types in a Web 3.0 virtual learning environment; personalization of learning styles is implemented with special ontologies. In (Dietze et al. 2012), the authors introduce a Linked Education environment and describe a general approach to using existing TEL (Technology-Enhanced Learning) data on the Web by treating it as Linked Data in the educational domain. Paredes-Valverde et al. (2015) propose a natural language interface that allows non-expert users to access semantic knowledge bases by formulating queries in natural language; the approach uses a special domain-independent ontology, and to test the method they conducted an evaluation in the music domain using LinkedBrainz. Vásquez-Ramírez et al. (2014b) present a system called AthenaTV that generates Android-based educational applications for TV, following the interface design patterns established by Google for Google TV.

2.2. Affect Recognition in Facial Expressions

There are currently many works on obtaining the current emotion from facial expressions, and they implement different methods of feature extraction and classification. For example, Khandait et al. (2012) use a neural network with back-propagation to classify emotions such as astonished, neutral, sad, upset, frightened, happy, and angry. They work with morphological image processing operations such as SUSAN edge detection, and they use the JAFFE database to train their neural network. Gangwar's emotion recognizer (Gangwar et al. 2013) is an intelligent system that uses a feed-forward neural network to analyze images of people displaying different emotions. The output of the neural network in this recognizer is a vector of six values, where each one indicates the similarity of the processed image to each emotion; the algorithm always selects the emotion with the highest value. De et al. (2015) describe a feature-extraction process in which an HSV color model is used to detect the face in the image; after that, PCA is applied to reduce the dimensionality of the face representation. Table 1 shows the main features of these recognizers, such as the classification method, the method used for feature extraction, and the precision rate they reach for emotion prediction.

Table 1. Affect Recognizers for Facial Expression.

Reference              Classification                   Feature Extraction Method       Rate
Khandait et al. 2012   Neural Network                   SUSAN edge detection            95.2 %
Gangwar et al. 2013    Neural Network                   2D Discrete Cosine Transform    88 %
De et al. 2015         Euclidean Distance Comparison    Eigenfaces                      85.38 %

There are also many valuable surveys on the state of the art in facial expression recognizers. One of the most recent is by Sariyanidi et al. (2015), which covers a good number of recognizers not only for basic emotions but also for non-basic emotions. It also presents different aspects of face recognition such as face registration, spatial representation, spatio-temporal representation, and dimensionality reduction.
2.3. Modern Learning Environments for Programming Languages

Modern research on intelligent learning environments and intelligent tutoring systems (ITS) for programming has focused on different programming languages such as Java, C, C#, and PHP. Wiggins et al. (2015) implemented the JavaTutor System, an affective tutoring system that uses natural language dialogues for human-to-human or human-to-computer communication using machine learning techniques. CSTutor (Hartanto & Reye, 2013) is a tool that helps students learn programming in C#. The tool incorporates anchored instruction for the domain representation of the programming language and game theory, instead of affect recognition, to improve motivation. Students can use PHP ITS (Weragama & Reye, 2012) to learn programming in the PHP scripting language. The tutoring system analyzes the program written by the student and translates it into an abstract syntax tree (AST). To check the correctness of a program, the tutoring system analyzes whether all predicates in the goal for a particular task are present in the final state. Through this method, the student writes the solution to the task in his/her own way, as long as the final state of his/her program meets the specified target. CTutor (Kose & Deperlioglu, 2012) is an intelligent learning environment (ILE) where students solve C programming exercises. The idea of this system is to improve the effectiveness of learning when students navigate through complex problems and thereby provide an advanced education system that enhances the learning process. This ILE allows teachers to create new programming exercises and manage them from the system interface; the domain in the system is adjustable for the solution of each exercise. The tool follows a constructivist pedagogical orientation in which the authors give special weight to feedback as the main source of information; moreover, its domain is built on a constraint-based model.

Table 2 shows a comparative analysis of 11 different types of learning environments for programming that we felt were important and very different from each other. Rather than giving verbose descriptions of these works, Table 2 summarizes all the surveyed works. The papers are presented in chronological order starting from 2003.

There are some important issues or problems in the learning systems shown in Table 2. The first problem is the inability of these systems to recognize emotions centered on learning. In our survey, only the JavaTutor System (Wiggins et al. 2015) recognizes emotions, and these are only basic emotions (i.e., anger, fear, sadness, happiness, disgust, and surprise), which are generally not relevant to the process of learning (D'Mello et al. 2008; Lehman et al., 2008). This represents a weakness of the other systems, since emotions such as engagement, frustration, boredom, or confusion (non-basic emotions) dominate in a learning environment.

A second important problem is that these systems focus on teaching programming while ignoring key aspects of teaching good programming practices. For example, when the student finds a solution to a problem, the system should sometimes recommend a better or optimal solution. This is also important when recommending exercises according to the student's affective state and level of knowledge.

Table 2. Comparative Analysis of Learning Environments (LE) for Programming.

Learning Environment; Year; Description and Language; Important AI features:

JITS (Sykes & Franek, 2003). 2003. LE for learning Java that generates feedback based on the student's responses; it uses the ACT-R theory. AI features: it uses AI techniques to generate feedback.

BITS (Butz et al. 2004). 2004. It provides a guide to find the most suitable subjects that must be studied to learn programming in C++. AI features: it uses Bayesian networks to track the student's knowledge.

j-LATTE (Holland et al. 2009). 2009. A constraint-based tutor that teaches a subset of the Java programming language. AI features: it uses a constraint-based model.

JavaGuide (Hsiao et al. 2009). 2009. A system that guides students to select the most appropriate questions about Java programming. AI features: adaptive educational system using hypermedia.

ITS PHP (Weragama & Reye, 2012). 2012. The LE analyzes the student's code (PHP) and translates it into an abstract syntax tree (AST). AI features: a knowledge base in an ITS with first-order predicate logic and classical and hierarchical planning.

CTutor (Kose & Deperlioglu, 2012). 2012. The domain module in the LE can be adapted to the students; it uses a constraint-based model to help students code C programs. AI features: Intelligent Learning Environment using a constraint-based model.

NooLab (Neve et al. 2012). 2012. LE used in distance education with a constructivist teaching model; NooLab is focused on programming in the JavaScript language. AI features: the tool is implemented as an Intelligent Learning Environment.

DebugIT (Carter & Blank, 2013). 2013. LE that teaches programming through the concept of code debugging. AI features: Case-Based Reasoning (CBR) in an Intelligent Tutoring System.

CSTutor (Hartanto & Reye, 2013). 2013. LE that helps students learn C#; it uses the concept of anchored instruction, where teaching and learning activities are designed around a story or situation that includes a problem or issue. AI features: incorporation of the anchored learning approach in an Intelligent Tutoring System.

Intelligent Code Tutoring System (Kadge et al. 2015). 2015. It uses visual communication for the understanding of large datasets and thus facilitates their programming; it produces a flowchart. AI features: integration of animation and visualization with programming and software testing through dynamic flow charts.

The JavaTutor System (Wiggins et al. 2015). 2015. A LE for Java with multimodal affect recognition; it takes into account cognitive and affective aspects of students, using different hardware tools to recognize their affective state. AI features: it handles dialogs and natural language processing with affect recognition.

3. Research Methodology

To develop the software and the study, we combined a methodology for the development of a software system with a descriptive research study (Bryman, 2015). A descriptive research study is concerned with describing the characteristics of a particular group or person. For the development of the software system, the Rational Unified Process (RUP) methodology (Jacobson et al. 1998) was used in combination with a method for software architecture design that makes it possible to build a software system satisfying both functional and quality requirements (Bosch & Juristo, 2003).

The complete research process had four important steps:

Design of the Software Architecture. In this stage, the goal was to distribute the system's functionalities among various components that interact with each other to achieve the general operation defined in the functional requirements, as well as to determine the architectural model that best fits the established quality requirements.

Implementation of components. The purpose of this stage was to implement each component of the software architecture according to its specifications and design.

Integration of components. The functionalities of all the components (intelligent tutoring system, recommender system, individual emotion or affect recognizers, etc.) were integrated to deploy the Web 3.0-based learning environment for Java programming.

Evaluations and Analysis. Two studies using the environment were carried out, and their results were analyzed. The first study evaluated usability and acceptance using a research model based on TAM. The second study evaluated the learning gain of the students.

The next sections give a detailed explanation of how these four steps were carried out to design and implement the learning environment Java Sensei and to evaluate it with students.

4. The Design of Java Sensei

This section describes the design of the software architecture and the Web interfaces of Java Sensei. Java Sensei has a layered design that organizes its modules and components. This layered design allows scalability and easy maintenance because tasks and responsibilities are distributed; information is exchanged between layers, modules, and components through well-defined interfaces. This architectural style was chosen for its ability to accommodate new features, so future changes such as new methods of emotion recognition can be made without affecting other components. It also enables better control over data persistence, allowing the addition of more repositories to process thousands of students' photos and log data. A description of each element of the architecture is presented below. Fig. 1 illustrates the organization and structure of the layers in the software architecture.
Fig. 1. Architecture of Java Sensei

4.1 Presentation Layer


This layer contains the components that interact with users. The user communicates with the system using several input devices (screen, mouse, keyboard, or any other type of input). The components of this layer are responsible for several tasks, such as generating web content, building the web user interface, and controlling the information flow in the system. The modules are described as follows.

Authoring Tool and Example-Tracing Interpreter


This module contains two components developed for two kinds of users: teachers and students. The first component is an authoring tool for developing new exercises from a web interface, and the second component allows Java Sensei to follow the student's response and compare it to the correct solution. This second component is an Example-Tracing Interpreter that provides the student with step-by-step guidance and multiple solution strategies for each problem, including optimal, sub-optimal, and erroneous solutions. The teacher/instructor uses the authoring tool to create new exercises with different types of feedback, emotional text, and so on. The system uses these elements (messages, feedback, etc.) at run time to interact with the student according to his/her answers and emotional state. The Authoring Tool interface is shown in Fig. 2.

Fig 2. Graphic user interface of authoring tool.

The main task of the interpreter is to perform the example-tracing approach (Aleven et al. 2009; Aleven et al. 2006). The module integrates a tree of steps that contains metacognitive and emotional data. The system executes the example-tracing algorithm to traverse the graph, selecting the nodes at each step according to the input of the student. The types of available steps (nodes in Fig. 2) are initial step (P.I.), error step (P.E.), optimal step (P.O.), sub-optimal step (P.S.O.), final optimal step (P.F.O.), and final sub-optimal step (P.F.S.).
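The traversal just described can be sketched roughly as follows. The step graph, the student inputs, and the feedback strings below are invented for illustration; the paper does not publish its internal graph format, only the step labels (P.I., P.E., P.O., P.S.O., P.F.O., P.F.S.).

```python
# Hypothetical sketch of example-tracing traversal. Node labels mirror the
# paper's step types; the transitions and messages are our own assumptions.
STEP_GRAPH = {
    "P.I.": {"x = 0": ("P.O.", "Good start."),
             "x == 0": ("P.E.", "Assignment uses =, not ==.")},
    "P.O.": {"x += 1": ("P.F.O.", "Optimal solution reached."),
             "x = x + 1": ("P.F.S.", "Correct, but x += 1 is shorter.")},
    "P.E.": {},  # in the real tool, error steps return to the same prompt
}

def trace_step(current, student_input):
    """Follow the student's input through the step graph.

    Returns the next step label and a feedback message, or stays on the
    current step with an error message when the input is not recognized.
    """
    transitions = STEP_GRAPH.get(current, {})
    if student_input in transitions:
        return transitions[student_input]
    return current, "Input not recognized; try again."
```

For example, `trace_step("P.I.", "x = 0")` advances to the optimal step, while an unexpected answer leaves the student at the current node with corrective feedback.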

Web User Interface Generator


This module contains five components that are responsible for creating web user interfaces at run time. Each component has a different task depending on the kind of interface to be shown. One component (Exercises) generates a structure containing a question with multiple-choice answers that are presented as buttons. A second component (Pedagogical Tutor) creates a visual representation of a pedagogical agent that can show different types of texts and facial expressions; currently, it has four facial expressions (delighted, surprised, empathetic, and skeptical) and two types of texts (metacognitive and emotional). A third component (Resources) builds a visual representation of help items for the user, showing diverse items such as videos, code, explanations, and images. A fourth component (Course Content) shows the content of a course and informs the student of his/her progress by showing covered and uncovered topics; the visualization is a collapsible menu divided into main topics that contain 10 lessons each. The last component of this layer (Recommendations) shows users a menu of exercise recommendations. This menu contains a set of buttons representing top-rated exercises; it differs for every user because ratings vary among users. The system shows the exercise when the user selects a button. Fig. 3 shows an example of recommendations provided by the system to help the student.

Fig 3. Exercise recommendations for users when they visit the help menu

Login
Facebook login is a third-party component that avoids the need to implement a custom login. The system login component connects the Presentation layer with the Tutoring layer. Its responsibility is to move information between layers to register the progress of the student. This information includes metacognitive and emotional data, advances in the course content, test scores, and exercise and help-item ratings.

5. Implementation and Integration of Components

In this section we present in detail the implementation of the components related to affect and intelligence in the Tutoring System. The components are organized into two large layers of the software architecture, called the Tutoring layer and the Intelligent layer.

5.1 Tutoring Layer


This layer contains all the modules related to the Intelligent Tutoring System, which are described below.

ITS Module
This module was implemented using the problem-solving approach, wherein students learn by following a clear structure and using different solutions. The solutions are presented repeatedly until the student gains a complete understanding of the problem. There are three strategies for solving problems. The first strategy evaluates theoretical concepts using exercises with true-false options. The second strategy shows programming code that represents a partial or complete program and asks the student to analyze it and choose the output. The third strategy allows the creation of more complex exercises that include a finite number of steps to reach the solution.

Domain Management. This component contains the expert knowledge that the student wants to learn and practice. Knowledge representation is implemented with knowledge space theory (Doignon & Falmagne 2012). The expert model represents six basic skills that students must master, modeled as a graph representing the knowledge space. These skills are introduction to Java, variables and calculations, selection, iteration, methods, and arrays.
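The knowledge-space idea can be illustrated with a small prerequisite graph over the six skills. The prerequisite edges below are our own assumption for the sake of the example; the paper does not publish the actual structure of its knowledge space.

```python
# Illustrative knowledge-space sketch: a skill becomes available once all of
# its prerequisites are mastered. The edges are hypothetical.
PREREQUISITES = {
    "introduction to Java": set(),
    "variables and calculations": {"introduction to Java"},
    "selection": {"variables and calculations"},
    "iteration": {"variables and calculations"},
    "methods": {"selection", "iteration"},
    "arrays": {"iteration"},
}

def available_skills(mastered):
    """Return, sorted, the skills the student may study next."""
    return sorted(
        skill for skill, prereqs in PREREQUISITES.items()
        if skill not in mastered and prereqs <= set(mastered)
    )
```

With an empty mastered set, only "introduction to Java" is available; each mastered skill then unlocks its successors in the graph.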

Student Management. This component stores the cognitive and emotional information of the student. The stored information comprises Current Emotion (the student's current emotion when answering a Java programming exercise), Previous Emotion (the student's previous emotion), Global Student Skill (the proficiency of the student in the course), and Quality of Current Answer (the number of errors that the student made in the exercise).

Recommender System Module


This module manages part of the adaptation of the system. The module adapts its content individually according to student preferences, which are set by the students when they rate the elements they use. Recommendations change for each user according to the given ratings; each rating represents the student's preference (Ricci et al. 2011). The component implements a collaborative filtering (CF) approach for a learning environment on the Web, where it is possible to receive ratings from different students on different platforms (PC or mobile devices). Every user can rate resources after using them, and these ratings affect predictions or recommendations; this type of rating is known as explicit feedback (Koren & Bell, 2011). To generate a representative recommendation, the system predicts the rating of an element using ratings from other users with similar past ratings of other elements (Desrosiers & Karypis, 2011). Next, we explain the implementation of the recommender system.

The recommender system follows a general four-step algorithm to create recommendation items. The first step is collecting preferences from a database and saving them into a file. The file contains a set of rating values for every exercise and help item solved or used by the student, with the key properties needed to build the recommendations: student identification, item (exercise or help) identification, and the rating. The second step is finding similar users by measuring the similarity among all users in the database; to do this, we use the Pearson correlation. The correlation coefficient measures how well two sets of data fit on a straight line, called the best-fit line because it tries to come as close to all the items as possible. The third step is ranking users: all calculated similarities are saved in a data matrix, together with the top similarities for every user (with respect to the other users). For practical reasons, we only use the top five rankings for each user. Finally, the fourth step is to generate the recommendation items.
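The second step, measuring user similarity, can be sketched with a plain Pearson correlation computed over the items that both users have rated. The function below is a generic sketch, not the system's actual code; the dictionary format (item name to rating) is our assumption.

```python
# Pearson correlation between two users' ratings, restricted to co-rated
# items. Returns 0.0 when there is no overlap or no variance.
from math import sqrt

def pearson_similarity(ratings_a, ratings_b):
    """ratings_a, ratings_b: dicts mapping item id -> numeric rating."""
    common = set(ratings_a) & set(ratings_b)
    n = len(common)
    if n == 0:
        return 0.0
    sum_a = sum(ratings_a[i] for i in common)
    sum_b = sum(ratings_b[i] for i in common)
    sum_a2 = sum(ratings_a[i] ** 2 for i in common)
    sum_b2 = sum(ratings_b[i] ** 2 for i in common)
    sum_ab = sum(ratings_a[i] * ratings_b[i] for i in common)
    num = sum_ab - (sum_a * sum_b / n)
    den = sqrt((sum_a2 - sum_a ** 2 / n) * (sum_b2 - sum_b ** 2 / n))
    return num / den if den else 0.0
```

Identical rating vectors yield a similarity of 1.0 and perfectly opposed ones yield -1.0, which is what makes the coefficient usable as a neighbor weight.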
An example of the recommender system process for ranking users (third step) is as follows:

a) Ratings from the top five users are extracted from the database; these values are shown in columns Ex 1, Ex 2, and Ex 3 in Table 3.
b) Every rating is multiplied by the similarity value of the user; these values are shown in columns S.xEx 1, S.xEx 2, and S.xEx 3 in Table 3.
c) The Total is computed by adding all S.x fields (see the Total row in Table 3).
d) Exercises with more ratings accumulate a larger total. To normalize for this (partially rated exercises), the similarities of all users who rated the exercise are added; the result is written in the row called Sim. Sum in Table 3.
e) Every value in the Total row is divided by the corresponding value in the Sim. Sum row; the result is written in the row Total/Sim. Sum. This value represents the prediction for the user.

Table 3. Example of rating for one person as demonstration.

Person           Similarity   Ex 1   S.xEx 1   Ex 2   S.xEx 2   Ex 3   S.xEx 3
Person 1         0.98         3      2.94      1      0.98      3      2.94
Person 2         0.75         5      3.75      2      1.5       0
Person 3         0.23         4      0.92      5      1.15      0
Person 4         0.9          2      1.8       0                3      2.7
Person 5         0.12         5      0.6       3      0.36      1      0.12
Total                                10.01            3.99             5.76
Sim. Sum                             2.98             2.08             2
Total/Sim. Sum                       3.35             1.91             2.88
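The weighted prediction of Table 3 can be reproduced with a few lines of code. The data below are copied from the table (ratings shown as 0 in the table denote unrated items and are simply omitted here); the function names are ours.

```python
# Sketch of the similarity-weighted prediction (Total / Sim. Sum in Table 3).
NEIGHBOURS = [
    # (similarity, {exercise: rating}); unrated exercises are omitted
    (0.98, {"Ex 1": 3, "Ex 2": 1, "Ex 3": 3}),
    (0.75, {"Ex 1": 5, "Ex 2": 2}),
    (0.23, {"Ex 1": 4, "Ex 2": 5}),
    (0.90, {"Ex 1": 2, "Ex 3": 3}),
    (0.12, {"Ex 1": 5, "Ex 2": 3, "Ex 3": 1}),
]

def predict(exercise):
    """Weighted sum of neighbour ratings, normalized by the summed
    similarity of the neighbours who actually rated the exercise."""
    total = sum(sim * ratings[exercise]
                for sim, ratings in NEIGHBOURS if exercise in ratings)
    sim_sum = sum(sim for sim, ratings in NEIGHBOURS if exercise in ratings)
    return total / sim_sum
```

Running `predict` on each exercise gives 10.01/2.98, 3.99/2.08, and 5.76/2, matching the Total/Sim. Sum row of Table 3 up to rounding.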

5.2 Intelligent Layer


This layer contains those components responsible for processing and making decisions with
data generated by the system. The components inside this layer execute intelligent actions
in order to perform tutoring activities. This layer receives requests from the Tutoring Layer
and collects data from the Web Content Layer.

Fuzzy Logic Module


This module contains a fuzzy-logic system that implements the reasoning and inference of the learning (expert) system. The fuzzy rules were built considering student emotions and other pedagogical aspects such as the quality of answers or global ability; in total there are 495 rules. The fuzzy sets and fuzzy rules were written in the Fuzzy Control Language (FCL), and the implementation was supported by the JFuzzyLogic library (Cingolani & Alcalá-Fdez, 2013). Emotional values are extracted from the Facial Emotion Recognition and Sentiment Text Recognition modules (see Fig. 1); the values are produced from photographs of students (facial expressions) taken from the ILE interface and from text dialogs entered by students. The cognitive values are generated from the completed tasks (Java exercises) and the incorrect/correct answers in the ILE. The tutor produces output variables, which are reactions and actions; some of the reactions are positive/neutral/negative feedback messages about the progress of the student. The messages are delivered to students by a pedagogical agent, which shows different facial expressions and dialogues to communicate with the student. There are three types of answers: feedback (messages with a positive, negative, or neutral perspective on the student's progress during the course); empathic and emotional responses (phrases received by the student depending on his/her current emotional state); and intervention (whether the Pedagogical Agent must intervene with the student or not).

The linguistic or fuzzy variables were established by two teachers with expertise in Java. The variables represent states: cognitive states are represented by the values bad or good, while for emotional states a linguistic variable represents an emotion (frustration, boredom, engagement, and excitement).
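A drastically simplified sketch of this kind of fuzzy inference is shown below. It assumes triangular membership functions and just two invented rules over two inputs (answer quality and engagement, both scaled to 0..1), whereas the real system evaluates 495 FCL rules through JFuzzyLogic; everything here is illustrative.

```python
# Toy fuzzy inference: fuzzify two inputs, fire two hypothetical rules,
# and defuzzify with a weighted average.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def feedback_polarity(answer_quality, engagement):
    """Two illustrative rules:
    IF answer is good AND student is engaged THEN feedback is positive (1.0)
    IF answer is bad THEN feedback is negative (0.0)
    Output near 1 means positive feedback; 0.5 is neutral."""
    good = triangular(answer_quality, 0.4, 1.0, 1.6)
    bad = triangular(answer_quality, -0.6, 0.0, 0.6)
    engaged = triangular(engagement, 0.4, 1.0, 1.6)
    w_pos = min(good, engaged)  # fuzzy AND as minimum
    w_neg = bad
    total = w_pos + w_neg
    return 0.5 if total == 0 else (w_pos * 1.0 + w_neg * 0.0) / total
```

A perfect answer from an engaged student yields fully positive feedback, a wrong answer yields negative feedback, and when neither rule fires the sketch falls back to a neutral 0.5.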

Sentiment Text Recognition

In Java Sensei, we used sentiment text analysis to determine the quality of the programming exercises based on the opinions of the students. With this information, the instructor is able to learn how to improve the programming exercises based on the students' opinions. To perform sentiment analysis, it is necessary to have a data corpus labeled with at least two categories: positive and negative. In the case of Java Sensei, the TASS corpus (Liu, 2012) is used. Fig. 4 shows the class diagram of the SentiTextSensei module, which is responsible for performing the sentiment analysis in the system.
Fig. 4. Class Diagram for module SentiTextSensei.

The sentiment text recognizer (the Sentiment Text Recognition component in Fig. 1) was
implemented after cleaning up the data in the corpus. The cleanup process has four steps:
(1) all text is converted to lowercase; (2) punctuation is removed; (3) stop words are
removed; and finally (4) emoticons are translated to an equivalent text. Class
CorpusProcessing is responsible for performing these tasks through the methods
loadCorpus(), cleanCorpus(), and processCorpus(), and it generates a new data file or
corpus. This file is used as input for class TrainCorpus, which converts the texts of the
corpus into a series of numbers by extracting their most relevant features with method
featureExtractor(). With method setClassifier(), a Naive Bayes classification algorithm is
selected, and method trainModel() trains it on the corpus and generates a new file with the
trained model. We used a Naive Bayes classifier because it is the simplest and most widely
used classifier. As mentioned by Liu (2012), a Naive Bayes classification model computes
the posterior probability of a class based on the distribution of the words in the
document. The model works with a bag-of-words (BoW) feature representation, which ignores
the position of each word in the document, and uses Bayes' theorem to predict the
probability that a given feature set belongs to a particular label. Finally, class
SentimentPredictor receives the trained model and the text to be evaluated; with method
predictor(), the sentiment analysis is performed and the polarity of the text (positive or
negative) is obtained.
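The module itself is written in Java (Fig. 4); purely as an illustration, the four cleanup steps and a multinomial Naive Bayes train/predict cycle can be sketched in Python. The stop-word list, emoticon table, and toy opinions below are invented, not taken from the TASS corpus.

```python
import math
import re
import string

STOPWORDS = {"the", "is", "a", "this"}        # toy stop-word list (assumption)
EMOTICONS = {":)": "happy", ":(": "sad"}      # toy emoticon mapping (assumption)

def clean(text):
    """Cleanup steps (1)-(4); emoticons are mapped first so that punctuation
    removal does not destroy them."""
    for emo, word in EMOTICONS.items():
        text = text.replace(emo, " " + word + " ")
    text = text.lower()
    text = re.sub(f"[{re.escape(string.punctuation)}]", " ", text)
    return [w for w in text.split() if w not in STOPWORDS]

class NaiveBayes:
    """Multinomial Naive Bayes over bag-of-words counts, Laplace smoothing."""
    def train(self, corpus):  # corpus: list of (text, label) pairs
        self.counts, self.docs, self.vocab = {}, {}, set()
        for text, label in corpus:
            self.docs[label] = self.docs.get(label, 0) + 1
            bag = self.counts.setdefault(label, {})
            for w in clean(text):
                bag[w] = bag.get(w, 0) + 1
                self.vocab.add(w)

    def predict(self, text):
        n = sum(self.docs.values())
        best, best_lp = None, -math.inf
        for label, bag in self.counts.items():
            total = sum(bag.values())
            lp = math.log(self.docs[label] / n)     # class prior
            for w in clean(text):                   # smoothed word likelihoods
                lp += math.log((bag.get(w, 0) + 1) / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

corpus = [
    ("this exercise is great :)", "positive"),
    ("I loved the exercise, very clear", "positive"),
    ("this exercise is confusing :(", "negative"),
    ("bad exercise, I hated it", "negative"),
]
nb = NaiveBayes()
nb.train(corpus)
```

After training, `nb.predict("great exercise :)")` returns the positive label: the posterior combines the class prior with the smoothed per-word likelihoods, exactly the BoW scheme described above.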

Facial Emotion Recognition Module

In order to recognize facial expressions in the learning environment, we decided to build
our own facial expression database or corpus. The database should contain faces expressing
emotions directly related to learning, so every face in the database is labeled with one of
the learning-centered emotions (frustration, boredom, engagement, and excitement). This set
of emotions has been shown to be present during the process of deep learning (Baker et al.
2010; Bosch et al. 2015). To accomplish this task, we used an electroencephalography (EEG)
device called EMOTIV Epoc. The process of creating the database is shown in Fig. 5. The
system captured the student's emotions while the student was solving a problem by coding a
Java program in a Java environment. In this process, a webcam takes a photograph of the
student every 5 seconds (1) while the EMOTIV Epoc device captures brain activity (2). Every
student photograph is labeled with the emotion obtained from the EMOTIV Epoc device (3),
and both objects (photograph and emotion) are saved into the facial expression database
(4). A total of 7,019 photographs were stored in the database with this process.

Fig. 5. Methodology to create the face expression database.

Fig. 6 shows the class diagram of the emotion recognition module. The class diagram
reflects three activities of the system: corpus creation, classifier training, and emotion
recognition. The relationship between processes and classes is explained below.

As previously stated, in the corpus creation process the student photographs taken by a
webcam are associated with an emotion obtained from the EMOTIV Epoc device. However, to
save the results to a database, we need a data structure that can hold them. The class
CorpusDBObject represents this object, containing the emotion and the photograph as its
attributes. The class CorpusCollection administers a set of objects of type CorpusDBObject,
represented by the attribute listPhotos. The method addCorpusObject() adds objects of type
CorpusDBObject to the set, and the method saveDatabase() saves the whole set to the
database.
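The classes above are Java classes in the actual system; mirroring the class diagram, a minimal Python sketch of the corpus data structures might look as follows. The field types and the plain list standing in for the MongoDB collection are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class CorpusDBObject:
    """One labeled sample: a photograph paired with the EEG-derived emotion."""
    photo: bytes
    emotion: str  # frustration | boredom | engagement | excitement

@dataclass
class CorpusCollection:
    """Administrator of CorpusDBObject instances (mirrors listPhotos)."""
    list_photos: list = field(default_factory=list)

    def add_corpus_object(self, obj: CorpusDBObject):
        self.list_photos.append(obj)

    def save_database(self, db):
        # db is a plain list standing in for the real MongoDB collection
        for obj in self.list_photos:
            db.append({"photo": obj.photo, "emotion": obj.emotion})
```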

Fig. 6. Class Diagram for module FacialEmotionRecognition.

The training process is defined by the interface ITraining, which contains the following
methods: loadData() to load the labeled photographs; configure() to set the classifier
configuration parameters; train() to run the training process; testScore() to evaluate the
classifier and set a reliability score; and saveModel() to save the trained model to disk.
The class TrainingSVM implements ITraining and uses a support vector machine (SVM)
algorithm. The attribute data represents the loaded photographs, classifier is the
classifier model, and extractor obtains facial features such as eyes, forehead, nose, and
eyebrows.

The process to recognize emotions consists of two important steps: feature extraction and
class prediction. The class Recognizer contains the method getEmotion(), which receives a
photograph and returns the current emotion as a result; the attributes classifier and
extractor are needed to perform the recognition process. The class FeatureExtractor has two
attributes, which represent the photograph and the feature extraction method; the method
getFeatures() uses both attributes to provide the features of the photograph. The class
ClassifierSVM performs the prediction of the emotion with the method predict() and loads
the model created during training with the method loadModel().

Our emotion recognizer was implemented based on the local binary pattern (LBP) method
(Happy & Routray, 2015). In this technique, the recognizer identifies emotions by dividing
the user's face into active facial patches. The classification algorithm was implemented
with a support vector machine (SVM); we used scikit-learn (Pedregosa et al. 2011) as a
support tool to create the SVM.
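As a rough illustration of the LBP operator itself (the real system extracts LBP features from active facial patches and feeds them to a scikit-learn SVM), a minimal pure-Python sketch could be the following; the 8-neighbour ordering is one common convention, not necessarily the one used in the system.

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code for pixel (r, c).

    Neighbours are read clockwise from the top-left corner; a neighbour at
    least as bright as the centre pixel contributes a 1 bit.
    """
    center = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                  img[r][c+1],   img[r+1][c+1], img[r+1][c],
                  img[r+1][c-1], img[r][c-1]]
    return sum((1 << i) for i, p in enumerate(neighbours) if p >= center)

def lbp_histogram(patch):
    """256-bin histogram of LBP codes over the interior of an image patch;
    such histograms, one per facial patch, form the SVM feature vector."""
    hist = [0] * 256
    for r in range(1, len(patch) - 1):
        for c in range(1, len(patch[0]) - 1):
            hist[lbp_code(patch, r, c)] += 1
    return hist
```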

5.3 Web Content Layer

This layer contains four repositories. The Sentiment Text Corpus contains the TASS corpus
described above. The Fuzzy Logic Rules repository (FCL files) contains rules defined in the
fuzzy control language and is used repeatedly by the Intelligent layer. The Resources
repository stores files that are used by other layers. The Exercises repository contains
files needed to build interfaces and provide recommendations in both the Presentation layer
and the Tutoring layer.

5.4 Data Layer

The system has a MongoDB database that stores student information; the Domain Management
Module manages the read and update requests that the other components issue on these data.

6. Evaluation of Technology Acceptance

We selected the Technology Acceptance Model (TAM) (Davis, 1989) to investigate the
impact of technology on user behavior. TAM is one of the most popular models to predict
technology acceptance in e-learning studies (Teo, 2009).

6.1 Research Model

Our research model (see Fig. 7), based on TAM (Davis, 1989), has four factors: perceived
usefulness (PU), perceived ease of use (PEU), perceived enjoyment (PE), and attitude toward
using (ATU), used to evaluate their impact on the intention to use (ITU) the Java Sensei
system. Perceived ease of use (PEU), perceived enjoyment (PE), and attitude toward using
(ATU) are the usability factors according to Teo (2009). Perceived usefulness is defined as
the degree to which a student believes that using the Java Sensei technology will help
him/her learn to program more efficiently. Perceived enjoyment is considered a type of
intrinsic motivation and is defined as "the extent to which the activity of using the
computer is perceived to be enjoyable in its own right, apart from any performance
consequences that may be anticipated" (Yu et al., 2017). Perceived ease of use refers to
the degree to which a person believes that using a particular technology will be free of
effort (Teo, 2009).

Fig. 7. Research model based on TAM.

PEU and PU are two fundamental constructs in TAM and they influence ATU, which
affects ITU. We discussed the relationship between Java Sensei and Perceived Usefulness
(PU), Perceived Ease of Use (PEU), Perceived Enjoyment (PE), Attitude Toward Using
(ATU), and Intention to Use (ITU). From this TAM model (shown in Fig. 7) we formulated
the following hypotheses:

H1. Perceived ease of use (PEU) will positively affect perceived usefulness (PU).

H2. Perceived ease of use (PEU) will positively affect perceived enjoyment (PE).

H3. Perceived ease of use (PEU) will positively affect attitude toward using (ATU).

H4. Perceived enjoyment (PE) will positively affect attitude toward using (ATU).

H5. Perceived usefulness (PU) will positively affect attitude toward using (ATU).

H6. Perceived usefulness (PU) will positively affect intention to use (ITU).

H7. Perceived enjoyment (PE) will positively affect intention to use (ITU).

H8. Attitude toward using (ATU) will positively affect intention to use (ITU).

The constructs in this study were measured with items rated on a 7-point Likert-type scale.
The scores used in all items ranged from 1 (strongly disagree) to 7 (strongly agree).

6.2 Method

Participants.

We asked 43 engineering students to complete a preliminary questionnaire of 10 items. The
sample comprised 14 women and 29 men, all of Mexican origin. Participants' ages ranged from
19 to 22 years, with an average age of 21 years. The privacy of student information (in
this case, student pictures) is addressed through a notice to the user (written in
Spanish). This notice informs the students that the photographs can be used only for the
purposes of this research and will not be used for profit or published outside the
laboratories. Moreover, at the beginning, the system asks for permission to activate the
Web camera. After being asked for permission to use the camera, every student accepted
that the camera would take pictures of them and save them on the system server.

Procedure.
The study was divided into two sessions. In the first session, the students used the tool
under teacher supervision; this session lasted two hours and was carried out in a computer
science lab at the Instituto Tecnológico de Culiacán. After this session, students were
able to log in and use the tool without supervision for two days. On the last day, the
students completed the usability survey in about 10 minutes. The questionnaire instrument
contains 10 questions and was designed to measure the five factors in the intention to use
Java Sensei as a learning environment.

6.3 Results

The questionnaire statements of the survey and their descriptive statistics are presented
in Table 4. All mean (M) values lie between 4.88 and 5.81, and the standard deviations (SD)
range from 0.93 to 1.47.
Table 4. Results of Survey with mean (M) scores and standard deviations (SD).

Questionnaire statements M SD
Perceived usefulness (PU)
PU1. The use of Java Sensei can help me improve my academic 5.53 1.20
performance in Java programming courses.
PU2. Java Sensei is useful for learning the basics of programming in 5.81 0.93
Java.
Perceived ease of use (PEU)
PEU1. The Java Sensei user interface is easy to use. 5.81 1.24
PEU2. Interacting with Java Sensei is easy because it does not require 5.40 1.45
much mental effort.
Attitude toward using (ATU) Java Sensei
ATU1. Using Java Sensei in the classroom is a good idea. 5.53 1.24
ATU2. Learning Java with Java Sensei is more interesting than a 4.88 1.47
traditional class.
Perceived enjoyment (PE)
PE1. I enjoyed learning to program in Java with Java Sensei. 5.26 1.27
PE2. It was fun learning Java programming with Java Sensei. 5.42 1.22
Intention to use (IU)
IU1. I would like Java Sensei to be applied with other programming 5.77 1.19
languages.
IU2. I would recommend Java Sensei to all my friends. 5.51 1.30

A reliability analysis was conducted to confirm the internal validity and consistency of
the items used for each variable. We calculated Cronbach's alpha for the statements
belonging to each construct in the research model; Cronbach's alpha values from 0.6 to 0.7
are considered the lower limit of acceptability. Table 5 shows the reliability of the
measurement scales. The Cronbach's alpha values obtained for perceived enjoyment (PE) and
intention to use (ITU) are at a very satisfactory and a satisfactory level, respectively,
as shown in Table 5, whereas the values for perceived usefulness (PU), perceived ease of
use (PEU), and attitude toward using (ATU) Java Sensei lie just within the limit of
acceptability.

Table 5. Cronbach’s alpha values (Reliability).

Variable Cronbach’s alpha


Perceived usefulness (PU) 0.66
Perceived ease of use (PEU) 0.65
Perceived enjoyment (PE) 0.92
Attitude toward using Java Sensei (ATU) 0.67
Intention to use (ITU) 0.77
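Cronbach's alpha can be computed directly from the item-score columns; the following sketch uses toy data, not the actual survey responses.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    items: list of k lists, each holding the n respondents' scores for one item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Two identical item columns give alpha = 1.0 (perfect internal consistency); as the items diverge, alpha falls toward the 0.6-0.7 acceptability threshold mentioned above.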
From the results obtained, we can see that the PE factor is excellent, so we can infer that
the students enjoyed using the Java Sensei tool and that the intention to use the tool
ranges from acceptable to good. On the other hand, a moderately acceptable relationship can
be observed among the PEU, PU, and ATU factors, suggesting that the graphical user
interface (GUI) of the system should be improved to enhance its ease of use.

To verify all hypotheses (H1 to H8), a regression analysis was applied to study the
relationship between pairs of variables defined in our research model. The test was
conducted using a two-sided alpha level of 0.05. Table 6 presents the results of the
regression analysis.
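Each hypothesis test regresses one construct on another; the slope and R² of such a simple regression can be computed as follows. This is an illustrative implementation with toy data, not the statistical package used in the study.

```python
def simple_regression(x, y):
    """Ordinary least-squares fit y = a + b*x; returns the slope b and R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx
    r2 = sxy ** 2 / (sxx * syy)  # squared Pearson correlation
    return b, r2
```

A perfectly linear pair of variables yields R² = 1, while the weakly related pairs in Table 6 yield the low R² values on which the rejection of H1, H3, H5, and H6 is based.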

Table 6. Summary of hypotheses testing.


Hypothesis Dependent variable Independent variable Coef R2 p-value
*H1 Perceived usefulness Perceived ease of use 0.30 0.28 0.0001
H2 Perceived enjoyment Perceived ease of use 0.51 0.50 0.0000001
*H3 Attitude toward Java Sensei Perceived ease of use 0.32 0.30 0.0001
H4 Attitude toward Java Sensei Perceived enjoyment 0.68 0.67 0.00
*H5 Attitude toward Java Sensei Perceived usefulness 0.46 0.44 0.000001
*H6 Intention to use Perceived usefulness 0.39 0.37 0.000009
H7 Intention to use Perceived enjoyment 0.58 0.57 0.00
H8 Intention to use Attitude toward Java Sensei 0.64 0.6 0.00

Fig. 8 shows that hypotheses H1, H3, H5, and H6 are rejected, since their low R2 values
indicate that the correlation between the dependent and independent variables is weak. That
is, no direct relationship was detected between perceived ease of use and perceived
usefulness, and perceived ease of use shows no relation to the attitude toward using the
tool. In addition, the perceived usefulness of the tool does not influence the intention to
use it.

On the other hand, from the statistical regression analysis we found that students enjoy
using the tool (PE) and therefore we can determine that it influences students to continue
using the tutor (ITU).
Fig. 8. Results of regression analysis.

7. Evaluation of Academic Performance

The second assessment addressed the learning gain obtained when a student uses the tool.
Our research focuses on answering the following question: Is there a significant difference
in the average scores of students when using the learning environment Java Sensei? Our null
hypothesis (H0) is that using Java Sensei does not improve students' learning gain; the
alternative hypothesis (H1) is that the tool improves students' learning gain.

7.1 Method

Participants.

Seventy-six undergraduate computer systems students of the Instituto Tecnológico de
Culiacán, all with the same academic background, participated in this study. From this
sample, 17 were female (22%) and 59 male (78%), with ages ranging from 20 to 22 years. Four
male students were excluded from the study because they refused to participate after
completing the pretest. The remaining 72 students were randomly divided into two groups of
36 students: one served as the control group (using traditional learning material) and the
other as the experimental group (using the learning environment Java Sensei). All students
granted permission for the camera to be used with them.

Procedure.
Two tests were designed to evaluate student knowledge before and after using Java Sensei.
The study was divided into three main phases. In the first phase, all students from both
groups answered the same pretest containing 14 multiple-choice questions; most of the
questions involved interpreting code and finding the output it produces. In the second
phase, the students participated in a learning session covering three topics of Java
programming (basic principles of Java, variables, and selection statements). In this
learning session, the experimental group used the Java Sensei tool and the control group
attended lecture lessons in a traditional classroom. The session with the experimental tool
lasted two hours and was carried out in a computer science laboratory at the Instituto
Tecnológico de Culiacán. This process was guided by computer and verbal instructions given
by the programming teachers, who were available to answer any questions and to solve any
problems. After this session, students were able to log in and use the tool without
supervision for two days. The session with the control group was conducted by the same
teachers and lasted 8 hours (eight one-hour lessons). In the third phase, all students from
both groups answered a posttest (the same test for both groups); the pretest and posttest
were different tests. Grades in both the pretest and posttest ranged from 0 to 100. Because
we want to measure whether there is a meaningful difference in the students' learning
process, we calculated each student's learning gain from the pretest and posttest scores:
the learning gain is obtained by subtracting the pretest score from the posttest score in
each group (control and experimental).
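The learning-gain measure described above is a simple per-student difference; as a sketch (with invented scores):

```python
def learning_gains(pretest, posttest):
    """Per-student learning gain (posttest minus pretest), with the group's
    total and mean gain."""
    gains = [post - pre for pre, post in zip(pretest, posttest)]
    return gains, sum(gains), sum(gains) / len(gains)
```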
7.2 Results

Fig. 9 shows a bar chart representing the frequency histogram of the students' grades for
both groups. The X axis shows the students' scores, grouped in intervals of 10 points, and
the Y axis shows the frequency of students within each interval; the individual frequency
of each interval is shown within each bar. We can clearly observe that the scores of the
control group are greater than those of the experimental group, which is corroborated by
the descriptive statistics of both groups in Table 7.

[Bar chart omitted: frequency of pretest scores (X axis: score intervals 20-100; Y axis: frequency) for the Control and Experimental groups.]

Fig. 9. Histogram of pretest results.

Table 7 shows that the mean (M) of the control group is clearly superior to that of the
experimental group, with a lower standard deviation (SD) indicating more homogeneity. We
also observe that the median (Mdn) and mode of the control group are superior to those of
the experimental group, reflecting that its scores are distributed toward higher values.

Table 7. Pretest results.


Group M Mdn Mode SD
Experimental 59.694 57 57 17.472
Control 67.861 71 64 14.206

The results of the posttest are shown in Fig. 10. The graph shows that the control group's
scores cluster mostly around the 80 interval, whereas the distribution of the experimental
group's grades is skewed toward the 80, 90, and 100 intervals.
[Bar chart omitted: frequency of posttest scores (X axis: score intervals 40-100; Y axis: frequency) for the Control and Experimental groups.]

Fig. 10. Histogram of posttest results.

The descriptive statistics are presented in Table 8, which shows that both groups improved
their scores. The experimental group increased on average 16.96 points (posttest mean 76.66
vs. pretest mean 59.694). The control group obtained a mean of 72.05 in the posttest, an
improvement of 4.189 points; the experimental group's improvement is thus a little more
than four times that of the control group.

Table 8. Posttest results.


Group M Mdn Mode SD
Experimental 76.66 79 86 12.80
Control 72.05 71 71 13.17

Finally, the learning gain is evaluated. The results are shown in Table 9.

Table 9. Learning gains in both groups.


Group Total Mean SD Minimum Maximum
value value
Experimental 719 19.97 17.21 -14 65
Control 150 4.16 7.95 -14 28
It can be clearly seen that the total learning gain of the experimental group is much
higher than that of the control group, and the mean and maximum values show a higher gain
in the experimental group. However, these data are insufficient to determine whether there
is a significant difference between the experimental group and the control group. For this
reason, we performed a two-tailed Student's t-test for two independent groups. The values
obtained are shown in Table 10.

Table 10. t test results.


SE t-stat df p-value t-crit lower upper sig
Two Tail 3.22133494 5.04670988 68 3.5787E-06 1.99546893 9.82906907 22.6852166 yes

The test showed that there are significant differences between the two groups, with a
p-value < 0.05. With these results we conclude that the learning gain of the experimental
group is statistically higher than that of the control group; the significant improvement
in the learning gain of the experimental group is a result of the use of Java Sensei.

To support our results, we verified whether there is a significant difference between the
pretest and posttest results of the experimental group. Fig. 11 shows a bar chart
representing the frequency histograms of the students' grades; we observed that the best
results are in the posttest.

[Bar chart omitted: frequencies of the experimental group's pretest and posttest scores (X axis: score intervals 10-100; Y axis: frequency).]

Fig. 11. Student grades in the pretest and posttest.


To validate the scores of both tests, a two-tailed Student's t-test for paired samples was
calculated with an alpha value of 0.05. The results of the test are presented in Table 11,
which also contains the arithmetic mean, the standard deviation, and the t value.

Table 11. Results of pretest and posttest.


Groups Count M SD SE t df Cohen d Effect r
Posttest 36 76.6666667 12.8062485
Pretest 36 56.6944444 17.4729496
Difference 36 19.9722222 17.2137588 2.8689598 6.96148556 35 1.16024759 0.76200321

Using the results of the previous table we obtain the p-value to know if there are significant
differences in the means of the paired sample. The results of the test are shown in Table 12.

Table 12. Results of p-value.


p-value t-crit lower upper Sig
One Tail 2.142E-08 1.68957246 Yes
Two Tail 4.2839E-08 2.03010793 -25.7965203 -14.1479242 Yes

Table 11 shows a significant difference between the two arithmetic means, with a total
difference of 19.97. Table 12 indicates a p-value < 0.05, in fact very close to 0; we
therefore conclude that there are significant differences between the pretest scores and
the posttest scores, with the posttest having the higher arithmetic mean. With these
results, we reject the null hypothesis (H0) and accept the alternative hypothesis (H1):
using the Java Sensei tool improves students' learning gain.
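The paired-samples test and the Cohen's d reported in Table 11 follow the standard formulas; a minimal sketch (with toy data):

```python
import math

def paired_t_test(pre, post):
    """Paired-samples t statistic with Cohen's d for the pre/post differences.

    t = mean(diff) / (sd(diff) / sqrt(n)); d = mean(diff) / sd(diff).
    Returns (t, degrees of freedom, Cohen's d).
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in diffs) / (n - 1))
    se = sd / math.sqrt(n)      # standard error of the mean difference
    return mean / se, n - 1, mean / sd
```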

8. Conclusions and Future Work

A goal of this research was to design and implement a new Web 3.0-based Environment for
learning Java Programming. The core of this software system is an Intelligent Tutoring
System which is supported by advanced learning technologies such as a recommendation
system, a facial expression recognizer, a sentiment analyzer, and an authoring tool. The ITS
and support tools are embedded into a Web-based environment, forming an ILE for learning
the Java language. Data persistence was implemented with the NoSQL database MongoDB, given
the application's intensive use of the database for both reading and writing; this type of
database has proven to work well for such workloads (Wei-ping et al. 2011). Application
performance was addressed by creating a heavy client, which performs the photo-taking
process and sends only the necessary information to the server, thus avoiding server
overload.

Another goal of this work was to use the technology acceptance model (TAM) to find the
engineering students' perceptions of our Web-based environment for learning Java
programming, their intention to use the tool, and their attitude toward it. We found
different answers with respect to the acceptance of our tool. First, we found that the
students had a great sense of enjoyment and fun when learning Java with our tool; the
evaluation of this aspect (enjoyment), considered a fundamental motivation for the student
when using the tool, obtained the highest score. Another finding was that most of the
students intend to use the tool in the future; they would also recommend the tool to
friends and would like to have a similar tool for other languages (e.g., C++). With respect
to usefulness, defined as "the perception of improving academic performance when using the
tool" (Davis, 1989), ease of use, which is "the perception that the use of the tool is
effortless" (Davis, 1989), and attitude toward using the tool (the perception that the tool
is useful and easy to learn), the students' ratings were slightly below satisfactory.

A third goal was to evaluate the learning gains obtained when students use the tool. We
showed that 32 out of 36 students obtained passing grades (greater than 70) on the posttest
after using the Java Sensei tool, whereas only 16 of those students had a grade higher than
70 before using the tool. This means that after using the tool, twice as many students
obtained a passing grade, indicating an excellent impact of the tool on student learning.

There are various limitations of the present study that need to be addressed in future
work. One limitation is the small sample size, which makes the statistical results less
reliable than they would be with a larger sample. Another limitation is that the students
were sampled from a single university (Instituto Tecnológico de Culiacán) and from the
undergraduate level only, so the results might not generalize.

For future work, we need to conduct more evaluations with a much larger number of students
from different universities and different levels (high school, undergraduate, and graduate
students). We also need to further test the impact of using affect and sentiment
recognition, including gamification techniques (Kapp, 2012), in order to apply modern
methods of student motivation. In addition, the system should take into account that some
students do not grant permission for the camera to take photos, in which case the student's
emotional state can be obtained only by other means (e.g., text dialogues). In this
respect, we are working so that the sentiment analyzer can recognize the student's
emotional state through the text that the learner enters into the system.

References

Aleven, V., McLaren, B. M., Sewall, J., & Koedinger, K. R., 2006. The cognitive tutor
authoring tools (CTAT): preliminary evaluation of efficiency gains. In International
Conference on Intelligent Tutoring Systems (pp. 61-70). Springer Berlin Heidelberg.
Aleven, V., Mclaren, B. M., Sewall, J., & Koedinger, K. R., 2009. A new paradigm for
intelligent tutoring systems: Example-tracing tutors. International Journal of Artificial
Intelligence in Education, 19(2), 105-154.
Barassi, V., & Treré, E., 2012. Does Web 3.0 come after Web 2.0? Deconstructing
theoretical assumptions through practice. New media & society, 14(8), 1269-1285.
Baker, R. S., D'Mello, S. K., Rodrigo, M. M. T., & Graesser, A. C., 2010. Better to be
frustrated than bored: The incidence, persistence, and impact of learners’ cognitive–
affective states during interactions with three different computer-based learning
environments. International Journal of Human-Computer Studies, 68(4), 223-241.
Berners-Lee, T., Hendler, J., & Lassila, O., 2001. The semantic web. Scientific
American, 284(5), 28-37.
Bizer, C., Heath, T., & Berners-Lee, T., 2009. Linked data-the story so far. Semantic
Services, Interoperability and Web Applications: Emerging Concepts, 205-227.
Bosch, J., & Juristo, N., 2003. Designing software architectures for usability.
In Proceedings of the 25th International Conference on Software Engineering (pp. 757-
758). IEEE Computer Society.
Bosch, N., D'Mello, S., Baker, R., Ocumpaugh, J., Shute, V., Ventura, M., Wang, L. &
Zhao, W., 2015. Automatic detection of learning-centered affective states in the wild.
In Proceedings of the 20th international conference on intelligent user interfaces (pp.
379-388). ACM.
Bryman, A., 2015. Social research methods. Oxford University Press.
Butz, C. J., Hua, S., & Maguire, R. B., 2004. BITS: a Bayesian intelligent tutoring system
for computer programming. In Western Canadian Conference on Computing
Education.
Cabada, R. Z., Estrada, M. L. B., Hernández, F. G., & Bustillos, R. O., 2015. An affective
learning environment for java. In 2015 IEEE 15th International Conference on
Advanced Learning Technologies (pp. 350-354). IEEE.
Carter, E., & Blank, G. D., 2013. An Intelligent Tutoring System to Teach
Debugging. In International Conference on Artificial Intelligence in Education (pp.
872-875). Springer Berlin Heidelberg.
Cingolani, P., & Alcalá-Fdez, J., 2013. jFuzzyLogic: a java library to design fuzzy logic
controllers according to the standard for fuzzy control programming. International
Journal of Computational Intelligence Systems, 6(sup1), 61-75.
Davis, F. D., 1989. Perceived usefulness, perceived ease of use, and user acceptance of
information technology. MIS Quarterly, 319-340.
De, A., Saha, A., & Pal, M. C., 2015. A Human Facial Expression Recognition Model
Based on Eigen Face Approach. Procedia Computer Science, 45, 282-289.
Desrosiers, C., & Karypis, G., 2011. A comprehensive survey of neighborhood-based
recommendation methods. In Recommender systems handbook (pp. 107-144).
Springer US.
Dietze, S., Yu, H. Q., Giordano, D., Kaldoudi, E., Dovrolis, N., & Taibi, D., 2012. Linked
Education: interlinking educational Resources and the Web of Data. In Proceedings of
the 27th annual ACM symposium on applied computing (pp. 366-371). ACM.
D’Mello, S., Jackson, T., Craig, S., Morgan, B., Chipman, P., White, H., Person, N., Kort,
B., el Kaliouby, R., Picard, R. & Graesser, A., 2008. AutoTutor detects and responds to
learners' affective and cognitive states. In Workshop on emotional and cognitive issues
at the international conference on intelligent tutoring systems (pp. 306-308).
Doignon, J. P., & Falmagne, J. C., 2012. Knowledge spaces. Springer Science & Business
Media.
Dominic, M., Francis, S., & Pilomenraj, A., 2014. E-learning in Web 3.0. International
Journal of Modern Education and Computer Science, 6(2), 8.
Downes, S., 2005. E-learning 2.0. Elearn magazine, 2005(10), 1.
Gangwar, S., Shukla, S., & Arora, D., 2013. Human Emotion Recognition by Using Pattern
Recognition Network. Int. Journal of Engineering Research and Applications, 3(5),
535-539.
Happy, S. L., & Routray, A., 2015. Automatic facial expression recognition using features
of salient facial patches. IEEE transactions on Affective Computing, 6(1), 1-12.
Hartanto, B., & Reye, J., 2013. Incorporating anchored learning in a C# Intelligent
Tutoring System. In Doctoral Student Consortia-Proceedings of the 21st International
Conference on Computers in Education, ICCE 2013 (pp. 5-8). Asia-Pacific Society for
Computers in Education.
Heath, T., & Bizer, C., 2011. Linked data: Evolving the web into a global data
space. Synthesis lectures on the semantic web: theory and technology, 1(1), 1-136.
Hendler, J., 2009. Web 3.0 Emerging. Computer, 42(1), 111-113.
Hoc, J. M. (Ed.). (2014). Psychology of programming. Academic Press.
Holland, J., Mitrovic, A., & Martin B. (2009). J-Latte: a constraint-based tutor for java. In
17th International Conference on Computers in Education, Hong Kong, 142–146.
Hsiao, I. H., Sosnovsky, S., & Brusilovsky, P., 2009. Adaptive navigation support for
parameterized questions in object-oriented programming. In European Conference on
Technology Enhanced Learning (pp. 88-98). Springer Berlin Heidelberg.
Hussain, F., 2013. E-Learning 3.0 = E-Learning 2.0 + Web 3.0? Journal of Research &
Method in Education, 3(3), 39-47.
Jacobson, I., Booch, G., & Rumbaugh, J., 1998. The unified software development
process: The complete guide to the Unified Process from the original designers.
Addison-Wesley Object Technology Series. Addison-Wesley.
Kadge, S., Shaikh, M. B. F., Shaikh, F., & Jain, B., 2015. Intelligent Code Tutoring
System. International Journal of Global Technology Initiatives, 4(1), C35-C43.
Kapp, K. M., 2012. The gamification of learning and instruction: game-based methods and
strategies for training and education. John Wiley & Sons.
Khandait, S. P., Thool, R. C., & Khandait, P. D., 2012. Automatic facial feature extraction
and expression recognition based on neural network. International Journal of Advanced
Computer Science and Applications, 2(1), 113-118.
Koren, Y., & Bell, R., 2011. Advances in collaborative filtering. In Recommender systems
handbook (pp. 145-186). Springer US.
Kose, U., & Deperlioglu, O., 2012. Intelligent learning environments within blended
learning for ensuring effective c programming course. International Journal of
Artificial Intelligence & Applications, 3(1), 105.
Kurilovas, E., Kubilinskiene, S., & Dagiene, V., 2014. Web 3.0–Based personalisation of
learning objects in virtual learning environments. Computers in Human Behavior, 30,
654-662.
Lehman, B. A., Matthews, M., D'Mello, S. K., & Person, N., 2008. Understanding students’
affective states during learning. In Ninth International Conference on Intelligent
Tutoring Systems (ITS'08).
Liu, B., 2012. Sentiment analysis and opinion mining. Synthesis lectures on human
language technologies, 5(1), 1-167.
Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., & Byers, A. H.,
2011. Big data: The next frontier for innovation, competition, and productivity.
http://www.mckinsey.com/business-functions/business-technology/our-insights/big-
data-the-next-frontier-for-innovation, Retrieved September 29, 2016.
Morris, R. D., 2011. Web 3.0: Implications for online learning. TechTrends,55(1), 42-46.
Neve, P., Hunter, G., Livingston, D., & Orwell, J., 2012. NoobLab: an intelligent learning
environment for teaching programming. In Proceedings of the 2012 IEEE/WIC/ACM
International Joint Conferences on Web Intelligence and Intelligent Agent
Technology - Volume 03 (pp. 357-361). IEEE Computer Society.
Paredes-Valverde, M. A., Valencia-García, R., Rodríguez-García, M. Á., Colomo-Palacios,
R., & Alor-Hernández, G., 2015. A semantic-based approach for querying linked data
using natural language. Journal of Information Science, 0165551515616311.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., ... &
Vanderplas, J., 2011. Scikit-learn: Machine learning in Python. Journal of Machine
Learning Research, 12(Oct), 2825-2830.
Ricci, F., Rokach, L., & Shapira, B., 2011. Introduction to recommender systems
handbook (pp. 1-35). Springer US.
Richardson, W., 2005. The educator's guide to the read/write web. Educational
Leadership, 63(4), 24.
Romero, C., & Ventura, S., 2010. Educational data mining: a review of the state of the
art. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and
Reviews), 40(6), 601-618.
Rubens, N., Kaplan, D., & Okamoto, T., 2012. E-Learning 3.0: anyone, anywhere, anytime,
and AI. In International Conference on Web-Based Learning (pp. 171-180). Springer
Berlin Heidelberg.
Sariyanidi, E., Gunes, H., & Cavallaro, A., 2015. Automatic analysis of facial affect: A
survey of registration, representation, and recognition. IEEE transactions on pattern
analysis and machine intelligence, 37(6), 1113-1133.
Schafer, J. B., Frankowski, D., Herlocker, J., & Sen, S., 2007. Collaborative filtering
recommender systems. In The adaptive web (pp. 291-324). Springer Berlin Heidelberg.
Soloway, E., & Spohrer, J. C., 2013. Studying the novice programmer. Psychology Press.
Sykes, E. R., & Franek, F., 2003. An Intelligent Tutoring System Prototype for Learning to
Program Java. In Proceedings of the 3rd IEEE International Conference on Advanced
Learning Technologies (ICALT'03). IEEE.
Teo, T., 2009. Modelling technology acceptance in education: A study of pre-service
teachers. Computers & Education, 52(2), 302-312.
Vásquez-Ramírez, R., Alor-Hernández, G., & Rodríguez-González, A., 2014a. Athena:
A hybrid management system for multi-device educational content. Computer
Applications in Engineering Education, 22, 750-763. doi:10.1002/cae.21567.
Vásquez-Ramírez, R., Alor-Hernández, G., Sánchez-Ramírez, C., Guzmán-Luna, J.,
Zatarain-Cabada, R., & Barrón-Estrada, M. L., 2014b. AthenaTV: an authoring tool
of educational applications for TV using android-based interface design patterns. New
Review of Hypermedia and Multimedia, 20(3), 251-280.
Vásquez-Ramírez, R., Bustos-Lopez, M., Montes, A. J. H., Alor-Hernández, G., &
Sanchez-Ramirez, C., 2016. An Open Cloud-Based Platform for Multi-device
Educational Software Generation. In Trends and Applications in Software
Engineering (pp. 249-258). Springer International Publishing.
Wei-ping, Z., Ming-Xin, L., & Huan, C., 2011. Using MongoDB to implement
textbook management system instead of MySQL. In Communication Software and
Networks (ICCSN), 2011 IEEE 3rd International Conference on (pp. 303-305). IEEE.
Weragama, D., & Reye, J., 2012. Designing the knowledge base for a PHP tutor.
In International Conference on Intelligent Tutoring Systems (pp. 628-629). Springer
Berlin Heidelberg.
Wheeler, S., 2011. E-learning 3.0: Learning through the extended smart web. In National
IT Training Conference.
https://ittrainingconference.files.wordpress.com/2011/04/ittc_stevewheeler_smartweb.
pdf, Retrieved September 29, 2016.
Wiggins, J. B., Boyer, K. E., Baikadi, A., Ezen-Can, A., Grafsgaard, J. F., Ha, E. Y., ... &
Wiebe, E. N., 2015. JavaTutor: an intelligent tutoring system that adapts to cognitive
and affective states during computer programming. In Proceedings of the 46th ACM
Technical Symposium on Computer Science Education (pp. 599-599). ACM.
Woolf, B. P., 2010. Building intelligent interactive tutors: Student-centered strategies for
revolutionizing e-learning. Morgan Kaufmann.
Yu, J., Lee, H., Ha, I., & Zo, H., 2015. User acceptance of media tablets: An empirical
examination of perceived value. Telematics and Informatics.
Highlights

- We present a Web 3.0-based learning environment for an affective and intelligent tutoring system for the programming language Java.
- The environment includes a recommender system, a mining system, an affect recognizer, and an authoring tool.
- Affect recognition is oriented to learning-centered emotions.
- The recommender system is an e-learning 3.0 software component that recommends new exercises to a student based on the actions (ratings) of previous learners.
- We describe two different evaluations that measure the key aspects of our system.