Article · October 2019


Effect of Chatbot Systems on Student’s Learning Outcomes

Suhni Abbasi 1,2, Hameedullah Kazi 1 and Nazish Nawaz Hussaini 3

1 Department of Computer Science, Isra University, Hyderabad, 71000, Pakistan
2 Information Technology Centre, Sindh Agriculture University, Tandojam, 70060, Pakistan
3 Institute of Mathematics and Computer Science, University of Sindh, Jamshoro, 76090, Pakistan

E-mail: suhni.abbasi@sau.edu.pk

Providing a learning environment that has positive effects on students’ learning has remained a great challenge for decades. Solutions that offer an interactive interface and use natural language to answer students’ questions are generally regarded as positive, goal-oriented solutions. This paper presents an Android chatbot system, with which the user can interact in natural language, to improve the learning outcomes that are at the forefront of educational success. This research identifies and classifies different question patterns to be integrated into a chatbot system. Furthermore, this study designed an Android-based chatbot for learning Object-Oriented Programming (OOP); participants of the study were given tasks to perform with both the developed system and a conventional search engine in order to compare the results of the two. The evaluation showed a high germane load (GL) and a significant difference in lower-level cognitive outcomes in post-test scores when using the developed chatbot system.
Keywords: chatbot, Natural Language Processing, Mobile Learning, cognitive load, learning outcomes, usability

1 INTRODUCTION
The emergence of human-computer interaction (HCI) aims to provide user-friendly and specific information to users. Google, Yahoo, and other information retrieval systems are commonly used to retrieve information from the internet. The information retrieved from these systems consists of documents or links to other pages or documents. When students use these information retrieval systems to find solutions to a specific problem, the results often neither satisfy their needs nor effectively improve their learning abilities. Such problems give rise to the need for natural language dialog systems, in which students write their domain-specific problems in natural language and retrieve concise and satisfying answers [1, 2]. Computer programs based on artificial intelligence (AI) concepts were proposed by [3]. In these programs, users input their queries in natural language and output is generated in voice, text, or image format. These programs are known as chatbot systems. Chatbots are humanoid agents used as personal assistants, car assistants [4], customer support, transactions and helpdesks, smart wallets, data analytics, repetitive tasks, automated virtual assistants, and so on [5]. From

[SYLWAN., 163(10)]. ISI Indexed 49


asking Siri to set our morning alarm to using Google Maps to reach our destinations, we are all surrounded by chatbots [6]. Education is an area in which chatbots have made, and can make, a significant positive impact on learning [7]; as an educational tool, they help provide efficient and satisfying answers to students’ questions [4]. [8] reported that a chatbot system not only enhances students’ learning outcomes but also has a significant impact on memory retention. [9] reported that chatbots are an effective tool to enhance students’ learning and participation in a classroom setting, and that they are also effective when used in online competitions. [10] describes that presenting information in multiple ways can help students reflect on and improve their knowledge. Chatbots can be deployed as intelligent personal assistants (also called virtual assistants) on mobile devices and as artificial tutors in the educational field; they can provide a social networking purview for delivering personalized marketing to customers and prompt personalized feedback to learners [11]. Additionally, learning goals can be achieved effectively when a chatbot is properly integrated into a tutoring system. Creating a dialogue-based HCI that quantitatively and qualitatively enhances students’ ability to learn is challenging and difficult. Thus, determining the effect of chatbot systems on students’ learning, such as measuring cognitive load, stress management, learning outcomes, and memory retention, appears to be a great challenge. Many chatbot systems exist, but most of them are designed for entertainment purposes; only a few are intended to enhance students’ learning. When students face a problem, they often use Google to search for their academic problems, but many times they do not find satisfactory answers within the available time. Students demand a useful interface through which they can ask any domain-specific question, not only by typing but also by speaking as they would to a human. This research intended to design and develop an Android-based chatbot system that aims to answer students’ questions about OOP. The LearningBot app was designed by incorporating the various question patterns identified by students in the pilot study. Nowadays, mobile-based chatbot systems are increasingly built to accept different input styles; the developed prototype therefore accepts both text and voice commands, and output is likewise generated as text, audio, or images. The usefulness of the LearningBot was the first and foremost measurement taken before evaluating the effects on students’ cognitive load and learning outcomes.

1.1 Learning in chatbot system


Learning is the process of obtaining new, or transforming existing, knowledge, skills, or behaviors. It is a permanent change in one’s knowledge, behavior, or skill that comes from some form of training or experience. Humans, animals, and machines have the ability to learn, and this ability involves various kinds of information that can be acquired progressively [12]. Learning may occur as a part of school education, self-education, or any other specific training, in a conscious or unconscious way [13, 14]. Learning requires at least two types of participants, namely the person being instructed and the content of the learning itself [15]. [16] conceptualized learning as a multidimensional construct of learning skills, cognitive learning outcomes (such as procedural, declarative, and strategic knowledge), and attitudes. [17] reported that there is a great need to present curriculum contents in a variety of different knowledge representations, applying the knowledge within the virtual world to support and facilitate the learning process. It is reasonable to assume that a learning environment affects how users behave in it. In addition to the learning environment itself, the design, build, and delivery of content are also important factors that affect user behavior in a learning environment [18]. One of the main issues with user behavior in the e-learning environment is that a huge amount of data gathered from different sources is available, but the amount of specific data suitable for analysis is very small. Most commonly, users interact with the e-learning environment by means of a web browser, but this mechanism does not provide the relevant data that is required [15]. Nowadays, capturing the essence of behavior-based control in a general and well-defined architecture is very challenging in the learning environment; for example, the architecture should be natural enough that beginners can easily understand and grasp it [19]. Students’ learning objectives, such as knowledge acquisition, understanding, application, analysis, production, and evaluation, can be enhanced by integrating a chatbot agent with an interactive system. The efficiency of learning improves further if the student knows how the learning approaches are performed [20]. The chatbot and the task-oriented system are two different approaches that use natural language dialogues, and yet neither fully satisfies users’ demands. The problem with the task-oriented approach is that too many limitations are imposed on the dialogues, and chatbot responses are not always meaningful and do not follow the manners of persuasive dialogues [21]. However, natural language dialogue is quite popular in the AI area [22], particularly in permissible argumentation [23, 24]. So, efficiently using AI and natural language dialogue can fill the gap left between the chatbot approach and the task-oriented technique.

1.2 Usability of the chatbot system


Usability measures the extent to which a software product, such as a software application, website, book, learning tool, machine, or process, or anything with which a human can interact, can be used by a group of users to accomplish desired goals with effectiveness, efficiency, memorability, and satisfaction in a specified context of use.
Most commonly, usability is concerned with:
1. Who are the users? What do they know? What can they learn?
2. What do users want or need to do?
3. What is the user’s general background?
4. What is the user’s context for working?
5. What must be left to the machine?
[25] addressed the relationship between usability and the performance of general question answering systems. Most evaluation models focus on system-centered evaluation, while user-centered evaluation has attracted less attention. Therefore, [26] proposed an evaluation model for successful question answering systems from the user’s perspective. The goal of this model is to answer two questions:
1. How do individual users evaluate the success of a question answering system?
2. What factors influence an individual user’s evaluation of the question answering system’s success?

1.3 Effects on students’ learning

To analyze the effects of using the LearningBot, the cognitive load for solving the given tasks was measured. Bloom’s low-level outcomes were also compared as learning outcomes in the pre- and post-test of the study. Details about these learning attributes are described in sections 1.3.1 and 1.3.2.



1.3.1 Cognitive load
Cognitive load occurs when someone needs to process information in working memory, such as when performing tasks, tests, or instructions. Cognitive load is an important factor in designing an effective teaching or learning system, because control of cognitive load is critical to achieving effective learning. Cognitive load can be defined as “a multidimensional construct that represents the load imposed on the learner’s cognitive system by performing a particular task” [27]. Cognitive load theory (CLT) is an information processing theory that links working memory constraints to teaching effects. Excessive cognitive load should be avoided for learning activities to occur; the ability to measure this load is the basis for stating the load imposed by the tasks contained in the developed system [28]. CLT emphasizes working memory constraints as a determinant of the effectiveness of instructional design. Since working memory is very limited in capacity (4 ± 1 elements) [29] and duration [30], it should be used as efficiently as possible to construct patterns in long-term memory, whose capacity is unlimited and in which optimization and related learning activities take place. Once information is stored in long-term memory, it can be retrieved when needed. Therefore, if long-term learning has taken place, the working memory limit becomes less restrictive.
Types of cognitive load: In most current descriptions of CLT, three types of cognitive load are distinguished [31]. First, the intrinsic load (IL) refers to the extent to which task elements must be related to each other; IL can also be called task complexity. Second, the external load (EL) is caused by psychological activities and factors that do not directly support learning; the working memory capacity used by these activities and information does not lead to knowledge acquisition or automation. EL is caused by “bad” instructional design and should be minimized. Third, the germane load (GL) is caused by psychological activity and information that support learning. Unlike EL, instructional design should increase GL. The IL is imposed mainly by task complexity, while the EL and GL are imposed mainly by how what must be learned is presented by the instructional design. Since the IL is not affected by the instructional design, the combination of EL and GL should be made as effective as possible: the greater the proportion of GL, the greater the potential for learning. Therefore, for learning to occur, load needs to be transferred from EL to GL. The three cognitive loads are additive, and the total load imposed by a task cannot exceed working memory capacity. It is assumed that the total EL and GL equal the total cognitive load minus the IL.

1.3.2 Learning outcomes


The effect on students could be determined with the help of learning outcomes. Learning outcomes are intended to be achieved by defining the purpose of the research, the questions or assumptions made for the research, and often by directly applying powerful research tools. Educators and assessors most commonly use research purposes, questions, or hypotheses, and sometimes research instruments and results, to identify potential learning outcomes. Learning outcomes are an explicit description of what a learner should know, understand, and be able to do as a result of learning. [32] states that, alternatively, a learning outcome is a written statement of what a successful student/learner is expected to be able to do at the end of the module/course unit or qualification [33]. There are many classifications of learning outcomes. Traditionally, researchers have focused on the cognitive dimension of learning outcomes [34]. Others have included affect-oriented objectives such as appreciation [35]. Learning outcomes could be achieved effectively and easily if pedagogical psychologists design, develop, and evaluate easy-to-use learning techniques [36]. Learning techniques used outside the classroom, such as a chatbot or other computer learning environment, play a vital role by being self-directed, engaging, entertaining, interactive, and anthropomorphized, with an emphasis on increasing educational goals and outcomes. [37] argued that designing such computer software, grounded in an educational framework and providing a natural language (NL) medium for communication, would be helpful in various disciplines. [38] emphasized that chatbots have great potential to improve communication between students and computers and to stimulate student–computer interaction. In addition, improving learning outcomes and tedious learning experiences in distance learning has traditionally been considered challenging. [39] argues that chatbots developed using professional resources with strong educational scenarios have a significant impact on student learning.

2 METHODOLOGY
This section describes the selection of participants for the evaluation of the prototype. The experimental setting includes the development of the knowledge base as the brain of the chatbot system, the graphical user interface (GUI) through which the user interacts with the system, and the complete system architecture of the LearningBot.

2.1 Participants
To measure the usefulness of the LearningBot and the effect on students’ learning, 20 undergraduate students of the first and second year of the Computer Science Department, Isra University, Hyderabad, and 90 undergraduate students of the first year of the Information Technology Centre, Tandojam, were selected to participate in the research. First, all the students were introduced to the interface of the LearningBot, and a usability test was performed. The participants were then randomly divided into two groups and presented with various tasks related to OOP. One group of students used the traditional search engine and the other group used the LearningBot to answer the questions presented to them at the start.

2.2 Knowledge Base


The knowledge base of the LearningBot is designed for the OOP domain using the MySQL Database Server. The major component of the LearningBot is the chatbot’s brain, which must be populated with the required questions. The brain, or knowledge base, is populated in two steps: first, various sample questions that participants wished to ask during class to acquire knowledge of OOP were collected; later, the collected questions were cleaned, categorized as “What”, “Why”, “Perform Operation”, “How”, “Advantage or Disadvantage”, “Application”, or “Who”, and stored in the database.
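The categorization step above can be sketched as a simple keyword-based classifier. The seven category names come from this paper; the keyword rules, function name, and fallback behavior below are illustrative assumptions, not the system’s actual implementation.

```python
# Illustrative sketch of the question-pattern categorization step.
# The seven categories follow the paper; the matching rules are assumptions.
CATEGORY_RULES = [
    ("What", ("what",)),
    ("Why", ("why",)),
    ("How", ("how",)),
    ("Who", ("who",)),
    ("Advantage or Disadvantage", ("advantage", "disadvantage")),
    ("Application", ("application", "use of")),
    ("Perform Operation", ("write", "implement", "perform")),
]

def categorize(question: str) -> str:
    """Assign a cleaned question to one of the seven pattern categories."""
    q = question.lower().strip()
    for category, keywords in CATEGORY_RULES:
        if any(k in q for k in keywords):
            return category
    return "What"  # fallback category for unmatched questions
```

For example, "Write a program to overload an operator" would fall into the "Perform Operation" category, while "Why do we use constructors?" would fall into "Why".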

2.3 GUI
The GUI and the artificial intelligence markup language (AIML) parsing architecture are implemented in the Java language, owing to its flexible approach to customizing the interface design, and are connected to the knowledge base via MySqlConnector.net. The GUI was designed in Android Studio 3.4. The design of the LearningBot is very user-friendly, so that students find it easy to interact with. First, students are asked to sign up for an account the first time, as shown in Fig. 1, so that information about the students’ interactions is kept secure and their privacy is maintained. Students then choose a nickname and avatar so that they can interact with the LearningBot in an appealing way. Students also choose a name and avatar for the bot they want to interact with, as shown in Fig. 2. In LearningBot, students may ask their NL queries either by typing or by giving a voice command using the mic, as shown in Fig. 3. Students can view the response as text or an image, as shown in Fig. 4, depending on the query they ask the bot. Students are also provided an option to listen to the response to their queries using speakers or headphones.

Figure 1: Sign up for LearningBot.
Figure 2: Choosing name and avatar.
Figure 3: Interaction with LearningBot.
Figure 4: Image as a response from LearningBot.

2.4 System Architecture


The LearningBot is developed by employing the principles and techniques of intelligent chatbot systems and natural language processing (NLP). It is complex to enter large categories of question–answer responses manually into existing chatbot systems; hence, to overcome this issue, automated chatbot systems are used, which automatically extract responses for users from a source of knowledge, i.e., clouds, databases, or search engines. LearningBot’s architecture consists of four main components: the user interface (UI), the AIML parser, the knowledge base, and the text-to-speech conversion system. The components and process of LearningBot are shown in Fig. 5.

Figure 5: Process modeling of LearningBot.

The user’s NL queries are sent via the user interface to the AIML parser; if students input queries using voice commands, a speech recognition API is used to convert the voice commands into text, which is likewise sent to the AIML parser via the UI. The AIML parser separates the given NL queries into wildcards and AIML patterns. The wildcard is sent back to the UI, which formulates an SQL query using the wildcard and fires the query at the knowledge base. If the query matches successfully, the wildcard-matching response is returned to the UI; if not, the UI displays the message “sorry, the bot cannot understand what you said” to the user. The Android app formats the SQL response and displays it to the user.

1. The user sends the NL query via the Android-based interface to the AIML parser
2. Voice commands are converted to text using the speech recognition API
3. The text output is sent to the Android application
4. The AIML parser matches the AIML pattern and returns the wildcards
5. The wildcard is sent to the Android application
6. The UI formulates the SQL query using the wildcard
7. The SQL query is fired at the knowledge base
8. If the wildcard matches, the knowledge base returns the wildcard-matching response to the UI
9. The result is displayed to the user in the desired format
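The steps above can be sketched end to end. The real system uses an AIML parser with a MySQL knowledge base; this sketch substitutes a regex-based pattern matcher and an in-memory SQLite table, and the patterns, table schema, and sample row are illustrative assumptions.

```python
import re
import sqlite3

# Stand-in knowledge base (the real system uses a MySQL server).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kb (category TEXT, topic TEXT, answer TEXT)")
db.execute("INSERT INTO kb VALUES ('What', 'polymorphism', "
           "'Polymorphism lets one interface represent many forms.')")

# Simplified AIML-style patterns: '*' marks the wildcard slot.
PATTERNS = [("What", "WHAT IS *"), ("Why", "WHY DO WE USE *")]

def parse(query: str):
    """Match the query against the patterns and extract the wildcard (steps 4-5)."""
    for category, pattern in PATTERNS:
        regex = "^" + pattern.replace("*", "(.+)") + "$"
        m = re.match(regex, query.upper().rstrip("?"))
        if m:
            return category, m.group(1).strip().lower()
    return None, None

def answer(query: str) -> str:
    """Formulate and fire the SQL query for the wildcard (steps 6-9)."""
    category, wildcard = parse(query)
    if category is None:
        return "sorry, the bot cannot understand what you said"
    row = db.execute("SELECT answer FROM kb WHERE category=? AND topic=?",
                     (category, wildcard)).fetchone()
    return row[0] if row else "sorry, the bot cannot understand what you said"
```

Here "What is polymorphism?" matches the pattern "WHAT IS *", the wildcard "polymorphism" is extracted, and the parameterized SQL lookup returns the stored answer; an unmatched query falls through to the fallback message.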

2.5 Usability of the system



Usability is a part of the broader term “user experience” and refers to assessing quality attributes of the user interface related to ease of use; it also involves ways to improve ease of use during the design process. The ISO 9241-11 [40] standard refers to the concept of usability as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use”. Effectiveness concerns the accuracy and completeness with which users achieve their targeted goals, whereas efficiency refers to the resources expended in achieving these goals. The usability of the developed LearningBot was measured using the System Usability Scale (SUS), developed by John Brooke to assess the overall usability of a system [41]. The SUS uses 10 items to collect users’ subjective opinions. Each item is presented to participants on a 5-point scale, ranging from 1 for “Strongly disagree” to 5 for “Strongly agree”. Items 1 and 4 relate to the learnability of the system and the remaining items correspond to its usability. The SUS provides an easy scoring system, and these scores can be compared to scores reported by other researchers; the mean SUS score reported in the literature is 68. With this scoring system, we can compare the score of chatbots to other information systems.

2.6 Effects on student’s learning


To measure the effects on students’ learning, the following attributes were used to evaluate students’ performance.
2.6.1 Cognitive load
Properly measuring the various cognitive loads can help us understand why the effectiveness and efficiency of a learning environment may vary depending on the form of instruction and learner characteristics [42]. If the cognitive load is high, working memory is used less effectively and fewer relevant learning activities occur; when the load is too high, this can lead to cognitive overload. To obtain control over the imposed cognitive load, the load needs to be measured. The ability to measure and influence the cognitive load a task imposes on a learner is relevant for the design of effective instruction and gives more insight into the processes that underlie cognitive load. As defined in the introduction for the different types of cognitive load, the IL can be addressed in instructional design by selecting learning tasks that match the student’s existing knowledge [43], while the EL should be minimized to reduce ineffective load [44, 45] and allow learners to participate in GL activities. To measure the collective cognitive load, the instrument developed by [42], shown in Fig. 6, is used.
The instrument contains 10 items to measure students’ cognitive load. Items 1–3 concern the complexity of the topic itself and are expected to load on the IL factor; items 4–6 concern the negative features of the instructions and explanations, so the EL factor is expected to be measured; items 7–10 concern the degree to which the instructions and explanations contribute to learning, measuring the GL factor. Each item is presented to participants on a 10-point scale, ranging from 0 for “not at all the case” to 10 for “completely the case”.
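Given the 10-item grouping above, the three subscale scores can be computed as simple per-factor means. The function below is an illustrative sketch of that grouping, not the instrument’s official scoring procedure.

```python
def cognitive_load_subscales(responses):
    """Compute IL, EL, and GL subscale means from ten 0-10 item responses.

    Items 1-3 load on intrinsic load (IL), items 4-6 on external load (EL),
    and items 7-10 on germane load (GL), following the grouping in the text.
    """
    if len(responses) != 10:
        raise ValueError("expected exactly 10 item responses")
    il = sum(responses[0:3]) / 3   # items 1-3
    el = sum(responses[3:6]) / 3   # items 4-6
    gl = sum(responses[6:10]) / 4  # items 7-10
    return il, el, gl
```

For a participant who rated the complexity items 3, the instruction-clarity items 6, and the learning-contribution items 8, the subscale means would be IL = 3.0, EL = 6.0, GL = 8.0.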



Figure 6: Cognitive load measurement instrument adapted from [42].

2.6.2 Learning outcomes


Learning outcomes can be determined through research purposes, research questions or assumptions, and sometimes through research tools and, if necessary, research findings. Learning outcomes clearly describe what learners should know, understand, and be able to do through learning [32]; they also provide valid evidence about students’ abilities at the end of a course or module of learning [33].

3 RESULTS AND DISCUSSION


To analyze the usability of the developed LearningBot, a student survey was conducted, and to analyze the effects on students’ learning, an experiment employing tasks with cognitive load was carried out in order to measure the learning outcomes. All participants were randomly divided into two groups: one group used our developed LearningBot and the other group used a conventional search engine to solve the tasks they were given. The responses received from the participants were evaluated by experts. The results for all three attributes are described as follows:

3.1 Usability measurements


For evaluating usability, the students who used the LearningBot were given an open-ended feedback form, and all students (n=110) were given the SUS questionnaire. To calculate each individual response, Brooke’s [41] scoring method was adapted in this study; the following equations were used to calculate the SUS score.
Odd items (1, 3, 5, 7, 9):
Sum1 = Σ (individual score value − 1) (1)
Even items (2, 4, 6, 8, 10):
Sum2 = Σ (5 − individual score value) (2)

SUS score = 2.5 × (Sum1 + Sum2) (3)
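Brooke’s scoring equations can be implemented directly. The function below is a sketch of the standard SUS computation, assuming the ten responses are ordered from item 1 through item 10.

```python
def sus_score(responses):
    """Compute the SUS score from ten 1-5 responses (item 1 first).

    Odd items contribute (score - 1), even items contribute (5 - score),
    and the summed contributions are scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    sum1 = sum(r - 1 for r in responses[0::2])  # odd items 1, 3, 5, 7, 9
    sum2 = sum(5 - r for r in responses[1::2])  # even items 2, 4, 6, 8, 10
    return 2.5 * (sum1 + sum2)
```

A respondent who answers every item with the neutral value 3 scores 50; answering 5 on every odd item and 1 on every even item gives the maximum score of 100.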



Figure 7: Usefulness of the LearningBot.

The above calculations allowed us to quantify usability as if it were a measurable entity. A score of 68 has been widely used in the literature as a usability threshold for interactive technologies (i.e., computer applications and websites). Fig. 7 shows the individual usability score counts. As the figure reveals, 3 students scored 0, 15 students scored 50, 2 students scored 60, 20 students scored 70, 5 students scored 80, 30 students scored 90, 14 students scored 95, and 21 students scored 100. The overall average SUS score was 68.13.
3.2 Cognitive load measurements
For evaluating the cognitive load, the experimental group of students (n=110) was divided into two equal groups and 4 different tasks were assigned to each student. Half of the students, chosen randomly, were asked to find solutions to their tasks using the developed LearningBot, and the other half were asked to use a conventional search engine.

Figure 8: Evaluation results for students’ cognitive load.

The results for the cognitive load in Fig. 8 show that the IL while performing the tasks using the Google search engine is high compared to the tasks performed using the LearningBot. The EL for the item “The instructions and/or explanations during the activity were very unclear” is high when using the Google search engine. However, a significantly higher GL for all items is achieved using the LearningBot. The results also indicate that the total EL and GL is exactly equal to the total cognitive load minus the IL, i.e., 46.67.

3.3 Learning outcomes measurements


For evaluating the learning outcomes, all students (n=110) were randomly divided into two groups. A pre- and post-test was conducted for both groups, using the conventional search engine and the LearningBot to solve the given tasks for the selected domain. The OOP learning tasks were assigned to all participants in the pre- and post-test sessions. The outcomes were measured by the score which students obtained after each round of the test. The average scores of each group in the pre- and post-test are given in Table 1 and Fig. 9.

Figure 9: Results for the learning outcomes measurement.

Table 1: Evaluation results for students’ learning outcomes, mean score (SD).

            Pre-Test      Post-Test
Google      26.84 (9.09)  40.62 (9.47)
LearningBot 34.64 (8.26)  57.2 (9.86)

By adopting the conventional search engine or the LearningBot, the lower layer of Bloom’s outcomes was improved significantly compared to learning only from lectures or books. The post-test scores were significantly higher than the pre-test scores (p < 0.001, Mann-Whitney). Moreover, learning outcomes were significantly higher when students used the LearningBot rather than a search engine to complete their task (p < 0.001, Mann-Whitney). Hence, students learned more when using the LearningBot than when using the search engine. Further, the responses generated by the LearningBot were of good quality and efficiency, and students were satisfied with the responses.
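The Mann-Whitney comparison reported above is a rank-based test for two independent samples. The function below is an illustrative sketch of the U statistic (with average ranks for ties); it is not the exact analysis pipeline used in the study, and in practice a statistics library would also supply the p-value.

```python
def mann_whitney_u(group_a, group_b):
    """Return the Mann-Whitney U statistic for two independent samples.

    All observations are pooled and ranked (tied values receive the
    average rank); U is the smaller of U1 and U2, as used in the
    two-sided test.
    """
    pooled = sorted(group_a + group_b)
    # Average rank for each distinct value (1-based ranks).
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    r1 = sum(ranks[x] for x in group_a)
    n1, n2 = len(group_a), len(group_b)
    u1 = r1 - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)
```

For two completely separated samples such as [1, 2, 3] and [4, 5, 6], the statistic is 0, the strongest possible evidence of a difference in this test.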

4 CONCLUSION
This research focuses on the effect on students’ learning of using the LearningBot to solve their problems. First, usability testing was performed to determine the usefulness of the developed system. After the SUS score threshold was achieved, the effects on students’ learning were assessed for OOP by comparing the LearningBot and a conventional search engine. From the literature review, we find that a chatbot system can measure and improve students’ learning productivity and can be an efficient tool when used in the learning environment. The effects on students’ learning were evaluated by two learning attributes: cognitive load and learning outcomes. Regarding the usability of the developed prototype, students showed satisfactory results for the understandability of their problems by the system. The results obtained for the cognitive load and learning outcomes show significant effects on students’ learning when the LearningBot, rather than a conventional search engine, is adopted to answer students’ questions; GL is significantly higher while performing the tasks using the LearningBot. Currently, the prototype is limited to one programming domain; it can be enhanced by adding more domains. The developed prototype can also be embedded in a current course management system (CMS) to retrieve responses from the available courses of the curriculum for further improvement.

References
[1] B. A. Shawar and E. Atwell, “Chatbots: are they really useful?” Journal of
Computational Linguistics and Language Technology, Vol. 22, 2007, pp. 29–
49.
[2] L. Fryer, M. Ainley, A. Thompson, A. Gibson, and Z. Sherlock, “Stimulating
and sustaining interest in a language course: An experimental comparison of
chatbot and human task partners,” Computers in Human Behavior, Vol. 75,
2017, pp. 461–468.
[3] U. Dhavare and U. Kulkarni, “Natural language processing using artificial
intelligence,” International Journal of Emerging Trends & Technology in
Computer Science (IJETTCS), Vol. 4, 2015, pp. 203–205.
[4] M. Kumar, P. Chandar, A. Prasad, and K. Sumangali, “Android based
educational chatbot for visually impaired people,” in 2016 IEEE International
Conference on Computational Intelligence and Computing Research
(ICCIC), 2016, pp. 1–4.
[5] C. Holotescu, “Moocbuddy: a chatbot for personalized learning with moocs,”
in RoCHI, 2016, pp. 91–94.
[6] A. Kerlyl, P. Hall, and S. Bull, “Bringing chatbots into education: Towards
natural language negotiation of open learner models,” in International
Conference on Innovative Techniques and Applications of Artificial
Intelligence, 2006, pp. 179–192.
[7] R. Khan, S. Chawla, K. Kishor, and B. Ramana, “A survey for determining the
usefulness of a chatbot in today’s world,” Journal of Applied Science and
Computations, Vol. 4, 2019, pp. 637–47.
[8] S. Abbasi and H. Kazi, “Measuring effectiveness of learning chatbot systems
on student’s learning outcome and memory retention,” Asian Journal of
Applied Science and Engineering, Vol. 3, 2014, pp. 251–260.
[9] L. Benotti, M. Martínez, and F. Schapachnik, “Engaging high school
students using chatbots,” in Proceedings of the 2014 Conference on
Innovation & Technology in Computer Science Education, 2014, pp. 63–68.
[SYLWAN., 163(10)]. ISI Indexed
[10] J. Kay, Z. Halim, T. Ottmann, and Z. Razak, “Learner know thyself: Student
models to give learner control and responsibility,” in Proceedings of
International Conference on Computers in Education, 1997, pp. 17–24.
[11] C. Mahalakshmi, T. Sharmila, S. Priyanka, M. Sastry, B. Murthy, and M.
Reddy, “A survey on various chatbot implementation techniques,” Journal of
Applied Science and Computations, Vol. 4, 2019, pp. 320–330.
[12] M. Nicolescu and M. Mataric, “Natural methods for robot task learning:
Instructive demonstrations, generalization and practice,” in Proceedings of
the second international joint conference on Autonomous agents and
multiagent systems, 2003, pp. 241–248.
[13] M. J. Prince and R. M. Felder, “Inductive teaching and learning methods:
Definitions, comparisons, and research bases,” Journal of engineering
education, Vol. 95, 2006, pp. 123–138.
[14] T. Leahey and R. Harris, Human Learning. Prentice Hall, 1989.
[15] E. Mor, J. Minguillón, and F. Santanach, “Capturing user behavior in e-
learning environments,” 2007.
[16] M. Pivec, O. Dziabenko, and I. Schinnerl, “Aspects of game-based learning,”
in 3rd International Conference on Knowledge Management, Graz, Austria,
2003, pp. 216–225.
[17] D. Jonassen, “Designing constructivist learning environments,”
Instructional-design Theories and Models, Vol. 2, 2002, pp. 215–232.
[18] H. Coates, R. James, and G. Baldwin, “A critical examination of the
effects of learning management systems on university teaching and learning,”
Tertiary Education and Management, Vol. 11, 2005, pp. 19–36.
[19] F. Qiu and X. Hu, “Behaviorsim: a learning environment for behavior-based
agent,” in International Conference on Simulation of Adaptive Behavior,
2008, pp. 499–508.
[20] B. Zimmerman, “Theories of self-regulated learning and academic
achievement: An overview and analysis,” in Self-regulated Learning and
Academic Achievement. Routledge, 2013, pp. 10–45.
[21] N. Shechtman and L. Horowitz, “Media inequality in conversation: how
people behave differently when interacting with computers and people,” in
Proceedings of the SIGCHI conference on Human factors in computing
systems, 2003, pp. 281–288.
[22] C. Reed and T. Norman, Argumentation Machines: New Frontiers in
Argument and Computation. Springer Science and Business Media, 2003.
[23] T. Bench-Capon, “Try to see it my way: Modelling persuasion in legal
discourse,” Artificial Intelligence and Law, Vol. 11, 2003, pp. 271–287.
[24] K. Greenwood, T. Bench-Capon, and P. McBurney, “Towards a computational
account of persuasion in law,” in Proceedings of the 9th international
conference on Artificial intelligence and law, 2003, pp. 22–31.
[25] T. Kokubu, T. Sakai, Y. Saito, H. Tsutsui, T. Manabe, M. Koyama, and
H. Fujii, “The relationship between answer ranking and user satisfaction in a
question answering system,” in NTCIR, 2005.
[26] C. Ong, M. Day, and W. Hsu, “The measurement of user satisfaction with
question answering systems,” Information & Management, Vol. 46, 2009, pp.
397–403.
[27] F. Paas, J. E. Tuovinen, H. Tabbers, and P. Van Gerven, “Cognitive load
measurement as a means to advance cognitive load theory,” Educational
Psychologist, Vol. 38, 2003, pp. 63–71.
[28] G. Wernaart, “Cognitive load measurement: Different instruments for
different types of load?” Master’s thesis, 2013.
[29] N. Cowan, “The magical number 4 in short-term memory: A reconsideration
of mental storage capacity,” Behavioral and brain sciences, Vol. 24, 2001,
pp. 87–114.
[30] L. Peterson and M. Peterson, “Short-term retention of individual verbal
items.” Journal of experimental psychology, Vol. 58, 1959, p. 193.
[31] J. J. G. van Merriënboer and J. Sweller, “Cognitive load theory and complex
learning: Recent developments and future directions,” Educational
Psychology Review, Vol. 17, 2005, pp. 147–177.
[32] J. Bingham, “Guide to developing learning outcomes,” 1999.
[33] A. Stephen, “Using learning outcomes: A consideration of the nature, role,
application and implications for European education of employing ‘learning
outcomes’ at the local, national and international levels,” University of
Westminster, Edinburgh, 2004.
[34] B. S. Bloom, M. Engelhart, E. Furst, W. Hill, and D. Krathwohl, “Taxonomy
of educational objectives: The classification of educational goals,” 1956
(7th edition, 1972).
[35] D. R. Krathwohl, “Taxonomy of educational objectives: The classification of
educational goals,” Affective Domain, 1964.
[36] J. Dunlosky, K. Rawson, E. J. Marsh, M. Nathan, and D. Willingham,
“Improving students’ learning with effective learning techniques: Promising
directions from cognitive and educational psychology,” Psychological
Science in the Public Interest, Vol. 14, 2013, pp. 4–58.
[37] C. Fosnot and R. S. Perry, “Constructivism: A psychological theory of
learning,” Constructivism: Theory, perspectives, and practice, Vol. 2, 1996,
pp. 8–33.
[38] W. L. Johnson, J. Rickel, and J. C. Lester, “Animated pedagogical agents:
Face-to-face interaction in interactive learning environments,” International
Journal of Artificial intelligence in education, Vol. 11, no. 1, 2000, pp. 47–
78.
[39] L. Wood, T. Reiners, and T. Bastiaens, “Design perspective on the role of
advanced bots for self-guided learning,” The International Journal of
Technology, Knowledge, and Society, Vol. 9, 2014, pp. 187–199.
[40] N. Bevan, J. Carter, and S. Harker, “ISO 9241-11 revised: What have we learnt
about usability since 1998?” in International Conference on Human-
Computer Interaction, 2015, pp. 143–151.
[41] J. Brooke, “SUS: A quick and dirty usability scale,” Usability Evaluation in
industry, Vol. 189, 1996, pp. 4–7.
[42] J. Leppink, F. Paas, C. P. Van der Vleuten, T. Van Gog, and J. J. Van
Merriënboer, “Development of an instrument for measuring different types
of cognitive load,” Behavior Research Methods, Vol. 45, 2013, pp. 1058–1072.
[43] S. Kalyuga, “Knowledge elaboration: A cognitive load perspective,”
Learning and Instruction, Vol. 19, 2009, pp. 402–410.
[44] S. Kalyuga and J. Hanham, “Instructing in generalized knowledge structures
to develop flexible problem solving skills,” Computers in Human Behavior,
Vol. 27, 2011, pp. 63–68.
[45] F. Paas, A. Renkl, and J. Sweller, “Cognitive load theory and instructional
design: Recent developments,” Educational psychologist, Vol. 38, 2003, pp.
1–4.