All content following this page was uploaded by Suhni Abbasi on 10 October 2019.
1 INTRODUCTION
The emergence of human computer interaction (HCI) aims to provide user-friendly, specific information to users. Google, Yahoo, and other information retrieval systems are commonly used to retrieve information from the internet for users. The information retrieved from these systems consists of documents or links to other pages or documents. Whenever students use these information retrieval systems to find the solution to a specific problem, the results neither satisfy their needs nor effectively improve their learning abilities. Such problems give rise to the need for natural language dialog systems, in which students express their domain-specific problems in natural language and retrieve concise, satisfying answers [1, 2]. Computer programs based on artificial intelligence (AI) concepts were proposed by [3]. In these programs, users input their queries in natural language and output is generated in voice, text, or image formats. These programs are known as chatbot systems. Chatbots are conversational agents used as personal assistants, car assistants [4], customer support, transaction and helpdesk agents, smart wallets, data analytics tools, automated virtual assistants for repetitive tasks, and so on [5].
2 METHODOLOGY
This section describes the selection of participants for the evaluation of the prototype. The experimental setting includes the development of the knowledge base as the brain of the chatbot system, the graphical user interface (GUI) through which the user interacts with the system, and the complete system architecture of the LearningBot.
2.1 Participants
To measure the usefulness of the LearningBot and its effect on students' learning, 20 undergraduate students from the first and second year of the Computer Science Department, Isra University, Hyderabad, and 90 undergraduate students from the first year of the Information Technology Centre, Tandojam, were selected to participate in the research. First, all the students were introduced to the interface of the LearningBot, and a usability test was performed. The participants were then randomly divided into two groups and presented with various tasks related to OOP. One group of students used a traditional search engine and the other group used the LearningBot to answer the questions presented to them at the start.
2.3 GUI
The GUI and the artificial intelligence markup language (AIML) parsing architecture are implemented in Java because of its flexible approach to customizing the interface design, and are connected to the knowledge base via MySqlConnector.net. The GUI was designed in Android Studio 3.4. The design of the LearningBot is user-friendly so that students find it easy to interact with. First, the students were asked to sign up for the application.
The user's NL queries are sent through the user interface to the AIML parser. If students input their queries using voice commands, a speech recognition API is used to convert the voice commands into text output, which is likewise sent to the AIML parser via the UI. The AIML parser separates the given NL queries into wildcards and AIML patterns. The wildcard is then sent back to the UI, which formulates an SQL query using the wildcard and fires the query at the knowledge base system. If the query matches successfully, the wildcard-matching response is returned to the UI; if not, the UI displays the message "Sorry, the bot cannot understand what you said" to the user. The Android app formats the SQL response and displays it to the user.
1. The user sends an NL query through the Android-based interface to the AIML parser
2. Voice commands are converted to text output using the speech recognition API
3. The text output is sent to the Android application
4. The AIML parser matches the AIML pattern and returns the wildcards
5. The wildcard is sent to the Android application
6. The UI formulates the SQL query using the wildcard
7. The SQL query is fired at the knowledge base
8. If the wildcard matches, the knowledge base returns the wildcard-matching response to the UI
9. The result is displayed to the user in the desired format
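The matching and query-formulation steps above can be sketched in Java, the language the system is implemented in. This is a minimal, hypothetical illustration: the pattern "WHAT IS *", the `knowledge_base` table, and the `topic` column are assumptions for the example, not the authors' actual AIML category set or database schema, and real AIML matching is considerably richer than a single regular expression.

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LearningBotSketch {

    // One AIML-style pattern, "WHAT IS *", with a single wildcard slot.
    static final Pattern PATTERN =
            Pattern.compile("WHAT IS (.+)", Pattern.CASE_INSENSITIVE);

    // Step 4: match the NL query against the pattern and return the wildcard.
    static Optional<String> extractWildcard(String query) {
        Matcher m = PATTERN.matcher(query.trim());
        return m.matches()
                ? Optional.of(m.group(1).trim().toLowerCase())
                : Optional.empty();
    }

    // Step 6: the UI formulates an SQL query from the wildcard.
    // Real code should use a PreparedStatement with a bound "?" parameter
    // rather than string concatenation.
    static String formulateSql(String wildcard) {
        return "SELECT answer FROM knowledge_base WHERE topic = '" + wildcard + "'";
    }

    public static void main(String[] args) {
        Optional<String> w = extractWildcard("What is polymorphism");
        if (w.isPresent()) {
            System.out.println(formulateSql(w.get()));
        } else {
            // Fallback message from the text when no pattern matches.
            System.out.println("Sorry, the bot cannot understand what you said");
        }
    }
}
```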
The results for cognitive load in Fig. 8 show that IL while performing the tasks using the Google search engine is high compared to the tasks performed using the LearningBot. The EL for the item "The instructions and/or explanations during the activity were very unclear" is high when using the Google search engine. However, significantly higher GL for all the items is achieved using the LearningBot. The results also indicate that the total EL and GL together equal the total cognitive load minus IL, i.e., 46.67.
[SYLWAN., 163(10)]. ISI Indexed 58
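The decomposition referred to above follows the standard three-component model of cognitive load, in which total load is the sum of the intrinsic, extraneous, and germane components; the 46.67 figure is the value reported in the text, while the individual IL and total values are not restated here:

$$CL_{\text{total}} = IL + EL + GL \quad\Longrightarrow\quad EL + GL = CL_{\text{total}} - IL = 46.67$$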
4 CONCLUSION
This research focuses on the effect on students' learning of using the LearningBot to solve their problems. First, usability testing was performed to determine the usefulness of the developed system. After the SUS score threshold was achieved, the effects on students' learning of OOP were assessed by comparing the LearningBot with a conventional search engine. From the literature review, we find that chatbot systems measure and improve students' learning productivity and can be an efficient tool when used in a learning environment. The effects on students' learning were evaluated through two learning attributes: cognitive load and learning outcomes. Regarding the usability of the developed prototype, students reported satisfactory results for the system's understandability of their problems. The results obtained for cognitive load and learning outcomes show significant effects on students' learning when adopting the LearningBot to answer students' questions rather than a conventional search engine. GL is significantly higher while performing tasks using the LearningBot. Currently, the prototype is limited to a single programming domain; it can be enhanced by adding more domains. The developed prototype can also be embedded in the current course management system (CMS) to retrieve responses from the available courses of the curriculum for further improvement.
References
[1] B. A. Shawar and E. Atwell, “Chatbots: are they really useful?” Journal of
Computational Linguistics and Language Technology, Vol. 22, 2007, pp. 29–
49.
[2] L. Fryer, M. Ainley, A. Thompson, A. Gibson, and Z. Sherlock, “Stimulating
and sustaining interest in a language course: An experimental comparison of
chatbot and human task partners,” Computers in Human Behavior, Vol. 75,
2017, pp. 461–468.
[3] U. Dhavare and U. Kulkarni, “Natural language processing using artificial
intelligence,” International Journal of Emerging Trends & Technology in
Computer Science (IJETTCS), Vol. 4, 2015, pp. 203–205.
[4] M. Kumar, P. Chandar, A. Prasad, and K. Sumangali, “Android based
educational chatbot for visually impaired people,” in 2016 IEEE International
Conference on Computational Intelligence and Computing Research
(ICCIC), 2016, pp. 1–4.
[5] C. Holotescu, “Moocbuddy: a chatbot for personalized learning with moocs,”
in RoCHI, 2016, pp. 91–94.
[6] A. Kerlyl, P. Hall, and S. Bull, “Bringing chatbots into education: Towards
natural language negotiation of open learner models,” in International
Conference on Innovative Techniques and Applications of Artificial
Intelligence, 2006, pp. 179–192.
[7] R. Khan, S. Chawla, K. Kishor, and B. Ramana, “A survey for determining the
usefulness of a chatbot in today’s world,” Journal of Applied Science and
Computations, Vol. 4, 2019, pp. 637–47.
[8] S. Abbasi and H. Kazi, “Measuring effectiveness of learning chatbot systems
on student’s learning outcome and memory retention,” Asian Journal of
Applied Science and Engineering, Vol. 3, 2014, pp. 251–260.
[9] L. Benotti, M. Martínez, and F. Schapachnik, “Engaging high school
students using chatbots,” in Proceedings of the 2014 conference on
Innovation & technology in computer science education, 2014, pp. 63–68.
[10] J. Kay, Z. Halin, T. Ottomann, and Z. Razak, “Learner know thyself: Student
models to give learner control and responsibility,” in Proceedings of
International Conference on Computers in Education, 1997, pp. 17–24.
[11] C. Mahalakshmi, T. Sharmila, S. Priyanka, M. Sastry, B. Murthy, and M.
Reddy, “A survey on various chatbot implementation techniques,” Journal of
Applied Science and Computations, Vol. 4, 2019, pp. 320–330.
[12] M. Nicolescu and M. Mataric, “Natural methods for robot task learning:
Instructive demonstrations, generalization and practice,” in Proceedings of
the second international joint conference on Autonomous agents and
multiagent systems, 2003, pp. 241–248.
[13] M. J. Prince and R. M. Felder, “Inductive teaching and learning methods:
Definitions, comparisons, and research bases,” Journal of engineering
education, Vol. 95, 2006, pp. 123–138.
[14] T. Leahey and R. Harris, Human Learning. Prentice Hall, 1989.
[15] E. Mor, J. Minguillón, and F. Santanach, “Capturing user behavior in e-
learning environments,” 2007.
[16] M. Pivec, O. Dziabenko, and I. Schinnerl, “Aspects of game-based learning,”
in 3rd International Conference on Knowledge Management, Graz, Austria,
2003, pp. 216–225.
[17] D. Johanssen, “Designing constructivist learning environments,”
Instructional-design theories and models, Vol. 2, 2002, pp. 215–232.
[18] H. Coates, R. James, and G. Baldwin, “A critical examination of the
effects of learning management systems on university teaching and learning,”
Tertiary Education and Management, Vol. 11, 2005, pp. 19–36.
[19] F. Qiu and X. Hu, “Behaviorsim: a learning environment for behavior-based
agent,” in International Conference on Simulation of Adaptive Behavior,
2008, pp. 499–508.
[20] B. Zimmerman, “Theories of self-regulated learning and academic
achievement: An overview and analysis,” in Self-regulated Learning and
Academic Achievement. Routledge, 2013, pp. 10–45.
[21] N. Shechtman and L. Horowitz, “Media inequality in conversation: how
people behave differently when interacting with computers and people,” in
Proceedings of the SIGCHI conference on Human factors in computing
systems, 2003, pp. 281–288.
[22] C. Reed and T. Norman, Argumentation Machines: New Frontiers in
Argument and Computation. Springer Science and Business Media, 2003.
[23] T. Bench-Capon, “Try to see it my way: Modelling persuasion in legal
discourse,” Artificial Intelligence and Law, Vol. 11, 2003, pp. 271–287.
[24] K. Greenwood, T. Bench-Capon, and P. McBurney, “Towards a computational
account of persuasion in law,” in Proceedings of the 9th international
conference on Artificial intelligence and law, 2003, pp. 22–31.
[25] T. Kokubu, T. Sakai, Y. Saito, H. Tsutsui, T. Manabe, M. Koyama, and
H. Fujii, “The relationship between answer ranking and user satisfaction in a
question answering system,” in NTCIR, 2005.