1 Introduction
Conversational dialog systems allow people to communicate with intelligent soft-
ware in a natural way. Natural user interfaces equipped with conversation abil-
ities, voice recognition, and speech generation have been recognized as a future
user interface in various domains [12] and are already being commercialized.
Such conversational interfaces are useful in intelligent tutoring systems (ITS)
[4], where mixed-initiative dialogues are commonly used to teach conceptual
information [10,21]. They leverage the flexibility and expressiveness of natural
language, allowing learners to convey partial knowledge and ask questions.
However, the best way to implement effective conversational interfaces for
intelligent tutoring still needs to be investigated [8], particularly given the
limitations of natural language interfaces (NLI) [19]. Some information is better
conveyed through visual representations, and concept maps are widely used to help
learners visualize relationships and the hierarchical organization of ideas.
The interface presented here is a hybrid of two approaches: textual conversation
and visualization. It gives learners a high degree of flexibility in reporting
knowledge and receiving feedback, but it also scaffolds learner behavior through
automated assessment and feedback. The visualization part of the
© Springer International Publishing AG, part of Springer Nature 2018
C. Penstein Rosé et al. (Eds.): AIED 2018, LNAI 10948, pp. 413–418, 2018.
https://doi.org/10.1007/978-3-319-93846-2_77
414 J. Ahn et al.
learning objective (e.g., answering a question). In addition to the adaptation
driven by the interactive dialogue, the system supports adaptation based on an
open learner model [3,5]. The prototype is equipped with a learner model that
tracks students' mastery of, and consistency on, each concept. These scores are
overlaid on the concepts as black and white "arcs" that transparently show the
user's mastery and consistency.
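The paper does not spell out how the two scores are computed, so the following is a minimal sketch of a per-concept learner model; the `ConceptModel` name and both scoring formulas are illustrative assumptions, not the authors' method.

```python
from dataclasses import dataclass, field
from statistics import pstdev

@dataclass
class ConceptModel:
    """Tracks one student's answer history for a single concept."""
    outcomes: list = field(default_factory=list)  # 1.0 = correct, 0.0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1.0 if correct else 0.0)

    @property
    def mastery(self) -> float:
        # Assumed definition: fraction of correct answers so far.
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    @property
    def consistency(self) -> float:
        # Assumed definition: 1 minus twice the population std. dev. of the
        # outcomes, so an all-correct (or all-incorrect) history scores 1.0
        # and a maximally mixed history scores 0.0.
        if len(self.outcomes) < 2:
            return 0.0
        return 1.0 - 2.0 * pstdev(self.outcomes)

# The two scores could drive the black and white arcs drawn around a node,
# e.g., with each arc's sweep angle set to score * 360 degrees.
cm = ConceptModel()
for correct in (True, True, False, True):
    cm.record(correct)
print(round(cm.mastery, 2), round(cm.consistency, 2))  # 0.75 0.13
```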
Fig. 1. The Adaptive Visual Dialog prototype showing a textual dialog (a) and a
visualization that presents concepts and clusters (b), synced with the conversation
context. The student can ask a question directly from the visualization (c). Mastery
and consistency scores loaded from an open learner model are overlaid on concepts
as black and white arcs (d). (Color figure online)
to the system. We have devised a preliminary set of strategies for how concept
graphs and text-based dialog interactions should be coordinated (Table 1).
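Table 1 is not reproduced in this excerpt, but one plausible coordination strategy, highlighting the graph nodes mentioned in the current dialog turn and expanding their clusters, can be sketched as follows; the concept names, cluster labels, and keyword-matching rule are all hypothetical.

```python
import re

# Hypothetical concept graph: concept name -> cluster label.
concept_graph = {
    "photosynthesis": "plant biology",
    "chlorophyll": "plant biology",
    "mitochondria": "cell structure",
}

def concepts_in_turn(utterance: str, concepts) -> set:
    """Return the known concepts mentioned in one dialog turn."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    return {c for c in concepts if c in words}

def sync_visualization(utterance: str) -> dict:
    """One possible coordination rule: highlight mentioned nodes and
    expand the clusters that contain them."""
    hits = concepts_in_turn(utterance, concept_graph)
    return {
        "highlight_nodes": sorted(hits),
        "expand_clusters": sorted({concept_graph[c] for c in hits}),
    }

state = sync_visualization("What role does chlorophyll play in photosynthesis?")
print(state)
```

A real system would use the dialog manager's concept annotations rather than keyword matching, but the interface contract (dialog turn in, visualization update out) is the same.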
Users can also create their own visual representations, attuned to their own
understanding of the text. Visual nodes are derived from annotations that users
make within the text through the user interface: the user selects a passage and
drags it to the visual dialog interface, where it is rendered as a new node. Users
can then interact with this node and define relationships in the same manner as
they would with the other nodes in the visual dialog (i.e., it may be assigned to
clusters, explored, or linked via edges to other nodes). The very process of
creating and manipulating such visual representations can be thought of as a form
of self-explanation, which can itself be a learning activity [7]. These visual
representations can be compared to the standard (i.e., automatically extracted)
visualization via partial graph-matching algorithms [9], enabling concrete feedback
on user-generated visualizations. Students can also share these annotated visual
dialogs with their peers and instructors and receive feedback.
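Reference [9] describes a graduated-assignment algorithm for partial graph matching; as a much simpler stand-in, the sketch below scores a user-drawn concept graph against the automatically extracted one by set overlap of shared node labels and undirected edges. The `graph_feedback` name and the feedback fields are hypothetical.

```python
def graph_feedback(user_nodes, user_edges, ref_nodes, ref_edges):
    """Compare a user-drawn concept graph with the automatically extracted
    reference graph. Edges are stored as frozensets of two labels, so edge
    direction is ignored. Returns overlap scores plus what is missing."""
    user_nodes, ref_nodes = set(user_nodes), set(ref_nodes)
    user_edges = {frozenset(e) for e in user_edges}
    ref_edges = {frozenset(e) for e in ref_edges}
    node_overlap = len(user_nodes & ref_nodes) / len(user_nodes | ref_nodes)
    edge_overlap = (len(user_edges & ref_edges) / len(user_edges | ref_edges)
                    if user_edges | ref_edges else 1.0)
    return {
        "node_overlap": node_overlap,          # Jaccard similarity of nodes
        "edge_overlap": edge_overlap,          # Jaccard similarity of edges
        "missing_nodes": sorted(ref_nodes - user_nodes),
        "missing_edges": sorted(sorted(e) for e in ref_edges - user_edges),
    }

fb = graph_feedback(
    user_nodes=["cell", "nucleus"],
    user_edges=[("cell", "nucleus")],
    ref_nodes=["cell", "nucleus", "mitochondria"],
    ref_edges=[("cell", "nucleus"), ("cell", "mitochondria")],
)
print(fb["missing_nodes"])  # ['mitochondria']
```

The `missing_nodes` and `missing_edges` lists are what would feed concrete feedback ("your map does not yet connect cell and mitochondria"); true partial matching as in [9] would additionally align nodes whose labels differ.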
References
1. Watson Conversation (2018). https://www.ibm.com/watson/services/conversation
2. Ahn, J., Brusilovsky, P.: Adaptive visualization for exploratory information
retrieval. Inf. Process. Manag. 49(5), 1139–1164 (2013)
3. Ahn, J., Brusilovsky, P., Grady, J., He, D., Syn, S.Y.: Open user profiles for adap-
tive news systems: help or harm? In: WWW 2007: Proceedings of the 16th Interna-
tional Conference on World Wide Web, pp. 11–20. ACM Press, New York (2007).
https://doi.org/10.1145/1242572.1242575
4. Anderson, J.R., Boyle, C.F., Reiser, B.J.: Intelligent tutoring systems. Science
(Washington) 228(4698), 456–462 (1985)
5. Bakalov, F., Meurs, M.J., König-Ries, B., Sateli, B., Witte, R., Butler, G., Tsang,
A.: An approach to controlling user models and personalization effects in recom-
mender systems. In: Proceedings of the 2013 International Conference on Intelligent
User Interfaces, IUI 2013, pp. 49–56. ACM, New York (2013)
6. Bredeweg, B., Forbus, K.D.: Qualitative modeling in education. AI Mag. 24(4), 35
(2003)
7. Chi, M.T., De Leeuw, N., Chiu, M.H., LaVancher, C.: Eliciting self-explanations
improves understanding. Cognit. Sci. 18(3), 439–477 (1994)
8. Coetzee, D., Fox, A., Hearst, M.A., Hartmann, B.: Chatrooms in MOOCs: all talk
and no action. In: Proceedings of the First ACM Conference on Learning @ Scale
Conference, L@S 2014, pp. 127–136. ACM, New York (2014)
9. Gold, S., Rangarajan, A.: A graduated assignment algorithm for graph matching.
IEEE Trans. Pattern Anal. Mach. Intell. 18(4), 377–388 (1996)
10. Graesser, A.C., Chipman, P., Haynes, B.C., Olney, A.: AutoTutor: an intelligent
tutoring system with mixed-initiative dialogue. IEEE Trans. Educ. 48(4), 612–618
(2005)
11. Grawemeyer, B., Cox, R.: A Bayesian approach to modelling users' information
display preferences. User Model. 2005, 225–230 (2005)
12. Hearst, M.A.: ‘Natural’ search user interfaces. Commun. ACM 54, 60–67 (2011)
13. Jacomy, M., Heymann, S., Venturini, T., Bastian, M.: ForceAtlas2, a continu-
ous graph layout algorithm for handy network visualization. Medialab Center of
Research 560 (2011)
14. Leelawong, K., Biswas, G.: Designing learning by teaching agents: the Betty's Brain
system. Int. J. Artif. Intell. Educ. 18(3), 181–208 (2008)
15. Lehmann, S., Schwanecke, U., Dörner, R.: Interactive visualization for opportunis-
tic exploration of large document collections. Inf. Syst. 35(2), 260–269 (2010)
16. Leuski, A., Allan, J.: Interactive information retrieval using clustering and spatial
proximity. User Model. User Adapt. Interact. 14(2), 259–288 (2004)
17. Rosvall, M., Bergstrom, C.T.: Maps of random walks on complex networks reveal
community structure. Proc. Natl. Acad. Sci. 105(4), 1118–1123 (2008)
18. Rus, V., D'Mello, S., Hu, X., Graesser, A.: Recent advances in conversational
intelligent tutoring systems. AI Mag. 34(3), 42–54 (2013)
19. Shneiderman, B.: A taxonomy and rule base for the selection of interaction styles.
In: Shackel, B., Richardson, S.J. (eds.) Human Factors for Informatics Usability,
pp. 325–342. Cambridge University Press, Cambridge (1991)
20. Viegas, F., Smith, M.: Newsgroup crowds and authorlines: visualizing the activity
of individuals in conversational cyberspaces. In: Proceedings of the 37th Annual
Hawaii International Conference on System Sciences, 2004, 10 pp., January 2004
21. Woolf, B.P.: Building Intelligent Interactive Tutors: Student-Centered Strategies
for Revolutionizing E-Learning. Morgan Kaufmann, Boston (2010)