
Towards a Grounded Dialog Model for

Explainable Artificial Intelligence

Prashan Madumal, Tim Miller, Frank Vetere, and Liz Sonenberg

School of Computing and Information Systems


University of Melbourne, Parkville, Australia.
pmathugama@student.unimelb.edu.au

{tmiller,f.vetere,l.sonenberg}@unimelb.edu.au

Abstract. To generate trust with their users, Explainable Artificial In-


telligence (XAI) systems need to include an explanation model that can
communicate the internal decisions, behaviours and actions to the inter-
acting humans. Successful explanation involves both cognitive and social
processes. In this paper we focus on the challenge of meaningful interac-
tion between an explainer and an explainee and investigate the structural
aspects of an explanation in order to propose a human explanation dialog
model. We follow a bottom-up approach to derive the model by analysing
transcripts of 398 different explanation dialog types. We use grounded
theory to code and identify key components of which an explanation di-
alog consists. We carry out further analysis to identify the relationships
between components and sequences and cycles that occur in a dialog. We
present a generalized state model obtained by the analysis and compare
it with an existing conceptual dialog model of explanation.

Keywords: Explainable AI · Explanation Dialog Models · Socio-cognitive trust.

1 Introduction
Explanation is important to artificial intelligence (AI) systems that aim to be
transparent about their actions, behaviours and decisions. This is especially true
in scenarios where a human needs to make critical decisions based on the outcome of an
AI system. A proper explanation model, aided with argumentation, can
promote the trust humans have in the system, allowing better cooperation [20]
in their interactions, in that humans have to reason about the extent to which,
if at all, they should trust the provider of the explanation.
As Miller [15, pg 10] notes, the process of Explanation is inherently socio-
cognitive as it involves two processes: (a) a Cognitive process, namely the process
of determining an explanation for a given event, called the explanandum, in which
the causes for the event are identified and a subset of these causes is selected
as the explanation (or explanans); and (b) the Social process of transferring
knowledge between explainer and explainee, generally an interaction between a
group of people, in which the goal is that the explainee has enough information
to understand the causes of the event.
However, much research and practice in explainable AI uses the researchers'
intuitions of what constitutes a 'good' explanation rather than basing the approach
on a strong understanding of how people define, generate, select, evaluate, and
present explanations [15,16]. Most modern work on Explainable AI, such as
in autonomous agents [25,4,7,11] and interpretable machine learning [10], does
not discuss the interaction aspect of explanations. The lack of a general dialog
model of explanation that takes the end user into account can be regarded
as one of the shortcomings of existing explainable AI systems. Although there are
existing conceptual explanation dialog models that try to emulate the structure
and sequence of a natural explanation [2,22], we propose that improvements will
come from further generalization.
Explanation naturally occurs as a continuous interaction which gives the in-
teracting party the ability to question and interrogate the explanations. This
enables the explainee to clear doubts about the given explanation by further
interrogations and user-driven questions. Further, the explainee can express con-
trasting views about the explanation that can set the premise for an argumenta-
tion dialog. This type of iterative explanation can provide richer and more
satisfactory explanations than singular explanation presentations.
Understanding how humans engage in conversational explanation is a prereq-
uisite to building an explanation model, as noted by Hilton [13]. De Graaf and Malle [9] note
that humans attribute human traits, such as beliefs, desires, and intentions, to
intelligent agents, and it is thus a small step to assume that people will explain
agent behaviour using human frameworks of explanation. AI explanation models
with designs that are influenced by human explanation models should be able to
provide more intuitive explanations to humans and therefore be more likely to
be accepted. We suggest it is easier for the AI to emulate human explanations
rather than expecting humans to adapt to a novel and unfamiliar explanation
model. While there are mature existing models for explanation dialogs [22,23],
these are idealised conceptual models that are not grounded on or validated by
data, and seem to lack iterative features like cyclic dialogs.
In this paper our goal is to introduce a Human explanation model that is
based on conversational data. We follow a bottom-up approach and aim to pro-
vide an explanation dialog model that is grounded on data from different types of
explanations in conversations. We derive our model by analysing 398 explanation
dialogs across six different types of dialogs. Frequency, sequence and relation-
ships between the basic components of an explanation dialog were obtained and
analyzed in the study. We believe that by following a data-driven approach to
formulating the model, a more generalized and accurate model can be developed
that defines the structure and the sequence of an explanation dialog.
This paper is structured as follows: we discuss related work regarding expla-
nation in AI and explanation dialog models; we then explain the methodology
of the study and the collection of data and its properties. We analyse and evaluate
the data in Section 4, identifying key components of an explanation dialog and
gaining insight into the relationships and sequence of these components. We also
develop the explanation dialog model based on the analysis and compare it with
a similar model by Walton [24]. We end the paper by discussing the model and
its contribution and significance for socio-cognitive systems.

2 Related Work

Explaining decisions of intelligent systems has been a topic of interest since the
era of expert systems, e.g. [8,14]. Early work focussed particularly on the explana-
tion's content, responsiveness and the human-computer interface through which
the explanation was delivered. Kass and Finin [14] and Moore and Paris [17]
discussed the requirements a good explanation facility should have, including
characteristics like “Naturalness”, and pointed to the critical role of user mod-
els in explanation generation. Cawsey’s [6] EDGE system also focused on user
interaction and user knowledge, which were used to update the system through
interaction. So, from the very early days, both the cognitive and social attributes
associated with an agent's awareness of other actors, and its capability to interact
with them, have been recognized as essential features of explanation re-
search. However, limited progress has been made. Indeed, recently, de Graaf and
Malle [9] still find the need to emphasize the importance of understanding how
humans respond to Autonomous Intelligent Systems (AIS). They further note
that humans will expect a familiar way of communication from AIS
when providing explanations.
Castelfranchi [5] argues for the importance of 'trust' in a social setting in Multi-
Agent Systems (MAS). MAS often involve social activities such as cooperation,
collaboration and teamwork, which are influenced by trust. Castelfranchi further ex-
plains the components of social trust, which are rooted in delegation. The persistence
belief and self-confidence belief [5] in weak delegation can arguably be fulfilled by
providing satisfactory explanations about agents' intentions and behaviours.
To accommodate the communication aspects of explanations, several dialog
models have been proposed. Walton [22,23] introduces a shift model that has
two distinct dialogs: an explanation dialog and an examination dialog, where
the latter is used to evaluate the success of an explanation. Walton draws from
the work of Memory Organizing Packages (MOP) [19] and case-based reasoning
to build the routines of the explanation dialog models. Walton’s dialog model
has three stages, namely the opening stage, an argumentation stage and a closing
stage [22]. Walton suggests an examination dialog with two rules as the closing
stage. These rules are governed by the explainee, which corresponds to the un-
derstanding of an explanation [21]. This sets the premise for the examination
dialog of an explanation and the shift between explanation and examination to
determine the success of an explanation [23].
A formal dialogical system of explanation is also proposed by Walton [21] that
has three types of conditions: dialog conditions, understanding conditions, and
success conditions, which together constitute an explanation. Arioua and Croitoru [2]
formalized and extended Walton's explanatory CE dialectical system by incor-
porating Prakken's [18] framework of dialog formalisation.
Argumentation also comes into play in explanation dialogs. Walton [24]
introduced a dialog system for argumentation and explanation that consists of
a communication language defining the speech acts and protocols which allow
transitions in the dialog. This allows the explainee to challenge and interrogate
the given explanations to gain further understanding. Villata et al. [20] focus
on modelling information sources to be suited in an argumentation framework,
and introduce a socio-cognitive model of trust to support judgements about
trustworthiness.

3 Research Design and Methodology


To address the lack of an explanation dialog model that is based on and evaluated
through conversation data, we opt for a data-driven approach. This study
consists of a data selection and gathering phase, a data analysis phase and a model
development phase.
We designed the study as a bottom-up approach to developing an explanation
dialog model. We aimed to gain insights into three areas: 1. Key components
that make up an explanation dialog; 2. Relationships that exist within those
components; and 3. Component sequences and cycles that occur in an explanation
dialog.

3.1 Research design


We formulate our design based on an inductive approach. We use grounded
theory [12] as the methodology to conceptualize and derive models of explana-
tion. The key goal of using grounded theory as opposed to using a hypothetico-
deductive approach is to formalize an explanation dialog model that is grounded
on actual conversation data of various types rather than a purely conceptual
model.
The study is divided into three distinct stages, based on grounded theory.
The first stage consists of coding [12] and theorizing, where small chunks of data
are taken, named and marked according to the concepts they might hold. For
example, a segment of a paragraph in an interview transcript can be identified
as an 'Explanation' and another segment can be identified as a 'Why question'.
This process is repeated until the whole data set is coded. The second stage
involves categorizing, where similar codes and concepts are grouped together by
identifying their relationships with each other. The third stage involves deriving
a theoretical model from the codes, categories and their relationships.
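As a rough illustration of these three stages, the sketch below (a minimal Python sketch with invented transcript snippets and code names taken from Table 3; it is not part of the study's tooling) shows how coded segments, categories and code-to-code relationships could be represented.

    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class CodedSegment:
        dialog_id: int   # which explanation dialog the chunk belongs to
        text: str        # the raw chunk of transcript (invented here)
        code: str        # stage 1: the concept the chunk is marked with

    # Stage 1: small chunks of data are named and marked with codes.
    segments = [
        CodedSegment(1, "Why did you vote against the bill?", "Why"),
        CodedSegment(1, "Because the costing did not add up.", "Explanation"),
        CodedSegment(1, "I see, that makes sense.", "Explainee Affirmation"),
    ]

    # Stage 2: similar codes are grouped into categories (cf. Table 3).
    CODE_TO_CATEGORY = {
        "Why": "Question Type",
        "Explanation": "Explanation",
        "Explainee Affirmation": "Explanation",
    }
    by_category = defaultdict(list)
    for seg in segments:
        by_category[CODE_TO_CATEGORY[seg.code]].append(seg)

    # Stage 3: relationships between codes (here, which code follows which)
    # are extracted and used to derive the theoretical model.
    transitions = list(zip([s.code for s in segments], [s.code for s in segments[1:]]))
    print(transitions)  # [('Why', 'Explanation'), ('Explanation', 'Explainee Affirmation')]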

3.2 Data
We collect data from six different data sources that cover six different types
of explanation dialogs. Table 1 shows the explanation dialog types, the number of
explanation dialogs of each type and the number of transcripts. We gathered and coded a
total of 398 explanation dialogs from all of the data sources. All the data sources
are text based, with some of them transcribed from voice- and video-based
interviews. Data sources consist of Human-Human conversations and Human-
Agent conversations1 . We collected Human-Agent conversations to analyze whether
there are significant differences in the way humans carry out the explanation
dialog when they know the interacting party is an agent.

Table 1. Coded data description.

Explanation Dialog Type              # Dialogs   # Transcripts
1. Human-Human static explainee          88            2
2. Human-Human static explainer          30            3
3. Human-Explainer agent                 68            4
4. Human-Explainee agent                 17            1
5. Human-Human QnA                       50            5
6. Human-Human multiple explainee       145            5

Data sources were selected to encompass different combinations and frequencies
of explainees and explainers in the explanation dialog. These
combinations are given in Table 2. We diversify the dataset by including data
sources of different mediums, such as verbal-based and text-based.

Table 2. Explanation dialog type description.

Participants      Number        Medium   Data source
1. Human-Human    One-to-one    Verbal   Journalist Interview transcripts
2. Human-Human    One-to-one    Verbal   Journalist Interview transcripts
3. Human-Agent    One-to-one    Text     Chatbot conversation transcripts
4. Agent-Human    One-to-one    Text     Chatbot conversation transcripts
5. Human-Human    Many-to-many  Text     Reddit AMA records
6. Human-Human    One-to-many   Verbal   Supreme court transcripts

Table 3 presents the codes, their categories and their definitions. We identify why,
how and what questions as questions that ask for contrastive explanations,
explanations of causal chains, and causal explanations, respectively.
Explanation dialog type 1: Same Human explainee with different Human
explainers in each data source. The data source for this type is journalist interviews,
where the explainee is the journalist.
1 Links to all data sources (including transcripts) can be found at https://explanationdialogs.azurewebsites.net
Table 3. Code description.

Code                       Category        Description
QE start                   Dialog          Explanation dialog start
QE end                     Dialog          Explanation dialog end
How                        Question Type   How questions
Why                        Question Type   Why questions
What                       Question Type   What questions
Explanation                Explanation     Explanation given for questions
Explainee Affirmation      Explanation     Explainee acknowledges explanation
Explainer Affirmation      Explanation     Explainer acknowledges explainee's acknowledgment
Question context           Information     Background to the question provided by the explainee
Preconception              Information     Preconceived idea that the explainee has about some fact
Counterfactual case        Information     Counterfactual case of the how/why question
Argument                   Argumentation   Argument presented by explainee or explainer
Argument-s                 Argumentation   An argument that starts the dialog
Argument-a                 Argumentation   Argument Affirmation by explainee or explainer
Argument-c                 Argumentation   Counter argument
Argument-contrast case     Argumentation   Argumentation contrast case
Explainer Return question  Questions       Clarification question by explainer
Explainee Return question  Questions       Follow up question asked by explainee

Explanation dialog type 2: Same Human explainer with different Human
explainees. The data source consists of journalists interviewing Malcolm Turnbull
(the current Australian Prime Minister).
Explanation dialog type 3: Same Agent explainer with different Human
explainees. The data source is the 2016 Loebner Prize [3] judge transcripts.
Explanation dialog type 4: Same Agent explainee with different Human
explainers. The data source consists of conversation transcripts of the Eliza chatbot [3].
Explanation dialog type 5: Multiple Human explainers and multiple Human
explainees. The data source is Reddit Ask Me Anything (AMA) [1] records.
Explanation dialog type 6: Multiple Human explainees with a Human
explainer. The data source consists of cases transcribed in the Supreme Court of the
United States of America.

4 Results

In this section, we present the model resulting from our study, compare it to
an existing conceptual model, and analyse some patterns of interaction that we
observed.
[Figure 1: state diagram of the explanation dialog model, with an explanation dialog section (states such as Composite Question Presented, Explanation Presented, Explainee Affirmed and Explainer Affirmed) and an argumentation dialog section (states such as Composite Argument Presented, Argument Affirmed and Counter Argument Presented), plus Explanation End and Argument End states; transitions are labelled with questioner (Q) and explainer (E) moves such as Q: Ask Question, E: Explain, Q: Argue, Q/E: Affirm, E: Further Explanation, E: Counter Argument, Q: Explainee Return Question and E: Explainer Return Question.]

Fig. 1. Explanation Dialog Model

4.1 Explanation Dialog Model


We present our explanation dialog model derived from the study. Figure 1 depicts
the derived explanation dialog model as a state diagram where the sequence of
each component is preserved. The labels ‘Q’ and ‘E’ refer to the Questioner (the
explainee) and the Explainer respectively.
While most codes are directly transferred to the model as states and state
transitions, codes that belong to the Information category are embedded in differ-
ent states. These are: 1. Question Context, Preconception and Counterfactual
Case, embedded in the Composite Question state; and 2. Argumentation Contrast
Case, embedded in the Argumentation Dialog Initiated state. These embedded
components can potentially be used to decode the question or the argument by
systems that use the model.
We identify two loosely coupled sections of the model: the Explanation Di-
alog and the Argumentation Dialog. These two sections can occur in any order,
with any frequency, and in cycles. An explanation dialog can either be initiated by an argu-
ment or a question. An argument can occur at any point in the dialog timeline
after an explanation was given, which will then continue on to an argumentation
dialog. The dialog can then end in the argumentation dialog or can again be
switched to the explanation dialog, as shown in Figure 1. A single dialog can
contain many argumentation dialogs and can go around the explanation dialog
loop several times, switching between the two according to the flow of
the dialog. Note that a loop in the explanation dialog implies that the ongo-
ing explanation is related to the same original question. We coded explanation
dialogs as ending when a new topic was raised in a question. Follow-up questions
(Return Questions) were coded when the questions were clearly
identifiable as requesting more information about the given explanation.
A typical example would be an explainee asking a question, receiving a
reply, presenting an argument about the explanation, the explainer acknowledging the
argument, and the explainer agreeing with the argument. This scenario can be described
in states as Start → Composite Question → Explanation Presented → Explainee
Affirmed → Argumentation Dialog Initiated → Composite Argument Presented
→ Argument Affirmed → End.
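As an illustration only, this walk-through can be encoded as a simple state-transition table; the sketch below (in Python, listing just the transitions needed for the example path, whereas Figure 1 contains further transitions such as return questions, counter arguments and loops back into the explanation dialog) checks that the example sequence is a valid path. It is not the authors' implementation.

    # Minimal sketch of the example path above as a state machine; only a
    # subset of the transitions in Figure 1 is listed.
    TRANSITIONS = {
        "Start": {"Composite Question"},
        "Composite Question": {"Explanation Presented"},
        "Explanation Presented": {"Explainee Affirmed"},
        "Explainee Affirmed": {"Argumentation Dialog Initiated"},
        "Argumentation Dialog Initiated": {"Composite Argument Presented"},
        "Composite Argument Presented": {"Argument Affirmed"},
        "Argument Affirmed": {"End"},
    }

    def is_valid_path(path):
        """Check that every consecutive pair of states is an allowed transition."""
        return all(b in TRANSITIONS.get(a, set()) for a, b in zip(path, path[1:]))

    example = ["Start", "Composite Question", "Explanation Presented",
               "Explainee Affirmed", "Argumentation Dialog Initiated",
               "Composite Argument Presented", "Argument Affirmed", "End"]
    assert is_valid_path(example)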

Fig. 2. Argumentation and explanation in dialogue [24]

Model Comparison We compare the developed explanation dialog model
with the model by Walton [24], which also combines explanation with an argumentation
sub-dialog. Walton proposed the model shown in Figure 2, which consists of 10 components. This model
focuses on combining explanation and examination dialogs with argumentation. A
similar shift between explanation and argumentation/examination can be seen
between our model and Walton's. According to the data sources, argumentation
is a frequently present component of an explanation dialog, which is depicted by
the Explainee probing component in Walton's model. The basic flow of explana-
tion is the same between the two models, but the models differ in two key ways.
The first is the lack of an examination dialog shift in our model. Although we did not
derive an examination dialog, a similar shift of dialog can be seen with respect
to affirmation states. That is, our 'examination' is simply the explainee affirm-
ing that they have understood the explanation. The second is that Walton's model [24]
focuses on evaluating the success of an explanation in the form of an
examination dialog, whereas our model focuses on delivering an explanation in a
natural sequence without an explicit form of explanation evaluation.
Thus, we can see similarities between Walton’s conceptual model and our
data-driven model. The differences between the two are at a more detailed level
than at the high-level, and we attribute these differences to the grounded nature
of our study. While Walton proposes an idealised model of explanation, we assert
that our model captures the subtleties that would be required to build a natural
dialog for human-agent explanation.

4.2 Analysis and Evaluation

We focus our analysis on three areas: 1. The key components of an explanation
dialog; 2. Relationships between these components and their variations between
different dialog types; and 3. The sequence of components that can successfully
carry out an explanation dialog. We also evaluate how frequently certain
component sequences occur across the different types of explanation dialogs.

[Figure 3: bar chart of overall code frequencies (y-axis: Frequency; x-axis: Codes) across all data sources.]

Fig. 3. Code Frequency.


Code Frequency Analysis Figure 3 shows the frequency of codes from all
data sources. Across the six different explanation types and 398 explanation dialogs,
the most prevalent question type overall is 'What' questions. The 'What'
question code covers all questions that are not categorized as 'Why' or
'How' questions, including question types such as Where, Which, Do, Is, etc.
The 'Explanation' code has the highest frequency overall, which suggests it is likely
to occur multiple times in a single explanation dialog.
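For illustration, the counting behind Figures 3 and 4 can be sketched as follows, assuming a hypothetical representation of the coded data as (dialog type, dialog id, code) tuples; the sample rows are invented and not taken from the study's data.

    from collections import Counter, defaultdict

    coded = [
        ("Human-Human QnA", 1, "What"),
        ("Human-Human QnA", 1, "Explanation"),
        ("Human-Human QnA", 1, "Explainee Affirmation"),
        ("Human-Human QnA", 2, "Why"),
        ("Human-Human QnA", 2, "Explanation"),
    ]

    # Overall code frequency across all dialogs (as in Figure 3).
    overall = Counter(code for _, _, code in coded)

    # Average occurrence of each code per dialog, per dialog type (as in Figure 4).
    per_type_counts = defaultdict(Counter)   # dialog type -> code -> total occurrences
    dialogs_per_type = defaultdict(set)      # dialog type -> distinct dialog ids
    for dtype, did, code in coded:
        per_type_counts[dtype][code] += 1
        dialogs_per_type[dtype].add(did)

    avg_per_dialog = {
        dtype: {code: n / len(dialogs_per_type[dtype]) for code, n in counts.items()}
        for dtype, counts in per_type_counts.items()
    }
    print(overall["Explanation"])                     # 2
    print(avg_per_dialog["Human-Human QnA"]["What"])  # 0.5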

[Figure 4: bar chart of the average occurrence of each code per dialog (y-axis: Average Occurrence in a Dialog; x-axis: Codes), with one series per explanation dialog type: Human-Human static explainee, Human-Human static explainer, Human-Explainer agent, Human-Explainee agent, Human-Human QnA and Human-Human multiple explainee.]

Fig. 4. Average code occurrence per dialog in different explanation dialog types

The average code occurrence per dialog in the different dialog types is depicted
in Figure 4. According to Figure 4, in all dialog types a dialog is most likely to
contain multiple What questions, multiple explanations and multiple affirmations.
Humans tend to provide context and further information surrounding a ques-
tion, whereas Human-Agent dialogs almost never initiate questions with context.
This could be due to humans' preconception that the agent is incapable of
identifying the given context. In contrast, Human-Explainer agent dialog types have
more explainee return questions than Human-Human dialogs. Further
analysis of the data revealed that this is due to the nature of the chatbot the data source
was based upon, which tries to hide its weaknesses by repeatedly
asking unrelated questions. Human-only dialogs also have a higher frequency of
‘Preconception’ occurrences. We attribute this to the nature of Human-Human
explanations, where humans try to express their opinions or preconceived views.
Argumentation is a key component of an explanation dialog. The explainee
can have different or contrasting views to the explainer regarding the explana-
tion, at which point an argument can be put forth by the explainee. An argument
in the form of an explanation that is not in response to a question can also oc-
cur at the very beginning of an explanation dialog, where the argument sets the
premise for the rest of the dialog. An argument is typically followed by an af-
firmation and may include a counter argument by the opposing party. From
Figure 4, Human-Human dialogs, with the exception of QnA, have argumentation,
but Human-Agent dialogs lack any substantial occurrence of argumentation.

[Figure 5: four panels (Explainer Affirmation, Explainee Affirmation, Explainer Return Question, Explainee Return Question) showing, for each explanation dialog type, the percentage of dialogs in which the code occurs 0, 1, 2 or more than 2 times (y-axis: Code Occurrence Type % per Dialog; x-axis: Explanation Dialog Type).]

Fig. 5. Average code occurrence per dialog in different explanation dialog types

Code Occurrence Analysis per Dialog It is important to identify the
most common per-dialog occurrence frequency of certain codes in different
dialog types. Analyzing common occurrences can help in identifying cyclic paths
between components. For example, a majority of Human-Human static explainer
dialogs have zero explainer affirmations, while Human-Human multiple explainee
dialogs have one explainer affirmation per dialog for the majority of dialogs.
However, it is important to note that non-verbal cues were not in the transcripts,
so non-verbal affirmations such as nodding were not coded.
Figure 5 illustrates the occurrence frequency and likelihood of four codes,
and the above example can be seen in the Explainer Affirmation plot. These
frequencies can be used to determine the sequence and cycles of components
when formulating an explanation dialog model. For example, from Figure 5, a
dialog path that includes an explainer return question is the least likely to occur,
as zero is the most probable number of explainer return questions in all
explanation dialog types.
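As a continuation of the illustrative sketch above (same hypothetical tuple representation, invented sample rows, not the study's actual tooling), the per-dialog occurrence buckets behind a plot like Figure 5 could be computed as follows.

    from collections import Counter, defaultdict

    def occurrence_buckets(coded, target_code):
        """Bucket dialogs of each type by how often target_code occurs (0, 1, 2, > 2)."""
        per_dialog = defaultdict(Counter)   # dialog type -> occurrences of target_code per dialog id
        dialogs = defaultdict(set)          # dialog type -> all dialog ids of that type
        for dtype, did, code in coded:
            dialogs[dtype].add(did)
            if code == target_code:
                per_dialog[dtype][did] += 1
        buckets = defaultdict(Counter)
        for dtype, dids in dialogs.items():
            for did in dids:
                n = per_dialog[dtype][did]  # a Counter returns 0 for dialogs without the code
                buckets[dtype]["> 2" if n > 2 else str(n)] += 1
        return buckets

    coded = [("Human-Human QnA", 1, "Explainer Affirmation"),  # invented sample rows
             ("Human-Human QnA", 2, "What")]
    b = occurrence_buckets(coded, "Explainer Affirmation")["Human-Human QnA"]
    print(b["0"], b["1"])  # 1 1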

Explanation Dialog Ending Sequence Analysis Participants should be
able to identify when an explanation dialog ends. We analyse the different types
of explanation dialogs to identify the codes that are most likely to signify the
ending.
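Again purely as an illustration (same hypothetical representation as in the earlier sketches, with rows assumed to be in dialog order and sample data invented), the ending-code percentages behind Figure 6 could be derived by taking the last code of each dialog.

    from collections import Counter, defaultdict

    coded = [
        ("Human-Human QnA", 1, "What"),
        ("Human-Human QnA", 1, "Explanation"),
        ("Human-Human QnA", 2, "Why"),
        ("Human-Human QnA", 2, "Explanation"),
        ("Human-Human QnA", 2, "Explainee Affirmation"),
    ]

    # The last code seen for a dialog is taken as its ending code.
    last_code = {}
    for dtype, did, code in coded:
        last_code[(dtype, did)] = code   # later rows overwrite earlier ones

    # Percentage of dialogs of each type that end in each code.
    ending_counts = defaultdict(Counter)
    for (dtype, _), code in last_code.items():
        ending_counts[dtype][code] += 1

    ending_pct = {
        dtype: {code: 100 * n / sum(counts.values()) for code, n in counts.items()}
        for dtype, counts in ending_counts.items()
    }
    print(ending_pct["Human-Human QnA"]["Explanation"])  # 50.0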

[Figure 6: bar chart of the percentage of dialogs ending in each code (explanation, explainee affirmation, explainer affirmation, preconception, argument affirmation, argument counter, other; y-axis: Ending Type % per Dialog) for each explanation dialog type.]

Fig. 6. Average code occurrence per dialog in different explanation dialog types

From Figure 6, all explanation dialog types except the Human-Human QnA type
are most likely to end in an explanation. The second most likely code to end an
explanation dialog is explainer affirmation. Endings with other codes, such as explainee
and explainer return questions, are represented by the 'Dialog ending in Other' bar in
Figure 6. It is important to note that although a dialog is likely to end in an
explanation, that dialog can contain earlier explainee affirmations and explainer
affirmations.

5 Discussion

The purpose of this study is to introduce an Explanation Dialog model that can
bridge the gap in communicating the explanations given by Explainable
Artificial Intelligence (XAI) systems. We derive the model from components
identified through conducting a grounded study. The study consisted of coding
data of six different types of explanation dialogs from six data sources.
We believe our proposed model can be used as a basis for building commu-
nication models for XAI systems. As the model is developed through analysing
explanation dialogs of various types, a system that has the model implemented
can potentially carry out an explanation dialog that can closely emulate a Hu-
man explanation dialog. This will enable XAI systems to better communicate
the underlying explanations to the interacting humans.
One key limitation of our model is its inability to evaluate the effectiveness of
the delivered explanation. In the natural explanation dialogs that we analysed this
evaluation did not occur, although we note that other data sources, such as
verbal examinations or teaching sessions, would likely change these results. Al-
though the model does not have an explicit explanation evaluation mechanism, it does
accommodate explanation acknowledgement in the form of affirmation, in which
explanation evaluations can be embedded; in this case, further processing of the
affirmation may be required. A key limitation of our study in general is the lack
of an empirical study to evaluate the suitability of the model in a human-agent
explanation setting. The proposed model is based on text-based transcribed data
sources, and an empirical study could introduce new components or relations,
for example interactive visual explanations.

6 Conclusion

Explainable Artificial Intelligence systems can benefit from having a proper ex-
planation model that can explain their actions and behaviours to the interacting
users. Explanation naturally occurs as a continuous and iterative socio-cognitive
process that involves two processes: a cognitive process and a social process. Most
prior work has focused on providing explanations without sufficient attention to
the needs of the explainee, which reduces the usefulness of the explanation to
the end-user.
In this paper, we propose a dialog model for the socio-cognitive process of
explanation. Our explanation dialog model is derived from different types of nat-
ural conversations between humans as well as humans and agents. We formulate
the model by analysing the frequency of occurrences of patterns to identify the
key components that make up an explanation dialog. We also identify the rela-
tionships between components and their sequence of occurrence inside a dialog.
We believe this model has the ability to accurately emulate the structure of
an explanation dialog similar to one that occurs naturally between humans.
Socio-cognitive systems that deal in explanation and trust will benefit from such
a model by providing better, more intuitive and interactive explanations. We
hope XAI systems can build on top of this explanation dialog model and so lead
to better explanations for the intended user.
In future work, we aim to evaluate the model in a Human-Agent setting. Fur-
ther evaluation can be done by introducing other interaction modes, such
as visual interactions, which may introduce different forms of the components in
the model.

7 Acknowledgments
The research described in this paper was supported by the University of Mel-
bourne research scholarship (MRS); SocialNUI: Microsoft Research Centre for
Social Natural User Interfaces at the University of Melbourne; and a Sponsored
Research Collaboration grant from the Commonwealth of Australia Defence Sci-
ence and Technology Group and the Defence Science Institute, an initiative of
the State Government of Victoria.

References
1. Anderson, K.E.: Ask me anything: what is Reddit? Library Hi Tech
News 32(5), 8–11 (2015). https://doi.org/10.1108/LHTN-03-2015-0018,
http://www.emeraldinsight.com.proxy.lib.sfu.ca/doi/full/10.1108/
LHTN-03-2015-0018
2. Arioua, A., Croitoru, M.: Formalizing Explanatory Dialogues. Scalable Uncertainty
Management 6379, 282–297 (2010). https://doi.org/10.1007/978-3-642-15951-0,
http://link.springer.com/10.1007/978-3-642-15951-0
3. Bradeško, L., Mladenić, D.: A Survey of Chatbot Systems through a Loebner Prize
Competition. Researchgate.Net 2, 1–4 (2012), https://pdfs.semanticscholar.
org/9447/1160f13e9771df3199b3684e085729110428.pdf
4. Broekens, J., Harbers, M., Hindriks, K., Van Den Bosch, K., Jonker, C., Meyer,
J.J.: Do you get it? user-evaluated explainable BDI agents. In: German Conference
on Multiagent System Technologies. pp. 28–39. Springer (2010)
5. Castelfranchi, C., Falcone, R.: Principles of trust for mas: Cognitive anatomy, social
importance, and quantification. In: Proceedings of the 3rd International Conference
on Multi Agent Systems. pp. 72–. ICMAS ’98, IEEE Computer Society, Washing-
ton, DC, USA (1998), http://dl.acm.org/citation.cfm?id=551984.852234
6. Cawsey, A.: Planning interactive explanations. International
Journal of Man-Machine Studies 38(2), 169 – 199 (1993).
https://doi.org/http://dx.doi.org/10.1006/imms.1993.1009
7. Chakraborti, T., Sreedharan, S., Zhang, Y., Kambhampati, S.: Plan explana-
tions as model reconciliation: Moving beyond explanation as soliloquy. IJCAI
International Joint Conference on Artificial Intelligence pp. 156–163 (2017).
https://doi.org/10.24963/ijcai.2017/23
8. Chandrasekaran, B., Tanner, M.C., Josephson, J.R.: Explaining Control Strategies
in Problem Solving (1989). https://doi.org/10.1109/64.21896
9. De Graaf, M.M.A., Malle, B.F.: How People Explain Action (and Autonomous
Intelligent Systems Should Too). AAAI 2017 Fall Symposium on AI-HRI pp. 19–
26 (2017)
10. Doshi-Velez, F., Kim, B.: Towards A Rigorous Science of Interpretable Machine
Learning. arXiv preprint (2017), http://arxiv.org/abs/1702.08608
11. Fox, M., Long, D., Magazzeni, D.: Explainable Planning. IJCAI - Workshop on
Explainable AI (2017)
12. Glaser, B.G., Strauss, A.L.: The Discovery of Grounded Theory: Strategies for
Qualitative Research, vol. 1 (1967). https://doi.org/10.2307/2575405, http://www.
amazon.com/dp/0202302601
13. Hilton, D.J.: A Conversational Model of Causal Explanation. European Review of
Social Psychology 2, 51–81 (1991). https://doi.org/10.1080/14792779143000024
14. Kass, R., Finin, T.: The Need for User Models in Generating Expert System Expla-
nations. Tech. rep., University of Pennsylvania (June 1988), http://repository.
upenn.edu/cis_reports/585
15. Miller, T.: Explanation in Artificial Intelligence: Insights from the Social Sciences.
arXiv preprint (2017), http://arxiv.org/abs/1706.07269
16. Miller, T., Howe, P., Sonenberg, L.: Explainable AI: Beware of inmates running
the asylum; or: How I learnt to stop worrying and love the social and behavioural
sciences. In: IJCAI-17 Workshop on Explainable AI (XAI). p. 36 (2017)
17. Moore, J.D., Paris, C.L.: Requirements for an expert system explanation facility.
Computational Intelligence 7(4), 367–370 (1991). https://doi.org/10.1111/j.1467-
8640.1991.tb00409.x
18. Prakken, H.: Formal systems for persuasion dialogue. Knowl. Eng. Rev. 21(2), 163–
188 (Jun 2006). https://doi.org/10.1017/S0269888906000865, http://dx.doi.
org/10.1017/S0269888906000865
19. Schank, R.C.: Explanation patterns: Understanding mechanically and creatively.
Erlbaum, Hillsdale, NJ (1986)
20. Villata, S., Boella, G., Gabbay, D.M., Van Der Torre, L.: A socio-cognitive
model of trust using argumentation theory. International Journal of Approxi-
mate Reasoning 54(4), 541–559 (2013). https://doi.org/10.1016/j.ijar.2012.09.001
21. Walton, D.: Dialogical Models of Explanation. In: Explanation-Aware Computing:
Papers from the 2007 AAAI Workshop, pp. 1–9 (2007)
22. Walton, D.: A dialogue system specification for explanation. Synthese 182(3), 349–
374 (2011). https://doi.org/10.1007/s11229-010-9745-z
23. Walton, D.: A Dialogue System for Evaluating Explanations, pp. 69–116. Springer
International Publishing, Cham (2016). https://doi.org/10.1007/978-3-319-19626-8_3
24. Walton, D., Bex, F.: Combining explanation and argumentation in dialogue. Argu-
ment and Computation 7(1), 55–68 (2016). https://doi.org/10.3233/AAC-160001
25. Winikoff, M.: Debugging agent programs with Why?: Questions. In: Proceedings of
the 16th Conference on Autonomous Agents and MultiAgent Systems. pp. 251–259.
AAMAS ’17, IFAAMAS (2017)
