ExtraBot vs IntroBot:
The Influence of Linguistic Cues on
Communication Satisfaction
Completed Research

Rangina Ahmad
TU Braunschweig
rangina.ahmad@tu-bs.de

Dominik Siemon
TU Braunschweig
d.siemon@tu-bs.de

Susanne Robra-Bissantz
TU Braunschweig
s.robra-bissantz@tu-bs.de
Abstract
Conversational agents (CA) have emerged as a new type of dialogue system, able to simulate human conversation. However, research suggests that current CAs fail to provide convincing interactions due to a lack of satisfying communication with users. To address this problem, we propose the idea of a personality adaptive CA that could enhance communication satisfaction during a user's interaction experience. As personality differences manifest themselves in language cues, we investigate in an experiment whether linguistic styles have an influence on a user's communication satisfaction when interacting with a CA. The results show that users perceive greater satisfaction when communicating with an extraverted CA (ExtraBot) than with an introverted CA (IntroBot). The outcomes of our study highlight that different linguistic styles can influence the course of the conversation and determine whether the user is satisfied with the communication and sees any value in the interaction with the CA.

Keywords

Conversational Agents, Personality, Language, Chatbots, Big Five

Introduction
Communicating with robots and virtual agents in “human” language is no longer considered just a realm of science fiction. In fact, the ability to conduct dialogues between humans and machines in natural language has improved immensely in recent years due to technological progress in the field of artificial intelligence (AI) (Mallios and Bourbakis 2016). The desire to communicate with computers in natural language has evolved naturally over the past years, as almost every facet of people’s lives is affected, directly or indirectly, by social technologies (Guzman 2018; Shawar and Atwell 2007). Communication specifically is about the meaning people derive in and through their interactions with machines, and one way of facilitating such interaction is by allowing users to express their wishes and queries by typing and speaking (Guzman 2018). Defined as “dialogue systems often endowed with ‘humanlike’ behavior” (Vassallo et al. 2010, p. 357), conversational agents (CA) have emerged as a new type of human-computer interaction (HCI) system (Mallios and Bourbakis 2016). Communicating in spoken or written form (e.g. as chatbots or virtual assistants), CAs are primarily developed to simulate human conversation (Shawar and Atwell 2007). The majority of today’s CA applications provide assistant functionalities, such as sending messages, creating calendar entries or asking for the weather forecast; they have not only been integrated into personal smartphones but have also been adopted by many organizations and companies, specifically for customer service (Knijnenburg and Willemsen 2016). Due to these and a variety of other possible applications, the design and implementation of CAs, and especially their communication abilities, have been central to Information Systems (IS) research in the last few years (Grudin and Jacques 2019; McTear et al. 2016). However, natural language conversations are not linear but rather multi-threaded, unlike scripted
dialogue trees (Grudin and Jacques 2019). Thus, providing machines with the ability to converse with humans in a way that is natural and satisfying for the user remains, to this day, one of the fundamental challenges in AI (McTear et al. 2016; Turing 1950).
Reports from both industry and research suggest that current CAs fail to provide convincing and engaging interactions (Gnewuch et al. 2017; Schuetzler et al. 2014). Insufficient interaction during the transaction phase in e-commerce, for instance, has led to a lack of service satisfaction and a high number of purchase cancellations, which often turn into customer frustration (Knijnenburg and Willemsen 2016; Robra-Bissantz 2018; Shawar and Atwell 2007). Grönroos (1982) states that the manner in which a provider behaves and communicates with the customer within a service encounter is crucial for the customer’s perception of the service. Both the provider and the customer actively participate in a dialogue process during a service encounter, and it is here that creation or destruction of value can take place (Mustelier-Puig et al. 2018). Robra-Bissantz (2018) proceeds on the assumption that an increased quality of interaction can lead to an enhanced value in use and communication satisfaction, and thus to improved service satisfaction. Transferring this concept to HCI and an e-commerce context, where CAs handle communication with customers via natural language, for instance to assist them during the sales process, can be particularly challenging if the interaction does not meet the individual’s requirements. Another context in which CAs have the potential to play an increasingly important role is health and medical care, supporting consumers with mental health challenges or assisting patients and elderly individuals in their living environments (Laranjo et al. 2018). A lack of communication satisfaction, however, can also lead to frustration here, since language is a primary tool for understanding patients’ experiences and expressing therapeutic interventions (Laranjo et al. 2018). This raises the question of whether language, and specifically certain styles of language, has an influence on the perceived communication satisfaction of the user.
When designing CAs to ensure better interaction, a large body of research suggests incorporating social behaviors (Feine et al. 2019; Gnewuch et al. 2017; Strohmann et al. 2019). In their taxonomy of social cues for CAs, Feine et al. (2019) identify verbal cues as one of four major categories, with verbal cues referring to all social cues that are created by words. In fact, forms of linguistic fingerprinting have been suggested in research for generations, as, to some extent, the ways people write and talk have been recognized as stamps of individual identity (Pennebaker and King 1999). Over the last decades, research in the field of psychology has demonstrated that the words people use in everyday life reflect their personality, and that the way people use words is internally consistent, reliable over time, predictive of a wide range of behaviors, and varies considerably from person to person (Boyd and Pennebaker 2017; Pennebaker 2011). Language, thus, is a fundamental dimension of personality, and unlike other standard personality markers, people do not need to complete questionnaires in order to provide useful personality data in the form of language (Boyd and Pennebaker 2017). These findings are substantiated by early HCI studies by Nass et al. (1995) and Moon and Nass (1996), who found that, depending on the strength of a computer’s language, the expressed confidence level and the interaction order, participants ascribed a certain personality to the computer. This implies that different personality dimensions come with stylistic differences in language use that show even when describing the exact same content (Beukeboom et al. 2013). Linguistic styles also influence how conversations develop and what impression speakers leave (Beukeboom et al. 2013), which in turn likely has an influence on communication satisfaction. In order to address the problem of enhancing a person’s communication satisfaction during their interaction experience with an agent, our paper posits the following research question (RQ):
Do personality differences manifested in language use have an influence on a user’s perceived communication satisfaction when interacting with a CA?
Incorporating personality into a machine is receiving more emphasis as a crucial part of designing HCI (Kim et al. 2019). Previous studies have dealt with personality expressed via behavioral features (e.g. gestures, movements) and other verbal traits such as voice and emotions (Lee et al. 2019; Robert et al. 2020). However, while embodied physical action (EPA) robots can combine several of these personality factors and therefore invoke strong emotional reactions that can lead individuals to project personalities onto them (Robert 2018; You and Robert 2018), CAs, and specifically chatbots, mainly express their personality through language. Consequently, with text being one of the few channels of communication between chatbots and users, it is all the more important to study personality markers in language and their impact on HCI quality. While most studies, especially in the field of affective computing, have applied sentiment analysis with adaptive responses to reduce user frustration during interactions (e.g. Diederich et al. 2019), our paper focuses on finding empirical support that the concept of a CA with personality adaptive responses influences a person’s perceived communication satisfaction. Further, due to the importance ascribed to it in the field of human-human interaction (HHI), the majority of personality studies in HCI have particularly investigated the psychological opposites of extraversion and introversion (Robert 2018; Robert et al. 2020). Since the underlying components of extraversion have been well-established to date across various methodologies (Boyd and Pennebaker 2017; Mairesse et al. 2007), we base our experiment solely on these two contrasting personality dimensions and their identified language cues. We conducted an online experiment simulating pre-defined conversations between an extraverted personality adaptive CA (ExtraBot) and a fictional human, and between an introverted personality adaptive CA (IntroBot) and another fictional person. Participants then had to complete a survey assessing the construct communication satisfaction (Hecht 1978) and had to indicate which conversation they preferred. The results of the experiment provide design implications for personality expressions applicable to CAs as well as EPA robots.

Theoretical Foundations & Related Work


Personality & Language Cues
Personality is loosely defined as the construct that differentiates individuals from one another but at the same time makes a human being’s behavior, thoughts and feelings (relatively) consistent (Allport 1961). In order to measure an individual’s personality, a widely used classification of personality – the Big Five model – has been applied in research (McCrae and John 1992). For a comprehensive assessment of individuals, the following five fundamental traits or dimensions have been defined and derived through factorial studies: Conscientiousness, Openness, Neuroticism, Agreeableness, and Extraversion, the last of which refers to the extent to which people enjoy company and seek excitement and stimulation (Costa and McCrae 2008). A well-accepted theory in psychology is that human language reflects emotional state and personality, based on the frequency with which certain categories of words are used as well as variations in word usage (Boyd and Pennebaker 2017; Golbeck et al. 2011; Yarkoni 2010). In fact, language use has been scientifically shown to be unique, relatively reliable over time and internally consistent, and as Boyd and Pennebaker (2017, p. 63) further state: “Language-based measures of personality can be useful for capturing/modeling lower-level personality processes that are more closely associated with important objective behavioral outcomes than traditional personality measures.”
In addition to semantic content, utterances convey a great deal of information about the speaker, and one such type of information comprises cues to the speaker’s personality traits (Mairesse et al. 2007). So even when the content of a message is the same, individuals express themselves verbally with their own distinctive styles, and both spoken and written language are unique from person to person (Pennebaker and King 1999). Psychologists have documented the existence of such cues by discovering correlations between a range of linguistic variables and personality traits, across a wide range of linguistic levels (Mairesse et al. 2007). Of all Big Five traits, extraversion has received the most attention from researchers, since the underlying components of extraversion have been well-established to date across various methodologies (Boyd and Pennebaker 2017; Mairesse et al. 2007). For example, speaker charisma has been shown to correlate strongly with extraversion (Mairesse et al. 2007). Extraverts also use more positive emotion words and show more agreements and compliments than introverts (Pennebaker and King 1999). Furthermore, relative to introverts, extraverts generally engage in more social activity, experience greater positive affect and well-being, and are reactive to external stimulation (Furnham 1990; Mairesse et al. 2007; Scherer 1979). Relative to their introverted counterparts, extraverts tend to talk more, with fewer pauses and hesitations, have shorter silences, a higher verbal output and a less formal language, while introverts use a broader vocabulary (Furnham 1990; Gill and Oberlander 2002; Pennebaker and King 1999; Scherer 1979). Extraverts also exhibit a more imprecise and “looser” style with reduced concreteness, whereas introverts exhibit a more analytic, careful, precise and focused style (Gill and Oberlander 2002). Research has also shown that conversations between extraverts are more expansive and characterized by a wider range of topics, whereas conversations between two introverts are more serious and have a greater topic focus (i.e., discussing one topic in depth) (Furnham 1990). Table 1 gives a brief overview of some of the identified language cues for extraversion and introversion at various production levels, based on studies by Scherer (1979), Furnham (1990), Pennebaker and King (1999), Gill and Oberlander (2002) and Mairesse et al. (2007).


Level: Conversational Behavior
  Introvert: Listen, less back-channel behavior
  Extravert: Initiate conversation, more back-channel behavior

Level: Style
  Introvert: Formal
  Extravert: Informal

Level: Syntax
  Introvert: Many nouns, adjectives, elaborated constructions, many words per sentence, many articles and negations
  Extravert: Many verbs, adverbs, pronouns (implicit), few words per sentence, few articles, few negations

Level: Topic selection
  Introvert: Self-focused, problem talk, dissatisfaction, single topic, few semantic errors
  Extravert: Pleasure talk, agreement, compliment, many topics, many semantic errors

Level: Speech
  Introvert: Slow speech rate, many unfilled pauses, long response latency, quiet, low voice quality, low frequency variability
  Extravert: High speech rate, few unfilled pauses, short response latency, loud, high voice quality, high frequency variability

Level: Lexicon
  Introvert: Rich, high diversity, many exclusive and inclusive words, few social words, few positive emotion words, many negative emotion words
  Extravert: Poor, low diversity, few exclusive and inclusive words, many social words, many positive emotion words, few negative emotion words

Table 1. Summary of Identified Language Cues for Extraversion
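
To make the mapping from Table 1 concrete, the sketch below shows one hypothetical way a designer could encode a small subset of these cues – formality and positive emotion words – as style parameters that select between pre-authored response variants. The parameter names and example phrasings are our own illustration, not the implementation used for the bots in this study.

```python
# Illustrative sketch only: encoding two of the cues from Table 1
# (formality, positive emotion words) as style parameters for a CA.
# Names and phrasings are hypothetical, not taken from the paper's bots.

STYLE = {
    "extravert": {"formality": "informal", "positive_emotion": True},
    "introvert": {"formality": "formal", "positive_emotion": False},
}

def pick_variant(variants: dict, personality: str) -> str:
    """Select a pre-authored response variant matching the target personality."""
    style = STYLE[personality]
    text = variants[style["formality"]]
    if style["positive_emotion"]:
        text += " Sounds amazing!"  # extraverted: append a positive emotion phrase
    return text

reply_to_plans = {
    "informal": "Nice, a road trip!",
    "formal": "A road trip sounds like a well-considered plan.",
}
print(pick_variant(reply_to_plans, "extravert"))
print(pick_variant(reply_to_plans, "introvert"))
```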

Personality in Human-Computer Interaction


Personality has been identified as one of the vital factors in understanding the quality and nature of HCI
(Robert et al. 2020), but also as one of the key components when designing CAs (Strohmann et al. 2019).
In their review of personality in human-robot interactions (HRI), Robert et al. (2020) divide their literature
search into four thrust areas, including human personality in HRI and robot personality in HRI. Their
summary coincides with findings from the literature on HHI, namely that the majority of studies in both thrust areas investigated the personality dimension extraversion/introversion due to the importance ascribed to it in the field of HHI (Robert 2018; Robert et al. 2020). Robert et al. (2020, p. 10) continue, stating that
“[g]enerally, most studies have assumed that human personality can be used to determine whether an
individual would be more or less likely to interact with a robot and whether those interactions were likely
to be enjoyable.” In the framework of this paper, findings concerning machine personality are specifically
of interest to us. Studies such as Lohse et al. (2008) investigated whether people perceived extraverted and introverted robots as distinct from each other, and Walters et al. (2011) studied whether people recognized differences between robots displaying either extraverted or introverted characteristics (Robert et al. 2020). While these studies focus on EPA robots, we want to ascertain whether
similar findings can be transferred to CAs that use language as an output. Focusing on chatbots, Smestad
and Volden (2019) used an experimental study to investigate whether subjects could tell the difference
between two chatbots that were designed based on different personality traits. However, the authors’
conversation designs are not based on specific language cues derived from literature. One chatbot was
considered to be agreeable, while the other one was described as mechanical and by the authors’ definition
as a chatbot with "no personality". To the best of our knowledge, we could not find any other previous work
addressing personality-based language cues (and specifically extraversion) in connection with
communication satisfaction during human-machine interaction. In addition, based on findings on
extraversion mentioned earlier, we further want to investigate if personality differences manifested in
language consequently lead to conversation preferences. Hence, we hypothesize as follows:
H: The ExtraBot achieves a higher perceived communication satisfaction than the IntroBot.
As it has been found that extraverts use more positive emotion words and show more agreements and compliments, and since it has further been shown that speaker charisma correlates strongly with extraversion (Mairesse et al. 2007), we argue that users perceive greater satisfaction when communicating with the ExtraBot than with an introverted personality adaptive CA (IntroBot).


Method
Sample and Data Collection Procedure
In order to test our hypothesis, we conducted an online experiment that took place over the span of three
months. We aimed to obtain a relatively large sample size of participants to ensure more reliable results and greater statistical power. We therefore chose to run our study through the crowdsourcing platform Mechanical Turk (mTurk). On Amazon.com’s mTurk, individuals perform small tasks such as surveys for micropayments (Downs et al. 2010). Another reason to use crowdsourcing for our study was to reach participants with diverse characteristics as well as native or advanced English speakers among a large pool of respondents. Since the experiment, and more importantly the simulated conversations between the CA and the humans, was conducted entirely in English, and since language is a pivotal aspect of the hypothesis, it was necessary that only people whose first or second language is English participated; otherwise, the results would have been biased. Although mTurk was the main source for collecting our data, we also recruited test persons via personal networks, who were not compensated for their participation. Of the total of 478 people participating in the study, we eliminated the data of 113 test persons who aborted the experiment prematurely. We also identified 56 invalid responses concerning our control question (i.e. entering a specific number after having watched the conversations between the humans and the machine) and excluded these answers from our analysis. This reduced our sample size to 309 participants, of which 206 were male, 101 female and 2 other. The age of the subjects ranged from 17 to 74 years (M = 32.9 years). 232 people
indicated that English is their first language, while the remaining 77 speak English as their second language.
The test persons were first informed about the task and general procedure of the experiment via a link to a website created especially for the study. The website then randomly assigned participants to LimeSurvey
(an online survey tool), where they either watched the conversation of the ExtraBot first (and IntroBot
second) or vice versa. We chose a within-subject design, where the participants were exposed to both levels
of treatment one after the other (Charness et al. 2012). This way we ensured that individual differences were
not distorting the results, since every subject acted as their own control. This reduced the chance of
confounding factors. The order of the two conditions was hence distributed randomly, and the dependent
variable was measured after each condition by means of a subsequent survey. Every participant was
provided with the exact same sets of information for the experiment (Dennis and Valacich 2001). The
complete experiment took approximately 15 minutes per participant.
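
As a minimal sketch of the randomized order assignment described above, the following illustrates how each participant could be exposed to both conditions with only the order counterbalanced at random; the function and variable names are illustrative, not the authors' code.

```python
# Sketch of counterbalanced within-subject assignment: both conditions are
# shown to every participant, only the order is randomized.
import random

CONDITIONS = ["ExtraBot", "IntroBot"]

def assign_order(participant_id: int, seed: int = 42) -> list[str]:
    """Return a randomly shuffled order of the two treatment conditions."""
    rng = random.Random(seed + participant_id)  # reproducible per participant
    order = CONDITIONS.copy()
    rng.shuffle(order)
    return order

# The dependent variable would then be measured after each condition.
for pid in range(4):
    print(pid, assign_order(pid))
```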

Conversational Agent Design


Our within-subject design was structured as follows: Prior to the actual experiment, we created two pre-defined dialogue structures using the conversational design tool Botsociety (2020), which allows visualizing and prototyping CAs. The dialogues are conversations between the CA, called Raffi, and the humans Jamie and Francis, respectively. While in the dialogue between Raffi and Jamie the CA is intended to take on an extraverted personality (ExtraBot), the CA in the Raffi-Francis conversation is intended to be more introverted (IntroBot). In order to create and simulate a personality adaptive CA, we based our conversational designs on the previously mentioned language cues for extraversion and introversion (see Table 1). For instance, the ExtraBot uses rather informal language (e.g. “what r u up to?”, “…cause TGIF!”), pays compliments, uses many positive emotion words (“Sounds amazing!”, “Have fun at the party!”) and uses few words per sentence (“Nope. Locals as well.”, “Told ya!”). The IntroBot, on the other hand, sticks mainly to two topics (travel and books) while chatting with Francis, and also has a rather rich vocabulary throughout the whole dialogue, using many words per sentence. Further, the IntroBot makes fewer semantic errors (“At least it's Friday! How was your day?”) and uses fewer emotional words compared to the ExtraBot. Although the leitmotif of both conversations is similar, the ExtraBot talks about many topics in a short amount of time (weekend plans, music, travel, party).
Concerning the context of the conversations, we chose to not put the CA in a specific service encounter
setting or the like, as this could have been a confounding factor. The idea behind this reasoning was to not
distract the subjects by the service quality of the CA, but to merely focus on communication satisfaction by
texting about day-to-day topics. Raffi should be considered more as a “virtual” friend who gives travel recommendations, detached from the notion that it is a chatbot belonging to a certain company. We
embedded the conversational designs as a video format in LimeSurvey. The videos lasted about 3 minutes,
skipping was not allowed, and we added a control question at the end of the video. The only task the participants had in this part of the experiment was to put themselves in the shoes of Francis and Jamie and closely observe the conversations with Raffi, the CA. The videos of the complete conversations can be
watched at the following links: https://youtu.be/B1N7XwcdCE0, https://youtu.be/d26eKdHBKeQ. Figure
1 shows a snippet of the two conversations between the ExtraBot and Jamie (left) and the IntroBot and
Francis (right).

Figure 1. Mockups of the ExtraBot (left) and IntroBot (right) Conversation


In order to verify (to the best possible extent) that the language cues we used in our conversation designs
reflect extraversion/introversion to a certain degree, we double-checked the dialogues. First, we used the
IBM Watson Personality Insights tool (2020). The personality mining service returns percentiles for the
Big Five dimensions based on text that is being analyzed. In this context, percentiles are defined as scores
that compare one person to a broader population (IBM Watson PI 2020). For the ExtraBot dialogue, we
received a score of 83%, meaning that our ExtraBot is more extraverted than 83% of the people in the
population. The IntroBot, on the other hand, had a percentile of 36%, thus scoring low in extraversion (and high in introversion). Although Watson’s PI service is in some instances criticized for being a black box, it did validate that our conversation designs can be regarded as extraverted and introverted language, respectively.
Second, as part of our survey, we asked the participants to indicate the extent to which the attributes
sociable, talkative, active, impulsive, outgoing, shy, reticent, passive, deliberate, reserved apply to the CAs
(Back et al. 2009). While the first five items reflect high extraversion, the last five attributes stand for low
extraversion (Back et al. 2009). The results showed that, on average, the ExtraBot (M = 4.49) was indeed perceived as more extraverted than the IntroBot (M = 3.85).
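
A small sketch of how this manipulation check could be computed is shown below: the ten attributes from Back et al. (2009) are averaged, with the five low-extraversion items reverse-coded. The 7-point scale and the sample ratings are assumptions made for illustration, not taken from the study's data.

```python
# Sketch of the manipulation check: mean of ten extraversion attributes
# (Back et al. 2009), with low-extraversion items reverse-coded.
# A 7-point scale and the example ratings are assumed placeholders.
HIGH = ["sociable", "talkative", "active", "impulsive", "outgoing"]
LOW = ["shy", "reticent", "passive", "deliberate", "reserved"]
SCALE_MAX = 7

def perceived_extraversion(ratings: dict[str, int]) -> float:
    """Mean attribute rating with low-extraversion items reverse-coded."""
    scores = [ratings[a] for a in HIGH]
    scores += [SCALE_MAX + 1 - ratings[a] for a in LOW]
    return sum(scores) / len(scores)

example = {a: 6 for a in HIGH} | {a: 2 for a in LOW}  # placeholder ratings
print(perceived_extraversion(example))                 # -> 6.0
```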

Measures and Results


Following the video of the conversation, the participants completed a survey that included questions about
the CA’s attributes (see above), demographic questions (gender, age, language), an open question (Which
conversation did you personally prefer and why?) and their perceived communication satisfaction (Hecht
1978). In order to measure whether the subjects were more satisfied with the communication style of the ExtraBot or the IntroBot, we used the established construct of communication satisfaction by Hecht (1978). The inventory shows a high degree of reliability and validity when measuring communication satisfaction with “actual and recalled conversations with another perceived to be a friend, acquaintance, or stranger” (Hecht 1978, p. 253). Although the construct was originally intended for HHI, we transferred it to an HCI context.


The construct consists of 19 items, and as suggested in the study, we used a 7-point Likert scale. However, we adapted the phrasing of the items to our CA accordingly. For example, we changed the wording of the
original item “The other person expressed a lot of interest in what I had to say” to “Raffi expressed a lot of
interest in what I had to say”.

We analyzed the data by means of descriptive analysis and a Mann-Whitney-U test, as the data is non-
normally distributed (the Shapiro-Wilk test of normality was used to investigate this assumption) and we
used ordinal scales (Wu and Leung 2017). Prior to that, we computed Cronbach’s alpha for the construct communication satisfaction to assess the internal consistency (reliability) of our measure. With α = .90, our 19-item construct shows high internal consistency. All analyses were carried out using the statistical computing
software RStudio (Version 1.2.5033). Table 3 provides an overview of the descriptive statistics, the Shapiro-
Wilk normality tests and the Mann-Whitney-U test.

Communication satisfaction (n = 309)

ExtraBot: Mean = 5.01, SD = 0.98, Shapiro-Wilk W = .92 (p < .01)
IntroBot: Mean = 4.84, SD = 0.92, Shapiro-Wilk W = .93 (p < .01)
Mann-Whitney-U: W = 52890, p = .02

Table 3. Results of the Experiment for the Construct Communication Satisfaction
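
The analysis reported in Table 3 was run in RStudio; the following Python sketch mirrors the same pipeline (Cronbach's alpha, Shapiro-Wilk normality tests, Mann-Whitney-U) on simulated placeholder data, purely to illustrate the steps. The arrays are random stand-ins, not the study's dataset.

```python
# Sketch of the analysis pipeline from Table 3, on placeholder data.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: participants x items matrix of Likert responses."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
# Placeholder: 309 participants x 19 items per condition, 7-point scale.
extra_items = rng.integers(3, 8, size=(309, 19))
intro_items = rng.integers(2, 7, size=(309, 19))

extra_score = extra_items.mean(axis=1)  # per-participant satisfaction, ExtraBot
intro_score = intro_items.mean(axis=1)  # per-participant satisfaction, IntroBot

print("Cronbach's alpha:", cronbach_alpha(np.vstack([extra_items, intro_items])))
print("Shapiro-Wilk (ExtraBot):", stats.shapiro(extra_score))
print("Mann-Whitney-U:", stats.mannwhitneyu(extra_score, intro_score,
                                            alternative="two-sided"))
```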


The results (Table 3) show that the participants evaluated the communication satisfaction of the ExtraBot
as higher (M = 5.01, SD = 0.98) than the communication satisfaction of the IntroBot (M = 4.84, SD = 0.92).
The data further reveal a significant difference between the two conditions in the participants’ perceived communication satisfaction (p = .02). Thus, these findings support our hypothesis that the subjects were more satisfied with the communication of the ExtraBot than with that of the IntroBot. These results were also confirmed by the open question (“Which conversation did you personally prefer and why?”): Out of the 309 participants, 189 preferred the ExtraBot’s conversation, while 92 liked the conversation of the IntroBot more. 28 people could not decide which conversation they preferred. Table 4 summarizes some of the participants’ responses in terms of their conversation preferences.
Preferred the ExtraBot conversation (Jamie & Raffi):
“I preferred conversation 1. It was more fluent and knowledgeable and outgoing. In conversation about Dan Browns book at the beginning I felt little attention and sensibility from Raffi.”
“The first one was more natural for me. It seemed like a conversation I would have with a dear friend.”
“Raffi seemed to pick up on the communication style of the human and adjust accordingly.”
“I was much more like Jamie than Francis. It was easier to relate.”

Preferred the IntroBot conversation (Francis & Raffi):
“I am not as social, outgoing, and "party happy" as Jamie. I'm more like Francis... bookish and reserved. I don't want the in your face enthusiasm and energy Raffi showed in the first conversation, and prefer the more sedate, subtly humorous Raffi of the second conversation.”
“Francis was better able to express his interests and Raffi listened and added to the conversation.”
“I think Francis was more like me and therefore I enjoyed following the conversation a bit more.”
“The first conversation was way too over the top and aggressively social. The second one was much more calm and chill.”

Neutral:
“Both conversations were similar and Raffi reacted differently because Jamie and Francis acted different from each other. Both were fine.”
“I really didn't prefer one conversation over the other.”
“Both conversations were similar to me.”
“I preferred both because they both felt like I was talking to a human and not an AI or robot.”

Table 4. Extract of Participants’ Responses to Their Preferred Conversation


Discussion
In our experimental setting, the language cues of extraversion served as the independent variable and proved more effective in achieving higher communication satisfaction, confirming our hypothesis. This could be due to the fact that the “looser” writing style of the ExtraBot was better received by the test persons than the somewhat more “serious” style of the IntroBot. Despite the results of this experiment, however, we do not propose to include only extraverted language cues when designing a CA in order to enhance the interaction experience. Quite the contrary: we strongly assume that the level of communication satisfaction is very much dependent on the user’s own personality. This idea corresponds with Hecht (1978, p. 263) pointing out that “one’s own and other’s predispositions […] are important determinants of satisfaction when the other is perceived to be an acquaintance.” Although not mentioned earlier in the paper, we took care to give Jamie and Francis personality traits similar to those of their corresponding conversation partners. The participants’ responses (see Table 4) seem to coincide with Hecht’s statement: People who considered themselves more extraverted (e.g. “I was much more like Jamie than Francis. It was easier to relate.”) were more satisfied with the ExtraBot, whereas participants who considered themselves introverts (“I think Francis was more like me and therefore I enjoyed following the conversation a bit more.”) chose the IntroBot. This implies that humans prefer machines that have a personality similar to their own and thus speaks for the Law of Attraction, on which there are already numerous studies (Robert 2018; Robert et al. 2020).

The initial goal of our experiment was to examine whether personality differences manifested in language use have an influence on a user’s perceived communication satisfaction when interacting with a CA. The results of our experiment demonstrate that linguistic cues that are specific to a particular personality dimension a) were noticed by the majority of the participants and b) have an influence on users’ perceived communication satisfaction when having a conversation with a CA (since the majority of the subjects preferred one bot over the other). These findings are consistent with previous studies (e.g. Schuetzler et al. 2014) showing that subjects perceive adaptive responses conveyed via language. Our results further show that using personality-based language cues can impact the interaction quality and turn it into a valuable conversation for the user. The outcomes of this study also highlight the importance of personality adaptive CAs. Since every human being is unique in terms of their personality traits, future CAs need to be designed so that they can respond and adapt to a user’s personality – especially CAs that aim for extended conversations, such as in service encounters or therapeutic conversations in healthcare. One of the decisive reasons for designing personality adaptive CAs with language cues is that implementing consistent patterns of reactions is much easier than implementing immediate and unregulated responses (Lee et al. 2019).
Different linguistic styles can influence the course of the conversation and ultimately determine whether
the user (be it a customer or patient) is satisfied with the communication and sees any value in the
interaction with the CA.

Conclusion
As a step towards designing and evaluating the value of personality adaptive CAs, we investigated in this paper whether personality differences manifested in language use have an influence on a user’s communication satisfaction when interacting with a CA. Based on findings of previous studies in the field of psychology as well as in HCI, we focused on the personality dimension extraversion/introversion and put forward the hypothesis that users perceive greater satisfaction when communicating with an extraverted personality adaptive CA (ExtraBot) than with an introverted personality adaptive CA (IntroBot). We tested the hypothesis by conducting an online experiment in which subjects were asked to evaluate simulated conversations between the ExtraBot and a fictional person and between the IntroBot and another fictional person. In the subsequent survey, the subjects answered questions on the construct communication satisfaction (Hecht 1978) and indicated their preferred dialogue. The results of the experiment supported our hypothesis: The extraverted personality adaptive CA achieved a higher perceived communication satisfaction than its introverted counterpart, and its linguistic style of writing was preferred over the IntroBot’s communication style by the majority of the participants. The outcomes of this experiment further demonstrate that language cues that reflect a certain personality dimension should be taken into consideration when designing personality adaptive CAs in order to enhance a user’s communication satisfaction during their interaction experience. Further, the outcomes of the experiment provide design implications for personality expressions applicable to CAs as well as EPA robots. The concept of a personality adaptive CA could be put to use in the development of responsive services where interaction, and particularly language-based communication, plays an important role.

REFERENCES
Allport, G. W. 1961. Pattern and Growth in Personality.
Back, M. D., Schmukle, S. C., and Egloff, B. 2009. “Predicting Actual Behavior from the Explicit and Implicit Self-
Concept of Personality.,” Journal of Personality and Social Psychology (97:3), p. 533.
Beukeboom, C. J., Tanis, M., and Vermeulen, I. E. 2013. “The Language of Extraversion: Extraverted People Talk
More Abstractly, Introverts Are More Concrete,” Journal of Language and Social Psychology (32:2), pp.
191–201.
Botsociety. 2020. “Design, Preview and Prototype Your next Chatbot or Voice Assistant.” (https://botsociety.io,
accessed February 27, 2020).
Boyd, R. L., and Pennebaker, J. W. 2017. “Language-Based Personality: A New Approach to Personality in a Digital World,” Current Opinion in Behavioral Sciences (18), pp. 63–68.
Charness, G., Gneezy, U., and Kuhn, M. A. 2012. “Experimental Methods: Between-Subject and within-Subject
Design,” Journal of Economic Behavior & Organization (81:1), pp. 1–8.
Costa, P. T., and McCrae, R. R. 2008. “The Revised Neo Personality Inventory (Neo-Pi-R),” The SAGE Handbook
of Personality Theory and Assessment (2:2), pp. 179–198.
Dennis, A. R., and Valacich, J. S. 2001. “Conducting Experimental Research in Information Systems,”
Communications of the Association for Information Systems (7:1), p. 5.
Diederich, S., Janssen-Müller, M., Brendel, A. B., and Morana, S. 2019. “Emulating Empathetic Behavior in Online
Service Encounters with Sentiment-Adaptive Responses: Insights from an Experiment with a Conversational
Agent,” ICIS 2019 Proceedings.
Downs, J. S., Holbrook, M. B., Sheng, S., and Cranor, L. F. 2010. “Are Your Participants Gaming the System?
Screening Mechanical Turk Workers,” in Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems, pp. 2399–2402.
Feine, J., Gnewuch, U., Morana, S., and Maedche, A. 2019. “A Taxonomy of Social Cues for Conversational Agents,”
International Journal of Human-Computer Studies (132), pp. 138–161.
Furnham, A. 1990. “Faking Personality Questionnaires: Fabricating Different Profiles for Different Purposes,”
Current Psychology (9:1), pp. 46–55.
Gill, A. J., and Oberlander, J. 2002. “Taking Care of the Linguistic Features of Extraversion,” in Proceedings of the
Annual Meeting of the Cognitive Science Society (Vol. 24).
Gnewuch, U., Morana, S., and Maedche, A. 2017. “Towards Designing Cooperative and Social Conversational Agents
for Customer Service,” in ICIS.
Golbeck, J., Robles, C., Edmondson, M., and Turner, K. 2011. “Predicting Personality from Twitter,” in Privacy, Security, Risk and Trust (PASSAT) and 2011 IEEE Third International Conference on Social Computing (SocialCom), IEEE, pp. 149–156.
Grönroos, C. 1982. “An Applied Service Marketing Theory,” European Journal of Marketing (16:7), pp. 30–41.
Grudin, J., and Jacques, R. 2019. “Chatbots, Humbots, and the Quest for Artificial General Intelligence,” in
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19, Glasgow,
Scotland Uk: ACM Press, pp. 1–11.
Guzman, A. L. 2018. “What Is Human-Machine Communication, Anyway,” Human-Machine Communication:
Rethinking Communication, Technology, and Ourselves, pp. 1–28.
Hecht, M. L. 1978. “The Conceptualization and Measurement of Interpersonal Communication Satisfaction,” Human
Communication Research (4:3), pp. 253–264.
IBM Watson PI. 2020. “IBM Watson Personality Insights.” (https://personality-insights-demo.ng.bluemix.net/,
accessed February 27, 2020).
Kim, H., Koh, D. Y., Lee, G., Park, J.-M., and Lim, Y. 2019. “Designing Personalities of Conversational Agents,” in
Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–6.
Knijnenburg, B. P., and Willemsen, M. C. 2016. “Inferring Capabilities of Intelligent Agents from Their External
Traits,” ACM Transactions on Interactive Intelligent Systems (TiiS) (6:4), p. 28.
Laranjo, L., Dunn, A. G., Tong, H. L., Kocaballi, A. B., Chen, J., Bashir, R., Surian, D., Gallego, B., Magrabi, F.,
Lau, A. Y. S., and Coiera, E. 2018. “Conversational Agents in Healthcare: A Systematic Review,” Journal
of the American Medical Informatics Association (25:9), pp. 1248–1258.


Lee, S., Lee, G., Kim, S., and Lee, J. 2019. “Expressing Personalities of Conversational Agents through Visual and
Verbal Feedback,” Electronics (8:7), p. 794.
Mairesse, F., Walker, M. A., Mehl, M. R., and Moore, R. K. 2007. “Using Linguistic Cues for the Automatic
Recognition of Personality in Conversation and Text,” Journal of Artificial Intelligence Research (30), pp.
457–500.
Mallios, S., and Bourbakis, N. 2016. “A Survey on Human Machine Dialogue Systems,” in 2016 7th International Conference on Information, Intelligence, Systems Applications (IISA), July, pp. 1–7.
McCrae, R. R., and John, O. P. 1992. “An Introduction to the Five-Factor Model and Its Applications,” Journal of
Personality (60:2), pp. 175–215.
McTear, M., Callejas, Z., and Griol, D. 2016. The Conversational Interface: Talking to Smart Devices, Springer.
Moon, Y., and Nass, C. 1996. “How ‘Real’ Are Computer Personalities? Psychological Responses to Personality
Types in Human-Computer Interaction,” Communication Research (23:6), pp. 651–674.
Mustelier-Puig, L. C., Anjum, A., and Ming, X. 2018. “Interaction Quality and Satisfaction: An Empirical Study of
International Tourists When Buying Shanghai Tourist Attraction Services,” Cogent Business & Management
(5:1), p. 1470890.
Nass, C., Moon, Y., Fogg, B. J., Reeves, B., and Dryer, D. C. 1995. “Can Computer Personalities Be Human
Personalities?,” International Journal of Human-Computer Studies (43:2), pp. 223–239.
Pennebaker, J. W. 2011. “The Secret Life of Pronouns: How Our Words Reflect Who We Are,” New York, NY:
Bloomsbury.
Pennebaker, J. W., and King, L. A. 1999. “Linguistic Styles: Language Use as an Individual Difference.,” Journal of
Personality and Social Psychology (77:6), p. 1296.
Robert, L. 2018. “Personality in the Human Robot Interaction Literature: A Review and Brief Critique,” in Proceedings of the 24th Americas Conference on Information Systems, August 16–18.
Robert, L. P., Alahmad, R., Esterwood, C., Kim, S., You, S., and Zhang, Q. 2020. “A Review of Personality in Human
Robot Interactions,” ArXiv Preprint ArXiv:2001.11777.
Robra-Bissantz, S. 2018. “Entwicklung von innovativen Services in der Digitalen Transformation,” in Service Business Development: Strategien – Innovationen – Geschäftsmodelle, Band 1, M. Bruhn and K. Hadwich (eds.), Wiesbaden: Springer Fachmedien Wiesbaden, pp. 261–288.
Scherer, K. R. 1979. Personality Markers in Speech, Cambridge University Press.
Schuetzler, R., Grimes, M., Giboney, J., and Buckman, J. 2014. “Facilitating Natural Conversational Agent
Interactions: Lessons from a Deception Experiment,” ICIS 2014 Proceedings.
Shawar, B. A., and Atwell, E. 2007. “Chatbots: Are They Really Useful?,” in Ldv Forum (Vol. 22), pp. 29–49.
Smestad, T. L., and Volden, F. 2019. “Chatbot Personalities Matters,” in Internet Science, Lecture Notes in Computer
Science, S. S. Bodrunova, O. Koltsova, A. Følstad, H. Halpin, P. Kolozaridi, L. Yuldashev, A. Smoliarova,
and H. Niedermayer (eds.), Cham: Springer International Publishing, pp. 170–181.
Snyder, M. 1983. “The Influence of Individuals on Situations: Implications for Understanding the Links between
Personality and Social Behavior,” Journal of Personality (51:3), pp. 497–516.
Stieglitz, S., Brachten, F., and Kissmer, T. 2018. “Defining Bots in an Enterprise Context,” in ICIS.
Strohmann, T., Siemon, D., and Robra-Bissantz, S. 2019. “Introducing the Virtual Companion Canvas – Towards Designing Collaborative Agents: Extended Abstract,” in Proceedings of the Workshop on Designing User Assistance in Intelligent Systems, Stockholm, Sweden, S. Morana (ed.).
Turing, A. M. 1950. “Computing Machinery and Intelligence,” Mind, New Series (59:236), pp. 433–460.
Vassallo, G., Pilato, G., Augello, A., and Gaglio, S. 2010. “Phase Coherence in Conceptual Spaces for Conversational
Agents,” Semantic Computing, pp. 357–371.
Wu, H., and Leung, S.-O. 2017. “Can Likert Scales Be Treated as Interval Scales?—A Simulation Study,” Journal of
Social Service Research (43:4), pp. 527–532.
Yarkoni, T. 2010. “Personality in 100,000 Words: A Large-Scale Analysis of Personality and Word Use among
Bloggers,” Journal of Research in Personality (44:3), pp. 363–373.
You, S., and Robert, L. 2018. “Emotional Attachment, Performance, and Viability in Teams Collaborating with Embodied Physical Action (EPA) Robots,” Journal of the Association for Information Systems (19:5), pp. 377–407.
