PUBLIC MANAGEMENT REVIEW
https://doi.org/10.1080/14719037.2022.2063934

AI-driven public services and the privacy paradox: do citizens really care about their privacy?

Jurgen Willems(a), Moritz J. Schmid(a), Dieter Vanderelst(b), Dominik Vogel(c) and Falk Ebinger(a)

(a) Institute for Public Management & Governance, Vienna University of Economics and Business, Vienna, Austria; (b) Department of Electrical Engineering and Computer Systems, University of Cincinnati, Cincinnati, Ohio, USA; (c) Faculty of Economics and Social Sciences (Social Economy), University of Hamburg, Hamburg, Germany

ABSTRACT
Based on privacy calculus theory, we derive hypotheses on the role of perceived usefulness and privacy risks of artificial intelligence (AI) in public services. In a representative vignette experiment (n = 1,048), we asked citizens whether they would download a mobile app to interact with an AI-driven public service. Despite general concerns about privacy, we find that citizens are not sensitive to the amount of personal information they must share, nor to a more anthropomorphic interface. Our results confirm the privacy paradox, which we frame within the literature on the government's role in safeguarding ethical principles, including citizens' privacy.

KEYWORDS AI; virtual agents; privacy paradox; data privacy; vignette experiment

1. Introduction
Recent advances in artificial intelligence (AI), especially in deep learning, are poised to revolutionize the way governments and bureaucratic entities interact with their citizens (Vogl et al. 2020; Wirtz and Müller 2019; Wirtz, Weyerer, and Geyer 2019). In particular, AI-driven conversational agents (such as chatbots) offer tremendous potential to transform public service interactions in an effective manner (Vogl et al. 2020; Wirtz, Weyerer, and Geyer 2019), along with re-defining service interactions in a cost-efficient way (Eggers, Fishman, and Kishnani 2017; Mehr 2017). However, since the power of these technologies lies in the linking of information on individuals, they also entail ethical trade-offs, especially in the context of public services. On the one hand, these technologies aim to make public service processes better adjusted to the needs of citizens, with a strong focus on efficient and effective services, as well as citizens' rights to equal access, fair treatment, and privacy (Dickinson and Yates 2021). On the other hand, the trade-off concerns several ethical aspects inherent to how AI applications work, such as limited or no insight into actual decision mechanisms in AI, lack of accountability, inherent biases on gender and race, and difficulties in interpretability (Miller and Keiser 2021; Busuioc 2021).

CONTACT Jurgen Willems jurgen.willems@wu.ac.at


Supplemental data for this article can be accessed at https://doi.org/10.1080/14719037.2022.2063934
© 2022 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives
License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and repro­
duction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.
These concerns stand in direct contrast to basic Weberian principles at the core of public sector bureaucracies concerning transparency, equality, democratic oversight, and safeguarding citizens' well-being (Howlader 2011; Kernaghan 2014; Singer 2011).
In this context, data privacy is a concern that has received substantial scholarly attention in the ethical discourse. Since algorithmic processes require large sets of (personal) data as input in the back end, individuals' privacy might be infringed. Often, interdependent algorithms and their data inputs are opaque, with no clear data ownership structure, making it difficult, if not impossible, for individuals to know who has access to their information and for what purposes their data are used. This can lead to privacy challenges for users. In the public domain, these general privacy concerns can hamper the successful implementation of advanced technologies, as citizens must support the use of such technology to justify its implementation in public services (Correia and Wünstel 2011; Nam and Pardo 2011; Zavattaro 2013).
Against this background, this study hypothesizes and tests how citizens trade off the perceived usefulness of AI-driven applications against overall privacy concerns when engaging in AI-driven public services. We build on privacy calculus theory and derive hypotheses on how (1) perceived usefulness, (2) the need to share personal data in particular contexts, and (3) overall privacy concerns influence citizens' willingness to use AI-driven applications for public services. We also contrast this logic with empirical findings on the privacy paradox, which captures the fact that citizens' behaviour, such as the concrete use of applications, is often inconsistent with their more general privacy concerns. Hence, our research questions are as follows:

(1) How do perceived usefulness, data sharing requirements, and citizens' overall privacy concerns influence their willingness to use AI-driven public services?
(2) Do citizens act in accordance with their privacy concerns in concrete contexts?

Moreover, given the growing literature on the role of more human-like AI interactions and designs, we also verify whether these effects differ depending on the level of anthropomorphism of these applications.
Answering these research questions is relevant to the growing debate on AI-driven public services. First, insights on the role of the perceived usefulness of an application, as well as the concrete data sharing requirements, can help optimize AI-driven services. In doing so, practical insights on concrete conditions and features of AI in public services are developed, which, in turn, can directly help in formulating practical recommendations. Second, confirmation of the privacy paradox in the public domain would require a theoretical and practical debate on the inherently conflicting logics between efficient public services on the one hand and a potential violation of principles at the core of modern public organizations, such as transparency, equality, democratic oversight, and citizens' well-being, on the other.
We conducted an online vignette experiment with a representative sample of Austrian citizens (n = 1,048). We opted for a between-subjects 2-by-2 design (data-sharing requirements × level of anthropomorphism in an AI application) in combination with measuring the exogenous variable of citizens' general privacy concerns. The combination of treatment variables for a specific privacy dilemma (based on an experimental vignette) with attitudinal measures of respondents allows testing aspects of privacy calculus theory and the privacy paradox. These theoretical perspectives focus on concrete citizen behaviour in specific cases, given citizens' more general privacy concerns. The treatments consider variations of a concrete AI application, while the exogenous measure of more general privacy concerns captures an overall citizen attitude, regardless of the particular context. Combining these two aspects is relevant to understand the extent to which privacy calculus theory and the privacy paradox explain citizen behaviour in concrete contexts. For the first treatment, participants were randomly assigned to either a low- or a high-data-sharing condition, differing in the amount of personal information they must share to use the application efficiently. As a second treatment, we introduced participants to either an abstract or a more anthropomorphic version of the chatbot, differing in its name (CityBot vs. Anna). Hence, combining both treatments can provide better insight into how sharing personal information matters when interacting with AI chatbots for public services.
As the primary dependent variable, we asked participants whether they would download an AI-driven public service mobile app that would enable them to communicate with a conversational agent (chatbot). The app would allow them to ask questions about public infrastructure, book appointments with public authorities, and give feedback. Research Question 1 thus refers to the relevance of the main predictors in our analysis. These predictors are derived from our theoretical elaboration on privacy calculus theory and the privacy paradox. Research Question 2 focuses on the combined interpretation of these factors, giving us insight into the underlying considerations of these two theoretical perspectives: privacy calculus theory and the privacy paradox.
By using an experimental design, we add a critical empirical component to the academic debate on AI implementation in public services, which is, at present, still mainly descriptive and normative (Moon, Lee, and Roh 2014). This knowledge about individuals' behaviour and their privacy concerns can help predict successes and failures when implementing advanced technologies in the public domain (Nam and Pardo 2011; Neirotti et al. 2014).

2. Theoretical background and hypotheses


2.1 AI-driven public services and the application of public service chatbots
AI consists of technology-supported algorithms based on data from (or similar to) real-life observations. The developed algorithms can perform or assist in highly complex processes, including public service delivery (Meijer, Lorenz, and Wessels 2021). We refer to detailed overviews of the broad range of (potential) applications of AI in a variety of public service processes (e.g. Wirtz and Müller 2019).
Particularly interesting complex processes in public services are those in which recurrent interaction with citizens occurs. In these processes, the potential impact for citizens becomes evident, both concerning service improvement and potential privacy concerns. Public service chatbots are an expanding set of applications and a good example of AI-driven public services with high levels of citizen interaction. Chatbots are computer programs designed to interact with users, replicating human-like conversational capabilities during service encounters. As such, they have been increasingly implemented across a wide range of internet-based public services (Makasi et al. 2020; Vogl et al. 2020). In the public sector, the main proclaimed benefits of chatbots are that they allow organizations to reduce their administrative burden and enhance communication, which, in turn, improves service delivery (Androutsopoulou et al. 2019). Hence, they are poised to radically improve citizens' experience and engagement and to enable new forms of decision-making with the help of citizens' interactions. Among several use cases (e.g. UNA in Latvia, Bobbi in the City of Berlin, or NADIA in Australia), the City of Vienna (Austria) introduced the 'WienBot' in 2017 to ensure that information about the different services in the City of Vienna is accessible and understandable (Urban Innovation Vienna 2017).
However, AI-driven conversational agents have also been plagued with controversy, since users do not always feel that chatbot-mediated services demonstrate appropriate public service values, such as trust, fairness, and transparency (Makasi et al. 2020). When implementing AI-driven conversational agents, public agencies are responsible for appropriately handling citizens' information to prevent it from being used for unwarranted purposes. In short, governments must ensure and monitor their citizens' data privacy.

2.2 Privacy calculus theory – the role of perceived usefulness versus the
combination of privacy concerns with the amount of data to share
Over the last decade, smart devices have become ubiquitous. In the public domain,
conversational agents (chatbots) embedded in mobile apps offer cost-efficient service
interaction around the clock. As a result, the perceived usefulness of a mobile app, for
example, achieved by introducing more advanced AI features, plays a significant role in
accepting AI-driven public services (Vogl et al. 2020). Perceived usefulness is the extent
to which citizens consider the AI-driven application as valuable for their public service
needs and preferences.
However, this blend of features also entails privacy risks if using AI-driven applications leads to undisclosed (and undesired) data usage. In the context of AI-driven processes, this is a substantial threat: the core principle of AI is that existing data are used to update decisions in other and future public service encounters.
Data privacy is considered one of the most critical ethical issues of the information
age (Mason 1986; Smith, Milberg, and Burke 1996) and has been studied extensively
across multiple disciplines in the social sciences. It refers to individuals’ control over
the release of personal information (Belanger and Crossler 2011), including its collec­
tion, use, access, and correction of errors (Smith, Milberg, and Burke 1996; Keith et al.
2013). As such, data privacy has implications for human well-being and can be
conceptualized both as a personal right, making it subject to law enforcement, and as
a commodity (Davies 1997) that can be traded and marketed (Jentzsch, Preibusch, and
Harasser 2012; Smith, Dinev, and Xu 2012).
Citizens' active and passive evaluations of perceived usefulness and privacy concerns form the building blocks of the privacy calculus theory, on which we base Hypotheses 1 to 4 in this section.
Starting from the assumption that data privacy is a commodity, the privacy calculus
theory states that an individual’s decision to disclose versus retain information is
a rational choice made by weighing the costs and benefits of information disclosure
(Becker and Murphy 1988). From this theoretical perspective, it has been argued that
the online sharing of personal information is affected by both the respective costs and
the anticipated benefits (Culnan and Armstrong 1999). Accordingly, individuals seem
to perform a privacy calculus, defined as a rational choice between the risks and
benefits when disclosing personal information (Culnan and Armstrong 1999). They
might express strong concerns about their privacy being infringed and still give their
personal details if they have something to gain in return. In other words, engaging in
the AI-driven public service is perceived as sufficiently useful, given their needs and
preferences. However, considering the public context, perceived returns are not
necessarily focused on personal and private benefits but could also relate to creating
a public good that a citizen finds relevant. Perceived usefulness of the AI-driven public
service is, thus, a prerequisite for citizens to use AI-driven public services. Building on
the privacy calculus theory, we can assume that the willingness to use AI-driven public
services will be higher if individuals perceive them as more useful.
Against this background, we propose the first hypothesis focusing on the para­
mount importance of perceived usefulness:

Hypothesis 1 (H1): The willingness to use AI-driven public services will be higher the more useful individuals perceive them to be.

However, the inherent trade-off of the privacy calculus does not only include
perceived usefulness (related to the expected benefits), it also includes the potential
risks to privacy (related to the expected potential costs). These risks relate to (1) the
citizens’ more general privacy concerns as well as (2) the amount and type of personal
data one must share to engage in a particular AI-driven public service.
Privacy concerns refer to individuals’ general beliefs about the risks and potential
negative consequences of sharing information (Zhou and Li 2014). They have co-
evolved with advances in information technology for more than a century (Castañeda
and Montoro 2007; Norris and Reddick 2012) and have been used frequently as
a predictor of privacy-enhancing behaviour when using online services, sharing infor­
mation online, and engaging in privacy-protecting behaviours, such as deleting cookies
or un-tagging photos on social networking services (Dienlin and Trepte 2015; Zhou
and Li 2014). In the domain of e-commerce, for instance, concerns about online
privacy are associated with engaging in privacy-protective behaviours, including
removing one’s personal information (e.g. full name, address, etc.) from commercial
databases or completely refraining from self-disclosure (Son and Kim 2008;
Spiekermann, Grossklags, and Berendt 2001). Empirical research has shown that
overall concerns affect behaviour (e.g. Bamberg 2003; Reel et al. 2007). Therefore, it
is reasonable to expect that concerns regarding online privacy will be reflected in the
willingness to share information with online services or, in our case, AI-driven public
services. Against this background, we propose:

Hypothesis 2 (H2): The willingness to use AI-driven public services will be lower the
more concerned individuals are about their data privacy.

A critical success factor for implementing AI-driven public services is access to factual and detailed user data. However, individuals might refrain from sharing their information when the usage, storage, and dissemination of their data are unclear or not communicated. Even if clearly stated, they might not trust the claim that the use of their data is limited to the described purposes. Their concerns regarding the misuse of their information may result in them not using the service. For example, Tsai et al. (2011) found that the availability and accessibility of privacy policy information affect individuals' online behaviour. According to their findings, customers prefer online businesses that better communicate their data privacy policy and provide more information on potential dangers. While individuals might have an overall concern regarding online privacy, they might be willing to share certain types of information more readily than others, and the type of information required can determine whether individuals engage in online actions. Accordingly, Brown and Muchira (2004) found that internet users who had a prior online experience in which personal information was requested showed lower levels of online purchasing. Similarly, Castañeda and Montoro (2007) have shown experimentally that requesting more personal information makes users less inclined to complete an online action.
Drawing on this insight, we expect individuals to be more reluctant to rely on AI-
driven public services the more personal information they must grant access to:

Hypothesis 3 (H3): The willingness to use AI-driven public services will be lower if
individuals are required to share more personal information.

Hence, when assessing the potential cost of privacy risks, overall privacy concerns and the case-specific data to be shared are considered in combination. Concretely, in addition to the two main effects hypothesized above, namely the effects of overall privacy concerns and of sharing more personal information, we predict that citizens with high overall privacy concerns will be more reluctant to use AI-driven public services when doing so requires sharing more personal information.

Hypothesis 4 (H4): The negative effect of overall privacy concerns on users’ willingness to
use AI-driven public services is stronger when more personal data is required.

2.3 Privacy paradox


A recent survey showed that 81% of U.S. respondents believe that the potential risks of data gathering by companies outweigh the benefits of using data-driven products or services (Auxier et al. 2019). In total, 66% of respondents indicated the same concern regarding governmental data collection (Auxier et al. 2019). However, experimental evidence suggests that individuals are willing to trade their data for relatively small rewards (Acquisti 2004; Acquisti and Spiekermann 2011). For example, Carrascal et al. (2013) found that internet users would trade their browsing history, which can be rather revealing, for about seven Euros. This inconsistency between privacy attitudes and privacy behaviour has been termed the privacy paradox (Brown 2001; Norberg, Horne, and Horne 2007) and has been studied by behavioural economists, social theorists, and psychologists.
Whereas the hypotheses formulated above build on privacy calculus theory, we now contrast them with theoretical considerations and empirical evidence on the privacy paradox. The privacy paradox has been extensively used to describe the observation that, on the one hand, data privacy is a primary concern for individuals in the digital age (Auxier et al. 2019), while on the other hand, citizens
excessively share personal data in a multitude of AI-driven applications and services.


With individuals becoming increasingly educated and, consequently, concerned
regarding the privacy risks of mobile app use (Jaiswal 2010), the question of why
many people continue to share their personal and real-time location data remains. In
essence, the privacy paradox can be defined as the discrepancy between individuals’
stated privacy risk beliefs and their actual behaviour when relying on concrete data-
driven services (Norberg, Horne, and Horne 2007; Keith et al. 2013).
Others have argued that the online disclosure of personal information is thus
paradoxical, as several empirical studies have shown that many people do not take
action to protect their personal information, even when the cost to do so is minimal
(Acquisti and Grossklags 2005; Dienlin, Masur, and Trepte 2019). Norberg, Horne,
and Horne (2007) were among the first to analyse this paradoxical behaviour empiri­
cally. They found an attitude-behaviour discrepancy (see also Barnes (2006) and
Fishbein and Ajzen (2010)); a mismatch between privacy concerns and actual beha­
viour. This implies that in individuals’ heuristic assessment, the present benefits of
information disclosure outweigh future privacy risks, resulting in poor protection
against risks (Acquisti 2004; Acquisti and Grossklags 2005; Bol et al. 2018; Dienlin
and Metzger 2016; Spiekermann, Grossklags, and Berendt 2001).
A few studies reported that overall privacy concerns were not significantly related to
the disclosure of personal information (Acquisti and Grossklags 2005; Taddicken
2014), lending credence to the privacy paradox. Other studies showed significant
relations (Dienlin and Trepte 2015; Heirman, Walrave, and Ponnet 2013), thus refut­
ing the privacy paradox. Against this background, it is relevant to explore whether this
privacy paradox also typifies citizen interaction in AI-driven processes. If our empirical
analysis shows that privacy concerns do not affect citizens’ willingness to rely on AI-
driven public services (i.e. we will not reject the null hypotheses for which Hypotheses 3 and 4 are the alternative hypotheses), a privacy paradox would be observable in the context of our study.

2.4 Anthropomorphism in user-chatbot interactions


A central component in research on the effective design of AI-based, autonomous
agents has been the role of anthropomorphism. Anthropomorphism is the attribution
of human-like qualities to non-human entities, such as machines, animals, or other
objects (Duffy 2003). In essence, this represents a human heuristic that helps to
understand unknown agents by applying anthropocentric knowledge (Griffin and
Tversky 1992). In this context, a popular paradigm in human-to-computer interaction
is known as Computers are Social Actors, which suggests that individuals, when
presented with a technology that contains human features, identify those pieces of
technology as a social actor with a social presence (Moon 2000; Nass and Lee 2000).
As chatbots can be short on humanness, interaction with these bots can become
somewhat artificial and stilted. Hence, increasing the social presence of robo-advisors
using an anthropomorphic design can positively affect users’ trusting beliefs and
willingness to engage in the service and accept its recommendations (Kim, Schmitt,
and Thalmann 2019; Willems et al. 2022). Furthermore, if chat agents are to assume
roles hitherto fulfilled by humans, it is necessary to make their interactions as similar as
possible to those of human beings (Go and Sundar 2019). According to the Uncanny
Valley Theory (Broadbent 2017; Mori 1970; Mori, MacDorman, and Kageki 2012),
a robot’s degree of human likeness, indeed, relates to feeling comfortable with the
robot. However, as the human likeness increases, the emotional response increases up
to an ‘Uncanny Valley’, where emotion suddenly turns negative and then increases
again as the likeness becomes almost indistinguishable from a human being (Murphy, Gretzel, and Pesonen 2019).
The easiest way to enhance the humanness of a virtual agent is the use of human
labels or identities. Cognitive psychologists have emphasized the importance of cate­
gory-based perceptions activated by the social labels assigned to objects and noted that
individuals tend to use major attributes attached to labels to minimize cognitive effort
when making judgements or in forming impressions of others (Ashforth and
Humphrey 1997; Heyman and Gelman 1999).
Against this background, we propose the following hypothesis:

Hypothesis 5 (H5): The willingness to use AI-driven public services will be higher the more human (anthropomorphic) the interface design is (for limited levels of anthropomorphic features, such as a human name).

While an anthropomorphic identity cue might lead users to appreciate the dialogue
and enjoy the interaction (Chung et al. 2020), they also need to share personal
information to receive a valuable recommendation or answer, which, in turn, can
evoke privacy concerns. However, users’ privacy concerns might differ for chatbots
when they convey a human-like appeal (Ischen et al. 2020). A human-like chatbot
might be perceived as more personal and less anonymous, leading to fewer privacy
concerns. Users might experience a closer connection to the human-like chatbot,
increasing the willingness to use it as a companion (Birnbaum et al. 2016). Hence,
interacting with a human-like chatbot can mimic interpersonal communication, posi­
tively influencing (personal) information disclosure and recommendation adherence.
Moreover, when relying on the assumption that an anthropomorphic interface can
induce trust in the AI-driven public service, which in turn might reduce the associated
concern when sharing personal data, we expect that the negative effect of having to
share more personal data will be buffered when the interface is more human.

Hypothesis 6 (H6): The negative effect of needing to share more personal information on
users’ willingness to use AI-driven public services is less strong when the interface is more
human (anthropomorphic) compared to a less human interface.

3. Methods and data


3.1 Experiment design, vignettes, and measures
We conducted an online vignette experiment in Austria (n = 1,048). We asked participants whether they would download an AI-driven public service app. The app would enable them to interact with a chatbot and ask questions about the civil infrastructure, book appointments with administrative authorities, and give feedback about the service provision. The public services described in the vignette are common services in Austria that are directly or indirectly the responsibility of a public institution, in this experiment a municipal administration. We chose this description because it corresponds to an AI-driven application that already exists. In doing so, we did not expect respondents to possess advanced knowledge of AI processes, nor did we ask them to evaluate their willingness to use a hypothetical application that might not be implemented in the near future. The full description of the vignettes is given in the Appendix.
We relied on vignettes in the survey (i.e. a between-subjects design) to test Hypotheses 1, 3, 4, 5, and 6. As these hypotheses focus on context-specific decisions to use digital public services, we can vary elements of a particular context to single out the hypothesized effects. Concretely, for the hypotheses that focus on AI application features ('information to share' and 'anthropomorphism'), we use a 2 × 2 experimental vignette design. Each participant saw one vignette, differing in the degree of information that needed to be shared and the anthropomorphism of the chatbot. For Hypothesis 2, on the role of overall privacy concerns independent of a particular context, we included an attitudinal measure of overall privacy concerns.
For the first experimental treatment, we varied the level of personal information
required to download the app. In the low information sharing condition, respondents
were asked to provide access to two sources of personal information on their mobile
phones (‘Microphone’ and ‘Location’). In contrast, in the high information sharing
condition, respondents were required to provide access to four additional sources
(‘Calendar’, ‘WiFi Connection Information’, ‘Device ID’ and ‘Call Information’). For
the second experimental treatment, we varied the application’s name, where the name
in the low anthropomorphism group focused on a technical component (‘CityBot’).
Inspired by the practice of large IT companies and public administrations to refer to
their AI devices and software with names such as ‘Alexa’, ‘Amelia’, ‘Bobbi’, or ‘Erica’,
we used one of the most popular names in Austria in 2019, ‘Anna’, for the high
anthropomorphism condition. The four vignettes are provided in the Appendix.
As the primary dependent variable, we asked whether respondents would download the app ('yes' = 1; 'no' = 0) on the same survey page where the vignettes were given. To test the main and interaction effects with citizen-related variables, we measured usefulness with a one-item 9-point scale ranging from 'Extremely useless' (−4) to 'Extremely useful' (+4). We chose an item that focused directly on usefulness, with response labels that also referred directly to usefulness, to capture perceived usefulness. We measured overall privacy concerns as the average of three items presented in random order (Cronbach's alpha = .76). Respondents indicated (i) how concerned they were about sharing personal data for apps, measured on a 5-point scale (1 = 'Not concerned at all' to 5 = 'Extremely concerned'); (ii) the degree to which their online activities were influenced by this concern, measured on a 7-point Likert scale (−3 = 'Strongly disagree' to 3 = 'Strongly agree'); and (iii) how vulnerable they felt to cyber-attacks when sharing personal information through apps, measured on a 5-point scale (−2 = 'Not vulnerable at all' to 2 = 'Extremely vulnerable'). We developed and combined these items to capture different aspects of privacy concerns, directly asking for concerns and for a self-evaluation of the impact of these concerns on one's planned behaviour. All items and descriptive statistics are provided in the Appendix.
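To illustrate how such an index can be built, the following is a minimal sketch in R, not the authors' script: it simulates three items on the response scales described above, checks internal consistency with Cronbach's alpha, and averages the z-standardized items. The variable names and the standardization step are assumptions, since the items use different scales and the exact rescaling is not described here.

```r
# Minimal sketch of a privacy-concerns index (hypothetical variable names;
# z-standardizing before averaging is an assumption, as the items use different scales).
library(psych)

set.seed(1)
survey <- data.frame(
  concern_apps   = sample(1:5,  100, replace = TRUE),  # (i)   1 = not concerned, 5 = extremely concerned
  concern_online = sample(-3:3, 100, replace = TRUE),  # (ii)  7-point agreement scale
  vulnerability  = sample(-2:2, 100, replace = TRUE)   # (iii) 5-point vulnerability scale
)

psych::alpha(survey)                                # internal consistency (paper reports alpha = .76)
survey$privacy_concerns <- rowMeans(scale(survey))  # standardized item average as the index
```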
We asked these questions after respondents had read the vignettes. This was necessary for the perceived usefulness of the app, as respondents could only evaluate the usefulness after being thoroughly informed about the app. For the measures of general privacy concerns, we did this to avoid these questions priming respondents and making them more focused on particular privacy cues in the vignettes. However, as we measured these independent variables after presenting the vignette, we verified whether both measures were dependent on the experimental treatments. We found no significant differences between the experimental treatments for either measure. Moreover, the measures of general privacy concerns were collected after a set of distraction questions in the survey.
We also included two attention check questions on the page following the willingness-to-download and perceived-usefulness questions. We asked respondents to recall the correct vignette information about the chatbot's name and the exact information they were required to share to download the app.

3.2 Data collection and sample


Data collection was performed using a Qualtrics survey, and allocation to vignettes was based on the built-in randomization functionality of Qualtrics surveys. We relied on a professional panel provider to address a sample in Austria that is representative in terms of region (place of residence) and gender across different age categories. Respondents were rewarded through the rewarding system of Qualtrics panels (ESOMAR approved), with about 60 panel points for around 15 minutes, where 60 panel points equate to approximately €3.00.
The ethical research standards of the University of Hamburg at the time of data collection (2019) were followed, and respondents were informed that their responses were anonymous, voluntary, and would serve scientific purposes.
In total, 1,746 respondents started the questionnaire, of whom 1,338 completed all questions. However, only 1,048 respondents answered the attention check questions correctly. As we assume that respondents who did not answer these questions correctly did not pay attention to the vignettes (e.g. by quickly clicking through the questions), we excluded them from further analysis. In the final sample, 49.61% are female, and the average age is 46.73 (median 48.00; min = 18; max = 81).

3.3 Analysis
As our main dependent variable is binary, we analyse the willingness to download the app (yes/no) using binomial logistic regression. The experimental treatments ('information to share' and 'anthropomorphism'), as well as the covariates ('perceived usefulness' and 'overall privacy concerns'), are the independent variables in this regression. Data were analysed with R (R Core Team 2020, Version 1.2.5019).
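For transparency about the model structure, a minimal sketch of such a regression in R is shown below. This is not the authors' script: the data are simulated and all variable names are hypothetical; it only illustrates a binomial logit with the two treatments, the two covariates, and the two hypothesized interactions, reported as odds ratios.

```r
# Minimal sketch (not the authors' script) of the Table 2 model structure.
# Data and coefficients are simulated; variable names are hypothetical.
set.seed(1)
n <- 1048
d <- data.frame(
  usefulness       = sample(-4:4, n, replace = TRUE),  # 9-point perceived usefulness item
  privacy_concerns = rnorm(n),                         # overall privacy concerns index
  high_sharing     = rbinom(n, 1, 0.5),                # treatment 1: high vs. low data sharing
  high_anthro      = rbinom(n, 1, 0.5)                 # treatment 2: 'Anna' vs. 'CityBot'
)
d$download <- rbinom(n, 1, plogis(-0.5 + 0.9 * d$usefulness - 0.3 * d$privacy_concerns))

# Full model (Model 4): main effects plus the two hypothesized interactions
m4 <- glm(download ~ usefulness + privacy_concerns * high_sharing +
            high_anthro * high_sharing,
          data = d, family = binomial)

exp(cbind(OR = coef(m4), confint.default(m4)))  # odds ratios with Wald-type 95% CIs
```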

4. Results
Table 1 reports the number of respondents and the percentage per treatment group
that would download the app, along with the lower and upper bounds of the 95%
confidence intervals.
Table 1. Overview of respondents per treatment group, and percentage that would download the app (with 95% confidence intervals).

  Group  Information sharing  Anthropomorphism   N    % downloading app   95% CI lower   95% CI upper
  1      Low                  CityBot            275  54.18%              48.28%         60.08%
  2      Low                  Anna               270  48.52%              42.55%         54.49%
  3      High                 CityBot            245  48.16%              41.89%         54.43%
  4      High                 Anna               258  45.35%              39.26%         51.44%
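The confidence intervals in Table 1 are consistent with a standard normal-approximation (Wald) interval for a proportion; whether the bounds were computed exactly this way is an assumption. As a worked check for group 1:

\[
\hat{p} \pm z_{0.975}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
= 0.5418 \pm 1.96\sqrt{\frac{0.5418 \times 0.4582}{275}}
\approx 0.5418 \pm 0.0589,
\]

which reproduces the reported bounds of roughly 48.3% and 60.1%.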

Table 2 reports the estimated odds ratios from the binomial regression model explaining whether people would download the application. Results are reported in four models: with only the covariates (Model 1), with only the experimental treatments (Model 2), with the main effects (Model 3), and with the hypothesized interaction effects (Model 4). Findings were consistent across these models, and further interpretation is based on the full model (Model 4).
Hypothesis 1 stated that the willingness to use AI-driven public services will be higher the more useful individuals perceive them to be. This hypothesis is supported by the results: for every step on the perceived usefulness scale, the odds of downloading the app increase by a factor of 2.66 (p < .001). Moreover, Hypothesis 2 is also supported, as respondents who are more concerned about their privacy show a decreased likelihood of downloading the app (odds ratio = 0.77, p = .012).
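For intuition, an odds ratio can be translated into probabilities. The figures below are illustrative values, not model predictions: if the predicted probability of downloading is 0.50 (odds of 1), a one-step increase in perceived usefulness multiplies the odds by 2.66, so that

\[
\frac{p'}{1-p'} = 2.66 \times \frac{0.50}{1-0.50} = 2.66
\quad\Rightarrow\quad
p' = \frac{2.66}{1+2.66} \approx 0.73 .
\]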
However, the other hypotheses are not supported. The experimental treatment groups in this study have at least 245 observations each, which allows detecting a small to medium effect (Champely 2018, based on Cohen 1988). The amount of information individuals must share does not influence their willingness to use an AI-based public service app (H3). Even when people indicate that they are concerned about their privacy, this concern does not interact with the requirement to share more (personal) data (H4). This strongly supports the privacy paradox being at play in this public service context. Moreover, a more human-like interface did not affect this paradox either, i.e. there is no support for H5 and H6. The anthropomorphic naming of an application does not interfere with the dynamics of the privacy calculus theory and the privacy paradox.
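The power claim can be made concrete with the pwr package cited above (Champely 2018). The following is a sensitivity sketch under our own assumptions (two-sided two-proportion test, alpha = .05, power = .80), not the authors' reported calculation:

```r
# Smallest detectable effect size h for comparing two proportions with
# 245 respondents per group (assumed settings: alpha = .05, power = .80).
library(pwr)
pwr.2p.test(n = 245, sig.level = 0.05, power = 0.80)
# Detectable h is roughly 0.25, between Cohen's 'small' (0.2) and 'medium' (0.5) benchmarks.
```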

5. Discussion
Our research questions focused on (1) how perceived usefulness, data sharing requirements, and citizens' overall concerns influence their willingness to rely on AI-driven public services, and (2) whether citizens act in accordance with their overall privacy concerns in concrete contexts.
Our empirical analysis shows that perceived usefulness is the main explanatory factor for citizens' willingness to download an AI-driven app to interact with and request information on public services. Moreover, general privacy concerns do reduce citizens' willingness to download the AI-driven app; however, this effect does not interact with the amount of personal information to be shared. In sum, in this context of public services, citizens seem to trade off the usefulness of a particular AI application against their general privacy concerns; however, the amount of data that must be shared, which would thus be at the basis of the privacy risks, is not considered in this trade-off.
Table 2. Binomial logistic regression explaining willingness to download the app ('yes' = 1 / 'no' = 0). Entries are odds ratios with 95% confidence intervals in brackets and p-values.

(Intercept): Model 1: 0.20 [0.15–0.26], p < 0.001; Model 2: 1.15 [0.94–1.42], p = 0.183; Model 3: 0.22 [0.16–0.32], p < 0.001; Model 4: 0.22 [0.15–0.33], p < 0.001
Perceived usefulness (H1): Model 1: 2.66 [2.35–3.01], p < 0.001; Model 3: 2.65 [2.34–3.00], p < 0.001; Model 4: 2.66 [2.35–3.02], p < 0.001
Privacy concerns (H2): Model 1: 0.70 [0.61–0.82], p < 0.001; Model 3: 0.70 [0.60–0.82], p < 0.001; Model 4: 0.77 [0.63–0.94], p = 0.012
High information sharing (H3): Model 2: 0.83 [0.65–1.06], p = 0.138; Model 3: 0.90 [0.66–1.23], p = 0.521; Model 4: 0.90 [0.58–1.41], p = 0.658
'High information sharing' × 'Privacy concerns' (H4): Model 4: 0.82 [0.60–1.12], p = 0.208
High anthropomorphism (H5): Model 2: 0.84 [0.66–1.07], p = 0.164; Model 3: 0.85 [0.63–1.17], p = 0.322; Model 4: 0.83 [0.54–1.29], p = 0.418
'High information sharing' × 'High anthropomorphism' (H6): Model 4: 1.06 [0.57–1.98], p = 0.859
Observations: 1,048 in all models
Tjur's R2: Model 1: 0.405; Model 2: 0.004; Model 3: 0.406; Model 4: 0.407
Note: CI refers to the 95% confidence interval.
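Tjur's R2 in Table 2 is the coefficient of discrimination: the difference between the mean fitted download probability among respondents who would download the app and among those who would not. A minimal sketch, reusing the hypothetical model m4 and data d from the Section 3.3 sketch:

```r
# Tjur's R2: mean fitted probability for downloaders minus non-downloaders
pr <- fitted(m4)
mean(pr[d$download == 1]) - mean(pr[d$download == 0])
```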
Hence, our empirical analysis supports the existence of a privacy paradox in the
context of AI-driven public service apps. Therefore, our results are in line with earlier
studies supporting the privacy paradox in for-profit and public contexts (e.g. Sevignani
2013). This implies that even when respondents had general privacy concerns, they still consented to download and use a specific app, especially when the perceived usefulness of the app was high. This is important for the growing field of AI in public
services for two reasons.
First, our study specifies an important element that should not be ignored in the
growing debate of how new technologies can and should (not) be used in a public
service context (Bullock 2019; Criado and Gil-Garcia 2019; Lember, Brandsen, and
Tonurist 2019). Hypothesis 1, which was built on the privacy calculus theory, is
convincingly supported. This leads to questions about the relative value of private
information for citizens and public organizations. From a privacy calculus perspec­
tive, personal data have an economic value that can be traded for a benefit. However, there is the additional risk that personal information is used for purposes unknown and undesired by the person sharing it, which might cancel out any short-term and minimal personal benefits. At the same time, personal data do not have the same economic value for a public organization as they have for many for-profit organizations (Douglass et al. 2014; Krishnamurthy and Awazu 2016). In fact,
aggregated personal data could potentially be used for better decision-making or
better public service delivery, which in turn might lead to an additional, indirect,
and shared benefit for citizens (Krishnamurthy and Awazu 2016). As a result,
privacy concerns related to providing data to a public institution should relate
less to the direct economic benefit one might gain or lose from it. Instead, its
value should relate to trading off one’s own service experience, as well as the overall
public value, with the risk that the data might be wrongly used. However, given
strict regulations in many countries and general principles of controllability and
transparency for public organizations, it is hard to argue that data security in a public setting would a priori be weaker than in a for-profit setting (Romansky and Noninska 2020).
Second, confirming the privacy paradox in a public context not only indicates an inconsistency between general concerns and actual behaviour, but also suggests that the behavioural logics of private market-based transactions and of public service contexts are likely not very different. This calls for a broader discussion, as it is the role of the state and public organizations to operate within the boundaries of civil principles, such as transparency, privacy, equality, and democratic participation by citizens. When citizens' behaviour is not consistent with these overall concerns, for example, as translated into national and international legislation (Borlini 2017), public governance mechanisms could and should be developed to reduce these inconsistencies. Consequently, the confirmation of a privacy paradox in the public context, and the debate on how to manage it from a civic-values perspective, could be a relevant contribution to the broader debate on civic values amid increasing policy attention to behavioural public administration. While a growing body of literature has focused on confirming
that various biases exist in human behaviour, as well as in the specific role as a citizen
in interaction with public organizations (for an overview, see: Battaglio et al. 2019),
other more critical contributions have focused on what should, or should not, be done
with the knowledge of such behavioural biases (Berg 2003; Brown 2012). For example,
a significant critique concerning policy recommendations based on behavioural
(public) insights is that they often treat citizens paternalistically, which contrasts with
civic values, such as democratic participation, transparency, and equality (Menard
2010; Schnellenbach 2012). A similar debate on the privacy paradox in the public
context seems necessary.

6. Limitations and further research


Although the sample used in this experimental study was sufficiently stratified in terms of age, gender, and location, further studies can complement our results by considering additional characteristics, such as nationality or cultural background. Moreover, we varied the level of personal data to be granted when downloading an application, which still differs from sharing personal information in an AI-based conversation. More elaborate designs that vary the types of personal information and the way the information is provided can further explore citizens' behaviours in the context of AI-driven public services.
Future research could explore how behaviour related to privacy concerns changes
when individuals face different situations simultaneously. It remains important to investigate why individuals' behaviour changes in particular cases and which variables can explain these behavioural changes. Especially from a public perspective, it is crucial to increase knowledge on how data requirements are structured, so that these variables can be considered to offer greater protection of sensitive data. Moreover, in our study, we focused on a one-time decision on willingness to
engage by downloading an app. Further exploration and testing of actual behaviour are
also necessary. Additionally, several AI processes rely on the continuous registration of
data over more extended periods, with potentially more and recurrent interactions of
citizens. This recurrent nature also deserves more attention, particularly given the role
of habitual behaviour and building trust in citizen-state interactions (Kattel, Lember,
and Tõnurist 2020).

7. Conclusion
This experimental study shows that the privacy paradox exists in the context of AI-driven public services. Despite substantial statistical power and attention checks in the survey, the experimental treatments, which varied the level of personal information participants had to grant access to and the anthropomorphic representation of the interface, did not have significant effects. This experiment opens the discussion about privacy concerns related to automated, digital service interactions with citizens. Public administrators are urged to closely monitor potential privacy concerns when implementing such technologies. New technological approaches offer great potential to encourage, enable, and improve service interactions in a democratic and cost-efficient manner. If the ultimate goal is a sustainable implementation of AI-driven technologies in the public domain, those potentials need to be communicated accordingly, as citizens are likely to engage with them if they perceive them as useful and ethically justifiable.

Data availability
Data and research protocol are available at: <Link removed for anonymousness; will be made available
after the peer-review process>.
Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes on contributors
Jurgen Willems is Full Professor for Public Management and Governance in the Department of
Management at the Vienna University of Economics and Business (WU Wien). His teaching and
research cover a variety of topics on citizen-state and citizen-society interactions. He worked as
a researcher at the Vlerick Business School (Belgium) in the field of Management & ICT. Concrete
projects and executive teaching programs focused on Business Process Management and Business
Intelligence.
Moritz J. Schmid is Research and Teaching Assistant at the Institute for Public Management and
Governance in the Department of Management at the Vienna University of Economics and Business
(WU Wien). Moritz Schmid’s research interests revolve around the managerial and governance
challenges that public sector organizations and bureaucratic entities are currently facing. He is
particularly interested in assessing the effects that technological advances have on public service
processes and their interactions with citizens by means of quantitative methods.
Dieter Vanderelst is Assistant Professor at the University of Cincinnati, with a joint appointment in
the departments of Psychology, Biological Sciences; Electrical Engineering & Computing Systems; and
Mechanical & Materials Engineering. His research focusses on a broad set of topics including ethical
boundaries for robot-human interactions.
Falk Ebinger is Postdoctoral Researcher at the Department of Management’s Institute for Public
Management and Governance at Vienna University of Economics and Business (WU Wien). He holds
a M.A. in Public Policy & Management from the University of Konstanz, Germany and earned
a doctorate (Dr.rer.soc.) at the Faculty for Social Science at Ruhr-University Bochum, Germany.
Before joining the WU he worked for several years as research fellow at the Chair for Public
Administration & Regional Politics at Ruhr-University Bochum and as senior research fellow and
substitute professor for Administrative Science at the Department of Politics and Public
Administration at the University of Konstanz.
Dominik Vogel is Assistant Professor of Public Management at the University of Hamburg. In his
research Dominik focusses on what motivates public sector employees, how public sector leadership
can succeed, how citizens interact with the administration and the performance management of public
organizations.

ORCID
Jurgen Willems http://orcid.org/0000-0002-4439-3948
Dominik Vogel http://orcid.org/0000-0002-0145-7956
Falk Ebinger http://orcid.org/0000-0002-1861-5359

References
Acquisti, A. May 2004. “Privacy in Electronic Commerce and the Economics of Immediate Gratification.”
In: Proceedings of the 5th ACM conference on Electronic commerce, New York, NY, USA, 21–29. doi:
10.1145/988772.982777.
Acquisti, A., and J. Grossklags. 2005. “Privacy and Rationality in Individual Decision Making.” IEEE
Security and Privacy Magazine 3 (1): 26–33. doi:10.1109/MSP.2005.22.
Acquisti, A., and S. Spiekermann. 2011. “Do Interruptions Pay Off? Effects of Interruptive Ads on
Consumers’ Willingness to Pay.” Journal of Interactive Marketing 25 (4): 226–240. doi:10.1016/j.
intmar.2011.04.003.
Androutsopoulou, A., N. Karacapilidis, E. Loukis, and Y. Charalabidis. 2019. "Transforming the Communication between Citizens and Government through AI-guided Chatbots." Government Information Quarterly 36 (2): 358–367. doi:10.1016/j.giq.2018.10.001.
Ashforth, B. E., and R. H. Humphrey. 1997. “The Ubiquity and Potency of Labelling in
Organizations.” Organization Science 8 (1): 43–58. doi:10.1287/orsc.8.1.43.
Auxier, B., L. Rainie, M. Anderson, A. Perrin, M. Kumar, and E. Turner. 2019. “Americans and
Privacy: Concerned, Confused, and Feeling Lack of Control over Their Personal Information.” Pew
Research Center. [accessed 24 July 2020]. https://www.pewresearch.org/internet/wp-content
/uploads/sites/9/2019/11/Pew-Research-Center_PI_2019.11.15_Privacy_FINAL.pdf
Bamberg, S. 2003. “How Does Environmental Concern Influence Specific Environmentally Related
Behaviors? A New Answer to an Old Question.” Journal of Environmental Psychology 23 (1): 21–32.
doi:10.1016/S0272-4944(02)00078-6.
Barnes, S. B. 2006. "A Privacy Paradox: Social Networking in the United States." First Monday 11 (9). Retrieved from: https://firstmonday.org/article/view/1394/1312
Battaglio, R. P., P. Belardinelli, N. Belle, and P. Cantarelli. 2019. “Behavioral Public Administration Ad
Fontes: A Synthesis of Research on Bounded Rationality, Cognitive Biases, and Nudging in Public
Organizations.” Public Administration Review 79 (3): 304–320. doi:10.1111/puar.12994.
Becker, G. S., and K. M. Murphy. 1988. “A Theory of Rational Addiction.” Journal of Political Economy
96 (4): 675–700. doi:10.1086/261558.
Belanger, F., and R. E. Crossler. 2011. “Privacy in the Digital Age: A Review of Information Privacy
Research in Information Systems." Management Information Systems Quarterly 35 (4):
1017–1041. doi:10.2307/41409971.
Berg, N. 2003. “Normative Behavioural Economics.” The Journal of Socio-Economics 32 (4): 411–427.
doi:10.1016/S1053-5357(03)00049-0.
Birnbaum, G. E., M. Mizrahi, G. Hoffman, H. T. Reis, E. J. Finkel, and O. Sass. 2016. “Machines as
a Source of Consolation: Robot Responsiveness Increases Approach Behavior and Desire for
Companionship.” Proceedings of the 11th ACM/IEEE International Conference on Human-
Robot Interaction (HRI 2016), Christchurch, New Zealand, 165–171. doi: 10.1109/
HRI.2016.7451748.
Bol, N., T. Dienlin, S. Kruikemeier, M. Sax, S. C. Boerman, and C. H. de Vreese. 2018. "Understanding the Effects of Personalization as a Privacy Calculus. Analyzing Self-disclosure across Health, News, and Commerce Contexts." Journal of Computer-Mediated Communication 23 (6): 370–388.
doi:10.1093/jcmc/zmy020.
Borlini, L. S. 2017. “Rights to Privacy and Data Protection V. Public Security and the Integrity of the
European Financial System.” Bocconi Legal Studies Research Paper No 3010755.
Broadbent, E. 2017. “Interactions with Robots: The Truths We Reveal about Ourselves.” Annual
Review of Psychology 68 (1): 627–652. doi:10.1146/annurev-psych-010416-043958.
Brown, B. 2001. “Studying the Internet Experience.” HP Laboratories Technical Report (HPL – 2001-
49). http://www.hpl.hp.com/techreports/2001/HPL-2001-49.pdf [accessed 27 March 2019].
Brown, P. 2012. “A Nudge in the Right Direction? Towards A Sociological Engagement with
Libertarian Paternalism.” Social Policy and Society 11 (3): 305–317. doi:10.1017/
S1474746412000061.
Brown, M., and R. Muchira. 2004. "Investigating the Relationship between Internet Privacy Concerns and Online Purchase Behavior." Journal of Electronic Commerce Research 5 (1): 62–70.
Bullock, J. 2019. “Artificial Intelligence, Discretion, and Bureaucracy.” The American Review of Public
Administration 49 (7): 21–32. doi:10.1177/0275074019856123.
Busuioc, M. 2021. “Accountable Artificial Intelligence: Holding Algorithms to Account.” Public
Administration Review 81 (5): 825–836. doi:10.1111/puar.13293.
Carrascal, J. P., C. Riederer, V. Erramilli, M. Cherubini, and R. de Oliveira. May 2013. "Your Browsing
Behaviour for a Big Mac: Economics of Personal Information Online.” In: Proceedings of the 22nd
international conference on World Wide Web, Rio de Janeiro, Brazil, 189–200.
Castañeda, A.J., and F. J. Montoro. 2007. “The Effect of Internet General Privacy Concerns on Customer
Behavior.” Electronic Commerce Research 7 (2): 117–141. doi:10.1007/s10660-007-9000-y.
Champely, S. 2018. “Pwr: Basic Functions for Power Analysis”. R package version 1.2-2. https://CRAN.
R-project.org/package=pwr
Chung, M., E. Ko, H. Joung, and K. Sang. 2020. "Chatbot E-service and Customer Satisfaction regarding Luxury Brands." Journal of Business Research 117: 587–595. doi:10.1016/j.jbusres.2018.10.004.
Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Erlbaum.
Correia, L., and K. Wünstel. 2011. “Smart Cities Applications and Requirements.” White paper of
Experts Working Group Net!Works European Technology Platform. [accessed 21 March 2020]. http://www.
scribd.com/doc/87944173/White-Paper-Smart-Cities-Applications
Criado, J. I., and J. R. Gil-Garcia. 2019. “Creating Public Value through Smart Technologies and
Strategies: From Digital Services to Artificial Intelligence and Beyond.” International Journal of
Public Sector Management 32 (5): 438–450. doi:10.1108/IJPSM-07-2019-0178.
Culnan, M. J., and P. K. Armstrong. 1999. “Information Privacy Concerns, Procedural Fairness, and
Impersonal Trust: An Empirical Investigation.” Organization Science 10 (1): 104–115. doi:10.1287/
orsc.10.1.104.
Davies, S. 1997. “Re-engineering the Right to Privacy: How Privacy Has Been Transformed from
a Right to a Commodity.” In Technology and Privacy: The New Landscape, edited by P. Agre and
M. Rotenberg, 143–165. Cambridge, MA: MIT Press.
Dickinson, H., and S. Yates. 2021. “From External Provision to Technological Outsourcing: Lessons
for Public Sector Automation from the Outsourcing Literature.” Public Management Review 1–19.
doi:10.1080/14719037.2021.1972681.
Dienlin, T., P. K. Masur, and S. Trepte. 2019. “A Longitudinal Analysis of the Privacy Paradox.”
SOCARXIV. [accessed 23 May 2020]. https://doi.org/10.31235/osf.io/fm4h7
Dienlin, T., and M. J. Metzger. 2016. “An Extended Privacy Calculus Model for SNSs: Analyzing
Self-Disclosure and Self-Withdrawal in a Representative U.S. Sample.” Journal of Computer-
Mediated Communication 21 (5): 368–383. doi:10.1111/jcc4.12163.
Dienlin, T., and S. Trepte. 2015. “Is the Privacy Paradox a Relic of the Past? An In-depth Analysis of Privacy Attitudes and Privacy Behaviors.” European Journal of Social Psychology 45 (3): 285–297. doi:10.1002/ejsp.2049.
Douglass, K., S. Allard, C. Tenopir, and M. Frame. 2014. “Managing Scientific Data as Public Assets:
Data Sharing Practices and Policies among Full-time Government Employees.” Journal of the
Association for Information Science and Technology 65 (2): 251–262. doi:10.1002/asi.22988.
Duffy, B. R. 2003. “Anthropomorphism and the Social Robot.” Robotics and Autonomous Systems
42 (4): 104–123. doi:10.1016/S0921-8890(02)00374-3.
Eggers, W.D., T. Fishman, and P. Kishnani. 2017. “AI-augmented Human Services: Using Cognitive
Technologies to Transform Program Delivery.” Accessed 14 April 2020. https://www2.deloitte.
com/content/dam/insights/us/articles/4152_AI-human-services/4152_AI-human-services.pdf
Fishbein, M., and I. Ajzen. 2010. Predicting and Changing Behavior: The Reasoned Action Approach.
New York: Psychology Press (Taylor & Francis).
Go, E., and S. S. Sundar. 2019. “Humanizing Chatbots: The Effects of Visual, Identity and
Conversational Cues on Humanness Perceptions.” Computers in Human Behavior 97: 304–316.
doi:10.1016/j.chb.2019.01.020.
Griffin, D., and A. Tversky. 1992. “The Weighing of Evidence and the Determinants of Confidence.”
Cognitive Psychology 24 (3): 411–435. doi:10.1016/0010-0285(92)90013-R.
Heirman, W., M. Walrave, and K. Ponnet. 2013. “Predicting Adolescents’ Disclosure of Personal
Information in Exchange for Commercial Incentives: An Application of an Extended Theory of
Planned Behavior.” Cyberpsychology, Behavior and Social Networking 16 (2): 81–87. doi:10.1089/
cyber.2012.0041.
Heyman, G. D., and S. A. Gelman. 1999. “The Use of Trait Labels in Making Psychological
Inferences.” Child Development 70 (3): 604–619. doi:10.1111/1467-8624.00044.
Howlader, D. 2011. “Moral and Ethical Questions for Robotics Public Policy.” Synthesis: A Journal of
Science, Technology, Ethics and Policy 2: 1–6.
Ischen, C., T. Araujo, H. Voorveld, G. van Noort, and E. G. Smit. 2020. “Privacy Concerns in Chatbot Interactions.” 1–6. doi:10.1007/978-3-030-39540-7_3.
Jaiswal, J. 2010. “Location-aware Mobile Applications, Privacy Concerns, and Best Practices.” https://www.truste.com/resources/Whitepapers
Jentzsch, N., S. Preibusch, and A. Harasser. 2012. “Study on Monetising Privacy. An Economic Model
for Pricing Personal Information.” European Network and Information Security Agency (ENISA) 1–
76. https://www.enisa.europa.eu/publications/monetising-privacy
Kattel, R., V. Lember, and P. Tõnurist. 2020. “Collaborative Innovation and Human-machine Networks.” Public Management Review 22 (11): 1652–1673. doi:10.1080/14719037.2019.1645873.
Keith, M., S. Thompson, J. Hale, J. Lowry, and C. Greer. 2013. “Information Disclosure on Mobile Devices: Re-examining Privacy Calculus with Actual User Behavior.” International Journal of Human-Computer Studies 71 (12): 1163–1173. doi:10.1016/j.ijhcs.2013.08.016.
Kernaghan, K. 2014. “The Rights and Wrongs of Robotics: Ethics and Robots in Public
Organizations.” Canadian Public Administration 57 (4): 485–506. doi:10.1111/capa.12093.
Kim, S. Y., B. H. Schmitt, and N. M. Thalmann. 2019. “Eliza in the Uncanny Valley: Anthropomorphizing Consumer Robots Increases Their Perceived Warmth but Decreases Liking.” Marketing Letters 30 (1): 1–12. doi:10.1007/s11002-019-09485-9.
Krishnamurthy, R., and Y. Awazu. 2016. “Liberating Data for Public Value: The Case of Data.gov.” International Journal of Information Management 36 (4): 668–672. doi:10.1016/j.ijinfomgt.2016.03.002.
Lember, V., T. Brandsen, and P. Tõnurist. 2019. “The Potential Impacts of Digital Technologies on Co-production and Co-creation.” Public Management Review 21 (11): 1665–1686. doi:10.1080/14719037.2019.1619807.
Makasi, T., A. Nili, K. Desouza, and M. Tate. 2020. “Chatbot-mediated Public Service Delivery:
A Public Value Based Framework.” First Monday 25 (12). doi:10.5210/fm.v25i12.10598.
Mason, R. 1986. “Four Ethical Issues of the Information Age.” Management Information Systems Quarterly 10 (1): 5–12. doi:10.2307/248873.
Mehr, H. 2017. Artificial Intelligence for Citizen Services and Government. Cambridge, MA: Harvard
Kennedy School, Ash Center for Democratic Governance and Innovation. https://ash.harvard.edu/
publications/artificial-intelligence-citizen-services-and-government
Meijer, A., L. Lorenz, and M. Wessels. 2021. “Algorithmization of Bureaucratic Organizations: Using
a Practice Lens to Study How Context Shapes Predictive Policing Systems.” Public Administration
Review 81 (5): 837–846. doi:10.1111/puar.13391.
Menard, J.-F. 2010. “A ‘Nudge’ for Public Health Ethics: Libertarian Paternalism as a Framework for Ethical Analysis of Public Health Interventions?” Public Health Ethics 3 (3): 229–238. doi:10.1093/phe/phq024.
Miller, S. M., and L. R. Keiser. January 2021. “Representative Bureaucracy and Attitudes toward Automated Decision Making.” Journal of Public Administration Research and Theory 31 (1): 150–165. doi:10.1093/jopart/muaa019.
Moon, Y. 2000. “Intimate Exchanges: Using Computers to Elicit Self-disclosure from Consumers.”
Journal of Consumer Research 26 (4): 323–339. doi:10.1086/209566.
Moon, M. J., J. Lee, and C. Roh. 2014. “The Evolution of Internal IT Applications and E-government
Studies in Public Administration: Research Themes and Methods.” Administration & Society
46 (1): 3–36. doi:10.1177/0095399712459723.
Mori, M. 1970. “The Uncanny Valley.” Energy 7 (4): 33–35. [accessed 4 August 2020]. https://spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-valley
Mori, M., K. F. MacDorman, and N. Kageki. June 2012. “The Uncanny Valley [From the Field].” IEEE
Robotics & Automation Magazine 19 (2): 98–100. doi:10.1109/MRA.2012.2192811.
Murphy, J., U. Gretzel, and J. Pesonen. 2019. “Marketing Robot Services in Hospitality and Tourism: The Role of Anthropomorphism.” Journal of Travel & Tourism Marketing 36 (7): 784–795. doi:10.1080/10548408.2019.1571983.
Nam, T., and T. A. Pardo. 2011. “Conceptualizing Smart City with Dimensions of Technology, People,
and Institutions.” In: The Proceedings of the 12th Annual International Conference on Digital
Government Research, University of Maryland College Park, US.
Nass, C., and K. M. Lee. 2000. “Does Computer-synthesized Speech Manifest Personality?
Experimental Tests of Recognition, Similarity-attraction, and Consistency-attraction.” Journal of
Experimental Psychology 7 (3): 171–181. doi:10.1037/1076-898X.7.3.171.
Neirotti, P., A. De Marco, A. C. Cagliano, G. Mangano, and F. Scorrano. 2014. “Current Trends in
Smart City Initiatives: Some Stylised Facts.” Cities 38: 25–36. doi:10.1016/j.cities.2013.12.010.
Norberg, P., D. R. Horne, and D. A. Horne. 2007. “The Privacy Paradox: Personal Information Disclosure Intentions Versus Behaviors.” Journal of Consumer Affairs 41 (1): 100–126. doi:10.1111/j.1745-6606.2006.00070.x.
Norris, D. F., and C. G. Reddick. 2012. “Local E-government in the United States: Transformation or
Incremental Change?” Public Administration Review 73 (1): 165–175. doi:10.1111/j.1540-
6210.2012.02647.x.
R Core Team. 2020. “R: A Language and Environment for Statistical Computing.” Vienna, Austria.
https://www.R-project.org/
Reel, J. J., C. Greenleaf, W. K. Baker, S. Aragon, D. Bishop, C. Cachaper, P. Handwerk, et al. 2007.
“Relations of Body Concerns and Exercise Behavior: A Meta-analysis.” Psychological Reports
101 (3): 927–942. doi:10.2466/pr0.101.3.927-942.
Romansky, R., and I. Noninska. 2020. “Business Virtual Systems in the Context of E-governance:
Investigation of Secure Access to Information Resources.” Journal of Public Affairs 20 (15).
doi:10.1002/pa.2072.
Schnellenbach, J. 2012. “Nudges and Norms: On the Political Economy of Soft Paternalism.” European
Journal of Political Economy 28 (2): 266–277. doi:10.1016/j.ejpoleco.2011.12.001.
Sevignani, S. 2013. “The Commodification of Privacy on the Internet.” Science & Public Policy 40 (6):
733–739. doi:10.1093/scipol/sct082.
Singer, P. W. 2011. “Robots at War: The New Battlefield.” In The Changing Character of War, edited by H. Strachan and S. Scheipers, 143–165. Oxford, UK: Oxford University Press.
Smith, H.J., T. Dinev, and H. Xu. 2012. “Information Privacy Research: An Interdisciplinary Review.” Management Information Systems Quarterly 35 (4): 989–1015. doi:10.2307/41409970.
Smith, H.J., S.J. Milberg, and S.J. Burke. 1996. “Information Privacy: Measuring Individuals’ Concerns about Organizational Practices.” Management Information Systems Quarterly 20 (2): 167–196. doi:10.2307/249477.
Son, J.-Y., and S. S. Kim. 2008. “Internet Users’ Information Privacy-Protective Responses:
A Taxonomy and A Nomological Model.” Management Information Systems Quarterly 32 (3):
503–529. doi:10.2307/25148854.
Spiekermann, S., J. Grossklags, and B. Berendt. 2001. “E-privacy in 2nd Generation E-Commerce: Privacy Preferences versus Actual Behavior.” In: Proceedings of the 3rd ACM Conference on Electronic Commerce, 14–17 October, Florida, USA.
Sundar, S., S. Kang, B. Zhang, E. Go, and M. Wu. 2013. “Unlocking the Privacy Paradox: Do Cognitive Heuristics Hold the Key?” In: Proceedings of the 31st Annual Conference on Human Factors in Computing Systems, Association for Computing Machinery, Paris, France, 811–816.
Taddicken, M. 2014. “The ‘Privacy Paradox’ in the Social Web: The Impact of Privacy Concerns, Individual Characteristics, and the Perceived Social Relevance on Different Forms of Self-disclosure.” Journal of Computer-Mediated Communication 19 (2): 248–273. doi:10.1111/jcc4.12052.
Tsai, J. Y., S. Egelman, L. Cranor, and A. Acquisti. 2011. “The Effect of Online Privacy Information on Purchasing Behavior: An Experimental Study.” Information Systems Research 22 (2): 254–268. doi:10.1287/isre.1090.0260.
“Urban Innovation Vienna: WienBot.” [accessed 25 August 2021]. https://smartcity.wien.gv.at/wienbot/
Vogl, T. M., C. Seidelin, B. Ganesh, and J. Bright. 2020. “Smart Technology and the Emergence of Algorithmic Bureaucracy: Artificial Intelligence in UK Local Authorities.” Public Administration Review 80 (6): 946–961. doi:10.1111/puar.13286.
Willems, J., L. Schmidthuber, D. Vogel, F. Ebinger, and D. Vanderelst. 2022. “Ethics of Robotized
Public Services: The Role of Robot Design and Its Actions.” Government Information Quarterly
39 (2): 101683. doi:10.1016/j.giq.2022.101683.
Wirtz, B., and W. Müller. 2019. “An Integrated Artificial Intelligence Framework for Public
Management.” Public Management Review 21 (7): 1076–1100. doi:10.1080/14719037.2018.1549268.
Wirtz, B., J. Weyerer, and C. Geyerer. 2019. “Artificial Intelligence and the Public Sector – Applications and Challenges.” International Journal of Public Administration 42 (7): 596–615. doi:10.1080/01900692.2018.1498103.
Zavattaro, S. M. 2013. “Social Media in Public Administration’s Future: A Response to Farazmand.”
Administration & Society 45 (2): 242–255. doi:10.1177/0095399713481602.
Zhou, T., and H. Li. 2014. “Understanding Mobile SNS Continuance Usage in China from the
Perspectives of Social Influence and Privacy Concerns.” Computers in Human Behavior 37:
283–289. doi:10.1016/j.chb.2014.05.008.