From Ethical AI Principles to Governed AI


Completed Research Paper

Akseli Seppälä
University of Turku
FI-20014 University of Turku
akseli.k.seppala@utu.fi

Teemu Birkstedt
University of Turku
FI-20014 University of Turku
teemu.birkstedt@utu.fi

Matti Mäntymäki
University of Turku
FI-20014 University of Turku
matti.mantymaki@utu.fi

Abstract
This study explores how organizations translate principles of ethical artificial intelligence
(AI) into practice. To date, the research on AI ethics has been mostly conceptual, with a
significant emphasis on defining the principles of ethical AI. Thus, there is momentum for
a shift from principle-based ethics toward an increased focus on the implementation of
ethical principles in practice. In this study, we analyzed data collected through a set of
expert interviews in organizations deploying AI systems. We identified that ethical AI
principles are implemented through four sets of practices: i) governance, ii) AI design
and development, iii) competence and knowledge development, and iv) stakeholder
communication. As our contribution to IS research, we empirically elucidate how the
principles of ethical AI are translated into organizational practices. For IS practice, we
provide organizations deploying AI with novel insights on how to govern their AI
systems.

Keywords: artificial intelligence, governance, ethics, AI ethics, AI governance

Introduction
Artificial intelligence (AI), which is defined as a “system’s ability to interpret external data correctly, to learn
from such data, and to use those learnings to achieve specific goals” (Kaplan and Haenlein 2019), has made
rapid inroads into application areas such as medical diagnostics (Ho et al. 2019), recruiting (van Esch et al.
2019), finance (Wall 2018), and autonomous vehicles (Stilgoe 2018). At the same time, negative side effects
of AI, such as biased decision-making, privacy violations, and challenges to human rights, as well as
erroneous decisions taken by ungoverned, inscrutable algorithms, have raised both public and academic
interest in AI ethics (Asatiani et al. 2020, 2021; Benbya et al. 2021; Martin 2019). To date, the research on
AI ethics has been mostly conceptual, with a significant emphasis on defining the principles of ethical AI
(see Breidbach and Maglio 2020; Chiao 2019; Floridi et al. 2018; Harlow 2018; Kumar et al. 2020; Martin
2019; Veale et al. 2018; Whittlestone et al. 2019). Echoing the increased public awareness of and concerns
related to the risks and unintended side effects of AI, governmental and international organizations such
as the EU and OECD, professional bodies such as the IEEE, and various companies have published their
principles and guidelines of ethical AI (Fjeld et al. 2020; Hagendorff 2020; Jobin et al. 2019).
However, this so-called principle-based ethics provides limited insights into how to ensure that the
principles are being met in practice (Hagendorff 2020; Mittelstadt 2019) as they primarily focus on the
what rather than the how of AI ethics (Morley et al. 2020). Indeed, the challenge with ethical AI principles
is that they are most often highly general in nature and too broad to be action-guiding, which may lead to
varying interpretations by different stakeholders (Whittlestone et al. 2019). Thus, for ethical principles to
be useful in practice, they must be sufficiently concrete to provide guidance to organizations deploying AI
(Morley et al. 2020; Whittlestone et al. 2019) and be enforceable through governance (Cath 2018;
Minkkinen et al. 2021; Morley et al. 2020). The European Artificial Intelligence Act proposal,1 published by
the European Commission in April 2021, underscores that there is momentum for a shift from principle-
based ethics toward an increased emphasis on the implementation of ethical principles in practice.

1 Proposal for a Regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act): https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence

All in all, there is a gap in the literature related to how to translate ethical principles into practice (Morley
et al. 2020). Hence, the extant literature offers only nascent practical guidance for organizations on how to
implement and deploy AI in a socially responsible way (see Mayer et al. 2021). To address this gap, this
study answers the following research question: How do organizations translate principles of ethical AI
into practice?
To shed light on this question, we have undertaken a set of expert interviews in organizations deploying AI
systems. We have analyzed the data using the Gioia method (Gioia et al. 2013). The results show that ethical
AI principles are implemented through four sets of practices: i) governance, ii) AI design and development,
iii) competence and knowledge development, and iv) stakeholder communication. This study contributes
to IS research regarding the implementation of AI ethics (Mayer et al. 2021) as a part of the deployment of
AI in organizations (Ågerfalk 2020; Asatiani et al. 2020). For this research, we empirically elucidate how
the principles of ethical AI are translated into organizational practices. As our contribution to IS practice,
we provide organizations deploying AI with novel insights on how to govern their AI systems.
The remainder of this paper is structured as follows. After the introduction, we review prior research on AI
ethics and present the most prominent ethical AI principles and guidelines. The third section covers the
research methodology, and the fourth section presents the results. In the fifth and final section, we
discuss the key findings and implications and suggest areas of future research in conjunction with the
study’s limitations.

Principles of Ethical AI
A systematic review of 84 ethical AI documents by Jobin et al. (2019) found that, although no single AI
principle is featured in all of them, more than half of them included the themes of transparency, justice
and fairness, non-maleficence, responsibility, and privacy. These findings are similar to those reported by
Hagendorff (2020) of the 22 major ethical AI guidelines, including the European Commission’s High-Level
Expert Group on AI (AI HLEG) “Ethics Guidelines for Trustworthy AI” and the IEEE's “Ethically Aligned
Design.” The study concluded that privacy, fairness, and accountability were present in about 80
percent of them. Moreover, a review of the 36 most visible and influential AI principles documents found
that some of the recurring themes are fairness and non-discrimination, privacy, accountability, and
transparency and explainability, featuring in over 90 percent of the documents (Fjeld et al. 2020).
According to the same study, most of the recent documents tend to cover all these themes, suggesting a
convergence around them.
AI HLEG (2019) suggests that both technical and non-technical methods are required for AI principles to
be implemented, and that the methods should encompass all stages of AI’s life cycle. According to
Hagendorff (2020), some of the principles, such as accountability, explainability, privacy, fairness,
robustness, and safety, are most easily operationalized mathematically and thus tend to be implemented in
terms of technical methods. Some of the technical methods include continuous monitoring and rigorous
testing and validation of the system, whereas the non-technical methods comprise standardization (e.g.,
IEEE P7000 or ISO standards), certification, governance frameworks, education and awareness,
stakeholder participation, and diversity in AI design and development teams (AI HLEG 2019).
Fairness is closely linked to non-discrimination and the prevention of bias (Fjeld et al. 2020). It is suggested
that these can be addressed with high-quality and representative datasets, which is why these should be
measured and monitored for accuracy, consistency, and validity (Fjeld et al. 2020). Moreover, the design
and development phase may also suffer from bias. Thus, AI HLEG (2019) encourages diversity in AI design
and development teams by “hiring from diverse backgrounds, cultures, and disciplines” to ensure diversity
of opinions and non-discriminatory AI systems. Furthermore, Kroll (2018) suggests that data governance
practices can manage fairness issues. These practices include data minimization (i.e., collect, use, and store
only the least amount of data necessary), review boards (i.e., diverse and cross-functional board to analyze
legal compliance, risks, and impacts), impact statements (i.e., formal and systematic process to investigate
foreseeable issues and risks, and how to mitigate them), and continuous monitoring of correctness (e.g.,
review modeling errors, concept drifts, and bias).
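As an illustration of the data minimization practice, consider the following minimal Python sketch (our illustration, not Kroll's): a pipeline step that lets through only the fields registered as necessary for a declared processing purpose. The purposes and field names are hypothetical.

```python
# Minimal sketch of data minimization: only fields registered for a
# declared processing purpose are allowed through the pipeline.
# The purposes and field names below are illustrative assumptions.

ALLOWED_FIELDS = {
    "credit_scoring": {"income", "existing_loans", "payment_history"},
    "churn_prediction": {"tenure_months", "product_usage", "support_tickets"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Drop every field not registered as necessary for the purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"income": 42000, "existing_loans": 2, "payment_history": "ok",
       "gender": "F", "postal_code": "20014"}  # extra fields collected upstream
print(minimize(raw, "credit_scoring"))
# -> {'income': 42000, 'existing_loans': 2, 'payment_history': 'ok'}
```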
Responsibility and accountability of AI systems cannot lie with the technology itself, as AI systems cannot
be held responsible for their actions (Ryan 2020). The burden of responsibility should instead be allocated
between those who develop, deploy, and use these systems (Ryan 2020). Indeed, many ethical AI guidelines
recommend that organizations clearly allocate their responsibilities and legal liabilities (Jobin et al. 2019).
Moreover, AI HLEG (2019) suggests using impact assessments to identify, assess, document, and minimize
the potential negative impacts of AI systems. The impact assessments can also function as an accountability
mechanism by preventing an AI system from ever being deployed or developed if the risks are deemed to
be too high or impossible to mitigate (Fjeld et al. 2020). To that end, internal review boards can be created
to oversee the use and development of AI (Fjeld et al. 2020). Such boards should have the power to approve
or deny any use cases and examine all AI systems closely for legal compliance, potential risks, and impacts
(Kroll 2018). Boards should contain stakeholders from many functions, including legal, compliance,
marketing, data science, and information security (Kroll 2018). Human control and auditability are also
linked to accountability (Fjeld et al. 2020). Indeed, AI systems should be built in such a way that humans
can intervene in their actions and such that they are capable of being audited (Fjeld et al. 2020). AI HLEG
(2019) defines human control as human agency and oversight. Moreover, individuals must have the
possibility to opt out of automated decisions related to them (AI HLEG 2019). Indeed, Fjeld et al. (2020)
suggest that individuals should be able to request and receive a human review of the decisions made by AI.
Transparency can be understood in two ways: i) the transparency of the AI system itself and ii) the
transparency of the organization(s) developing and using it (Ryan and Stahl 2020). The former refers to the
understanding of how the system is designed and how it reaches a decision (Barredo Arrieta et al. 2020),
and the latter refers to the understanding of what, why, and by whom the decisions were made during the
development and design processes (Vakkuri, Kemell, and Abrahamsson 2019). A simple solution to increase
transparency would be to release the algorithm’s source code or the inputs and outputs that are used to
make the decisions (Lepri et al. 2018). Furthermore, organizations could also consider minimizing the use
of black boxes in sensitive domains, such as healthcare or criminal justice (Ryan and Stahl 2020). For
traceability and increased transparency, AI HLEG (2019) suggests that organizations should document all
the used algorithms and datasets, as well as the model’s behavior and decisions, to the best possible
standards. Moreover, transparency is regularly associated with efforts to increase explainability,
interpretability, or other acts of communication and disclosure (Jobin et al. 2019; Ryan and Stahl 2020).
Indeed, transparency can also be achieved by providing interpretable explanations regarding the processes
that led to the decisions (Lepri et al. 2018). Therefore, the topic of explainable AI (XAI) has gained much
attention in recent years and has become an active field of research (Barredo Arrieta et al. 2020). Barredo
Arrieta et al. (2020) suggest that XAI techniques have the potential to ensure numerous AI principles, such
as fairness, transparency, accountability, safety and security, and privacy. In addition, AI HLEG (2019)
suggests that “X by design” approaches be used more widely, whereby the general idea is to embed these
principles in the design of the AI system. Indeed, privacy by design, transparency by design, and
security by design are recommended by a number of ethical AI guidelines (see AI HLEG 2019; Felzmann
et al. 2020; Fjeld et al. 2020; Jobin et al. 2019).
However, principles alone cannot guarantee ethical AI (Mittelstadt 2019), and for principles to be useful in
practice, they must be able to guide actions (Whittlestone et al. 2019). Moreover, principles tend to come
into conflict with practice as they are often highly general by nature and thus too broad to be action-guiding
(Whittlestone et al. 2019). Nevertheless, principles can be a valuable part of AI ethics, consolidating
complex ethical issues into a more understandable form that can be agreed upon by a diverse group of
people from multiple fields and sectors (Whittlestone et al. 2019). Moreover, principles can be used to guide
the development of governance practices, international standards, and further regulation (Whittlestone et
al. 2019). Indeed, Fjeld et al. (2020) suggest that “principles are a starting place for governance,” which has
led many national and international organizations to develop expert committees on AI (Jobin et al. 2019).
These committees include the European Commission’s High-Level Expert Group on AI (AI HLEG), the
OECD Expert Group on AI (AIGO), and Singapore’s Advisory Council on the Ethical Use of Artificial
Intelligence and Data. On top of these committees, companies such as Google, alongside non-governmental
organizations such as the Institute of Electrical and Electronics Engineers (IEEE) and the Association of
Computing Machinery (ACM), have issued guidelines, principles, and other documents on ethical AI and
AI governance. In the following section, we will empirically explore how ethical AI principles are being
translated into practice in organizations.

Methodology
Data Collection
The empirical data was collected through 13 semi-structured expert interviews representing 12
organizations operating in Finland. We used purposeful sampling (Patton 1990) and focused on
organizations that are frontrunners in AI deployment. AI production is an ecosystem with multiple
stakeholders involved in the development, deployment, and use of the actual system (Minkkinen et al. 2021;
Newlands 2021; Tubaro and Casilli 2019). The data and AI system may pass through a number of data
professionals and organizations in the AI supply chain before being deployed (Newlands 2021). Thus, in
data collection, we decided to include the different stakeholders involved in the AI ecosystem. The
organizations were directly associated with AI systems: they either used AI systems themselves or offered
them as a sales product, created AI systems for clients, or provided AI-related services. Four of the
interviewed organizations were in the public sector and eight in the private sector.
Thereafter, we identified knowledgeable informants from these organizations. Gioia et al. (2013) suggest
that organizational phenomena are socially constructed by knowledgeable agents who can explain their
thoughts, intentions, and actions. Since this study concerns organizational practices, we considered
management-level experts directly involved in AI development and deployment as the most appropriate
informants for the interviews. All the informants were closely involved in the development or deployment
of AI systems, and they represented a range of managerial positions, from CEOs to lead data scientists.
Identifiable job titles were modified for pseudonymization purposes.
The interviews were conducted in October–November 2020 via Zoom and MS Teams. The interviews were
recorded and transcribed to enable further analysis. We used publicly available materials from the
organizations’ websites and press releases to further examine and interpret the information referred to in
the interviews, such as corporate ethical AI guidelines, and to obtain contextual information about the
organizations. Table 1 provides additional information about the interviews conducted.
Informant | Business Field                           | Job Title                  | Interview Duration
I1        | Software Service, AI Platform            | Chairperson of the Board   | 40 min
I2        | IT Consultancy                           | Analytics Executive        | 40 min
I3        | Software Service, AI Platform            | Chief Executive Officer    | 30 min
I4        | Public Service                           | Analytics Lead             | 50 min
I5        | IT Consultancy                           | Chief Executive Officer    | 55 min
I6        | IT Consultancy                           | Lead Consultant            | 70 min
I7        | Financial Services                       | Lead Data Scientist        | 60 min
I8        | Software Service, Maritime Industry      | Chief Executive Officer    | 50 min
I9        | Public Service                           | Chief Innovation Officer   | 45 min
I10       | University                               | Chief Information Officer  | 35 min
I11       | Software Service, Business Applications  | Chief Executive Officer    | 30 min
I12       | Public Service                           | Senior Specialist          | 65 min
I13       | Retail                                   | Head of Analytics          | 55 min

Table 1. Details of the interviews
Participation was entirely voluntary, and the informants were assured anonymity. This created an
opportunity for the participants to talk freely and describe their own experiences without being worried
about leaking confidential information. The interview questions focused on how organizations addressed
the ethical concerns and AI principles presented in the previous section and were adjusted as new
knowledge emerged from the interviews (Gioia et al. 2013). Examples of the interview questions include:
“Which measures, mechanisms, and practices are carried out to ensure the fair and ethical development,
deployment, and use of AI systems?”; “Which measures, mechanisms, and practices are carried out to
ensure the proper functioning of AI systems throughout their lifecycle?”; “Which measures, mechanisms,
and practices are carried out to increase transparency in the development, deployment, and use of AI
systems?”; and “How would you say accountability for AI is delegated at [organization]?” The informants
were encouraged to discuss these in their own terms and concepts (Gioia et al. 2013). The idea was to
avoid excessive use of existing terminology and practices in order to discover new concepts and best practices (Gioia
et al. 2013).

Data Analysis
We used the Gioia method (Gioia et al. 2013) to guide the analysis process for two reasons. First, it provides
a systematic guide on how to code the data. Second, it focuses on creating a data structure that visualizes
the analysis process (Gioia et al. 2013; Murphy et al. 2017). The data structure visualizes how 1st order
codes, 2nd order categories, and aggregate dimensions relate to each other.
Each interview was recorded with the informant’s permission and later transcribed for data analysis. The
transcripts were then coded using the qualitative research software NVivo. Coding and analysis were performed
in four stages. The analysis began with open coding (Strauss and Corbin 1998) by reading each transcript
and generating initial and in vivo codes, i.e., the meaningful terms used by informants or reflecting their
underlying meaning (Gioia et al. 2013). The research questions were used to guide the first round of coding.
In the second stage of the analysis process, axial coding (Strauss and Corbin 1998), similarities and
differences were identified among the initial codes to reduce the number to a more manageable level.
Moreover, these 1st order codes were refined according to our evolving understanding (Strauss and Corbin
1998) and organized into 2nd order categories by similarity in content and logical connection. The second
stage of the analysis process resulted in 28 1st order codes and eight 2nd order categories, namely data
governance, AI governance, AI design, MLOps, education and training, research, AI and data
understanding, and AI and data communication. The complete list of codes with data examples is available
from the first author upon request.
In the third stage of the analysis process, selective coding (Strauss and Corbin 1998), the 2nd order
categories were examined for underlying connections at a higher level of abstraction and distilled even
further into aggregate dimensions. This stage led to four aggregate dimensions, namely governance, AI
design and development, competence and knowledge development, and stakeholder communication, and
a data structure representing the research results (see Figure 1). As an example of the analysis process, the
quote “Things can change over time. And the other is continuous monitoring. We create regular cycles
where we assess if the algorithm is still fully functional or if it should be renewed.” (I2) led to our 1st order
code continuous monitoring. Combined with three other 1st order codes (i.e., bias detection and
mitigation, model validation, and continuous development), we derived the 2nd order category MLOps,
which was later aggregated into the AI design and development dimension. Finally, in the fourth stage, the
results were compared with the relevant literature (e.g., Asatiani et al. 2021; Mayer et al. 2021) to see how
they relate to each other, and whether new concepts had been discovered.
Several measures were taken to ensure a rigorous research process and trustworthy interpretations (Lincoln
and Guba 1985). First, the Gioia method (Gioia et al. 2013) was followed throughout the analysis to make
the process transparent. The Gioia method was designed to bring rigor to qualitative research through a
“systematic approach to new concept development and grounded theory articulation” (Gioia et al. 2013).
Second, the data, analysis, and results were continually discussed with two other researchers. Third, the
results were presented to and discussed with the informants and other experts in the field. Fourth,
secondary sources of information, such as organizations’ websites and press releases, were used as
supplementary material. Fifth, the results are suffused with informants’ quotes and own terms to make
their experiences and voices explicit. Furthermore, the data structure of 1st order codes, 2nd order
categories, and aggregate dimensions is presented in Figure 1, and the complete list of codes with data
examples is available from the first author upon request.

Aggregate dimensions, 2nd order categories, and their 1st order codes:

Governance
  Data Governance:
    • Responsible data collection and processing
    • Implementation of GDPR requirements
    • Strong information security and data protection practices
    • Information security and data protection audits
  AI Governance:
    • AI ethics guidelines and principles to guide development
    • Clear roles and responsibilities
    • Impact or risk assessment
    • Diverse and cross-functional team
    • Approval process or formal discussion of AI projects
    • Use AI in less sensitive domains
    • Backup plan for AI systems
    • Standard process or framework for AI development

AI Design and Development
  AI Design:
    • Human oversight
    • Simplest possible solution
    • Responsibility by design
    • Make explainability understandable
    • Stakeholder engagement in AI design and development
  MLOps:
    • Bias detection and mitigation
    • Continuous development
    • Model validation
    • Continuous monitoring

Competence and Knowledge Development
  Education and Training:
    • AI or data-related education for employees
  Research:
    • Follow the latest research, guidelines, and trends
    • Participate in AI-related initiatives, projects, or research
  AI and Data Understanding:
    • Understanding of the organization’s data and algorithms
    • Process documentation or modeling AI components

Stakeholder Communication
  AI and Data Communication:
    • Provide information about the organization’s data and algorithms
    • Inform transparently about human–AI interaction and automated decision-making

Figure 1. Data Structure for Ethical AI Practices

Results
As a result of the analysis process, we identified four dimensions of ethical AI practices: i) governance, ii)
AI design and development, iii) competence and knowledge development, and iv) stakeholder
communication. These four aggregate dimensions consist of eight 2nd order categories, namely data
governance, AI governance, AI design, MLOps, education and training, research, AI and data
understanding, and AI and data communication, each comprised of a set of ethical AI practices.

Ethical AI as Governance Practices


The first dimension, governance, refers to the set of administrative decisions and practices organizations
use to address ethical concerns regarding the deployment, development, and use of AI systems. The
governance practices consist of data governance and AI governance.
Data governance practices included responsible data collection and processing, implementation of GDPR
requirements, strong information security and data protection practices, and information security and
data protection audits. Moreover, responsibility and transparency in data use and governance were
frequently highlighted.
Responsible data collection and processing refers to how customer data is gathered, stored, and used.
Organizations communicated clearly and transparently about the collection and processing of data. Consumer
data rights and consent requirements for data processing were sometimes seen as prerequisites for AI
development. To that end, no data was collected without proper consent.
“First of all, we must be able to demonstrate how we process that data, and that we in fact process
it responsibly and to the purpose it was intended to be used for.” (I4)
Some ethical AI practices were initiated by GDPR requirements. However, it was also noted that in today’s
surveillance economy, these requirements are frequently violated for financial gain. Nonetheless, some
organizations value consumer data rights and even went beyond the bare minimum of GDPR requirements
by turning this into a service application that benefits the customer and increases transparency. GDPR was
seen to encompass every aspect of data governance, and thus, implementing GDPR requirements was
noted to solve many related ethical issues.
“Many ethical issues can be solved by implementing GDPR requirements. You have a clear
understanding of the data collection purposes, how it is processed, and based on what [legal]
grounds. And inform openly and transparently how it is processed and provide a chance to
influence how it is used. There you have the informing, the possibility to influence and consideration
of the legal basis.” (I2)
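A consent and purpose-limitation check of the kind the informant describes could take roughly the following shape. This is a minimal sketch; the consent store, purposes, and legal bases are illustrative assumptions, not an actual implementation from the interviewed organizations.

```python
# Minimal sketch of a consent and purpose-limitation check before any
# processing step, in the spirit of the GDPR practices described above.
# The consent store and purposes are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str          # e.g., "recommendations"
    legal_basis: str      # e.g., "consent", "contract"
    granted: bool

CONSENT_STORE = {
    ("user-001", "recommendations"):
        ConsentRecord("user-001", "recommendations", "consent", True),
}

def may_process(subject_id: str, purpose: str) -> bool:
    """Processing proceeds only if a granted record exists for this purpose."""
    record = CONSENT_STORE.get((subject_id, purpose))
    return record is not None and record.granted

if may_process("user-001", "recommendations"):
    pass  # run the model for this subject
if not may_process("user-001", "profiling"):
    print("No legal basis recorded for 'profiling'; skipping this subject.")
```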
Information security and data protection were perceived as an essential part of responsible data
governance. Thus, attention was paid to strong information security and data protection practices with
technical solutions and internal processes and policies. Some organizations even held the ISO 27001
certification for information security management, which covers the full range of information
security measures.
“We have always paid close attention to data protection and information security in all of our
systems.” (I4)
On top of the information security and data protection practices, external information security and data
protection audits were used for tech solutions involving customers. In addition, AI systems were also
scrutinized internally.
“We conduct an external audit of data protection and information security aspects for every system
that involves the customer in any way, before it is even piloted.” (I9)
AI governance practices included AI ethics guidelines and principles to guide development, clear roles and
responsibilities, impact or risk assessment, diverse and cross-functional team, approval process or formal
discussion of AI projects, use AI in less sensitive domains, backup plan for AI systems, and standard
process or framework for AI development.
The most evident ethical AI practice was for organizations to define their own AI ethics guidelines and
principles (also called tech strategy or rule book) and use them to guide development. For some, these were
a concrete rule book that guided the entire development process, and for others, it was a list of questions
that had to be addressed before the AI system could be deployed. Moreover, these guidelines were used to
identify and mitigate ethical issues in AI development. It was also common to publish these on the
organization’s website.
“We have, for example, published online our ethical AI principles, which are intended to be complied
with in AI initiatives and projects.” (I4)
The importance of clear roles and responsibilities regarding AI development and data governance was
widely highlighted. However, these were still relatively unclear for many, and the roles and responsibilities
were assigned differently in every organization. It was common for the CEO to be ultimately responsible for
the AI system in smaller organizations, whereas the development team or user were accountable in other
organizations. Surprisingly, accountability was frequently shifted downstream in the AI supply chain
toward the user (i.e., deployer organization or end-user), although roles and responsibilities were also
commonly assigned to and within the development team. If the user was held accountable, the AI systems
were mostly used to support decision-making, whereby the user is the one who makes the final decision. To
that end, the user decides how to use the AI system and is responsible for the consequences.
“We began to define the principles for ethical AI, and through that also to define different roles and
responsibilities internally in the organization.” (I9)
Impact or risk assessment was used to identify, manage, or mitigate potential risks and outcomes caused
by AI systems, but also to determine whether the systems should be developed or deployed in the first place.
It was emphasized that the assessment should be as comprehensive as possible and include both ethical
and legal aspects (e.g., how the system could be misused; what are the potential negative or positive
impacts; are there any safety risks or unsafe situations; are there any bias, data protection, privacy or legal
issues). A systematic process or a list of high-risk analytics scenarios was created for this purpose. The
assessment was often conducted by risk management, a separate review board (also called a round table),
or even by the company’s board of directors. In addition, it was emphasized that a diverse team should be
involved in the assessment with stakeholders from legal, privacy, development, and management teams.
“We generally make a risk statement of even the slightly unclear issues, which is conducted by risk
management, who gathers all the [necessary] information and creates a statement of the risks
involved with it.” (I7)
The lack of diversity in AI design and development teams was noted to be a challenge, and thus, the
importance of a diverse and cross-functional team was frequently emphasized. It was noted that a team
with various backgrounds and competencies should be used in AI design and development to better
understand and identify the ethical challenges throughout the system’s life cycle.
“A team with various competencies, not just the coder with a technical background but a
multidisciplinary team and also a diverse team so that it has a broad spectrum of our society at
large.” (I3)
An approval process or formal discussion of AI projects was used to determine whether to approve or deny
a project. It was common to use an impact or risk assessment to that end. Both legal and ethical aspects
were considered, and if the risks were too high or legal requirements could not be fulfilled, the AI project
would not be approved. The approval process was often conducted by a separate review board with people
from multiple business functions (e.g., legal, privacy, development, and management). However, the
process was not always very systematic and extensive as it was sometimes performed merely by the
company’s board. Furthermore, it was a recurring theme to not approve or develop AI systems for certain
purposes, customers, or industries, such as military, instant loan, or gambling organizations.
“All new projects are reviewed by the company’s board, and the risks are also assessed in that
context, particularly if the data includes personal information, so that are we allowed to do this
and what are the potential consequences, negative or positive, if we decide to approve it. There have
been cases where we, I mean the board, have decided not to approve a project as it sounds too
sensitive.” (I5)
Due to the apparent risks posed by AI and automation, it was a frequent theme to only use AI in less
sensitive domains. The sensitive domains included decisions with significant impacts on people’s lives and
extensive data enrichment, which could be easily misused. However, the concept of sensitivity varied by
organization, and a sensitive domain for some could be basic business for others, such as the use of medical
or biometric data. Some had decided to use AI systems only in supportive functions, not in the main
business operations, to avoid the use of AI in sensitive domains. This was particularly true for public
organizations.
“At the moment, how should I say it, we don’t use so sensitive data, or AI in the kind of automated
decision-making that would result in a real threat to the customer.” (I13)
It was common to have a backup plan for AI systems for situations where the system malfunctions or
cannot be used. The purpose of a backup plan was to maintain operation levels and minimize downtime.
For some, the backup plan was a former way of doing the same process (e.g., manual process or earlier
version), and for others, it was simply an action plan to take the system offline and react quickly to fix the
issue.
“We consider these operating situations where our AI systems are not in use. In these activities and
processes, we build these by default in a way that they also function in situations where the
analytics solutions are not working. So, in these situations, we can take the analytics models offline
and revert to the raw process.” (I7)
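The fallback pattern described by I7 can be sketched as a simple wrapper that reverts to the former, rule-based process when the model is unavailable. The function names and the rule itself are illustrative assumptions.

```python
# Minimal sketch of the "backup plan" pattern described above: if the
# analytics model is unavailable or fails, fall back to the former
# manual/rule-based process. Names and rules are illustrative assumptions.

import logging

def model_prediction(application: dict) -> str:
    raise TimeoutError("model endpoint not responding")  # simulate an outage

def raw_process(application: dict) -> str:
    # The pre-AI way of doing the same process, e.g., a simple rule.
    return "approve" if application["income"] >= 30000 else "refer_to_human"

def decide(application: dict) -> str:
    try:
        return model_prediction(application)
    except Exception as exc:
        logging.warning("Model offline (%s); reverting to raw process.", exc)
        return raw_process(application)

print(decide({"income": 42000}))  # -> "approve" via the fallback path
```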
The lack of universal taxonomy and standardization was noted to be a major challenge for AI research and
development (R&D). To address this, organizations had created a standard process or framework for AI
development (also called a pipeline, design model, or methodology). The process commonly comprised
clear rules and tools for AI design and development (i.e., bias detection, monitoring, testing, or training) to
make the entire process more coherent and systematic. The purpose of a standard process was to eliminate
potential dependencies, human or software, and to ensure that the AI systems are of uniform quality.
“We standardize the AI development so that it would be just like any other software. There is a
standard process to develop it, so that there are no human or software dependencies. The
dependencies are in a way eliminated.” (I8)

Ethical AI as AI Design and Development Practices


While the governance dimension outlined the higher-level ethical AI policies and practices, the second
dimension, AI design and development, refers to the set of practical methods and practices. The AI design
and development practices consist of AI design and MLOps.
AI design practices included human oversight, simplest possible solution, responsibility by design, making
explainability understandable, and stakeholder engagement in AI design and development. It was
highlighted that AI systems should be developed with responsible functionalities and values in mind and
that these should be implemented in the AI design.
Human oversight refers to using AI systems to support, improve, or facilitate decision-making while
retaining human oversight or control of the process. Organizations had the capability to intervene in the AI
system's operation by keeping a human in command. Therefore, AI systems were mostly used to support
decision-making, and the user was the one who made the final decision. Automated decision-making was
also used for decisions and processes that only had a positive outcome for the user or the people affected.
As an example, a financial organization used an AI system to approve loan applications, and if the
application could not be approved by the system, it would be transferred to a human operator for further
assessment. In that case, the human operator makes the final decision.
“Right from the start, we decided that our AI system will not make a single decision on the behalf
of the user. So our AI system gives recommendations, but everything that requires a decision, it is
always the end-user who makes the final call.” (I11)
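The routing logic behind such human-in-command decision support might look roughly as follows; the threshold and labels are illustrative assumptions rather than any informant's actual configuration.

```python
# Minimal sketch of human-in-command decision support, mirroring the loan
# example above: the model may approve only clear positive cases; anything
# else is routed to a human operator. Threshold and labels are assumptions.

def route_application(approval_probability: float) -> str:
    """Automate only the positive outcome; defer everything else to a human."""
    AUTO_APPROVE_THRESHOLD = 0.95  # illustrative; set by risk appetite
    if approval_probability >= AUTO_APPROVE_THRESHOLD:
        return "approved_automatically"
    return "escalated_to_human_operator"

for p in (0.99, 0.80, 0.40):
    print(p, "->", route_application(p))
```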
It was a recurring theme to use the simplest possible solution to better understand and manage the AI
systems. Simpler solutions were also noted to be easier and cheaper to develop and maintain. Furthermore,
it was emphasized that AI systems should not be used in every scenario. Instead, every system should be
designed with the objective in mind and the right solution chosen that best fits the objective. Sometimes,
more complex solutions, such as DNNs, are required, and other times the same objective can be achieved
without the use of AI. However, a decision and balance between accuracy, complexity, and interpretability
might have to be made.
“[We] always try to solve every problem as simply as possible and use the kind of algorithms that
are not too complex and that we understand how they work, etc. where possible.” (I5)
Developing AI systems with responsible functionalities and values in mind, and implementing these
directly in the AI design, is referred to as responsibility by design (also
called traceability by design, privacy by design, or transparency by default). Indeed, it was noted that it
is easier to implement these into the design right from the start, rather than build on top of the system
afterward. Furthermore, it was common for organizations or their clients to have transparency
requirements that had to be implemented in the AI systems.
“It’s mostly that if you get traceability on from the start, then it is much easier to maintain. But if
you don’t, it’s very difficult to build on top of it afterward.” (I8)
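As a minimal sketch of what building traceability in from the start can mean in code, each prediction below is logged together with its inputs and model version; the field names and in-memory log are illustrative assumptions.

```python
# Minimal sketch of traceability by design: every prediction is logged with
# its inputs, model version, and timestamp from the start, so decisions can
# be audited later. Field names are illustrative assumptions.

import json
import datetime

AUDIT_LOG = []  # in practice an append-only store, not an in-memory list

def predict_with_trace(model_version: str, features: dict) -> float:
    score = 0.5  # placeholder for the real model call
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "score": score,
    })
    return score

predict_with_trace("risk-model-1.3.0", {"income": 42000, "tenure": 5})
print(json.dumps(AUDIT_LOG[-1], indent=2))
```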
The importance of making explainability understandable was highlighted. Indeed, sometimes it is not
enough to provide the source code or mathematical explanations, which are only understandable to experts
in the field. Explainability should rather be created with the audience in mind, who are more often ordinary
consumers with limited knowledge of AI. However, it was also noted that providing too much information
may backfire and confuse or distress the customer. An example of this would be today’s cookie notifications
with too much information and too many options to opt out. Therefore, it is essential to know your customer
and the target audience.
“When we deliver new applications with AI solutions to our retailers, we have to inform what it is
about and based on what the decisions are made in an uncomplicated manner if they are made by
an AI. So this requires us to simplify and make things transparent, which requires new kinds of
skills for the development team and the entire system.” (I13)
Organizations used stakeholder engagement in AI design and development to achieve a customer-driven
approach for AI. Indeed, AI consultancies were in close collaboration with their clients and used their
expertise to verify the AI system’s results. In addition, organizations surveyed their users’ opinions and
views of AI use to gain a better understanding. For example, a public organization had studied opinions
on AI use in the public sector.
“We do not deploy anything before the results are checked multiple times together with the client so
that they look logical to the experts as well.” (I5)
MLOps (machine learning operations) refers to the AI development practices throughout the entire
development pipeline—from model training and validation to the detection of errors and biases. These
included bias detection and mitigation, continuous development, model validation, and continuous
monitoring.
Bias detection and mitigation was emphasized as an essential part of ethical AI development. The datasets
should be examined for existing biases before they are used to train the AI systems. Indeed, it was noted
that many bias issues could be addressed or prevented with a thorough examination of the dataset’s
features. Moreover, it was highlighted that the datasets should be as representative and diverse a sample of
the target population as possible.
“By examining the data and the features of the data before we even begin to develop the AI system.
It is a key component in the entire data and AI processing that we know the data we are about to
use and the features of that data … Of course you can see the bias from the data. If the credit limit
is always higher for men than women, or women get loan approvals easier than men, so you can
see that already from the data. Just like all the age, sex, and race related biases. These are already
in the data and if you just have the patience to take the time and examine the data before you throw
it in the AI system’s black box.” (I2)
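The kind of pre-training data examination the informant describes can be sketched as a simple group-wise comparison of outcome rates; the toy data, columns, and tolerance are illustrative assumptions.

```python
# Minimal sketch of the data examination described above: before training,
# compare outcome rates across groups in the training data to surface
# potential bias. The toy data and columns are illustrative assumptions.

import pandas as pd

data = pd.DataFrame({
    "sex":      ["M", "M", "F", "F", "M", "F", "M", "F"],
    "approved": [1,   1,   0,   1,   1,   0,   1,   0],
})

# Approval rate per group; a large gap warrants investigation before the
# data is thrown in the AI system's black box.
rates = data.groupby("sex")["approved"].mean()
print(rates)
gap = rates.max() - rates.min()
if gap > 0.2:  # illustrative tolerance
    print(f"Warning: approval-rate gap of {gap:.2f} across groups.")
```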
Many informants noted that AI development is a continuous cycle of training, testing, monitoring, and
retraining. In other words, it is continuous development. Indeed, it was common for organizations to use
continuous integration/continuous deployment (CI/CD) pipelines to facilitate AI development. Moreover,
it was highlighted that AI systems can always be improved with cumulative training data and advances in
AI technologies.
“The basic assumption of AI is that the system is never really complete, but it rather improves over
time as the training data accumulates, and it gets better all the time, that’s the basic statement.”
(I8)
Model validation was used to verify the correctness of the AI system’s results. Both fairness and correctness
should be tested rigorously and within all the target groups, including smaller subgroups. Thus, not only
should the average performance be monitored, but the systems should be “as rigorous as any other code.”
The practices mentioned included peer reviews, internal audits, and even an entire model validation unit
dedicated to all mathematical model validations. Furthermore, pilot projects were used to test the AI
system’s functionality before full deployment.
“We try to train it comprehensively. And of course, test it too. And I must emphasize that we cannot
just look at the algorithm’s general performance but also check it in different target groups. If we
get good results on average, [how] can we be sure that they are good in various subgroups that
might be smaller.” (I2)
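A minimal sketch of such subgroup validation follows: accuracy is reported not only on average but within each group, including smaller ones that averages can hide. The toy predictions and group labels are illustrative assumptions.

```python
# Minimal sketch of subgroup validation as described above: report the
# model's accuracy not only on average but within each target group.
# The toy predictions and group labels are illustrative assumptions.

import pandas as pd

results = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "C", "C", "C"],
    "y_true": [1,   0,   1,   1,   0,   0,   1,   1],
    "y_pred": [1,   0,   1,   0,   1,   0,   1,   0],
})

results["correct"] = results["y_true"] == results["y_pred"]
print("Average accuracy:", results["correct"].mean())
# Accuracy per subgroup, including small ones that averages can hide.
print(results.groupby("group")["correct"].mean())
```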
Even if the AI system is perfectly functional today, it might not be as good and reliable in the future.
Therefore, continuous monitoring was used to maintain and manage the AI systems but also to detect
model drifts, model decay, errors, and biases. The monitoring activities included scheduled basic reports
and random inspections. The world changes, and the systems should change with it. Indeed, the AI system’s
performance should be monitored over time, and the system retrained or otherwise modified when
necessary.
“Things can change over time. And the other is continuous monitoring. We create regular cycles
where we assess if the algorithm is still fully functional or if it should be renewed.” (I2)
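A scheduled drift check in the spirit of these regular assessment cycles could be sketched as follows, comparing recent inputs against the training distribution with a two-sample test; the data, feature, and significance threshold are illustrative assumptions.

```python
# Minimal sketch of a scheduled drift check, in the spirit of the regular
# assessment cycles described above: compare recent input data against the
# training data and flag the model for review when they diverge.
# The data and threshold are illustrative assumptions.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
recent_feature = rng.normal(loc=0.4, scale=1.0, size=1000)  # drifted input

statistic, p_value = ks_2samp(training_feature, recent_feature)
if p_value < 0.01:  # illustrative significance level
    print(f"Drift detected (KS={statistic:.3f}); schedule review/retraining.")
else:
    print("No significant drift in this cycle.")
```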

Ethical AI as Competence and Knowledge Development Practices


The third dimension, competence and knowledge development, refers to the set of practices used to
promote the skills, know-how, and awareness required to implement ethical AI. The competence and
knowledge development practices consist of education and training, research, and AI and data
understanding.
Education and training included AI and data-related education for employees. The purpose of internal
training was to promote general AI knowledge and awareness of AI ethics themes, such as responsibility
and privacy. Furthermore, AI systems are subject to many misconceptions and unrealistic expectations.
Therefore, education was used to dispel these illusions and inform about the real-life risks and opportunities
AI presents.
One of the main challenges of implementing ethical AI was noted to be the lack of understanding and
knowledge. To address this challenge, it was common for organizations to organize AI and data-related
education for employees. The internal education themes included general information on AI and data, data
protection and privacy (e.g., GDPR, personal data, data quality, anonymization, and pseudonymization),
and communication about the organization’s AI systems. Moreover, both ethical and legal aspects were
often discussed. The training sessions included seminars, webinars, workshops, and short online training
courses. It was noted that AI education should be targeted to the entire organization, not only to the
development team.
“In the AI side, especially with the data crew, we have had these internal webinars, where we have
presented, for example, this privacy-preserving AI or data pseudonymization or anonymization
and how data leaks from anonymized data. And then we have had various GDPR and ‘my data’
kind of presentations for the whole firm, which have had a few dozen people listening, and there
we have discussed this ethics aspect too. So yeah, I have held, well maybe a few in a year, like these
kinds of meetings and seminars that reach several dozen people, and there we have discussed what
is personal data, what is modern analytics, and how data protection and ethics are involved with
these.” (I6)
Research refers to organizations’ own research activities, including participation in AI studies and
knowledge of the existing trends, frameworks, guidelines, and other literature. Indeed, organizations
emphasized “proactivity” in this area. The research practices included following the latest research,
guidelines, and trends and participating in AI-related initiatives, projects, or research.
To keep up with the fast progress of AI, organizations have to follow the latest research, guidelines, and
trends. For example, explainable AI has gained much attention, and thus, many of the interviewed
organizations were interested in XAI solutions. Even though none of the organizations had any actual
implementations of these solutions to date, some had started research and development activities on the
topic and studied the potential opportunities it presents.
“[Explainable AI] is actually a subject I would be interested in using in our development pipeline.
Currently, in these experiment projects, we already have components where we have implemented
these. There’s the benefit that with these we may get ideas on how to improve the model. So yes, we
have made some groundwork so that we could implement these … For example, a DALEX package,
which provides a wide range of different solutions to implement explainable AI. We also experiment
with other types of solutions, but we are mainly using R, so these applications by R are the easiest
to implement.” (I4)
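The informant refers to R's DALEX package; as a rough Python analogue of the same idea, the sketch below uses scikit-learn's permutation importance on toy data to see which features a model relies on. This is our illustration, not the informant's actual pipeline.

```python
# A rough Python analogue of the kind of explainability component the
# informant describes (the quote refers to R's DALEX package; this sketch
# instead uses scikit-learn's permutation importance on toy data).

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature hurt performance? Large drops
# indicate features the model relies on, and ideas for improving the model.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```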
Moreover, organizations showed proactivity by participating in AI-related initiatives, projects, or
research. Participating in AI research projects was a way for organizations to become aware of the latest
trends but also to be the ones shaping them. Indeed, some of the interviewed organizations were involved
in the IEEE and ECPAIS (The Ethics Certification Program for Autonomous and Intelligent Systems)
workgroups creating AI standards and certificates.
“We have been involved in the ECPAIS workgroup, so we try to stay up to date on these things, and
in a way, be involved in the discussion of international standards, etc.” (I9)
AI and data understanding refers to the documentation and understanding of the organization’s own AI
systems and data. These practices included understanding of the organization’s data and algorithms and
process documentation or modeling AI components.
Organizations highlighted that they must have an understanding of the organization’s data and algorithms
to manage the AI systems and take responsibility for their decisions and impacts. To that end, some
organizations tried to avoid the use of algorithms they did not understand (i.e., black boxes), while others
used mathematical model validation, documentation, or graphic modeling to understand their AI systems’
behavior and the most significant concepts. Moreover, it was noted that a comprehensive understanding of
the organization’s datasets was needed to identify the existing biases. Thus, systematic data collection
pipelines and curation processes were used to maintain data quality and control but also to pseudonymize
or anonymize sensitive data by design. However, even though algorithmic transparency was often
emphasized, it was not always achievable due to opaque and complex AI systems.
“Our leading thought is that, for example, we have to understand how our AI systems work. That
we cannot have the kind of, technologically or otherwise, total black boxes. And more precisely we
have to understand the systems we use.” (I4)
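Where sensitive data is pseudonymized by design, a pipeline entry point might replace direct identifiers with keyed hashes along the following lines; the key handling and field names are illustrative assumptions.

```python
# Minimal sketch of pseudonymization by design, as mentioned above: direct
# identifiers are replaced with keyed hashes when data enters the pipeline,
# so analysts work with pseudonyms rather than raw identities.
# The secret key handling here is an illustrative assumption.

import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-in-a-vault"  # illustrative; never hard-code

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed hash: same input maps to the same pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "FI-123456", "purchase_total": 59.90}
record["customer_id"] = pseudonymize(record["customer_id"])
print(record)
```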
Organizations used process documentation or modeling AI components to promote transparency. Indeed,
a graphic database or some other type of modeling tool was used to visualize the AI system’s process flows
or the most significant concepts of the system. Moreover, the importance of documentation was
emphasized, which is why it was common to document the algorithms, data, and AI development processes.
“Representation of the internal processes are extremely important and all the data flows, etc. And
the kind of AI ecosystem we have and where it gets the data and what is involved with it, the process
descriptions and these kinds of bigger pictures.” (I13)

Ethical AI as Stakeholder Communication Practices


The fourth and last dimension of ethical AI practices, stakeholder communication, refers to the set of
communication practices organizations use to inform about their ethical AI practices, algorithms, or data.
The stakeholder communication practices consist of AI and data communication.
AI and data communication included providing information about the organization’s data and
algorithms and informing transparently about human–AI interaction and automated decision-making.
Communication practices were used to promote the organization’s ethical AI practices to build trust with
society and customers.
Customer trust was built by providing information about the organization’s data and algorithms. It was a
recurring theme to provide additional information about the organization’s AI systems and data on top of
the GDPR bare minimum. To that end, an AI register platform had been piloted, in which organizations can
systematically group and classify their AI portfolio. The AI register can then be published to target
stakeholders. Furthermore, it was acknowledged that the organization’s AI systems are not always 100
percent correct, which was then emphasized in marketing communications. Moreover, organizations
communicated transparently about unsafe situations, data processing, or most significant concepts of the
AI systems to target stakeholders. Indeed, by providing information about the AI systems and their
strengths and weaknesses, organizations were trying to build trust with customers and increase the
credibility of their services.
“The goal we would like to achieve is that anyone who visits our customer service, no matter the
service channel, has the opportunity to get the adequate information of how AI or automation is
used in general.” (I9)
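An entry in such an AI register could be structured roughly as follows; all fields are illustrative assumptions rather than the piloted platform's actual schema.

```python
# Minimal sketch of an AI register entry of the kind piloted above:
# a structured record describing each system so that it can be grouped,
# classified, and published to stakeholders. All fields are illustrative
# assumptions rather than the actual register's schema.

from dataclasses import dataclass, field, asdict

@dataclass
class AIRegisterEntry:
    name: str
    purpose: str
    data_sources: list = field(default_factory=list)
    automated_decisions: bool = False
    human_contact_channel: str = ""
    known_limitations: str = ""

entry = AIRegisterEntry(
    name="Customer service chatbot",
    purpose="Answer routine service questions",
    data_sources=["FAQ corpus", "service logs"],
    automated_decisions=False,
    human_contact_channel="customer support",
    known_limitations="Not always 100 percent correct; may hand over to a human.",
)
print(asdict(entry))
```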
Lastly, it was common for organizations to inform transparently about human–AI interaction and
automated decision-making. Informing about human–AI interaction was particularly highlighted with AI
chatbot services but also with other services in general. Moreover, it was noted that organizations should
inform transparently about automated decision-making so that users have the option to have the decision
reviewed by a human operator.
“The thing that maximizes that trust is this honest, open communication about it. Such as that we
inform what we do, inform, indicate when the customer is dealing with an actual human or AI. And
if an automated decision has been made, we are transparent about it, and it is possible to contact
customer support and find out why the decision was made and if it could be reverted now that I’m
dealing with a human.” (I7)

Discussion
Key Findings
Key Finding 1: Ethical AI principles are implemented through four sets of practices, namely i)
governance, ii) AI design and development practices, iii) competence and knowledge development, and
iv) stakeholder communication.
The ethical AI practices identified here correspond with those outlined in AI guidelines and frameworks
(e.g., AI HLEG 2020; Fjeld et al. 2020; Floridi et al. 2018; Hagendorff 2020; Jobin et al. 2019; Kroll 2018;
Ryan and Stahl 2020; Shneiderman 2020; Vakkuri, Kemell, and Abrahamsson 2020). Indeed, as suggested
by AI HLEG (2019), ethical AI practices encompassed many stages of AI’s life cycle and included both
technical and non-technical methods. This indicates that organizations are aware of the current ethical AI
guidelines and frameworks and that they understand the value of ethical AI. However, there was a
considerable variance in how the practices were applied, as no single practice was mentioned
in all the interviews. This supports the view that AI governance and the implementation
of AI ethics is in a formative stage (Mayer et al. 2021; Minkkinen et al. 2021; Vakkuri et al. 2020).
Furthermore, the objective was not to create a complete list of all the existing ethical AI practices but rather
to conceptualize some of the ones implemented today.
Key Finding 2: Roles, responsibilities, and accountabilities related to AI and its impacts are often
ambiguous.
The data exhibit considerable variance in the roles and responsibilities regarding AI ethics. For many of the
interviewed organizations, the roles and responsibilities were either unclear or in a formative stage. We also
observed that in larger organizations, the development team was considered responsible for the ethical
conduct of their algorithms, while the role and the ultimate responsibility of the CEO was highlighted in
start-up-sized companies. In line with the findings reported by Vakkuri, Kemell, and Abrahamsson (2019)
and Vakkuri et al. (2019), we also observed that the responsibility for the AI systems was frequently shifted
downstream in the AI supply chain toward the user, either to the deployer organization or even the end
user.
Key Finding 3: AI standards, certificates, and audits, as well as explainable AI systems, have not been
implemented in AI deployment.
While AI-specific standards, certificates, and audits, as well as explainable AI systems, were
frequently recommended by the ethical AI guidelines and literature (see AI HLEG 2019; Barredo Arrieta et
al. 2020; Fjeld et al. 2020; Floridi et al. 2018; Jobin et al. 2019; Kroll 2018; Kumar et al. 2020; Shneiderman
2020), their actual use was very limited. This may be because these instruments are still mainly in a
formative stage and few commercial products are available. Indeed, even knowledge of AI
standards and certificates was limited, although some international institutes are working on them (e.g.,
the IEEE P7000 series and ISO standards). Moreover, it seems that AI organizations were not yet capable of
developing explainable AI systems, despite the emphasis these receive in the literature.
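To illustrate what even a lightweight step toward explainability could look like in practice, the sketch below applies model-agnostic permutation importance using scikit-learn. This is one example technique under the assumption of a tabular classification setting; it is not a method reported by the interviewed organizations.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an otherwise opaque model on a public tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the held-out score drops; large drops indicate influential features.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for feature, score in ranked[:5]:
    print(f"{feature}: {score:.3f}")
```

Such post-hoc measures explain model behavior only at an aggregate level; the fuller explainability emphasized in the XAI literature (Barredo Arrieta et al. 2020) requires considerably more.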

Implications
This study contributes to the ongoing discussion of AI ethics (see Asatiani et al. 2020; Felzmann et al. 2020;
Fjeld et al. 2020; Floridi et al. 2018; Hagendorff 2020; Jobin et al. 2019; Kroll 2018; Morley et al. 2020;
Ryan and Stahl 2020; Shneiderman 2020; Vakkuri, Kemell, and Abrahamsson 2020; Vakkuri et al. 2020)
by providing empirical insights into how ethical AI principles are put into practice in AI development and
deployment (see Mayer et al. 2021). To this end, we identified that ethical AI principles unfold in four sets
of practices, namely i) governance, ii) AI design and development, iii) competence and knowledge
development, and iv) stakeholder communication. Collectively, these findings point out that the
implementation of ethical AI a) goes beyond technical methods, b) includes several governance practices,
and c) includes various stakeholders. In doing so, our results underscore the importance of a sociotechnical
perspective balancing the benefits and risks related to organizational AI deployment (Asatiani et al. 2021;
Mayer et al. 2021). Our findings align with prior research (see, e.g., Mayer et al. 2021) in highlighting the
role of practices such as risk assessment, competence development, and cross-functional collaboration. In
addition, we extend Mayer et al. (2021) by underscoring and further elucidating practices related to data
governance, MLOps, stakeholder engagement, and technical requirement specification in AI design.
Second, we further the understanding of factors inhibiting the execution of ethical AI in practice and thus
contribute to the emerging literature on AI management (Berente et al. 2019) and AI governance in
organizations (Schneider et al. 2020). As our findings imply, large-scale implementation of ethical AI
standards and certification, as well as explainable AI systems, is yet to happen. In particular, the ambiguity
in roles and accountabilities related to AI ethics implies that the field is in a formative stage.
Third, this study advances the understanding of the business value and business drivers of ethical AI.
Indeed, the ethical AI practices identified here were relatively similar to those recommended by today’s
ethical AI guidelines and frameworks. This indicates that frontrunner organizations appear to be aware of
the current trends and that the value of ethical AI is well understood. At the same time, tensions
between the ethical and business sides recurred in the data. Indeed, rather than stemming from a deep
commitment to ethics as such, the implementation of ethical AI principles was often considered beneficial
for business or something that will ultimately become a necessary evil in the future. All in all, these findings highlight the
existence and the role of the business side of AI ethics.

Limitations and Future Research


Due to the empirical and qualitative nature of this research, the results are limited to the data available and
subject to interpretation. At the same time, the limitations of this study provide opportunities for future
research and are therefore discussed in conjunction with potential areas of future research.
First, the study was conducted in Finland, and the data was collected from organizations considered
frontrunners in AI. Future research could expand the scope of the inquiry to other contexts. In addition,
future research could focus on a specific company, industry, or application area of AI. Finally, future studies
could conduct comparisons between different countries, industries, or small and large organizations to
obtain a more comprehensive picture of the field.
Second, AI governance and the implementation of AI ethics are in a formative stage (Mayer et al. 2021;
Minkkinen et al. 2021; Vakkuri et al. 2020). Therefore, the results are bound to this specific point in time and
would arguably have been different had the study been conducted at another time. Ethical AI practices,
AI-specific standards, certificates, and audits, as well as explainable AI systems, are only beginning to
take shape and be commercialized. In addition, the regulatory landscape is still forming. Therefore,
it would be insightful to conduct a longitudinal study to trace the developments and progress of the field.
Third, we observed that practices related to the governance of AI itself, as well as of the data used by the
algorithms, play a crucial role in implementing ethical AI principles. Future research could therefore focus
deliberately on AI governance and examine the extent to which existing tools and frameworks for, for example, IT
governance (Tiwana et al. 2013; Weill and Ross 2005) and data governance (Abraham et al. 2019; Kroll
2018) apply to AI.
Fourth, the current study and the respective results focused on governing AI at the organization deploying
the AI system, as that is the entity legally liable for the impacts of the system. However, the development,
deployment, and use of the actual AI system often take place in an ecosystem transcending organizational
boundaries (Minkkinen et al. 2021; Newlands 2021; Tubaro and Casilli 2019). In our results, this
collaboration manifested itself in stakeholder engagement and in organizations frequently shifting the
responsibility for the AI system downstream in the AI supply chain, for example, to the user of the system.
All in all, future research could deliberately examine the inter-organizational activities related to AI
governance.
In summary, prior research on AI ethics has been highly theoretical and conceptual, focusing on creating
ethical AI guidelines and frameworks, whereas empirical research remains scarce. Therefore, future
research should address this gap between theory and practice to uncover existing best practices, which can
then aid in the creation of new methods, tools, frameworks, and guidelines for translating ethical AI
principles into practice.

Acknowledgements
This study was conducted under the AI Governance and Auditing (AIGA) research project and financially
supported by the AI Business Program of Business Finland.

References
Abraham, R., Schneider, J., and vom Brocke, J. 2019. “Data Governance: A Conceptual Framework,
Structured Review, and Research Agenda,” International Journal of Information Management (49),
pp. 424–438. (https://doi.org/10.1016/j.ijinfomgt.2019.07.008).
Ågerfalk, P. J. 2020. “Artificial Intelligence as Digital Agency,” European Journal of Information Systems
(29:1), Abingdon: Taylor & Francis, pp. 1–8. (https://doi.org/10.1080/0960085X.2020.1721947).
Asatiani, A., Malo, P., Nagbøl, P. R., Penttinen, E., Rinta-Kahila, T., and Salovaara, A. 2020. “Challenges of
Explaining the Behavior of Black-Box AI Systems,” MIS Quarterly Executive (19:4), p. 259.
Asatiani, A., Malo, P., Nagbøl, P. R., Penttinen, E., Rinta-Kahila, T., and Salovaara, A. 2021. “Sociotechnical
Envelopment of Artificial Intelligence: An Approach to Organizational Deployment of Inscrutable
Artificial Intelligence Systems,” Journal of the Association for Information Systems (22:2), Atlanta:
Association for Information Systems, p. 8. (https://doi.org/10.17705/1jais.00664).
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-
Lopez, S., Molina, D., Benjamins, R., Chatila, R., and Herrera, F. 2020. “Explainable Artificial
Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI,”
Information Fusion (58), pp. 82–115. (https://doi.org/10.1016/j.inffus.2019.12.012).
Benbya, H., Pachidi, S., and Jarvenpaa, S. 2021. “Special Issue Editorial: Artificial Intelligence in
Organizations: Implications for Information Systems Research,” Journal of the Association for
Information Systems (22:2), Atlanta: Association for Information Systems, p. 10.
(https://doi.org/10.17705/1jais.00662).
Berente, N., Gu, B., Recker, J., and Santhanam, R. 2019. “Managing AI,” Call for Papers, MIS Quarterly.
Breidbach, C. F., and Maglio, P. 2020. “Accountable Algorithms? The Ethical Implications of Data-Driven
Business Models,” Journal of Service Management (31:2), Bingley: Emerald Group Publishing Ltd, pp.
163–185. (https://doi.org/10.1108/JOSM-03-2019-0073).
Cath, C. 2018. Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and
Challenges, The Royal Society Publishing.
Chiao, V. 2019. “Fairness, Accountability and Transparency: Notes on Algorithmic Decision-Making in
Criminal Justice,” International Journal of Law in Context (15:2), Cambridge, UK: Cambridge
University Press, pp. 126–139. (https://doi.org/10.1017/S1744552319000077).
van Esch, P., Black, J. S., and Ferolie, J. 2019. “Marketing AI Recruitment: The next Phase in Job
Application and Selection,” Computers in Human Behavior (90), Elsevier Ltd, pp. 215–222.
(https://doi.org/10.1016/j.chb.2018.09.009).
Felzmann, H., Fosch-Villaronga, E., Lutz, C., and Tamò-Larrieux, A. 2020. “Towards Transparency by
Design for Artificial Intelligence,” Science and Engineering Ethics (26:6), pp. 3333–3361.
(https://doi.org/10.1007/s11948-020-00276-4).
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., and Srikumar, M. 2020. Principled Artificial Intelligence:
Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI, Berkman Klein
Center for Internet & Society.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R.,
Pagallo, U., Rossi, F., Schafer, B., Valcke, P., and Vayena, E. 2018. “AI4People—An Ethical Framework
for a Good AI Society: Opportunities, Risks, Principles, and Recommendations,” Minds and Machines
(28:4), pp. 689–707. (https://doi.org/10.1007/s11023-018-9482-5).
Gioia, D. A., Corley, K. G., and Hamilton, A. L. 2013. “Seeking Qualitative Rigor in Inductive Research,”
Organizational Research Methods (16:1), Los Angeles, CA: Sage Publications, pp. 15–31.
(https://doi.org/10.1177/1094428112452151).
Hagendorff, T. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines,” Minds and Machines
(Dordrecht) (30:1), Dordrecht: Springer, pp. 99–120. (https://doi.org/10.1007/s11023-020-09517-8).
Harlow, H. 2018. “Ethical Concerns of Artificial Intelligence, Big Data and Data Analytics,” in European
Conference on Knowledge Management, Kidmore End: Academic Conferences International Limited,
pp. 316–323.
High-Level Expert Group on AI. 2019. “Ethics Guidelines for Trustworthy AI,” Brussels, April.
(https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai).
Ho, C. W. L., Soon, D., Caals, K., and Kapur, J. 2019. “Governance of Automated Image Analysis and Artificial
Intelligence Analytics in Healthcare,” Clinical Radiology (74:5), England: Elsevier Ltd, pp. 329–337.
(https://doi.org/10.1016/j.crad.2019.02.005).
Jobin, A., Ienca, M., and Vayena, E. 2019. “The Global Landscape of AI Ethics Guidelines,” Nature Machine
Intelligence (1:9), pp. 389–399. (https://doi.org/10.1038/s42256-019-0088-2).
Kaplan, A., and Haenlein, M. 2019. “Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the
Interpretations, Illustrations, and Implications of Artificial Intelligence,” Business Horizons (62:1),
Elsevier Inc, pp. 15–25. (https://doi.org/10.1016/j.bushor.2018.08.004).
Kroll, J. A. 2018. “Data Science Data Governance [AI Ethics],” IEEE Security & Privacy (16:6), pp. 61–70.
(https://doi.org/10.1109/MSEC.2018.2875329).
Kumar, A., Braud, T., Tarkoma, S., and Hui, P. 2020. Trustworthy AI in the Age of Pervasive Computing
and Big Data.
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., and Vinck, P. 2018. “Fair, Transparent, and Accountable
Algorithmic Decision-Making Processes,” Philosophy & Technology (31:4), pp. 611–627.
(https://doi.org/10.1007/s13347-017-0279-x).
Lincoln, Y. S., and Guba, E. G. 1985. Naturalistic Inquiry, Newbury Park, Calif: Sage.
Martin, K. 2019. “Designing Ethical Algorithms,” MIS Quarterly Executive, pp. 129–142.
(https://doi.org/10.17705/2msqe.00012).
Mayer, A.-S., Haimerl, A., Strich, F., and Fiedler, M. 2021. “How Corporations Encourage the
Implementation of AI Ethics,” in 29th European Conference on Information Systems (ECIS).
(https://eref.uni-bayreuth.de/65097/).
Minkkinen, M., Zimmer, M. P., and Mäntymäki, M. 2021. “Towards Ecosystems for Responsible AI:
Expectations, Agendas and Networks in EU Documents,” in Proceedings of the 20th IFIP Conference
on E-Business, e-Service and e-Society.
Mittelstadt, B. 2019. “Principles Alone Cannot Guarantee Ethical AI,” Nature Machine Intelligence (1:11),
pp. 501–507. (https://doi.org/10.1038/s42256-019-0114-4).
Morley, J., Floridi, L., Kinsey, L., and Elhalal, A. 2020. “From What to How: An Initial Review of Publicly
Available AI Ethics Tools, Methods and Research to Translate Principles into Practices,” Science and
Engineering Ethics (26:4), Dordrecht: Springer, pp. 2141–2168. (https://doi.org/10.1007/s11948-019-
00165-5).
Murphy, C., Klotz, A. C., and Kreiner, G. E. 2017. “Blue Skies and Black Boxes: The Promise (and Practice)
of Grounded Theory in Human Resource Management Research,” Human Resource Management
Review (27:2), pp. 291–305. (https://doi.org/10.1016/j.hrmr.2016.08.006).
Newlands, G. 2021. “Lifting the Curtain: Strategic Visibility of Human Labour in AI-as-a-Service,” Big Data
and Society (8:1). (https://doi.org/10.1177/20539517211016026).
Patton, M. Q. 1990. Qualitative Evaluation and Research Methods, (2nd ed.), Newbury Park, Calif: Sage.
Ryan, M. 2020. “In AI We Trust: Ethics, Artificial Intelligence, and Reliability,” Science and Engineering
Ethics (26:5), pp. 2749–2767. (https://doi.org/10.1007/s11948-020-00228-y).
Ryan, M., and Stahl, B. C. 2020. “Artificial Intelligence Ethics Guidelines for Developers and Users:
Clarifying Their Content and Normative Implications,” Journal of Information, Communication and
Ethics in Society. (https://doi.org/10.1108/JICES-12-2019-0138).
Schneider, J., Abraham, R., and Meske, C. 2020. AI Governance for Businesses.
Shneiderman, B. 2020. “Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and
Trustworthy Human-Centered AI Systems,” ACM Transactions on Interactive Intelligent Systems
(10:4), New York: Assoc. Computing Machinery. (https://doi.org/10.1145/3419764).
Stilgoe, J. 2018. “Machine Learning, Social Learning and the Governance of Self-Driving Cars,” Social
Studies of Science (48:1), United States, pp. 25–56. (https://doi.org/10.1177/0306312717741687).
Strauss, A. L., and Corbin, J. 1998. Basics of Qualitative Research: Techniques and Procedures for
Developing Grounded Theory, (2nd ed.), Thousand Oaks (Calif.): Sage.
Tiwana, A., Konsynski, B., and Venkatraman, N. 2013. “Information Technology and Organizational
Governance: The IT Governance Cube,” Journal of Management Information Systems (30:3), Taylor
& Francis, pp. 7–12.
Tubaro, P., and Casilli, A. A. 2019. “Micro-Work, Artificial Intelligence and the Automotive Industry,”
Journal of Industrial and Business Economics (46:3), pp. 333–345. (https://doi.org/10.1007/s40812-
019-00121-1).
Vakkuri, V., Kemell, K.-K., and Abrahamsson, P. 2019. “Ethically Aligned Design: An Empirical Evaluation
of the RESOLVEDD-Strategy in Software and Systems Development Context,” in 2019 45th Euromicro
Conference on Software Engineering and Advanced Applications (SEAA), pp. 46–50.
(https://doi.org/10.1109/SEAA.2019.00015).
Vakkuri, V., Kemell, K.-K., and Abrahamsson, P. 2020. “ECCOLA - a Method for Implementing Ethically
Aligned AI Systems,” in 2020 46th Euromicro Conference on Software Engineering and Advanced
Applications (SEAA). (https://doi.org/10.1109/SEAA51224.2020.00043).
Vakkuri, V., Kemell, K.-K., Kultanen, J., and Abrahamsson, P. 2020. “The Current State of Industrial
Practice in Artificial Intelligence Ethics,” IEEE Software (37:4), pp. 50–57.
(https://doi.org/10.1109/MS.2020.2985621).
Vakkuri, V., Kemell, K.-K., Kultanen, J., Siponen, M., and Abrahamsson, P. 2019. Ethically Aligned Design
of Autonomous Systems: Industry Viewpoint and an Empirical Study.
Veale, M., Van Kleek, M., and Binns, R. 2018. “Fairness and Accountability Design Needs for Algorithmic
Support in High-Stakes Public Sector Decision-Making,” in Proceedings of the 2018 CHI Conference on
Human Factors in Computing Systems. (https://doi.org/10.1145/3173574.3174014).
Wall, L. D. 2018. “Some Financial Regulatory Implications of Artificial Intelligence,” Journal of Economics
and Business (100), pp. 55–63. (https://doi.org/10.1016/j.jeconbus.2018.05.003).
Weill, P., and Ross, J. 2005. “A Matrixed Approach to Designing IT Governance,” MIT Sloan Management
Review (46:2), Cambridge: Massachusetts Institute of Technology, p. 26.
Whittlestone, J., Nyrup, R., Alexandrova, A., and Cave, S. 2019. The Role and Limits of Principles in AI
Ethics: Towards a Focus on Tensions, ACM. (https://doi.org/10.17863/CAM.37097).
