
"Ethics and profits are not mutually exclusive.

Ethics failures put customer trust—and thus,


profits—at risk." -Cansu Canca, PhD, Founder and Director of AI Ethics Lab

"When it comes to figuring out how to achieve AI ethics in practice, we need the
technologists’ voices and perspectives just as much as we need input from philosophers,
sociologists, legal experts and enterprise leaders." -Beena Ammanath, Executive Director,
Deloitte AI Institute and Founder, Humans For AI

"[T]he bigger question here is—what are the ultimate societal success metrics for the AI we
build?"-John C. Havens, Executive Director, The IEEE Global Initiative on Ethics of
Autonomous and Intelligent Systems

"You can't quantify and operationalize everything. You will never be able to do
'automated' algorithmic audits that are comprehensive." -Dr. Rumman Chowdhury, CEO
and Founder of Parity

"Having principles, or boards or training and framework are good, but only the beginning
of the journey." - Maria Axente, Responsible AI and AI for Good Lead, PwC UK

THE BUSINESS CASE FOR AI ETHICS | 2


"There’s a real danger that without proper training on data evaluation and spotting the
potential for bias in data, vulnerable groups in society could be harmed or have their rights
impinged. AI also has intersectional implications on criminal and racial justice,
immigration, healthcare, gender equity, and current social movements."-Shalini Kantayya,
Director of Coded Bias

"Operationalizing of AI ethics requires a 'whole of organization' approach. It is not just a


matter for the data science team or an AI ethics board."-Ansgar Koene, Global AI Ethics
and Regulatory Leader / EY

"Operationalizing AI ethics approaches, in my opinion, continues to prioritize


scalability and generalizability over impact to people."-Brandeis Marshall, CEO and Co-
Founder/DataedX

"While regulation is necessary it is not able to pinpoint nuances that happen behind closed
doors. The people building such systems are in a unique position of power in terms of
knowledge about the nuances of the product and their ability to course-correct."
-Aparna Ashok, Technology Anthropologist, Service Designer, Applied AI Ethics
Researcher

"The first challenge is getting senior leaders to take it seriously. Some already do, but
that's a small percent of senior AI leaders."-Reid Blackman, PhD, CEO & Founder Virtue

"Operationalizing AI ethics first begins with a detailed plan that must originate from
key stakeholders at the C-level, workers including data science and annotations
professionals, and input from the communities you serve."-Liz O'Sullivan, VP of
Responsible AI at Arthur, and Technology Director of STOP (The Surveillance
Technology Oversight Project)



Table of Contents

Welcome
5  About All Tech is Human | Contributors
6  Foreword by Abhishek Gupta
7  Introduction

Recommendations
12 Ethical AI ownership and responsibility
15 Leverage the existing AI ethics knowledge base
18 Ways to increase buy-in from leadership
21 Create a set of real solutions
23 Conclusion

Community Interviews
24 Perspectives from 28 leaders in the field

Stay in Touch
82 Ways to join the conversation | Contact



This report was prepared by a team at All Tech Is Human – an organization that is building the
Responsible Tech pipeline, making it more diverse, multidisciplinary, and aligned with the public
interest.

Team behind the report:

ALAYNA KENNEDY, AI ETHICS RESEARCHER
DAVID RYAN POLGAR, FOUNDER, ALL TECH IS HUMAN

EDITORIAL TEAM
Anna Slavina, Anova Hou, Deborah Hagar, Eva Sachar, Jessica Ji, Jordan Famularo, Kevin Fumai, Lorenn Ruster

SPECIAL THANKS TO THE MANY EDITORS AND EXPERTS WHO CONTRIBUTED TO THIS REPORT

Abhishek Mathur, Adrian J. Mack (PhD), Aishwarya Jare, Amanda Pogue, Amit Dar, Ana Chubinidze, Ana Rollán, Andrew Sears, Aneekah U, Angelica Li, Ankita Joshi, Ansgar Koene, Arsh Shah, Arushi Saxena, Ben Roome, Bethany Edmunds, Beverley Hatcher-Mbu, Bijal Mehta, Camilla Aitbayev, Cara Davies, Cara Hall, Caryn Lusinchi, Charles Radclyffe, Charlie Craine, Chhavi Chauhan, Chris McClean, Claudia Igbrude, Cynthia Mancha, Dan Gorman, Dan Wu, Eli Clein, Elisa Ngan, Ellysse Dick, Emanuel Moss, Felicia Chen, Firat M. Haciahmetoglu, Fitz Mullins, Gabriel Kobus, Gunjan Kishor, Harini Gokul, Jack-Lucas Chang, Janna Huang, Jeff Felice, Jennifer Dalby, Jessica Pham-Ruhland, Jigyasa Sharma, Joan Mukogosi, Joey Gasperi, John C. Havens, Joshua Ukairo Stevens, Kacie Harold, Kapil Chaudhary, Karen Aoysia Barreto, Karina Alexanyan (PhD), Katherine Lewis, Katrina Ingram, Kayla Brown, Kevin Macnish, Lauren Mobertz, Lavina Ramkisson, Lilia Brahimi, Lydia Hooper, Mark Cooper, Matthew Chan, Mayra Ruiz-McPherson, Merve Hickok, Michelle Calabro, Moe Sunami, Monika Viktorova, Nadia Piet, Nandini Ranganathan, Nandita Sampath, Nina Joshi, Nupur Sahai, Olivia Gambelin, Osiris Parikh, Oyidiya Oji Palino, Pamela Jasper, Pavani Reddy, Philip Walsh, Ploipailin Flynn, Phaedra Boinodiris, Portia Pascal, Rachel Stockton, Randall Tran, Rebekah Tweed, Renee Wurth (Ph.D), Roshni Londhe, Rumman Chowdhury, Sachi Bafna, Samuela Marchiori, Sanhitha Cherukupally, Sara Jordan, Sara Kimmich, Sara Murdock, Sara Rubinow, Shea Brown, Sibel Allinson, Sidney Madison Prescott, Siva Mathiyazhagan, Supriyo Chatterjee, Swathi Young, Susannah Shattuck, Tania Duarte, Tim Clements, Titus Kennedy, Tracy McDowell, Ursula Maria Mayer, Victoria Allen, Willmary Escoto (J.D.), Yada Pruksachatkun



FOREWORD

Abhishek Gupta
Founder and Principal Researcher, Montreal AI Ethics Institute; Machine Learning Engineer and CSE Responsible AI Board Member, Microsoft

AI ethics has become one of the most-watched buzzwords of the past couple of years. This is both a positive and a negative outcome. Positive because there is increased awareness of the harms that arise from indiscriminate utilization of AI systems. Negative because there are currently a lot more abstract discussions compared to operationalization. Having worked in this domain for several years, and through my work at the Montreal AI Ethics Institute, an international non-profit institute that is democratizing AI ethics literacy, I've seen how such a framing can materially harm our progress toward realizing AI ethics in practice.

What we need is to learn from the lived experiences of those who face harms from such systems and from those who are actually trying to implement these ideas within their communities and organizations. That is the source of knowledge that will help us create a more fair, just, and well-functioning society.

This report is the result of an extensive survey conducted by the All Tech Is Human team that captures some of the core challenges that we face today in moving from abstract to concrete in the field of AI ethics. Taking the lens of business objectives and organizational change, the ideas presented at the beginning of this report help to situate the reader, providing you with the vocabulary to navigate the domain confidently. The vignettes from each of the featured profiles provide us with the lessons learned by people who are working on addressing these challenges today in an applied manner.

As I mention in my forthcoming book Actionable AI Ethics (2021), a handbook to move from principles to practice, we need to start taking action now, and the way to do that is through multiple channels: community engagement, technical measures, and organizational change. This report gives insights into how people have attempted changes along these axes, and I hope that the lessons will be valuable to you just as they have been to me.

No matter where you are in the AI ecosystem, we all have a role to play in making AI systems more ethical, safe, and inclusive. Together with the right knowledge and a call to action, we can make this a reality! Carpe diem!

Connect with Abhishek Gupta: @atg_abhishek



Introduction
As artificial intelligence (AI) development has intensified over the last decade, the field of AI ethics has
evolved to guide innovation and competition toward sustainable goals. In response to AI’s scientific and
engineering development, a diverse community of advocates for responsible technology has stepped up
to provide frameworks, recommendations, and governance to improve AI’s potential for healthy
business competition, social good, and environmental viability. This group and its activity, which we
refer to as a responsible tech ecosystem, includes a wide range of individuals, organizations, and
initiatives that enhance mutual understanding about how AI is made, used, and monitored.

The technical and social complexity of AI systems has required a multi-voice effort to explore what AI
can do, what it should do, and what it could do in the future. The responsible tech ecosystem is a venue
where such issues are examined, guardrails are proposed, and value propositions are offered.

The course of AI development is widening.

AI development is at an inflection point in 2021. While tests in research settings are conquering ever-
higher performance goals, deployment of AI in real-world conditions is keeping pace, scaling wider and
deeper. A new wave of implementation is surging within private enterprise, public agencies, and
partnerships between commerce and government. Organizations are using the technology to optimize
efficiency and profitability, which may be seen in metrics for operations, financial reporting, human
resources, customer service, and a growing list of other aspects of running a business. Governments are
using and considering AI systems to determine which citizens receive benefits or who is subject to
increased policing. Blended efforts that combine private and public capabilities have been proposed to
redress complex societal matters, for example, in public health and environmental science.

AI continues to infuse life beyond experimental settings by expanding from the theoretical realm to the
corporate boardroom, online marketplace, private home, and public sidewalk—and the responsible tech
ecosystem needs to move with it. Conversations and actions need to build new tools and language to
deal with novel implementations of AI. In this publication, we focus on business and corporate settings
where AI systems are being developed, used, and assessed.



Objective
This guide offers the emerging responsible tech ecosystem a blueprint for promoting ethical AI through
bottom-up and top-down change at the organizational and individual levels. It focuses on enterprise
issues in AI development and implementation, but its recommendations are applicable to other
organizational and political contexts. It provides answers to the question:

How do you enact change inside of an organization with the goal of operationalizing AI ethics?

By responding to this question, we address concerns that organizations have about matching their
ethical aims with their practical execution. This distance between theory and action is what a 2021
World Economic Forum white paper has called an intention-action gap (link:
http://www3.weforum.org/docs/WEF_Responsible_Use_of_Technology_2021.pdf).

Audience
Successful operationalization of ethical AI requires an interplay of stakeholders. This report supplies a
playbook for three organizational dynamics that give momentum to a sustainable vision for AI
implementation:

1. Tech workers feel empowered to innovate responsibly.
2. Leadership feels confident about the ROI of ethical AI.
3. All contributors have access to the knowledge base that is cultivated by the responsible tech ecosystem outside and inside the organization.

In tandem, these three conditions generate synergy toward business and societal values that may be gained from ethically aligned AI. For instance, designers and engineers will more successfully build
responsible systems if they have adequate buy-in from company leadership because of improved
organizational culture and resources. At the same time, vibrant flows of knowledge in an organization
from its own members and the wider ecosystem of advocates can enhance the ways that board members
and executives understand the long-term implications of AI deployment. Likewise, better information
from research communities and policy discussions serves to arm tech workers with guidelines and
toolkits.

Consider Alex, a manager who oversees people and processes at a hypothetical data-driven company.
At an early point in Alex’s time with the firm, the CEO announced a statement of ethics principles to
communicate a commitment to responsible use of AI. Until the organization found ways to convert
principles to action, however, Alex and fellow employees questioned whether the stated guardrails
were actually being met. The operationalization of AI ethics, on the other hand, puts the firm on a new
course by creating conditions for progress.

A framework for action


“Operationalization” refers to making change in a company’s practices, projects, and deliverables—
therefore, it requires action beyond theory and principles. To operationalize ethical AI, an organization
creates specific, actionable directions for future-oriented strategies, constructive criticism, audits, and
assessments.

Moving from theory to action is not always straightforward, particularly for individuals in an
organization that lacks a culture in which contributors are invited to reflect on and speak up about
values, mission, and outcomes. Unless an organization is self-reflective, downsides might involve
existential risks to the business or a line of business, inability to attract top talent, strained
relationships with business partners, declines in customer loyalty, regulatory fines, civil and criminal
lawsuits, unintended harms to communities, and/or amplification of historical biases against
marginalized groups.

For many organizations, the embedding of ethics into AI development and implementation will involve
change to governance, policy, and procedure. Some companies may create new roles, shift
responsibilities in existing roles, or reconfigure teams. Unique and unfamiliar challenges may arise in
operationalizing responsible AI, even for those who have experience in change management.



To address distinctive issues that AI poses for change management, we recommend that organizations pay deliberate attention to four principles that require special consideration in an AI ethics framework: executive accountability, worker empowerment and protection, stakeholder awareness, and regulatory effectiveness.

1. Executive accountability. The executive team and board are responsible to stakeholders for outcomes associated with AI technologies that their organizations create and deploy. In an increasing number of important contexts, executives might now or in the future be held accountable for harm caused by these tools. These leaders can contribute to a culture of trust by actively engaging in and fully supporting the adoption and operationalization of AI ethics.

2. Worker empowerment and protection. With ethically aligned standards in place, workers need to be empowered to make and influence decisions in real time about AI systems' design, development, deployment, monitoring, and suspension. The permissive conditions for giving voice to problems and complexity must go beyond lip service and policy to include protections for workers who identify ethical violations, omissions, and harms.

3. Stakeholder awareness. Across the organization, team members will need to invest time in cultivating awareness and knowledge of the reasons for operationalizing workflows and processes that promote ethical AI. Stakeholders who need to be equipped in this way include leaders, board members, contractors, vendors, customers, business partners, employees, and communities affected by the business.

4. Regulatory effectiveness. At local, regional, national, and international levels, governance of AI systems is rapidly evolving. Companies need to build trusting relationships with entities in power. Those that take proactive steps to adopt ethically aligned guardrails for designing and using AI can leverage their progress to improve dealings with regulators and influence governance outcomes.



Organization of this report
In the following sections of this document, we describe four parts of an ethically aligned organization,
which we substantiate with real-world insights from 28 leaders and changemakers in the space. First,
we address profiles and functions of people who take ownership and responsibility for AI ethics in a
business. Second, we describe how individuals may leverage knowledge generated inside and outside
their organization about AI governance. Third, we focus on tools for increasing buy-in from company
leadership for producing action out of intentions and principles. Fourth, we explain how to build on
internal support to develop practices and processes.

Our survey results are found in the Community Interviews section of this guide. Readers will find
questions and answers that are based on practical experience with and direct knowledge of decisions
that businesses face.

A language of persuasion
Without a sense of the business case for AI ethics, even the most passionate advocates for responsible
tech will face an uphill battle when trying to execute real and meaningful change in the commerce and
investment spheres. To translate academic and policy work regarding AI ethics into an action
framework for enterprise, this guide shifts the mode of expression into one of organizational change
and management. It adjusts some prevailing discourse from activism, academia, and governance into
language that businesses use. This effort draws authenticity and credibility from the interview texts in
the latter pages of this guide, which come from people who do translational work between enterprise
and stakeholders.

For people who are new to the field of AI ethics, this guide offers an invitation to learn key terms in the
associated business vocabulary. It further provides understandable terminology for producing a plan of
action and winning supporters within a business organization.

Ownership | Informed by knowledge base | Tools to increase buy-in from leadership | Ability to take action



Ethical AI ownership and responsibility
Whose responsibility is AI ethics? AI ethics must become a corporate-wide responsibility in fulfilling the
goals and standards of an organization’s commitment to ethical business practices in the development
and use of AI. With a greater focus in recent years around the unintended consequences and negative
externalities regarding AI, especially regarding its impact on marginalized communities, all contributors
to organizations that make and use AI systems can take an active role in cultivating responsibility for
equitable social impact. This leadership is an important component in building trust with all
stakeholders, and in the organization’s ethical use of AI.

"AI ethics must become a corporate-wide responsibility in fulfilling the goals and
standards of an organization’s commitment to ethical business practices in the
development and use of AI. With a greater focus in recent years around the unintended
consequences and negative externalities regarding AI, especially regarding its impact
on marginalized communities, all contributors to organizations that make and use AI
systems can take an active role in cultivating responsibility for equitable social
impact."
Developing a corporate-wide culture requires leadership from the top in establishing the corporate commitment to AI ethics and standards.
and making ethical use of AI a reality. It is important to effectively engage all key stakeholders and to
align with the corporate culture that best supports this vision, whether top-down or bottom-up. This in
turn requires expanded roles and responsibilities for technology workers that cross domains, processes,
methodologies, and departmental operations. An effective ethical AI model will support the inclusion
and empowerment of key players as part of its implementation.
An example of a corporate-wide model, with identified roles and responsibilities, could include:

Technology Workers: The role for technology workers is to manage the technology infrastructure
required to collect, store, and distribute data. The technology function is critical in ensuring
effective governance and selecting toolkits required for the ethical use of AI.

Technology workers are responsible for: technology infrastructure, data governance, data
accuracy and quality, and data privacy, in accordance with the corporate AI Ethical Standards.

Data Scientists: The role of data scientists is to provide the organization with the information
needed for effective problem-solving and decision-making with algorithms that provide the
organization with accurate business intelligence.

Data scientists are responsible for: transparency in data construction, disclosure in assumptions
utilized in development of algorithms, information value, and compliance with ethical AI metrics.



Chief Executive Officer: The role of the CEO is to set corporate commitment to ethical values in
the use of AI via a Code of Ethics that supports human values for all stakeholders, endorsed by the
Board of Directors. The CEO’s role is to further ensure compliance by establishing a Corporate
Compliance AI Ethics Committee, chaired by a Technology Executive, that monitors performance
and annually produces a report to the Board of Directors.

The CEO is responsible for: corporate reporting to the Board of Directors on ethical practices
and metric results.

Boards of Directors: The role of the Board of Directors is to endorse the CEO’s Code of Ethics and
to sign off on an Annual Report on ethical AI use and metric results, as reported by the Corporate
Compliance AI Ethics Committee.

The Board of Directors is responsible for: corporate trust and corporate AI Ethical Compliance.

AI Ethics Compliance Committee: The role of the AI Ethics Compliance Committee, under the
direction of a Technology Executive and AI Ethicist, is to audit corporate compliance with the
corporate Code of Ethics through monitoring and measurement. The Committee further monitors
for continuous improvement and development of best practices.

The Compliance Committee is responsible for: issuing an annual report to the Board of Directors
on corporate compliance with the AI Ethical Values and Standards.

HR: The role of HR is to develop basic orientation for all employees to meet the corporate AI
Ethical Standards and to support commitment to these values and goals. HR further monitors the
corporate adherence to the Code of Ethics in the hiring, promotion, and firing of all employees.

HR is responsible for: orientation of all employees to the corporate Code of Ethics and reporting
of any violation to the AI Ethics Compliance Committee, especially resulting from the violation
of any employee’s rights in fulfilling the corporate AI ethics commitment.

Department/Business Line User: The role of the department/business line user is to comply with
the corporate Code of Ethics and to ensure accountability in the ethical use of AI in departmental
operations.

The department/business line user is responsible for: departmental operations and the impact of
decisions made in the use of AI.



As outlined in Omidyar Network's Ethical Explorer (link: https://ethicalexplorer.org/), tech workers can become stewards who ensure that the corporate technology infrastructure and network is a trusted corporate asset that:

Ignites change through dialogue: "Start small, discover common ground, and empower your team to create human-centered technology—one conversation at a time."

Creates a culture of questioning: "Recognize, challenge, and question the decisions we make. The more we use our voices, the more we'll inspire others to do the same."

Supports human values: "Be intentional about building tech that values fundamental human rights, empowers users, and creates healthy online experiences."

Our report is built on the premise that all technology workers can take an active role in accepting
responsibility for the social impact and adherence to a code of ethics in the use of AI. Leading the
internal effort to build “trusted” use of AI with ethical standards will enable tech workers to become a
vital part of the corporate commitment to serving all stakeholders.

"At the root of the challenges ethics owners face is the


fact that while the tech industry is adept at producing
scalable solutions, ethical harms remain tied to highly
specific contexts."

Emanuel Moss & Jacob Metcalf

(link)



Leverage the existing AI Ethics knowledge base
As AI becomes more widely used in applications which directly affect both individuals and society, such
as healthcare coverage and predictive policing, the adjacent field of AI ethics has become increasingly
relevant.

To begin enacting AI ethics change, it is important to first understand the field, the gaps in the current
literature, and the general scope of the work that’s already been produced by AI ethicists over the last
decade. This section provides a brief overview of those topics.

Defining AI ethics

This broad definition is from The Alan Turing Institute: "AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies." (link: https://arxiv.org/pdf/1906.05684.pdf)

From the AI Now Institute: "Machine ethics is more narrowly and explicitly concerned with the ethics of artificially intelligent beings and systems. . . . AI ethics concerns wider social concerns about the effects of AI systems and the choices made by their designers and users." (link: https://ainowinstitute.org/AI_Now_2017_Report.pdf)

This quote from Deloitte demands more from its practitioners: "'AI ethics' refers to the organizational constructs that delineate right and wrong—think corporate values, policies, codes of ethics, and guiding principles applied to AI technologies. These constructs set goals and guidelines for AI throughout the product lifecycle, from research and design, to build and train, to change and operate." (link: https://www2.deloitte.com/us/en/pages/regulatory/articles/ai-ethics-responsible-ai-governance.html)

The first challenge that an organization will likely face is creating or selecting a definition of AI ethics that resonates with them or is particularly relevant to their business. Although the three definitions above are quite similar, one is broad and all-encompassing, another puts the onus of ethical AI on developers, and the third is more concerned with the societal implications of AI. The lack of a consistent definition of ethical AI brings difficulties, but it also allows companies to focus on the aspects of AI ethics most relevant to them.

High-level governance

Between 2016 and 2019, 74 sets of ethical principles (or guidelines for AI) were published by various
groups, focusing on high-level guidance like “creating transparent AI” (Carly Kind at VentureBeat; link:
https://venturebeat.com/2020/08/23/the-term-ethical-ai-is-finally-starting-to-mean-something/).
Although these frameworks were an important first step for AI ethics, they are hard to put into action
because of their vagueness.

As Andrew Burt wrote in Harvard Business Review:

"Many AI ethical frameworks cannot be clearly implemented in practice… there's simply not much technical personnel can do to clearly uphold such high-level guidance. And this, in turn, means that while AI ethics frameworks may make for good marketing campaigns, they all too frequently fail to stop AI from causing the very harms they are meant to prevent... organizations should ensure these frameworks are also developed in tandem with a broader strategy for ethical AI that is focused directly on implementation, with concrete metrics at the center. Every AI principle an organization adopts, in other words, should also have clear metrics that can be measured and monitored by engineers, data scientists, and legal personnel."

(link: https://hbr.org/2020/11/ethical-frameworks-for-ai-arent-enough)

An overview of many existing AI ethics guidelines can also be found at Algorithm Watch (link:
https://inventory.algorithmwatch.org/).

These frameworks are similar in that they incorporate concepts of bias, fairness, accountability,
transparency, and explainability. However, they don’t make AI ethics actionable, outline steps that need
to be taken, identify people who must be involved, or provide methods to quantify any efforts.

For example, once an organization has agreed upon a definition for AI ethics, such as the one above,
they still have several decision points to incorporate AI ethics into their business.



1. How do we quantify efforts and measure success? What is the target
and who decides what is ethical?
2. How do we break down this definition into a tangible and
comprehensive framework?
3. Who do we bring into the process?
4. What tools do we need to be successful in our efforts?

The challenge is ultimately that the ethics of AI is dependent on the data collected and handled, the
algorithms applied, the individuals building the models, and the consequences of the outcomes. Ethical
AI frameworks themselves may not be effective at preventing AI harms, but monitoring the way that
algorithms are implemented and investing in best practices can pave the way to translate governance
into impact.

Bias mitigation

The Gender Shades project (link: http://gendershades.org/), based on Joy Buolamwini and Timnit
Gebru's groundbreaking paper, showed that facial recognition technology was biased against women
and people of color and inspired a shift in the AI ethics community to focus on detecting and mitigating
statistical bias. It is generally understood that math isn’t biased—we encode our human biases into our
data and models. Unintentional bias can stem from problem misspecification and data engineering, but
even more commonly from prejudice in historic data and under-sampling.

"Unfair, biased, and at times malevolent algorithms can further


disadvantage already vulnerable communities."

Unfair, biased, and at times malevolent algorithms can further disadvantage already vulnerable communities. There is a wealth of information about methods of bias mitigation. For example, a bias bounty is the practice of hiring outside companies to find bias in a model before it is discovered by the public, an approach that is especially valuable while strong government regulation of AI remains absent. A plethora of toolkits, including IBM's AIF360, Microsoft's Fairlearn, and FairML, have emerged to combat bias in datasets and AI models. Most importantly, the issue of bias points to why we need diversity in tech. Diversity of perspectives, education and training, and sociodemographic factors are all important for catching bias in our models before they are deployed.
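To make the idea of "detecting statistical bias" concrete, here is a minimal sketch in plain Python of one common fairness check, the demographic parity difference (the largest gap in positive-prediction rates between groups). The function name, data, and group labels are all hypothetical; toolkits such as Fairlearn and AIF360 provide tested, more complete versions of this and many other metrics.

```python
def selection_rate(preds):
    """Fraction of positive (favorable) predictions."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    """Largest gap in selection rate between any two groups.

    preds  : list of 0/1 model predictions
    groups : list of sensitive-attribute labels, aligned with preds
    """
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = [selection_rate(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions for two groups:
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value of 0 would mean both groups receive favorable predictions at the same rate; larger values flag a disparity worth investigating before deployment. Real audits go further, since equal selection rates alone do not guarantee a fair model.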



Ways to increase buy-in from leadership
Buy-in from leadership is crucial to enacting a wider cultural change within an organization. Leaders
have many competing priorities and will need to dedicate time and focus to AI ethics if it is to be an
organization-wide success. Creating a business case for AI ethics is one way to build that buy-in.

Translate into a business case to gain support

Like any new endeavor in a business, there needs to be a clear answer (or answers) to “Why should we
do this?” Because much of the work in the field of AI ethics has been developed by academics and
activists, the language does not always translate directly to a business environment.

Concrete ways to emphasize the business case for AI ethics can be found on the next page. These are
taken from the Economist Intelligence Unit (EIU)’s recent report (link: https://pages.eiu.com/rs/753-
RIQ-438/images/EIUStayingAheadOfTheCurve.pdf) as well as stakeholder interviews. As someone
trying to convince your boss(es) about why this is important, you will be best placed to pick and choose
which of these reasons are likely to resonate most.

For example, suppose your manager "Sam" cares most about how AI ethics helps meet already high revenue and sales targets. For Sam, you might focus on AI ethics as a way of increasing win rates and improving customer engagement, both tied to revenue and sales. A targeted example of how one or two competitors are approaching AI ethics, and how falling behind could cost competitive advantage in sales, may also be convincing.

When trying to convince your manager “Alex,” however, you know that their main concern is how to
build a legacy company that operates with integrity. Accordingly, arguments associated with “doing the
right thing” and how AI ethics enables the organization to “walk the talk” may be relevant, as would
arguments around future-proofing the organization. Trust in the market as well as attracting and
retaining top talent would also likely be of interest to this manager.

Choosing the “right reasons” is highly context-dependent. Not all of these may be relevant to your case.



Business benefits of AI ethics
There are many reasons a company would want to adopt responsible technology guidelines for its own
benefit. Of course, implementing AI ethics can simply be the right thing to do and aligned with your
company's values—such as integrity, authenticity, and fairness. Implementing AI ethics is a way your
company can adhere to those values and "walk the talk."

Furthermore, investing in AI ethics allows you to keep up with competitors. Consider key competitors’
AI ethics maturity—do they have high-level frameworks and principles? Have they implemented AI
ethics boards, teams, or other oversight mechanisms? Have they developed specific toolkits to tackle
fairness or bias? If so, you can make the case that your company must invest in AI ethics to keep up.
There are numerous other ways to make the business case for AI ethics, including the following:



Be real about implementation challenges
Generating buy-in from leadership is not just about selling the positives, but also recognizing the
realistic challenges to including AI ethics in the fabric of their organization. Leaders need to go in with
“eyes wide open.” Pre-emptively identifying what it will take to authentically embed AI ethics is
important so that adequate commitment and resources can be allocated to implementation.

It will take (amongst other things):

Whole-of-organization collaboration – Embedding AI ethics in an organization often begins within the remit of a Chief Data Officer or a Chief Technology Officer, but typically matures into a broader effort that incorporates wider teams across human resources, diversity and inclusion, legal, and others. Collaborating and communicating across different roles within the organization will be paramount to implementation success.

Bridging the information gap – It must be recognized that artificial intelligence and its
implications may not be familiar topics for many within the organization. Clear internal
communication programs and training may be needed for employees to grasp the
importance of AI ethics and understand its relevance to their particular role or team.

Wider structural changes – AI ethics cannot exist in isolation from other structures and
processes within the organization. Implementation will also have to consider wider
structural changes to assist and reinforce the goals of AI ethics. This could include changes
in compensation incentives, existing policies, decision-making procedures, risk assessments,
and many other areas.

Clear accountability – Implementing any change requires clarity on accountability structures. This may mean adding responsibilities to existing roles, creating new roles altogether, and potentially adopting ways of governing that have not been used previously in the organization.

A commitment to continuous monitoring, feedback and improvement – Embedding AI ethics in organizations is a dynamic process, shifting with internal and external conditions. As such, a commitment to embedding AI ethics is also a commitment to continuous monitoring, feedback, and improvement processes.



Create a set of real solutions
The true potential of AI can only be unlocked when ethics is embedded as a full-sum complement to its
predictive capabilities. However, there is sometimes a suggestion that AI ethics cannot be measured, at
least not as easily as traditional development metrics. But what may not seem easy does not need to be
hard.

The following actions can be taken to identify and maximize the benefits of AI ethics:

Survey customers. Companies have long sought insights from customers to guide product direction,
and AI ethics can be probed from a variety of perspectives to understand market expectations.
Companies should also monitor AI survey results shared by McKinsey, PwC, Accenture, and other
consulting firms.

Create a customer advisory board. Many companies bring together a group of strategic customers to
periodically discuss industry trends, issues, and priorities. While the main focal point of such a board
is to explore potential solutions, probing the importance and application of AI ethics should lead to
valuable intelligence and further collaboration.

Create an ethics advisory board. As more companies have come to appreciate the ubiquity of AI and
its importance to society and the future of work, they have recruited internal and external cross-
functional experts to act as the governance body which sets the AI ethics strategy. There is no
consensus for how such a board should be constructed, tasked, or empowered, so companies should
perform research in order to optimize this opportunity.

Monitor laws and industry standards. While some existing laws apply generally to AI, many new
ones are being proposed to expressly address the unique challenges presented by AI. Similarly, many
industry standards have begun to mature from high-level principles to actionable plans that can
funnel into and fuel the development pipeline (e.g., from the IEEE). Staying aware of these updates
will be critical to enable responsible innovation while ensuring compliance and maintaining brand
integrity.

Always incorporate ethics in proposals. Effective proposals reflect an understanding of a customer’s business challenges and offer a tailored solution. All AI proposals should meaningfully address ethical considerations—i.e., not just as a bootstrapped marketing tactic. The fact that a solution has been developed and can be deployed within a responsible AI framework will be a genuine competitive differentiator.

Always address ethics in projects. Many technology implementation best practices translate readily
to AI, including selecting a project methodology and documenting key risks, actions, issues, and
decisions. Holistically integrating ethics into that delivery process—including with contextual
modifications specific to AI, like performing statistical analyses of data sets and impact assessments
—will be the next step forward to ensure sustainable, long-term success.



Launch an internal awareness campaign. Culture has never been more important, and anchoring a
corporate mission and vision to core ethical values will help foster responsible behavior throughout
a company. But values alone are not enough if unsupported by consistent communication that
educates all employees on the need for those values (and the risks of not having them), as well as
incentives and social norms which put ethical design into everyday practice.

In particular, AI ethics can benefit significantly by unleashing the power of diversity and inclusion,
not just within development teams, but across all departments. The field demands collaboration from
multi-disciplinary stakeholders to better identify and manage risks, improve decision-making, and
drive human-centric innovation—while simultaneously boosting employee morale, engagement, and
loyalty in a virtuous cycle.

Launch a public awareness campaign. There is more attention on, and investment in, AI than ever
before, which creates the perfect opportunity for leadership on the significance of AI ethics.
Companies should embrace it by participating at events, sharing articles or information, and taking
other steps to advance the public dialogue and earn a reputation as an ethical innovator.

"Operationalization of AI ethics requires alignment with


organization's culture and all of its mechanisms and
methodologies. It needs buy-in from all levels in the
organization, and it requires intentional steps to be
taken. . . . It is crucial that the organizations monitor
and improve both their AI products, as well as their
organizational processes and mechanisms."

-Merve Hickok



Conclusion
The above recommendations and descriptions draw partly on insights gained from speaking and
collaborating with people in the responsible tech ecosystem. Some of these individuals offered
perspectives in a survey that we administered to examine key issues in AI ethics for businesses. In our
survey results, find answers to major questions such as:

What value propositions do businesses see in ethical AI?

What are key challenges in operationalizing AI ethics?

How do leading organizations in AI ethics shape their ambitions and guiding principles?

What do career paths into the field of AI ethics look like?

We invite you to learn more by reading the interviews on the following pages.



THE BUSINESS CASE FOR AI ETHICS

Community
Interviews
Hear from a broad range of leaders about
ways to operationalize AI Ethics.

Some interviews have been lightly edited to improve consistency and readability.

AllTechIsHuman.org | BusinessCaseForAIEthics.com

Join our Slack group at bit.ly/ResponsibleTechSlack


Mentorship program at bit.ly/ResponsibleTechMentorship



LEARNING FROM THE COMMUNITY

Beena Ammanath

Executive Director, Deloitte AI Institute and Founder, Humans For AI

Tell me how you got into Responsible AI. What’s been your path into the field?

I am a computer scientist by training. Over the course of my career, I’ve built data and AI products and solutions across different industries – software, manufacturing, financial services, trading, brand protection, media, supply chain, and telecom. I have had a front row seat to the evolution of the data space, from traditional transactional database systems to data warehouses and business intelligence to big data, data science, machine learning, and AI.

As a data geek, the potential of what is possible with data and AI is extremely exciting. Yet as I’ve watched these new capabilities come to market, I realized the negative impacts this technology could have if we ignore the risks. If we want AI to reach its full potential and benefit humanity as a whole, ethics must be a part of the technology’s trajectory. Just as we focus on the business value of AI, we have to be diligent in thinking through all the possible negative impacts the same technology could have and proactively address them.

I realized that one of the best ways to be able to think through all the possible impacts is to have diversity of thought on the AI teams. We need all humans to be part of the AI conversation. That led me to start Humans For AI in 2017, a non-profit focused on increasing diversity and inclusion in AI.

In your opinion, how do you enact change inside an organization to operationalize AI ethics?

I think about organizational change in three steps to start with, and then continue to iterate. Step one is to agree upon which ethical AI principles matter for the organization and create a framework from that consensus. Fairness as a principle is often cited, but is privacy a consideration? What about transparency? Every organization is different, and must decide what is meaningful in the context of its work.

Step two is to identify who is responsible for ensuring that AI is implemented ethically. This is not your data scientists deploying the models. Instead, you need a person (or maybe a group of people) whose job it is to think through questions like "what are the ways AI could go wrong because we did not foresee the ethical implications?" Now you have someone in the driver’s seat of AI ethics, and their roadmap is the ethical AI framework.

Step three is to amend processes. Look through project planning, set up checkpoints where ethics are considered, and use the outputs to stay within the ethical framework. This takes place throughout the AI lifecycle, from conception and development, to assessment and deployment, and through management and ongoing re-assessment. Also, in thinking about your workforce ecosystem, design clear protocols that put the end-user first. This leads to a culture where AI ethics is a priority for every stakeholder.

Continued on next page
One of the challenges that I have seen emerging is the sheer amount of noise and way too many clickbait headlines around AI
ethics without enough action on solving for it. To solve for AI ethics means understanding the nuances of the AI solutions in the
context of the business and adding in relevant guardrails—and both the ethics definitions and guardrails could be very different
depending on industry and use case. That led me to Deloitte, where I focus on operationalizing AI ethics across different
industries and sectors via our Trustworthy AI initiative.

What are the challenges you / your organization face with operationalizing AI ethics?

In our most recent State of AI in the Enterprise survey, 95% of respondents expressed concerns around ethical risks for their AI
initiatives. Despite these worries, the study reports only about a third of adopters are actively addressing the risks—36% are
establishing policies or boards to guide AI ethics, and the same portion says they’re collaborating with external parties on
leading practices.

Although there is still a long way to go, a growing number of organizations are tackling AI-related risks head-on:

• As a founding donor for The Council on the Responsible Use of Artificial Intelligence at Harvard’s Kennedy School, Bank of
America has embraced the need to collaborate on AI ethics. It has also created a new role—enterprise data governance
executive—to lead AI governance for the firm and work with the chief risk officer on AI governance.
• The German engineering firm Robert Bosch GmbH, which plans to embed AI across its products by 2025, is training 20,000
executives and software engineers on the use of AI, including a recently developed AI code of ethics.
• Workday, a provider of cloud-based enterprise software for financial management and human capital management, is
employing a broad spectrum of practices. It has committed to a set of principles to ensure that its AI-derived recommendations
are impartial and that it is practicing good data stewardship. Workday is also embedding “ethics-by-design controls” into its
product development process.

We are also helping our clients navigate AI ethics with our Trustworthy AI Framework, designed to help organizations navigate
through potential issues such as bias, transparency, privacy, and developing regulations.

The Trustworthy AI Framework helps organizations develop ethical safeguards across six key dimensions—a crucial step in
managing the risks and capitalizing on the returns associated with AI. These pillars include fair and impartial use checks,
implementing transparency and explainable AI, ensuring responsibility and accountability, putting proper security in place,
monitoring for reliability, and safeguarding privacy.

What recommendations do you have for operationalizing AI ethics?

I approach operationalizing AI ethics across the dimensions of people, process/controls, and technology, all of which are
interdependent. Agile technologies allow you to assess and truly validate whether AI tools are behaving in line with the ethical
framework. Technology solutions can mine data and reveal insights and trends. Importantly, such technologies need to be
flexible enough that they work across all use cases and also simple enough that they provide meaningful outputs to a diversity of
decision-makers.

Yet, who is making decisions? Organizations need clear roles and responsibilities for stakeholders whose daily effort is to think
about, monitor, and drive AI ethics. This may mean establishing the role of Chief AI Ethics Officer, creating an AI ethics advisory
group, hiring AI ethicists, distributing responsibility across existing leadership—or perhaps all four. It also means training for the
entire organization. Every employee needs to be thinking about AI ethics in the same way.

Alongside this are the processes and controls for a repeatable, sustainable approach. Processes contain guardrails that map
from the technological solution to the framework and inform the decision-makers. This means using real-world domain
information, not just experimental datasets.

What are the biggest challenges in general to operationalizing AI ethics?

Business leaders understand the importance of AI ethics because the risks posed by misbehaving AI are so significant. There’s
no shortage of talk and spilled ink on the topic. Yet while it’s easy to say we need to mitigate AI risks, it is much harder to define
how to do it. We need to move beyond high-level principles and dig into the specifics, and therein is a primary challenge: AI
ethics is an emerging area. The path forward is still being defined.
Continued on next page
There is not always an agreed-upon vocabulary for AI ethics. There may not yet be an appreciation for what is meant by
concepts like fairness and impartiality, transparency, accountability, and even privacy. This challenge is made greater because
there are a range of disciplines involved in ensuring ethical AI, and some professions are underrepresented, notably, computer
science. When it comes to figuring out how to achieve AI ethics in practice, we need the technologists’ voices and perspectives
just as much as we need input from philosophers, sociologists, legal experts and enterprise leaders.

Ultimately, the tactics and strategies that work for AI ethics at scale require buy-in and input from the whole organization. It
really is a united effort, where the ethical framework is embedded in the business process and the culture prioritizes AI ethics as
much as it does AI function.

Since Responsible AI requires a culture change and re-education in a range of areas, how are you ensuring your employees
have the knowledge and skills to design and build responsible AI solutions?

If step one of driving toward ethical AI is agreeing upon the organization’s ethical principles, then step 1b is making sure the
framework is easy to communicate and understand. AI is complicated, but ethics should not be. With a clear framework, cultural
change flows out of training and awareness. Most ethical organizations already use some form of business ethics or integrity
training, both when new employees are on-boarded and then regularly thereafter. AI ethics should become a part of this.

As the culture begins to shift, every stakeholder can see their place in the larger effort of upholding AI ethics. Education and
skills development help, but what you also need are channels for employees to provide feedback and raise concerns.
Stakeholders need to understand their responsibilities but they also need to be empowered to play an active role and inform
those ultimately responsible for ethical AI. One question then is how does an organization motivate the workforce to raise AI
ethical concerns? Incentives are a good start. Things like acclaim and awards within the organization, factors for performance
metrics, and potentially penalties for inaction should be on the table when organizations are figuring out how best to cultivate
an ethical AI culture.

Connect with Beena Ammanath @beena_ammanath

"When it comes to figuring out how to achieve AI ethics in practice, we need the
technologists’ voices and perspectives just as much as we need input from philosophers,
sociologists, legal experts and enterprise leaders."

-Beena Ammanath, Executive Director, Deloitte AI Institute and Founder, Humans For AI



LEARNING FROM THE COMMUNITY

Cansu Canca, PhD

Founder and Director of AI Ethics Lab

Tell us how you got into Responsible AI. What’s been your path into the field?

I am a philosopher specializing in applied ethics. Prior to AI Ethics Lab, I worked extensively on ethics and health, focusing on health policy and health technologies. My research in health technologies led me to dive deeper into the ethics of AI. In applied ethics, we ask “what is the right decision/action to take” and “what is the right policy to implement." To achieve ethical and fair systems in all sectors from health to finance and from education to public safety, we must ask these questions in relation to the AI systems we are building and integrating into these sectors.

Tell me about your organization’s ambition for Responsible AI? Why is it a priority?

Our goal is to revolutionize the applied ethics practice by making ethical puzzle-solving a standard practice within the innovation process. We developed an ethics integration model for innovation called the PiE (Puzzle-Solving in Ethics) Model (https://aiethicslab.com/pie-model/). Ethics puzzles are about minimizing ethics risks and actualizing ethics opportunities, and doing this in real time. This model is drastically different from the traditional ethics practice, which adopts the compliance and oversight model with audits and codes. Instead of “ethics policing,” in the PiE Model we collaborate with developers, designers, and managers to enhance technology by solving its ethics puzzles.

What are the challenges you face with operationalizing AI ethics?

Most practitioners do not realize the ethical landscape (and minefield) that they are operating on until a scandal blows up. This means that most often their initial interest in ethics remains either for “extra-curricular” or PR purposes. And when there is a scandal, they go to the other extreme of rules, codes, and of course also PR. However, operationalizing AI ethics is only possible through end-to-end integration.

What recommendations do you have for operationalizing AI ethics?

Organizations should focus on building capacity, structure, and processes that enable meaningful ethics integration. This means engaging in the right type of ethics analysis at the right time – some ethics questions are simple enough that practitioners armed with basic ethics tools and skills can detect and solve them; others require in-depth analyses by ethics experts. Putting in place a robust ethics strategy would allow organizations to use their resources efficiently.

Continued on next page



What are the biggest challenges in general to operationalizing AI ethics?

There is no shortcut for ethics. Publishing AI principles and committing to ethical AI are great, but to operationalize AI ethics,
we need to go beyond them. (You can read our article “Operationalizing AI Ethics Principles.”) Making the “right” choice in every
turn often requires a thorough understanding of ethics and its intersection with the technology and the domain. By building
processes that effectively integrate ethics into the innovation lifecycle and by bringing ethics expertise on board, we can
operationalize AI ethics.

What is your business case for Responsible AI?

Ethics risks are serious business risks that might result in reputational harm and loss of customer trust. And ethics opportunities
are often real business opportunities which might give a company the edge that it needs to succeed.

What resistance is one likely to encounter when making the case for responsible AI to the business community?

Practitioners think of ethics as a hindrance—a policing system that stands between a brilliant idea and its implementation. In
fact, AI ethics enhances technology and it does so for all of our sake. Businesses also think that ethics is an unnecessary cost.
However, ethics risks are business risks.

Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has
fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when
they exist in a capitalistic system that is focused on profits?

Ethics and profits are not mutually exclusive. Ethics failures put customer trust—and thus, profits—at risk. With complex AI
technologies, customer trust plays an important role for their adoption of these technologies. A company’s ethics failure would
put its other practices into question. Should customers trust them with their data? Should they trust them for receiving fair
treatment? Should they trust the system to be non-manipulative? If they cannot, they might rightly choose not to use these
products.

Connect with Cansu Canca @ccansu

"Ethics and profits are not mutually exclusive. Ethics failures put customer trust–and
thus, profits–at risk. "

-Cansu Canca, PhD, Founder and Director of AI Ethics Lab



LEARNING FROM THE COMMUNITY

Yasmine Boudiaf

Creative Technologist and Visiting Fellow at Ada Lovelace Institute

In your opinion, how do you enact change inside an organization to operationalize AI ethics?

My experience with organisations has shown me that there needs to be a financial incentive to do nice things, or a financial penalty for doing bad things! Frankly I've given up trying to convince exec boards to adopt ethical practices—I now focus on working with smaller organisations that have an ethical code of practice from the start. That's the key: there need to be operational parameters at the very beginning, before you even look at a dataset.

What are the challenges you face with operationalizing AI ethics?

Designing an ethical framework is a major challenge. We largely define ethical and unethical behaviour from a Euro-centric, neo-colonial, capitalist viewpoint. Any ethical framework coming from that position will be unjust.

What recommendations do you have for operationalizing AI ethics?

It should be regularly reviewed and revised, with input from external groups. Societal values and the way AI is used constantly change. Don't be so arrogant as to think that the operational ethics you came up with yesterday will be relevant tomorrow.

What are the biggest challenges in general to operationalizing AI ethics?

My biggest concern is organisations waiting for regulatory enforcement in order to adopt ethical AI practices. A smart strategy would be to have them in place before regulation forces you to.

What is your business case for Responsible AI?

It is far more efficient to have ethical AI practices in place than to retrofit them into your operations when you are forced to by regulation.

In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection, curation, etc. in your org ecosystem?

When it comes to gathering personal or identifiable data, make sure you have informed consent and ongoing consent (i.e., people can withdraw their data at any time). Clearly communicate and justify the reason for data collection; collecting for posterity is not a good reason.

Continued on next page
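The consent practice described in this interview (informed, ongoing consent with a justified purpose and the ability to withdraw at any time) can be sketched as minimal record keeping. The class and field names below are illustrative assumptions, not any real system's API.

```python
# A minimal sketch of ongoing-consent record keeping: every collection has a
# stated purpose, and a data subject can withdraw consent at any time.
# All names here are hypothetical, chosen only to illustrate the practice.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str            # the justified reason for collecting the data
    withdrawn: bool = False

class ConsentRegistry:
    def __init__(self):
        self._records = {}

    def grant(self, subject_id, purpose):
        # "Collecting for posterity" is not a good reason: require a purpose.
        if not purpose:
            raise ValueError("collection must have a stated purpose")
        self._records[subject_id] = ConsentRecord(subject_id, purpose)

    def withdraw(self, subject_id):
        # Ongoing consent: withdrawal must always be possible.
        self._records[subject_id].withdrawn = True

    def may_process(self, subject_id):
        record = self._records.get(subject_id)
        return record is not None and not record.withdrawn

registry = ConsentRegistry()
registry.grant("user-42", "model evaluation for bias testing")
print(registry.may_process("user-42"))  # True
registry.withdraw("user-42")
print(registry.may_process("user-42"))  # False
```

The point of the sketch is the check in `may_process`: any processing pipeline consults the registry first, so a withdrawal immediately stops downstream use of that person's data.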



What resistance is one likely to encounter when making the case for responsible AI to the business community?

Making a financial case for it, as always.

Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?

The challenge is to think long-term. The unethical, yet permissive AI practices that are in operation now will not make for a
viable business in the future. It's not only "nice" to be ethical, it will ensure a business's survival.

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

I have policies in place in my own practice to mitigate discrimination. I acknowledge that I have blind spots, and so will simply
have a conversation with collaborators at the beginning of a project on how best we can work together. I regularly experience
discrimination, but that is not to say that I'm incapable of discriminating against others.

"We define ethical and unethical behaviour from a Euro-centric, neo-colonial,


capitalist viewpoint. Any ethical framework coming from that position will be unjust. "

-Yasmine Boudiaf, Creative Technologist and Visiting Fellow at Ada Lovelace


Institute



LEARNING FROM THE COMMUNITY

Madhulika Srikumar

Program Lead, Partnership on AI

Tell us about your role:

I help develop strategy and drive research alongside my colleagues and representatives of partner organisations at the Partnership on AI. As a part of my role, I identify questions in relation to "responsible AI" that can deeply benefit from a diversity of voices weighing in—from across sectors, disciplines and demographics—and then work to design multi-stakeholder programs and processes to bring these voices together.

Tell us how you got into Responsible AI. What’s been your path into the field?

I started out training as a lawyer in India and worked at a think tank in New Delhi on the regulation of emerging technologies. Invariably, a lot of my research uncovered how cyber governance in emerging economies is shaped by policies and ethical norms scripted predominantly in the West. I set out to find opportunities during my graduate studies in law to gain proximity to actors who challenge, critique, and develop these norms—and in turn contribute to the evolution of an AI-driven political economy.

What recommendations do you have for operationalizing AI ethics?

I came across a quote recently by Daniel Kahneman, in an excellent piece about the limitations of statistical proxies, that when faced with a difficult question we have a habit of unknowingly swapping it for an easy one. I think it’s worth reminding ourselves that even as we try to find answers to ethical conundrums brought on by the design and deployment of AI, frequently revisiting the problem we are trying to solve and reframing it by engaging with all stakeholders can be a powerful endeavor in and of itself. A recent paper by Liu and Maas called "Solving For X" says it best: we should invest in a problem-finding approach in addition to a problem-solving one [link: http://dx.doi.org/10.2139/ssrn.3761623].

I also found this research (link: https://arxiv.org/abs/2006.12358) led by former PAI colleagues to be an essential read on recommendations for org structures that can enable more effective responsible AI initiatives.

What are the biggest challenges in general to operationalizing AI ethics?

The challenge of identifying how we can embed principles of due process in automated decision-making systems—especially in cases where there’s an absence of quantifiable benchmarks—stands out to me as a pressing one.

Connect with Madhulika Srikumar at @madhusrikumar

THE BUSINESS CASE FOR AI ETHICS | 32


LEARNING FROM THE COMMUNITY

Ansgar Koene
Global AI Ethics and Regulatory Leader / EY

Tell us about your role:

As Global AI Ethics and Regulatory Leader at EY, I bridge the technology development and consulting work that EY is doing on Trusted AI with the Global Public Policy team that engages with policy and regulatory development related to AI. Part of the work involves developing our internal governance around the use of AI systems, making sure that we "walk the talk" by applying the Trusted AI principles for ethical and robust AI to our own processes. Another part is engaging with organisations like the OECD, IEEE and others in the development of international standards and best practices for responsible use of AI.

Tell us how you got into Responsible AI. What's been your path into the field?

I spent about ten years as an academic doing research on the intersection between computational neuroscience and robotics. In 2012, while doing some work on interdisciplinary data sharing, I started to look into the use of social media data for Computational Social Science. This led to me joining the Horizon Digital Economy Research Institute at the University of Nottingham on a project exploring privacy and ethical issues around the use of social media data. After attending a conference on recommender systems, which focused my thinking on the way our online interactions are used to shape the information ecosystem we are exposed to, I started pitching the idea for a research project focusing on bias in recommender systems. This led to the UnBias research project, a collaboration between the Universities of Nottingham, Oxford and Edinburgh, focusing on the concerns of young people when engaging with algorithmically mediated online platforms.

In your opinion, how do you enact change inside an organization to operationalize AI ethics?

Operationalizing of AI ethics requires a "whole of organization" approach. It is not just a matter for the data science team or an AI ethics board. Operationalizing AI ethics involves the way the organization conceptualises the use of AI, engages with external stakeholders, recruits and trains its staff and sets the overall organizational priorities. This is why the UnBias AI for Decision Makers toolkit I developed with Giles Lane focuses on breaking down organizational silos to bring together people from across organizational teams for a holistic, transdisciplinary discussion about the ethical implications of implementing an AI application.

Tell me about your organization's ambition for Responsible AI? Why is it a priority?

As a professional services company with a strong pedigree in the audit and consulting sectors, EY is focused on providing a trusted ecosystem where people, organizations and governments can pursue the benefits of AI for building a better working world, with the knowledge that processes are in place to minimize and mitigate potential risks.

What are the challenges you face with operationalizing AI ethics?

One of the ways I am working to operationalize AI ethics is through the development of standards, such as the IEEE P7003
Standard for Algorithmic Bias Considerations. Obviously, given standards about bias, an important issue for us has been
inclusion and diversity among the participants of the working group. While we managed to achieve a good mix of participants
from industry, academia, and civil society, as well as having at least some representation from each inhabited continent, it
remains undeniable that participation in the group is heavily skewed towards people from Europe and North America. More broadly speaking, there is a significant lack of participation by the Global South in the new AI standards currently being developed by ISO/IEC, IEEE-SA, ITU, and other bodies, which will define industry best practices.

What recommendations do you have for operationalizing AI ethics?

My main recommendation for operationalizing AI ethics is to avoid siloed thinking. There can be no AI ethics without
organizational ethics. Think beyond the immediate aims of the AI project to take into consideration the wider impacts on all
affected stakeholders.

What are the biggest challenges in general to operationalizing AI ethics?

The biggest challenges to operationalizing AI ethics are the structural changes that may be required within an organization to
systematically identify the downstream impacts of AI applications and the need to accept that being serious about AI ethics
includes potentially having to devote more time or resources, or maybe even cancelling AI projects when no ethical
implementations are possible.

What is your business case for Responsible AI?

My business case for ethical AI centers on long-term value for the organisation both in terms of reputation and trustworthiness,
as well as future-proofing for compliance with regulatory developments.

Since Responsible AI requires a culture change and re-education in a range of areas, how are you ensuring your employees
have the knowledge and skills to design and build responsible AI solutions?

Within EY we have launched a new professional training course focused on ethical and trusted AI. The course is offered to
everyone within the organisation and is part of the EY Badges system that all EY staff are encouraged to do for their continuing
professional development.

In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection,
curation, etc. in your org ecosystem?

Since EY operates in the highly regulated auditing sector and deals with sensitive client data, there are strict regulatory
compliance requirements on the way data is used.

In your opinion, who’s doing interesting things in Responsible AI?

European Commission, European Parliament, Council of Europe, OECD, WEF-AI Global, Algorithm Watch, AI Now, IEEE

What resistance is one likely to encounter when making the case for responsible AI to the business community?

Typical resistance to ethical AI in the business community is likely to focus on the difficulty in quantifying the benefits or
averted risks that will be achieved as a result of resources and time that need to be invested. This is especially true in the
absence of regulatory requirements with defined fines or other consequences for non-compliance.


Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has fundamentally shaped how businesses and societies operate. How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?

To address this issue, we are working to link AI ethics to other shifts in business thinking, such as a move towards focusing on (and measuring) long-term value and non-financial corporate assets (including reputation).

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

EY operates primarily in the B2B or B2G space, where the affected stakeholders are other organizations. When providing consulting services to clients providing B2C or G2C services, EY's Trusted AI framework includes items on the suitability of project teams, the representativeness of data, and the assessment of bias in model performance.

Connect with Ansgar Koene @arkoene

"Operationalizing of AI ethics requires a 'whole of organization' approach. It is not just a matter for the data science team or an AI ethics board."

-Ansgar Koene, Global AI Ethics and Regulatory Leader / EY



LEARNING FROM THE COMMUNITY

Dr. Rumman Chowdhury
CEO and Founder of Parity

Tell us about your role:

CEO and Founder [of] Parity AI, a software that enables businesses to audit their algorithms. Formerly Accenture's Global Lead for Responsible AI.

What's been your path into the field?

Data scientist and social scientist. I love understanding patterns of human behavior using data, building technological solutions, and want to ensure these practices are done properly.

Tell me about your organization's ambition for Ethical AI? Why is it a priority?

My company's goal is to enable multi-stakeholder engagement in ethical audits, including the meaningful integration of qualitative work (e.g. user interviews, legal feedback) with quantitative work (data science interventions).

What are the challenges your company faces with operationalizing AI ethics?

People want short easy answers. They want platforms that give metrics. They don't want to hear "multi-stakeholder engagement" . . . until they actually try (and fail) to do a real ethical audit. Getting different groups to understand other groups is difficult—sometimes because of skillsets, and other times because of how they value (or do not value) the group mentioned.

What recommendations do you have for operationalizing AI ethics?

Any algorithmic audit or ethical work needs to be deliberative. This may be at odds with scaling, but so what? There's time to do good work—we have figured out how to incorporate legal review, security assessments, QA and more.

What are the biggest challenges in general to operationalizing AI ethics?

You can't quantify and operationalize everything. You will never be able to do "automated" algorithmic audits that are comprehensive.

What is your business case for Ethical AI?

Responsible AI is about making better products.

Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your team at Parity has the knowledge and skills to design and build ethical AI solutions?

Education and practice. Parity's platform allows companies visibility into what other teams are doing, and it provides insights into best practices. Overall, we have to balance process standardization with contextual critical thinking.
What resistance is one likely to encounter when making the case for ethical AI to the business community?

- The idea that Ethical AI is a political-left signal.
- The thought that it slows down innovation or slows down the process.
- The belief that things like 'fairness' are normative and subjective, or fluffy and simply there for signaling (related to the first one).

Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has fundamentally shaped how businesses and societies operate. How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?

My suggestion is to think through the literature in organizational change management focused on theories and catalysts of change. This is featured in our article on how to enable Responsible AI (with Bogdana Rakova, Henriette Cramer, and Jingying Yang; link: https://sloanreview.mit.edu/article/putting-responsible-ai-into-practice/).

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

[The] Parity platform is built to engage multiple stakeholders and draw insights from their feedback - so this sort of interaction is enabled by design. While we can all talk about 'incorporating minority voices', few people have figured out how to translate that into product development - what does it mean to translate 'inclusivity' into an app?

Connect with Rumman at @ruchowdh

"You can't quantify and operationalize everything. You will never be able to do
'automated' algorithmic audits that are comprehensive."

-Dr. Rumman Chowdhury, CEO and Founder of Parity



LEARNING FROM THE COMMUNITY

Maria Axente
Responsible AI and AI for Good Lead, PwC UK

Tell us about your role:

In my role as Responsible AI and AI for Good Lead at PwC, I lead the implementation of ethics in AI for the firm while partnering with industry, academia, governments, NGOs and civil society, to harness the power of AI in an ethical and responsible manner, acknowledging the benefits and risks in many walks of life. I have played a crucial part in the development and set-up of PwC's UK AI Center of Excellence, the firm's AI strategy and, most recently, the development of PwC's Responsible AI toolkit, [the] firm's methodology for embedding ethics in AI. I am a globally-recognized AI ethics expert, [as well as an] Advisory Board member of the UK All-Party Parliamentary Group on AI, ORBIT and the UNICEF AI 4 children programme, a member of BSI/ISO & IEEE AI standard groups, a Fellow of the RSA and an advocate for gender diversity and children and youth rights in the age of AI.

What's been your path into the field?

During my MBA course I focused on strategy, corporate governance and business ethics, and when I joined the AI Center of Excellence I started exploring what ethics means for AI and what position we, as a professional services firm, should take in framing it for us and our clients. Along this journey, we built the Responsible AI toolkit as our response to clients' challenges of implementing ethics across the whole AI lifecycle. The toolkit also has an AI ethics framework that looks at supporting the C-suite to contextualize and operationalize ethics. I currently lead its implementation for our clients and its development - we just launched a data ethics approach based on Responsible AI. In our quest of developing the toolkit and the philosophy, I got involved with many organizations in the UK and globally and co-created much of the knowledge, narrative and frameworks around responsible and ethical AI.

Tell me about your organization's ambition for Ethical AI? Why is it a priority?

We are working to embed ethics in all our technology efforts, focusing currently on data and AI, leveraging the Responsible AI experience but also our corporate philosophy of aligning profit with purpose. We believe that all AI should be responsible AI to see the benefits it promises and mitigate the risks. While ethical AI is a clear competitive advantage for those who do it right, it takes time, commitment, transparency and bravery to trigger the organizational change AI ethics will bring.

What are the challenges that you and your organization face with operationalizing AI ethics?

Let's frame the issue correctly—operationalizing ethics in AI is an operations issue—you have to understand first the system (the organization).
That needs an ethical check/overhaul: allocate the right resources - people in the right roles with the right expertise, with the mandate and support - and, lastly, have a transparent roadmap, communicate continuously and engage throughout. And if we do that, it is easy to see where the challenges come from.

In our case, the biggest one is the complexity of the task for an organization as diverse and big as ours. We know our approach needs to be systemic rather than linear (as is being done in most cases), and that we must invest time, resources and patience over the medium term while we focus on low-hanging fruit.

The second challenge is how to prioritize various initiatives based on potential impact on outcomes and ease of implementation, and how to balance internal work with the external work we do for our clients.

Lastly, demonstrating progress and trustworthiness to all the stakeholders involved—that requires new ways of thinking,
working, and ultimately behaving.

What recommendations do you have for operationalizing AI ethics?

Think big (systemically) but act small and focused (low-hanging fruit) and build momentum. AI ethics is a complex and profound cultural and organizational change; treat it like one, and learn from the science of cultural change and transformation how to do it well and succeed.

Also, operationalization is not the same as ethical reasoning - as decision making and acting - which is the missing part in all the processes of creating ethical AI, and the glue that holds it all together. All we do with AI, around it and for it needs to be centered on one principle: 'Should I build it?' not 'Can I build it?', which has been the philosophy of computer science since its inception.

Far too often we apply linear thinking to a complex and systemic task; far too often we fall for the "broken part" fallacy [. . .] when what is needed is a systemic analysis [. . .] with correlations and causations highlighted.

What are the biggest challenges in general to operationalizing AI ethics?

In my view two big ones:


1. Most use a narrow AI ethics approach, with no correlation between various activities and no long-term plan. Having principles, or boards or training and framework are good, but only the beginning of the journey.
2. It is not yet a strategic priority for the C-suite—if it were, you would hear companies talking only about responsible AI, not just AI. Because yes, as Virginia Dignum says, "All AI should be Responsible AI." And if it is not, we should not do it!

What is your business case for Ethical AI?

1. A higher awareness among customers, citizens, and leaders of the ethics and dangers of technology. As a result of high-profile ethical failures, the media, lawmakers, regulators, and society-at-large have started to focus their attention and actions on the negative impact of AI, and on how ethics should play an important part.

2. The UK's AI public policy is focused on ethical and responsible use of data and AI. A significant proportion of the AI policy work within the UK is concerned with ethical issues when it comes to the strategic use of data and AI. In the UK, the 'ethics of AI' has been seen as a national competitive advantage in the global AI sector. Alongside Singapore, the UK is one of the only countries to have a government department focused upon the ethics of data and innovation.

3. Imminent AI regulation triggers companies to invest in readiness. UK regulators (alongside the EU) are seen as leading the
way internationally in their understanding of the financial, societal and moral impacts and they are acting swiftly to develop
regulatory frameworks mostly addressing ethical issues on data and AI.

4. Increased scrutiny of the tech industry has increased the trust and branding risks associated with a lack of ethics in
technology. Building and maintaining brand trust in the age of AI becomes a priority for organizations of all sizes.



5. Customers demand responsible tech and use of data. There is a growing customer expectation for technology to be fair,
responsible, ethical and beneficial, hence there is more scrutiny on the products and services they consume.

6. [The] C-suite sees responsible technology and data as a source of competitive advantage. 90% of respondents believe the benefits of responsible AI will outweigh the costs (The Economist) - this could have a significant impact on the following: enhanced product quality and commercial competitiveness; talent acquisition and retention; sustainable investing and strengthening all stakeholder relationships; and boosting and maintaining a high level of brand trust.

7. Responsible tech and data is a strategic priority for some C-suites. According to the most recent PwC Responsible AI research, three quarters of companies have formal policies (or guidance) in place, with 1 in 5 having an ethical framework in place. All companies questioned have defined, or are looking to define, their own point of view on the ethics of technology/AI/data via a variety of activities: ethical principles, boards and committees, frameworks, and audit/assurance tools.

In your opinion, who’s doing interesting things in Ethical AI?

Of the big organizations we work with, many struggle to understand it, few talk about it, and even fewer can demonstrate real progress rather than empty PR. Three examples come to mind: the Ethical AI Practice at Salesforce, Ethics and Society at Microsoft, and Yoti.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

[The] current incentive structure and the mindset that generates it. There is a false antithesis between profit and ethics - the belief that embedding ethics will reduce profitability. We have to go back to the business model and demonstrate how ethical AI products not only open new markets but also reduce compliance and risk costs by mitigating them proactively.

How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?

See, here is the problem: we think about ethics as a killer of progress! We have to frame it correctly - how can ethical businesses be more profitable and solve important societal problems at the same time? We have done it in the past with ethical supply chains and sourcing, for example, so what can we learn from those practices? How can we respond to the huge cultural shifts that BLM and #MeToo have brought, shifts that give rise to a new consumer mindset? These could be huge opportunities for organizations to change profoundly if they wish to remain competitive and thrive.

But you know what, each organization - willingly or unwillingly, with pressure from society, customers or regulators - will have to go down that path of soul-searching and through the test of 'how moral we REALLY are.' Responsible AI cannot be built by an unethical business, and funnily enough, AI systems are exactly those mirrors that will uncover with phenomenal precision the true moral nature of those organizations.

Connect with Maria @maria_axente

"Having principles, or boards or training and framework are good, but only the beginning
of the journey."

-Maria Axente, Responsible AI and AI for Good Lead, PwC UK



LEARNING FROM THE COMMUNITY

Brandeis Marshall
CEO and Co-Founder/DataedX

Tell us about your role:

My role is to develop high quality data equity strategies and plans ensuring alignment with short-term and long-term goals, oversee all operations and business activities, build trust relations with key partners, and maintain a deep knowledge of the data learning markets and industry.

What's been your path into the field?

White male toxicity in computing has intentionally discouraged the involvement of non-white people by reducing access to education, employment, and promotion for decades. I have seen and experienced the impact of this toxicity on non-white people as a computer scientist and educator for nearly two decades. As the value of data becomes more widely known, I want to help curb a repeat experience in what was emerging as the data science field.

Data is about people. So data understanding, tools and applications need to bake in inclusivity and equity. Race, gender identity and class are afterthoughts in computing and automation right now. Let's actively work to center the most vulnerable in our handling of data.

What are the challenges you face with operationalizing AI ethics?

Operationalizing AI ethics approaches, in my opinion, continues to prioritize scalability and generalizability over impact to people. Approaches are re-inventions of tech solutionism when inclusivity and equity are localized decisions to be made by a collective of folks in tech, social science, and the humanities. As long as tech leads the solution paths, ethics in practice will struggle to be agile enough to course correct.

What recommendations do you have for operationalizing AI ethics?

The main recommendation is to follow the expertise of both internal and external advisory groups in implementing methods that'll minimize harm.

What are the biggest challenges in general to operationalizing AI ethics?

To me, the biggest challenges are outlined in a 2017 Forbes article ("Five Reasons Data Transparency Isn't Working in Your Organization (Yet)," link: https://www.forbes.com/sites/brentdykes/2017/11/30/five-reasons-data-transparency-isnt-working-in-your-organization-yet/?sh=91b8c6bc30c7). Those challenges are people-driven barriers: protect advantage, preserve position, hide problems, avoid risk and resist change.
How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?

Companies must decide to re-align the areas in which they obtain their profit gains. Automated algorithms and systems that are identified as causing harm need to be prioritized by the company for timely resolution. Time, talent, and money must be redirected to fully implement the needed interventions.

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How
does your organization factor in these possible risks/liabilities?

The mission of DataedX is to democratize culturally responsive data learning. To do this as a company, we design, deliver and co-create resources and strategies that bake systemic equity into automated algorithms, systems, policies, and regulations. We center inclusivity and promote the contributions of non-white people to the data space in our work.

Connect with Brandeis @csdoctorsister

"Operationalizing AI ethics approaches, in my opinion, continues to prioritize scalability and generalizability over impact to people."

-Brandeis Marshall, CEO and Co-Founder/DataedX



LEARNING FROM THE COMMUNITY

Reid Blackman, PhD
CEO & Founder, Virtue; EY AI Advisory Board Member

Tell us about your role:

I help senior leaders operationalize AI ethics to mitigate risk and earn trust.

What's been your path into the field?

I was an academic philosopher specializing in ethics for 20 years. About 3 years ago I saw engineers ringing the alarm bells around AI/ML. I also saw that people often don't know how to tackle ethics in a business setting. Given my expertise, I saw an opportunity to help leaders who want to take the ethical and reputational risks seriously.

Tell us about your organization's ambition for Ethical AI? Why is it a priority?

Our goal is to help people build an ethical AI program into existing infrastructure and processes. We want organizations to see this is part and parcel of a general risk mitigation strategy. We're not an "AI for Good" organization. We're "AI for Not Bad."

What are the challenges you face with operationalizing AI ethics?

The first challenge is getting senior leaders to take it seriously. Some already do, but that's a small percent of senior AI leaders. Those that do are not sure how to get the ball rolling both with their peers and the people they manage. Thus, building organizational awareness and justified buy-in is very important.

What recommendations do you have for operationalizing AI ethics?

There's a lot to say here. I think the first step is getting buy-in from as many senior people as possible. As much as people like to talk about "bottom up," I don't think we'll see real impact/decent risk mitigation with that approach. A top-down approach will require building an implementable framework which includes ethical standards, a governance structure, a quality assurance program, role-specific responsibilities, and more. That's a big ask, of course. For those organizations for whom that's too large a first step - which is most organizations - a well curated, well trained, well positioned, and powerful ethics committee can work wonders.

What is your business case for Ethical AI?

There isn't one business case to make; there are multiple business cases. For the C-Suite and the board, it's all about mitigating reputational risks and regulatory investigations. For product managers, it's about building products that are more appealing to consumers who take ethics seriously. For people in operations, it's about the ability to move through ethical quandaries efficiently.

In your opinion, who's doing interesting things in Ethical AI?

Like I said, I was an academic philosopher
for 20 years so, unsurprisingly, I find the most interesting work to be academic research in philosophy, computer science, sociology, etc. First, academic research is intellectually rigorous in a way that lots of other work isn't. If you want to systematically and exhaustively identify ethical risks, that kind of rigor is important. Second, it's interesting because it presents a challenge to me: how can I take the academic research and translate it into a practical business context in a way that retains the insights while making contact with real-world problems?

What resistance is one likely to encounter when making the case for ethical AI to the business community?

One issue is that there is no cross-organizational standard concerning who should own the problem. That means you can make a strong case for ethical AI, but the person or people to whom you're speaking are not tasked with solving that problem, which means they don't have the budget or resources to tackle it.

Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has fundamentally shaped how businesses and societies operate. How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?

That's only an issue if you think ethical AI is either not conducive to, or contrary to, profits. If there are ways in which ethical AI is conducive to profits, there's no issue. Now, there's a very strong claim one could make that I think is false: in all cases, ethical AI—and all that goes into creating the infrastructure, processes, and practices that make it possible—is more conducive to promoting the bottom line than non/unethical AI. Endorsing that claim strikes me as an expression of an ideology as opposed to a judgment grounded in empirical evidence. That said, there are many cases in which it is true that a given company would in fact promote their bottom line by implementing an ethical AI program.

Connect with Reid @reidblackman

"The first challenge is getting senior leaders to take it seriously. Some already do, but that's a
small percent of senior AI leaders."

-Reid Blackman, PhD, CEO & Founder Virtue



LEARNING FROM THE COMMUNITY

Phaedra Boinodiris
Executive Consultant, Trust in AI

Tell us about your role:

As you scale AI, you are also scaling your risk. I am responsible for helping our clients mitigate their risk through a three-part offering. I work to help clients with the culture they need to adopt and scale AI safely, the AI engineering with forensic tools to see inside black-box algorithms, and governance to make sure the engineering "sticks" to the culture.

What's been your path into the field?

I became impassioned about the field after the news regarding Cambridge Analytica came out in 2018. I was so utterly horrified I decided to pursue a PhD in the space to learn as much as I could about it. After spending two years researching and doing talks, I can now claim that it is my "day job" to teach this practice to others.

Tell me about your organization's ambition for Ethical AI? Why is it a priority?

As AI is being used to make many high-stakes decisions about people, it is critically important that we have trust in our AI systems. Is it fair, is it easy to understand, did anyone tamper with it, is it accountable?

What are the challenges your organization faces with operationalizing AI ethics?

At IBM, we recognize that a multidisciplinary, multidimensional approach is needed to help organizations create safe guardrails in their AI journey that ensure fairness and transparency—protecting end users from the risks of bias. This holistic approach needs stakeholders across an organization to connect with each other to mature their approach.

What recommendations do you have for operationalizing AI ethics?

We help clients not by throwing tech over the fence, thinking it alone will solve the problem. Our approach to helping clients create responsible systems is necessarily holistic. IBM is best positioned to help our clients mitigate their risk through a three-part offering: we help our clients with the culture they need to adopt and scale AI safely, the AI engineering with forensic tools to see inside black-box algorithms, and governance to make sure the engineering "sticks" to the culture.

Our approach engages stakeholders from across an organization, from data scientists and CTOs to Chief Diversity and Inclusivity Officers. Fighting bias and ensuring fairness is a challenge that is solved by more than just good tech and by more than just one kind of stakeholder.

What are the biggest challenges in general to operationalizing AI ethics?

Oftentimes, like in the case of a US
retailer, we are initially engaged to solve a business problem from ONE stakeholder—oftentimes the Chief Data Officer wanting
to increase time to value for AI— only to have the effort mature into a broader effort that incorporates cultural transformation
and governance. The engaged stakeholders then grow well beyond the initial Data Scientist to include the Chief Diversity and
Inclusivity Officer, Chief Legal Counsel and more.

What is your business case for Ethical AI?

Unwanted bias places privileged groups at systemic advantage and unprivileged groups at systemic disadvantage, and it can
proliferate in your data and your AI. Unwanted bias comes from problem misspecification and data engineering, but even more
commonly from prejudice in historic data and undersampling. Artificial Intelligence (AI) enhances and amplifies human
expertise, makes predictions more accurate, automates decisions and processes, optimizes employees' time to focus on higher
value work, improves people's overall efficiency, and will be KEY to helping humanity travel to new frontiers and solve problems
that today seem insurmountable. But, as you scale AI, you also scale your risk of calcifying unwanted bias systemically.

Today AI is being used in virtually every domain and industry to make all types of decisions that directly affect people's lives.
Regulation can be a powerful tool to build consumer trust in emerging technologies. As per our CEO's letter to President-Elect
Joe Biden, IBM believes that a “precision regulation” approach by the government can help create ethical guardrails for the use
of new technologies without hindering the pace of technological innovation.

Only by embedding ethical principles into AI applications and processes can we build systems based on trust.

Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?

- Emphasize the need for diverse and inclusive teams.
- Incorporate ethics in design thinking.
- Red-team vs. blue-team tactics to stress-test assumptions of AI.
- AI Advocacy Ambassador program.
- Feedback loops.
- Teach unconscious bias and how it relates to data and work product.
- Focus on human-friendly automation.

In your opinion, who’s doing interesting things in Ethical AI?

Linux Foundation, Partnership for AI, WEF AI Ethics Board guidance, IEEE, Kathy Baxter at Salesforce (love her blogs).

What resistance is one likely to encounter when making the case for ethical AI to the business community?

It oftentimes goes directly against the entrepreneurial mindset of pivot fast, 'throw spaghetti on a wall until it sticks.' Ethical AI
must be necessarily deliberative in predicting unintended consequences.

Anything else you'd like to add on the topic of AI Ethics in general?

We need to ask ourselves: why are we not teaching about AI and ethics in high schools? Why do higher education institutions
market classes on foundational AI only to data scientists and computer scientists? This is a massive disservice to our community.

Connect with Phaedra @Innov8game



LEARNING FROM THE COMMUNITY

Emanuel Moss
Researcher - Data & Society Research Institute

Tell us about your role:

I research how abstract ethics principles are turned into concrete organizational practices inside tech companies, and the social context of algorithmic accountability. As an anthropologist, I am interested in understanding data-driven technologies as socio-technical systems and in understanding how people construct meaning for their work. I conduct interviews and engage in participant-observation to analyze how organizations work, how people understand their roles, and the significance of technological objects for society.

What's been your path into the field?

I began studying ethical AI as part of my dissertation research conducting an ethnography of machine learning. People in the lab I was researching became very concerned with ethical issues rocking the industry, and I wanted to understand where these concerns came from, because they didn't resemble old-fashioned research ethics, and where they were going for the industry.

Tell me about your organization's ambition for Ethical AI? Why is it a priority?

My organization is deeply interested in the role technology plays in society, and in bringing a humanistic, social-scientific perspective to understanding the relationship between technology and society. Ethical AI, as a way of thinking, as a set of practices, and as an emerging goal, is shaping how technology and society are shaping each other today.

What are the challenges your organization faces with operationalizing AI ethics?

We study how AI ethics are being operationalized, and the biggest challenges we face are that so much of the ethical AI conversation is happening behind closed doors, and so many voices are excluded from the AI ethics conversation. AI ethics needs more case studies—about successes and failures—from which to build common knowledge and improve practices across the industry, and needs to recognize the past and future contributions of community activists in addressing many of the most trenchant ethical issues for AI.

What are the biggest challenges in general to operationalizing AI ethics?

Understanding what the most common failure modes for AI are, understanding the full set of algorithmic harms, and being able to measure the impacts of those harms.

In your opinion, who's doing interesting things in Ethical AI?

Ada Lovelace Institute, AI Now, fast.ai, Algorithmic Justice League, Data & Society, Markkula Center at Santa Clara University, DeepMind.

Connect with Emanuel @mannymoss
LEARNING FROM THE COMMUNITY

Triveni Gandhi
Data Scientist, Responsible AI - Dataiku

Tell us about your role:

As a data scientist and Responsible AI SME, I work with clients to build and deploy their AI pipelines, with a special focus on reducing bias in the machine learning cycle. I also create custom trainings about how to think about and execute Responsible AI at the executive and practitioner level. Most importantly, I evangelize the importance of holistic approaches to Responsible AI both internally and externally.

What's been your path into the field? Tell me how you got into Ethical AI.

I completed my PhD in Political Science in 2016 but realized I would have more practical impact on the lives of others outside of academia. After working as a data analyst in a non-profit, I came to Dataiku to help democratize AI. While here, I've seen the amazing power of our tool to transform businesses, but with my social science background, I am equally aware of the potential negative impact of AI. Thus I began a push for enabling users of our tool to think about broader implications and known problems with AI.

Tell me about your organization's ambition for Ethical AI? Why is it a priority?

Dataiku is a software vendor that enables individuals from all skill sets and backgrounds to use and deploy AI in their organizations. However, that power cannot be unchecked, and we feel it is our imperative to make sure our users have assessed all potential risks, negative impacts, and unintended consequences of their AI products. This means we prioritize education and guidance on the responsible use of our tool.

What are the challenges your organization faces with operationalizing AI ethics?

I think the biggest issue is coming to an agreement on what ethics we want to promote to our users. We have taken the stance that it is not our place to dictate what constitutes fair and balanced, meaning our education and evangelism is more neutral (a loaded term, of course). As a result, we think about Responsible AI as ensuring the outcomes of a pipeline are in line with an organization's stated outcomes and goals—whatever those may be. This means we want our clients to own the tradeoffs they make with every AI decision, especially if that decision has negative impacts.

What recommendations do you have for operationalizing AI ethics?

AI ethics starts at the top—leadership needs to determine a set of expectations and values for the organization. Only then can leaders across the various parts of the organization work together to define and build requirements, processes, and transparency mechanisms both vertically and horizontally.
What are the biggest challenges in general to operationalizing AI ethics?

There are so many different methods, ideas, approaches, and more to Responsible AI that it can become overwhelming to know
where to start. In particular, it becomes difficult to pinpoint the one person who is willing to take on the task to organize the
various moving pieces.

What is your business case for Ethical AI?

I can find a business case at nearly every client I speak to—even manufacturing firms who are interested in using AI for People
Operations. The business cases are not hard to find, because AI impacts humans in society in so many ways.

Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?

We are creating internal enablement materials so that every customer-facing role can speak to and support our clients in their
Responsible AI efforts.

In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection,
curation, etc in your org ecosystem?

Start with the ground belief that all data is biased, as it is a product of the historical context it is collected in. Data is not
objective or infallible—so start there and be willing to question and analyze every piece of information. In addition, document
everything! Who collected the data? When? How? The concept of Datasheets for Datasets by Gebru et al. is really important
here.
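To make the "document everything" advice concrete, here is a minimal sketch in Python of the kind of provenance record one could attach to a dataset. The field names and example values are hypothetical, loosely inspired by the datasheet idea rather than any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """Minimal provenance record to attach to a dataset."""
    name: str
    collected_by: str        # who collected the data?
    collected_when: str      # when was it collected?
    collection_method: str   # how was it collected?
    known_biases: list = field(default_factory=list)

# Hypothetical example entry for an HR dataset
sheet = Datasheet(
    name="job_applications_2019",
    collected_by="recruiting team, via web form",
    collected_when="2019-01 through 2019-12",
    collection_method="self-reported applicant submissions",
    known_biases=["referrals overrepresented vs. cold applicants"],
)
print(sheet.name)
```

Even a lightweight record like this forces the who/when/how questions to be answered, and the known-biases list questioned, before the data feeds a model.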

In your opinion, who’s doing interesting things in Ethical AI?

Timnit Gebru, Joy Buolamwini, Brandeis Marshall, Rachel Thomas, Shannon Vallor are big names that come to mind. I think
there are also numerous researchers out there who are probably not flashy names but are contributing to the field every day,
and I hope we can find ways to elevate more of those voices.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

The main issue is, of course, pushback from people who don't see Ethical AI as a problem. To them, bias is subjective and a
difference in outcomes or impact is only reflecting what is true in the world. Folks like this think that AI should only reflect what
already is, not what it could be.

How would ethical AI improve your value prop to your customers?

I think the value prop is quite clear—we are helping orgs understand and mitigate risks, or at the very least, be aware of what
impacts they have. Reputation costs are big today, and our customers are keen to avoid any potential fallout of their AI
products. Success of Responsible AI means knowing that our customers are constantly thinking about and improving their
processes, not seeing responsible AI as an afterthought, or something to do once and forget.

How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?

The biggest motivation is that of reputation cost. No company wants to be seen as evil or doing wrong, especially with the way
information and news is instantaneously shared around the world. It would be nice for businesses to have a sense of moral
responsibility, but in a capitalist system it is the stick, not the carrot, that will drive ethical considerations forward. This means
businesses can focus on profits, but know when to draw a line to avoid greater costs to themselves.

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

We build tools that allow anyone with subject matter expertise to be involved in the AI pipeline [and] create more democratic
and collaborative environments.

Connect with Triveni at @triveni_gandhi



LEARNING FROM THE COMMUNITY

Merve Hickok
Founder / AIethicist.org

Tell us about your role:

I am an independent consultant and trainer on AI ethics. My work is focused on 3 priorities: creating awareness, building capacity, and developing AI governance. In terms of awareness raising and advocacy, I try to make it easier for individuals and organizations [to] understand the impact of big data and AI products on individuals and social justice, and the consequences of biased products. When organizations commit to making a change, I help them build capacity in their own teams or provide consultancy throughout the product life-cycle, depending on their needs. I also think soft and hard governance is crucial, so I collaborate with a number of international organizations and projects developing AI governance methods internationally.

Separately, because of my professional background in HR tech, recruitment, and diversity, I work on bias in AI recruitment products with the above priority areas.

What's been your path into the field?

My previous corporate work was focused on HR recruitment technology, processes and diversity. My work allowed me to see how certain processes, mechanisms and technologies can create obstacles for those in disadvantaged groups, and what intentional steps we need to take to prevent that from happening.

When I started researching more and immersing myself in different case studies, I understood the wider impact of AI products on individuals' access to opportunities and resources, as well as the consequences for social justice. So my path has been one of moving from awareness to research, from further insights to the wider picture, and working towards spreading that message.

Tell me about your organization's ambition for Ethical AI? Why is it a priority?

My ultimate ambition is to make my job obsolete: for the ethical development and use of AI to become second nature, so that we no longer need to make a case for it or advocate for organizations to be responsible in their products.

What are the challenges you face with operationalizing AI ethics?

I am a consultant and trainer helping organizations operationalize AI ethics. The main challenge is to ensure that all levels involved understand the importance of AI ethics and commit to it. AI ethics cannot be a top-down or bottom-up approach only. It has to be a combination of both, and it needs to align with the organization's values and culture. Because I can provide a more tailored approach and support to clients, we can ensure that [the] AI mechanisms or governance we build works for their organization and product.
What recommendations do you have for operationalizing AI ethics?

Operationalization of AI ethics requires alignment with an organization's culture and all of its mechanisms and methodologies.
It needs buy-in from all levels in the organization, and it requires intentional steps to be taken. Therefore even when a system is
in place, it is crucial that the organizations monitor and improve both their AI products, as well as their organizational processes
and mechanisms.

What are the biggest challenges in general to operationalizing AI ethics?

The biggest challenge is to start the conversation with decision-makers on the importance of AI ethics and why they should
adopt it. Then you can help those involved (whether decision-makers or developers) see that AI ethics is not only the
responsible thing to do, but it is also crucial for the organization's continuity and success. Yes, there will be some additional
resources required until capacity is built inside the organization and until it becomes business as usual. However, the return on
those investments, both ethical and financial, is real.

What is your business case for Ethical AI?

Ethical development of AI provides your organization [an] advantage and your investments are returned in terms of employee
loyalty and commitment, customer satisfaction and brand loyalty, investor appreciation and additional investments, and less
stress and costs in legal or PR battles.

In your opinion, who’s doing interesting things in Ethical AI?

Too many to count here as it is spread across advocates, activists, academics, business people. There are also a lot of unseen
people who try to push this inside their organizations. I think the bigger message is 'anyone can contribute to the ethical
development or use of AI regardless of their education.' This is about envisioning a better future and contributing to it with our
own knowledge and experiences.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

At first the businesses think that AI ethics is a 'nice to have' or an 'additional feature' so if they do not have the resources, they
prefer to delay it until a crisis hits. However developing and using AI ethically makes your product stronger, your organization
more resilient and responsible, and your potential market wider. Once AI ethics is operationalized and embedded in an
organization, it creates competitive advantage. So there is always resistance at first until you are able to show the wider picture
and the impacts into the future.

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

Lack of diversity in tech has more to do with the organization's culture and mechanisms than with the pipeline. So organizations need
to take a bird's-eye view of their processes (recruitment, incentives, performance management, flexible work, to name a few) and
understand how they interact with each other and impact the overall culture, diversity and inclusion—and ultimately the
products they develop. Until you address these processes, any fix you try to bring in will be short-lived.

After that, depending on the industry and use case of your AI product, you need to ensure that your stakeholder group is wide,
voices and lived experiences are respected, and that everyone can flag concerns about data, context, metrics, model, outcomes
etc. This is not only about creating UI/UX personas and edge cases—although that is definitely a good start.

Organizations need to be ready to pull the plug on a project at any point if they cannot justify the impact on individuals or
society. They also cannot extract knowledge and perspectives from marginalized groups only to then turn around and develop a
product that will exploit those groups.

Connect with Merve @HickokMerve



LEARNING FROM THE COMMUNITY

Elizabeth M. Adams
Stanford Fellow - Race & Tech

Tell us about your role:

I co-lead an effort to help elected officials in the city of Minneapolis adopt public oversight of surveillance technology and military equipment. More about our work can be found at postme.mn.

What's been your path into the field?

After spending 20 years in tech, I saw an opportunity to use my lived experience and my love for tech to help solve emerging challenges with algorithmic bias. It has evolved from working with data scientists, to helping draft corporate AI Ethics principles, to working with elected officials to pass data policies, to advising organizations on AI Ethics.

Tell me about your organization's ambition for Ethical AI? Why is it a priority?

To ensure everyone can live safely and confidently in the city of Minneapolis as the use of surveillance technologies is explored.

What are the challenges you or your organization face with operationalizing AI ethics?

Education has been a top challenge. Most elected officials, appointees, boards and commissions are not technologists, and their service to the city is not centered in technology. This education has to be balanced with the role officials have in keeping communities safe. The work involves an understanding that everyone should have the opportunity to thrive, and from there we start to unpack the technology being used to govern communities. If the technology does not allow all citizens the ability to thrive, we question it and work to draft transparent data policies.

What recommendations do you have for operationalizing AI ethics?

Ethical tech design is key. Building an agile process to ensure ethics is a part of the entire lifecycle will help speed up correction. Ensure there is a monitoring phase that feeds updates back into the process. Companies need to budget for this loop of find, correct, test, operationalize. I have met with many companies who say they didn't have the budget to "really" consider diverse data sets in their models. Diversity in data should not be a "next version" consideration. Finally, AI Ethics Auditors are needed to do the checking throughout the lifecycle. So: ethical tech design, a monitoring phase, a budget inclusive of diverse data design, and AI Ethics Auditors.

Connect with Elizabeth @technologyliz


LEARNING FROM THE COMMUNITY

Meeri Haataja
CEO & Co-founder, Saidot

Tell us about your role:

I'm CEO and Co-founder of Saidot, a company building a platform for teams who want to make their AI transparent and explainable to their stakeholders, from consumers to partners to investors.

What's been your path into the field?

Prior to my current work as an entrepreneur, I used to lead AI strategy in one of the largest financial services companies in Finland. In this role I was responsible not only for AI, but also for GDPR compliance-focused programs. Having these two different perspectives, AI innovation and privacy compliance, on my table at the same time, I found myself paying more and more attention to other risks, such as non-discrimination, and innovating ways to address these in a systematic manner. Eventually it became very clear I wanted to focus my full attention on helping companies put their AI principles into practice and establish clear operations around transparency.

Tell me about your organization's ambition for Ethical AI? Why is it a priority?

We develop technologies and services that will help public authorities and private companies put in place systematic AI governance by deploying AI registers. Saidot is a technology platform for teams who want to make their AI transparent and explainable and invite their end-users into a dialogue around transparency. Our platform allows individuals and organizations to cooperate in creating registers of transparent AI systems, built on a foundation of accountability.

Our customers register their AI systems and manage the governance and documentation of their systems as per the deployed metadata model. Our transparency design system and integration tools help our customers publish their AI transparency through public AI registers, and deploy transparency features in their AI-powered consumer applications.

What are the challenges your organization faces with operationalizing AI ethics?

Many of our customers are lacking clear roles, responsibilities, and processes for AI governance and ethics. Also, we see a major skills-related challenge, as many AI teams have relatively little background in analyzing and addressing different ethics-related questions.

What recommendations do you have for operationalizing AI ethics?

Find a governance framework that will help you systematically apply the same approach and document the same aspects across all of your AI projects. Educate your organization to talk about and analyze ethical questions. A good
start could be this free online course (link: https://ethics-of-ai.mooc.fi/).

Embrace transparency and community collaboration as an opportunity to learn together. Invite constructive and critical
feedback that will help you learn quicker.

In your opinion, who’s doing interesting things in Ethical AI?

Cities of Amsterdam and Helsinki, Ada Lovelace Institute, UK government in relation to their Data strategy, IEEE, Mozilla
Foundation, just to name a few.

Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has
fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when
they exist in a capitalistic system that is focused on profits?

We're witnessing a rapid expansion of non-financial ESG criteria that are starting to shape, at large scale, how investors allocate
their assets. Because AI ethics is essentially about the social impacts of technologies, the related risks, and how companies
manage them, the rise of ESG will be one of the most important drivers for private companies to take AI ethics seriously.

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

We encourage our customers to always consider and document their approach to ensuring equity and non-discrimination in the
system context. This means all systems registered on our platform are expected to tell how these issues have been actively
managed.

We also provide means for our customers to collect feedback from their stakeholders, and are working on a concept for
enabling consumer participation throughout the system lifecycle.

Furthermore, in our enablers for AI in CSR reporting, we encourage our customers also to communicate the diversity of their AI
teams, actively set targets, and measure the diversity of their teams over time.

Connect with Meeri @meerihaataja

"Many of our customers are lacking clear roles, responsibilities, and processes for AI
governance and ethics."

-Meeri Haataja, CEO & Co-founder, Saidot



LEARNING FROM THE COMMUNITY

Sheryl Cababa
VP of Strategy, Substantial

Tell us about your role:

I'm the VP of Strategy at Substantial, a digital innovation consultancy in Seattle. My role is heading up design research and strategy, and working to integrate ethical innovation practices into our consulting work. We have fairly diverse clients: everyone from startups, to large technology and healthcare companies, to non-profit and philanthropic organizations.

What's been your path into the field?

For the past few years I've been focused on developing a more outcome-centered and systems-thinking approach to my design practice. A lot of this involves working explicitly on integrating second-order thinking, which naturally leads to a consideration of societal harms and unintended consequences of our technological development. This intersects with my emphasis on using equity-centered methods for design research. With much of my work being focused on technology and its impact on humans, this dovetails naturally into the potential ethical conundrums presented by our use of algorithmic classification.

As a result of these areas of focus, I've worked on tools that are meant to help technologists consider the potential ramifications of their products, and [I] have worked with organizations such as Microsoft and Omidyar [Network] in this space.

Tell me about your organization's ambition for Ethical AI? Why is it a priority?

Our goal is to be able to integrate forms of ethical and responsible decision-making on all of our projects. Though we are small, as a design and build studio we are often in product development processes that are similar to other technology companies'. We enjoy the freedom to work on projects that are aligned with our company's values, but we still need to equip ourselves with tools to make ethics an explicit part of our work and process. Even if you're working with values-aligned clients, you can still be blindsided by unintended consequences that are the result of having a myopic lens during the product development process. We want to be aware of this on all our projects.

What are the challenges you face with operationalizing AI ethics?

Even if our small company is aligned with the need to integrate ethics into our work, we work with a wide array of clients who don't always find that a priority, especially if they feel that the work they are doing is of social importance, such as healthcare. Although many people have bought into the importance of baking ethics explicitly into product development processes, it can still be challenging to convince those who don't feel it is a priority.
What recommendations do you have for operationalizing AI ethics?

One thing I've learned from working with many different organizations on this topic is that you have to find ways to meet them
where they are in terms of organizational culture. If, for example, the word "ethics" is a sticking point, you might want to
Trojan-horse the concepts under other terms, such as "responsible technology." I've also seen effective progress with companies that
have an explicit plan for operationalizing, such as a progression from simply building awareness (using the kinds of tools I
mentioned earlier) to full process integration and changes in KPIs. It's one thing to build awareness, but if no one knows, from a
tactical perspective, how to actually integrate it into their day-to-day work, then awareness doesn't go anywhere.

What is your business case for Ethical AI?

I actually challenge the idea that there needs to be a business case for ethical AI work. For me, it’s kind of like those who insist
there needs to be a business case for, say, diversity. Even if diversity within our organizations weren’t explicitly beneficial to
business, wouldn’t it still be the right thing to do? We are constantly twisting ourselves in knots to align values to profit, and in
the case of ethical AI, we should still do it, even if it means trading away some profit. I know this isn’t a popular perspective, but
the demand for a business case absolves us from harmful decisions if, God forbid, there isn’t a good business case. Just do the
right thing, people!

In your opinion, who’s doing interesting things in Ethical AI?

I appreciate that work needs to be done both inside of, and external to, technology companies. In terms of internal work, I really
respect the work that the folks at Microsoft are doing in regards to Responsible AI. For external organizations, I look to activists
such as the Algorithmic Justice League to help pressure companies to be more thoughtful, careful, and just in their use of AI.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

The biggest barrier is [that] being thoughtful about this work requires time, and that time slows down production. When
integrating ethical AI means not aligning with, say, OKRs within your organization, then you aren’t actually going to be able to
make much of a difference; your hands are tied by metrics that result in harm. Metrics have to change; perceptions of success at scale need to change. This organizational change requires time and energy, and businesses, of course, are reluctant to dedicate time or space to efforts that will fly in the face of short-term growth or profit. The key is to argue for long-term health as the
yardstick, for your users, for society.

How can companies try to incorporate AI ethics practices when they exist in a capitalistic system that is focused on profits?

I’ve seen organizations where individual and team goals are the equivalent of “more engagement, by more users, for longer
amounts of time, and more frequently.” These types of engagement-oriented and active user metrics result in algorithmic
systems that are agnostic to the potential harm that driving traffic and engagement can cause. Eyeballs equal money. To
reiterate what I said earlier, you likely need to challenge your organization: what if doing the right thing means fewer eyeballs,
but longer-term health for your users and society? We need to prioritize, and what that often means as a result, is giving up
short-term profit. As a systems thinker, my philosophy is that we are where we are *because of capitalism*. So we need to
challenge the system.

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

It’s been disheartening recently to see instances such as Google firing Timnit Gebru, AI ethics pioneer and founder of Black in
AI. It’s an example of how vocal representation is often silenced, and diversity in this field is often deprioritized. As an
underrepresented woman of color in tech, I’ve seen it all when it comes to the biases of an industry that is so entrenched in a
white male-dominated culture. My goal is to be a part of increasing representation as much as I can, and also working to help my
dominant-culture peers use tools and frameworks to help them interrogate their biases. As a design researcher, I also focus on
integrating equity-centered practices in my work so that we ensure that we engage in participatory design with those who are
potentially most affected by biases in our technology. These methods, of course, are a stopgap. What will truly make a
difference is true diversity and representation more broadly within our industry.

Connect with Sheryl @SherylCababa


THE BUSINESS CASE FOR AI ETHICS | 56
LEARNING FROM THE COMMUNITY

John C. Havens

Executive Director, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

Tell us about your role:

I help create, lead and drive strategy for IEEE's largest AI Ethics oriented program. This involves creation of committees, papers, pre-standardization research for recommendations, policy feedback, and events. I also drive outreach to the volunteers who make up our committees and standards working groups. Created in 2016, our Initiative has about 2,200 participants in our larger community.

What’s been your path into the field?

I was working on a series of articles for Mashable in 2013 and 2014 looking for a common Code of Ethics for AI. This was part of the research for my book, Heartificial Intelligence: Embracing our Humanity to Maximize Machines. I was quite shocked to discover there wasn't a common code of ethics for my research; but it was at this time IEEE invited me to speak at SXSW about my book and work. That led me to pitch them on creating a Code of Ethics for AI, which turned into the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems that led to the creation of Ethically Aligned Design, a 300-page treatise on responsible AI created by over 700 global experts over the course of three years (links: https://standards.ieee.org/industry-connections/ec/autonomous-systems.html and https://ethicsinaction.ieee.org).

Tell me about your organization’s ambition for Ethical AI? Why is it a priority?

IEEE is the world's largest technical professional association. It is, without exaggeration, the largest global engineering body with representatives in more than 160 countries. The ethically aligned principles, including human rights, wellbeing indicators/metrics, and data agency, were recognized at the Board level of IEEE in 2019. IEEE's motto is "Advancing Technology for Humanity," and this means prioritizing Responsible or Ethically Aligned Design methodologies at the outset of design.

What are the challenges you face with operationalizing AI ethics?

One thing we learned early on is you cannot do this work without being multidisciplinary in nature. Otherwise there will simply be harm created by virtue of the fact that not everyone is an expert in everything. For instance, creating a toy for children outfitted with affective computing sensors means you need child psychologists and mental health specialists on a team including engineers and data scientists. A second challenge is the centrality of data for anything related to "AI ethics." A focus on privacy alone is not sufficient—data agency or sovereignty is essential.

What recommendations do you have for operationalizing AI ethics?

Get all stakeholders in a room at the outset of design. We have a paper

Continued on next page

THE BUSINESS CASE FOR AI ETHICS | 57
focused on Ethically Aligned Design for Business featuring product marketing / AI ethics leads from companies like IBM,
Salesforce, and Microsoft (available at link: https://ethicsinaction.ieee.org/). Here a lot of the focus from members was on
identifying your early evangelists who can demonstrate the business value of Responsible AI as a design methodology from the
outset. Getting buy-in from colleagues here has to do with time savings by being cross-discipline from the start (e.g., invite legal
and compliance teams to talk with designers and data scientists when discussing issues like algorithmic bias), and the time and
trust savings that happen when not waiting for a PR or risk-based crisis to happen when releasing AI too quickly into the market
without due diligence and responsibility-focused AI design.

What are the biggest challenges in general to operationalizing AI ethics?

Ignorance in thinking that "ethics" means "morality." Also, existing business models prioritizing exponential growth or exponential profits versus factoring in environmental and human wellbeing at the same level as financial concerns. Note that social innovation has for years been demonstrating that long-term sustainability and business growth are best served by avoiding short-termism. However, the bigger question here is—what are the ultimate societal success metrics for the AI we build? "AI for Good," where this is defined by reaching the UN SDGs, is a fantastic step in this direction. But the point is we value what we measure and what we prioritize. Where people, planet, and profit aren't prioritized at the same level, people and planet are often considered more of an afterthought with CSR and ESG reporting, and harm can result because products are already designed and in the world.

What is your business case for Ethical AI?

Please see our paper, Ethically Aligned Design for Business as mentioned above. However, for a more general sense of how
Ethically Aligned Design can be integrated overall, please see the latest version of Ethically Aligned Design at:
ethicsinaction.ieee.org.

Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?

It's a multi-pronged strategy that's taken years to date. First we created Ethically Aligned Design and launched a series of
standards working groups. Then multiple other areas within IEEE began focusing on issues of ethics. Then we were invited to
multiple policy discussions over the past five years. By being a part of these conversations and creating new committees focused
on areas like business, our continued publications and new standards working groups provide ongoing best practices for the
academic, business, and policy communities. Our work is also open to all and free to join.

In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection,
curation, etc in your org ecosystem?

Yes. In Ethically Aligned Design we have a chapter focused on Personal Data Agency. The basic concept here is that where all
people don't have access to and portability of their data, they will always rely solely on the hope that government or businesses
will protect their personal information. This is not only ill advised in terms of hacking and cyber issues, but denies the
fundamental need for technological and policy channels that allow for genuine peer to peer exchange of one's data and choices
via parity with the existing systems today that can be extractive in nature and largely focused on advertising and economic
priorities. Data sovereignty, however, especially in the coming immersive or spatial web, will be essential both for all humans to
prove their identity while having trusted means to exchange data with businesses, governments, and each other. Not having
data sovereignty means not having true agency over your identity, emotions, or publicly declared choices, which includes voting
in a democratically oriented society.

In your opinion, who’s doing interesting things in Ethical AI?

So many people!! Kathy Baxter at Salesforce, Adam Cutler and Milena Pribić at IBM, Rumman Chowdhury, Olivia Gambelin, Jonathan Stray, Kay Firth-Butterfield, Paola Ricaurte, Data & Society, and many more. It's quite hard to not list about six hundred names here.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

Fighting status quo, single bottom line, quarterly imperatives. Communicating that "AI Ethics" can also be called
Continued on next page
THE BUSINESS CASE FOR AI ETHICS | 58
"Responsible AI" and not freak people out because they think you're talking about morality. Letting them know "AI Ethics" is a cross-organizational need and focus, akin to Agile practices for marketing, which many associate with marketer types but which need legal, R&D, and stakeholders from all departments at some point to make a good scrum.

How would ethical AI improve your value prop to your customers? What are your top use cases for application of ethical AI?
How do you measure the success of your ethical AI initiative?

Here again Kathy Baxter at Salesforce has done groundbreaking work. In terms of measuring success, beyond traditional KPIs such as sales lift and increased positive sentiment in PR/social channels, I'd say increased trust is the biggest positive most are hoping for, while realizing trust has to be a two-way street.

How can companies try to incorporate AI ethics practices when they exist in a capitalistic system that is focused on profits?

This is where wellbeing metrics can and should be applied. IEEE has created its IEEE 7010-2020 standard to provide specific assistance in this regard. While the word "wellbeing" can be confusing, the logic is simply that what we measure is what we value, and what we value is what we count. So if the floor or base of all AI uses GDP or financial metrics in isolation, ultimately our success for AI uses these metrics as validation. But as countries like New Zealand and Wales are showing, triple bottom line metrics (people, planet and profit) are essential for all AI we build if we want these amazing technologies to be truly transparent, accountable, and fair. It's critical to note that focusing on profits is not wrong or unethical—it's simply that when the only metrics used to gauge success are financial and are prioritized first, CSR or ESG reporting (or legislation overall) may only deal with an AI product or service after it's created or in use.

Responsible AI design, or the creation of a "societal impact assessment" along the lines of 7010-2020, is not meant to dictate a "one indicator to rule them all" approach like the one that exists now with GDP. Rather, the idea is to have AI designers scenario-plan around how their product or service would be different if honoring people, planet, and profit in unison were recognized as success. This approach provides a new type of R&D that is also reflected in sustainability practices overall and legal structures like B-Corp models.

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

The growing importance of the Diversity, Equity and Inclusion (DEI) movement among professionals cannot be highlighted enough. Along with creating a workforce that is truly representative of the population that an AI product or service will serve, DEI practices mirror and complement Responsible AI design methodologies as well.

Connect with John @johnchavens

Please note: The responses in this interview reflect the views of John C. Havens and do not necessarily reflect the overall
views of IEEE.

"[T]he bigger question here is—what are the ultimate societal success metrics for the
AI we build?"

-John C. Havens, Executive Director, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

THE BUSINESS CASE FOR AI ETHICS | 59


LEARNING FROM THE COMMUNITY

Kay Firth-Butterfield

Head of AI and ML at the World Economic Forum

Tell us about your role:

I lead work on AI governance and policy which enables the operationalization of ethical AI principles through offices and government and business partnerships around the world. I will give some examples below, but it is a rigorous multi-stakeholder process.

What’s been your path to the field?

I used to be a human rights lawyer. In about 2012 I became very interested, and did two masters degrees (law and international relations), in the overlap of AI/ML and human rights and the planet. In 2014 I became the world's first Chief AI Ethics Officer. I was the founding vice-chair of the IEEE’s efforts on ethically aligned design of AIS and I participated in the Asilomar AI Principles. In 2017 I started the Forum’s work on AI and have been building the work and team since.

Tell me about your organization’s ambition for Ethical AI? Why is it a priority?

We have been working on helping governments, businesses and other organizations create ethical frameworks and governance for the design, development and use of AI since 2017 as part of the WEF Centre for the Fourth Industrial Revolution. They are based in San Francisco, Japan, India, Colombia, Brazil, Saudi, Israel, UAE, Turkey, Norway, South Africa, and Rwanda.

What are the challenges you and your organization face with operationalizing AI ethics?

An organization must be committed to the process. That said, many governments and businesses have come to our team wanting help in working out how to operationalize AI. We have co-created many tools on this which any organization can use, as we release them under Creative Commons. For example, we found that members of Boards had difficulties understanding AI and their oversight role. We co-created with a multi-stakeholder team a Toolkit for them which is web and mobile based.

What recommendations do you have for operationalizing AI ethics?

Join our work at the World Economic Forum so you can be part of creating the best tools for operationalizing AI ethics. Use the tools which we have already created. For example, with the governments of the UK, UAE and Bahrain [we co-created] tools for procuring ethical AI by government for and on behalf of citizens.

Biggest challenge: getting companies to understand that they will all be AI companies and so whether they are health companies or manufacturing companies or mining companies or… they need to use AI ethically and it DOES apply to them.

Continued on next page

THE BUSINESS CASE FOR AI ETHICS | 60
What is your business case for Ethical AI?

Currently many algorithms work poorly or erroneously because ethical issues such as bias have not been addressed, so businesses are throwing away R&D time and money by not addressing ethical AI. Also, if they get it wrong, it could lead to a loss of customer or employee confidence.

Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?

We have a project on how to use technology responsibly (link: https://www.weforum.org/projects/responsible-use-of-technology). It explores how to educate and train employees, how to make the right organizational changes to allow ethical AI to thrive, and how to think about product management with AI in use. I would suggest joining that work and learning more.

In your opinion, who’s doing interesting things in Ethical AI?

WEF, IEEE, GPAI, Salesforce, Microsoft, DefinedCrowd, Cantellus, Office of AI (UK), EC, Council of Europe, Parity, UNESCO.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

[Resistance] is not extensive once they [businesses] understand what we mean; we have over 85 businesses working with us on
operationalizing ethical AI at any one time.

Connect with Kay @KayFButterfield

"Biggest challenge: getting companies to understand that they will all be AI companies
and so whether they are health companies or manufacturing companies or mining companies
or … they need to use AI ethically and it DOES apply to them."

-Kay Firth-Butterfield, Head of AI and ML at the World Economic Forum

THE BUSINESS CASE FOR AI ETHICS | 61


LEARNING FROM THE COMMUNITY

Aparna Ashok

Technology Anthropologist, Service Designer, Applied AI Ethics Researcher

What’s been your path into the field?

My background is in social enterprise, service design, and design strategy. I got interested in AI Ethics while doing an MA in Digital Experience Design. My thesis was "Anticipatory Ethics for AI," which examined if and how ethical reflection can be incorporated into the product design cycle. Since then, I developed the "Ethical Principles for Humane Technology" framework that outlines the Human Rights considerations crucial for responsible product design. I founded Ethics Sprint, a technology consultancy and methodology that brings practical awareness about AI Ethics to technologists. I helped build Fluxus Landscape (2019), an open-source, interactive ecosystem map of over 500 ethics and governance initiatives worldwide—a project by Stanford University funded by The Stanford Institute for Human-Centered Artificial Intelligence (HAI). I was named on the 100 Brilliant Women in AI Ethics list for 2020 and am on the Advisory Group of Wellcome Trust’s Data Labs, a strategic initiative of UK’s largest charitable foundation.

Tell me about your organization’s ambition for Ethical AI? Why is it a priority?

[For] Ethics Sprint [technology consultancy], the aim is to sensitize owners and creators of human-facing automated decision making systems to the social implications of such systems and arm them with the practices to modify the systems proactively. While regulation is necessary, it is not able to pinpoint nuances that happen behind closed doors. The people building such systems are in a unique position of power in terms of knowledge about the nuances of the product and their ability to course-correct.

What are the challenges you face with operationalizing AI ethics?

Companies are more interested in how they can adopt AI for business success than in AI ethics, which is seen as complex and an inhibitor. I believe the mainstream thought process around it is slowly shifting, and a "stated interest" in AI Ethics is now seen as progressive thinking even in commercial companies. Few places are willing to take the risk of taking active steps towards it.

A few forward-minded companies (consultancies, analytics service providers, startups, etc.) are interested in knowing more about AI ethics and having a discussion on what that means for their business—not always ready to pay for that experience.

AI Ethics [is] seen as inhibiting creativity and innovation. I try to change this perception, and the idea that ethics is too complex and boring, through my workshop, but the perception remains unless they have seen otherwise. [Another issue is] not having a common language or an accepted business practice to even talk

Continued on next page

THE BUSINESS CASE FOR AI ETHICS | 62
about what ethics means in a business context.

What recommendations do you have for operationalizing AI ethics?

It starts with emphasizing technologists' humanity and helping them understand how automated systems and their challenges
affect individuals at scale. After that they need to be armed with tools and templates that fit in with their existing workplace
practices. Mindset change is required to perceive ethics not as a penalty-based activity, but one that leads to beneficial
business.

And for that I think that what you are doing—collecting business cases for AI ethics—is an extremely smart move.

What are the biggest challenges in general to operationalizing AI ethics?

Mindset—"it's seen as a 'good to have,'" "a problem for legal," "we'll do it when the rules enforce it."

Knowledge—Technologists either don't understand on an everyday level why it is important to practice ethics when building technology (partly because so many business practices associated with unicorn companies are seen as industry benchmarks), or they want to work differently but don't know HOW.

What is your business case for Ethical AI?

Ethical AI is a powerful tool that enables business to foresee potential opportunities leading to early market share (untapped
markets, unsolved problems, overlooked lucrative use cases) and potential risks leading to legal penalties.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

"It is a great ideology to aspire to, but hard to practice with the challenges of a tech business."

Connect with Aparna @aparnaashok_

"While regulation is necessary it is not able to pinpoint nuances that happen behind closed
doors. The people building such systems are in a unique position of power in terms of knowledge
about the nuances of the product and their ability to course-correct."

-Aparna Ashok, Technology Anthropologist, Service Designer, Applied AI Ethics Researcher

THE BUSINESS CASE FOR AI ETHICS | 63


LEARNING FROM THE COMMUNITY

Olivia Gambelin

Founder / Ethical Intelligence

Tell us about your role:

I am both the Founder of Ethical Intelligence as well as the company AI Ethicist, which basically means I split my time between handling the usual business turmoil of a startup founder and working on research and guidance in AI Ethics for our clients.

What’s been your path into the field?

My academic background has always been focused on morality & ethics in Philosophy, and since I grew up in the Silicon Valley, I grew up speaking fluent techie. But it wasn't until my time spent as a researcher in the EU Parliament on GDPR and data privacy that I discovered I could combine my knowledge base in ethics and AI. Ever since what can only be called a light bulb moment, I've been studying, researching, and working in the field.

What are the challenges you face with operationalizing AI ethics?

We work with startups and SMEs to operationalize AI Ethics in their own organizations. One of the common challenges we come across is that the majority of frameworks and policies in existence have been developed and tailored to large scale business, whereas they are not always applicable or even feasible for the smaller scale business already strapped for resources.

What recommendations do you have for operationalizing AI ethics?

Start by understanding where the business is already at instead of coming in and forcing a new framework. Business processes/org/culture take a long time to change. It's much easier to assess what the company is already working with and develop from there, rather than starting from scratch.

What are the biggest challenges in general to operationalizing AI ethics?

The lack of flexibility of the protocol approach, and the vagueness of the principles approach. By trying to operationalize AI Ethics through a hard-set protocol, much like a checklist, the process loses the flexibility needed to adapt to cultural differences, developments in the company's sector, and the pace of innovation. However, if you then go to the other extreme through the principles approach, i.e. adopting an ethics charter of values to "uphold" in the company, then you achieve the needed flexibility at the cost of clear direction. By only having a simple charter or policy to follow, it becomes very vague how to do so.

What is your business case for Ethical AI?

Equipping design and development teams with the skills to identify and mitigate ethical challenges builds confidence that the final product is truly being created for good. We've noticed that often technologists have every

Continued on next page

THE BUSINESS CASE FOR AI ETHICS | 64
intention of creating ethically good technology, but are unsure how to even begin the process, which then causes them to either pass the responsibility or doubt the ethics of their own work. Furthermore, ethics also increases innovation, as it forces teams out of complacency. Without ethical guidelines, development teams only have to meet technical requirements, but once ethical guidelines are in place, development teams must be even more innovative in how they fulfill technical AND ethical requirements.

Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?

We are about to launch a cyclical training resource that builds practical knowledge in AI Ethics for technologists and founders
that targets this gap (link: https://www.ethicalintelligence.co/equation).

In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection,
curation, etc. in your org ecosystem?

We are European-based, so GDPR is the main player. But also when we work with clients, we have to analyze the definition of
privacy in whichever culture the client is situated in as well as the client's user base.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

The main resistance one encounters is the claim that ethics at best doesn't have any impact on profit and at worst slows down
production to the point that it costs the company. When talking specifically with tech startups, the most common phrase we
hear is "ethics hinders innovation."

Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has
fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when
they exist in a capitalistic system that is focused on profits?

Triple bottom line thinking—it's still ok to pursue profit, but this must be balanced with equal weight and consideration to
people and planet.

Connect with Olivia @oliviagambelin

"Equipping design and development teams with the skills to identify and mitigate ethical
challenges builds confidence that the final product is truly being created for good."

-Olivia Gambelin, Founder of Ethical Intelligence

THE BUSINESS CASE FOR AI ETHICS | 65


LEARNING FROM THE COMMUNITY

Steven Tiell

Sr. Principal, Responsible Innovation + Data Ethics, Accenture

Tell us about your role:

I lead Responsible Innovation at Accenture, globally. This fast-growing offering was born out of insights from five+ years of stakeholder-intense research in data ethics. This pioneering work helps clients manage risks brought on by digital transformations and widespread use of artificial intelligence. I’m fortunate to advise some of the world's largest organizations in the high-tech, media, telecom, financial services, public safety, public policy, government, and defense sectors. My role is focused on thought leadership and helping Accenture clients lead in this area, versus follow—to this end, we’ve published more than a dozen papers that have moved the field forward and were the first publications Accenture has done under Creative Commons licenses. I often speak with clients and audiences on topics such as governance, trust, deepfakes, data ethics and tech-driven industry transitions. I also serve on the steering committee for Accenture’s Human Insights Lab, advise the World Economic Forum on content safety, and help to facilitate data ethics conversations at the Atlantic Council’s GeoTech Center.

Tell us how you got into Ethical AI. What’s been your path into the field?

I started investigating data ethics in 2013 while leading research for Accenture’s Technology Vision, the firm’s premier, annual thought leadership publication that looks at how technology will impact business, government, and civil society over the next three to five years. From the foresight exercises we undertook, 75 percent of the stories that emerged begged the question, “just because we can, does it mean we should?” I wasn’t sure where to start, so I sought luminaries in a diverse set of disciplines to help. It took nearly a year to assemble a team of about 20 external stakeholders to investigate a series of research questions—and after years of donating nights and weekends to this collaborative effort, we published a handful of reports at the link: accenture.com/dataethics. If you had told me then that I'd still be focused on data ethics more than six years later as my full-time role, collaborating with multiple universities, engaging with dozens of clients, and the work would be a catalyst for culture change, I would’ve been skeptical. And now, I can think of no higher calling, no work I'd rather be doing, and no greater positive impact I could bring to the world. I love this [emerging] profession!

Tell us about your organization’s ambition for Ethical AI? Why is it a priority?

In 2013, Accenture made the bold claim that "every business is a digital business." Today, every industry realizes that digital technologies are critical to their ability to compete and serve stakeholders. AI is a priority for organizations because it’s an increasingly critical component for digital strategies across industries. Deploying AI at scale, however, introduces a new set of risks, some of

Continued on next page

THE BUSINESS CASE FOR AI ETHICS | 66
them at an existential level. In that context, Ethical AI (and other names for it) is much more than a priority, it's a necessity.

What are the challenges that you and your organization face with operationalizing AI ethics?

Accenture has a half-million employees around the world, and our business is primarily focused on helping other businesses run,
so many of our challenges in this space are less transferable. The one consistent thread is that an organization’s culture can be
an enabler or an impediment. Accenture’s culture is rich and diverse and open to integrating new practices, especially when
those practices encourage and celebrate diversity and inclusion. In our client work, operationalizing AI ethics often means
focusing on an organization’s values and respecting what they care about and want to protect and promote through tweaks to
existing processes and practices. Sometimes this can be as simple as describing how organizational values show up in products
and services.

To implement the values, we look to principles which describe what is required (ethically and legally) to live the values. It is then
the job of governance to assess whether the principles are satisfied in a particular case. Ethics is the work we perform to satisfy
the values, in accordance with the principles, and in support of governance. We apply these operationalizing techniques with
numerous interventions and tools, primarily with design, engineering, and decision-making processes. It often means new
training programs, changing variable compensation incentives, and adding governance. In many cases, this means culture
change needs to be part of a high-quality AI ethics program.

What recommendations do you have for operationalizing AI ethics?

Oh my goodness, there are so many. I think key among them is to find a C-level champion. To operationalize AI ethics
successfully, the C-suite must be involved, be an advocate, and help to break down barriers throughout the organization.

What are the biggest challenges in general to operationalizing AI ethics?

AI ethics is an emerging field. Even those tasked with owning AI ethics probably don’t have a background in both computer and
social sciences. Therefore, many people in these roles don't know what they don't know, and getting up to speed can mean
considerable friction to overcome. Hiring a team presents the same challenges. These teams then perform tasks that are likely new to
the organization and may be met with resistance from a variety of stakeholders. Without clear support from the C-suite, mid-
level leaders of these programs can struggle to gain a foothold and relevance, let alone be able to execute on the organizational
change that will eventually be required to have a robust program in place.

What is your business case for Ethical AI?

It's simple and comes down to sustainability—can your organization continue doing what it's doing in perpetuity? In many cases,
we can see that businesses that exist solely to collect data and sell it to others probably have a limited time horizon on that
being a viable business model (largely due to increasing regulations around privacy). But any organization that uses data to
make business decisions or influence the lives of others must pay attention to the sustainability of the way they collect, manage,
use, and share data. A simple mistake or oversight at any point along the data supply chain can have outsized impacts and
represent risks that could be materially detrimental to a business. Robust AI ethics helps organizations avoid unnecessary risk.

Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?

Every organization is different. Some start with training programs for specific groups of workers. Some start with governance.
Others start with policy changes, or engineering and agile process tweaks. The ones that do it well take on a variety of
improvements in parallel, start small, iterate often, and gradually roll out to more parts of the organization.

In your opinion, who’s doing interesting things in Ethical AI?

There are so many. I learn of new ones by the week. My regular sources for news, information, and leadership include Data & Society, AI Now Institute, the Markkula Center for Applied Ethics at Santa Clara University, a couple of groups at Stanford, Ron Sandler and his Ethics Institute at Northeastern University, David Danks at Carnegie Mellon University, and the Atlantic Council's GeoTech Center (to which we spun out our Data Ethics Salon Series). ForHumanity (ForHumanity.center) is doing wonderful work on building audits for AI systems. The CELA [Corporate, External, & Legal Affairs] and Responsible
doing wonderful work on building audits for AI systems. The CELA [Corporate, External, & Legal Affairs] and Responsible
Innovation teams at Microsoft and Responsible Innovation team at Google also provide good resources. The U.S. Department of
Defense is an emerging leader, consolidating efforts at the Joint AI Center (JAIC). I also like the way Europeans make laws, and
a few of their privacy organizations are truly inspired: the ICO [Information Commissioner's Office] in Britain and the European
Commission have very good resources, as does Denmark's national strategy for AI. Are You A Robot’s public Slack is another
fast-growing community with a good diversity of experts and newcomers. And of course, All Tech Is Human.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

In my experience, leaders who need to be convinced to do something will always be a point of friction, and convincing them
seldom leads to generative uptake of Responsible Innovation. Often, their thinking is that "it's just another hurdle that will slow
my team down, hurt revenue, and doesn't contribute to shareholder value." In fact, if the C-suite at an organization rejects the
Business Roundtable's call to focus on stakeholder value instead of shareholder value, it will be difficult, if not impossible, to
make the case for this work at that organization.

How would ethical AI improve your value prop to your customers? What are your top use cases for application of ethical AI?
How do you measure the success of your ethical AI initiative?

In many cases, the value prop is about trust—a focus on Responsible Innovation helps to establish, build, maintain, and/or repair
trust with stakeholders and builds gravity toward a brand. Often, simply highlighting data-centric risks that could be
detrimental to the brand is enough to gain a second conversation. Success is measured in vastly different ways across industries
and organizations and can include the absence of negative outcomes.

How can companies try to incorporate these practices (Responsible AI) when they exist in a capitalistic system that is
focused on profits?

Time. In an increasingly skeptical world, consumers (and businesses) are seeking signals of trust. Trust builds gravitational pull
toward a brand and makes people open to new things (think about the blockbuster lines any time Apple releases a new
product). It is these brands that will thrive when the "kids" using YouTube as their primary search engine today (because it's
only real if they can see it happen) become the engine of the economy tomorrow.

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

One of the reasons I’m at Accenture is because of how much Accenture values diversity. Just as those who practice portfolio
theory in financial markets experience outsized returns more consistently than those who don’t, the same is true of worker
diversity within organizations. Accenture “gets it” and acts accordingly. More broadly, these initiatives, at any organization, take
time. A decision to hire a diverse workforce today may not yield a more diverse workforce for years to come. I like to challenge
organizations to name, define, and rank the normative values of justice that drive their diversity efforts. In doing so, some
organizations discover that what they need to optimize for is lived experiences, which might be different from skin tone,
gender, and sexual identity. Perhaps this varies by geography within the same organization. I like to see hiring practices that
hold off on closing a req until there’s a sufficiently diverse pool of applicants, and then scrub name, picture, and “legacy” (e.g.,
university name) from resumes before sharing them with hiring managers.

While hiring is part of the long-term solution, it doesn’t directly solve the product development issues in front of us today. One
of the papers we published in 2016 had to do with questions to ask along the data supply chain—we saw a similar framework
called EthicalOS a couple years later. While these frameworks can help, without an applied ethicist leading the conversation,
they can often turn into circular discussions. Evolving from this, we worked with a handful of organizations to develop an
“ethical spectrums” framework that empowers decision-makers to have "directed agency" by establishing a set of spectrums
from which to evaluate business, product, and engineering decisions—are you closer to the status quo end of a spectrum or the
ethically higher bar end? We also worked with academia to build an “ethics triage” approach to understand where digital risks
were entering systems and to act at those points. Again, we focus on maximizing agency and providing avenues for recourse.
These and other product development practices can help to prevent products and services from being developed in a way that’s
problematic in the first place and offer recourse when they are.
Connect with Steven @stiell
LEARNING FROM THE COMMUNITY

Pavani Reddy
Managing Director | Author of Ethical Product Development

Tell us about your role:

I am a mid-career thinker-doer and a self-appointed “ethics owner” in my full-time role at EAB. I serve as a product manager and user experience designer of data & analytics solutions for higher education. My personal mission behind my work at EAB is to help higher education institutions operate more effectively to produce more upward mobility for people. I approach my work with an interdisciplinary mindset, having trained in economics, law, business, and the lessons that come from nearly twenty years working with teams to produce new technologies responsibly.

In the last several years, I personally noticed a gap in practical guidance for technologists on how to improve the ethical trajectory of the products they produce. Outside of work, I researched emerging best practices from many companies and thought leaders. My goal was to write down for myself and my peers (within and beyond my current environment) a collection of practical techniques that could integrate ethical decision-making into the product development process. Through Sense & Respond Press, I published my findings in a playbook: Ethical Product Development: Practical Techniques to Apply Across the Product Development Lifecycle (2020). Since then, I have continued to consult in this field of ethical tech—and ethical AI more specifically. For example, I am working with a think tank on defining common AI principles by sector—including AI in education and workforce development. Our project objective is to help ethics owners and policymakers define these well-known general principles more granularly by sector and use case.

What are the challenges you face with operationalizing AI ethics?

As a technology worker very eager to produce ethical AI, I appreciate the key themes that we have aligned on in the sector. Researchers at the Berkman Klein Center for Internet & Society at Harvard evaluated 36 documents about AI principles written by a diverse array of actors from 2016 through 2019. They arrived at eight key themes (see below) that continue to hold steady across 2020. The challenge I face as a practitioner is to apply the component principles inside these themes (there are 47 catalogued in the Harvard study) in a consistent way to up-level our approach.

1. Privacy
2. Accountability
3. Safety & Security
4. Transparency & Explainability
5. Fairness & Non-discrimination
6. Human Control of Technology
7. Professional Responsibility
8. Promotion of Human Values

What recommendations do you have for operationalizing AI ethics?

The advice that I have for myself is to pursue the challenging work of codifying what we mean within the sector, by use case, acknowledging stakeholder groups as well as the operative norms and laws.

In your opinion, who’s doing interesting things in Ethical AI?

Data & Society, as they appear to manifest their interdisciplinary and inclusive ethos in a courageous and experimental way;
similarly, Renée Cummings: every time I hear her, I appreciate the way she approaches her audience for the same reason!
There are numerous emerging leaders in the area; it is very exciting.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

The ROI on this is more open-ended than for other investments, so many businesses feel that they cannot afford deep, formal,
or dedicated inquiry into ethical AI. I have noticed that well-resourced, large players like Google are situating inquiry into this
area by adding personnel to their "public policy" groups to influence policy. As a secondary part of these roles, these
personnel would work with internal product teams. I am curious what it would be like to shift the emphasis: investing in
Ethical AI by situating the role within the product team, so that it permeates their methodology and approach and is
not an afterthought. My hypothesis is that more practical regulatory policy can be developed from this vantage point.

How can companies try to incorporate AI ethics practices when they exist in a capitalistic system that is focused on profits?

There are three motivators for adherence to principles of ethical AI: (1) completely voluntary adherence, especially if the
principles are self-derived and expressed to the customer/user community; (2) market pressure from the customer/user
community that pushes toward consistent adherence to priority principles; and (3) government regulation with a clear
enforcement regime. Since the U.S. system does not have strong regulation, companies can serve their customers by creating
principles to uphold, and by educating customers and the broader public about not only the benefits but also the possible
harms of using AI technologies, thereby creating a healthy and transparent public dialogue that itself generates pressure to
adhere to priority principles.

Connect with Pavani at ethicalproductdevelopment@gmail.com

"As a technology worker very eager to produce ethical AI, I appreciate the key themes
that we have aligned on in the sector."

-Pavani Reddy, Managing Director, EAB, and author of Ethical Product Development


LEARNING FROM THE COMMUNITY

Ashley Casovan
Executive Director, AI Global

Tell us about your role:

I work closely with our members, partners, and the AI community to develop tangible tools that support the responsible design, implementation, and use of AI. In an effort to protect the public, we are building tools like an interactive map of AI use cases, and assessment tools to help better design AI systems.

What’s been your path into the field?

Having always had a strong concern for the public good, human rights, and justice, I studied political science and economics. I was initially interested in global issues; however, I soon realized that there were many injustices closer to home. Pursuing these interests, prior to working with AI Global I was a public servant working for both municipal and federal governments. This experience provided me with a strong understanding of the importance of having an institution that protects and supports the public good. Here it was inspiring and informative to work and learn from people who have expertise in a variety of domains like health, education, economics, and social services. Having always worked in digital policy, specifically related to enterprise data and open-source architecture, tackling responsible AI policy was a natural progression.

Tell me about your organization’s ambition for Ethical AI. Why is it a priority?

AI Global is a non-profit building tangible governance tools to address growing concerns about AI. Our mission is to catalyze the practical and responsible design, development, and use of AI. Our tools have been among the first to demonstrate how to turn responsible AI principles into action. Bringing extensive experience in responsible AI policy and the development of AI systems for industry, AI Global is uniquely positioned to partner with organizations across public and private sectors to guide and inform responsible AI governance around the world. In collaboration with the World Economic Forum (WEF) and the Schwartz Reisman Institute (SRI), we are designing and developing a certification mark for the oversight of AI systems. This work will rely on the collective research and standards that are being developed in this space.

What are the challenges you are facing with operationalizing AI ethics?

With a quickly growing responsible AI landscape, it's often difficult to know what to read, follow, and educate yourself on. Compiling and finding ways to implement these various practices is complicated even for those who are thinking about these issues on a daily basis. This is one of the key drivers of creating an independent and authoritative certification mark. Providing a single authoritative framework to support practitioners is necessary. We recognize that this won't be an easy feat. Deliberate and coordinated efforts to organize this information will be incredibly important. Our partnership with WEF and SRI is the first step in this coordination.

What recommendations do you have for operationalizing AI ethics?

The past few years have been marked by significant research on all aspects of responsible and ethical AI: everything from
identifying principles, which act as collective targets for us to strive towards, to measurable techniques for testing bias and fairness
in applications. While this body of research continues to grow, it requires coordination and adoption in order to be tested and
ultimately realized.

What are the biggest challenges in general to operationalizing AI ethics?

In addition to what I've already mentioned, greater awareness, including education and training, is required. This is needed not
only at a general level for all members of the public; domain-specific training is also required. AI systems are
becoming an increasingly important part of every industry. Knowing how these systems will impact these industries is incredibly
important for the practitioners implicated.

What is your business case for Ethical AI?

We believe that responsible and ethical AI is good for business. In thinking through the various challenges that AI can pose,
often unintentionally, it's important that there are oversight practices in place that both mitigate harm to
people and mitigate risk to those building and implementing these systems. Oversight mechanisms such as establishing
standards and certifications that AI developers can follow will help.

In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection,
curation, etc. in your org ecosystem?

We have created a unified framework of responsible AI principles based on the most cited frameworks including the Montreal
Declaration, IEEE's Ethically Aligned Design, the Asilomar AI Principles, etc. Our framework includes: Accountability, Data
Quality, Rights, Bias and Fairness, Explainability and Interpretability, and Robustness.

In your opinion, who’s doing interesting things in Ethical AI?

So many. In addition to our collaborators at the Schwartz Reisman Institute and the World Economic Forum, organizations like
the Data Nutrition Project, Equal AI, Algora Labs, CIFAR, [and the] Ada Lovelace Institute are doing great things. Companies like
Cognitive Scale, AltaML, Arthur.AI, [and] BEACON are conducting important research and directly working with companies to
refine it.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

We have heard concerns that there will be substantial costs to implementing responsible and ethical practices. This is a key
concern, for sure. While there could be upfront costs to changing governance practices, in the end it will not only be the right
thing to do to protect people; it could also prevent financial and reputational harm.

How can companies try to incorporate AI ethics practices when they exist in a capitalistic system that is focused on profits?

Trade-offs will be important; however, there is no reason why these systems can't be ethical and also profitable.

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

Ensuring that these voices are at the table when designing and building AI systems is incredibly important. It will ensure that
research and techniques are viewed through necessary lenses.

Connect with Ashley @AshleyCasovan



LEARNING FROM THE COMMUNITY

Will Griffin
Chief Ethics Officer at Hypergiant

Tell us about your role:

[I] vet AI use cases using the Top of Mind Ethics (TOME) Framework.

What’s been your path into the field?

[My] background is as an entrepreneur at the intersection of media and tech.

Tell me about your organization’s ambition for Ethical AI? Why is it a priority?

The goal is to embed ethical reasoning into AI design and development workflows.

What are the challenges you and your organization face with operationalizing AI ethics?

Ensuring designers and developers understand the importance of ethical reasoning in the AI development and design workflow.

What recommendations do you have for operationalizing AI ethics?

Get CEO and Board level buy-in. Otherwise it is a steep uphill climb.

What are the challenges in general with operationalizing AI ethics?

Getting every designer and developer to embed the ethical reasoning process into their workflows.

What is your business case for Ethical AI?

Ethical reasoning is another tool that should unlock creativity and a broader range of solutions to tech problems. Failure could lead to negative impacts for clients and society and destroy relationships and company reputation.

Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have the knowledge and skills to design and build ethical AI solutions?

Training every new employee on our ethical reasoning framework during the on-boarding process.

In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection, curation, etc. in your org ecosystem?

1. Goodwill: is there positive intent for the use case?
2. Categorical Imperative: if every company in our industry deployed AI in this way, what would the world look like?
3. Law of Humanity: are people being used as a means to an end, or are people/society the primary beneficiaries of this use case?

What resistance is one likely to encounter when making the case for ethical AI to the business community?

CEOs lack vision and appreciation of the benefits of economic value creation.
How would ethical AI improve your value prop to your customers? What are your top use cases for application of ethical AI?
How do you measure the success of your ethical AI initiative?

Ethical AI guided by ethical reasoning should increase, not decrease, the potential solutions to any given technical/business
problem. It can be measured by the number of potential solutions created during the design and development process. More
potential solutions (and beneficiaries) means the framework was used properly.

Any organization trying to incorporate responsible tech practices exists in a broader economics ecosystem that has
fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when
they exist in a capitalistic system that is focused on profits?

Companies must think about a broader array of stakeholders who will be impacted by the AI solutions they design and develop.
The more stakeholders who benefit, the more robust the solution will be.

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

Once you envision all of society as a potential stakeholder in any given use case, it should be easy to articulate the impact on
marginalized communities. If you cannot delineate the impact on a given community, that means you have not considered them
in the design and development process. Start over and use ethical reasoning as your guide.

Connect with Will @willgriffin1of1

"Ethical reasoning is another tool that should unlock the creativity and broader range of
solutions to tech problems."

-Will Griffin, Chief Ethics Officer at Hypergiant



LEARNING FROM THE COMMUNITY

Liz O'Sullivan
VP of Responsible AI at Arthur, and Technology Director of STOP (The Surveillance Technology Oversight Project)

Tell us about your role:

At Arthur, I help our clients operationalize AI in ways that are mindful of the broader societal implications of the work they’d like to do. Our platform helps monitor AI for performance and discrimination, to help ensure it’s working in the ways intended for use. To that end, my role is to advise partners on the rapidly evolving body of research in the space, both from a mathematical fairness framework and from a humanities, social implications lens. I also work with the non-profit sector, focusing mainly on the policy implications of AI and its impact on underrepresented minorities. The majority of my time in this sector has been spent working on local New York policy with the Surveillance Technology Oversight Project (STOP), where we advocate for civil liberties and limits to policing power.

What’s been your path to the field?

My path to the Ethical AI space has been fairly untraditional, beginning at an NLP company during a time when bias and explainability were nowhere near the forefront of our work; we mainly sought to make the technology usable, in a form that sufficiently “worked." It wasn’t until I took a role at a computer vision company as head of annotations that I became acutely aware of the ways that labeling and cultural biases can show up and become encoded into enterprise systems. Imagery is such a visual medium that bias becomes something tangible, something you can explore on your own. Especially combined with the global ecosystem of labelers around the world, it became very clear to me quickly how cultural differences can yield harmful results.

A definition of something seemingly simple can very easily be interpreted in different ways, leading to a confused and at times even harmful dataset that, when transformed into a model, will naturally inherit this harm.

What are the challenges you face with operationalizing AI ethics?

In many cases, Arthur clients are simply beginning their journey into operationalizing a responsible framework around AI, and it’s been incredibly fulfilling to work with them on the natural challenges that all enterprises face today. There is a burgeoning market of tools available to enterprises seeking to make their AI more transparent and fair, including robust developer toolkits and open source contributions. However, many organizations have trouble bringing silos of different work together to form a holistic view of what’s happening across the organization. When you couple that with a need for extensive infrastructure that can support intensely computation-heavy tasks, the very act of centralizing policies and compliance can become a huge engineering task that can be costly and difficult to manage. That’s why it’s so exciting to see more and more attempts to alleviate this pain through new and exciting startups seeking to make governance more user-friendly and applicable at all levels of the organization.

What recommendations do you have for operationalizing AI ethics?

Operationalizing AI ethics first begins with a detailed plan that must originate from key stakeholders at the C-level, workers
including data science and annotations professionals, and input from the communities you serve. Careful, critical thinking is a
vital part of the planning process that no toolkit can ever replace. I highly recommend that organizations hire from the
humanities to provide at least one role that’s fully focused on the ethical outputs of the company, lest it become “side work” that
falls to the bottom of the list. Every part of the AI pipeline deserves scrutiny, from data collection, to annotations, to
algorithm selection and documentation. But the work of “checking” for ethical violations is ongoing, and policies must be
set to ensure that algorithms continue to fit their intended parameters when integrated into the real world of production
environments.

What is your business case for Ethical AI?

The business case for responsible AI has never been clearer than it is today, following enhanced legal scrutiny from a number of
agencies and lawmakers who have made it clear that discrimination is illegal, even when accidental. There are multiple pending
court cases that will seek to prosecute companies for the outputs of their algorithms, most notably with Apple Card and United
Healthcare’s Optum. But even without the threat of costly legal action and, ultimately, fines, the damage to a company’s
reputation can not be understated, as recent events including the failure of Twitter’s image cropper to recognize black faces.
The last thing brands want to do is to alienate their users by furthering the disparities and inequities that plague our society
today. Moreover, one small part of operationalizing responsible AI is to simply ensure that your models behave the way you
think they will on real world data, on an ongoing basis. By monitoring for anomalies and concept drift, companies become better
able to catch model issues before they become big problems, allowing them to be re-tuned and re-fit for better accuracy. In the
financial industry especially, higher accuracy can mean better efficiencies, more profit, and lower cost.
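The ongoing monitoring Liz describes, checking that a model's live inputs still look like its training data, can be sketched with a simple drift statistic. The Population Stability Index below is one common choice; the helper, thresholds, and synthetic data are illustrative assumptions, not the method used at Arthur.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between training data and live data.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(1 for e in edges if x > e)] += 1  # bin index for x
        # small epsilon keeps the log and division defined for empty bins
        return [(c + 1e-6) / (len(xs) + 1e-6 * bins) for c in counts]

    p, q = hist(expected), hist(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
train = [random.gauss(0, 1) for _ in range(5000)]       # training distribution
live_ok = [random.gauss(0, 1) for _ in range(5000)]     # live data, no drift
live_drift = [random.gauss(0.8, 1) for _ in range(5000)] # live data, shifted mean

print(round(psi(train, live_ok), 3))     # near 0: stable
print(round(psi(train, live_drift), 3))  # well above 0.25: drift alert
```

Run on a schedule against each model feature, a check like this flags the "model issues" before they become big problems, triggering the re-tuning and re-fitting the interview mentions.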

In your opinion, who’s doing interesting things in Ethical AI?

I’m particularly interested in the people working on AI’s criminal justice implications, especially when these intersect with race.
To that end, I’m a big fan of the work of the ACLU, Kristian Lum, Tawana Petty, Julia Angwin’s work on the COMPAS algorithm
(along with her work in general), Ruha Benjamin, The Algorithmic Justice League of Joy Buolamwini, Timnit Gebru, and Deb Raji
(of course), and Mutale Nkonde for her work at AI For the People.

Connect with Liz @lizjosullivan

"Operationalizing AI ethics first begins with a detailed plan that must originate from
key stakeholders at the C-level, workers including data science and annotations professionals,
and input from the communities you serve."

-Liz O'Sullivan, VP of Responsible AI at Arthur, and Technology Director of STOP (The Surveillance Technology Oversight Project)



LEARNING FROM THE COMMUNITY

Shalini Kantayya
Director of Coded Bias

Tell us about your film, Coded Bias:

Modern society sits at the intersection of two crucial questions: What does it mean when artificial intelligence increasingly governs our liberties? And what are the consequences for the people AI is biased against? When MIT Media Lab researcher Joy Buolamwini discovers that most facial-recognition software does not accurately identify darker-skinned faces and the faces of women, she delves into an investigation of widespread bias in algorithms. As it turns out, artificial intelligence is not neutral, and women are leading the charge to ensure our civil rights are protected.

What sparked your own interest in the implications of AI and why do you find this issue so important?

The first inspiration or spark to tell a story is always a compelling character. I stumbled upon the work of Joy Buolamwini through a TED talk and read Cathy O'Neil's book, Weapons of Math Destruction, and just fell down a rabbit hole of the dark side of artificial intelligence. I couldn't talk to people for two years at parties because I was so worried people would ask me about what I was working on. I think the working title at the time was Racist Robots, and it was really hard to explain. As someone who doesn't have advanced degrees in data science, I had this fear of improperly explaining ideas like algorithms or artificial intelligence or machine learning. But I think what enabled me to get over my fear was just asking a lot of questions. And I came to see that artificial intelligence is going to transform every sector of society and touch every civil right we enjoy.

Automated decision-making has the unprecedented power to disseminate bias at scale. As humans increasingly outsource our autonomy to machines, algorithms are already being deployed to decide what information we see, who gets hired, who gets health care, and who gets undue police scrutiny. As artificial intelligence moves out of the data science labs and into the real world, bias has the potential to be deployed at scale. There’s a real danger that without proper training on data evaluation and spotting the potential for bias in data, vulnerable groups in society could be harmed or have their rights impinged. AI also has intersectional implications on criminal and racial justice, immigration, healthcare, gender equity, and current social movements.

There is a scene in Coded Bias where the researchers Joy Buolamwini, Deb Raji, and Timnit Gebru discuss their difficulty as women of color in receiving the same level of respect and recognition for their work. In your opinion, what's the relationship between diversity in tech and making AI more ethical?

When I first started making the film, I actually didn't plan to make the film so predominantly led by women. But my research just kept leading me back to all these incredibly brilliant and badass women. What I came to learn is that the people who are leading, the data scientists and mathematicians and journalists and activists who are leading the fight for ethics and more humane uses of artificial intelligence, are actually women, people of color, and LGBTQ. And so what I came to see is that there was this canon inside of tech that was not being heard. The role of women and people of color as a force for change within Silicon Valley has been long underestimated.

So much of the film is about the need for collective action to improve how we develop and deploy artificial intelligence
technologies. What change would you like to see moving forward?

I think we are in a moment in history where we're all being asked to lead from a deeper place in our humanity. I think a lot of
times we talk about tech as if it's like magic or God. And when you pull back the curtain, what I realized is that technology is just
a reflection of ourselves. Because these technologies impact all of us, we all should have some voice in how they get deployed.
I’d like to see legislation that protects data rights as fundamental to civil and human rights. We should move toward technology that respects the inherent value of every person.

Connect with Shalini at @shalinikantayya and learn more about Coded Bias at CodedBias.com

"There’s a real danger that without proper training on data evaluation and spotting the
potential for bias in data, vulnerable groups in society could be harmed or have their rights
impinged. AI also has intersectional implications on criminal and racial justice, immigration,
healthcare, gender equity, and current social movements."

-Shalini Kantayya, Director of Coded Bias



LEARNING FROM THE COMMUNITY

Caryn Lusinchi
CEO & Founder, Bias in AI

Tell us about your role:

I’m the CEO & Founder of biasinai.com, the smarter way to source AI. We’re building an AI directory across 40+ industries that aggregates companies and consultants, who help humans and machines work better together, to reduce bias in AI systems (specifically algorithmic, gender, cultural, racial, and data-driven biases).

What’s been your path into the field?

I started off as a corporate securities fraud investigator before veering into global go-to-market strategy and marketing roles at startups, interactive agencies, consulting firms, Google, and WhatsApp. In the past three decades, there’s been one career constant: I’ve witnessed too many good people doing bad things in the pursuit of short-term “business” profits and the magical belief that every social problem has a technological solution. I got into ethical AI out of frustration with working in tech cultures where there’s a diminished sense of personal responsibility for collective actions that are morally wrong.

Tell us about your organization’s ambition for Ethical AI. Why is it a priority?

Ethical AI is not a technology issue as much as a human one. It impacts everyone alive today, the generations who will outlive us, and the ones that will survive them. We’re living in an "algocracy" (a concept originally coined by A. Aneesh), meaning our lives are governed by computer algorithms. Algorithms determine the ads we see, the TV shows we watch, the photos we swipe left or right, the voice accents of our virtual assistants, whether we’re hired or fired, the wages we earn, loan eligibility, healthcare decisions, court verdicts, and more. Is it fair that thousands of automated black-box systems (that a select few data scientists code) govern decision making around our personal and professional lives?

Our organization’s ambition is 1) to increase AI bias awareness, 2) to empower businesses to ask audacious big-picture ethical questions when building, buying, or outsourcing AI projects, and 3) to offer a diversity of services and solutions for businesses to design and build AI systems that are equitable, explainable, socially responsible, and respect data privacy.

What are the challenges you face with operationalizing AI ethics?

Most enterprises use the RACI model (responsible, accountable, consulted, informed) to define project roles and responsibilities. There’s no shortage of well-intentioned tech employees who wish to consult on or stay informed. However, the biggest challenge with operationalizing AI ethics is within large cross-functional group project settings, where there’s considerable ambiguity around which individual or department is held responsible or takes accountability when there’s a negative societal impact, post launch. The traditional PR tactic of issuing a delay --> deflect --> dulcify --> deal series of crisis communications isn’t the right answer; it’s a knee-jerk gut reaction. Proactively solve the AI ethics ownership question first.
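The ownership ambiguity Lusinchi describes can be made concrete by treating the RACI matrix as data and checking that every project stage has exactly one Accountable owner. The stages, roles, and assignments below are hypothetical illustrations, not a prescribed standard.

```python
# A minimal, illustrative RACI matrix for an AI project lifecycle.
# R = responsible, A = accountable, C = consulted, I = informed.
raci = {
    "data collection":   {"data science": "R", "legal": "C", "ethics lead": "A", "pr": "I"},
    "model training":    {"data science": "A", "legal": "I", "ethics lead": "C", "pr": "I"},
    "launch review":     {"data science": "C", "legal": "R", "ethics lead": "A", "pr": "I"},
    "incident response": {"data science": "R", "legal": "C", "ethics lead": "A", "pr": "R"},
}

def accountability_gaps(matrix):
    """Return the stages that lack exactly one Accountable owner --
    the post-launch ambiguity the interview warns about."""
    return [stage for stage, roles in matrix.items()
            if sum(1 for r in roles.values() if r == "A") != 1]

print(accountability_gaps(raci))  # [] -> every stage has a single named owner
```

An empty result means the "who takes accountability" question has been answered before launch rather than after an incident.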

What recommendations do you have for operationalizing AI ethics?

Typically, operationalizing AI ethics defaults to data science teams where numerous AI projects are in R&D, beta, or pilot phase.
Or it may start with the C-Suite/BOD/HR institutionalizing AI ethics in a mission/vision/values statement that the company can peacock-display. Beyond the obvious bottom-up and top-down participation, there’s plenty of opportunity for "middle sandwich-layer"
departments to start operationalizing AI ethics. User researchers and experience leads can set standards for D&I
representation and marginal group participation in all unmoderated/moderated studies. Project managers can build user stories
to ensure values are weighed in the design process and user experiences reflect diverse contextual use cases. QA and trust and
safety teams can create AI bias bounties. Everyone can participate; there's room at the table for more than computer science
and philosophy PhDs.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

1. Long-term benefits of ethical AI are de-prioritized for the short-term pursuit of profit to satisfy shareholders' ROI demands; as someone once said, "ethics never makes you money but can save you a lot of money."

2. The overconfidence bias and technological determinism of relatively young, inexperienced technology teams lead them to dismiss the need for ethical AI; this stems from the stubborn tech-culture conviction that anything and everything can be solved internally, so there’s no need to consider engaging an independent, impartial third party (auditor or tool) to assess the true societal impact of a team’s AI/ML project inputs and outputs.

Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that fundamentally shapes how businesses and societies operate. How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?

Learn lessons and case-study successes from Certified B Corporations, a new kind of business that balances purpose and profit.

The board and/or C-Suite can develop KEIs (key ethics indicators); these can co-exist alongside quarterly and annual KPIs. Reward employees (with performance, bonus, and/or equity awards) for identifying and/or cleansing dirty datasets (intentional or unintentional), for hacking AI systems to expose vulnerabilities, or for graveyard-ing projects prior to launch that would result in unfair outcomes.

The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?

AI racial bias will always be an issue as long as diversity, equity, and inclusion do not translate into actual AI/ML candidate hiring and team promotion practices within the tech industry. We strive to highlight companies offering products, services, datasets, and research that are developed and designed to be representative of all ethnic groups. Additionally, our directory offers a diversity of tools that help ensure machine learning algorithms are tested for fair outcomes prior to launch.

Connect with Caryn at BiasInAI.com



LEARNING FROM THE COMMUNITY

Andrew Dillon
V.M. Daniel Professor of Information, University of Texas

Tell us about your role:

[I am a] researcher and educator in human-centered information design.

What’s been your path into the field?

As a psychologist, I've always been interested in how we can leverage information technologies for human benefit, to augment our capabilities and create a more inclusive world.

What are the challenges you and your organization face with operationalizing AI ethics?

In a rapidly evolving domain, people are either focused on the technology of AI without appreciating its human impact, or they are concerned with ethical issues but lack an understanding of the technology. We need to bridge these groups to create meaningful discussions.

What recommendations do you have for operationalizing AI ethics?

We must tackle design education so as to foreground the ethical impacts of all designed information technologies. This cannot be done through a single course in ethics that a student takes as part of their curriculum in some bolt-on fashion typical of MBA programs. It has to be woven into the complete coursework in a program so that design is recognized and understood as enacting choices over how people live and work. Future designers cannot be allowed to claim ignorance of ethics, and professional bodies must hold their future members to account. We may not have an agreed set of ethics yet, but there are general principles of ethical, human-centered design that we can agree upon while allowing for continued attention to emerging challenges. Let's start there.

What resistance is one likely to encounter when making the case for ethical AI to the business community?

That it costs; that it is not agreed upon or understood well enough yet; that designs which persuade, capture attention, and nudge are the real keys to profit. We shall hear these arguments continually until we address them head-on.

How can companies try to incorporate AI ethics practices when they exist in a capitalistic system that is focused on profits?

If we can demonstrate that a better user experience has long-term benefits for companies and consumers, the case will make itself. Of course, businesses exist to make a profit, but few would claim it is profit no matter what. It is possible to motivate profits and human well-being in a political environment that balances individual rights, the ability to profit, and collective well-being. This is a longer conversation, with legal and ethical implications for all of us, and in an information-mediated world, businesses have to engage in this conversation rather than ignore it.


We'd love to hear your feedback on this report. Please write us at Hello@AllTechIsHuman.org

You can find the most up-to-date version of our work at BusinessCaseForAIEthics.com

All Tech Is Human is an organization committed to building the Responsible Tech pipeline by making it more diverse, multidisciplinary, and aligned with the public interest.

Join our newsletter at AllTechIsHuman.substack.com
Join our Slack group at bit.ly/ResponsibleTechSlack
Join our mentorship program at bit.ly/ResponsibleTechMentorship
Check out our Responsible Tech Job Board at AllTechIsHuman.org
