"When it comes to figuring out how to achieve AI ethics in practice, we need the
technologists’ voices and perspectives just as much as we need input from philosophers,
sociologists, legal experts and enterprise leaders." -Beena Ammanath, Executive Director,
Deloitte AI Institute and Founder, Humans For AI
"[T]he bigger question here is—what are the ultimate societal success metrics for the AI we
build?"-John C. Havens, Executive Director, The IEEE Global Initiative on Ethics of
Autonomous and Intelligent Systems
"You can't quantify and operationalize everything. You will never be able to do
'automated' algorithmic audits that are comprehensive." -Dr. Rumman Chowdhury, CEO
and Founder of Parity
"Having principles, or boards or training and framework are good, but only the beginning
of the journey." - Maria Axente, Responsible AI and AI for Good Lead, PwC UK
"While regulation is necessary it is not able to pinpoint nuances that happen behind closed
doors. The people building such systems are in a unique position of power in terms of
knowledge about the nuances of the product and their ability to course-correct."
-Aparna Ashok, Technology Anthropologist, Service Designer, Applied AI Ethics
Researcher
"The first challenge is getting senior leaders to take it seriously. Some already do, but
that's a small percent of senior AI leaders."-Reid Blackman, PhD, CEO & Founder Virtue
"Operationalizing AI ethics first begins with a detailed plan that must originate from
key stakeholders at the C-level, workers including data science and annotations
professionals, and input from the communities you serve."-Liz O'Sullivan, VP of
Responsible AI at Arthur, and Technology Director of STOP (The Surveillance
Technology Oversight Project
Contents
5 About All Tech is Human | Contributors
24 Perspectives from 28 leaders in the field
Knowledge base
23 Conclusion
EDITORIAL TEAM
SPECIAL THANKS TO THE MANY EDITORS AND EXPERTS WHO CONTRIBUTED TO THIS REPORT
Abhishek Mathur; Adrian J. Mack, PhD; Aishwarya Jare; Amanda Pogue; Amit Dar; Ana Chubinidze; Ana Rollán; Andrew Sears; Aneekah U; Angelica Li; Ankita Joshi; Ansgar Koene; Arsh Shah; Arushi Saxena; Ben Roome; Bethany Edmunds; Beverley Hatcher-Mbu; Bijal Mehta; Camilla Aitbayev; Cara Davies; Cara Hall; Caryn Lusinchi; Charles Radclyffe; Charlie Craine; Chhavi Chauhan; Chris McClean; Claudia Igbrude; Cynthia Mancha; Dan Gorman; Dan Wu; Eli Clein; Elisa Ngan; Ellysse Dick; Emanuel Moss; Felicia Chen; Firat M. Haciahmetoglu; Fitz Mullins; Gabriel Kobus; Gunjan Kishor; Harini Gokul; Jack-Lucas Chang; Janna Huang; Jeff Felice; Jennifer Dalby; Jessica Pham-Ruhland; Jigyasa Sharma; Joan Mukogosi; Joey Gasperi; John C. Havens; Joshua Ukairo Stevens; Kacie Harold; Kapil Chaudhary; Karen Aoysia Barreto; Karina Alexanyan, PhD; Katherine Lewis; Katrina Ingram; Kayla Brown; Kevin Macnish; Lauren Mobertz; Lavina Ramkisson; Lilia Brahimi; Lydia Hooper; Mark Cooper; Matthew Chan; Mayra Ruiz-McPherson; Merve Hickok; Michelle Calabro; Moe Sunami; Monika Viktorova; Nadia Piet; Nandini Ranganathan; Nandita Sampath; Nina Joshi; Nupur Sahai; Olivia Gambelin; Osiris Parikh; Oyidiya Oji Palino; Pamela Jasper; Pavani Reddy; Phaedra Boinodiris; Philip Walsh; Ploipailin Flynn; Portia Pascal; Rachel Stockton; Randall Tran; Rebekah Tweed; Renee Wurth, Ph.D; Roshni Londhe; Rumman Chowdhury; Sachi Bafna; Samuela Marchiori; Sanhitha Cherukupally; Sara Jordan; Sara Kimmich; Sara Murdock; Sara Rubinow; Shea Brown; Sibel Allinson; Sidney Madison Prescott; Siva Mathiyazhagan; Supriyo Chatterjee; Susannah Shattuck; Swathi Young; Tania Duarte; Tim Clements; Titus Kennedy; Tracy McDowell; Ursula Maria Mayer; Victoria Allen; Willmary Escoto, J.D.; Yada Pruksachatkun
Abhishek Gupta
Founder and Principal Researcher, Montreal AI Ethics Institute & Machine Learning Engineer and CSE Responsible AI Board Member, Microsoft

AI ethics has become one of the most-watched buzzwords of the past couple of years. This is both a positive and a negative outcome. Positive because there is increased awareness of the harms that arise from indiscriminate utilization of AI systems. Negative because there are currently a lot more abstract discussions compared to operationalization. Having worked in this domain for several years, and through my work at the Montreal AI Ethics Institute, an international non-profit institute that is democratizing AI ethics literacy, I’ve seen how such a framing can cause material harm to us making progress toward realizing AI ethics in practice.

What we need is to learn from lived experiences of those who face the harms from such systems and from those who are actually trying to implement these ideas within their communities and organizations. That is the source of knowledge that will help us create a more fair, just, and well-functioning society.

This report is the result of an extensive survey conducted by the All Tech Is Human community [. . .] in moving from abstract to concrete in the field of AI ethics. Taking the lens of business objectives and organizational change, the ideas presented at the beginning of this report help to situate the reader, providing you with the vocabulary to navigate the domain confidently. The vignettes from each of the featured profiles provide us with the lessons learned by people who are working on addressing these challenges today in an applied manner.
The technical and social complexity of AI systems has required a multi-voice effort to explore what AI
can do, what it should do, and what it could do in the future. The responsible tech ecosystem is a venue
where such issues are examined, guardrails are proposed, and value propositions are offered.
The course of AI development is widening.
AI development is at an inflection point in 2021. While tests in research settings are conquering ever-
higher performance goals, deployment of AI in real-world conditions is keeping pace, scaling wider and
deeper. A new wave of implementation is surging within private enterprise, public agencies, and
partnerships between commerce and government. Organizations are using the technology to optimize
efficiency and profitability, which may be seen in metrics for operations, financial reporting, human
resources, customer service, and a growing list of other aspects of running a business. Governments are
using and considering AI systems to determine which citizens receive benefits or who is subject to
increased policing. Blended efforts that combine private and public capabilities have been proposed to
redress complex societal matters, for example, in public health and environmental science.
AI continues to infuse life beyond experimental settings by expanding from the theoretical realm to the
corporate boardroom, online marketplace, private home, and public sidewalk—and the responsible tech
ecosystem needs to move with it. Conversations and actions need to build new tools and language to
deal with novel implementations of AI. In this publication, we focus on business and corporate settings
where AI systems are being developed, used, and assessed.
By responding to the question of how to operationalize AI ethics, we address concerns that organizations have about matching their ethical aims with their practical execution. This distance between theory and action is what a 2021 World Economic Forum white paper has called an intention-action gap (link: http://www3.weforum.org/docs/WEF_Responsible_Use_of_Technology_2021.pdf).
Audience
Successful operationalization of ethical AI requires an interplay of stakeholders. This report supplies a playbook for three organizational dynamics that give momentum to a sustainable vision for AI implementation: executive accountability; worker empowerment and protection; and stakeholder awareness. In tandem, these three conditions generate synergy toward business and societal values.
Consider Alex, a manager who oversees people and processes at a hypothetical data-driven company.
At an early point in Alex’s time with the firm, the CEO announced a statement of ethics principles to
communicate a commitment to responsible use of AI. Until the organization found ways to convert
principles to action, however, Alex and fellow employees questioned whether the stated guardrails
were actually being met. The operationalization of AI ethics, on the other hand, puts the firm on a new
course by creating conditions for progress.
Moving from theory to action is not always straightforward, particularly for individuals in an
organization that lacks a culture in which contributors are invited to reflect on and speak up about
values, mission, and outcomes. Unless an organization is self-reflective, downsides might involve
existential risks to the business or a line of business, inability to attract top talent, strained
relationships with business partners, declines in customer loyalty, regulatory fines, civil and criminal
lawsuits, unintended harms to communities, and/or amplification of historical biases against
marginalized groups.
For many organizations, the embedding of ethics into AI development and implementation will involve
change to governance, policy, and procedure. Some companies may create new roles, shift
responsibilities in existing roles, or reconfigure teams. Unique and unfamiliar challenges may arise in
operationalizing responsible AI, even for those who have experience in change management.
[Figure: an AI Ethics Framework linking executive accountability, worker empowerment and protection, regulatory effectiveness, and stakeholder awareness.]
3. Stakeholder awareness. Across the organization, team members will need to invest
time in cultivating awareness and knowledge of the reasons for operationalizing
workflows and processes that promote ethical AI. Stakeholders who need to be
equipped in this way include leaders, board members, contractors, vendors, customers,
business partners, employees, and communities affected by the business.
Our survey results are found in the Community Interviews section of this guide. Readers will find
questions and answers that are based on practical experience with and direct knowledge of decisions
that businesses face.
A language of persuasion
Without a sense of the business case for AI ethics, even the most passionate advocates for responsible
tech will face an uphill battle when trying to execute real and meaningful change in the commerce and
investment spheres. To translate academic and policy work regarding AI ethics into an action
framework for enterprise, this guide shifts the mode of expression into one of organizational change
and management. It adjusts some prevailing discourse from activism, academia, and governance into
language that businesses use. This effort draws authenticity and credibility from the interview texts in
the latter pages of this guide, which come from people who do translational work between enterprise
and stakeholders.
For people who are new to the field of AI ethics, this guide offers an invitation to learn key terms in the
associated business vocabulary. It further provides understandable terminology for producing a plan of
action and winning supporters within a business organization.
"AI ethics must become a corporate-wide responsibility in fulfilling the goals and
standards of an organization’s commitment to ethical business practices in the
development and use of AI. With a greater focus in recent years around the unintended
consequences and negative externalities regarding AI, especially regarding its impact
on marginalized communities, all contributors to organizations that make and use AI
systems can take an active role in cultivating responsibility for equitable social
impact."
Developing a corporate-wide culture requires commitment from the top in establishing the corporate
commitment to AI ethics and standards. A supportive culture is essential to fulfilling this commitment
and making ethical use of AI a reality. It is important to effectively engage all key stakeholders and to
align with the corporate culture that best supports this vision, whether top-down or bottom-up. This in
turn requires expanded roles and responsibilities for technology workers that cross domains, processes,
methodologies, and departmental operations. An effective ethical AI model will support the inclusion
and empowerment of key players as part of its implementation.
An example of a corporate-wide model, with identified roles and responsibilities, could include:
Technology Workers: The role for technology workers is to manage the technology infrastructure
required to collect, store, and distribute data. The technology function is critical in ensuring
effective governance and selecting toolkits required for the ethical use of AI.
Technology workers are responsible for: technology infrastructure, data governance, data
accuracy and quality, and data privacy, in accordance with the corporate AI Ethical Standards.
Data Scientists: The role of data scientists is to provide the organization with the information
needed for effective problem-solving and decision-making with algorithms that provide the
organization with accurate business intelligence.
Data scientists are responsible for: transparency in data construction, disclosure in assumptions
utilized in development of algorithms, information value, and compliance with ethical AI metrics.
The CEO is responsible for: corporate reporting to the Board of Directors on ethical practices
and metric results.
Boards of Directors: The role of the Board of Directors is to endorse the CEO’s Code of Ethics and
to sign off on an Annual Report on ethical AI use and metric results, as reported by the Corporate
Compliance AI Ethics Committee.
The Board of Directors is responsible for: corporate trust and corporate AI Ethical Compliance.
AI Ethics Compliance Committee: The role of the AI Ethics Compliance Committee, under the
direction of a Technology Executive and AI Ethicist, is to audit corporate compliance with the
corporate Code of Ethics through monitoring and measurement. The Committee further monitors
for continuous improvement and development of best practices.
The Compliance Committee is responsible for: issuing an annual report to the Board of Directors
on corporate compliance with the AI Ethical Values and Standards.
HR: The role of HR is to develop basic orientation for all employees to meet the corporate AI
Ethical Standards and to support commitment to these values and goals. HR further monitors the
corporate adherence to the Code of Ethics in the hiring, promotion, and firing of all employees.
HR is responsible for: orientation of all employees to the corporate Code of Ethics and reporting
of any violation to the AI Ethics Compliance Committee, especially resulting from the violation
of any employee’s rights in fulfilling the corporate AI ethics commitment.
Department/Business Line User: The role of the department/business line user is to comply with
the corporate Code of Ethics and to ensure accountability in the ethical use of AI in departmental
operations.
The department/business line user is responsible for: departmental operations and the impact of
decisions made in the use of AI.
Our report is built on the premise that all technology workers can take an active role in accepting
responsibility for the social impact and adherence to a code of ethics in the use of AI. Leading the
internal effort to build “trusted” use of AI with ethical standards will enable tech workers to become a
vital part of the corporate commitment to serving all stakeholders.
To begin enacting AI ethics change, it is important to first understand the field, the gaps in the current
literature, and the general scope of the work that’s already been produced by AI ethicists over the last
decade. This section provides a brief overview of those topics.
Defining AI ethics
(link: https://www2.deloitte.com/us/en/pages/regulatory/articles/ai-ethics-responsible-ai-governance.html)
The first challenge that an organization will likely face is creating or selecting a definition of AI ethics that resonates with them or is particularly relevant to their business. Although the three definitions above are quite similar, one is broad and all-encompassing, another puts the onus of ethical AI on [. . .]
High-level governance
Between 2016 and 2019, 74 sets of ethical principles (or guidelines for AI) were published by various
groups, focusing on high-level guidance like “creating transparent AI” (Carly Kind at VentureBeat; link:
https://venturebeat.com/2020/08/23/the-term-ethical-ai-is-finally-starting-to-mean-something/).
Although these frameworks were an important first step for AI ethics, they are hard to put into action
because of their vagueness.
See also "Ethical Frameworks for AI Aren't Enough" (link: https://hbr.org/2020/11/ethical-frameworks-for-ai-arent-enough).
An overview of many existing AI ethics guidelines can also be found at Algorithm Watch (link:
https://inventory.algorithmwatch.org/).
These frameworks are similar in that they incorporate concepts of bias, fairness, accountability,
transparency, and explainability. However, they don’t make AI ethics actionable, outline steps that need
to be taken, identify people who must be involved, or provide methods to quantify any efforts.
For example, once an organization has agreed upon a definition for AI ethics, such as the one above,
they still have several decision points to incorporate AI ethics into their business.
The challenge is ultimately that the ethics of AI is dependent on the data collected and handled, the
algorithms applied, the individuals building the models, and the consequences of the outcomes. Ethical
AI frameworks themselves may not be effective at preventing AI harms, but monitoring the way that
algorithms are implemented and investing in best practices can pave the way to translate governance
into impact.
Bias mitigation
The Gender Shades project (link: http://gendershades.org/), based on Joy Buolamwini and Timnit
Gebru's groundbreaking paper, showed that facial recognition technology was biased against women
and people of color and inspired a shift in the AI ethics community to focus on detecting and mitigating
statistical bias. It is generally understood that math isn’t biased—we encode our human biases into our
data and models. Unintentional bias can stem from problem misspecification and data engineering, but
even more commonly from prejudice in historic data and under-sampling.
Unfair, biased, and at times malevolent algorithms can further disadvantage already vulnerable
communities. There is a wealth of information about methods of bias mitigation. For example, a bias bounty is the practice of hiring outside parties to find bias in a model before it is discovered by the public; this is especially powerful in the absence of strong government regulation of AI. A plethora of
toolkits, including IBM’s AIF360, Microsoft’s Fairlearn, and FairML have emerged to combat bias in
datasets and AI models. Most importantly, the issue of bias points to why we need diversity in tech.
Diversity of perspectives, education and training, and sociodemographic factors are all important to
catch bias in our models before they are deployed.
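To make bias detection concrete, here is a minimal sketch using Fairlearn, one of the toolkits named above. The data, group labels, and columns are hypothetical; a real audit would use the project's own predictions and sensitive attributes.

```python
# A minimal sketch of a fairness audit with Fairlearn. All data below is
# a hypothetical illustration, not a real benchmark.
import pandas as pd
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

# Hypothetical labels, model predictions, and a sensitive attribute.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 1, 0, 0, 1, 1, 0],
    "gender": ["F", "F", "M", "F", "M", "M", "F", "M"],
})

# Accuracy broken down by group can reveal disparate performance.
by_group = MetricFrame(
    metrics=accuracy_score,
    y_true=df["y_true"],
    y_pred=df["y_pred"],
    sensitive_features=df["gender"],
)
print(by_group.by_group)

# Demographic parity difference: the gap in selection rates between groups
# (0.0 means parity; larger values indicate more disparity).
dpd = demographic_parity_difference(
    df["y_true"], df["y_pred"], sensitive_features=df["gender"]
)
print(f"demographic parity difference: {dpd:.2f}")
```

Quantitative checks like these do not replace the diversity of perspectives described above, but they give teams a repeatable signal to review before deployment.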
Like any new endeavor in a business, there needs to be a clear answer (or answers) to “Why should we
do this?” Because much of the work in the field of AI ethics has been developed by academics and
activists, the language does not always translate directly to a business environment.
Concrete ways to emphasize the business case for AI ethics can be found on the next page. These are
taken from the Economist Intelligence Unit (EIU)’s recent report (link: https://pages.eiu.com/rs/753-RIQ-438/images/EIUStayingAheadOfTheCurve.pdf) as well as stakeholder interviews. As someone
trying to convince your boss(es) about why this is important, you will be best placed to pick and choose
which of these reasons are likely to resonate most.
For example, when trying to convince your manager “Sam,” you know that the main argument important
to them is how AI ethics helps meet the already high pressure placed on revenue and sales targets. For
this manager, you might choose to focus on AI ethics as a way of increasing win-rates and improving
customer engagement—both related to their focus on revenue and sales. Perhaps a targeted example of
how a competitor or two are operating with regards to AI ethics and how this may result in a loss of
competitive advantage for your manager’s sales and revenue may be convincing for this manager as well.
When trying to convince your manager “Alex,” however, you know that their main concern is how to
build a legacy company that operates with integrity. Accordingly, arguments associated with “doing the
right thing” and how AI ethics enables the organization to “walk the talk” may be relevant, as would
arguments around future-proofing the organization. Trust in the market as well as attracting and
retaining top talent would also likely be of interest to this manager.
Choosing the “right reasons” is highly context-dependent. Not all of these may be relevant to your case.
Furthermore, investing in AI ethics allows you to keep up with competitors. Consider key competitors’
AI ethics maturity—do they have high level frameworks and principles? Have they implemented AI
ethics boards, teams, or other oversight mechanisms? Have they developed specific toolkits to tackle
fairness or bias? If so, you can make the case that your company must invest in AI ethics to keep up.
There are numerous other ways to make the business case for AI ethics, including the following:
Bridging the information gap – It must be recognized that artificial intelligence and its
implications may not be familiar topics for many within the organization. Clear internal
communication programs and training may be needed for employees to grasp the
importance of AI ethics and understand its relevance to their particular role or team.
Wider structural changes – AI ethics cannot exist in isolation from other structures and
processes within the organization. Implementation will also have to consider wider
structural changes to assist and reinforce the goals of AI ethics. This could include changes
in compensation incentives, existing policies, decision-making procedures, risk assessments,
and many other areas.
The following actions can be taken to identify and maximize the benefits of AI ethics:
Survey customers. Companies have long sought insights from customers to guide product direction,
and AI ethics can be probed from a variety of perspectives to understand market expectations.
Companies should also monitor AI survey results shared by McKinsey, PwC, Accenture, and other
consulting firms.
Create a customer advisory board. Many companies bring together a group of strategic customers to
periodically discuss industry trends, issues, and priorities. While the main focal point of such a board
is to explore potential solutions, probing the importance and application of AI ethics should lead to
valuable intelligence and further collaboration.
Create an ethics advisory board. As more companies have come to appreciate the ubiquity of AI and
its importance to society and the future of work, they have recruited internal and external cross-
functional experts to act as the governance body which sets the AI ethics strategy. There is no
consensus for how such a board should be constructed, tasked, or empowered, so companies should
perform research in order to optimize this opportunity.
Monitor laws and industry standards. While some existing laws apply generally to AI, many new
ones are being proposed to expressly address the unique challenges presented by AI. Similarly, many
industry standards have begun to mature from high-level principles to actionable plans that can
funnel into and fuel the development pipeline (e.g., from the IEEE). Staying aware of these updates
will be critical to enable responsible innovation while ensuring compliance and maintaining brand
integrity.
Always address ethics in projects. Many technology implementation best practices translate readily to AI, including selecting a project methodology and documenting key risks, actions, issues, and decisions. Holistically integrating ethics into that delivery process—including with contextual modifications specific to AI, like performing statistical analyses of data sets and impact assessments (a brief sketch of such an analysis follows this section)—will be the next step forward to ensure sustainable, long-term success.
In particular, AI ethics can benefit significantly by unleashing the power of diversity and inclusion,
not just within development teams, but across all departments. The field demands collaboration from
multi-disciplinary stakeholders to better identify and manage risks, improve decision-making, and
drive human-centric innovation—while simultaneously boosting employee morale, engagement, and
loyalty in a virtuous cycle.
Launch a public awareness campaign. There is more attention on, and investment in, AI than ever
before, which creates the perfect opportunity for leadership on the significance of AI ethics.
Companies should embrace it by participating at events, sharing articles or information, and taking
other steps to advance the public dialogue and earn a reputation as an ethical innovator.
-Merve Hickok
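As a brief sketch of the kind of statistical analysis of data sets mentioned in the "Always address ethics in projects" item above, the snippet below checks group representation and historical outcome rates before any model is trained. The file name, column names, and review threshold are hypothetical placeholders.

```python
# A sketch of a pre-modeling dataset audit: compare each group's share of
# the data and its historical outcome rate. File, columns, and threshold
# are hypothetical.
import pandas as pd

df = pd.read_csv("loan_applications.csv")  # hypothetical training data

# How well is each group represented? Severe under-sampling of a group
# is one of the bias sources discussed earlier in this report.
representation = df["ethnicity"].value_counts(normalize=True)
print(representation)

# Do historical outcomes differ sharply by group? Large gaps here can be
# learned and amplified by a model trained on this data.
outcome_rates = df.groupby("ethnicity")["approved"].mean()
print(outcome_rates)

# Flag gaps above a (hypothetical) review threshold for the impact assessment.
gap = outcome_rates.max() - outcome_rates.min()
if gap > 0.10:
    print(f"Outcome-rate gap of {gap:.0%} exceeds review threshold; escalate.")
```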
We invite you to learn more by reading the interviews on the following pages.
Community
Interviews
Hear from a broad range of leaders about
ways to operationalize AI Ethics.
AllTechIsHuman.org | BusinessCaseForAIEthics.com
Beena Ammanath
Executive Director, Deloitte AI Institute and Founder, Humans For AI

"[. . .] place throughout the AI lifecycle, from conception and development, to assessment and deployment, and through management and ongoing re-assessment. Also, in thinking about your workforce ecosystem, design clear protocols that put the end-user first. This leads to a culture where AI ethics is a priority for every stakeholder."
What are the challenges you / your organization face with operationalizing AI ethics?
In our most recent State of AI in the Enterprise survey, 95% of respondents expressed concerns around ethical risks for their AI
initiatives. Despite these worries, the study reports only about a third of adopters are actively addressing the risks—36% are
establishing policies or boards to guide AI ethics, and the same portion says they’re collaborating with external parties on
leading practices.
Although there is still a long way to go, a growing number of organizations are tackling AI-related risks head-on:
• As a founding donor for The Council on the Responsible Use of Artificial Intelligence at Harvard’s Kennedy School, Bank of
America has embraced the need to collaborate on AI ethics. It has also created a new role—enterprise data governance
executive—to lead AI governance for the firm and work with the chief risk officer on AI governance.
• The German engineering firm Robert Bosch GmbH, which plans to embed AI across its products by 2025, is training 20,000
executives and software engineers on the use of AI, including a recently developed AI code of ethics.
• Workday, a provider of cloud-based enterprise software for financial management and human capital management, is
employing a broad spectrum of practices. It has committed to a set of principles to ensure that its AI-derived recommendations
are impartial and that it is practicing good data stewardship. Workday is also embedding “ethics-by-design controls” into its
product development process.
We are also helping our clients navigate AI ethics with our Trustworthy AI Framework, designed to help organizations navigate
through potential issues such as bias, transparency, privacy, and developing regulations.
The Trustworthy AI Framework helps organizations develop ethical safeguards across six key dimensions—a crucial step in
managing the risks and capitalizing on the returns associated with AI. These pillars include fair and impartial use checks,
implementing transparency and explainable AI, ensuring responsibility and accountability, putting proper security in place,
monitoring for reliability, and safeguarding privacy.
I approach operationalizing AI ethics across the dimensions of people, process /controls, and technology, all of which are
interdependent. Agile technologies allow you to assess and truly validate whether AI tools are behaving in line with the ethical
framework. Technology solutions can mine data and reveal insights and trends. Importantly, such technologies need to be
flexible enough that they work across all use cases and also simple enough that they provide meaningful outputs to a diversity of
decision-makers.
Yet, who is making decisions? Organizations need clear roles and responsibilities for stakeholders whose daily effort is to think
about, monitor, and drive AI ethics. This may mean establishing the role of Chief AI Ethics Officer, creating an AI ethics advisory
group, hiring AI ethicists, distributing responsibility across existing leadership—or perhaps all four. It also means training for the
entire organization. Every employee needs to be thinking about AI ethics in the same way.
Alongside this are the processes and controls for a repeatable, sustainable approach. Processes contain guardrails that map
from the technological solution to the framework and inform the decision-makers. This means using real-world domain
information, not just experimental datasets.
Business leaders understand the importance of AI ethics because the risks posed by misbehaving AI are so significant. There’s
no shortage of talk and spilled ink on the topic. Yet while it’s easy to say we need to mitigate AI risks, it is much harder to define
how to do it. We need to move beyond high-level principles and dig into the specifics, and therein is a primary challenge: AI
ethics is an emerging area. The path forward is still being defined.
There is not always an agreed-upon vocabulary for AI ethics. There may not yet be an appreciation for what is meant by
concepts like fairness and impartiality, transparency, accountability, and even privacy. This challenge is made greater because
there are a range of disciplines involved in ensuring ethical AI, and some professions are underrepresented, notably, computer
science. When it comes to figuring out how to achieve AI ethics in practice, we need the technologists’ voices and perspectives
just as much as we need input from philosophers, sociologists, legal experts and enterprise leaders.
Ultimately, the tactics and strategies that work for AI ethics at scale require buy-in and input from the whole organization. It
really is a united effort, where the ethical framework is embedded in the business process and the culture prioritizes AI ethics as
much as it does AI function.
Since Responsible AI requires a culture change and re-education in a range of areas, how are you ensuring your employees
have the knowledge and skills to design and build responsible AI solutions?
If step one of driving toward ethical AI is agreeing upon the organization’s ethical principles, then step 1b is making sure the
framework is easy to communicate and understand. AI is complicated, but ethics should not be. With a clear framework, cultural
change flows out of training and awareness. Most ethical organizations already use some form of business ethics or integrity
training, both when new employees are on-boarded and then regularly thereafter. AI ethics should become a part of this.
As the culture begins to shift, every stakeholder can see their place in the larger effort of upholding AI ethics. Education and
skills development help, but what you also need are channels for employees to provide feedback and raise concerns.
Stakeholders need to understand their responsibilities but they also need to be empowered to play an active role and inform
those ultimately responsible for ethical AI. One question then is how does an organization motivate the workforce to raise AI
ethical concerns? Incentives are a good start. Things like acclaim and awards within the organization, factors for performance
metrics, and potentially penalties for inaction should be on the table when organizations are figuring out how best to cultivate
an ethical AI culture.
"When it comes to figuring out how to achieve AI ethics in practice, we need the
technologists’ voices and perspectives just as much as we need input from philosophers,
sociologists, legal experts and enterprise leaders."
There is no shortcut for ethics. Publishing AI principles and committing to ethical AI are great, but to operationalize AI ethics,
we need to go beyond them. (You can read our article “Operationalizing AI Ethics Principles.”) Making the “right” choice at every turn often requires a thorough understanding of ethics and its intersection with the technology and the domain. By building
processes that effectively integrate ethics into the innovation lifecycle and by bringing ethics expertise on board, we can
operationalize AI ethics.
Ethics risks are serious business risks that might result in reputational harm and loss of customer trust. And ethics opportunities
are often real business opportunities which might give a company the edge that it needs to succeed.
What resistance is one likely to encounter when making the case for responsible AI to the business community?
Practitioners think of ethics as a hindrance—a policing system that stands between a brilliant idea and its implementation. In
fact, AI ethics enhances technology and it does so for all of our sake. Businesses also think that ethics is an unnecessary cost.
However, ethics risks are business risks.
Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has
fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when
they exist in a capitalistic system that is focused on profits?
Ethics and profits are not mutually exclusive. Ethics failures put customer trust—and thus, profits—at risk. With complex AI
technologies, customer trust plays an important role for their adoption of these technologies. A company’s ethics failure would
put its other practices into question. Should customers trust them with their data? Should they trust them for receiving fair
treatment? Should they trust the system to be non-manipulative? If they cannot, they might rightly choose not to use these
products.
"Ethics and profits are not mutually exclusive. Ethics failures put customer trust–and
thus, profits–at risk. "
Yasmine Boudiaf
Creative Technologist and Visiting Fellow at Ada Lovelace Institute

"It should be regularly reviewed and revised, with input from external groups. Societal values and the way AI is used constantly change. Don't be so arrogant as to think that the operational ethics you came up with yesterday will be relevant tomorrow."

What are the biggest challenges in general to operationalizing AI ethics?
Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has
fundamentally motivated how businesses and societies operate.
How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?
The challenge is to think long-term. The unethical, yet permissive AI practices that are in operation now will not make for a
viable business in the future. It's not only "nice" to be ethical, it will ensure a business's survival.
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
I have policies in place in my own practice to mitigate discrimination. I acknowledge that I have blind spots, and so will simply
have a conversation with collaborators at the beginning of a project on how best we can work together. I regularly experience
discrimination, but that is not to say that I'm incapable of discriminating against others.
Ansgar Koene
Global AI Ethics and Regulatory Leader, EY

"[. . .] priorities. This is why the UnBias AI for Decision Makers toolkit I developed with Giles Lane focuses on breaking down organizational silos to bring together people from across organizational teams for a holistic, transdisciplinary discussion about the ethical implications of implementing an AI application."
One of the ways I am working to operationalize AI ethics is through the development of standards, such as the IEEE P7003
Standard for Algorithmic Bias Considerations. Obviously, given standards about bias, an important issue for us has been
inclusion and diversity among the participants of the working group. While we managed to achieve a good mix of participants
from industry, academia, and civil society, as well as having at least some representation from each inhabited continent, it
remains undeniable that participation in the group is heavily skewed towards people from Europe and North America. More
broadly speaking, there is a significant lack of participation by the Global South in the development of the new AI standards that
are currently being developed by ISO/IEC, IEEE-SA, ITU, and other bodies, which will define industry best practices.
My main recommendation for operationalizing AI ethics is to avoid siloed thinking. There can be no AI ethics without
organizational ethics. Think beyond the immediate aims of the AI project to take into consideration the wider impacts on all
affected stakeholders.
The biggest challenges to operationalizing AI ethics are the structural changes that may be required within an organization to
systematically identify the downstream impacts of AI applications and the need to accept that being serious about AI ethics
includes potentially having to devote more time or resources, or maybe even cancelling AI projects when no ethical
implementations are possible.
My business case for ethical AI centers on long-term value for the organisation both in terms of reputation and trustworthiness,
as well as future-proofing for compliance with regulatory developments.
Since Responsible AI requires a culture change and re-education in a range of areas, how are you ensuring your employees
have the knowledge and skills to design and build responsible AI solutions?
Within EY we have launched a new professional training course focused on ethical and trusted AI. The course is offered to
everyone within the organisation and is part of the EY Badges system that all EY staff are encouraged to do for their continuing
professional development.
In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection,
curation, etc. in your org ecosystem?
Since EY operates in the highly regulated auditing sector and deals with sensitive client data, there are strict regulatory
compliance requirements on the way data is used.
European Commission, European Parliament, Council of Europe, OECD, WEF-AI Global, Algorithm Watch, AI Now, IEEE
What resistance is one likely to encounter when making the case for responsible AI to the business community?
Typical resistance to ethical AI in the business community is likely to focus on the difficulty in quantifying the benefits or
averted risks that will be achieved as a result of resources and time that need to be invested. This is especially true in the
absence of regulatory requirements with defined fines or other consequences for non-compliance.
To address this issue, we are working to link AI ethics to other shifts in business thinking, such as a move towards focusing (and
measuring) long-term value and non-financial corporate assets (including reputation).
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
EY operates primarily in the B2B or B2G space where the affected stakeholders are other organizations. When providing consulting services to clients that provide B2C or G2C services, EY’s Trusted AI framework includes items on suitability of project teams, representativeness of data, and assessment of bias in model performance.
Dr. Rumman Chowdhury
CEO and Founder of Parity

"[. . .] don't want to hear 'multi-stakeholder engagement' . . . until they actually try (and fail) to do a real ethical audit. Getting different groups to understand other groups is difficult—sometimes because of skillsets, and other times because of how they value (or do not value) the group mentioned."

What recommendations do you have for operationalizing AI ethics?
Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?

My suggestion is to think through the literature in organizational change management focused on theories and catalysts of change. This approach is featured in our article on how to enable Responsible AI (with Bogdana Rakova, Henriette Cramer, and Jingying Yang; link: https://sloanreview.mit.edu/article/putting-responsible-ai-into-practice/).
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
[The] Parity platform is built to engage multiple stakeholders and draw insights from their feedback - so this sort of interaction
is enabled by design. While we can all talk about 'incorporating minority voices' few people have figured out how to translate
that into product development—what does it mean to translate 'inclusivity' into an app?
"You can't quantify and operationalize everything. You will never be able to do
'automated' algorithmic audits that are comprehensive."
Maria Axente
Responsible AI and AI for Good Lead, PwC UK

"[. . .] exploring what ethics mean for AI and what position we, as a professional service firm, should take in framing it for us and our clients. Along this journey, we built the Responsible AI toolkit as our response to client challenges of implementing ethics across the whole AI lifecycle. The toolkit also has an AI ethics framework that looks at supporting the C-suite to contextualize and operationalize ethics. I currently lead its implementation for our clients and its development - we just launched a data ethics approach based on Responsible AI. In our quest of developing the toolkit and the philosophy, I got involved with many organizations in the UK and globally and co-created much of the knowledge, narrative, and frameworks around responsible and ethical AI."
In our case, the biggest one is the complexity of the task for an organization as diverse and big as ours. We know our approach needs to be systemic rather than linear (as is being done in most cases), and that we need to invest time, resources, and patience over the medium term, while we focus on low-hanging fruit.

The second challenge is how to prioritize various initiatives based on potential impact on outcomes and ease of implementation, and how to balance internal work with the external work we do for our clients.
Lastly, demonstrating progress and trustworthiness to all the stakeholders involved—that requires new ways of thinking,
working, and ultimately behaving.
Think big (systemically) but act small and focused (low-hanging fruit) and build momentum. AI ethics is a complex and profound cultural and organizational change; treat it like one, and learn from the science of cultural change and transformation how to do it well and succeed.

Also, operationalization is not the same as ethical reasoning—the decision making and acting that is the missing part in all the processes of creating ethical AI, and the glue that holds it all together. All we do with AI, around it, and for it needs to be centered on one principle, 'Should I build it?' not 'Can I build it?', which has been the philosophy of computer science since its inception.
Far too often, we apply linear thinking to a complex and systemic task; far too often we fall for the "broken part" fallacy [. . .] when what is needed is a systemic analysis [. . .] with correlations and causations highlighted.
1. A higher awareness among customers, citizens, and leaders of the ethics and dangers of technology. As a result of high-profile ethical failures, the media, lawmakers, regulators, and society-at-large have started to focus their attention and actions on the negative impact of AI, and how ethics should play an important part.
2. The UK's AI public policy is focused on ethical and responsible use of data and AI:
A significant proportion of the AI policy work within the UK is concerned with ethical issues when it comes to the strategic use of data and AI. In the UK, the ‘ethics of AI’ has been seen as a national competitive advantage in the global AI sector. The UK is, alongside Singapore, one of only two countries to have a government department focused upon the ethics of data and innovation.
3. Imminent AI regulation triggers companies to invest in readiness. UK regulators (alongside the EU) are seen as leading the
way internationally in their understanding of the financial, societal and moral impacts and they are acting swiftly to develop
regulatory frameworks mostly addressing ethical issues on data and AI.
4. Increased scrutiny of the tech industry has increased the trust and branding risks associated with a lack of ethics in
technology. Building and maintaining brand trust in the age of AI becomes a priority for organizations of all sizes.
6. [The] C-suite sees responsible technology and data as a source of competitive advantage. 90% of respondents believe the benefits of responsible AI will outweigh the costs (The Economist)—this could have a significant impact on the following: enhanced product quality and commercial competitiveness; talent acquisition and retention; sustainable investing and strengthening all stakeholder relationships; and boosting and maintaining a high level of brand trust.
7. Responsible tech and data is a strategic priority for some C-suites. According to the most recent PwC Responsible AI research, three-quarters of companies have formal policies (or guidance) in place, with 1 in 5 having an ethical framework in place. All companies questioned have defined, or are looking to define, their own point of view on the ethics of technology/AI/data via a variety of activities: ethical principles, boards and committees, frameworks, and audit/assurance tools.
In big organizations we work with, many are grappling to understand it, few talk about it, and even fewer can demonstrate real progress rather than empty PR. Three examples come to mind—Ethical AI Practice at Salesforce, Ethics and Society at Microsoft, and Yoti.
What resistance is one likely to encounter when making the case for ethical AI to the business community?
[The] current incentive structure and the mindset that generates it. There is a false antithesis between profit and ethics: the assumption that embedding ethics will reduce profitability. We have to go back to the business model and demonstrate how ethical AI products not only open new markets but also reduce compliance and risk costs by mitigating them proactively.
How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?
See, here is the problem, in the way we think about ethics as a killer of progress! We have to frame it correctly: how can ethical businesses be more profitable and solve important societal problems at the same time? We have done it in the past with ethical supply chains and sourcing, for example, so what can we learn from those practices? How can we respond to the huge cultural shifts that BLM and #MeToo have brought, shifts that give rise to a new consumer mindset? These could be huge opportunities for organizations to change profoundly if they wish to remain competitive and thrive.

But you know what, each organization, willingly or unwillingly, under pressure from society, customers, or regulators, will have to go down that path of soul-searching and through the test of 'how moral we REALLY are.' Responsible AI cannot be built by an unethical business, and funnily enough, AI systems are exactly those mirrors that will uncover with phenomenal precision the true moral nature of those organizations.
"Having principles, or boards or training and framework are good, but only the beginning
of the journey."
Brandeis Marshall

"[. . .] computing and automation right now. Let’s actively work to center the most vulnerable in our handling of data."
Companies must decide to re-align the areas in which they obtain their profits. Automated algorithms and systems that are identified as causing harm need to be prioritized by the company for timely resolution. Time, talent, and money must be redirected to fully implement the needed interventions.
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How
does your organization factor in these possible risks/liabilities?
The mission of DataedX is to democratize culturally responsive data learning. To do this as a company, we design, deliver, and co-create resources and strategies that bake systemic equity into automated algorithms, systems, policies, and regulations. We center inclusivity and promote the contributions of non-white people to the data space in our work.
What resistance is one likely to encounter when making the case for ethical AI to the business community?
One issue is that there is no cross-organizational standard concerning who should own the problem. That means you can make a
strong case for ethical AI but the person/people to whom you're speaking are not tasked with solving that problem, which
means they don't have the budget/resources to tackle it.
Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?
That's only an issue if you think ethical AI is either not conducive to, or contrary to, profits. If there are ways in which ethical AI is conducive to profits, there's no issue. Now, there's a very strong claim one could make that I think is false: in all cases, ethical AI—and all that goes into creating the infrastructure and processes that make it possible—is more conducive to promoting the bottom line than non/unethical AI. Endorsing that claim strikes me as an expression of an ideology as opposed to a judgment grounded in empirical evidence. That said, there are many cases in which it is true that a given company would in fact promote their bottom line by implementing an ethical AI program.
"The first challenge is getting senior leaders to take it seriously. Some already do, but that's a
small percent of senior AI leaders."
Phaedra Boinodiris

"[. . .] to understand, did anyone tamper with it, is it accountable?"

Tell us about your role:

As you scale AI you are also scaling your risk. I am responsible for helping our clients mitigate that risk through a three-part offering: I work to help clients with the culture they need to adopt and scale AI safely, the AI engineering with forensic tools to see inside black-box algorithms, and governance to make sure the engineering "sticks" to the culture.

What’s been your path into the field?

I became impassioned about the field after the news regarding Cambridge Analytica came out in 2018. I was so utterly horrified I decided to pursue a PhD in the space to learn as much as I could about it. After spending two years researching, and doing talks, I can now claim that it is my "day job" to teach this practice to others.

Tell me about your organization’s ambition for Ethical AI? Why is it a priority?

As AI is being used to make many high-stakes decisions about people, [. . .] our approach engages stakeholders from across an organization, from data scientists and CTOs to Chief Diversity and Inclusivity Officers. Fighting bias and ensuring fairness is a challenge that is solved by more than just good tech and by more than just one kind of stakeholder.

What are the biggest challenges in general to operationalizing AI ethics?

Oftentimes, like in the case of a US retailer, we are initially engaged to solve a business problem from ONE stakeholder—oftentimes the Chief Data Officer wanting to increase time to value for AI—only to have the effort mature into a broader effort that incorporates cultural transformation and governance. The engaged stakeholders then grow well beyond the initial Data Scientist to include the Chief Diversity and Inclusivity Officer, Chief Legal Counsel, and more.
Unwanted bias places privileged groups at systemic advantage and unprivileged groups at systemic disadvantage, and it can
proliferate in your data and your AI. Unwanted bias comes from problem misspecification and data engineering, but even more commonly from prejudice in historic data and under-sampling. Artificial Intelligence (AI) enhances and amplifies human
expertise, makes predictions more accurate, automates decisions and processes, optimizes employees' time to focus on higher
value work, improves people's overall efficiency, and will be KEY to helping humanity travel to new frontiers and solve problems
that today seem insurmountable. But, as you scale AI, you also scale your risk of calcifying unwanted bias systemically.
Today AI is being used in virtually every domain and industry to make all types of decisions that directly affect people's lives.
Regulation can be a powerful tool to build consumer trust in emerging technologies. As per our CEO’s letter to President-elect Joe Biden, IBM believes that a “precision regulation” approach by the government can help create ethical guardrails for the use
of new technologies without hindering the pace of technological innovation.
Only by embedding ethical principles into AI applications and processes can we build systems based on trust.
Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?
Linux Foundation, Partnership for AI, WEF AI Ethics Board guidance, IEEE, Kathy Baxter at Salesforce (love her blogs).
What resistance is one likely to encounter when making the case for ethical AI to the business community?
It oftentimes goes directly against the entrepreneurial mindset of pivot fast, 'throw spaghetti on a wall until it sticks.' Ethical AI
must be necessarily deliberative in predicting unintended consequences.
We need to ask ourselves why we are not teaching about AI and ethics in high schools. WHY do Higher Ed institutions market classes on Foundational AI ONLY to Data Scientists and Computer Scientists? This is a massive disservice to our community.
Triveni Gandhi

Tell us about your role:

As a data scientist and Responsible AI SME, I work with clients to build and deploy their AI pipelines, with a special focus on reducing bias in the machine learning cycle. I also create custom trainings about how to think about and execute Responsible AI at the executive and practitioner level. Most importantly, I evangelize the importance of holistic approaches to Responsible AI both internally and externally.

What’s been your path into the field?

I completed my PhD in Political Science in 2016 but realized I would have more practical impact on the lives of others outside of academia. After working as a data analyst in a non-profit, I came to Dataiku to help democratize AI. While here, I've seen the amazing power of our tool to transform businesses, but with my social science background, I am equally aware of the potential negative impact of AI. Thus I began a push for enabling users of our tool to think about broader implications.

What recommendations do you have for operationalizing AI ethics?

AI ethics starts at the top—leadership needs to determine a set of expectations and values for the organization. Only then can leaders across the various parts of the organization work together to define and build requirements, processes, and transparency mechanisms both vertically and horizontally.
What are the biggest challenges in general to operationalizing AI ethics?
There are so many different methods, ideas, approaches, and more to Responsible AI that it can become overwhelming to know
where to start. In particular, it becomes difficult to pinpoint the one person who is willing to take on the task to organize the
various moving pieces.
I can find a business case at nearly every client I speak to—even manufacturing firms who are interested in using AI for People
Operations. The business cases are not hard to find, because AI impacts humans in society in so many ways.
Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?
We are creating internal enablement materials so that every customer-facing role can speak to and support our clients in their
Responsible AI efforts.
In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection,
curation, etc in your org ecosystem?
Start with the ground belief that all data is biased, as it is a product of the historical context it is collected in. Data is not objective or infallible—so start there and be willing to question and analyze every piece of information. In addition, document everything! Who collected the data? When? How? The concept of Datasheets for Datasets by Gebru et al. is really important here.
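To make that documentation habit concrete, here is a minimal, hypothetical provenance record in the spirit of Datasheets for Datasets; the field names and example values are illustrative assumptions, not the template from Gebru et al.

```python
# Hypothetical provenance record in the spirit of "Datasheets for Datasets"
# (Gebru et al.). Field names and values are illustrative, not the paper's template.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    collected_by: str               # Who collected the data?
    collected_when: str             # When?
    collection_method: str          # How?
    known_limitations: list = field(default_factory=list)  # Historical-context caveats

loans_2019 = Datasheet(
    name="loan_applications_2019",
    collected_by="Retail banking intake team",
    collected_when="2019-01 through 2019-12",
    collection_method="Manual entry from in-branch paper applications",
    known_limitations=["Undersamples applicants without access to a branch"],
)
print(loans_2019)
```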
Timnit Gebru, Joy Buolamwini, Brandeis Marshall, Rachel Thomas, Shannon Vallor are big names that come to mind. I think
there are also numerous researchers out there who are probably not flashy names but are contributing to the field every day,
and I hope we can find ways to elevate more of those voices.
What resistance is one likely to encounter when making the case for ethical AI to the business community?
The main issue is of course push back from people who don't see Ethical AI as a problem. To them, bias is subjective and a
difference in outcomes or impact is only reflecting what is true in the world. Folks like this think that AI should only reflect what
already is, not what it could be.
I think the value prop is quite clear—we are helping orgs understand and mitigate risks, or at the very least, be aware of what
impacts they have. Reputation costs are big today, and our customers are keen to avoid any potential fallout of their AI
products. Success of Responsible AI means knowing that our customers are constantly thinking about and improving their
processes, not seeing responsible AI as an afterthought, or something to do once and forget.
How can companies try to incorporate these practices when they exist in a capitalistic system that is focused on profits?
The biggest motivation is that of reputation cost. No company wants to be seen as evil or doing wrong, especially with the way
information and news is instantaneously shared around the world. It would be nice for businesses to have a sense of moral
responsibility, but in a capitalist system it is the stick, not the carrot, that will drive ethical considerations forward. This means
businesses can focus on profits, but know when to draw a line to avoid greater costs to themselves.
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
We build tools that allow anyone with subject matter expertise to be involved in the AI pipeline [and] create more democratic
and collaborative environments.
Merve Hickok
Founder / AIethicist.org

My previous corporate work was focused on HR recruitment technology, processes and diversity. My work allowed me to see how certain processes, mechanisms and technologies can create obstacles for those in disadvantaged groups; and what intentional steps we need to take to prevent that from happening.
Operationalization of AI ethics requires alignment with an organization's culture and all of its mechanisms and methodologies. It needs buy-in from all levels in the organization, and it requires intentional steps to be taken. Therefore, even when a system is in place, it is crucial that organizations monitor and improve both their AI products and their organizational processes and mechanisms.
The biggest challenge is to start the conversation with decision-makers on the importance of AI ethics and why they should
adopt it. Then you can help those involved (whether decision-makers or developers) see that AI ethics is not only the
responsible thing to do, but it is also crucial for the organization's continuity and success. Yes, there will be some additional
resources required until capacity is built inside the organization and until it becomes business as usual. However, the returns on those investments, both ethical and financial, are real.
Ethical development of AI provides your organization [an] advantage and your investments are returned in terms of employee
loyalty and commitment, customer satisfaction and brand loyalty, investor appreciation and additional investments, and less
stress and costs in legal or PR battles.
Too many to count here, as it is spread across advocates, activists, academics, and business people. There are also a lot of unseen people who try to push this inside their organizations. I think the bigger message is 'anyone can contribute to ethical development or use of AI regardless of their education. This is about envisioning a better future and contributing to it with our own knowledge and experiences.'
What resistance is one likely to encounter when making the case for ethical AI to the business community?
At first, businesses think that AI ethics is a 'nice to have' or an 'additional feature,' so if they do not have the resources, they prefer to delay it until a crisis hits. However, developing and using AI ethically makes your product stronger, your organization more resilient and responsible, and your potential market wider. Once AI ethics is operationalized and embedded in an organization, it creates competitive advantage. So there is always resistance at first, until you are able to show the wider picture and the impacts into the future.
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
Lack of diversity in tech has more to do with the organization's culture and mechanisms than with the pipeline. So organizations need to take a bird's-eye view of their processes (recruitment, incentives, performance management, flexible work, to name a few) and understand how they interact with each other and impact the overall culture, diversity and inclusion—and ultimately the products they develop. Until you address these processes, any fix you try to bring in will be short-lived.
After that, depending on the industry and use case of your AI product, you need to ensure that your stakeholder group is wide,
voices and lived experiences are respected, and that everyone can flag concerns about data, context, metrics, model, outcomes
etc. This is not only about creating UI/UX personas and edge cases—although that is definitely a good start.
Organizations need to be ready to pull the plug on a project at any point if they cannot justify the impact on individuals or
society. They also cannot extract knowledge and perspectives from marginalized groups only to then turn around and develop a
product that will exploit those groups.
Meeri Haataja
Embrace transparency and community collaboration as an opportunity to learn together. Invite constructive and critical feedback; it will help you learn quicker.
Cities of Amsterdam and Helsinki, Ada Lovelace Institute, UK government in relation to their Data strategy, IEEE, Mozilla
Foundation, just to name a few.
Any organization trying to incorporate responsible tech practices exists in a broader economics ecosystem that has
fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when
they exist in a capitalistic system that is focused on profits?
We're witnessing a rapid expansion of non-financial ESG criteria that is starting to shape, at large scale, how investors allocate their assets. Because AI ethics is essentially about the social impacts of technologies, the related risks, and how companies are managing them, the rise of ESG will be one of the most important drivers for private companies to take AI ethics seriously.
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
We encourage our customers to always consider and document their approach to ensuring equity and non-discrimination in the system context. This means all systems registered on our platform are expected to describe how these issues have been actively managed.
We also provide means for our customers to collect feedback from their stakeholders, and are working on a concept for
enabling consumer participation throughout the system lifecycle.
Furthermore, in our enablers for AI in CSR reporting, we encourage our customers also to communicate the diversity of their AI
teams, actively set targets, and measure the diversity of their teams over time.
"Many of our customers are lacking clear roles, responsibilities, and processes for AI
governance and ethics."
Sheryl Cababa
VP of Strategy, Substantial

As a result of these areas of focus, I’ve worked on tools that are meant to help technologists consider the potential ramifications of their products, and [I] have worked with organizations such as Microsoft and Omidyar [Network] in this space.
One thing I’ve learned from working with many different organizations on this topic is that you have to find ways to meet them
where they are in terms of organizational culture. If, for example, the word "ethics" is a sticking point, you might want to trojan
horse the concepts under other terms, such as "responsible technology." I’ve also seen effective progress with companies that
have an explicit plan for operationalizing, such as a progression from simply building awareness—using the kinds of tools that I
mentioned earlier—to full process integration and changes in KPIs. It’s one thing to build awareness, but if no one knows, from a
tactical perspective, how to actually integrate it into their day-to-day work, then awareness doesn’t go anywhere.
I actually challenge the idea that there needs to be a business case for ethical AI work. For me, it’s kind of like those who insist
there needs to be a business case for, say, diversity. Even if diversity within our organizations weren’t explicitly beneficial to
business, wouldn’t it still be the right thing to do? We are constantly twisting ourselves in knots to align values to profit, and in
the case of ethical AI, we should still do it, even if it means trading away some profit. I know this isn’t a popular perspective, but
the demand for a business case absolves us from harmful decisions if, God forbid, there isn’t a good business case. Just do the
right thing, people!
I appreciate that work needs to be done both inside of, and external to, technology companies. In terms of internal work, I really
respect the work that the folks at Microsoft are doing in regards to Responsible AI. For external organizations, I look to activists
such as the Algorithmic Justice League to help pressure companies to be more thoughtful, careful, and just in their use of AI.
What resistance is one likely to encounter when making the case for ethical AI to the business community?
The biggest barrier is [that] being thoughtful about this work requires time, and that time slows down production. When
integrating ethical AI means not aligning with, say, OKRs within your organization, then you aren’t actually going to be able to
make much of a difference; your hands are tied by metrics that result in harm. Metrics have to change, perceptions of success at
scale need to change. This organizational change requires time and energy, and businesses of course, are reluctant to dedicate
time or space to efforts that will fly in the face of short-term growth or profit. The key is to argue for long-term health as the
yardstick, for your users, for society.
How can companies try to incorporate AI ethics practices when they exist in a capitalistic system that is focused on profits?
I’ve seen organizations where individual and team goals are the equivalent of “more engagement, by more users, for longer
amounts of time, and more frequently.” These types of engagement-oriented and active user metrics result in algorithmic
systems that are agnostic to the potential harm that driving traffic and engagement can cause. Eyeballs equal money. To
reiterate what I said earlier, you likely need to challenge your organization: what if doing the right thing means fewer eyeballs,
but longer-term health for your users and society? We need to prioritize, and what that often means as a result, is giving up
short-term profit. As a systems thinker, my philosophy is that we are where we are *because of capitalism*. So we need to
challenge the system.
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
It’s been disheartening recently to see instances such as Google firing Timnit Gebru, AI ethics pioneer and founder of Black in
AI. It’s an example of how vocal representation is often silenced, and diversity in this field is often deprioritized. As an
underrepresented woman of color in tech, I’ve seen it all when it comes to the biases of an industry that is so entrenched in a
white male-dominated culture. My goal is to be a part of increasing representation as much as I can, and also working to help my
dominant-culture peers use tools and frameworks to help them interrogate their biases. As a design researcher, I also focus on
integrating equity-centered practices in my work so that we ensure that we engage in participatory design with those who are
potentially most affected by biases in our technology. These methods, of course, are a stopgap. What will truly make a
difference is true diversity and representation more broadly within our industry.
John C. Havens
Executive Director, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

Ethically Aligned Design, a 300-page treatise on responsible AI created by over 700 global experts over the course of three years (links: https://standards.ieee.org/industry-connections/ec/autonomous-systems.html and https://ethicsinaction.ieee.org).

Tell me about your organization’s ambition for Ethical AI? Why is it a priority?
Ignorance in thinking that "ethics" means "morality." Also, existing business models prioritizing exponential growth or exponential profits versus factoring in environmental and human wellbeing at the same level as financial concerns. Note that social innovation has for years been demonstrating that long-term sustainability and business growth are best served by avoiding short-termism. However, the bigger question here is—what are the ultimate societal success metrics for the AI we build? "AI for Good," where this is defined by reaching the UN SDGs, is a fantastic step in this direction. But the point is we value what we measure and what we prioritize. Where people, planet, and profit aren't prioritized at the same level, people and planet are often considered more of an afterthought with CSR and ESG reporting, and harm can result because products are already designed and in the world.
Please see our paper, Ethically Aligned Design for Business as mentioned above. However, for a more general sense of how
Ethically Aligned Design can be integrated overall, please see the latest version of Ethically Aligned Design at:
ethicsinaction.ieee.org.
Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?
It's a multi-pronged strategy that's taken years to date. First we created Ethically Aligned Design and launched a series of
standards working groups. Then multiple other areas within IEEE began focusing on issues of ethics. Then we were invited to
multiple policy discussions over the past five years. By being a part of these conversations and creating new committees focused
on areas like business, our continued publications and new standards working groups provide ongoing best practices for the
academic, business, and policy communities. Our work is also open to all and free to join.
In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection,
curation, etc in your org ecosystem?
Yes. In Ethically Aligned Design we have a chapter focused on Personal Data Agency. The basic concept here is that where all people don't have access to and portability of their data, they will always rely solely on the hope that government or businesses will protect their personal information. This is not only ill-advised in terms of hacking and cyber issues, but denies the fundamental need for technological and policy channels that allow for genuine peer-to-peer exchange of one's data and choices, at parity with the existing systems that today can be extractive in nature and largely focused on advertising and economic priorities. Data sovereignty, however, especially in the coming immersive or spatial web, will be essential both for all humans to prove their identity and to have trusted means to exchange data with businesses, governments, and each other. Not having data sovereignty means not having true agency over your identity, emotions, or publicly declared choices, which includes voting in a democratically oriented society.
So many people!! Kathy Baxter at Salesforce, Adam Cutler and Milena Pribić at IBM, Rumman Chowdhury, Olivia Gambelin, Jonathan Stray, Kay Firth-Butterfield, Paola Ricaurte, Data & Society, and many more. It's quite hard to not list about six hundred names here.
What resistance is one likely to encounter when making the case for ethical AI to the business community?
Fighting the status quo, single bottom line, and quarterly imperatives. Communicating that "AI Ethics" can also be called "Responsible AI" so you don't freak people out because they think you're talking about morality. Letting them know "AI Ethics" is a cross-organizational need and focus, akin to Agile for Marketing practices, which many assume are only for marketing types but need legal, R&D, and stakeholders from all departments at some point to make a good scrum.
How would ethical AI improve your value prop to your customers? What are your top use cases for application of ethical AI?
How do you measure the success of your ethical AI initiative?
Here again Kathy Baxter at Salesforce has done groundbreaking work. In terms of measuring success, beyond traditional KPIs of sales lift and increased positive sentiment in PR/social channels, I'd say increased trust is the biggest positive most are hoping for, once they realize trust has to be a two-way street.
How can companies try to incorporate AI ethics practices when they exist in a capitalistic system that is focused on profits?
This is where wellbeing metrics can and should be applied. IEEE has created their IEEE 7010-2020 standard to provide specific
assistance in this regard. While the word "wellbeing" can be confusing, the logic is simply that what we measure is what we
value. And what we value is what we count. So if the floor or base of all AI uses GDP or financial metrics in isolation, ultimately
our success for AI uses these metrics as validation. But as countries like New Zealand and Wales are showing, triple bottom line
metrics (people, planet and profit) are essential for all AI we build if we want these amazing technologies to be truly transparent,
accountable, and fair. It's critical to note that focusing on profits is not wrong or unethical—it's simply that when the only metrics used to gauge success are financial and prioritized first, CSR or ESG reporting (or legislation overall) may only deal with an AI product or service after it's created or in use.
Responsible AI design, or the creation of a "societal impact assessment" in the spirit of 7010-2020, is not meant to dictate a single "one indicator to rule them all," as GDP does now. Rather, the idea is for AI designers to scenario-plan around how their product or service would be different if honoring people, planet, and profit in unison were recognized as success. This provides a new type of R&D that is also reflected in sustainability practices overall and in legal structures like B-Corp models.
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
The growing importance of the Diversity, Equity and Inclusion (DEI) movement cannot be highlighted enough. Along with creating a workforce that is truly representative of the population that an AI product or service will serve, DEI practices mirror and complement Responsible AI design methodologies as well.
Please note: The responses in this interview reflect the views of John C. Havens and do not necessarily reflect the overall
views of IEEE.
"[T]he bigger question here is—what are the ultimate societal success metrics for the
AI we build?"
Kay Firth-Butterfield
Head of AI and ML at the World Economic Forum

[...] and governance for the design, development and use of AI since 2017 as part of the WEF Centre for the Fourth Industrial Revolution. They are based in San Francisco, Japan, India, Colombia, Brazil, Saudi, Israel, UAE, Turkey, Norway, South Africa, and Rwanda.

What are the challenges you and your organization face with operationalizing AI ethics?
Currently, many algorithms work poorly or erroneously because ethical issues such as bias were not addressed, so businesses are throwing away R&D time and money by not addressing ethical AI. Also, getting it wrong can erode customer and employee confidence.
Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?
WEF, IEEE, GPAI, Salesforce, Microsoft, DefinedCrowd, Cantellus, Office of AI (UK), EC, Council of Europe, Parity, UNESCO.
What resistance is one likely to encounter when making the case for ethical AI to the business community?
[Resistance] is not extensive once they [businesses] understand what we mean; we have over 85 businesses working with us on
operationalizing ethical AI at any one time.
"Biggest challenge: getting companies to understand that they will all be AI companies
and so whether they are health companies or manufacturing companies or mining companies
or … they need to use AI ethically and it DOES apply to them."
Aparna Ashok
It starts with emphasizing technologists' humanity and helping them understand how automated systems and their challenges
affect individuals at scale. After that they need to be armed with tools and templates that fit in with their existing workplace
practices. Mindset change is required to perceive ethics not as a penalty-based activity, but one that leads to beneficial
business.
And for that I think that what you are doing—collecting business cases for AI ethics—is an extremely smart move.
Mindset—"it's seen as a 'good to have,'" "a problem for legal," "we'll do it when the rules enforce it."
Knowledge—Technologists either don't understand on an everyday level why it is important to practice ethics when building technology (partly because so many business practices associated with unicorn companies are seen as industry benchmarks), or they want to work differently and don't know HOW.
Ethical AI is a powerful tool that enables business to foresee potential opportunities leading to early market share (untapped
markets, unsolved problems, overlooked lucrative use cases) and potential risks leading to legal penalties.
What resistance is one likely to encounter when making the case for ethical AI to the business community?
"It is a great ideology to aspire to, but hard to practice with the challenges of a tech business."
"While regulation is necessary it is not able to pinpoint nuances that happen behind closed
doors. The people building such systems are in a unique position of power in terms of knowledge
about the nuances of the product and their ability to course-correct."
Olivia Gambelin
Founder / Ethical Intelligence

[One of the] challenges we come across is that the majority of frameworks and policies in existence have been developed and tailored to large-scale business, whereas they are not always applicable or even feasible for the smaller-scale business already strapped for resources.
Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?
We are about to launch a cyclical training resource that builds practical knowledge in AI Ethics for technologists and founders
that targets this gap (link: https://www.ethicalintelligence.co/equation).
In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection,
curation, etc. in your org ecosystem?
We are European-based, so GDPR is the main player. But also when we work with clients, we have to analyze the definition of
privacy in whichever culture the client is situated in as well as the client's user base.
What resistance is one likely to encounter when making the case for ethical AI to the business community?
The main resistance one encounters is the claim that ethics at best doesn't have any impact on profit and at worst slows down
production to the point that it costs the company. When talking specifically with tech startups, the most common phrase we
hear is "ethics hinders innovation."
Any organization trying to incorporate responsible tech practices exists in a broader economic ecosystem that has
fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when
they exist in a capitalistic system that is focused on profits?
Triple bottom line thinking—it's still ok to pursue profit, but this must be balanced with equal weight and consideration to
people and planet.
"Equipping design and development teams with the skills to identify and mitigate ethical
challenges builds confidence that the final product is truly being created for good."
Steven Tiell

What are the challenges that you and your organization face with operationalizing AI ethics?
Accenture has a half-million employees around the world, and our business is primarily focused on helping other businesses run,
so many of our challenges in this space are less transferrable. The one consistent thread is that an organization’s culture can be
an enabler or an impediment. Accenture’s culture is rich and diverse and open to integrating new practices, especially when
those practices encourage and celebrate diversity and inclusion. In our client work, operationalizing AI ethics often means
focusing on an organization’s values and respecting what they care about and want to protect and promote through tweaks to
existing processes and practices. Sometimes this can be as simple as describing how organizational values show up in products
and services.
To implement the values, we look to principles which describe what is required (ethically and legally) to live the values. It is then
the job of governance to assess whether the principles are satisfied in a particular case. Ethics is the work we perform to satisfy
the values, in accordance with the principles, and in support of governance. We apply these operationalizing techniques with
numerous interventions and tools, primarily with design, engineering, and decision-making processes. It often means new
training programs, changing variable compensation incentives, and adding governance. In many cases, this means culture
change needs to be part of a high-quality AI ethics program.
Oh my goodness, there are so many. I think key among them is to find a C-level champion. To operationalize AI ethics
successfully, the C-suite must be involved, be an advocate, and help to break down barriers throughout the organization.
AI ethics is an emerging field. Even those tasked with owning AI ethics probably don’t have a background in both computer and
social sciences. Therefore, many people in these roles don't know what they don't know. Getting up-to-speed can be
considerable friction to overcome. Hiring a team has the same challenges. These teams then perform tasks that are likely new to
the organization and may be met with resistance from a variety of stakeholders. Without clear support from the C-suite, mid-
level leaders of these programs can struggle to gain a foothold and relevance, let alone be able to execute on the organizational
change that will eventually be required to have a robust program in place.
It's simple and comes down to sustainability—can your organization continue doing what it's doing in perpetuity? In many cases,
we can see that businesses that exist solely to collect data and sell it to others probably have a limited time horizon on that
being a viable business model (largely due to increasing regulations around privacy). But any organization that uses data to
make business decisions or influence the lives of others must pay attention to the sustainability of the way they collect, manage,
use, and share data. A simple mistake or oversight at any point along the data supply chain can have out-sized impacts on a
business and represent risks that could be materially detrimental to a business. Robust AI ethics helps organizations avoid
unnecessary risk.
Since ethical AI requires a culture change and re-education in a range of areas, how are you ensuring your employees have
the knowledge and skills to design and build ethical AI solutions?
Every organization is different. Some start with training programs for specific groups of workers. Some start with governance.
Others start with policy changes, or engineering and agile process tweaks. The ones that do it well take on a variety of
improvements in parallel, start small, iterate often, and gradually roll out to more parts of the organization.
There are so many. I learn of new ones by the week. My regular sources for news, information, and leadership include Data &
Society, AI Now Institute, Markkula Center for Applied Ethics at Santa Clara University, a couple groups at Stanford, Ron
Sandler and his Ethics Institute at Northeastern University, David Danks at Carnegie Mellon University, and the Atlantic [...]
What resistance is one likely to encounter when making the case for ethical AI to the business community?
In my experience, leaders who need to be convinced to do something will always be a point of friction and seldom lead to
generative uptake of Responsible Innovation. Often, their thinking is that "it's just another hurdle that will slow my team down,
hurt revenue, and doesn't contribute to shareholder value." In fact, if the C-suite at an organization rejects the Business
Roundtable's call for focusing on stakeholder value instead of shareholder value, that will be a difficult, if not impossible,
organization to make the case for this work.
How would ethical AI improve your value prop to your customers? What are your top use cases for application of ethical AI?
How do you measure the success of your ethical AI initiative?
In many cases, the value prop is about trust—a focus on Responsible Innovation helps to establish, build, maintain, and/or repair
trust with stakeholders and builds gravity toward a brand. Often, simply highlighting data-centric risks that could be
detrimental to the brand is enough to gain a second conversation. Success is measured in vastly different ways across industries
and organizations and can include the absence of negative outcomes.
How can companies try to incorporate these practices (Responsible AI) when they exist in a capitalistic system that is
focused on profits?
Time. In an increasingly skeptical world, consumers (and businesses) are seeking signals of trust. Trust builds gravitational pull
toward a brand and makes people open to new things (think about the blockbuster lines any time Apple releases a new
product). It is these brands that will thrive when the "kids" using YouTube as their primary search engine today (because it's
only real if they can see it happen) become the engine of the economy tomorrow.
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
One of the reasons I’m at Accenture is because of how much Accenture values diversity. Just as those who practice portfolio
theory in financial markets experience outsized returns more consistently than those who don’t, the same is true of worker
diversity within organizations. Accenture “gets it” and acts accordingly. More broadly, these initiatives, at any organization, take
time. A decision to hire a diverse workforce today may not yield a more diverse workforce for years to come. I like to challenge
organizations to name, define, and rank the normative values of justice that drive their diversity efforts. In doing so, some
organizations discover that what they need to optimize for is lived experiences, which might be different from skin tone,
gender, and sexual identity. Perhaps this varies by geography within the same organization. I like to see hiring practices that hold off on closing a req until there’s a sufficiently diverse pool of applicants, and then scrub name, picture, and “legacy” (e.g., university name) from resumes before sharing with hiring managers.
While hiring is part of the long-term solution, it doesn’t directly solve the product development issues in front of us today. One
of the papers we published in 2016 had to do with questions to ask along the data supply chain—we saw a similar framework
called EthicalOS a couple years later. While these frameworks can help, without an applied ethicist leading the conversation,
they can often turn into circular discussions. Evolving from this, we worked with a handful of organizations to develop an
“ethical spectrums” framework that empowers decision-makers to have "directed agency" by establishing a set of spectrums
from which to evaluate business, product, and engineering decisions—are you closer to the status quo end of a spectrum or the
ethically higher bar end? We also worked with academia to build an “ethics triage” approach to understand where digital risks
were entering systems and to act at those points. Again, we focus on maximizing agency and providing avenues for recourse.
These and other product development practices can help to prevent products and services from being developed in a way that’s
problematic in the first place and offer recourse when they are.
Connect with Steven @stiell
LEARNING FROM THE COMMUNITY

Pavani Reddy
Managing Director | Author of Ethical Product Development

Tell us about your role:

I am a mid-career thinker-doer and a self-appointed “ethics owner” in my full-time role at EAB. I serve as a product manager and user experience designer of data & analytics solutions for higher education. My personal mission behind my work at EAB is to help higher education institutions operate more effectively to produce more upward mobility for people. I approach my work with an interdisciplinary mindset, having trained in economics, law, business, and the lessons that come from nearly twenty years working with teams to produce new technologies responsibly.

In the last several years, I personally noticed a gap in practical guidance for technologists on how to improve the ethical trajectory of the products they produce. Outside of work, I researched emerging best practices from many companies and thought leaders. My goal was to write down for myself and my peers (within and beyond my current environment) a collection of practical techniques that could integrate ethical decision-making into the product development process. [T]hrough Sense & Respond Press, I published my findings into a playbook: Ethical Product Development: Practical Techniques to Apply Across the Product Development Lifecycle (2020). Since then, I continue to consult in this field of ethical tech—and ethical AI more specifically. For example, I am working with a think tank on defining common AI principles by sector—including AI in education and workforce development. Our project objective is to help ethics owners and policymakers to define these well-known general principles more granularly by sector and use case:

1. Privacy
2. Accountability
3. Safety & Security
4. Transparency & Explainability
5. Fairness & Non-discrimination
6. Human Control of Technology
7. Professional Responsibility
8. Promotion of Human Values

What recommendations do you have for operationalizing AI ethics?

The advice that I have for myself is to pursue the challenging work of codifying what we mean within the sector, by use case, acknowledging stakeholder groups, as well as the operative norms and laws.
Data & Society, as they appear to manifest their interdisciplinary and inclusive ethos in a courageous and experimental way; similarly, I appreciate the way Renée Cummings approaches her audience, for the same reason, every time I hear her! There are numerous emerging leaders in the area; it is very exciting.
What resistance is one likely to encounter when making the case for ethical AI to the business community?
The ROI on this is more open-ended than other investments, so many businesses feel that they cannot afford deep, formal, or dedicated inquiry into ethical AI. I have noticed that well-resourced, large players like Google are situating inquiry into this
area in the form of adding personnel to their "public policy" groups to influence policy. As a secondary part of these roles, these
personnel would work with internal product teams. I am curious what it would be like to shift the emphasis by investing in
Ethical AI by situating the role within the product team so that it permeates through their methodology and approach—and is
not an afterthought. My hypothesis is that more practical regulatory policy can be developed from this vantage point.
How can companies try to incorporate AI ethics practices when they exist in a capitalistic system that is focused on profits?
There are three motivators for adherence to principles of ethical AI: (1) completely voluntary adherence, especially if the principles are self-derived and expressed to the customer/user community; (2) market pressure from the customer/user community that pushes toward consistent adherence to priority principles; (3) government regulation with a clear enforcement regime. Since the U.S. system does not have strong regulation, companies can serve their customers by creating principles to uphold and educating customers and the broader public about not only the benefits, but the possible harms in using AI technologies, thereby creating a healthy and transparent public dialogue that creates pressure to adhere to priority principles.
"As a technology worker very eager to produce ethical AI, I appreciate the key themes
that we have aligned on in the sector."
Ashley Casovan

[...] health, education, economics, and social services. Having always worked in digital policy, specifically related to enterprise data and open-source architecture, tackling responsible AI policy was a natural progression.
The past few years have been marked by significant research on all aspects of responsible and ethical AI—everything from identifying principles, which act as collective targets for us to strive towards, to measurable techniques for testing bias and fairness in applications. While this body of research continues to grow, it requires coordination and adoption in order to be tested and ultimately realized.
In addition to what I've already mentioned, greater awareness, including education and training, is required. This is needed not only at a general level for all members of the public; domain-specific training is also required. AI systems are becoming an increasingly important part of every industry. Knowing how these systems will impact these industries is incredibly important for the practitioners implicated.
We believe that responsible and ethical AI is good for business. In thinking through the various challenges that AI can pose, often unintentionally, it's important that there are oversight practices in place that both mitigate harm to people and mitigate risk to those building and implementing these systems. Oversight mechanisms such as establishing standards and certifications that AI developers can follow will help.
In large organizations, data is generated from multiple interactions. Are there any guiding principles for data collection,
curation, etc. in your org ecosystem?
We have created a unified framework of responsible AI principles based on the most cited frameworks including the Montreal
Declaration, IEEE's Ethically Aligned Design, the Asilomar AI Principles, etc. Our framework includes: Accountability, Data
Quality, Rights, Bias and Fairness, Explainability and Interpretability, and Robustness.
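One lightweight way a team could act on such a framework (a hypothetical encoding, not AI Global's actual schema) is to keep it as a machine-readable checklist that every system must answer before release:

```python
# Hypothetical, machine-readable encoding of the framework above; the principle
# names come from the interview, but the checklist structure is an assumption.
PRINCIPLES = [
    "Accountability",
    "Data Quality",
    "Rights",
    "Bias and Fairness",
    "Explainability and Interpretability",
    "Robustness",
]

def outstanding(assessment):
    """Return the principles a system has not yet evidenced."""
    return [p for p in PRINCIPLES if not assessment.get(p)]

system_assessment = {
    "Accountability": "Named model owner and escalation path documented",
    "Bias and Fairness": "Outcomes tested across protected groups",
}
print("Still to evidence before release:", outstanding(system_assessment))
```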
So many. In addition to our collaborators at the Schwartz Reisman Institute and the World Economic Forum, organizations like
the Data Nutrition Project, Equal AI, Algora Labs, CIFAR, [and the] Ada Lovelace Institute are doing great things. Companies like
Cognitive Scale, AltaML, Arthur.AI, [and] BEACON are conducting important research and directly working with companies to
refine it.
What resistance is one likely to encounter when making the case for ethical AI to the business community?
We have heard concerns that there will be substantial costs to implementing responsible and ethical practices. This is a key
concern for sure. While there could be upfront costs to changing governance practices, in the end, it will not only be the right
thing to do to protect people, but it could prevent financial and reputational harm.
How can companies try to incorporate AI ethics practices when they exist in a capitalistic system that is focused on profits?
Trade-offs will be important; however, there is no reason why these systems can't be ethical and also make a profit.
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
Ensuring that these voices are at the table when designing and building AI systems is incredibly important. It will ensure that
research and techniques are viewed through necessary lenses.
Will Griffin
Chief Ethics Officer at Hypergiant

What are the challenges in general with operationalizing AI ethics?

Get CEO and Board level buy-in.

What is your business case for Ethical AI?

The goal is to embed ethical [...]
How would ethical AI improve your value prop to your customers? What are your top use cases for application of ethical AI?
How do you measure the success of your ethical AI initiative?
Ethical AI guided by ethical reasoning should increase not decrease the potential solutions to any given technical/business
problem. It can be measured by the number of potential solutions created during the design and development process. More
potential solutions (and beneficiaries) means the framework was used properly.
Any organization trying to incorporate responsible tech practices exists in a broader economics ecosystem that has
fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when
they exist in a capitalistic system that is focused on profits?
Companies must think about a broader array of stakeholders who will be impacted by the AI solutions they design and develop. The more stakeholders who benefit, the more robust the solution will be.
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
Once you envision all of society as a potential stakeholder in any given use case, it should be easy to articulate the impact on marginalized communities. If you cannot delineate the impact on a given community, that means you have not considered them in the design and development process. Start over and use ethical reasoning as your guide.
"Ethical reasoning is another tool that should unlock the creativity and broader range of
solutions to tech problems."
Liz O'Sullivan
VP of Responsible AI at Arthur, and Technology Director of STOP (The Surveillance Technology Oversight Project)

[...] technology usable, in a form that sufficiently “worked." It wasn’t until I took a role at a computer vision company as head of annotations that I became acutely aware of the ways that labeling and cultural biases can show up and become encoded into enterprise systems. Imagery is such a visual medium that bias becomes something tangible, that you can explore on your own. Especially combined with the global ecosystem of labelers around the world, it became very clear to me quickly how cultural differences can yield harmful results.
Operationalizing AI ethics first begins with a detailed plan that must originate from key stakeholders at the C-level, workers
including data science and annotations professionals, and input from the communities you serve. Careful, critical thinking is a
vital part of the planning process that no toolkit can ever replace. I highly recommend that organizations hire from the
humanities to provide at least one role that’s fully focused on the ethical outputs of the company, lest it become “side work” that
falls to the bottom of the list. Every part of the AI pipeline deserves scrutiny, from the data collection, to annotations, along the
algorithm selection and documentation. But the work of “checking” for ethical violations is an ongoing one, and policies must be
set to ensure that algorithms continue to fit their intended parameters when integrated into the real world of production
environments.
The business case for responsible AI has never been clearer than it is today, following enhanced legal scrutiny from a number of
agencies and lawmakers who have made it clear that discrimination is illegal, even when accidental. There are multiple pending
court cases that will seek to prosecute companies for the outputs of their algorithms, most notably with Apple Card and United
Healthcare’s Optum. But even without the threat of costly legal action and, ultimately, fines, the damage to a company’s reputation cannot be overstated, as recent events show, including the failure of Twitter’s image cropper to recognize Black faces.
The last thing brands want to do is to alienate their users by furthering the disparities and inequities that plague our society
today. Moreover, one small part of operationalizing responsible AI is to simply ensure that your models behave the way you
think they will on real world data, on an ongoing basis. By monitoring for anomalies and concept drift, companies become better
able to catch model issues before they become big problems, allowing them to be re-tuned and re-fit for better accuracy. In the
financial industry especially, higher accuracy can mean better efficiencies, more profit, and lower cost.
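As one illustration of the ongoing monitoring described above (a sketch under assumed thresholds, not Arthur's actual API), a team might compare recent production values of a model input or score against the training baseline and alert when the distributions diverge; a shift in input data like this is a common proxy signal for concept drift.

```python
# Sketch of ongoing model monitoring: flag data drift between the training
# baseline and recent production inputs. Data and thresholds are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
train_scores = rng.normal(loc=0.0, scale=1.0, size=5000)  # training baseline
live_scores = rng.normal(loc=0.4, scale=1.0, size=1000)   # recent production window

stat, p_value = ks_2samp(train_scores, live_scores)       # two-sample KS test
if p_value < 0.01:  # assumed alerting threshold
    print(f"Drift suspected (KS statistic={stat:.3f}, p={p_value:.1e}); "
          "re-tune or re-fit the model before accuracy degrades.")
```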
I’m particularly interested in the people working on AI’s criminal justice implications, especially when these intersect with race.
To that end, I’m a big fan of the work of the ACLU, Kristian Lum, Tawana Petty, Julia Angwin’s work on the COMPAS algorithm
(along with her work in general), Ruha Benjamin, The Algorithmic Justice League of Joy Buolamwini, Timnit Gebru, and Deb Raji
(of course), and Mutale Nkonde for her work at AI For the People.
"Operationalizing AI ethics first begins with a detailed plan that must originate from
key stakeholders at the C-level, workers including data science and annotations professionals,
and input from the communities you serve."
Shalini Kantayya
Director of Coded Bias

[...] about what I was working on. I think the working title at the time was Racist Robots, and it was really hard to explain. As someone who doesn't have advanced degrees in data science, I had this fear of improperly explaining ideas like algorithms or artificial intelligence or machine learning. But I think what enabled me to get over my fear was just asking a lot of questions. And I came to see that artificial intelligence is going to transform every sector of society and touch every civil right we enjoy.
So much of the film is about the need for collective action to improve how we develop and deploy artificial intelligence
technologies. What change would you like to see moving forward?
I think we are in a moment in history where we're all being asked to lead from a deeper place in our humanity. I think a lot of
times we talk about tech as if it's like magic or God. And when you pull back the curtain, what I realized is that technology is just
a reflection of ourselves. Because these technologies impact all of us, we all should have some voice in how they get deployed.
I’d like to see legislation that protects data rights as fundamental to civil and human rights. We should move toward technology that honors the inherent value of every person.
Connect with Shalini at @shalinikantayya and learn more about Coded Bias at CodedBias.com
"There’s a real danger that without proper training on data evaluation and spotting the
potential for bias in data, vulnerable groups in society could be harmed or have their rights
impinged. AI also has intersectional implications on criminal and racial justice, immigration,
healthcare, gender equity, and current social movements."
Caryn Lusinchi

Tell us about your role:

I’m the CEO & Founder of biasinai.com, the smarter way to source AI. We’re building an AI directory across 40+ industries that aggregates companies and consultants, who help humans and machines work better together, to reduce bias in AI systems (specifically algorithmic, gender, cultural, racial and data-driven biases).

What’s been your path into the field?

I started off as a corporate securities fraud investigator before veering into global go-to-market strategy and marketing roles at startups, interactive agencies, consulting firms, Google, and WhatsApp. In the past three decades, there’s been one career constant: I’ve witnessed too many good people doing bad things in the pursuit of short-term “business” profits and the magical belief that every social problem has a technological solution. I got into ethical AI out of frustration of working in tech cultures where there’s a diminished sense of [...]

What are the challenges you face with operationalizing AI ethics?

Most enterprises use the RACI model (responsible, accountable, consulted, informed) to inform project roles and responsibilities. There’s no shortage of well-intentioned tech employees who wish to consult on or stay informed. However, the biggest challenge with operationalizing AI ethics is within large cross-functional group project settings, where there’s considerable ambiguity around which individual or department is held responsible or takes accountability when there’s a negative societal impact, post launch. Traditional PR tactics of issuing a delay -> deflect -> dulcify -> deal series of crisis communications isn’t the right answer—it’s a knee-jerk gut reaction. Proactively solve the AI ethics ownership question first.
Typically, operationalizing AI ethics defaults to data science teams where numerous AI projects are in R&D, beta, or pilot phase.
Or it may start with C-Suite/BOD/HR institutionalizing AI ethics in a mission/vision/values statement the company can peacock
display. Beyond the obvious bottom-up and top-down participation, there’s plenty of opportunity for "middle sandwich-layer"
departments to start operationalizing AI ethics. User researchers and experience leads can set standards for D&I representation and marginalized-group participation in all unmoderated/moderated studies. Project managers can build user stories
to ensure values are weighed in the design process and user experiences reflect diverse contextual use cases. QA and trust and
safety teams can create AI bias bounties. Everyone can participate; there's room at the table for more than computer science
and philosophy PhDs.
What resistance is one likely to encounter when making the case for ethical AI to the business community?
1. Long-term benefits of ethical AI are de-prioritized for the short-term pursuit of profit to satisfy shareholders' ROI demands; as someone said, "ethics never makes you money but can save you a lot of money."
2. The overconfidence bias and technological determinism of relatively young, inexperienced technology teams negate the need for ethical AI; this stems from the stubborn tech-culture conviction that anything and everything can be solved internally, so there's no need to consider engaging an independent, impartial third party (auditor or tool) to assess the true societal impact of a team's AI/ML project inputs and outputs.
Any organization trying to incorporate responsible tech practices exists in a broader economics ecosystem that has
fundamentally motivated how businesses and societies operate. How can companies try to incorporate these practices when
they exist in a capitalistic system that is focused on profits?
Learn lessons and case study successes from Certified B Corporations; Certified B Corporations are a new kind of business that
balances purpose and profit.
The board and/or C-Suite can develop KEIs (key ethics indicators); they can co-exist along quarterly and annual KPIs.
Reward employees (performance, bonus, and/or equity awards) for identifying and/or cleansing dirty datasets (intentional or unintentional), for hacking AI systems to expose vulnerabilities, or for graveyard-ing projects prior to launch that would result in unfair outcomes.
The lack of diversity in the tech field means that products are being designed and decisions are being made which impact
marginalized groups from a perspective that isn’t very inclusive of the viewpoints of these communities. How does your
organization factor in these possible risks/liabilities?
AI racial bias will always be an issue as long as diversity, equity, and inclusion do not translate into actual AI/ML candidate hiring and team promotion practices within the tech industry. We strive to highlight companies offering products, services, datasets and research that are developed/designed to be representative of all ethnic groups. Additionally, our directory offers a diversity of tools that help ensure machine learning algorithms are tested for fair outcomes prior to launch.
Andrew Dillon
V.M. Daniel Professor of Information, University of Texas

Tell us about your role:

[I am a] researcher and educator in human-centered information design.

What’s been your path into the field?

As a psychologist, I've always been interested in how we can leverage information technologies for human benefit, to augment our capabilities and create a more inclusive world.

What are the challenges you and your organization face with operationalizing AI ethics?

In a rapidly evolving domain, people are either focused on the technology of AI without appreciating its human impact, or they are concerned with ethical issues but lack an understanding of the technology. We need to bridge these groups to create meaningful discussions.

What recommendations do you have for operationalizing AI ethics?

We must tackle design education, [and this] cannot be done through a single course in ethics that a student takes as part of their curriculum in some bolt-on fashion typical of MBA programs. It has to be woven into the complete coursework in a program so that design is recognized and understood as enacting choices over how people live and work. Future designers cannot be allowed to claim ignorance of ethics, and professional bodies must hold their future members to account. We may not have an agreed set of ethics yet, but there are general principles of ethical, human-centered design that we can agree upon while allowing for continued attention on emerging challenges. Let's start there.

If we can demonstrate that a better user experience has long-term benefits for companies and consumers, the case will make itself. Of course, businesses exist to make a profit, but few would claim it is profit no matter what. It is possible to motivate profits and human well-being in a political environment that balances individual rights, ability to profit, and collective well-being. This is a longer conversation, with legal and ethical implications for all of us, and in an information-mediated world, businesses have to engage in this conversation rather than ignore it.