
Getting to know—and manage—your biggest AI risks

A systematic approach to identifying and prioritizing AI risks can help organizations effectively target mitigation efforts.

by Kevin Buehler, Rachel Dooley, Liz Grennan, and Alex Singla


May 2021
Many organizations are generating significant value with artificial intelligence (AI) and recognize that this technology will shape the future. At the same time, organizations are discovering that AI could expose them to a fast-changing universe of risks and ethical pitfalls that regulators signal they’ll be watching for—and potentially penalizing. Recently, the European Union proposed a set of AI regulations that, if violated, could result in material fines, and the US Federal Trade Commission (FTC) put out notice that it could hold organizations accountable for proliferating bias or inequities through AI.

Just as AI deployment will be core to organizations’ future success, leading organizations will be those that actively identify and manage the associated risks. In our latest AI survey, respondents at organizations getting the most value from AI were more likely than others to recognize and mitigate the risks posed by the technology.¹

The prospect of protecting against a wide and growing range of AI risks might seem overwhelming, but neither avoiding AI nor turning a blind eye to the risks is a viable option in today’s competitive and increasingly digitized business environment. So where should companies start?

First, organizations must put business-minded legal and risk-management teams alongside the data-science team at the center of the AI development process. Waiting until after the development of AI models to determine where and how to mitigate risks is too inefficient and time consuming in a world of rapid AI deployments. Instead, risk analysis should be part of the initial AI model design, including the data collection and governance processes. Involving legal, risk, and technology professionals from the start enables them to function as a “tech trust team” that ensures the models conform to social norms and legal requirements while still delivering maximum business value.

Second, because there is no cure-all for the broad spectrum of AI risks, organizations must apply an informed risk-prioritization plan as the initial step in an effective, dynamically updated AI risk-management approach anchored in both legal guidance and technical best practices. As outlined in this article, such a prioritization plan entails creating a catalog of your organization’s specific AI risks to define the harm you seek to avoid, and then following a clear methodology to evaluate and prioritize those risks for mitigation.

Identifying AI risks
To create a catalog of specific AI risks, the tech trust team clearly delineates each negative event that could result from a particular AI deployment, so the team can then detail how each of those risks will be mitigated in accordance with the appropriate standards. A helpful way to think through the potential risks is to use a six-by-six framework, mapping risk categories against possible business contexts (exhibit).

¹ “The state of AI in 2020,” November 2020, McKinsey.com.



Exhibit
A systematic approach to identifying AI risks examines each category of risk in each business context.

Risk categories: Privacy; Security; Fairness; Transparency and explainability; Safety and performance; Third-party risks.

Business contexts: Data; Model selection and training; Deployment and infrastructure; Contracts and insurance; Legal and regulatory; Organization and culture.

We recommend that this process consider at least six overarching types of AI risk:

1. Privacy. Data is the lifeblood of any AI model. Privacy laws around the world mandate how companies may (and may not) use data, while consumer expectations set normative standards. Running afoul of these laws and norms can result in significant liability, as well as harm to consumers. Violating consumer trust, even if the data use was technically lawful, can also lead to reputation risk and a decrease in customer loyalty.

2. Security. New AI models have complex, evolving vulnerabilities that create both novel and familiar risks. Vulnerabilities such as model extraction and data poisoning (in which “bad” data are introduced into the training set, affecting the model’s output) can pose new challenges to long-standing security approaches. In many cases, existing legal frameworks mandate minimum security standards to meet.

3. Fairness. It can be easy to inadvertently encode bias in AI models or introduce bias lurking in the data feeding into the model. Bias that potentially or actually harms particular classes and groups can expose the company to fairness risks and liabilities.

4. Transparency and explainability. A lack of transparency around how a model was developed (such as how data sets feeding into a model were combined) or the inability to explain how a model arrived at a particular result can lead to issues, not the least of which is potentially running afoul of legal mandates. For example, if a consumer initiates an inquiry into how his or her data were used, the organization using the data will need to know into which models the data were fed.

5. Safety and performance. AI applications, if not implemented and tested properly, can suffer performance issues that breach contractual guarantees and, in extreme cases, pose threats to personal safety. Suppose a model is used to ensure timely updates of machinery in manufacturing or mining; a failure of this model could constitute negligence under a contract and/or lead to employee harm.

6. Third-party risks. The process of building an AI model often involves third parties. For example, organizations may outsource data collection, model selection, or deployment environments. The organization engaging third parties must know and understand the risk-mitigation and governance standards applied by each third party, and it should independently test and audit all high-stakes inputs.

In our experience, most AI risks map to at least one of the overarching risk types just described, and they often span multiple types. For example, a model-extraction attack, in which a model is stolen based on a sample set of outputs, compromises both the privacy and security of the model. Therefore, organizations should ask if each category of risk could result from each AI model or tool the company is considering or already using.
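To make that review concrete, here is a minimal sketch, in Python, of how a tech trust team might enumerate the six-by-six framework as the skeleton of a risk catalog. The category and context names come from the exhibit; the data structure and field names are our illustrative assumptions, not a prescribed schema.

    from itertools import product

    # Risk categories and business contexts from the exhibit.
    RISK_CATEGORIES = [
        "Privacy", "Security", "Fairness",
        "Transparency and explainability",
        "Safety and performance", "Third-party risks",
    ]
    BUSINESS_CONTEXTS = [
        "Data", "Model selection and training",
        "Deployment and infrastructure", "Contracts and insurance",
        "Legal and regulatory", "Organization and culture",
    ]

    def blank_catalog(model_name):
        """One catalog entry per (category, context) cell, to be filled
        in with the specific negative events the team identifies."""
        return [
            {
                "model": model_name,
                "category": category,
                "context": context,
                "negative_events": [],  # e.g., "training data poisoned"
                "mitigations": [],      # completed during prioritization
            }
            for category, context in product(RISK_CATEGORIES, BUSINESS_CONTEXTS)
        ]

    catalog = blank_catalog("credit-approval-model")
    print(len(catalog))  # 36 cells to review for this model

Walking every cell, even if only to record that no material risk was identified, is what makes the catalog systematic rather than ad hoc.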



Pinpointing the context in which these risks can occur can help provide guidance as to where mitigation measures should be directed. We identify six such contexts:

1. Data. Risks can surface through the way data get captured and collected, the extraction of the data’s features (or fields and characteristics), and how data are engineered to train the model.

2. Model selection and training. Models are evaluated, selected, and trained based on various criteria, involving choices that present grounds for risk. For example, some models are more transparent than others. While a relatively opaque model might offer better performance, a legal requirement for transparency could necessitate use of a different model.

3. Deployment and infrastructure. Models are pushed to production, or deployed, when ready for real-world use. This process and the underlying infrastructure supporting it present risks. For example, the model might fail to perform as well in the real world as it did in a lab environment.

4. Contracts and insurance. Contractual and insurance guarantees often explicitly address some AI risks. Product and service providers (both B2B and B2C), for example, may include in their service-level agreements parameters around model performance or delivery. Insurance providers might assign liability for incidents such as security or privacy breaches. Such contractual provisions must be surfaced, so teams can ensure they are in compliance.

5. Legal and regulatory. Industries, sectors, and regions around the world have varying standards and laws regarding privacy, fairness, and other risks presented in this framework. Therefore, it’s important to be aware of applicable laws and regulations based on where, how, and in what sector the model will be deployed.

6. Organization and culture. Consider the risk maturity and culture of your organization and how that might affect the way a model or its components are used (for example, the security of the data flowing into it). Broader efforts, such as training programs, resource allocation, and interdisciplinary collaboration among cross-functional teams (such as a tech trust team), play key roles in mitigating risk. To gauge the types and likelihood of risks that might arise, it’s important to know whether these capabilities exist, and at what level.

In addition to using this framework to brainstorm risks, the team could consult public databases of previous AI incidents (for example, Partnership on AI’s incident database). This review of past risk failures is helpful in AI risk mitigation because AI draws its predictive power from past events. If AI created liability in the past under a similar set of circumstances, there is a higher probability it will do so again.

Further, the risk experts on the tech trust team should examine current use cases within the organization and conduct a “red team” challenge, incentivizing team members to uncover less obvious risks. These could include, for example, second-order risks that arise from how the model might be used in practice, as opposed to the first-order risks that arise from how the model was built.

Evaluating and prioritizing the risk catalog
With risks clearly defined and logged, the next step is to assess the catalog for the most significant risks and sequence them for mitigation. With an overwhelming number of potential AI risks, it’s impossible to fully mitigate each one. By prioritizing the risks most likely to generate harm, organizations can help prevent AI liabilities from arising—and mitigate them quickly if they do. A strong methodology for such sequencing allows AI practitioners and legal personnel to triage AI use cases so they can focus their often-limited resources on those meriting the most attention.



Existing regulatory frameworks and case law can provide a solid foundation for this prioritization methodology. Liability frameworks across a variety of areas—such as privacy law and anti-discrimination and negligence standards—have been forged over decades to confront many of the practical constraints that data scientists face today. We’ll briefly highlight two standards in US law that can inform successful risk methodologies. The central insight to draw from these legal frameworks is that risks tagged for mitigation should be identified in relation to the likelihood or significance of an incident and the costs of a viable alternative approach.

The first standard derives from a concept familiar to most lawyers in the United States: the “Hand formula,” which has been widely influential in shaping American negligence standards.² According to the formula, risk is defined as the probability of the harmful event occurring multiplied by the loss the event could generate. Liability ensues any time the burden of preventing an incident is less than the harm the incident could cause.

Take, for example, a hypothetical manufacturer of automated long-haul trucks, which is concerned about the potential for migratory herds to wander onto roads in certain Western states. After performing internal testing, the company estimates an approximate cost of $10 million to gather sufficient data on herds and build an AI model to recognize and avoid the animals. The company also estimates that, at scale, if it did not have a model to help predict where herds might wander onto the roadway, its trucks would be involved in approximately 20 herd-induced accidents per year, with an average cost of $100,000 each, increasing to $4 million each in cases with fatalities. Traditional negligence theory, as dictated by the Hand formula, would suggest the company invest in gathering herd-related training data and building the model, because the financial burden of doing so ($10 million) is well under the cost of the herd-induced accidents ($21.5 million per year if 25 percent result in fatalities). While this example is meant to be simple and straightforward, the same type of risk assessment can help even in cases where harms do not have clear economic measures.
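The arithmetic in this hypothetical can be written out directly. Below is a minimal Python sketch of the Hand formula comparison, using the article’s illustrative estimates; the variable names are ours.

    # Illustrative figures from the hypothetical trucking example.
    BURDEN = 10_000_000       # cost to gather herd data and build the model
    ACCIDENTS_PER_YEAR = 20   # expected herd-induced accidents without it
    FATALITY_SHARE = 0.25     # share of accidents involving fatalities
    COST_NON_FATAL = 100_000  # average cost of a non-fatal accident
    COST_FATAL = 4_000_000    # average cost of an accident with fatalities

    # P x L: probability-weighted annual loss across the expected accidents.
    expected_harm = ACCIDENTS_PER_YEAR * (
        FATALITY_SHARE * COST_FATAL
        + (1 - FATALITY_SHARE) * COST_NON_FATAL
    )
    print(f"Expected annual harm: ${expected_harm:,.0f}")  # $21,500,000

    # Hand formula: liability ensues whenever B < P x L, so mitigation
    # is warranted when the burden of prevention is the smaller number.
    if BURDEN < expected_harm:
        print("Invest in the herd-avoidance model.")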
A similar significance standard underlies US anti-discrimination laws, which govern decision making in credit, housing, employment, and other contexts. The standard doesn’t mandate perfect fairness or prohibit any harm from occurring; as any data scientist knows, there is no such thing as a perfectly unbiased model. Instead, US laws generally mandate discrimination testing to show that no fairer alternative is available, given the intended use case. In plain terms, this means that if data scientists could have trained a model with similar business value and less discrimination but failed to do so, liability could ensue.
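In practice, this standard reduces to a selection rule over candidate models. The sketch below is one hedged illustration: it assumes each candidate has already been scored on business value and on a chosen disparity metric (US law does not prescribe a single metric, and the names and tolerance here are ours), then keeps the candidates whose value is close to the best and picks the least discriminatory among them.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        business_value: float  # e.g., validated uplift on a holdout set
        bias_score: float      # chosen disparity metric; lower is fairer

    def least_discriminatory(candidates, value_tolerance=0.02):
        """Among models whose business value is within value_tolerance of
        the best, return the one with the lowest bias score. Shipping a
        more biased model from this set is what could create liability
        under the 'fairer alternative' logic described above."""
        best = max(c.business_value for c in candidates)
        similar = [c for c in candidates
                   if best - c.business_value <= value_tolerance]
        return min(similar, key=lambda c: c.bias_score)

    models = [
        Candidate("gradient_boosting_v1", business_value=0.81, bias_score=0.12),
        Candidate("gradient_boosting_v2", business_value=0.80, bias_score=0.05),
        Candidate("logistic_baseline",    business_value=0.74, bias_score=0.03),
    ]
    print(least_discriminatory(models).name)  # gradient_boosting_v2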
A helpful way to start prioritizing your risks is to take a domain-based approach, much as organizations apply AI most successfully. Apply the methodology we describe here to the AI use cases within a core process, journey, or function, and then move on to the next domain. This can save your organization time and money as you reduce risk systematically and efficiently.

² Justice Learned Hand (1872–1961) served as a federal district and appellate judge for more than 50 years and had enormous influence on the understanding of the law in the United States, especially of the First Amendment of the US Constitution.



Foundations for managing any AI risk
While the list of emerging best practices around AI risk mitigation is long and growing, a couple of practices will prove foundational in enabling the cataloging and weighting of AI risks:

— Standard practices and model documentation. Data science, legal, and risk personnel must understand important aspects of AI models and the environment in which they are trained and deployed, so they can surface areas of risk. Simply put, organizations cannot control an AI environment they don’t understand. Setting clear standards for model development and documentation can help provide such foundational transparency. Standardized policies for steps in the development life cycle—recording data provenance, managing metadata, mapping data, creating model inventories, and more—ensure development teams follow the same sound, approved processes (a minimal record format is sketched after this list). This in and of itself reduces risk, in addition to providing risk experts in the tech trust team with a map of the AI environment. Consistent model documentation provides another layer of transparency. If individual data scientists document their models differently at varying phases of the AI life cycle, it is harder to make an apples-to-apples comparison of AI models across the organization and to apply risk standards and reviews consistently. An incredibly competitive market for data scientists means it’s rare that the same data scientists who trained a model are still around to answer questions when an incident occurs.

— Independent review. Existing and proposed AI regulations call for different types of reviews or audits to demonstrate compliance. Some regulators, such as the FTC, explicitly recommend the use of independent standards or expertise to evaluate model fairness, performance, and transparency. As the FTC declared publicly in April 2020, organizations should “consider how [to] hold yourself accountable, and whether it would make sense to use independent standards or independent expertise to step back and take stock of your AI.”³ This year, the FTC added that it could step in to hold companies accountable for AI issues and mandated the deletion of any models containing misused customer data, suggesting that serious oversight lies ahead. Systematic internal and external audits can contribute to a solid compliance program for an organization’s AI.
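As one illustration of what standardization can look like in code, the sketch below defines a minimal model-inventory record covering the life-cycle steps named in the first item above. The field names are our assumptions, not a mandated schema; the point is that every model gets documented the same way.

    from dataclasses import dataclass, field

    @dataclass
    class ModelRecord:
        """Uniform documentation for one model in the inventory, so risk
        reviews can compare models apples to apples."""
        model_id: str
        owner: str             # who answers questions after an incident
        purpose: str
        data_provenance: list  # where each training dataset came from
        metadata: dict = field(default_factory=dict)
        data_mappings: list = field(default_factory=list)
        risk_reviews: list = field(default_factory=list)

    inventory = [
        ModelRecord(
            model_id="churn-predictor-003",
            owner="retention-data-science",
            purpose="Flag accounts likely to churn within 90 days",
            data_provenance=["crm_extract_2021q1", "billing_events_2021q1"],
        ),
    ]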

Based on recent headlines alone, it’s clear that global efforts to manage the risks of AI are just beginning. Therefore, the earlier organizations adopt concrete, dynamic frameworks to manage AI risks, the more successful their long-term AI efforts will be in creating value without material erosion. The approaches we have described enable organizations to begin pinpointing and prioritizing the management of AI risks right now, as part of a holistic, long-term AI strategy.

Kevin Buehler is a senior partner in McKinsey’s New York office, where Rachel Dooley is associate general counsel and Liz Grennan is managing counsel; Alex Singla is a senior partner in the Chicago office.

The authors wish to thank Andrew Burt, managing partner at bnh.ai and visiting fellow at Yale Law School’s Information Society
Project, and Patrick Hall, principal scientist at bnh.ai and visiting faculty at George Washington University School of Business,
for their contributions to this article.

Copyright © 2021 McKinsey & Company. All rights reserved.

³ Federal Trade Commission blog, “Using artificial intelligence and algorithms,” blog entry by Andrew Smith, April 8, 2020, ftc.gov.

