Getting to know—and manage—your biggest AI risks
A systematic approach to identifying and prioritizing AI risks can help
organizations effectively target mitigation efforts.
May 2021
Many organizations are generating significant value with artificial intelligence (AI) and recognize that this technology will shape the future. At the same time, organizations are discovering that AI could expose them to a fast-changing universe of risks and ethical pitfalls that regulators signal they’ll be watching for—and potentially penalizing. Recently, the European Union proposed a set of AI regulations that, if violated, could result in material fines, and the US Federal Trade Commission (FTC) put out notice that it could hold organizations accountable for proliferating bias or inequities through AI.³

Just as AI deployment will be core to organizations’ future success, leading organizations will be those that actively identify and manage the associated risks. In our latest AI survey, respondents at organizations getting the most value from AI were more likely than others to recognize and mitigate the risks posed by the technology.¹

¹ “The state of AI in 2020,” November 2020, McKinsey.com.

The prospect of protecting against a wide and growing range of AI risks might seem overwhelming, but neither avoiding AI nor turning a blind eye to the risks is a viable option in today’s competitive and increasingly digitized business environment. So where should companies start?

First, organizations must put business-minded legal and risk-management teams alongside the data-science team at the center of the AI development process. Waiting until after the development of AI models to determine where and how to mitigate risks is too inefficient and time-consuming in a world of rapid AI deployments. Instead, risk analysis should be part of the initial AI model design, including the data collection and governance processes. Involving legal, risk, and technology professionals from the start enables them to function as a “tech trust team” that ensures the models conform to social norms and legal requirements while still delivering maximum business value.

Second, because there is no cure-all for the broad spectrum of AI risks, organizations must apply an informed risk-prioritization plan as the initial step in an effective, dynamically updated AI risk-management approach anchored in both legal guidance and technical best practices. As outlined in this article, such a prioritization plan entails creating a catalog of your organization’s specific AI risks to define the harm you seek to avoid, and then following a clear methodology to evaluate and prioritize those risks for mitigation.

Identifying AI risks

To create a catalog of specific AI risks, the tech trust team clearly delineates each negative event that could result from a particular AI deployment, so the team can then detail how each of those risks will be mitigated in accordance with the appropriate standards. A helpful way to think through the potential risks is to use a six-by-six framework, mapping risk categories against possible business contexts (exhibit).
Exhibit: A six-by-six framework for identifying AI risks maps the six risk categories against six business contexts: data; model selection and training; deployment and infrastructure; contracts and insurance; legal and regulatory; and organization and culture.
We recommend that this process consider at least six overarching types of AI risk:

1. Privacy. Data is the lifeblood of any AI model. Privacy laws around the world mandate how companies may (and may not) use data, while consumer expectations set normative standards. Running afoul of these laws and norms can result in significant liability, as well as harm to consumers. Violating consumer trust, even if the data use was technically lawful, can also lead to reputation risk and a decrease in customer loyalty.

2. Security. New AI models have complex, evolving vulnerabilities that create both novel and familiar risks. Vulnerabilities such as model extraction and data poisoning (in which “bad” data are introduced into the training set, affecting the model’s output) can pose new challenges to long-standing security approaches. In many cases, existing legal frameworks mandate minimum security standards.

3. Fairness. It can be easy to inadvertently encode bias in AI models or introduce bias lurking in the data feeding into the model. Bias that potentially or actually harms particular classes and groups can expose the company to fairness risks and liabilities.

4. Transparency and explainability. A lack of transparency around how a model was developed (such as how data sets feeding into a model were combined) or the inability to explain how a model arrived at a particular result can lead to issues, not the least of which is potentially running afoul of legal mandates. For example, if a consumer initiates an inquiry into how his or her data were used, the organization using the data will need to know into which models the data were fed.

5. Safety and performance. AI applications, if not implemented and tested properly, can suffer performance issues that breach contractual guarantees and, in extreme cases, pose threats to personal safety. Suppose a model is used to ensure timely updates of machinery in manufacturing or mining; a failure of this model could constitute negligence under a contract and/or lead to employee harm.

6. Third-party risks. The process of building an AI model often involves third parties. For example, organizations may outsource data collection, model selection, or deployment environments. The organization engaging third parties must know and understand the risk-mitigation and governance standards applied by each third party, and it should independently test and audit all high-stakes inputs.

In our experience, most AI risks map to at least one of the overarching risk types just described, and they often span multiple types. For example, a model-extraction attack, in which a model is stolen based on a sample set of outputs, compromises both the privacy and security of the model. Therefore, organizations should ask whether each category of risk could result from each AI model or tool the company is considering or already using.
These risk types form one dimension of the six-by-six framework; the other comprises the six business contexts in which the risks can arise (a brief code sketch of the resulting grid follows the list):

1. Data. The way data are collected, processed, and governed to feed the model presents grounds for risk.

2. Model selection and training. Models are evaluated, selected, and trained based on various criteria, involving choices that present grounds for risk. For example, some models are more transparent than others. While a relatively opaque model might offer better performance, a legal requirement for transparency could necessitate use of a different model.

3. Deployment and infrastructure. Models are pushed to production, or deployed, when ready for real-world use. This process and the underlying infrastructure supporting it present risks. For example, the model might fail to deliver in the real world the performance it demonstrated in a lab environment.

4. Contracts and insurance. Contractual and insurance guarantees often explicitly address some AI risks. Product and service providers (both B2B and B2C), for example, may include in their service-level agreements parameters around model performance or delivery. Insurance providers might assign liability for incidents such as security or privacy breaches. Such contractual provisions must be surfaced so teams can ensure they are in compliance.

5. Legal and regulatory. Industries, sectors, and regions around the world have varying standards and laws regarding privacy, fairness, and the other risks presented in this framework. Therefore, it’s important to be aware of applicable laws and regulations based on where, how, and in what sector the model will be deployed.

6. Organization and culture. Consider the risk maturity and culture of your organization and how that might affect the way a model or its results are developed and used.
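To make the cataloging exercise concrete, the grid walk can be expressed in a few lines of code. The sketch below is a minimal illustration, not a McKinsey tool; the names (RiskEntry, build_catalog, the identify callback) are hypothetical, and a real catalog would capture far more detail per entry.

```python
from dataclasses import dataclass

# The six risk types and six business contexts from the exhibit.
RISK_TYPES = [
    "privacy", "security", "fairness",
    "transparency and explainability",
    "safety and performance", "third party",
]
BUSINESS_CONTEXTS = [
    "data", "model selection and training",
    "deployment and infrastructure", "contracts and insurance",
    "legal and regulatory", "organization and culture",
]

@dataclass
class RiskEntry:
    """One catalog entry: a specific negative event tied to one model."""
    model: str
    risk_type: str
    context: str
    negative_event: str       # the specific harm to avoid
    mitigation: str = "TBD"   # detailed later against the appropriate standard

def build_catalog(model, identify):
    """Walk the six-by-six grid for one model.

    `identify(model, risk_type, context)` stands in for the tech trust
    team's judgment: it returns a description of a negative event that
    could result from that cell of the grid, or None if none is found.
    """
    catalog = []
    for risk_type in RISK_TYPES:
        for context in BUSINESS_CONTEXTS:
            event = identify(model, risk_type, context)
            if event:
                catalog.append(RiskEntry(model, risk_type, context, event))
    return catalog
```

The point of the structure is completeness: each model is interrogated against all 36 cells, so a risk is excluded only by explicit judgment rather than by oversight.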
In addition to using this framework to brainstorm risks, the team could consult public databases of previous AI incidents (for example, Partnership on AI’s incident database). This review of past risk failures is helpful in AI risk mitigation because AI draws its predictive power from past events. If AI created liability in the past under a similar set of circumstances, there is a higher probability it will do so again.

Further, the risk experts on the tech trust team should examine current use cases within the organization and conduct a “red team” challenge, incentivizing team members to uncover less obvious risks. These could include, for example, second-order risks arising from how the model might be used in practice (as opposed to the first-order risks that arise from how the model was built).

Evaluating and prioritizing the risk catalog

With risks clearly defined and logged, the next step is to assess the catalog for the most significant risks and sequence them for mitigation. With an overwhelming number of potential AI risks, it’s impossible to fully mitigate each one. By prioritizing the risks most likely to generate harm, organizations can help prevent AI liabilities from arising—and mitigate them quickly if they do. A strong methodology for such sequencing allows AI practitioners and legal personnel to triage AI use cases so they can focus their often-limited resources on those meriting the most attention.
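One simple way to implement such triage, assuming the team can estimate a likelihood and a severity for each cataloged risk, is to rank entries by expected harm (likelihood times severity) and work down the list as mitigation capacity allows. The sketch below is an illustrative heuristic, not the article’s prescribed methodology; the fields and example figures are invented for demonstration.

```python
from dataclasses import dataclass

@dataclass
class ScoredRisk:
    """A cataloged risk annotated with triage estimates (illustrative)."""
    description: str
    likelihood: float  # estimated probability the harm occurs, 0.0 to 1.0
    severity: float    # estimated impact if it occurs, 1 (minor) to 10 (severe)

    @property
    def expected_harm(self):
        # Probability times impact: the standard risk-matrix heuristic.
        return self.likelihood * self.severity

def triage(risks, capacity):
    """Return the highest-expected-harm risks, up to mitigation capacity."""
    ranked = sorted(risks, key=lambda r: r.expected_harm, reverse=True)
    return ranked[:capacity]

# Hypothetical catalog entries for a single model.
catalog = [
    ScoredRisk("training data used beyond consented purpose", 0.4, 8.0),
    ScoredRisk("model performance drifts after deployment", 0.7, 4.0),
    ScoredRisk("vendor cannot document its training process", 0.2, 3.0),
]
for risk in triage(catalog, capacity=2):
    print(f"{risk.expected_harm:4.1f}  {risk.description}")
```

In practice, the likelihood and severity estimates would come from the tech trust team’s legal and technical review rather than from gut feel.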
Existing regulatory frameworks and case law can provide a solid foundation for this prioritization methodology, and liability frameworks across a range of sectors and jurisdictions offer useful reference points.
Kevin Buehler is a senior partner in McKinsey’s New York office, where Rachel Dooley is associate general counsel and Liz
Grennan is managing counsel; Alex Singla is a senior partner in the Chicago office.
The authors wish to thank Andrew Burt, managing partner at bnh.ai and visiting fellow at Yale Law School’s Information Society
Project, and Patrick Hall, principal scientist at bnh.ai and visiting faculty at George Washington University School of Business,
for their contributions to this article.
³ Federal Trade Commission blog, “Using artificial intelligence and algorithms,” blog entry by Andrew Smith, April 8, 2020, ftc.gov.