Professional Documents
Culture Documents
Foreword
Paul Daugherty
Jeremy Jurgens
Chief Technology and
Managing Director,
Innovation Officer (CTIO),
World Economic Forum
Accenture
Cathy Li
John Granger Head, AI, Data and
Senior Vice-President, Metaverse; Member of
IBM Consulting the Executive Committee,
World Economic Forum
Our world is experiencing a phase of multi-faceted Responsible Applications and Transformation, and
transformation in which technological innovation Resilient Governance and Regulation. These pillars
plays a leading role. Since its inception in the underscore a comprehensive end-to-end approach
latter half of the 20th century, artificial intelligence to address key AI governance challenges and
(AI) has journeyed through significant milestones, opportunities.
culminating in the recent breakthrough of generative
AI. Generative AI possesses a remarkable range of The alliance is a global effort that unites diverse
abilities to create, analyse and innovate, signalling perspectives and stakeholders, which allows for
a paradigm shift that is reshaping industries from thoughtful debates, ideation and implementation
healthcare to entertainment, and beyond. strategies for meaningful long-term solutions.
The alliance also advances key perspectives on
As new capabilities of AI advance and drive further access and inclusion, driving efforts to enhance
innovation, it is also revolutionizing economies and access to critical resources such as learning, skills,
societies around the world at an exponential pace. data, models and compute. This work includes
With the economic promise and opportunity that AI considering how such resources can be equitably
brings, comes great social responsibility. Leaders distributed, especially to underserved regions
across countries and sectors must collaborate to and communities. Most critically, it is vital that
ensure it is ethically and responsibly developed, stakeholders who are typically not engaged in AI
deployed and adopted. governance dialogues are given a seat at the table,
ensuring that all voices are included. In doing so,
The World Economic Forum’s AI Governance Alliance the AI Governance Alliance provides a forum for all.
(AIGA) stands as a pioneering collaborative effort,
uniting industry leaders, governments, academic As we navigate the dynamic and ever-evolving
institutions and civil society organizations. The alliance landscape of AI governance, the insights from the
represents a shared commitment to responsible AI AI Governance Alliance are aimed at providing
development and innovation while upholding ethical valuable guidance for the responsible development,
considerations at every stage of the AI value chain, adoption and overall governance of generative AI. We
from development to application and governance. encourage decision-makers, industry leaders, policy-
The alliance, led by the World Economic Forum in makers and thinkers from around the world to actively
collaboration with IBM Consulting and Accenture participate in our collective efforts to shape an AI-
as knowledge partners, is made up of three core driven future that upholds shared human values and
workstreams – Safe Systems and Technologies, promotes inclusive societal progress for everyone.
AI Governance Alliance 2
Introduction to the
briefing paper series
The AI Governance Alliance was launched in June business transformation for responsible generative
2023 with the objective of providing guidance AI adoption across industries and sectors. This
on the responsible design, development and includes assessing generative AI use cases
deployment of artificial intelligence systems. Since enabling new or incremental value creation, and
its inception, more than 250 members have joined understanding their impact on value chains and
the alliance from over 200 organizations across six business models while evaluating considerations
continents. The alliance is comprised of a steering for adoption and their downstream effects.
committee along with three working groups.
The Resilient Governance and Regulation
The Steering Committee comprises leaders from working group, led in collaboration with Accenture,
the public and private sectors along with academia is focused on the analysis of the AI governance
and provides guidance on the overall direction of landscape, mechanisms to facilitate international
the alliance and its working groups. cooperation to promote regulatory interoperability,
as well as the promotion of equity, inclusion and
The Safe Systems and Technologies working global access to AI.
group, led in collaboration with IBM Consulting,
is focused on establishing consensus on the This briefing paper series is the first output
necessary safeguards to be implemented during from each of the three working groups and
the development phase, examining technical establishes the foundational focus areas of the
dimensions of foundation models, including AI Governance Alliance.
guardrails and responsible release of models and
applications. Accountability is defined at each In a time of rapid change, the AI Governance
stage of the AI life cycle to ensure oversight and Alliance seeks to build a multistakeholder
thoughtful expansion. community of trusted voices from across the
public, private, civil society and academic spheres,
The Responsible Applications and united, to tackle some of the most challenging and
Transformation working group, led in collaboration potentially most rewarding issues in contemporary
with IBM Consulting, is focused on evaluating AI governance.
AI Governance Alliance 3
Reading guide
This paper series is composed of three briefing papers policies, principles and practices that govern the
that have been grouped into thematic categories ethical development, deployment, use and regulation
according to the three working groups of the alliance. of AI technologies, the Resilient Governance and
Regulation briefing paper offers guidance.
Each briefing paper of the report can also be read
as a stand-alone piece. For example, developers, While each briefing paper has a unique focus
adopters and policy-makers who are more interested area, many important lessons are learned at the
in the technical dimensions can easily jump to the intersection of these varying multistakeholder
Safe Systems and Technologies briefing paper to communities, along with the consensus and
obtain a contemporary understanding of the AI knowledge that emanate from each working
landscape. For decision-makers engaged in corporate group. Therefore, many of the takeaways from
strategy and business implications of generative AI, this briefing paper series should be viewed at the
the Responsible Applications and Transformation intersection of each working group, where findings
briefing paper offers specific context. For business become additive and are enhanced in context and
leaders and policy-makers occupied with the laws, interrelation with one another.
AI Governance Alliance
Briefing Paper Series
JANUARY 2024
W I T H I B M C O N S U LT I N G
I N C O L L A B O R AT I O N
W I T H I B M C O N S U LT I N G
AI Governance Alliance 4
AI Governance Alliance
Steering Committee
Nick Clegg Andrew Ng
President, Global Affairs, Meta Founder, DeepLearning.AI
AI Governance Alliance 5
Glossary
Terminology in AI is a fast-moving topic, and the Mis/disinformation: Misinformation involves the
same term can have multiple meanings. The dissemination of incorrect facts, where individuals
glossary below should be viewed as a snapshot may unknowingly share or believe false information
of contemporary definitions. without the intent to mislead. Disinformation
involves the deliberate and intentional spread of
Artificial intelligence system: a machine-based false information with the aim of misleading others.4
system that, for explicit or implicit objectives,
infers, from the input it receives, how to Model drift monitoring: The act of regularly
generate outputs such as predictions, content, comparing model metrics to maintain performance
recommendations or decisions that can influence despite changing data, adversarial inputs, noise
physical or virtual environments. Different AI and external factors.
systems vary in their levels of autonomy and
adaptiveness after deployment.1 Model hyperparameters: Adjustable parameters
of a model that must be tuned to obtain optimal
Causal AI: AI models that identify and analyse performance (as opposed to fixed parameters of
causal relationships in data, enabling predictions a model, defined based on its training set).
and decisions based on these relationships.
Causal inference models provide responsible AI Multi-modal AI: AI technology capable of
benefits, including explainability and bias reduction processing and interpreting multiple types of
through formalizations of fairness, as well as data (like text, images, audio, video), potentially
contextualisation for model reasoning and outputs. simultaneously. It integrates techniques from
The intersection and exploration of causal and various domains (natural language processing,
generative AI models is a new conversation. computer vision, audio processing) for more
comprehensive analysis and insights.
Fine-tuning: The process of adapting a pre-trained
model to perform a specific task by conducting Prompt engineering: The process of designing
additional training while updating the model’s natural language prompts for a language model
existing parameters. to perform a specific task.
AI Governance Alliance 6
Release access – A gradient covering different evaluation to ensure that value can be realized and
levels of access granted.5 change management is successfully aligned with
defined goals in a responsible framework.
– Fully closed: The foundation model and
its components (like weights, data and Responsible AI: AI that is developed and
documentation) are not released outside deployed in ways that maximize benefits and
the creator group or sub-section of the minimize the risks it poses to people, society and
organization. The same organization usually the environment. It is often described by various
does model creation and downstream model principles and organizations, including but not
adaptation. External users may interact with limited to robustness, transparency, explainability,
the model through an application. fairness and equity.6
Endnotes
1. “OECD AI Principles overview”, Organisation for Economic Co-operation and Development (OECD) AI Policy Observatory,
2023, https://oecd.ai/en/ai-principles.
2. OECD, G7 Hiroshima Process on Generative Artificial Intelligence (AI) Towards a G7 Common Understanding on Generative
AI, 2023, https://www.oecd.org/publications/g7-hiroshima-process-on-generative-artificial-intelligence-ai-bf3c0c60-en.htm.
3. World Economic Forum, Interoperability In the Metaverse, 2023, https://www.weforum.org/publications/interoperability-in-
the-metaverse/.
4. World Economic Forum, Toolkit for Digital Safety Design Interventions and Innovations: Typology of Online Harms, 2023,
https://www.weforum.org/publications/toolkit-for-digital-safety-design-interventions-and-innovations-typology-of-online-harms/.
5. Solaiman, Irene, “The Gradient of Generative AI Release: Methods and Considerations”, Hugging Face, 2023,
https://arxiv.org/abs/2302.04844.
6. World Economic Forum, The Presidio Recommendations on Responsible Generative AI, 2023,
https://www.weforum.org/publications/the-presidio-recommendations-on-responsible-generative-ai/.
7. The White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 2023:
https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-
trustworthy-development-and-use-of-artificial-intelligence/.
AI Governance Alliance 7
1/3 AI Governance Alliance
Briefing Paper Series 2024
Presidio AI Framework:
Towards Safe Generative
AI Models
I N C O L L A B O R AT I O N
W I T H I B M C O N S U LT I N G
Cover image: MidJourney
Contents
Executive summary 10
Introduction 11
Conclusion 18
Contributors 19
Endnotes 22
Disclaimer
This document is published by the
World Economic Forum as a contribution
to a project, insight area or interaction.
The findings, interpretations and
conclusions expressed herein are a result
of a collaborative process facilitated and
endorsed by the World Economic Forum
but whose results do not necessarily
represent the views of the World Economic
Forum, nor the entirety of its Members,
Partners or other stakeholders.
The rise of generative AI presents significant 1. Expanded AI life cycle: This element of the
opportunities for positive societal transformations. framework establishes a comprehensive end-to-
At the same time, generative AI models add new end view of the generative AI life cycle, signifying
dimensions to AI risk management, encompassing varying actors and levels of responsibility at
various risks such as hallucinations, misuse, lack each stage.
of traceability and harmful output. Therefore, it is
essential to balance safety, ethics and innovation. 2. Expanded risk guardrails: The framework
details robust guardrails to be considered at
This briefing paper identifies a list of challenges to different steps of the generative AI life cycle,
achieving this balance in practice, such as lack emphasizing prevention rather than mitigation.
of a cohesive view of the generative AI model life
cycle and ambiguity in terms of the deployment 3. Shift-left methodology: This methodology
and perceived effectiveness of varying safety proposes the implementation of guardrails at the
guardrails throughout the life cycle. Amid these earliest stage possible in the generative AI life cycle.
challenges, there are significant opportunities, While shift-left is a well-established concept in
including greater standardization through shared software engineering, its application in the context
terminology and best practices, facilitating a of generative AI presents a unique opportunity
common understanding of the effectiveness of to promote more widespread adoption.
various risk mitigation strategies.
In conclusion, the paper emphasizes the need for
This briefing paper presents the Presidio AI greater multistakeholder collaboration between
Framework, which provides a structured approach industry stakeholders, policy-makers and
to the safe development, deployment and use organizations. The Presidio AI Framework promotes
of generative AI. In doing so, the framework shared responsibility, early risk identification
highlights gaps and opportunities in addressing and proactive risk management in generative AI
safety concerns, viewed from the perspective of development, using guardrails to ensure ethical
four primary actors: AI model creators, AI model and responsible deployment. The paper lays the
adapters, AI model users, and AI application foundation for ongoing safety-related work of the
users. Shared responsibility, early risk identification AI Governance Alliance and the Safe Systems
and proactive risk management through the and Technologies working group. Future work will
implementation of appropriate guardrails are expand on the core concepts and components
emphasized throughout. introduced in this paper, including the provision of
a more exhaustive list of known and novel
The Presidio AI Framework consists of three core guardrails, along with a checklist to operationalize
components: the framework across the generative AI life cycle.
This briefing paper outlines the Presidio AI diversity. However, the availability of all the
Framework, providing a structured approach model components (e.g. weights, technical
to addressing both technical and procedural documentation and code) could also amplify
considerations for safe generative artificial risks and reduce guardrails’ effectiveness.
intelligence (AI) models. The framework centres There is a need for careful analysis of risks and
on foundation models and incorporates risk- common consensus among the use of guardrails
mitigation strategies throughout the entire life considering the gradient of release;2 that is, varying
cycle, encompassing creation, adaptation and levels at which AI models are accessible once
eventual retirement. Informed by thorough research released, from fully closed to fully open-sourced.
into the current AI landscape and input from a
multistakeholder community and practitioners, Simultaneously, there are some identified
the framework underscores the importance of opportunities for progress towards safety, such as:
established safety guidelines and recommendations
viewed through a technical lens. Notable challenges – Standardization: By linking the technical
in the existing landscape impacting the development aspects at each phase of design, development
and deployment of safe generative AI include: and release with their corresponding risks
and mitigations, there is the opportunity for
– Fragmentation: A holistic perspective, which bringing attention to shared terminology and
covers the entire life cycle of generative AI best practices. This may contribute towards
models from their initial design to deployment greater adoption of necessary safety measures
and the continuous stages of adaptation and and promote community harmonization across
use, is currently missing. This can lead to different standards and guidelines.
fragmented perceptions of the model’s creation
and the risks associated with its deployment. – Stakeholder trust and empowerment:
Pursuing clarity and agreement on the expected
– Vague definitions: Ambiguity and lack of risk mitigation strategies, where these are most
common understanding of the meaning of effectively located in the model life cycle and
safety, risks1 (e.g. traceability), and general who is accountable for implementation paves
safety measures (e.g. red teaming) at the frontier the way for stakeholders to implement these
of model development. proactively. This improves safety, prevents
adverse outcomes for individuals and society,
– Guardrail ambiguity: While there is agreement and builds trust among all stakeholders.
on the importance of risk-mitigation strategies –
known as guardrails – clarity is lacking regarding While this briefing paper details the generative AI
accountability, effectiveness, actionability, model life cycle along with some guardrails, it is
applicability, limitations and at what stages of by no means exhaustive. Some topics outside
the AI design, development and release life this paper’s scope include a discussion of current
cycle varying guardrails should be implemented. or future government regulations of AI risks
and mitigations (this is covered in the Resilient
– Model access: An open approach presents Governance working group briefing paper) or
significant opportunities for innovation, greater consideration of downstream implementation and
adoption and increased stakeholder population use of specific AI applications.
Those releasing, adapting or using foundation significant investments companies are making in AI,
models often face challenges in influencing the and the potential impacts the technology can have
original model design or setting up the necessary on society mean coordination among multiple roles
infrastructure for building foundation models. The and stakeholders becomes indispensable.
combined need for regulatory compliance, the
The Presidio AI Framework (illustrated in Figure 1) models to specific generative tasks before integration
offers a streamlined approach to generative AI into AI applications and can provide feedback to
development, deployment and use from the the AI model creator. AI model users interact with a
perspective of four primary actors: AI model generative AI model through an interface provided
creators, AI model adapters, AI model users by the creator. AI application users interact indirectly
and AI application users. This human-centric with the adapted model through an application or
framework harmonizes the activities of these application programming interface (API). These
roles to enable more efficient information transfer actors include secondary groups, for instance, AI
between upstream development and downstream model validators and AI model auditors, whose
applications of foundation models. goal is to test and validate against defined metrics,
perform safety evaluations or certify the conformity
AI model creators are responsible for the end-to- of the AI models pre-release. Validators are internal
end design, development and release of generative to AI creator or adapter organizations, while auditors
AI models. AI model adapters tailor generative AI are external entities pursuing model certification.
The expanded AI life cycle synthesizes elements capabilities and adaptation to a use case. The
from data management, foundation model design expanded AI life cycle is introduced in Figure 2.
and development, release access, use of generative
Downloadable Fine-tuning
User content
Model performance Reinforcement
validation learning human
Sensor data feedback
Fully open
Public data Model audit and Build, validate, audit
model approval and deploy
AI model adapter
AI model creator
(business or individual)
Prompt engineering
AI model user
Fully Creators control the model use and can Other actors have limited visibility into the
closed provide safeguards for data privacy and the model design and development process.
IP contained in the model. There is more Auditability and contributors’ diversity are
clarity around responsibility and ownership. limited. Application users have minimal
influence on model outputs.
Hosted Creators can provide safeguards for model Similar challenges as “fully closed”. Other
outputs, such as blocking model response for actors have little insight into the model,
sensitive queries. They can streamline user limiting their ability to understand its decisions.
support. Use can be tracked and used to
improve model responses.
API Creators retain control over the model while Even though transparency is limited, model
empowering users to adapt the model for details can be inferred by third-party tools
specific use cases. They can provide user or attacks (in case of bad actors).
support. This level of access increases the
“researchability” of the model. Increased
access allows users to help identify risks
and vulnerabilities.
Downloadable Along with creators, adapters and users are Lowered barriers for misuse and potential
also empowered through the release of model bypassing of guardrails. Model creators
components. This means more transparency, have difficulties in tracking and monitoring
flexibility for model use and modification of model use. Users typically have less support
the model. when experiencing unexpected undesirable
model outputs/outcomes.
Fully open These models provide the highest levels of These models present a higher chance
auditability and transparency. This level of of possible misuse. Access to model
access increases global participation and weights means higher risk of model
contribution to innovation – also in terms of replication for unintended purposes by
safety and guardrails. Adapters and users are bad actors. Ambiguity around accountability
empowered to adapt models that better align and ownership.
with their specific task and improve existing
model functionality and safety via fine tuning.
The model adaptation phase describes several In the model use phase, users engage with hosted
stages, techniques and guardrails for adapting a access models using natural language prompts
pre-trained foundation model to perform specific through an interface provided by the model creator
generative tasks. This phase precedes the model or test it for vulnerabilities. This phase highlights the
integration phase, involving the model’s integration importance of having necessary guardrails during
with an application, including developing APIs to the foundation model building and release phases
serve downstream AI application users. as users directly interact with the model. In contrast,
adapters can add additional guardrails based on
the use case.
Guardrails for safe AI systems refer to guidelines, adherence to established processes and guidelines.
principles and practices that are put in place to A combination of both types is often needed to
ensure the responsible development, deployment ensure safe systems. Technical guardrails ensure
and use of generative AI systems and technologies. technical quality and consistency, while procedural
They are intended to mitigate risks, prevent harm guardrails provide process consistency and control.
and ensure AI systems operate according to
specific standards and ethical and societal values. The section below provides a snapshot of selected
Guardrails are implemented from the model-building guardrails applicable at varying phases of the AI life
phase and onward throughout the expanded AI life cycle. Due to brevity, only two of the most widely
cycle and may be technical or procedural. Technical used guardrails are highlighted, along with their
guardrails involve tools or automated systems and phase placement.
controls, while procedural guardrails rely on human
Red teaming and reinforcement learning from human feedback (RLHF)3 Building
Performing red teaming early, especially during fine- advantage by enabling efficient learning, faster
tuning and validation of the building phase, is crucial iterations and a strong foundation for subsequent
for preventing adverse outcomes and ensuring phases, ultimately leading to improved model
model safety. Addressing vulnerabilities and ethical performance and alignment with human
concerns earlier in the life cycle demonstrates objectives. RLHF may be used here to train a
a commitment to security and ethics while reward model, which is then used to fine-tune the
building trust among stakeholders. For foundation primary model, eliciting more desirable responses.
models, tests should cover prompt injection, This process ensures the reliability and alignment
leaking, jailbreaking, hallucination, IP and personal of the model outputs and improves performance,
information (PI) generation, as well as identifying toxic including an iterative feedback loop between human
content. While red teaming is effective for known raters, a trained reward model and the foundation
vulnerabilities, it may have limitations in identifying model. Although effective for ongoing improvement,
unknown risks, especially before mass release. there is a risk of introducing new biases with
this method and data privacy and security
Incorporating reinforcement learning from human considerations around the use of generated data.
feedback (RLHF) early on provides a strategic
Guardrails implemented in the release phase a few examples of building templates. Automating
include a combination of approaches designed fact collection, building documentation and auditing
to empower downstream actors (such as transparency could improve overall efficiency and
transparent documentation) and protect them (such effectiveness. Limitations include identifying the
as use restrictions). most useful facts and ambiguity in balancing the
disclosure of proprietary and required information.
Transparent documentation is a collection of
details (decisions, choices and processes) about Use restriction limits the model use beyond
the AI model, including the data. It mitigates intended purposes. It mitigates the risk of
the risk of lack of transparency,5 and therefore model misuse and other unintended harms like
empowers downstream adapters and users to generating harmful content and model adaptation
understand the model’s limitations, evaluate its for problematic use cases. Some best practices
impact and make decisions on model use. This involve using restrictive licences like responsible
guardrail increases the auditability of the model AI licences (RAIL), setting up model use and user
and helps advance policy initiatives. Some best tracking, and providing clear guidelines on allowed
practices include understanding target consumers, use while implementing feedback/incident reporting
their requirements, and expectations, developing mechanisms. Additionally, integrating moderation
persona-based (e.g. business owner, validator and tools to filter or flag undesirable content, disallowing
auditors) templates with pre-defined fields and harmful or sensitive prompts and blocking the
assigning responsibility for gathering information model from responding to misaligned prompts must
at every phase of the life cycle. Datasheets, data be considered. Limitations include having standards
cards, model cards, factsheets and Stanford’s for model licences and guidelines and high-quality
foundation model transparency index indicators are tools to help restrict the model response.
A critical goal of the adaptation phase is to ensure The decision to watermark model outputs depends
that the adapted model remains effective and on the use case, model nature and watermarking
aligned with the selected use case. Model drift goals. Watermarking adds hidden patterns for
monitoring involves regularly comparing post- algorithmic detection, mitigating mass production
deployment metrics to maintain performance in of misleading content. It aids in identifying AI-
the face of evolving data, adversarial inputs, noise generated content for policy enforcement,
and external factors. The goal is to mitigate the risk attribution, legal recourse and deterrence. However,
of model drift, where the model’s output deviates workarounds exist, such as removing watermarks
from expectations over time. Best practices include or paraphrasing content. Watermarking can be
systematically using data, algorithms, and tools for applied earlier (during model creation for ownership)
tracking data drift, and defining response protocols and adaptation for control over visibility.
and adaptation techniques to sustain model
performance and customer trust.
The term “shift-left”6 describes implementing quality foundation model safety features, the need
assurance and testing measures earlier in a product for balancing safety with model creativity and
cycle. The core objective is proactively identifying implementation cost. Based on the model’s
and managing potential risks, increasing efficiency purpose, there could be a trade-off between
and cost-effectiveness. This well-established guardrail placement and safety dimensions like
concept applies to various technologies and privacy, fairness, accuracy and transparency.
processes, including software engineering.
Figure 3 illustrates three shift-left instances crucial
In the Presidio AI Framework, the concept of shift- for building safe generative AI models.
left is extended and applied to generative AI models.
It gains a new dimension of importance due to: – Release to build shift occurs when an
AI model creator proactively incorporates
– Increased interest in foundation models guardrails throughout the foundation model-
where model creators are not always the building phase and collects necessary
model adapters. data and model facts and transparency
surrounding these.
– Increased accessibility of powerful
models by users of varying skills and – Adaptation/use to release shift occurs during
technical backgrounds, raising the demand the foundation model release phase. The AI
for model transparency. model creator incorporates additional guardrails,
establishes norms and standards for use, and
– Considerable risk for users using factually creates comprehensive documentation to help
incorrect output without validation, model misuse downstream actors understand and make
(e.g. in disinformation campaigns) and adversarial informed decisions regarding model use.
attacks on the model (e.g. jailbreaking).
– Application to adaptation shift occurs when
These considerations require understanding and the AI model adapter proactively incorporates
coordination of the activities of different actors guardrails considering the use case and
(creators, adapters and users) across the AI value considering the documentation from AI model
chain to avoid significant effort in resolving issues creators about the foundation model. These
during model adoption and use. For example, data would be documented for the downstream
subject rights in some countries allow people to application user.
request that their personal information be deleted
from the model. The removal can be costly for Some organizations have already integrated
model creators as they may need to retrain the the shift-left approach into their responsible AI
model. It can also be challenging for adaptors development process. However, it is vital to extend
to apply effective guardrails to prevent sensitive and emphasize the importance of this practice
information from surfacing in the output. across all expanded phases of the generative AI life
cycle and ensure its adoption by all organizations.
For generative AI, the shift-left methodology Those that shift left to implement appropriate safety
proposes guardrails earlier in the life cycle, guardrails where most effective can minimize legal
considering their effectiveness in mitigating risk consequences and reputational risk, increase trusted
at a particular phase, along with essential adoption and positively impact society and users.
Data management Foundation model Foundation model Model adaptation Model integration
phase building phase release phase phase phase
(with application)
AI model adapter
AI application user
AI model creator (business or individual)
AI model user
Conclusion
The Presidio AI Framework promotes shared In addition to known guardrails, the group will
responsibility, early risk identification and proactive continue to identify novel mechanisms for AI safety,
risk management in generative AI development, including emerging technical guardrails such as red
using guardrails to ensure ethical and responsible teaming language models,7 liquid neural networks
deployment. The AI Governance Alliance and the Safe (LNN),8 BarrierNets,9 causal foundation models10 and
Systems and Technologies working group encourage neurosymbolic learning,11 among others. Additionally,
greater information exchange between industry the group will investigate the various guardrail
stakeholders, policy-makers and organizations. options and introduce a checklist to operationalize
This collaborative effort aims to increase trust in AI the framework to assess AI model risks and
systems, ultimately benefiting society. guardrails across the generative AI life cycle.
Jennifer Kirkwood
Executive Fellow, Partner, IBM
Eniko Rozsa
Distinguished Engineer, IBM
Saishruthi Swaminathan
Tech Ethics Program Adviser, IBM
Joseph Washington
Senior Technical Staff Member, IBM
Acknowledgements
Unlocking Value
from Generative AI:
Guidance for Responsible
Transformation
I N C O L L A B O R AT I O N
W I T H I B M C O N S U LT I N G
Images: Getty Images, MidJourney
Contents
Executive summary 25
Introduction 26
3 Responsible transformation 32
Conclusion 34
Contributors 35
Endnotes 39
Disclaimer
This document is published by the
World Economic Forum as a contribution
to a project, insight area or interaction.
The findings, interpretations and
conclusions expressed herein are a result
of a collaborative process facilitated and
endorsed by the World Economic Forum
but whose results do not necessarily
represent the views of the World Economic
Forum, nor the entirety of its Members,
Partners or other stakeholders.
Generative AI entered the popular domain with considerations when leaders evaluate use cases
the launch of OpenAI’s ChatGPT in November against their operational readiness.
2022, igniting global fascination surrounding its
capabilities and potential for transformative impact. – Balancing upfront development cost with
As generative AI’s technical maturity accelerates, reusability potential, projected time to value and an
its adoption by organizations seeking to capitalize increasingly complex regulatory environment are
on its potential is maturing at pace while also swiftly criteria when leaders select use cases in alignment
disrupting business and society and forcing leaders with an organization’s investment strategy.
to rethink their strategies in real time. This paper
addresses the impact of generative AI on industry Following use case selection, organizations
and introduces best practices for responsible weigh benefits against downstream impacts
transformation. such as impact to the workforce, sustainability or
inherent technology risk such as hallucinations. A
Leaders have realized new generative AI multistakeholder approach helps leaders to mitigate
opportunities for their organizations, from risk and scale responsibly.
streamlining enterprise processes to supporting
artists in reimagining furniture design or even aiding – Multistakeholder governance with distributed
nations in addressing global climate challenges. ownership is central to addressing
From the public to the private sector, organizations accountability.
are witnessing generative AI’s ability to enhance
enterprise productivity, create net new products – Communications teams that shape a cohesive
or services, and redefine industries and societies. narrative are essential to addressing trust
In adopting generative AI, leaders report a shift through transparency.
towards a use-case-based approach, focusing on
evaluating and prioritizing use cases and structures – Operational structures that roadmap and
that enable the successful deployment of generative cascade use cases to extract, realize, replicate
AI technologies and compound value generation. and amplify value across the entire organization
are key to addressing challenges to scale.
Organizations should evaluate potential use cases
across the following domains: business impact, – Value-based change management is critical to
organisational readiness and investment strategy. addressing human impact and ensuring the
workforce remains engaged and upskilled.
– Strategic alignment with the organization’s
goals, revenue and cost implications, and The findings in this briefing paper provide leaders
impact on resources are key factors when with insights on how to realise the benefits of
leaders prioritize use cases based on their generative AI while mitigating its downstream
potential for business impact. impacts. Future publications will build on these
recommendations for responsible transformation
– The requisite technical talent and infrastructure, as generative AI becomes increasingly able to
the ability to track data and model lineage, and mimic human skills and reasoning, and technology
the governance structure to manage risk are advances in pursuit of artificial general intelligence.
Generative artificial intelligence (AI) has captured – What are the new challenges and downstream
global imagination with its human-like capabilities impacts?
and has shown the potential to elevate creativity,
amplify productivity, reshape industries and enhance – What are the best practices for scaling
the human experience. As a result, cross-sector responsibly and bringing about exponential
executives, government leaders and academia are transformation?
considering the potential impact of this technology
as they weigh answers to critical questions: Finally, as the curiosity to replicate or even exceed
human intelligence grows in the future, what does
– Where are the growing opportunities and this mean for organizations seeking to capitalize on
novel application areas to drive sustainable the opportunities offered by this technology?
economic growth?
Enhancing enterprise Brex: automating Support corporate card Brex, with OpenAI and Brex Assistant fully
productivity corporate card expenses3 customers to categorize Scale, used generative handles 51% of card
transactions and add AI to create the Brex swipes, saving time
notes to meet company Assistant to streamline and improving expense
policies and Internal expense reporting, accuracy and compliance.
Revenue Service (IRS) automatically classify It generated over
compliance. expenses and create IRS- 1.4 million receipts and
compliant notes. 1 million receipt memos.
Enhancing enterprise IKEA: reimagining furniture Seek creative solutions IKEA and SPACE10 used Furniture designers
productivity design4 to aid furniture designers generative AI to explore collaborate with AI,
in crafting new designs furniture design concepts, expanding design
inspired by their iconic past. training a model on 1970s possibilities and speeding
and 1980s catalogues for up cycles.
students to create future-
focused designs inspired
by the past.
Enhancing enterprise Google: streamlining Reduce software Google created Google Increased proactive UX
productivity and net- software prototyping5 development cycles AI Studio, a generative and product prototyping,
new product or service internally and simplify AI tool to simplify provided an efficient UI
access to generative software prototyping and for easy model prompting
AI models. democratize access to and was later launched
their foundation models, as a new product in 179
which were first used countries and territories.
internally.
Net-new product Synthesia and PepsiCo: Connect brand and Fans could generate and Seven million videos
or service reinventing the football fan performance marketing share personalized videos weregenerated, attracting
experience6 efforts into one seamless using Lionel Messi’s AI over 38 million website
experience. avatar in eight languages, visits in 24 hours.
bypassing traditional
production limits.
Redefining industries Insilico Medicine: Discover and develop Generative AI was used A preclinical drug
and societies accelerating drug new treatments for serious during the preclinical candidate was discovered
discovery7,8 diseases more quickly drug discovery process in less than 18 months
and cheaply compared to to identify a novel drug and at one-tenth of the
traditional processes. candidate for idiopathic cost of a conventional
pulmonary fibrosis. programme. The drug
candidate has now
entered phase two trials.
Redefining industries NASA and IBM: unique Build a unique foundation NASA and IBM created The model is estimated
and societies global planning for model to generate the first open-source to increase geospatial
climate phenomena and insights from over 250 geospatial foundation analysis speed by four
sustainability9 terabytes (TBs) of mission model, available via times with 50% less
satellite imagery. Hugging Face, using labelled data; used to solve
NASA data to enhance global climate challenges,
and democratize global including reforestation
environmental research in Kenya and other
and planning. development efforts in
the Global South.
The speed of adoption and implementation leaders to find value while minimizing downstream
of generative AI is unparalleled to any other implications. In either case, leaders start with
technological advancement. The technology is diverse POCs, which are scaled across the
no longer dependent on the manual labelling of enterprise once value is proven.
significant amounts of data – often the most time-
consuming and costly part of traditional AI workflows. In many instances, generative AI experiments
may yield unexpected learnings about where
Across the board, leaders report a new approach value, and often also cost and challenges, truly lie.
to generative AI opportunities that extends Organizations may realize the compound benefits
Organizations
beyond rapid proofs of concept (POCs) based on of generative AI when implementing it in tandem
are shifting towards large models. Instead, organizations are shifting with technologies such as causal AI models10
smaller, use-case towards smaller, use-case based approaches that to increase explainability, advances in quantum
based approaches emphasize ideation and experimentation. They are technologies to accelerate the generative AI life
that emphasize involving the workforce in the use case discovery cycle, or 5G to increase reach. These compounding
ideation and and ideation process. Smaller use cases with benefits will help organizations to prioritize use
experimentation. low complexity are often applied first, allowing cases for adoption.
As organizations consider generative AI, they must the following gates comprise the most common
assess all factors involved to move a use case from approaches adopted by industry leaders to evaluate
concept to implementation. Leaders need to ensure the viability and value-generation potential of use
that each use case benefits the organization, its cases. The order is not sequential and can differ
customers, its workforce and/or society. While depending on each organization and use case.
evaluation criteria can differ between organizations,
Filte
r th
e
be
st
e
n th
It e r a t e o
Leaders evaluate the use case’s value alignment molecular structures, which could aid the creation
with the organization’s strategic objectives and its of novel and more effective therapeutic agents.11
stakeholder responsibility. After alignment on the
outcomes and generative AI as the best technology to Generative AI opportunities have created strong
address a specific use case, the impact of each use competitive pressures and inaction can come with
case on an organization can be categorized as follows: significant opportunity costs.12 In industries such
as marketing or consumer goods, understanding
1. Scaling human capability by enhancing the criticality of time to market and improved
productivity and existing human skills (e.g. near experience for users, helps leaders prioritise use
instant new content generation for rapid idea cases and resource allocation. Reputation is
iteration; creation of multiple versions of an another important consideration – will the use case
advertising campaign). enhance the organization’s brand as a pioneer
of innovation? Enabling the workforce to access
2. Raising the floor by increasing accessibility generative AI tools can be an important factor for
to technologies and capabilities previously talent attraction and retention. When generative
requiring specific resources, skills and expertise AI performs administrative tasks that previously
(e.g. giving everyone the ability to code). required significant time and effort, the workforce
can repurpose their time from rote activities to those
3. Raising the ceiling by solving problems thus that allow them to explore their creativity and hone
far unsolvable by humans (e.g. generating new their unique skillset.
FIGURE 2 Operational readiness considerations (non-exhaustive) across the model life cycle
Data management
Data curation
Stakeholder trust
t
ap
lb
Model ad
uildin
Guardrails
g
While investment considerations are important footprints. Additionally, they evaluate whether
to any organizational decision-making, they the use case can operate viably within the
are particularly significant for generative AI current regulatory environment and whether the
opportunities. Use cases often require a higher organization can monitor compliance to minimize
upfront investment, the regulatory environment is legal risk. This can require significant investment of
becoming increasingly complex and the technology capital and human resources, such as developers,
is evolving at a rapid pace. lawyers, senior leadership and ethics boards.
When prioritizing use cases, leaders must consider Talent availability is central to an organization’s
if each merits the use of models adopted from investment strategy as well. Total investment may
open-source communities, acquired from other include upskilling, re-skilling or hiring additional
third parties or developed in-house. Model employees with appropriate generative AI skills,
selection must account for alignment with the such as content creation, model development
use case, speed to market, requisite resource or model tuning.
investments, including capital and talent, licensing
and acceptable use policies, risk exposure and Following the evaluation of use cases by business
competitive differentiation offered by each option. impact, organizational readiness and investment
strategy, the next step is to implement and scale
Leaders evaluate the reusability potential of a selected use cases. How can they maximize
use case across the organization, as it can offset opportunities while mitigating risks to ensure a
development costs and curtail sustainability responsible and successful transformation?
An AI ethics Multistakeholder governance with distributed as workplace policies, even if they do not deploy
council modelled ownership is central to responsible transformation generative AI, as the workforce is likely already using
on value-based in the age of generative AI. This approach is it at work on personal devices. The council should
principles is characteristic of industry leaders, with legal, expand to incorporate a diverse set of members
indispensable for governance, IT, cybersecurity, human resources from across the entire organization to ensure the
(HR), as well as environmental and sustainability responsible adoption of not just individual use cases
any organization.
representatives requiring a seat at the table to but also emerging and intersecting strategies on
ensure responsible transformation across the open technologies, artificial general intelligence
organization. The positive and negative externalities (AGI), 5G and quantum technology.
of generative AI expand the conventional
responsibilities in governance towards a more The evolving nature of generative AI requires
holistic, human-centred and values-driven approach. rigorous self-regulation and internal AI governance
leads may serve as the sentinels of the organization.
An AI ethics council modelled on value-based Generative AI supports human-led analysis in
principles19 is indispensable for any organization; regulatory, environmental and sustainability
larger organizations appoint members from efforts. It assists in algorithm monitoring and
their stakeholder and shareholder groups, while policy formulation, but crucially, it requires human
smaller organizations may need to rely on a limited oversight to ensure responsible and effective
committee or an external ethics council. Councils application, addressing potential risks and
must collaborate with stakeholders on aspects such maintaining quality outcomes.
Initial adoption of generative AI across organizations cases are roadmapped and cascaded across
has focused on targeted, often isolated, use cases. the organization for maximum benefit. In their
However, as leaders plan their strategic roadmaps, initial development, use cases require a diverse
many are challenged with how to scale these use operational structure to ensure a multistakeholder
cases across their organizations to realize the approach to extracting, realizing, replicating and
compound benefits of generative AI. amplifying value. However, as use cases become
integrated and scale, an interlocking and agile
Operations teams are the primary implementers operational structure is needed to understand how
of use cases. Data analysts, research and compound value can be unlocked, and corollary
development teams, resource managers, HR impacts to other parts of the workforce or other
executives and business leaders ensure use lines of business can be anticipated.
Technologies that develop as rapidly as generative Change management responsibilities across the
AI require adoption by a workforce that evolves organization are significant. HR professionals
at pace. The implications of generative AI on the engage with the implementation of use cases from
workforce are central to business and need to be the beginning so they can proactively assess the
managed well. The chief human resources officer, impact on staff and put workforce transformation
the chief information officer, and the chief financial plans in place. Including employees in idea
officer teams should come together to support generation for use cases and encouraging them to
the workforce as needed when implementing and own their career paths can increase engagement.
scaling generative AI use cases. Hackathons and company-wide training days
are effective in upskilling the workforce while also
Leaders plan and implement talent transformation encouraging experimentation and innovation.
while ensuring staff have access to the necessary
technological tools and training. This starts The immense potential of generative AI for
with communicating the vision for generative benefit as well as for harm requires that all four
Technologies that
AI pilots that clearly states desired benefits for of these primary functions are dynamic, interlocked
develop as rapidly customers and employees alike, together with and in equilibrium. The effectiveness of this
as generative AI emerging professional development pathways interlock correlates directly with the extent to
require adoption for staff. Competencies, capabilities and skills are which an organization scales generative AI
by a workforce that rapidly evolving as generative AI use cases are applications responsibly.
evolves at pace. implemented across the organization.
Conclusion
New technologies driving productivity have always Future work through the World Economic
been positioned as repurposing workers to higher- Forum’s AI Governance Alliance will build on this
value work, which has traditionally required human foundation and address essential considerations,
oversight and creativity. However, with generative such as internal metrics for responsibility,
AI becoming increasingly advanced in its ability to understanding organizational barriers to responsible
mimic human skills and capabilities, it opens more transformation, as well as broader issues such
questions about its impact on the organizations as intellectual property, regulatory alignment
choosing to adopt it. Technological advances and workforce considerations. Generative AI is
towards human reasoning in the pursuit of artificial reimagining the status quo for every organization.
general intelligence demand ongoing discourse on Providing a roadmap for organizations that guides
the responsibility of organizations to their workforce, them to innovate responsibly is key to adopting
customers and wider society. and scaling this powerful technology.
Avi Mehra
Associate Partner, IBM
Sandra Misiaszek
Associate Partner, IBM
Acknowledgements
Zhang Ya-Qin
Chair Professor and Dean, Tsinghua University IBM
Zhang Ying
Professor of Marketing and Behavioral Phaedra Boinodiris
Science, Guanghua School of Management, Associate Partner
Peking University
Frank Madden
Zhang Yuxin Privacy and Regulatory Risk Adviser
Chief Technology Officer, Huawei Cloud,
Huawei Technologies Jesús Mantas
Global Managing Director
Yijie Zeng
Chief Technology Officer, Beijing Langboat Christina Montgomery
Technology Chief Privacy & Trust Officer
Catherine Quinlan
World Economic Forum Vice-President, AI Ethics
Sencan Sengal
John Bradley Distinguished Engineer
Lead, Metaverse Initiative
Jamie VanDodick
Karyn Gorman Director AI Ethics and Governance
Communications Lead, Metaverse Initiative
Generative AI Governance:
Shaping a Collective Global Future
I N C O L L A B O R AT I O N
WITH ACCENTURE
Images: Getty Images, MidJourney
Contents
Executive summary 42
Introduction 43
Conclusion 51
Contributors 52
Endnotes 56
Disclaimer
This document is published by the
World Economic Forum as a contribution
to a project, insight area or interaction.
The findings, interpretations and
conclusions expressed herein are a result
of a collaborative process facilitated and
endorsed by the World Economic Forum
but whose results do not necessarily
represent the views of the World Economic
Forum, nor the entirety of its Members,
Partners or other stakeholders.
The global landscape for artificial intelligence – Compatible standards: To prevent substantial
(AI) governance is complex and rapidly evolving, divergence in standards, relevant national
given the speed and breadth of technological bodies should increase compatibility efforts and
advancements, as well as social, economic and collaborate with international standardization
political influences. This paper examines various programmes. For international standards to
national governance responses to AI around the be widely adopted, they must reflect global
world and identifies two areas of comparison: participation and representation.
2. Regulatory instruments: AI governance Equitable access and inclusion of the Global South
may be based on existing regulations and in all stages of AI development, deployment and
authorities or on the development of new governance is critical for innovation and for realizing
regulatory instruments. the technology’s socioeconomic benefits and
mitigating harms globally.
Lending to the complexity of AI governance, the
arrival of generative AI raises several governance – Access to AI: Access to AI innovations can
debates, two of which are highlighted in this paper: empower jurisdictions to make progress on
economic growth and development goals.
1. How to prioritize addressing current Genuine access relies on overcoming structural
harms and potential risks of AI. inequalities that lead to power imbalances for
the Global South, including in infrastructure,
2. How governance should consider data, talent and governance.
AI technologies on a spectrum of
open-to-closed access. – Inclusion in AI: To adequately address unique
regional concerns and prevent a relegation of
International cooperation is critical for preventing a developing economies to mere endpoints in
fracturing of the global AI governance environment the AI value chain, there must be a reimagining
into non-interoperable spheres with prohibitive of roles that ensure Global South actors can
complexity and compliance costs. Promoting engage in AI innovation and governance.
international cooperation and jurisdictional
interoperability requires: The findings of this briefing paper are intended to
inform actions by the different actors involved in
– International coordination: To ensure legitimacy AI governance and regulation. These findings will
for governance approaches, a multistakeholder also serve as a basis for future work of the World
approach is needed that embraces perspectives Economic Forum and its AI Governance Alliance
from government, civil society, academia, that will raise critical considerations for resilient
industry and impacted communities and is governance and regulation, including international
grounded in collaborative assessments of the cooperation, interoperability, access and inclusion.
socioeconomic impacts of AI.
The rapid onset of generative artificial intelligence for clarity on data provenance and ownership.5
(AI) is promising socially and economically,1 Governance authorities worldwide face the
including the potential to raise global gross daunting task of developing policies that
domestic product (GDP) by 7% over a 10-year harness the benefits of AI while establishing
period.2 At the same time, a range of complex guardrails to mitigate its risks. Additionally, they
challenges has emerged, such as the impact on are attempting to reconcile AI governance
employment, education and the environment, approaches with existing legal structures such
as well as the potential amplification of online as privacy and data protection, human rights,
harms.3 Additionally, there are increased demands including rights of the child, intellectual property
for corporate transparency of AI systems4 and and online safety.
Definition Focuses on Lays out detailed and Sets out fundamental Focuses on achieving
classifying and specific rules, standards principles or guidelines measurable AI-related
prioritizing risks and/or requirements for for AI systems, leaving the outcomes without defining
in relation to the AI systems interpretation and exact specific processes or
potential harm AI details of implementation actions that must be
systems could cause to organizations followed for compliance
Example EU: Artificial China: Interim Canada: Voluntary Code Japan: Governance
Intelligence Act, Measures for the of Conduct for Artificial Guidelines for
2023 (provisional Management of Generative Intelligence, 2023 Implementation of AI
agreement) AI Services, 2023 Principles Ver. 1.1, 2022
The existence of a spectrum of AI governance AI life cycle26 and addressing misinformation and
approaches considers debates arising from new disinformation concerns amplified by generative AI.27
and amplified challenges22 introduced by the scale,
power and design of generative AI technologies. Many of these emerging tensions have their roots
Table 2 provides a snapshot of two prominent in data governance issues,28 such as privacy
debates taking place with a sample of divergent concerns, data protection, embedded biases,29
positions regarding the nature of risks and access identity and security challenges from the use of data
to AI models. Other emerging tensions include to train generative AI systems, and the resultant
how generative AI will impact employment,23 data created by generative AI systems. There is a
its intersection with copyright protections,24 need to re-examine existing legal frameworks that
data transparency requirements,25 allocation of provide legal assurance to the ownership of
responsibility among actors within the generative AI-generated digital identities.30
Debate and context Sample position Policy arguments for Policy arguments against
Policy focus on Advanced autonomous AI – Without sufficient caution, – Existential risks are speculative
long-term existential systems pose an existential threat humans could irreversibly lose and uncertain.36
risks31 vs present to humanity.33 control of autonomous – Can redirect the flow of valuable
AI harms.32 AI systems.34 resources from scientifically
AI poses present – Starting with the biggest studied present harms.37
harms and a spectrum questions around existential risk – Misdirects regulatory attention.38
of potential near- to supports the development of
long-term risks. Diverse trustworthy AI and could prevent
positions exist regarding overregulation.35
how to identify and
prioritize the harms and Effective regulation of AI needs – In terms of urgency, there – Focus on known harms may
risks from AI as well grounded science that investigates are immediate problems and lead to neglecting long-term
as the timeframe over present harms.39 emerging vulnerabilities with risks not well considered by
which risks should be AI that disproportionately impact traditional policy goals.
considered. marginalized and vulnerable
populations.
– Contending with known
harms will address long-term
hypothetical risks.40
International cooperation is critical to ensure Europe through the US-EU Trade and Technology
societal trust in generative AI and to prevent a Council, while Chile, New Zealand and Singapore
fracturing of the global AI governance environment have signed a Digital Economy Partnership
into non-interoperable spheres with prohibitive Agreement. Indicative of a growing consensus on
complexity and compliance costs. Facilitating the need for AI regulation, delegate nations at the
jurisdictional interoperability requires international 2023 UK AI Safety Summit signed the Bletchley
coordination, compatible standards and flexible Declaration with a commitment to establish a
regulatory mechanisms. For example, the US has shared understanding of AI opportunities and risks.
taken the initiative to enable cooperation with
Creating the Governing bodies around the world are turning advantage and national values. There is concern
capacity and to standards as a method for governing AI. that substantial divergences in approaches
space for broader The British Standards Institution launched an to setting AI standards threaten a further
participation in AI Standards Hub aimed at helping AI organizations fragmentation of the international AI governance
in the UK understand, develop and benefit landscape, lending to downstream social, economic
the AI standards-
from international AI standards. The European and political implications internationally.
making process
Telecommunications Standards Institute (ETSI)
is needed. and the European Committee for Electrotechnical International standardization programmes are being
Standardization (CENELEC) have published the developed by the Joint Technical Committee of the
European Standardization agenda that includes the International Organization for Standardization and the
adoption of external international standards already International Electrotechnical Commission (ISO/IEC
available or under development, in part stimulated JTC1/SC42)56 as well as by the Institute of Electrical
by the proposed EU AI Regulation’s framework for and Electronic Engineers Standards Association
standards. In the US, NIST has developed an AI (IEEE SA). For their part, the US, EU and China, have
Risk Management Framework to support technical signalled commitments to undertake best efforts to
standards for trustworthy AI.54 align with internationally recognized standardization
efforts.57 Despite these signals, there is no guarantee
Despite criticisms regarding the instrumentalization that every country will follow these standards,
of standards to shift regulatory powers from especially if there is concern that their development
governments to private actors,55 they are has not been inclusive of local interests. Creating the
increasingly recognized as an important tool capacity and space for broader participation in the
in international trade, investment, competitive standards-making process is thus needed.
The fast-evolving capabilities of generative AI Arab Emirates,59 Brazil,60 the UK,61 the EU,62 and
require investment in innovation and governance Mauritius63 have pioneered “regulatory sandboxes”
frameworks that are agile and adaptable. This that allow organizations to test AI in a safe and
includes ongoing assessment of opportunity and controlled environment. Such policy innovations
risk emanating from applied practice and feedback must be coupled with additional efforts to clarify
from those directly impacted by the technology. regulatory intent and the associated requirements
Flexible regulatory mechanisms, beyond statutory for compliance. For flexible mechanisms to scale,
instruments, are needed to account for societal supervisory authorities will need to consider how
implications and regulatory challenges that will they provide industry participants confidence to
emerge as generative AI technologies continue to participate and help establish agile best practice
advance and be adopted across various cultures approaches while addressing the fear of regulatory
and sectors. For example, Singapore,58 the United capture through participation.
The need for diversity and more equitably deployed divide. By ensuring the inclusion of underrepresented
generative AI systems is of significant global countries from Sub-Saharan Africa, the Caribbean
concern. Inclusive governance that consults with and Latin America, the South Pacific, as well as
diverse stakeholders, including from developing some from Central and South Asia (collectively
countries, can help surface challenges, priorities and referred to as the Global South) in international
opportunities to make generative AI technologies discussions on AI governance, a more diverse and
work better for everyone64 and address widening equitable deployment of generative AI systems and
inequalities associated with the pre-existing digital compatibility of governance regimes can be achieved.
The Global South’s priorities in areas such as greatest – transforming health services, improving
healthcare, education or food security often force education quality, increasing agricultural productivity,
trade-offs, hampering investments in long-term digital etc. to improve lives.66 Successfully deploying
infrastructure. However, access to AI innovations can generative AI solutions at scale relies on overcoming
empower countries to make progress on economic several structural inequalities lending to power
growth and development goals65 where needs are imbalances as detailed in Table 3.
Infrastructure Training generative AI systems, supporting The level of computing infrastructure required for research
Access to compute, experimentation and solution development and and development of generative AI models is primarily
cloud providers and maintaining physical data centres67 requires accessible to just a few industry laboratories with sufficient
energy resources extensive compute and cloud infrastructure that is funding.70 This puts at risk the participation of the vast
financially and environmentally costly68 and results majority in the development of these advanced models.
in high energy intensity.69
Data Generative AI’s outputs inherently reflect the data Active inclusion of developing nations and diverse
Low resource and design of a model’s training. Current major voices in generative AI development and governance
languages and generative AI models are primarily developed in the US is critical to ensure global inclusion in a future influenced
representation and China and trained on data from North America, by generative AI.
Europe and China.
Talent Students from the Global South often do not have Local access to high-quality education and generative
Access to access to the education and mentorship required AI expertise is key to creating a sustainable talent pipeline
education and technical to develop emerging technologies, such as and widening the locations where generative AI research
expertise generative AI. This can contribute to a lack of global is done. Further, more researchers and engineers from the
representation among generative AI researchers Global South will lead to more diversity in generative AI
and engineers, with potential downstream effects of ideas, enhanced innovation and increased opportunities
unintended algorithmic biases and discrimination in for local experts to build and wield generative AI with local
generative AI products. issues in mind.
Governance Economically disadvantaged countries often lack the Disparity in AI governance capabilities can reinforce existing
Institutional capacity financial, political and technical resources needed power imbalances and hinder global participation in the
and policy development to develop effective AI governance policies, and benefits of generative AI. The absence of governance
regulators within these jurisdictions remain severely policies for data and AI can lead to privacy violations,
underfunded. According to a 2023 study of 193 potential misuse of AI and a missed opportunity to harness
countries, 114 countries, almost exclusively from the AI for positive socioeconomic development, among
Global South, lack any national AI strategy.71 others. Further, underfunded regulatory institutions may
be ill-equipped to address the ethical, legal and social
implications of AI.
In addition to equitable access, inclusion of the Singapore, as well as the emergence of AI research
Global South in all stages of the development and industry ecosystems out of the Global South.
and governance of AI is essential to prevent a
reinforced power imbalance whereby developing The absence of historical and geopolitical
economies are relegated to mere endpoints in contexts of power and exploitation from dominant
the global generative AI value chain, either as AI governance debates underscores the
extractive digital workers or as consumers of the necessity for diverse voices and multistakeholder
technology. Though AI policy and governance perspectives. The significant differences between
frameworks are predominantly being developed in some concerns of the Global South and those
China, the EU and North America (46%), compared elevated within more dominant discourses of AI
to 5.7% in Latin America and 2.4% in Africa,72 it risks80 warrant a restructuring of AI governance
is important to recognize the significant activities processes, moving beyond current frameworks
of different national bodies such as Colombia,73 of inclusion.81 To adequately address regional
Brazil,74 Mauritius,75 Rwanda,76 Sierra Leone,77 Viet concerns there must be a reimagining of roles
Nam78 and Indonesia,79 the recently introduced that ensure Global South actors can engage
Digital Forum of Small States (FOSS) chaired by in co-governance.
Manal Siddiqui
Responsible AI Manager, Accenture
Ali Shah
Global Principal Director for Responsible AI,
Accenture
Kathryn White
Global Principal Director for Innovation
Incubation, Accenture
Acknowledgements
Dikshita Venkatesh
World Economic Forum Research Senior Analyst, Responsible AI
Supheakmungkol Sarin
Head, Data and Artificial Intelligence Ecosystems
Stephanie Teeuwen
Specialist, Data and AI
Hesham Zafar
Lead, Digital Trust