
generative AI

• Define generative AI
• Explain how generative AI works
• Describe generative AI model types
• Describe generative AI applications

AI ≠ MACHINE LEARNING


Generative AI is a type of artificial intelligence technology that can produce various
kinds of content, including text, data, audio, etc.
What is artificial intelligence, and what is the difference between machine learning
and AI?

Artificial intelligence (AI)

• It is a discipline: a branch of computer science that deals with the creation of
intelligent agents, which are systems that can reason, learn, and act autonomously.
• It is the theory and development of computers able to perform tasks normally
requiring human intelligence.

Machine learning (ML)

• It is a subfield of AI.
• It is a program or system that trains a model from input data; the trained model can
make useful predictions from new or never-before-seen data drawn from the same
source as the data used to train the model.
• It gives computers the ability to learn without explicit programming.
• Two of the most common classes of machine learning models are unsupervised and
supervised ML models.

The key difference between the two ML model types is that supervised models use
labelled data while unsupervised models do not. Labelled data comes with a tag, such
as a name, a type, or a number, whereas unlabelled data has no tag.

Example of a problem that a supervised model might try to solve.

Example of a problem that an unsupervised model might try to solve.

If a predicted value is far from the actual value in the training data, that difference is
called the error. The model tries to reduce this error until the predicted and actual
values are closer together.
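
A minimal sketch (not from the notes) contrasting the two model classes with scikit-learn; the data, variable names, and model choices are illustrative assumptions.

```python
# Hedged sketch: supervised vs. unsupervised learning with scikit-learn and made-up data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Supervised: labelled data (bill amount -> tip); the model learns to predict y from x,
# and training works by reducing the error between predicted and actual values.
bills = np.array([[10.0], [20.0], [30.0], [40.0]])   # inputs x
tips = np.array([1.5, 3.0, 4.5, 6.0])                # labels y
reg = LinearRegression().fit(bills, tips)
print(reg.predict([[25.0]]))                         # predicted tip for a new bill

# Unsupervised: unlabelled data; the model looks for structure (clusters) on its own.
points = np.array([[1, 2], [1, 3], [8, 8], [9, 7]])
clusters = KMeans(n_clusters=2, random_state=0).fit_predict(points)
print(clusters)                                      # group assignments discovered without labels
```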

Deep learning (DL)

It is a subset of ML. ML is a broad field that encompasses many different techniques;
DL is a type of ML that uses artificial neural networks. These networks are inspired by
the human brain and are made up of many interconnected neurons (or nodes) that
learn to perform tasks by processing data and making predictions. They have many
layers of neurons, which helps them learn more complex patterns than traditional ML,
and they can be used with both labelled and unlabelled data. This is called
semi-supervised learning: a neural network is trained on a small amount of labelled
data and a large amount of unlabelled data. The labelled data helps the network learn
the basic concepts of the task, while the unlabelled data helps it generalise to new
examples. Generative AI is a subset of deep learning.
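
A minimal sketch of the semi-supervised idea, assuming scikit-learn; the synthetic dataset and the SelfTrainingClassifier/MLPClassifier combination are just one illustrative way to mix a little labelled data with a lot of unlabelled data.

```python
# Hedged sketch: semi-supervised learning with mostly unlabelled data (label -1 = unlabelled).
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.neural_network import MLPClassifier  # a small artificial neural network

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

y = y_true.copy()
y[20:] = -1  # keep labels for only the first 20 examples; the rest are unlabelled

# Labelled points teach the basic concept of the task; unlabelled points help it generalise.
model = SelfTrainingClassifier(MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500))
model.fit(X, y)
print(model.predict(X[:5]))
```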

A discriminative model learns the conditional probability distribution P(y|x), the
probability of the output y given the input x; for example, it answers "is this a dog and
not a cat?"
A generative model learns the joint probability distribution P(x, y); it can predict the
conditional probability that this is a dog and can then generate a picture of a dog.

Generative models can generate new data instances, while discriminative models
discriminate between different kinds of data instances.
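
A toy worked example of the distinction, with a made-up joint distribution; the feature names, labels, and probabilities are purely illustrative.

```python
# Hedged sketch: a discriminative model predicts P(y | x) directly; a generative model knows
# P(x, y), so it can both derive P(y | x) and sample brand-new (x, y) pairs.
import random

joint = {  # made-up joint distribution P(feature, label)
    ("pointy_ears", "cat"): 0.30, ("pointy_ears", "dog"): 0.10,
    ("floppy_ears", "cat"): 0.05, ("floppy_ears", "dog"): 0.55,
}

def conditional(feature, label):
    """P(label | feature): the quantity a discriminative model outputs directly."""
    p_feature = sum(p for (f, _), p in joint.items() if f == feature)
    return joint[(feature, label)] / p_feature

def sample():
    """Because the joint distribution is known, new (feature, label) instances can be generated."""
    pairs, probs = zip(*joint.items())
    return random.choices(pairs, weights=probs, k=1)[0]

print(conditional("floppy_ears", "dog"))  # "is this a dog and not a cat?"
print(sample())                           # a newly generated data instance
```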


Top = traditional ML (machine learning) model

Below = generative AI model

y = f(x), where y is the model output, x is the input, and f is the model.
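
A tiny sketch of this formula; the function bodies below are illustrative placeholders, not real models.

```python
# Hedged sketch of y = f(x): for a traditional ML model, f returns a number or a class;
# for a generative AI model, f returns new content such as natural-language text.
def traditional_model(x: list[float]) -> float:
    return sum(x) * 0.1              # e.g. a predicted sales figure (a number)

def generative_model(prompt: str) -> str:
    return "Once upon a time..."     # e.g. generated text (new content)

print(traditional_model([1.0, 2.0]))
print(generative_model("Write a short story"))
```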

Generative Image model

Generative language model

The power of generative AI comes from transformers. At a high level, a transformer
consists of an encoder and a decoder. The encoder encodes the input sequence and
passes it to the decoder, which learns how to decode the representation for a relevant
task.
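
A minimal encoder-decoder sketch using PyTorch's built-in Transformer module; the dimensions, sequence lengths, and random tensors are arbitrary stand-ins, not the course's model.

```python
# Hedged sketch: an encoder-decoder transformer processing a source and target sequence.
import torch
import torch.nn as nn

model = nn.Transformer(d_model=64, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2)

src = torch.rand(10, 1, 64)  # input sequence fed to the encoder (seq_len, batch, d_model)
tgt = torch.rand(7, 1, 64)   # target sequence the decoder learns to produce
out = model(src, tgt)        # encoder builds a representation; decoder decodes it for the task
print(out.shape)             # torch.Size([7, 1, 64])
```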

A foundation model is a large AI model pretrained on a vast quantity of data and
designed to be adapted (or fine-tuned) to a wide range of downstream tasks, such as
sentiment analysis, image captioning, object recognition, etc.
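
One common way to reuse a pretrained model for a downstream task is through a library such as Hugging Face `transformers`; the snippet below is a hedged sketch, and the default model it downloads is an assumption rather than anything specified in these notes.

```python
# Hedged sketch: adapting a pretrained model to a downstream task via a pipeline.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")      # loads a small pretrained model
print(classifier("This product is fantastic!"))  # e.g. [{'label': 'POSITIVE', 'score': ...}]

# The same pattern extends to other downstream tasks, e.g. pipeline("summarization").
```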

Large language models (LLMs)

• Define large language models (LLMs)
• Describe LLM use cases
• Explain prompt tuning
• Describe gen AI development tools

What are LLMs?

Large, general-purpose language models can be pretrained and then
fine-tuned for specific purposes.

LLMs are trained to solve common language problems such as text classification,
question answering, and document summarisation, and they can also be specialised to
solve specific problems in different fields.
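
A small illustration of steering a general-purpose LLM toward a specific task with a few-shot prompt (one simple form of prompt design); the reviews and the implied classification task are invented for the example, and the prompt string would be sent to whichever LLM API you use.

```python
# Hedged sketch: a few-shot prompt that specialises a general-purpose LLM for sentiment classification.
prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "It stopped working after a week." -> Negative
Review: "Setup was quick and painless." ->"""

# An LLM given this prompt is expected to continue the pattern and answer "Positive".
print(prompt)
```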



PaLM is a transformer model.

Introduction to Responsible AI

Transparency, fairness, accountability, and privacy are principles for responsible AI.



• AI should be socially beneficial. Any project should take into account a broad range
of social and economic factors, and we'll proceed only where we believe that
the overall likely benefits substantially exceed the foreseeable risks and
downsides.
• AI should avoid creating or reinforcing unfair bias. We seek to avoid unjust
effects on people, particularly those related to sensitive characteristics, such as
race, ethnicity, gender, nationality, income, sexual orientation, ability, and
political or religious belief.
• AI should be built and tested for safety. We will continue to develop and apply
strong safety and security practices to avoid unintended results that create risks
of harm.
• AI should be accountable to people. We will design AI systems that provide
appropriate opportunities for feedback, relevant explanations, and appeal.
• AI should incorporate privacy design principles. We will give opportunity for
notice and consent, encourage architectures with privacy safeguards, and
provide appropriate transparency and control over the use of data.
• AI should uphold high standards of scientific excellence. We will work with a
range of stakeholders to promote thoughtful leadership in this area, drawing on
scientifically rigorous and multi-disciplinary approaches. We will responsibly
share AI knowledge by publishing educational materials, best practices, and
research to enable more people to develop useful AI applications.
• AI should be made available for uses that accord with these principles. Many
technologies have multiple uses. We'll work to limit potentially harmful or
abusive applications. In addition to these seven principles, there are certain AI
applications that we will not pursue.
We will not design or deploy AI in these four application areas:
• Technologies that cause or are likely to cause overall harm.
• Weapons or other technologies whose principal purpose or implementation is to
cause or directly facilitate injury to people.
• Technologies that gather or use information for surveillance that violates
internationally accepted norms.
• Technologies whose purpose contravenes widely accepted principles of
international law and human rights.



Responsible AI: Applying AI Principles with Google Cloud

There is a growing concern surrounding some of the unintended or undesired impacts of AI


innovation.
These include concerns around ML fairness and the perpetuation of historical biases
at scale, the future of work and AI-driven unemployment, and concerns around the
accountability and responsibility for decisions made by AI. Because there is potential
to impact many areas of society, not to mention people’s daily lives, it's important to
develop these technologies with ethics in mind. Responsible AI is not meant to focus
just on the obviously controversial use cases. Without responsible AI practices, even
seemingly innocuous AI use cases, or those with good intent, could still cause ethical
issues or unintended outcomes, or not be as beneficial as they could be.
Ethics and responsibility are important, not only because they represent the right thing to
do, but also because they can guide AI design to be more beneficial for people's lives.
You’ll see how we approach building our responsible AI process at Google and specifically
within Google Cloud.
At times, you may think, “Well, it’s easy for you, with substantial resources and a small
army of people. There are only a few of us, and our resources are limited.”
The truth is that it’s not easy, but it's important to get right, so starting the journey, even
with small steps, is key.
Whether you're already on a responsible AI journey, or just getting started, spending
time on a regular basis, simply reflecting on
your company values and the impact you want to make with your products, will go a
long way in building AI responsibly.
Finally, before we get any further, we’d like to make one thing clear: At Google,
we know that we represent just one voice in the community of AI users and
developers.
We approach the development and deployment of this powerful technology with a
recognition that we do not and cannot know
and understand all that we need to; we will only be at our best when we collectively
tackle these challenges together.
The true ingredient to ensuring that AI is developed and used responsibly is
community.

• AI should be socially beneficial. Any project should take into account a broad range of
social and economic factors, and we'll proceed only where we believe that the
overall likely benefits substantially exceed the foreseeable risks and downsides.
• AI should avoid creating or reinforcing unfair bias. We seek to avoid unjust effects
on people, particularly those related to sensitive characteristics, such as race,
ethnicity, gender, nationality, income, sexual orientation, ability, and political or
religious belief.
• AI should be built and tested for safety. We will continue to develop and apply strong
safety and security practices to avoid unintended results that create risks of harm.
• AI should be accountable to people. We will design AI systems that provide
appropriate opportunities for feedback, relevant explanations, and appeal.
• AI should incorporate privacy design principles. We will give opportunity for notice
and consent, encourage architectures with privacy safeguards, and provide
appropriate transparency and control over the use of data.
• AI should uphold high standards of scientific excellence. We will work with a range
of stakeholders to promote thoughtful leadership in this area, drawing on scientifically
rigorous and multi-disciplinary approaches. We will responsibly share AI knowledge
by publishing educational materials, best practices, and research to enable more
people to develop useful AI applications.
• AI should be made available for uses that accord with these principles. Many
technologies have multiple uses. We'll work to limit potentially harmful or abusive
applications. In addition to these seven principles, there are certain AI applications
that we will not pursue.
We will not design or deploy AI in these four application areas:
• Technologies that cause or are likely to cause overall harm.
• Weapons or other technologies whose principal purpose or implementation is to
cause or directly facilitate injury to people.
• Technologies that gather or use information for surveillance that violates
internationally accepted norms.
• Technologies whose purpose contravenes widely accepted principles of
international law and human rights.

Simply put, we believe that responsible AI is synonymous with successful AI that can be
deployed for the long term with trust.
We also believe that responsible AI programs and practices afford business leaders a
strategic and competitive advantage.
To explore the business benefits of responsible AI in depth, we sponsored an original report
titled “Staying Ahead of the Curve: The Business Case for Responsible AI,” which was
developed by The Economist Intelligence Unit (EIU), the research and analysis division of
The Economist Group. The report showcases the value of responsible AI practices in an
increasingly AI-driven world. It comprehensively presents the impact that Responsible AI can
have on an organization’s core business considerations. It’s important to emphasize that the
data collected to create this report came from extensive data-driven research, industry-
expert interviews, and an executive survey program. The report reflects the sentiment of
developers, industry leaders deploying AI, and end users of AI. We hope you’ll use these
highlights to draw a connection between your business goals and responsible AI initiatives,
which can empower you to influence stakeholders in your own organization.
The report is subdivided into seven sections and includes data on how responsible AI:
enhances product quality; improves the outlook on acquisition, retention, and engagement
of talent; contributes to better data management, security, and privacy; leads to readiness
for current and future AI regulations; leads to improvements to top- and bottom-line
growth; helps to strengthen relationships with stakeholders and investors; and maintains
strong trust and branding.

The business case for responsible innovation


The first of the seven highlights from the EIU report says that incorporating responsible AI practices
is a smart investment in product development. 97% of EIU survey respondents agree that ethical AI
reviews are important for product innovation.

Ethical reviews examine the potential opportunities and harms associated with new technologies to
better align products with responsible AI design. These reviews closely examine data sets, model
performance across sub-groups, and consider the impact of both intended and unintended
outcomes.

When organizations aren’t working to incorporate responsible AI practices, they expose themselves
to multiple risks, including delaying product launches, halting work, and in some cases pulling
generally available products off the market. By incorporating responsible AI practices early and
providing space to identify and mitigate harms, organizations can reduce development costs
through a reduction in downstream ethical breaches.
Trusting AI systems remains the biggest barrier to adoption for enterprises. 90% of organizations
reported encountering ethical issues.
Of those companies, 40% went on to abandon the AI project instead of solving for those issues.

If implemented properly, Responsible AI makes products better by uncovering and working to
reduce the harm that unfair bias can cause, improving transparency, and increasing security. These
are all key components to fostering trust with your product’s stakeholders, which boosts both a
product’s value to users and your competitive advantage.
The second highlight from the EIU report states that responsible AI trailblazers attract and retain top
talent. The world’s top workers now seek much more than a dynamic job and a good salary. As
demand for tech talent becomes increasingly competitive and expensive, research shows that
getting the right employees is worth it. One study found that top workers are 400% more productive
than average, less-skilled individuals, and 800% more productive in highly complex occupations, such
as software development. It can cost organizations around $30,000 to replace entry-level tech
employees, and up to $312,000 when a tech expert or leader leaves. Organizations that build shared
commitments and responsible AI practices are best positioned to build trust and engagement with
employees, which helps to invigorate and retain top talent.
The third EIU report highlight is the importance of safeguarding the promise of data. According to
The EIU’s executive survey, cybersecurity and data privacy concerns represent the biggest obstacles
to AI adoption.
Organizations need to think very carefully about how they collect, use, and protect data. The
research also found that lost business was the most financially harmful aspect of a data breach,
accounting for 36% of the total average cost. Consumers are also more likely to blame companies for
data breaches rather than the hackers themselves, which highlights the impact that safeguarding
data can have on customer engagement with firms. Enterprise customers also need to be confident
that the company itself is a trustworthy host of their data.
At Google, we know that privacy plays a critical role in earning and maintaining customer trust. With
a public privacy policy, we want to be clear about how we proactively protect our customers’
data. And when an organization can be trusted with data, it can result in larger, more diverse data
sets, which will in turn improve AI outcomes.
All these findings are clear indicators that using responsible AI practices to address data concerns
will lead to greater adoption and business value of AI tech. The fourth EIU report highlight is the
importance of preparing in advance of AI regulation. As AI technology advances, so do global calls for
its regulation from broader society and the business community and from within the technology
sector itself.
Governments have realized the importance of AI regulations and have started working towards
implementing them. For example, to ensure a human-centric and ethical development of AI in
Europe, members of the European Parliament endorsed new transparency and risk-management
rules for AI systems.
However, it still takes significant time and effort to have robust and mature AI regulations globally.
Organizations developing responsible AI can expect to experience a significant advantage when new
regulations come into force. This might mean a reduced risk of non-compliance when regulation does
take effect, or even being able to productively contribute to conversations about regulation to
ensure that it is appropriately scoped.
The challenge is to develop regulations in a way that is proportionately tailored to mitigate risks
and promote reliable and trustworthy AI applications while still enabling innovation and the promise
of AI for societal benefit. Take the General Data Protection Regulation, or GDPR, in the European
Union, for example. When it was first adopted, only 31% of businesses believed that their
organization was already GDPR-compliant before the law was enacted. The cost of non-compliance
with GDPR was found to outweigh the costs of compliance by a factor of 2.71.
Reflection on that experience has prompted many organizations to begin planning ahead of AI
regulations. The fifth highlight from the EIU report says that responsible AI can improve revenue
growth. For AI vendors, responsible AI can result in a larger target market, a competitive advantage,
and improved engagement with existing customers.
Furthermore, 66% of executives say their organization has actually decided against working with an
AI vendor due to ethical concerns.
There is mounting evidence of a positive relationship between an organization's ethical behavior and
its core financial performance.
For example, companies that invest in environmental, social, and corporate governance measures,
or ESG, perform better on the stock market. Customer behaviour is also influenced by ethics.
A Nielsen survey of 30,000 consumers across 60 countries found that 66% of respondents were
willing to pay more for sustainable, socially responsible, and ethically designed goods and services.
Next, the EIU report highlights that responsible AI is powering up partnerships. Investors are
increasingly looking to align their portfolios to their personal values, reflected in interest in
sustainable, long-term investing. This stakeholder relationship can influence an organization's
corporate strategy and financial performance.
The broadest definition of sustainable investing includes any investment that screens out unsavory
investees or explicitly takes ESG factors and risks into account, such as greenhouse gas emissions,
diversity initiatives, and pay structures. Although ESG assessment criteria don’t traditionally include
responsible AI, this trend toward investment in socially responsible firms indicates that funds will be
reallocated toward companies that prioritize responsible AI. One investor, for example, states in its
report “Responsible Artificial Intelligence and Data Governance” that it evaluates investees against a
set of responsible AI principles.
The final highlight from the EIU report relates to maintaining strong trust and branding. Just as a lack
of responsible AI practices can weaken customer trust and loyalty, evidence confirms that
organizations that take the lead on responsible AI can expect to reap rewards related to public
opinion, trust, and branding. For technology firms, the connection between trust and branding has
never been stronger. Experts say that without strong oversight of AI, companies that are developing
or implementing AI are opening themselves up to risks, including unfavorable public opinion, brand
erosion and negative press cycles.
And brand erosion doesn't stop at the door of the company that committed the misdeed.
Organizations can mitigate these types of trust and branding risks through the implementation of
responsible
AI practices, which have the potential to boost the organizations and brands they are associated
with. As the report by The Economist Intelligence Unit emphasizes, responsible AI brings undeniable
value to firms, along with a clear moral imperative to embrace it. Although identifying the full
spectrum of negative outcomes that could result from irresponsible AI practices is impossible,
companies have a unique opportunity to make decisions today that will prevent these outcomes in
the future.

AI’s technical considerations and ethical concerns


An ethical dilemma is a situation where a difficult choice has to be made between different
courses of action, each of which entails transgressing a moral principle. Not making a decision is
the same as making a decision to do nothing. Ethical dilemmas are uncertain and complicated and
require a close examination of your values to solve. It’s important to note that an ethical
dilemma is different from a moral temptation.
A temptation is a choice between a right and a wrong, and specifically, when doing something
wrong is advantageous to you.
But what exactly do we mean when we talk about ethics?
Ethics is an ongoing process of articulating values, and in turn, questioning and justifying
decisions based on those values, usually in terms of rights, obligations, benefits to
society, or specific virtues.
When looking at ethical frameworks and theories from around the world, the various
approaches can often be contradictory, but regardless of the approach you align with,
ethics is the art of living well with others. As such, it is crucial that ethical deliberation
draws on a diverse set of perspectives and experiences. However, ethics doesn’t lend
itself well to rules or checklists, especially when trying to wade through moral challenges
that have never existed before, like those created through groundbreaking
technology. It’s also important to understand that ethics should not be viewed as law and
policy.
Ethics reflect values and expectations we have of one another, most of which have not
been written down or enforced by a formal system. While laws and policies often do
draw insight from ethics, many unethical acts are legal, and some ethical acts are illegal.

Concerns about artificial intelligence



So what are the main AI concerns being raised?
• The first is transparency.
As AI systems become more complex, it can be increasingly difficult to establish
enough transparency for people to understand how AI systems make decisions. A lack of
transparency can also make it harder for a developer to predict when and how these systems
might fail or cause unintended harm. Models that allow a human to understand the factors
contributing to a decision can help stakeholders of AI systems to better collaborate with an AI.
This might mean knowing when to intervene if the AI is underperforming, strengthening
a strategy for using the results of an AI system, and identifying how the AI can be
improved.
• A second concern is unfair bias.
AI doesn’t create unfair bias on its own; it exposes biases present in existing social systems and
amplifies them. The unfair biases that shape society also shape every stage of AI, from datasets
and problem formulation to model creation and validation. AI is a direct reflection of the societal
context in which it's designed and deployed.
For instance, vision systems are being adopted in critical areas of public safety and physical
security to monitor building activity or public demonstrations. Bias can make surveillance
systems more likely to misidentify marginalized groups as criminals. These challenges stem from
many root causes, such as the underrepresentation of some groups
and overrepresentation of others in training data, a lack of critical data needed to fully
understand a system’s impact, or a lack of societal context in product development.
• A third AI concern is security.
Like any computer system, there is the potential for bad actors to exploit vulnerabilities
in AI systems for malicious purposes. Safe and secure AI involves traditional concerns in
information security, as well as new ones. The data-driven nature of AI makes the training data
more valuable to exfiltrate; plus, AI can allow for greater scale and speed of attacks.
• A fourth AI concern is privacy.
AI presents the ability to quickly and easily gather, analyze, and combine vast quantities
of data from different sources. The potential impact of AI on privacy is immense, leading to
risks of data exploitation, unwanted identification and tracking, intrusive voice and facial
recognition, and profiling.
• Another concern is AI pseudoscience, where AI practitioners promote systems that lack
scientific foundation. Examples include face analysis algorithms that claim the ability to
measure the criminal tendency of a person based on facial features and the shape and
size of their head, or models used for emotion detection to determine if someone is trustworthy
from their facial expressions.
• A sixth concern is accountability to people.
AI systems should be designed to ensure that they are meeting the needs and
objectives of all types of people, while enabling appropriate human direction and
control. We strive to achieve accountability in AI systems in different ways: through clearly
defined goals and operating parameters for the system, transparency about when and how AI
is being used, and the ability for people to intervene or provide feedback.
• The final AI concern is AI-driven unemployment and deskilling.
While AI brings efficiency and speed to common tasks, there is a more general concern
that AI drives unemployment and deskilling.

There are three main concerns with large language models: hallucinations, factuality, and
anthropomorphization.
In generative AI, hallucinations refer to instances where the AI model generates
content that is unrealistic, fictional, or completely fabricated.
Factuality relates to the accuracy or truthfulness of the information generated by a generative
AI model.
Anthropomorphization refers to the attribution of human-like qualities, characteristics,
or behaviors to non-human entities, such as machines or AI models.


AI and new technology can help solve complex problems by
allowing more reliable forecasting of complex dynamic systems,
providing more affordable goods and services,
and offering freedom from routine or repetitive tasks.
The key benefit of ethical practices in an organization is that they can help to avoid bringing
harm to customers, users, and society at large.
Ethical practices promote human flourishing.

Ethical issue spotting


Issue spotting is the process of recognizing the potential for ethical concerns in an AI project. To
address ethical concerns, we first need to identify them. It may be tempting to try and make
this process more efficient through checklists, outlining what is and isn’t acceptable for
each principle. We tried to create decision trees and checklists that would ensure our
technology would be ethical.
That didn’t work.
The reality is that we need to address ethical issues, not just in familiar products or use
cases, but also by recognizing new risks that we have never seen before emerging from
new technologies. Each use case, customer, and social context is unique.
In practice, leveraging ethical lenses provides a structured way of considering issues from
multiple angles and perspectives to make sure we are surveying and surfacing what is important
to consider.

Google ethical aims


Ethical aims can be a guide for what ethical issues may exist, but they don't represent a
checklist.
Let’s walk through some of the core ethical aims, known as ‘Objectives for AI applications’.
The first principle, be socially beneficial, aims to reduce the risk of social harm in terms of
quantity, severity, likelihood, and extent, and to diminish risk to vulnerable groups.
The second principle, avoid creating or reinforcing unfair bias, aims to promote AI that creates
fair, just, and equitable treatment of people and groups. Through this principle we pay close
attention to the impact that technology discrimination might have on the usefulness of the
product for all users. A data set that doesn’t reflect our global user base can produce labels and
distinctions we do not want to see.
Underrepresentation of this kind is harmful. To bring greater
representation across the full range of diversity, Google ran a competition that invited global
citizens to add their images to an extended data set, because training data has to be able to
represent societies as they are, not as a limited data set might represent them. What’s
important to recognize is that unfairness can enter into the system at any point in the ML
lifecycle, from how you define the problem originally, how you collect and prepare data, how
the model is trained and evaluated, and on to how the model is integrated and used. At each
stage in this flow, developers face different responsible AI questions and considerations.
Within that lifecycle, the way we sample data, the way we label it, how the model was
trained, and whether or not the objective leaves out a particular set of users can all work
together to create a biased system. At its core, doing AI responsibly is about asking hard
questions.
The third principle, be built and tested for safety, seeks to promote the safety, both bodily
integrity and overall health, of people and communities, as well as the security of places,
systems, properties, and infrastructures from attack or disruption. This principle also aims
to ensure that there is effective oversight and testing of safety-critical applications, that
there is control of AI systems' behavior, and that there is a limit to the reliance on machine
intelligence.

The fourth principle, be accountable to people, aims to respect people’s rights and
independence. This means limiting power inequities, and limiting situations where people lack
the ability to opt out of an AI interaction. The principle aims to promote informed user
consent, and it seeks to ensure that there is a path to report and redress misuse, unjust
use, or malfunction.
With the fifth of Google’s AI Principles, incorporate privacy design principles, the aim is to
protect the privacy and safety of both individuals and groups. To do so, we want to ensure
that personally identifiable information and sensitive data are handled with special care
through robust security.
The sixth principle, uphold high standards of scientific excellence, seeks to advance the state of
knowledge in AI. This means following scientifically rigorous approaches and ensuring that
feature claims are scientifically credible. This principle aims to do this through a commitment to
open inquiry, intellectual rigor, integrity, and collaboration.
The last ‘Objective for AI applications’ in the AI Principles, be made available for uses that
accord with these principles, seeks accountability for Google’s unique impact on society. The
principle aims for the widest availability and impact of our beneficial AI technologies, while
discouraging harmful or abusive AI applications.

These seven aims and four areas together make up Google’s AI principles and succinctly
communicate our values in developing advanced technologies.
Google’s AI Governance
While responsible AI technical tools are helpful to examine how a particular ML model is
performing, having robust AI governance processes is a critical first step in
establishing what your goals are. Technical tools are only useful if you have clear
responsibility goals. A dedicated process promotes a culture of responsible AI often not present
in traditional product development lifecycles.
One misconception is that hiring ethical people will guarantee ethical AI products. The reality is
that two people considered to have strong ethics could evaluate the same situation,
or AI solution, and come to very different conclusions based on their experiences
and backgrounds. Both are big factors in achieving ethical outcomes.
Another common misconception is that it's possible to create a checklist for responsible
AI. Checklists or decision trees can feel comforting, but in our experience checklists are
ineffective at governance for such nascent technologies. For every product, both the technical
details and the context in which it's used are unique and require their own evaluation. Following
a checklist can place boundaries on critical thinking and lead to ethical blind spots. Robust
governance procedures, by contrast, allow your teams to exercise moral imagination, which is
envisioning the full range of possibilities in a particular situation in order to solve an ethical
challenge.
Google created a formal review committee structure to assess new projects, products and deals
for alignment with our AI Principles.
The committee structure consists of the following AI governance teams: a central
‘Responsible Innovation team’ provides guidance to teams across different Google
product areas that are implementing AI Principles. They handle the day-to-day operations
and initial assessments. This group includes user researchers, social scientists, ethicists, human
rights specialists, policy and privacy advisors, and legal experts, among many others, which
allows for diversity of perspectives and disciplines.
The second AI governance team in our committee structure is a group of senior experts
from a range of disciplines across Google who provide technological, functional, and
application expertise.
These experts inform strategy and guidelines around emerging technologies and
themes, and consult on reviews when required.
The third AI governance team in our committee structure is a council of senior
executives who handle the most complex and difficult issues, including decisions that
affect multiple products and technologies. They serve as the escalation body, make
complex, precedent-setting decisions, and provide accountability at the highest level of
the company.


Google Cloud’s review process


The goal is to identify any use cases that risk not being aligned with our principles before the deal
moves forward.
This review happens in several stages. Sales Deal Submission is the intake process,
which can be achieved in two ways to ensure coverage. Field Sales representatives are
trained to submit their AI customer opportunities for review. Additionally, an
automated process flags deals for review in our company-wide sales tool. In the
Preliminary Review stage, members of the Cloud AI Principles team, with help from
the central Responsible Innovation team, review deals submitted via the intake
process and prioritize deals needing a deeper review. During this preliminary review,
they apply any relevant historical precedent, discuss and debate potential AI
principles risks, and request additional information where required. This analysis sets
the review agenda for the AI principles deal review committee, which is the group
directly responsible for making final decisions. At the Review, Discuss, and Decide
stages, the deal review committee meets to discuss the customer deals. This
committee is composed of leaders across multiple functions in the organization, such
as product, policy, sales, AI ethics, and legal.
The range of decisions this group makes can include: go forward; don't go forward; cannot go
forward until certain conditions/metrics are met; or escalate the decision. The decisions are
made by consensus.
Now let's walk through the Cloud AI product development review.
It also consists of several different stages. For Pipeline Development, the Cloud AI
Principles team tracks the product pipeline and plans reviews so they happen early on in
the product development lifecycle.
Preliminary review is where a team works to prioritize the AI products for review, based on
launch timelines, unless a particular use case is deemed more risky. With a healthy product
pipeline, we aim for in-depth reviews every two weeks.
Before a review meeting, members of the Cloud AI Principles team evaluate the product
and draft a Review brief.
They work hand in hand with the product managers, engineers, other members of the
Cloud AI Principles team, and fairness experts to deeply understand and scope the
product review.
The review brief includes the intended goals and social benefits of the product, what
business problem the product will solve, the data being used, how the model is trained
and monitored, the societal context in which the product is going to be integrated, and its
potential risks and harms.
At the approval stage, the alignment plan is sent to the committee and product leaders for sign-
off. With this sign-off, the alignment plan is incorporated into the product development
roadmap, and the AI Principles team tracks the execution and completion of the
alignment plan.
