• Define generative AI
• Explain how AI works
• Describe generative AI model types
• Describe generative AI applications.
generative AI Page 1
Example of a problem that a supervised model might try to solve:
If a predicted value and the actual labelled value are far apart, that gap is
called the error. The model tries to reduce the error.
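That loop — predict, measure the error, adjust to shrink it — can be sketched with a toy linear model. All the numbers and the learning rate here are made-up illustrations, not from the notes:

```python
# Hedged sketch: a supervised model reduces the gap (error) between its
# predictions and the labelled targets. Toy linear fit on made-up data.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, labelled target)

def mean_squared_error(w):
    # Error = average squared gap between predicted (w * x) and actual (y).
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0          # model parameter, starts off badly wrong
lr = 0.05        # learning rate (assumed value for this toy)
for _ in range(200):
    # Step opposite the error gradient, so the error shrinks each pass.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # learned slope, close to 2 (the trend in the toy data)
```

The final error is far smaller than where the model started, which is the whole point of training.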
Deep learning (DL)
Labelled data helps the network learn the basic concepts of a task, and
unlabelled data helps it generalise to new examples, allowing deep networks to
process more complex patterns than classical ML. Generative AI is a subset of
deep learning.
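One way the labelled/unlabelled split plays out in practice is self-training (pseudo-labelling): a few labelled points teach the basic concept, then the model labels the unlabelled pool itself to generalise. The nearest-neighbour classifier and all data below are a hypothetical toy, not from the notes:

```python
# Hedged sketch of semi-supervised learning via pseudo-labelling.
# A tiny 1-D "nearest labelled neighbour" classifier (toy setup).

labelled = [(0.0, "low"), (10.0, "high")]   # scarce labelled data
unlabelled = [1.0, 2.0, 8.5, 9.0]           # plentiful unlabelled data

def predict(x):
    # Classify by the nearest point whose label we (now) know.
    return min(labelled, key=lambda p: abs(p[0] - x))[1]

# Self-training: adopt the model's own predictions on unlabelled data
# as labels, growing the labelled set and sharpening the boundary.
for x in unlabelled:
    labelled.append((x, predict(x)))

print(predict(3.0))  # "low" — decided by pseudo-labelled neighbours
```

After pseudo-labelling, new inputs are classified against the enlarged set, so the model generalises beyond the two original labels.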
Discriminative models learn to discriminate between different kinds of instances.
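The contrast with generative models can be shown on toy 1-D data: a discriminative model only learns a boundary between classes, while a generative model learns the data distribution itself and can sample new instances. Every number here is a made-up illustration:

```python
# Hedged sketch: discriminative vs generative on a toy 1-D feature.
import random
import statistics

cats = [3.9, 4.1, 4.0]   # toy feature values for class "cat"
dogs = [8.0, 8.2, 7.8]   # toy feature values for class "dog"

# Discriminative: just a threshold that separates the two classes.
boundary = (statistics.mean(cats) + statistics.mean(dogs)) / 2

def discriminate(x):
    # Answers "which class?" without modelling the data itself.
    return "cat" if x < boundary else "dog"

# Generative: model the class as a Gaussian, then sample a *new* instance.
def generate_cat():
    return random.gauss(statistics.mean(cats), statistics.stdev(cats))

print(discriminate(4.3))   # "cat"
print(generate_cat())      # a brand-new cat-like value near 4.0
```

Note the asymmetry: `discriminate` can only label inputs it is given, while `generate_cat` produces data that never existed, which is the defining trait of generative AI.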
Generative Image model
Generative language model
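The core loop of a generative language model is to repeatedly predict the next word and append it. Real LLMs use neural networks for the prediction step; the bigram-count stand-in and tiny corpus below are illustrative assumptions only:

```python
# Hedged sketch of a generative language model: learn next-word statistics
# from a tiny corpus, then generate text one predicted word at a time.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which (a bigram table).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n_words=5, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # no known continuation for this word
        out.append(random.choice(nxt))
    return " ".join(out)

print(generate("the"))  # e.g. a short "the cat ..."-style phrase
```

Sampling from the learned statistics (rather than always taking the most common follower) is what makes the output vary run to run, mirroring how LLM sampling temperature works.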
A foundation model is a large AI model pretrained on a vast quantity of data and
designed to be adapted (fine-tuned) to a wide range of downstream tasks, such as
analysis, captioning, object recognition, etc.
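A common adaptation pattern is to freeze the pretrained model and train only a small task-specific head on top of its features. The "foundation model" below is a trivial stand-in function, and the shouting-vs-calm task, data, and perceptron update are all hypothetical:

```python
# Hedged sketch of adapting a foundation model to a downstream task:
# the frozen base acts as a feature extractor; only a small head is trained.

def pretrained_features(text):
    # Stand-in for a frozen foundation model: maps text to a feature vector.
    return [len(text), text.count("!"), sum(c.isupper() for c in text)]

# Downstream task (hypothetical): classify shouting (1) vs calm (0) messages.
train = [("HELLO!!!", 1), ("good morning", 0), ("STOP NOW!", 1), ("ok then", 0)]

# Tiny trainable head: a weighted sum over the frozen features.
weights = [0.0, 0.0, 0.0]
for _ in range(50):
    for text, label in train:
        feats = pretrained_features(text)
        pred = 1 if sum(w * f for w, f in zip(weights, feats)) > 0 else 0
        # Perceptron-style update touches only the head, never the base model.
        for i, f in enumerate(feats):
            weights[i] += 0.1 * (label - pred) * f

def classify(text):
    feats = pretrained_features(text)
    return 1 if sum(w * f for w, f in zip(weights, feats)) > 0 else 0

print(classify("WOW!!"), classify("hi there"))  # 1 0 on unseen messages
```

Because only three weights are trained, adaptation is cheap; this is the same economics that make one pretrained foundation model reusable across many downstream tasks.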
Large language models (LLMs)
Transparency, fairness, accountability, and privacy are principles for responsible AI.
• AI should be socially beneficial. Any project should take into account a broad range of
social and economic factors, and we'll proceed only where we believe that the
overall likely benefits substantially exceed the foreseeable risks and downsides.
• AI should avoid creating or reinforcing unfair bias. We seek to avoid unjust effects
on people, particularly those related to sensitive characteristics, such as race,
ethnicity, gender, nationality, income, sexual orientation, ability, and political or
religious belief.
• AI should be built and tested for safety. We will continue to develop and apply strong
safety and security practices to avoid unintended results that create risks of harm.
• AI should be accountable to people. We will design AI systems that provide
appropriate opportunities for feedback, relevant explanations, and appeal.
• AI should incorporate privacy design principles. We will give opportunity for notice
and consent, encourage architectures with privacy safeguards, and provide
appropriate transparency and control over the use of data.
• AI should uphold high standards of scientific excellence. We will work with a range
of stakeholders to promote thoughtful leadership in this area, drawing on scientifically
rigorous and multi-disciplinary approaches. We will responsibly share AI knowledge
by publishing educational materials, best practices, and research that enable more
people to develop useful AI applications.
• AI should be made available for uses that accord with these principles.
Simply put, we believe that responsible AI is synonymous with successful AI that can be
deployed for the long term with trust.
We also believe that responsible AI programs and practices afford business leaders a
strategic and competitive advantage.
To explore the business benefits of responsible AI in depth, we sponsored an original report
titled “Staying Ahead of the Curve: The Business Case for Responsible AI,” which was
developed by The Economist Intelligence Unit (EIU), the research and analysis division of
The Economist Group. The report showcases the value of responsible AI practices in an
increasingly AI-driven world. It comprehensively presents the impact that Responsible AI can
have on an organization’s core business considerations. It’s important to emphasize that the
data collected to create this report came from extensive data-driven research, industry-
expert interviews, and an executive survey program. The report reflects the sentiment of
developers, industry leaders deploying AI, and end users of AI. We hope you’ll use these
highlights to draw a connection between your business goals and responsible AI initiatives,
which can empower you to influence stakeholders in your own organization.
The report is subdivided into seven sections and includes data on how responsible AI:
enhances product quality; improves the outlook on acquisition, retention, and engagement
of talent; contributes to better data management, security, and privacy; leads to readiness
for current and future AI regulations; leads to improvements in top- and bottom-line
growth; helps strengthen relationships with stakeholders and investors; and maintains
strong trust and branding.
Ethical reviews examine the potential opportunities and harms associated with new technologies to
better align products with responsible AI design. These reviews closely examine data sets and model
performance across sub-groups, and consider the impact of both intended and unintended
outcomes.
When organizations aren’t working to incorporate responsible AI practices, they expose themselves
to multiple risks, including delaying product launches, halting work, and in some cases pulling
generally available products off the market. By incorporating responsible AI practices early and
providing space to identify and mitigate harms, organizations can reduce development costs through
a reduction in downstream ethical breaches.
Trusting AI systems remains the biggest barrier to adoption for enterprises. 90% of organizations
reported encountering ethical issues.
Of those companies, 40% went on to abandon the AI project instead of resolving those issues.
There are three main concerns with large language models: hallucinations, factuality, and
anthropomorphization.
In generative AI, hallucinations refer to instances where the AI model generates content
that is unrealistic, fictional, or completely fabricated.
Factuality relates to the accuracy or truthfulness of the information generated by a generative
AI model.
Anthropomorphization refers to the attribution of human-like qualities, characteristics,
or behaviors to non-human entities, such as machines or AI models.
These seven aims and four areas together make up Google’s AI principles and succinctly
communicate our values in developing advanced technologies.
Google’s AI Governance
While responsible AI technical tools are helpful for examining how a particular ML model is
performing, having robust AI governance processes is a critical first step in
establishing what your goals are. Technical tools are only useful if you have clear
responsibility goals. A dedicated process promotes a culture of responsible AI that is often
not present in traditional product development lifecycles.
One misconception is that hiring ethical people will guarantee ethical AI products. The reality is
that two people considered to have strong ethics could evaluate the same situation,
or AI solution, and come to very different conclusions based on their experiences
and backgrounds. Both are big factors in achieving ethical outcomes.
Another common misconception is that it's possible to create a checklist for responsible
AI. Checklists or decision trees can feel comforting, but in our experience checklists are
ineffective at governance for such nascent technologies. For every product, both the technical
details and the context in which it's used are unique and require their own evaluation. Following a
checklist can place boundaries on critical thinking and lead to ethical blind spots. Case-by-case
review procedures, by contrast, allow your teams to exercise moral imagination, which is
envisioning the full range of possibilities in a particular situation in order to solve an ethical
challenge.
Google created a formal review committee structure to assess new projects, products and deals
for alignment with our AI Principles.
The committee structure consists of the following AI governance teams: a central
Responsible Innovation team provides guidance to teams across different Google
product areas that are implementing the AI Principles. They handle the day-to-day operations
and initial assessments. This group includes user researchers, social scientists, ethicists, human
rights specialists, policy and privacy advisors, and legal experts, among many others, which
allows for diversity of perspectives and disciplines.
The second AI governance team in our committee structure is a group of senior experts
from a range of disciplines across Google who provide technological, functional, and
application expertise.
These experts inform strategy and guidelines around emerging technologies and
themes, and consult on reviews when required.
The third AI governance team in our committee structure is a council of senior
executives who handle the most complex and difficult issues, including decisions that
affect multiple products and technologies. They serve as the escalation body, make
complex, precedent-setting decisions, and provide accountability at the highest level of
the company.