You are on page 1of 61

IBM watsonx.

governance
Accelerate responsible,
transparent and explainable AI
workflows
Content by:
Divya Sridharabalan
WW Sales Leader, watsonx.governance
Divya.Sridharabalan@ibm.com

Eric Martens
Senior, Learning Content Development | Data and AI
emartens@us.ibm.com

Presenter:
Farah Auni Hisham
APAC Ecosystem Technical Enablement Specialist | Data & AI
farah.hisham@ibm.com

1
APAC Ecosystem Technical Enablement | Data & AI
Seller guidance Slides in this presentation marked as "IBM
and Business Partner Internal Use Only"
are for IBM and Business Partner use
References in this presentation to IBM
products, programs, or services do not
imply that they will be available in all

and legal and should not be shared with clients


or anyone else outside of IBM or the
Business Partners’ company.
countries in which IBM operates.
Product release dates and/or capabilities
referenced in this presentation may change

disclaimer
at any time at IBM’s sole discretion based
© IBM Corporation 2023. on market opportunities or other factors
All Rights Reserved. and are not intended to be a commitment
to future product or feature availability
The information contained in this in any way. Nothing contained in these
publication is provided for informational materials is intended to, nor shall have
purposes only. While efforts were made the effect of, stating or implying that any
to verify the completeness and accuracy activities undertaken by you will result
of the information contained in this in any specific sales, revenue growth,
IBM and Business Partner publication, it is provided AS IS without or other results.
warranty of any kind, express or implied.
Internal Use Only In addition, this information is based on All client examples described are presented
IBM’s current product plans and strategy, as illustrations of how those clients have
which are subject to change by IBM without used IBM products and the results they
notice. IBM shall not be responsible for may have achieved. Actual environmental
any damages arising out of the use of, or costs and performance characteristics
otherwise related to, this publication or any may vary by client.
other materials. Nothing contained in this
publication is intended to, nor shall have
the effect of, creating any warranties or
representations from IBM or its suppliers
or licensors, or altering the terms and
conditions of the applicable license
agreement governing the use
of IBM software.

2
APAC Ecosystem Technical Enablement | Data & AI
watsonx.governance
AI Landscape: lawsuit and regulations
Agenda What is the opportunity?
What challenges come with AI adoption?

The watsonx.governance solution


Lifecycle Governance
Risk Management
Regulatory Compliance

Selling watsonx.governance
Roadmap
Client Stories
Value proposition & Use cases
FAQs

Appendix
Client Engineering offering
Expert Labs offering
Competitive insights

3
APAC Ecosystem Technical Enablement | Data & AI
The AI landscape: lawsuits and regulations

APAC Ecosystem Technical Enablement | Data & AI 4


AI regulations
are coming closer

APAC Ecosystem Technical Enablement | Data & AI 5


Constantly growing & changing regulations
drive the need for governance
United Kingdom - AI
Regulation Policy

Canada - Bill C27:


AI and Data Act
Germany - AI Strategy Fine: $25M or 5% of company’s
South Korea - AI Strategy gross global revenue
UAE - Strategy for AI India – National Strategy for AI
United States – National New York City -
Australia - AI Ethics Framework Artificial Intelligence Initiative AI Hiring Law

2017 2018 2019 2020 2021 2022 2023


Norway – National AI Strategy United States - NIST United States - AI
Serbia – Strategy for the issues an AI risk Bill of Rights.
Pan-Canadian AI Strategy management framework Validation for
development of AI
Japan - AI Technology Strategy algorithms to be
Colombia - National Policy for European Union - The AI Act explainable and
Digital Transformation and AI Fine: €30M or 6% of protect against
company's global revenue discrimination

European Union - GDPR China - Internet information Singapore - Launches AI


Fine: €20M or 4% of service algorithm Verify: An AI Governance
company’s annual turnover recommendation Testing Framework and
management regulations Toolkit

6
APAC Ecosystem Technical Enablement | Data & AI
Enterprise considerations

48% 46%

Business leaders face Explainability


Believe decisions made by
Ethics
Concerned about the
challenges in scaling AI across Generative AI are not
sufficiently explainable.
safety and ethical
aspects of Generative AI.
the enterprise with trust

80% of business leaders see


at least one of these ethical 46% 42%
issues as a major concern

Bias Trust
Believe that Generative AI will Believe Generative AI
propagate established biases. cannot be trusted.

APAC Ecosystem Technical Enablement | Data & AI Agree Neutral Disagree


IDC predicts AI “The frenzy around generative AI
has brought all forms of AI back
lifecycle opportunity is into the limelight. IDC expects this
6.7B in 2022 with the growth trend to continue through
AI Governance 2023 and beyond as foundation
models become more accessible
opportunity growing at and are embedded into
55% year-over-year AI–enabled applications.”

– Kathy Lange, Research Director,


IDC AI and Automation Research

“IDC predicts that worldwide spending on AI


solutions will surpass $500 billion by 2027”

APAC Ecosystem Technical Enablement | Data & AI 8


AI needs governance

The process of directing,
monitoring and managing the
AI activities of an organization
AI governance is complicated

AI governance Companies have models Governance is not The lack of tools for
collaboration requires in multiple tools, a one-size-fits-all collaboration and
lots of manual work; applications and approach. communication impacts
amplified by changes platforms, developed stakeholder
in data and model inside and outside the management
versions. organization

APAC Ecosystem Technical Enablement | Data & AI 10


AI governance is complicated

Common challenges Getting to an ideal state

Manual work can lead to costly errors, drawn out model lifecycles
Automate AI governance activities to streamline processes
and lead to employee burnout

Overhead and missed opportunities resulting from multiple,


Enhance suboptimal tooling, automate and consolidate
disparate, tools and interfaces not optimized for AI

Governing AI is not a one-size-fits-all approach, Use customized, single set of governance policies and workflows
need to govern across hybrid AI across platforms and applications

The lack of tools for collaboration and communication Drive visibility across the organization with automated
impacts stakeholder management collaborative tools, customizable dashboards and reports

11
APAC Ecosystem Technical Enablement | Data & AI
Introducing…

watsonx.governance

APAC Ecosystem Technical Enablement | Data & AI 12


watsonx.governance in watsonx

A toolkit for AI governance


One unified, integrated AI governance platform to govern generative AI and predictive machine learning (ML)

Leverage foundation models to automate data


search, discovery, and linking in watsonx.data

watsonx

Leverage governed enterprise data in watsonx.data


to seamlessly train or fine-tune foundation models

Enable fine-tuned models to be managed through market-


leading governance and lifecycle management capabilities

13
APAC Ecosystem Technical Enablement | Data & AI
watsonx.governance
Accelerate responsible,
transparent, and explainable AI
workflows

One unified, integrated Lifecycle Governance Risk Management Regulatory Compliance


AI governance platform
to govern generative AI Govern across the AI Manage risk & protect Adhere to regulatory
and predictive machine lifecycle. Automate and reputation by compliance by
learning (ML) consolidate tools, automating workflows translating growing
applications and to ensure quality and regulations into
platforms. Capture better detect bias enforceable policies.
metadata at each stage and drift.
and support models
built and deployed
in 3rd party tools.

Comprehensive Open Automatic metadata recording


Govern the end-to-end AI lifecycle Support governance of models built and data transformation/lineage
with metadata capture at each stage and deployed in 3rd party tools. capture though Python notebooks.

APAC Ecosystem Technical Enablement | Data & AI


What IBM offers

Lifecycle governance:
operationalize AI with confidence

– Monitor, catalog, and govern models across the AI


lifecycle

– Automate the capture of model metadata for to


facilitate management and compliance

– Oversee model performance across the entire


organization with dynamic dashboards and
dimensional reporting

15
APAC Ecosystem Technical Enablement | Data & AI
Lifecycle Governance - Govern across AI lifecycle

IBM watsonx.governance is the next-generation AI Governance Enterprise Toolkit, not a rebrand of Watson.
There will be new capabilities, in addition to the capabilities currently provided by IBM AI Governance.

Lifecycle
Stages
Propose Develop Test Validate Operate Monitor

watsonx.governance
Trust in IBM AI Factsheets ⚫ Model Inventory, E2E Fact collection & consolidation
Model & IBM Watson OpenScale ⚫ Model Monitoring & Evaluation
Process IBM Watson OpenPages ⚫ Process, Approvals, Attestations, Risk Assessment, Management and Reporting
& other capabilities
Integrated

Trust in
Data
IBM Watson Knowledge Catalog ⚫ (Data) cataloging, lineage & policies

16
APAC Ecosystem Technical Enablement | Data & AI
Lifecycle Governance - Track facts and metrics

Factsheets 2.0

End-to-end lifecycle governance – easily see


predictive and foundation models progress
across lifecycle

Organize by use case – track different


models/LLMs, different versions, and different
approaches

Automated metadata capture across lifecycle


shown in model Factsheet

Customize to end-user / stakeholder needs with


custom facts

Capture model documentation and related


artifacts

17
APAC Ecosystem Technical Enablement | Data & AI
What IBM offers

Risk Management - Trusted:


manage risk and protect reputation

– Preset thresholds for alerts when key metrics


are breached

– Identify, manage and report on risk and


compliance at scale

– Provide explainable model results in support


of audits and to avoid fines

18
APAC Ecosystem Technical Enablement | Data & AI
Explain model output

Explainability for predictive models

Generate detailed explanations for


predictions using both open-source
and proprietary algorithms

Explainability for generative models


using source attribution

Find and highlight the source of the


output and relevant context for
answers given by LLMs available in
watsonx.ai

19
APAC Ecosystem Technical Enablement | Data & AI
Evaluation Metrics

Predictive model monitors

1. Fairness: Determine whether a model produces biased outcomes that favor a monitored group over a reference group.
2. Quality: A monitor that evaluates how well a model predicts accurate outcomes based on the evaluation of feedback data
3. Drift: When model accuracy declines over time.
4. Explainability: An insight into a particular prediction of a model, showing which model inputs had the most impact on the
model output.

Generative model monitors

1. PII: Monitor model output to ensure that it does not expose PII from the training data set in its responses.
2. HAP content: Monitor model output to ensure it does not produce any hateful or profane responses.
3. Quality: The method for assessing model quality varies based on the task the model is performing.
4. Drift: LLM Drift refers to significant alterations in LLM responses over a brief timeframe.
5. Explainability (API only): Show relevant sources from the training data that the model used to construct its response. I

20
APAC Ecosystem Technical Enablement | Data & AI
What IBM offers

Regulatory Compliance:
satisfy AI regulations

– Translate external AI regulations into enforceable


policies for automated enforcement.

– Provide core services to help adhere to external AI


regulations for audit and compliance

– Use factsheets for transparent model processes

For reference only. Features will go live on February 2024

21
APAC Ecosystem Technical Enablement | Data & AI
Translate external AI regulations into enforceable policies

Simplify data governance, risk management and regulatory compliance

Fully-customizable dashboards for model status across the entire enterprise

A highly scalable, AI-powered, and unified GRC platform

22
APAC Ecosystem Technical Enablement | Data & AI
Transparent model processes

Fully customize approval workflows and notifications to foster collaboration


between data science teams, AI developers, and business stakeholders

Automatically updated with data from Factsheets and OpenScale, reducing


manual labor for regulatory compliance and improving time-to-value for
AI projects

23
APAC Ecosystem Technical Enablement | Data & AI
Selling watsonx.governance

Every watsonx.ai opportunity is a


watsonx.governance opportunity
IBM customers are creating
more mature AI governance

“ More than 80% say that they’ll


commit 10% or more of their
total AI budget to meeting
regulatory requirements by
Prepare for audit and Proactively mitigate bias US Open: Look at AI bias 2024 and 45% are planning
regulatory compliance in the hiring process in a novel way to spend at least 20%.”
– North American Bank, multiple – Leaders at a North American – AI-assisted curation of match
Accenture
data science stacks, 1000s of retailer wanted their company highlights, available 2 minutes
From AI compliance to competitive
models. to meet commitments as a after the match ends. advantage, 2022
fair employer.
– Manual audit process took – Excitement score is biased by
months of work. – Invested in IBM software to player rank and the court where
monitor and actively seek out the match is played.
– Invested in IBM software for its potential bias in their hiring
completeness and its ability to systems. – Post-processing de-biasing
work with existing technology. applied to increase court fairness
from 71% to 82% without
impacting overall accuracy.

APAC Ecosystem Technical Enablement | Data & AI


And analysts agree, IBM is a
leader in AI Governance

“ After a thorough evaluation of


IBM's strategies and
capabilities, IDC has positioned
the company in the Leaders
category in this 2023 IDC
MarketScape for AI governance
platforms”

IDC
IDC MarketScape: Worldwide AI
Governance Platforms 2023 Vendor
Assessment

APAC Ecosystem Technical Enablement | Data & AI


Source: IDC, 2023
Business Impact & Value Proposition
Business value with
Key Decision Makers What do they care about?
watsonx.governance
Chief Executive Officer Organizational Accountability Enterprise-wide views, repeatable
processes, clear ownership
CEO Office Organizational Risks • Reduce operational expense, improve
• Chief Finance Officer • Risks to profitability decision making
• Chief Marketing Officer • Risks to brand • Protect against reputational damage
• Chief Risks Officer • Risks to Enterprise • Documented use cases, track AI and non-
AI risk together, workflows with approvals
Human Resources Diversity and Equality Bias monitoring and mitigation,
documentation
Privacy, Legal and Compliance Adherence to internal and external laws, Reduce risk of regulatory fines, document
regulatory requirements, policies, and procedures sources of model risk
Business and Technology Operational Efficiency • Established workflows with approvals
• Chief Data Officer • Team Productivity • Enterprise-wide dashboards
• Chief Technology Officer • Responsible AI metrics • Automation of metrics collection,
• VP of Information Technology • Operational Risk Mitigation documentation
End-Users / Key Influencers • Automation with ability to scale • Automatic monitoring and alerts for a
• Data Scientists • Stakeholder alignment variety of AI metrics
• ML Engineers • Consistent policies and processes • Automatic model documentation
• Model Validators • Enhanced collaboration across multiple
• Business Owners stakeholders Use case documentation
28
APAC Ecosystem Technical Enablement | Data & AI
Use Case Talent Customer Business
Care Process
Highlights Outsourcing
→ Enable fairness by → Ensure agents deliver → Monitor customer-
evaluating AI models quality results both specific process
used for hiring decisions immediately and long documents for personal
term without overhead identifiable information
→ Provide risk with customizable alerts
management capabilities → Monitor chats for red
to assess and mitigate flags of toxicity, personal → Monitor NLU text
significant regulatory or info, or off-topic models for drift, relevancy,
policy breaches exposed conversations and more for hands-off
by AI projects (e.g. NY HR monitoring
Regulation) → Flag when responses are
outside of social-norms and → Support BPO activities
→ Document model summaries are factually across finance, human
decisions and model incorrect, further reducing resources, procurement
behavior for lineage human intervention and more groups with
and explanations of dashboards and
HR decisions customizable workflows

APAC Ecosystem Technical Enablement | Data & AI 29


#watsonx-governance-FAQs

“Are there any new offerings within the rebranded


watsonx.governance portfolio, or are these largely
pre-existing solutions that have been repositioned
under the new “watsonx” naming convention?”

Paul
“What are the product components underneath
watsonx.governance?”
Emily

“Who are the decision makers and stakeholders for


Fatin
AI Governance?”
#watsonx-governance-FAQs

“Are there any new offerings within the rebranded


watsonx.governance portfolio, or are these largely
pre-existing solutions that have been repositioned
under the new “watsonx” naming convention?”

Paul
IBM watsonx.governance is not a Watson rebrand.
It is the next-generation AI Governance toolkit.
There will be new capabilities, in addition to the
capabilities currently provided by IBM AI watsonx
Governance such as integration with watsonx.ai.
#watsonx-governance-FAQs

“Who are the decision makers and stakeholders for


AI Governance?”

AI Governance key decision makers are:


Emily

CEO, CFO, CMO, CRO, CDO, CTO, VP IT


Important stakeholders for AI Governance are:
• HR, Privacy
• Legal and Compliance watsonx

• End-Users (Data Scientists, ML Engineers,


Model Validators and Business Owners)
Note that each persona has different priorities ☺
#watsonx-governance-FAQ

“What are the product components underneath


watsonx.governance?”

IBM watsonx.governance represents existing


Fatin

services/products below:
• Factsheets
• OpenScale
• OpenPages
Not only that, it has additional capabilities such as
readily integrated with other watsonx components!
watsonx
IBM Client Engineering
Let’s create ↷
What do we offer? Get the resources you need to close
A no-cost IBM multi-disciplinary team and expertise opportunities with Deal Support Requests.
to jointly innovate and rapidly prove solutions to your
business needs, leveraging IBM technologies. Request support from:
• Client Engineering
• IBM Financing
What value do you get?
• Ecosystem Engineering
Confidence in a technical solution to your compelling
• watsonx funding approval
business needs and accelerating time to value.

What is your commitment?


Your business and technology context, sponsorship,
subject matter experts, and data.

APAC Ecosystem Technical Enablement | Data & AI 35


Expert Labs helps clients reliably use
Large Language Models across the AI lifecycle

Services Offering Entry Points Business Needs Outcomes

Build • New Generative • Evaluate, Validate Technology Expert Labs


watsonx.governance AI users and Monitor risks will help clients deploy an
• Existing watsonx.ai related to LLMs LLM with confidence & risk
clients • Scale LLMs across control assurance:
• Clients using the AI lifecycle • Evaluate, validate, and
watsonx.ai or non- monitor risks related
IBM platforms for to LLMs
Generative AI • Operationalize LLMs
with trust and
confidence
• Model inventory for
model details
• Manage and mitigate
LLM risks

APAC Ecosystem Technical Enablement | Data & AI 36


Build watsonx.governance

Solution Outcomes Key Benefits

Expert Labs • Stand up a predefined


governance framework
• Evaluate, validate, and
monitor risks related to LLMs
• Use of Expert Labs assets
to expedite the service

Offering: • Onboard one Large Language


Model (LLM) in support of a
• Operationalize LLMs with
trust and confidence
execution

• Works with client’s current


Model single use case
• Model inventory for
AI application stack
regardless of maturity level
Management
• Configure two out of the box model details
and one custom LLM metric
to access and monitor • Manage and mitigate • Configures solution

Solution LLM performance LLM risks components to capture


facts and LLM metadata
• Enablement through co- • Builds a repeatable,
creation to monitor, catalog reusable and extendable
and govern the selected LLM architecture pattern
across the AI lifecycle

• Showcase & walkthrough end-


to-end AI Model Lifecycle

37
APAC Ecosystem Technical Enablement | Data & AI
IBM Technology Expert Labs
Build watsonx.governance
What is the watsonx.governance offering? Transaction Details
Build watsonx.governance helps clients evaluate, validate, and monitor
Technology Expert Retail Price Duration Transact by Transact by
their Large Language Models (LLMs) to manage the risks they pose. Our Labs Offering Part (SaaS) Part (Non-Saas)
Build Offering empowers clients to confidently deploy an LLM with their full
potential using the watsonx.ai or third-party environments, successfully
infusing business with Generative AI in a governed and controlled manner. Build $150k USD 7 Weeks Qty 1- D0GC9ZX Coming 2024
Our Build offering expedites the availability of LLMs for business use with watsonx.governance Qty 7- D0GC7ZX
trust and transparency along with enhancing risk management. – Model
Management

Key Outcomes
• Help clients stand up a pre-defined watsonx.governance framework with
Prerequisites
enablement by configuring one LLM to the platform with confidence and • watsonx.governance instance with AI Factsheets and OpenScale
risk control assurance. installed and ready for use case (SaaS essentials)
• Demonstrate the model configuration, validation and monitoring • One (1) LLM approve for use
processes
• Leverage a repeatable framework for configuring additional LLM use Resources and contacts
cases with minimal customization/extension • Product Manager: Sarah Memon
• Practice Manager: Nicole Smith
• Technical Contact: Sourav Mazumder
Deliverables • WW Data & AI Services Sales: Ted Trask
• Strategies for identifying and mitigating risks with LLMs • Find a Solution Architect here
• Methodology to Evaluate, Validate, and Monitor LLMs • Offering Page
• Deployed framework for operationalization of LLMs • Slack: #ask-expert-labs
• Planning Readout to review governance solution and outline • Solution Engineer Request Form
procedures, processes, and best practices for effective management
of watsonx.governance
38
APAC Ecosystem Technical Enablement | Data & AI
Seller Resources

Learn and Activate Briefings and Pilot Useful Resources Support Contacts

AI for Business Bring in the governance External Client Presentation Worldwide Sales:
Divya Sridharabalan
topic early in all of your
watsonx.governance sales client discussions on Competitive Insights WW Technical Sales:
materials on Seismic Generative AI and watsonx Melanie Brunache
Product Roadmap Product Management:
watsonx.governance Learn how to position Doug Stauber
Technical Enablement watsonx.governance Objection Handling Upasana Bhattacharya
series (IBM only) with watsonx.ai IBM Consulting:
ISC Guidance Phaedra Boinodiris
IBM Consulting PoV Land new or Expand your Shyam Nagarajan
playback recording active/planned watsonx.ai Frequently Asked Questions Client Engineering:
(IBM only) pilots to include Carlo Appugliese
watsonx.governance Paul Hake
Generative AI enablement Expert Labs Services:
series (IBM only) Jennifer Wales
Sarah Memon

APAC Ecosystem Technical Enablement | Data & AI 39


The IBM investment
in partnering with you

Three ways to Request a demo Get a client briefing Start a pilot program
Experience watsonx.governance Get a custom demonstration IBM® Working with IBM AI engineers,
get started with and see core capabilities with a watsonx capabilities. Understand prove watsonx.governance’s

watsonx.governance free demo. how watsonx.governance can be


used in your AI strategy.
value for the selected use case(s)
with a plan for adoption.
today 30-minute virtual meeting.
2-4 hours | Onsite or virtual 1-4 weeks

Learn more
Visit the website

40
APAC Ecosystem Technical Enablement | Data & AI
Competitive insights

• High-level overview of competitors


• Background on competitors
• Competitors’ key strengths
and weaknesses
• Competitor summary,
watsonx.governance differentiators
• Objection handling
• Setting traps for competitors

Generative AI on
Foundation Model

41
APAC Ecosystem Technical Enablement | Data & AI
watsonx.governance
Competitors at a glance
Traditional data science Niche players Open source Hyperscalers
players

Positioning watsonx.governance

These players tend to focus Niche players focus on solving IBM Research open-source Hyperscaler native offerings
primarily on the Build/Deploy only specific aspects of AI projects include AI Fairness tend to focus on Data Science /
aspects and are geared governance with significant 360, AI Explainability 360, and MLOps engineer personas, and
towards data scientists – not gaps in other related areas. Adversarial Robustness 360, any automation benefits are
broader stakeholders engaged making IBM one of the first lost outside their ecosystem.
in AI governance. IBM brings comprehensive vendors to open-source their
capabilities and an integrated AI governance tools. Sellers: emphasize IBM’s
Sellers: emphasize the benefits tools ecosystem to address ability to govern models built
of automation to drive AI governance in a holistic Many of these capabilities are and developed using different
productivity, monitoring and way, following a roadmap to integrated into IBM’s products, tools, as well as IBM’s robust,
evaluation, and integration innovation laid out by with the added benefits of integrated workflows and
with Model Risk Management. IBM Research. integration with a GRC platform. flexible deployment options.

42
APAC Ecosystem Technical Enablement | Data & AI
Types of competitors Generative AI Non-traditional AI
competitors vendors

• Designed for both • Many startups or


Generative AI is the next great innovation in
traditional and vendors who were
Artificial Intelligence that builds on Foundation
generative AI. traditionally in the data
Models (with unlabeled data) to support business
space (like Databricks)
use cases such as extraction, classification, text
• Supports open-
generation, summarizing, Q&A for Natural
source models and • Most begin with
Language Processing (NLP) or even coding
specialized models. variants of the GPT
language tasks.
models and provide
• Offer various degrees some chatbot like
Many vendors with little to no AI history are
of tuning optimization capabilities.
entering the generative AI race. Given the right
and some control
resources and training data, it is not difficult to
• Most offer prompt
create a generative model that can perform basic
• Has resources to engineering but not
chatbot capabilities. However, this does not mean
build proprietary much more
they can offer operationalized generative AI to
models
support non-chatbot business use cases in a
• Little to no governance
transparent, responsible, and governed manner.
• Offers API and often on both data and
SDK for application models
development
43
IBMEcosystem
APAC and Business Partner –| Data
Technical Enablement Internal
& AI Use Only
watsonx.governance
Competitors at a glance

Traditional AI Niche Open source

Emphasize IBM’s GRC solution, Emphasize IBM’s GRC solution,


Emphasize IBM’s GRC solution,
end-to-end capabilities, and end-to-end capabilities, and ease
hybrid capabilities, and
access to pre-trained of use for non-technical
platform-agnostic approach.
model use cases. stakeholders.

44
IBMEcosystem
APAC and Business Partner –| Data
Technical Enablement Internal
& AI Use Only
Traditional AI
Competitors

• Designed for both traditional and generative AI

• Support generative AI with a model hub/garden/factory

• Offer tuning and prompt engineering studio/interface

• Have resources to build proprietary models

• Offer APIs and often SDKs for application development

APAC Ecosystem Technical Enablement | Data & AI 45


Amazon Bedrock and SageMaker
Amazon’s Strengths How IBM wins
▪ Fully-managed service that makes foundation models ▪ Integrated, customizable Governance, Risk and Compliance
(FMs) from AI startups and Amazon available via API (GRC) with OpenPages*
▪ Wide range of FMs to suit a variety of use cases ▪ Automated metrics and metadata tracking with Factsheets
▪ Customize FMs with your own data ▪ Explainability for foundation models
Foundation models

▪ Integration with Amazon SageMaker ▪ Privacy/data leak and misinformation monitors


▪ Supports text generation and summarization, chatbots, ▪ Quality and health evaluations*
image generation ▪ Drift detection*
▪ Policy packs*

* Features available 1Q2024

▪ Detects unfair bias throughout the lifecycle ▪ Build and monitor models in a hybrid, platform-agnostic
▪ Real-time monitoring for accuracy, concept drift, and environment
security ▪ Integrated, customizable Governance, Risk and Compliance
▪ Explainability supports natural language processing and (GRC) with OpenPages
Predictive models

computer vision models ▪ Automated metrics and metadata tracking with Factsheets
▪ UI built for non-technical business stakeholders to view
metrics and perform governance tasks
▪ No-code and low-code solutions allow non-developers to
contribute

46
APAC Ecosystem Technical Enablement | Data & AI
Azure Machine Learning and OpenAI Service
Microsoft’s Strengths How IBM wins
▪ Provides access to OpenAI’s powerful language models ▪ Integrated, customizable Governance, Risk and Compliance
through REST API, Python SDK, or web interface (GRC) with OpenPages*
▪ Automated metrics and metadata tracking with Factsheets
▪ Models can be adapted to specific tasks ▪ Explainability for foundation models
Foundation models

▪ Privacy/data leak and misinformation monitors


▪ Prompt engineering with support for different techniques ▪ Quality and health evaluations*
▪ Drift detection*
▪ Filters input prompts and generated output for abuse and ▪ Policy packs*
harmful content

* Features available 1Q2024

▪ Build models at scale with an end-to-end machine learning ▪ Build and monitor models in a hybrid, platform-agnostic
lifecycle service environment

▪ Azure Data Studio integrates IntelliSense, code snippets, ▪ Models can be deployed locally or on the cloud
Predictive models

and customizable dashboards for model development


▪ Integrated, customizable Governance, Risk and Compliance
▪ Process structured and unstructured data in high volumes (GRC) workflows with OpenPages

▪ Provides real-time analytics and fully-managed ▪ Automated, integrated metrics and metadata with
infrastructure Factsheets

▪ Broader range of supported algorithms such as TensorFlow


47
APAC Ecosystem Technical Enablement | Data & AI
Dataiku
Dataiku’s Strengths How IBM wins
▪ Access public models, including the GPT-3 family, via API ▪ Bias/toxicity/hate detection for model input and output
▪ Integrated, customizable Governance, Risk and Compliance
▪ Incorporate large language models from Huggingface (GRC) with OpenPages*
▪ Automated metrics and metadata tracking with Factsheets
Foundation models

▪ Explainability for foundation models


▪ Privacy/data leak and misinformation monitors
▪ Quality and health evaluations*
▪ Drift detection*
▪ Policy packs*

* Features available 1Q2024

▪ Real-time monitoring with drift detection, automated ▪ Integrated, customizable Governance, Risk and Compliance
retraining, and scaling (GRC) with OpenPages

▪ Visual tools for coders and non-coders alike to build data ▪ Superior track record of IBM Research
Predictive models

pipelines
▪ Exceptional community support
▪ Automated documentation of regulatory compliance
information

▪ Detection for multiple different types of model drift

48
APAC Ecosystem Technical Enablement | Data & AI
Google Vertex AI
Google’s Strengths How IBM wins
▪ Provides enterprise-ready, task-specific models ▪ Integrated, customizable Governance, Risk and Compliance
(GRC) with OpenPages*
▪ Studio for tuning, testing, and deploying foundation models ▪ Automated metrics and metadata tracking with Factsheets
to production ▪ Privacy/data leak and misinformation monitors
Foundation models

▪ Quality and health evaluations*


▪ Filtering of input and model output for unfair ▪ Drift detection*
bias/toxicity/hate ▪ Policy packs*

* Features available 1Q2024

▪ Single development environment for entire data science ▪ Integrated, customizable Governance, Risk and Compliance
workflow (GRC) with OpenPages

▪ Native integration with BigQuery ▪ Build and monitor models in a hybrid, platform-agnostic
Predictive models

environment
▪ Support for all open-source frameworks
▪ Fully-customizable workflows

49
APAC Ecosystem Technical Enablement | Data & AI
Niche Competitors
• Frequently focus on one or two areas of strength and not the
entire lifecycle

• Most offer integration with and monitoring of other


foundation model services like ChatGPT, rather than
providing development and deployment of the models

• Some support for prompt engineering and bias/toxicity hate


detection

• Some provide open-source libraries for model monitoring


that also integrate with paid services

• Few address Governance, Risk, and Compliance (GRC)


standards

APAC Ecosystem Technical Enablement | Data & AI 50


Credo
Credo’s Strengths How IBM wins
▪ Risk and compliance management platform ▪ Credo is NOT an MMLOps or MLOPs platform

▪ AI use case and model registry capabilities ▪ AI Factsheets always up-to-date, unlike unstructured model
cards
Foundation models

▪ Supports 3rd party platform for metadata collection


▪ LLM metadata collection (OpenPages) for quality, safety, and
▪ Model Card Generation health metrics

▪ Policy packs provide regulations recommendations ▪ Integrated Fairness and Explainability tools

▪ Integrated with watsonx.ai for model building with support


for 3rd party models*
* Features available 1Q2024

▪ Pre-defined policy packs and assessment templates for ▪ Integrated, customizable Governance, Risk and Compliance
governmental regulations (GRC) with OpenPages

▪ Centralized repository of model metadata ▪ Build, monitor and deploy models in a hybrid, platform-
Predictive models

agnostic environment
▪ Collaborative governance workflows
▪ Automated metrics and metadata tracking with Factsheets

▪ Inventory and track models through the entire lifecycle

▪ Dashboards, metrics, and tools don’t require users to


understand code
51
APAC Ecosystem Technical Enablement | Data & AI
Legend

watsonx.governance – key feature comparison vs. CredoAI ✓

X
Available Today

Not available today


As of June, 2023
- Not sufficient info
Areas Watsonx.governance CredoAI
Model Lifecycle tracking & inventory ✓ X

Bias ✓ ✓
Explainability ✓ ✓
“Traditional” ML

Model Health ✓ ✓
Drift ✓ ✓
Model Factsheet ✓ X
Experiment Tracking X X
Integrated GRC Workflows & Dashboard ✓ X
X X
Adversarial robustness
not out-of-box ( Adversarial Robustness Toolkit from IBM Research ) (limited, experimental release)
Policy Packs X ✓
Model Turning / Prompt Eng. Interface Yes – integration with watsonx.ai ✓
Model facts Target Q4 2023 release X
Large-Language Models / GenAI

Experiment Tracking X X
Model lifecycle tracking & inventory Target Q4 2023 release X – only “AI registry”
Integrated GRC Workflows & Dashboard Target Q4 2023 release X
Bias, Toxicity, Hate Target Q4 2023 release X - only through other LLMOps tools
Explainability On roadmap – targeted 1H 2024 -
Privacy On roadmap – targeted 1H 2024 -
Usage monitoring Target Q4 2023 release ✓
Quality Health Evaluation Target Q4 2023 release -
Drift Target Q4 2023 release -
Security / Robustness X -
GenAI Policy Packs X ✓
52
Source: https://ibm.box.com/s/z8ubhx27r4caon25bcyux4im4z48an0v
APAC Ecosystem Technical Enablement | Data & AI
OneTrust
OneTrust’s Strengths How IBM wins
▪ Combine Governance, Risk and Compliance (GRC), ethics, ▪ OpenPages GRC solution and AI Factsheets for model
and environmental, social and corporate governance (ESG) metrics and metadata are integrated with model
workflows development tools
Predictive models

▪ Privacy and data governance frameworks and metrics ▪ OpenScale provides real-time monitoring of AI models with
dashboards fully configurable alert thresholds to ensure regulatory
compliance
▪ Compliance packages for prominent governmental and
industry regulations ▪ Watson Knowledge Catalog offers a full suite of data privacy
and protection solutions, and integrates with data science
tools and environments

53
APAC Ecosystem Technical Enablement | Data & AI
Arize
Arize’s Strengths How IBM wins
▪ Monitor model’s prompt/response embeddings ▪ Integrated, customizable Governance, Risk and Compliance
performance (GRC) with OpenPages*

▪ Analyze problematic clusters of responses to tune model ▪ Automated metrics and metadata tracking with Factsheets
Foundation models

performance
▪ Privacy/data leak and misinformation monitors
▪ Supports LLM-assisted evaluation metrics, task-specific
metrics, and user feedback ▪ Start from pre-built foundation model use cases

▪ Monitor unstructured data drift ▪ Policy packs*

▪ Evaluate models for anomalies, issues, and drift with open-


source Phoenix offering * Features available 1Q2024

▪ Strong focus in computer vision (CV) models ▪ Integrated, customizable Governance, Risk and Compliance
(GRC) with OpenPages
▪ Upload model data wherever it’s stored for a single view of
production models ▪ Automated collection of model metrics and metadata
Predictive models

tracking with AI Factsheets


▪ Connect model insights from Arize with your entire ML
ecosystem to alert, retrain, and improve your model ▪ Build and monitor models in a hybrid, platform-agnostic
environment
▪ Visualize training, validation, and production environments
for any given model (and version) to track the various ▪ Fully-customizable workflows
facets that impact performance

54
APAC Ecosystem Technical Enablement | Data & AI
Robust Intelligence
Robust Intelligence’s Strengths How IBM wins
▪ Automatically monitor model input for prompt injection, ▪ Much broader monitoring capabilities, including model
prompt extraction, and PII quality and health metrics such as latency, and LLM
explainability *
▪ Monitor model output for hallucination, PII, and hate
Foundation models

speech ▪ Integrated, customizable Governance, Risk and Compliance


(GRC) with OpenPages*
▪ Platform-agnostic model monitoring
▪ Automated metrics and metadata tracking across the entire
lifecycle with Factsheets
▪ Configurable risk assessments to ensure model
compliance

* Features available 1Q2024

▪ Monitor fairness and multiple types of drift ▪ More in-depth monitoring, including quality and
explainability for predictive model auditability
▪ Explain computer vision model classifications
▪ Integrated, customizable Governance, Risk and Compliance
Predictive models

▪ Configurable alert thresholds for model metrics (GRC) with OpenPages

▪ Automated collection of model metrics and metadata


tracking with AI Factsheets

▪ Fully-customizable governance workflows

55
APAC Ecosystem Technical Enablement | Data & AI
Fiddler
Fiddler’s Strengths How IBM wins
▪ Vector monitoring detects drift in OpenAI text embeddings ▪ Integrated, customizable Governance, Risk and Compliance
and generates interactive charts for further investigation (GRC) with OpenPages*

▪ Supports explainability for natural language models ▪ Start from pre-built foundation model use cases
Foundation models

▪ Evaluate robustness of large language models and natural ▪ Automated metrics and metadata tracking with Factsheets
language processing models with open-source Fiddler
Auditor offering ▪ Privacy/data leak and misinformation monitors

▪ Policy packs*

* Features available 1Q2024

▪ Detect unfair bias in training data as well as in production ▪ Integrated, customizable Governance, Risk and Compliance
models (GRC) with OpenPages

▪ Comprehensive model validation and monitoring, and can ▪ Build and monitor models in a hybrid, platform-agnostic
Predictive models

simulate input scenarios environment

▪ Model latency, traffic, and performance monitoring ▪ Fully-customizable workflows

56
APAC Ecosystem Technical Enablement | Data & AI
Weights & Biases
W&B’s Strengths How IBM wins
▪ Lightweight experiment logging with Python code ▪ Bias/toxicity/hate detection for model input and output
▪ Integrated, customizable Governance, Risk and Compliance
▪ Large language model (LLM) debugging tool for reviewing (GRC) with OpenPages*
results and gathering insights on model behavior ▪ Automated metrics and metadata tracking with Factsheets
Foundation models

▪ Explainability for foundation models


▪ Model architecture view provides detailed description of all ▪ Privacy/data leak and misinformation monitors
settings, tools, agents, and prompt details in a chain ▪ Drift detection*
▪ Policy packs*
▪ Easily run, log, and package any evaluation from the
OpenAI Evals repository of evaluation suites

* Features available 1Q2024

▪ Centralized model registry and lifecycle management ▪ Integrated, customizable Governance, Risk and Compliance
(GRC) with OpenPages
▪ Lightweight experiment logging with Python code
▪ Build and monitor models in a hybrid, platform-agnostic
Predictive models

▪ Interactive graphs and tables for comparing model metrics environment


and performance
▪ Fully-customizable workflows

▪ Better visibility into model performance metrics with


OpenScale and OpenPages

57
APAC Ecosystem Technical Enablement | Data & AI
ValidMind
ValidMind’s Strengths How IBM wins
▪ Platform-agnostic to allow monitoring of any model. ▪ Bias/toxicity/hate detection for model input and output

▪ Offers a framework to automatically execute tests inside ▪ Explainability for foundation models
the dev environment, without disruption to existing
Foundation models

workflows. ▪ Privacy/data leak and misinformation monitors

▪ Interactive dashboard to identify top model risks, ▪ Drift detection*


collaborate to improve models, and organize the model
review workflow. ▪ Policy packs*

* Features available 1Q2024

▪ Information captured through the model lifecycle is used ▪ Better visibility into model performance metrics with
to automatically generate documentation in a user-friendly OpenScale and OpenPages
dashboard.
▪ Automated bias, drift, and accuracy detection for runtime
Predictive models

▪ Easy deployment on any cloud. models

▪ Allows for real-time collaboration between model ▪ Explainability for auditability compliance of predictive
developers and model validators. models

▪ Strong focus on the Financial Services Sector.

58
APAC Ecosystem Technical Enablement | Data & AI
Open-Source
Competitors

APAC Ecosystem Technical Enablement | Data & AI 59


Lens by Credo AI
Responsible AI Assessment Framework

Overview Features Targeted Personas


▪ demo release: 2022 (Credo AI start-up founded in 2021)* Credo AI (end-to-end AI governance platform):
▪ target: assessing AI systems ▪ context-driven AI governance recommendations: defining Business Business Data Data ML
responsible AI requirements Owner Analyst Scientist Engineer Engineer
▪ ecosystem: Lens is an open-source assessment framework
for AI assessment used in the Credo AI multi-stakeholder ▪ multi-stakeholder AI governance workflows: cross team
collaboration with tools, reviews & attestation flows
governance platform App
▪ AI policy center: ensuring regulatory relevance with
▪ proclaimed benefits of Lens: adding responsible AI
customizable policy packs and assessment templates
assessments to ML development pipeline; easy integration
▪ risk translation engine: risk & compliance scores from
of performance, fairness, explainability, privacy and
assessments and process evidence, generation of artifacts
security assessments into existing workflows
(e.g. dashboards, model cards, audit trails)
▪ proclaimed benefits of Credo AI: multi-stakeholder ▪ seamless stack integration: on top of existing MLOps, GRC
alignment, AI assessment (via Lens) & risk assessment, and infrastructure; automatically extracts evidence
regulatory compliance managmt., scalable AI governance Credo AI Lens (integrated AI assessment framework):
▪ responsible AI assessment reports: as notebook or html
Pricing containing plots with results for each assessment run
▪ Lens is open-source and free ▪ AI governance integration with the full Credo AI
▪ Code repository on GitHub Governance App, automatically scores technical
▪ Lens API references and documentation assessment results for risk and compliance
▪ usability: Lens can be pulled into Notebook or any Python
Environment
* Good to know: Credo AI Head of Product is Susannah Shattuck, former IBM Product Manager for Watson OpenScale

Source: https://ibm.seismic.com/Link/Content/DCmRVFDWFB4c28T24FXXDMpVH4pG 60
APAC Ecosystem Technical Enablement | Data & AI
What to focus with:

Lens by Credo AI Data Science & MLOPs


AI Governance
Responsible AI Assessment Framework Entire CPD Platform

Credo AI Lens’ Strengths IBM’s Counter


Assessment capabilities: ▪ fairness: parity metrics for binary classif. ▪ AI Factsheets: collection of model facts
Lens covers diverse AI risk ▪ dataset: proxy variables, demographic through AI lifecycle about: model purpose
areas with four parity analysis & governance; data transformation,
assessment areas ▪ custom NLP: toxicity, profanity, verbosity features & performance; fairness, privacy
▪ disaggregated performance assessment & verification, drift, learning ...

Standardization of AI ▪ easy integration of model and dataset ▪ OpenPages Model Risk Governance:
assessment: acceleration assessments into existing workflow combines flexible data model with document
of time to productionize ▪ paired with Credo AI platform, management, workflow capabilities & BI;
new solutions assessments are more actionable; foster ensures greater level of engagement
a more collaborative AI systems process ▪ AI Factsheets integration available

Extensibility: ▪ own code can be brought into Lens by ▪ OpenPages Model Risk Governance as
Lens is extensible and defining own assessments; configurable platform that is customizable
easily augmented with ▪ automatic selection of assessments can to better align to specific regulator
custom modules be overridden; almost any parameter of requirements; flexible and modular toolset
underlying module be changed (“standardization to consistency”)

Source: https://ibm.seismic.com/Link/Content/DCmRVFDWFB4c28T24FXXDMpVH4pG
APAC Ecosystem Technical Enablement | Data & AI

You might also like