
MACHINE LEARNING TECHNIQUES

Presented by
[Link], B.E.,
Assistant Professor,
Department of Computer Science,
Sri Sarada Niketan College for Women, Karur.
Introduction to Graphical Models

Definition: Graphical models are probabilistic models that represent dependencies among random variables using graphs.

Types of Graphical Models:
• Undirected Models (Markov Random Fields)
• Directed Models (Bayesian Networks)

Applications: Machine learning, computer vision, natural language processing, etc.
Markov Random Fields (MRFs)

Definition: An undirected graphical model in which the nodes represent random variables and the edges represent dependencies.

Properties:
• Markov Property: Each node is conditionally independent of all other nodes given its neighbors.
• Interactions between variables are local.

Examples: Image segmentation, social network modeling. A toy numerical example follows.
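As a toy example (values invented for illustration, not part of the original slides), a pairwise MRF over three binary variables on a chain A - B - C defines an unnormalized joint as a product of edge potentials:

import itertools

# Pairwise MRF over three binary variables on a chain: A - B - C.
# Edge potentials (invented values) favor neighbors that agree.
def edge_potential(x, y):
    return 2.0 if x == y else 1.0

edges = [(0, 1), (1, 2)]

def unnormalized(assignment):
    # Product of edge potentials: the MRF's unnormalized joint.
    p = 1.0
    for i, j in edges:
        p *= edge_potential(assignment[i], assignment[j])
    return p

# Partition function Z sums over all 2^3 assignments.
Z = sum(unnormalized(a) for a in itertools.product((0, 1), repeat=3))
print(unnormalized((0, 0, 0)) / Z)  # P(A=0, B=0, C=0) = 4/18

Normalizing requires the partition function Z, which is why exact inference in general MRFs is expensive: Z sums over exponentially many assignments.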
Bayesian Networks (Directed Graphical Models)

Definition: A directed acyclic graph (DAG) in which nodes represent random variables and edges represent conditional dependencies.

Properties:
• Each node is conditionally independent of its non-descendants given its parents.

Example: Diagnosis systems, e.g., medical diagnosis based on symptoms and disease probabilities; a small numerical sketch follows.
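As an illustration (the probabilities are invented for this sketch), a two-node network Disease → Symptom factorizes the joint as P(D) · P(S | D), and posterior inference reduces to Bayes' rule:

# Two-node Bayesian network: Disease -> Symptom.
# CPT numbers are invented for illustration.
p_disease = 0.01                      # P(D = 1)
p_symptom_given = {1: 0.9, 0: 0.05}   # P(S = 1 | D = d)

def joint(d, s):
    # The DAG factorizes the joint as P(D, S) = P(D) * P(S | D).
    pd = p_disease if d == 1 else 1.0 - p_disease
    ps = p_symptom_given[d] if s == 1 else 1.0 - p_symptom_given[d]
    return pd * ps

# Posterior P(D = 1 | S = 1) by Bayes' rule.
evidence = joint(1, 1) + joint(0, 1)
print(joint(1, 1) / evidence)  # ~0.154

Even with a 90% true-positive rate, the posterior is only about 15% because the disease prior is low; the factorization P(D, S) = P(D) · P(S | D) is exactly what the DAG encodes.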
Conditional Independence in Bayesian Networks

Conditional Independence:
• Two variables are conditionally independent given a third if, once the third is known, the value of one provides no further information about the other.
• This property simplifies the factorization of joint distributions (see the factorization below).
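Concretely, the conditional-independence assumptions of a Bayesian network over variables X1, ..., Xn yield the standard factorization

P(X1, ..., Xn) = Π_i P(Xi | Parents(Xi))

For example, in the chain A → B → C this gives P(A, B, C) = P(A) · P(B | A) · P(C | B), so C is independent of A once B is known.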
Inference in Bayesian Networks
• Inference involves computing the posterior distribution of some variables given observed evidence.
• Methods:
• Exact Inference: Variable elimination, the junction tree algorithm.
• Approximate Inference: Sampling methods, i.e., Markov chain Monte Carlo (MCMC) techniques such as Gibbs sampling; a small Gibbs sketch follows.
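To make the approximate side concrete, here is a minimal Gibbs sampling sketch, reusing the toy chain MRF from the earlier example (all potential values invented). Each step resamples one variable from its conditional given its neighbors, which by the Markov property is all it depends on:

import random

random.seed(0)

# Gibbs sampling on the toy 3-variable chain MRF:
# P(x) is proportional to the product of edge potentials.
def edge_potential(x, y):
    return 2.0 if x == y else 1.0

neighbors = {0: [1], 1: [0, 2], 2: [1]}

def gibbs_step(x):
    # Resample each variable from its conditional given its neighbors.
    for i in neighbors:
        w = []
        for v in (0, 1):
            p = 1.0
            for j in neighbors[i]:
                p *= edge_potential(v, x[j])
            w.append(p)
        x[i] = 0 if random.random() < w[0] / (w[0] + w[1]) else 1

x = [0, 0, 0]
hits, n = 0, 20000
for _ in range(n):
    gibbs_step(x)
    if x[0] == x[1] == x[2]:
        hits += 1
print(hits / n)  # converges to the exact value 8/18 ~ 0.444

The empirical frequency approaches the exact value 8/18, which can be verified by enumerating all eight assignments.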
Learning in Graphical Models

Goal: Learn the structure and parameters of the model from data.

Types of Learning:
• Supervised Learning: Learn from labeled data (e.g., Bayesian networks with known dependencies).
• Unsupervised Learning: Learn from unlabeled data (e.g., learning the structure of a Markov random field).
• Semi-supervised Learning: A combination of both. A counting-based parameter example follows.
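On the parameter side, supervised maximum likelihood estimation for a discrete Bayesian network reduces to counting and normalizing. A minimal sketch with invented (disease, symptom) samples:

from collections import Counter

# Labeled (disease, symptom) samples; values invented for illustration.
data = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1), (0, 0), (0, 0)]

# MLE for a conditional probability table: count and normalize.
pair_counts = Counter(data)
d_counts = Counter(d for d, _ in data)
for d in (0, 1):
    print(f"P(S=1 | D={d}) =", pair_counts[(d, 1)] / d_counts[d])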
Hidden Markov Models (HMMs)
• Definition: A statistical model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states.
• Structure:
• States: Hidden variables that influence the observed data.
• Observations: Observable variables that depend on the hidden states.
• Examples: Speech recognition, part-of-speech tagging. A toy model in code follows.
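In code, an HMM is just three tables: initial state probabilities, transitions, and emissions. A minimal generative sketch (the weather/activity numbers are invented for illustration):

import random

random.seed(1)

states = ["Rainy", "Sunny"]
pi = {"Rainy": 0.6, "Sunny": 0.4}                        # initial P(s)
A = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},              # transitions P(s' | s)
     "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
B = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},  # emissions P(o | s)
     "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def draw(dist):
    # Sample a key from a {key: probability} dict.
    r, acc = random.random(), 0.0
    for k, p in dist.items():
        acc += p
        if r < acc:
            return k
    return k

# Generate a length-5 sequence of (hidden state, observation) pairs.
s = draw(pi)
for _ in range(5):
    print(s, draw(B[s]))
    s = draw(A[s])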
Inference in Hidden Markov Models
• Forward Algorithm: Computes the probability of the observed sequence given the model.
• Viterbi Algorithm: Finds the most likely sequence of hidden states given the observed data. Sketches of both follow.
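Both algorithms are dynamic programming over the same trellis. A minimal sketch, reusing the toy Rainy/Sunny model above (probabilities are illustrative; a production version would work in log space to avoid underflow):

states = ["Rainy", "Sunny"]
pi = {"Rainy": 0.6, "Sunny": 0.4}
A = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
     "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
B = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
     "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def forward(obs):
    # alpha[s] = P(o_1..o_t, state_t = s); sum out the state at the end.
    alpha = {s: pi[s] * B[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: B[s][o] * sum(alpha[r] * A[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

def viterbi(obs):
    # delta[s] = probability of the best state path ending in s.
    delta = {s: pi[s] * B[s][obs[0]] for s in states}
    back = []
    for o in obs[1:]:
        ptr = {s: max(states, key=lambda r: delta[r] * A[r][s])
               for s in states}
        delta = {s: B[s][o] * delta[ptr[s]] * A[ptr[s]][s] for s in states}
        back.append(ptr)
    # Trace back from the best final state.
    path = [max(states, key=delta.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

obs = ["walk", "shop", "clean"]
print(forward(obs))   # probability of the observed sequence
print(viterbi(obs))   # most likely hidden state sequence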
Inference and Learning in CRFs
• Conditional Random Fields (CRFs) are discriminative undirected models of the conditional distribution P(outputs | inputs).
• Inference: As in MRFs, inference computes the marginal probabilities of the output variables (e.g., using belief propagation or dynamic programming).
• Learning: Maximum Likelihood Estimation (MLE), typically via gradient-based optimization (e.g., stochastic gradient descent).
• Structured Prediction: CRFs are often used for tasks where the output is a structured label (e.g., sequence labeling); a minimal scoring sketch follows.
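As a sketch of what scoring a structured label means for a linear-chain CRF (all weights invented for illustration): the model assigns each tag sequence a score from emission and transition terms, and normalizes by a partition function computed with the forward algorithm in log space:

import math

# Minimal linear-chain CRF sketch; two tags, toy weights.
tags = [0, 1]
emit = {"the": [1.0, -1.0], "dog": [-1.0, 1.5]}   # emit[word][tag]
trans = [[0.5, -0.2], [-0.3, 0.8]]                # trans[prev][curr]

def score(x, y):
    # Unnormalized log-score of tag sequence y for sentence x.
    s = sum(emit[w][t] for w, t in zip(x, y))
    s += sum(trans[a][b] for a, b in zip(y, y[1:]))
    return s

def log_partition(x):
    # Forward algorithm in log space over all tag sequences.
    alpha = [emit[x[0]][t] for t in tags]
    for w in x[1:]:
        alpha = [emit[w][t] + math.log(sum(math.exp(alpha[p] + trans[p][t])
                                           for p in tags))
                 for t in tags]
    return math.log(sum(math.exp(a) for a in alpha))

x, y = ["the", "dog"], [0, 1]
# Conditional probability P(y | x) = exp(score(x, y)) / Z(x).
print(math.exp(score(x, y) - log_partition(x)))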
Generalization in Graphical Models

• Generalization: The ability of a graphical model to perform well on unseen data.
• Overfitting: The model may memorize the training data if not regularized properly.
• Regularization Techniques: L2 regularization, dropout, early stopping. An L2 example follows.
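As one concrete instance, L2 regularization adds a penalty lambda * ||w||^2 to the training loss, discouraging large weights. A minimal sketch (the loss value and lambda are invented):

# L2 regularization: penalize large weights on top of the data loss.
lam = 0.1                      # regularization strength (illustrative)
w = [2.0, -3.0, 0.5]           # model weights

def regularized_loss(data_loss, w):
    # total loss = data loss + lambda * ||w||^2
    return data_loss + lam * sum(wi * wi for wi in w)

print(regularized_loss(1.25, w))  # 1.25 + 0.1 * 13.25 = 2.575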
THANK YOU
