UC Berkeley.


Inference by Enumeration
1. Select entries consistent with evidence
2. Sum out H to get joint of Query and evidence
3. Normalize
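The three steps above can be sketched on a made-up joint table (variables Q, E, H and all numbers are illustrative, not from the notes):

```python
# Toy joint distribution over (Q, E, H): maps an assignment to its probability.
joint = {
    (0, 0, 0): 0.10, (0, 0, 1): 0.20,
    (0, 1, 0): 0.05, (0, 1, 1): 0.15,
    (1, 0, 0): 0.10, (1, 0, 1): 0.10,
    (1, 1, 0): 0.20, (1, 1, 1): 0.10,
}

def enumerate_query(joint, evidence_value):
    # 1. Select entries consistent with the evidence E = evidence_value.
    selected = {k: p for k, p in joint.items() if k[1] == evidence_value}
    # 2. Sum out H to get the (unnormalized) joint of Query and evidence.
    unnorm = {}
    for (q, e, h), p in selected.items():
        unnorm[q] = unnorm.get(q, 0.0) + p
    # 3. Normalize to obtain P(Q | E = evidence_value).
    z = sum(unnorm.values())
    return {q: p / z for q, p in unnorm.items()}

print(enumerate_query(joint, 1))  # → {0: 0.4, 1: 0.6}
```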

Variable Elimination
While there are still hidden variables:
1. Pick a hidden variable H
2. Join all factors that have H
3. Marginalize H (eliminate it by summing out), producing a new factor; normalize at the end
Complexity is determined by the largest factor generated. An ordering that only produces small factors doesn't always exist; finding the best ordering is NP-hard in general Bayes nets.
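The join/marginalize loop can be sketched with dictionary-based factors (the helper names `join`/`sum_out` and the toy CPTs are illustrative, not from the notes):

```python
from itertools import product

# A factor is (vars_tuple, table), where table maps an assignment tuple
# (aligned with vars_tuple, values in {0, 1}) to a number.

def join(f1, f2):
    """Join two factors: pointwise product over the union of their variables."""
    v1, t1 = f1
    v2, t2 = f2
    vs = v1 + tuple(v for v in v2 if v not in v1)
    table = {}
    for assign in product([0, 1], repeat=len(vs)):
        a = dict(zip(vs, assign))
        table[assign] = t1[tuple(a[v] for v in v1)] * t2[tuple(a[v] for v in v2)]
    return vs, table

def sum_out(f, var):
    """Marginalize var out of factor f (eliminate it by summing)."""
    vs, t = f
    i = vs.index(var)
    table = {}
    for assign, p in t.items():
        key = assign[:i] + assign[i + 1:]
        table[key] = table.get(key, 0.0) + p
    return vs[:i] + vs[i + 1:], table

# Eliminate H from P(H) and P(Q | H): join, then sum out, giving a factor on Q.
pH = (("H",), {(0,): 0.3, (1,): 0.7})
pQgH = (("Q", "H"), {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.2, (1, 1): 0.8})
f = sum_out(join(pH, pQgH), "H")
print(f)  # → (('Q',), {(0,): 0.41, (1,): 0.59})
```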

D-Separation: A path is active if each triple along it is active. To check whether X and Y are conditionally independent given Z: if there exists an active path between X and Y, conditional independence is not guaranteed. If every path is inactive, X is independent of Y given Z.

Prior Sampling: Start at the top of the Bayes net and work downward. Sample each variable V from P(V | parents). Retain all samples.

Rejection Sampling: We want to estimate P(Query | Evidence). Start at the top of the Bayes net and work downward, sampling each variable; as soon as a sampled variable is inconsistent with the evidence, reject the whole sample.

Likelihood Weighting: We want to estimate P(Query | Evidence). Start at the top of the Bayes net and work downward. When we reach a non-evidence variable, sample it; when we reach an evidence variable, fix it to its observed value and weight the sample by P(evidence variable | parents). With multiple evidence variables, the weight of a sample is the product of P(E | parents) over all evidence variables E. Then P(Query | Evidence) = Sum(weights of samples where the Query holds) / Sum(weights of all samples), since every sample is consistent with the evidence.

Gibbs Sampling: We want to estimate P(Query | Evidence). To get one Gibbs sample: Step 1: randomly choose an instantiation of all variables consistent with the evidence. Step 2: repeat: choose a non-evidence variable H and resample it from P(H | all other variables), obtaining an updated sample. Important: we condition on all other variables, not just the parents. Ideally you would repeat step 2 infinitely many times; in practice, repeat it a "large" number of times. Repeat steps 1 and 2 for more Gibbs samples, i.e. if you want 5 Gibbs samples, do steps 1 and 2 five times.

When to use which? If we don't know anything about the probability we are interested in: prior sampling can answer a variety of queries (priors, joints, etc.), and we can estimate P(Query | Evidence) as long as we have at least one sample consistent with the evidence.
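The likelihood-weighting procedure can be sketched on a hypothetical two-node net C → R (node names and CPT numbers are illustrative, not from the notes):

```python
import random

P_C = {1: 0.5, 0: 0.5}           # P(C)
P_R_given_C = {1: 0.8, 0: 0.2}   # P(R=1 | C=c)

def lw_sample(evidence_r=1):
    """One weighted sample with evidence R = evidence_r."""
    c = 1 if random.random() < P_C[1] else 0     # non-evidence: sample it
    p_r1 = P_R_given_C[c]
    w = p_r1 if evidence_r == 1 else 1 - p_r1    # evidence: fix it, weigh sample
    return c, w

def estimate_p_c1_given_r1(n=100_000, seed=0):
    random.seed(seed)
    num = den = 0.0
    for _ in range(n):
        c, w = lw_sample(evidence_r=1)
        den += w                                  # sum of all weights
        if c == 1:
            num += w                              # weights where Query holds
    return num / den

print(estimate_p_c1_given_r1())  # close to the exact P(C=1 | R=1) = 0.8
```

Every sample is kept, but a sample whose C explains the evidence poorly contributes less weight, which is exactly why no samples are rejected.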
If we know the probability we are interested in (i.e. we have evidence for P(Query | Evidence)): in general, likelihood weighting is better than rejection sampling, since no samples are rejected and the evidence is fixed. Likelihood weighting has its limits as well; how well it works depends on which probability you are interested in and on the structure of your Bayes net.
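As a concrete (hypothetical) instance of Gibbs sampling, consider a small net C → R, C → S with evidence S = 1; all names and CPT numbers below are made up for illustration:

```python
import random

P_C = {1: 0.5, 0: 0.5}
P_R_given_C = {1: 0.8, 0: 0.2}   # P(R=1 | C=c)
P_S_given_C = {1: 0.9, 0: 0.3}   # P(S=1 | C=c)

def bern(p):
    return 1 if random.random() < p else 0

def resample_C(r, s):
    # Resample C from P(C | r, s) ∝ P(C) P(r | C) P(s | C):
    # we condition on ALL other variables, not just C's parents.
    w = {}
    for c in (0, 1):
        pr = P_R_given_C[c] if r == 1 else 1 - P_R_given_C[c]
        ps = P_S_given_C[c] if s == 1 else 1 - P_S_given_C[c]
        w[c] = P_C[c] * pr * ps
    return bern(w[1] / (w[0] + w[1]))

def resample_R(c):
    # R's Markov blanket is just its parent C (R has no children),
    # so P(R | all other variables) = P(R | c).
    return bern(P_R_given_C[c])

def gibbs_estimate(n_steps=50_000, burn_in=1_000, seed=0):
    random.seed(seed)
    c, r, s = bern(0.5), bern(0.5), 1   # Step 1: init consistent with S = 1
    count = total = 0
    for t in range(n_steps):            # Step 2: repeat "a large amount"
        if bern(0.5):                   # pick a non-evidence variable
            c = resample_C(r, s)
        else:
            r = resample_R(c)
        if t >= burn_in:
            total += 1
            count += c
    return count / total

print(gibbs_estimate())  # should approach P(C=1 | S=1) = 0.45 / 0.60 = 0.75
```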

Worst-case time is O(d^n) for n variables with domain size d, and the space needed to store the joint distribution is the same.

Probability Rules
Product Rule: P(x, y) = P(x) P(y | x)
Chain Rule: P(x1, ..., xn) = ∏_i P(xi | x1, ..., x(i-1))
Bayes Rule: P(x | y) = P(y | x) P(x) / P(y)
X is conditionally independent of Y given Z (X ⊥ Y | Z) iff P(x, y | z) = P(x | z) P(y | z), or (equivalently) P(x | y, z) = P(x | z).
VPI properties: non-negative, non-additive, and order-independent. There is no such thing as the value of imperfect information. In general, if Parents(U) ⊥ E' | e, then VPI(E' | e) = 0.
Bayes Nets
Assume conditional independence: the joint probability of x1, ..., xn = ∏_i P(xi | parents(Xi)). A joint distribution over N Boolean variables has size 2^N; an N-node Bayes net whose nodes have up to k parents has size O(N · 2^(k+1)).
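The product and Bayes rules can be checked numerically on a made-up two-variable distribution (all numbers are illustrative):

```python
P_x = {0: 0.6, 1: 0.4}
P_y_given_x = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}   # P(y | x)

# Product rule: P(x, y) = P(x) P(y | x)
P_xy = {(x, y): P_x[x] * P_y_given_x[x][y] for x in P_x for y in (0, 1)}

# Marginal P(y), summing out x
P_y = {y: sum(P_xy[(x, y)] for x in P_x) for y in (0, 1)}

# Bayes rule: P(x | y) = P(y | x) P(x) / P(y)
P_x_given_y = {(x, y): P_y_given_x[x][y] * P_x[x] / P_y[y]
               for x in P_x for y in (0, 1)}

# Consistency check: P(x | y) must also equal P(x, y) / P(y)
for (x, y), p in P_x_given_y.items():
    assert abs(p - P_xy[(x, y)] / P_y[y]) < 1e-12

print(P_x_given_y[(1, 1)])  # P(x=1 | y=1) = 0.36 / 0.54 = 2/3
```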

Markov Models, HMMs, and Particle Filtering
* Hidden process: the future depends on the past via the present. Evidence variables are not guaranteed to be independent.
* Stationary distribution: P∞(x) = ∑_x' P(x | x') P∞(x').
* Base cases: evidence update: P(x | e) = P(e | x) P(x) / P(e); time step: P(x_{t+1}) = ∑_{x_t} P(x_{t+1} | x_t) P(x_t).
* Passage of time (one time step passes): B'(X_{t+1}) = ∑_{x_t} P(X_{t+1} | x_t) B(x_t).
* Observation (P(X | previous evidence) meets new evidence): B(X_{t+1}) ∝ P(e_{t+1} | X_{t+1}) B'(X_{t+1}), then renormalize. Past and future are independent given the present. Current belief B(X_t) = P(X_t | e_{1:t}); beliefs get "pushed" through the transitions.
* HMMs are chain-structured Bayes nets. The forward algorithm combines the elapse-time and observation steps.
* Particle filtering: elapse time, observe (weight particles), resample. DBN particle filtering extends this to dynamic Bayes nets.
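A forward-algorithm sketch for a hypothetical two-state HMM (transition and emission numbers are illustrative), combining the elapse-time and observation updates:

```python
T = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.3, 1: 0.7}}   # P(x_{t+1} | x_t)
E = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # P(e | x)

def forward(belief, evidence):
    # Passage of time: B'(x) = sum over x' of P(x | x') B(x')
    predicted = {x: sum(T[xp][x] * belief[xp] for xp in belief) for x in (0, 1)}
    # Observation: B(x) ∝ P(e | x) B'(x), then renormalize
    unnorm = {x: E[x][evidence] * predicted[x] for x in (0, 1)}
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}

b = {0: 0.5, 1: 0.5}         # uniform initial belief
for e in [1, 1, 0]:          # push beliefs through an evidence sequence
    b = forward(b, e)
print(b)                     # posterior over the hidden state after 3 steps
```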
