Round 1: assign each training point equal weight. Calculate the error rate of each h (hypothesis) and pick the best h, the one with the smallest error rate. Use it to compute new weights, then repeat for round 2, round 3, and so on.
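A minimal sketch of one such round. The 1-D points, labels, and threshold hypotheses below are made up for illustration; only the "pick the h with the smallest weighted error" step comes from the notes.

```python
# One boosting round, sketched with made-up 1-D points and threshold
# hypotheses (none of these names or numbers come from the notes).

def best_hypothesis(points, labels, weights, hypotheses):
    """Return (h, weighted error) for the h with the smallest error rate."""
    def weighted_error(h):
        return sum(w for x, y, w in zip(points, labels, weights) if h(x) != y)
    return min(((h, weighted_error(h)) for h in hypotheses), key=lambda p: p[1])

# Round 1: assign each training point equal weight.
points = [1, 3, 5, 7]
labels = [+1, +1, -1, -1]
weights = [1 / len(points)] * len(points)

hypotheses = [
    lambda x: +1 if x < 6 else -1,   # "x < 6"
    lambda x: +1 if x < 2 else -1,   # "x < 2"
    lambda x: +1 if x > 4 else -1,   # "x > 4"
]
h, err = best_hypothesis(points, labels, weights, hypotheses)
```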
The three chosen hypotheses, the points each one misclassifies, and their voting powers:

x < 6   misclassifies C     α_1 = 0.693
x < 2   misclassifies B, E  α_2 = 0.549
x > 4   misclassifies A, D  α_3 = 0.805

These three weights must satisfy the triangle inequality: any two of them added together must be greater than the third. Since each training point is misclassified by at most one hypothesis, this guarantees the weighted vote gets every point right.
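A sketch that recomputes the three voting powers from α = 1/2 ln((1 − e)/e) and checks the triangle condition numerically. The underlying error rates 1/5, 1/4, and 1/6 are an assumption; they reproduce 0.693, 0.549, and 0.805 after rounding, but the notes do not show them.

```python
import math

# Voting power from error rate: alpha = 1/2 * ln((1 - e) / e).
def voting_power(e):
    return 0.5 * math.log((1 - e) / e)

# Assumed error rates that reproduce the slide's three voting powers.
alphas = [voting_power(e) for e in (1/5, 1/4, 1/6)]

# Each point is misclassified by at most one hypothesis, so the ensemble
# is perfect iff any two voting powers sum to more than the third.
a1, a2, a3 = alphas
perfect = a1 + a2 > a3 and a1 + a3 > a2 and a2 + a3 > a1
```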
Some questions:
What if we chose a classifier with an error rate of 1/2? What's its voting power? By α = 1/2 ln((1 − e_t)/e_t), it is 1/2 ln(1) = 0: it gets no say in the vote.
What if you wanted to continue updating weights using a hypothesis with an error rate of 1/2?
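To make the second question concrete: under the standard AdaBoost reweighting rule, misclassified points are scaled by 1/(2e) and correctly classified ones by 1/(2(1 − e)). That rule is an assumption here, since it is not written out in these notes, but with it an error rate of exactly 1/2 leaves every weight unchanged, so the round accomplishes nothing and the loop would repeat forever.

```python
import math

def update_weights(weights, misclassified, error):
    """AdaBoost-style reweighting (assumed rule): misclassified points are
    scaled by 1/(2*error), correct ones by 1/(2*(1 - error))."""
    return [w / (2 * error) if i in misclassified else w / (2 * (1 - error))
            for i, w in enumerate(weights)]

# A hypothesis with error rate 1/2 has zero voting power...
alpha = 0.5 * math.log((1 - 0.5) / 0.5)   # = 0

# ...and updating with it leaves every weight unchanged, so the next
# round faces exactly the same weighted training set.
weights = [0.25, 0.25, 0.25, 0.25]
new_weights = update_weights(weights, misclassified={0, 1}, error=0.5)
```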
Neat facts

Suppose the classifiers misclassify disjoint sets of points:
h1 misclassifies A, B, C
h2 misclassifies D, E
h3 misclassifies F, G
Since no point is misclassified by more than one classifier, just assign equal voting weights to all of them and the majority vote is a perfect ensemble. However, if classifiers make overlapping errors, you are not guaranteed to get a perfect ensemble.
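A quick sketch of the equal-weights fact, using the hypothetical error sets above (points A through G, with every true label arbitrarily set to +1 for the illustration):

```python
# With disjoint error sets (h1 misses ABC, h2 misses DE, h3 misses FG),
# an equal-weight majority vote classifies every point correctly:
# each point is outvoted 2-to-1 by the classifiers that get it right.
# All names and labels here are made up for the illustration.

points = list("ABCDEFG")
truth = {p: +1 for p in points}   # arbitrary true labels

miss = {"h1": set("ABC"), "h2": set("DE"), "h3": set("FG")}

def vote(p):
    # each classifier outputs the true label unless p is in its error set
    ballots = [-truth[p] if p in m else truth[p] for m in miss.values()]
    return 1 if sum(ballots) > 0 else -1

perfect = all(vote(p) == truth[p] for p in points)
```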
Fact 4: If a classifier makes a superset of errors, and another classifier makes errors in that same superset, the total error of the ensemble classifier will be greater than those of the parts.
Is H good enough? Two stopping criteria: the number of rounds completed, and whether any good classifiers are left (none remain once every error rate is at or above 0.5). If H is not good enough, we pick the best h, the one with the smallest error rate, update the weights, and run another round. If H is good enough, we return H.
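Putting the rounds and both stopping criteria together, a sketch of the whole loop. The threshold hypotheses and the 1/(2e) reweighting rule are assumptions for illustration, not taken from the notes.

```python
import math

def adaboost(points, labels, hypotheses, max_rounds=10):
    """Boosting loop sketch: stops on the round limit or when no
    remaining hypothesis beats chance (error rate below 0.5)."""
    n = len(points)
    weights = [1 / n] * n                       # round 1: equal weights
    ensemble = []                               # (voting power, hypothesis) pairs
    for _ in range(max_rounds):                 # first stopping criterion
        def weighted_error(h):
            return sum(w for x, y, w in zip(points, labels, weights) if h(x) != y)
        h = min(hypotheses, key=weighted_error)  # best h: smallest error rate
        e = weighted_error(h)
        if e == 0 or e >= 0.5:                  # second criterion: no good h left
            break                               # (e == 0, h alone perfect, not handled here)
        alpha = 0.5 * math.log((1 - e) / e)     # voting power
        ensemble.append((alpha, h))
        # reweight so the points h got wrong matter more next round
        weights = [w / (2 * e) if h(x) != y else w / (2 * (1 - e))
                   for x, y, w in zip(points, labels, weights)]
    def H(x):                                   # H: weighted vote of the parts
        return 1 if sum(a * g(x) for a, g in ensemble) >= 0 else -1
    return H

# Usage with the same made-up data as before:
points = [1, 3, 5, 7]
labels = [+1, +1, -1, -1]
hypotheses = [
    lambda x: +1 if x < 6 else -1,
    lambda x: +1 if x < 2 else -1,
    lambda x: +1 if x > 4 else -1,
]
H = adaboost(points, labels, hypotheses)
```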
Naive Bayes
Binary Classification