Iresh A. Dhotre
M.E. (Information Technology)
Ex-Faculty, Sinhgad College of Engineering
Pune.
Technical Publications (Since 1993), "An Up-Thrust for Knowledge"
Machine Learning Techniques
Subject Code : CS8082
Semester - VII (Computer Science & Engineering / Information Technology) (Professional Elective - II)
(Electronics & Communication Engineering) (Professional Elective - III)
Published by :
Technical Publications, Amit Residency, Office No. 1, 412, Shaniwar Peth, Pune - 411030, M.S. INDIA, Ph.: +91-020-24495496/97
Email : sales@technicalpublications.org  Website : www.technicalpublications.org
Printer :
Yogiraj Printers & Binders, Sr. No. 10/1A, Ghule Industrial Estate, Nanded Village Road, Tal. - Haveli, Dist. - Pune - 411041.
ISBN 978-93-89750-94-2
[Figure: Block diagram of a learning system. A training set drives the training process, which adds new knowledge to existing knowledge; the application produces a response that is checked by validation.]
[Figure: The version space of hypotheses H, ordered from the most general hypothesis to the most specific hypothesis. Negative examples move the G-set (general boundary) toward more specific hypotheses, while positive examples move the S-set (specific boundary) toward more general ones; the boundary sets G and S delimit the version space.]
[Figure: Splitting the examples into subsets Dj. The tree tests Color at the root, then Shape and Size; the Size test branches on Big / Small, and the leaves carry the labels + and -.]
[Figure: A decision tree for spam filtering. The root tests the feature 'Viagra' (= 0 or = 1); the 'Viagra' = 0 branch tests 'lottery' (= 0 or = 1). Class counts at the leaves: 'Viagra' = 1 gives spam: 10, ham: 5; 'lottery' = 0 gives spam: 20, ham: 40; 'lottery' = 1 gives spam: 20, ham: 5. Each leaf is labelled with the majority class, Ĉ(x) = spam or Ĉ(x) = ham.]
[Figure: A decision tree for choosing an activity. The root asks "Parents visiting?"; Yes leads to Cinema, No leads to a Weather test with branches Sunny, Rainy and Windy. One weather branch tests money: Rich leads to Shopping, Poor leads to Cinema.]
[Figure: Growing a decision tree. Attribute A scores highest for Gain(S, A) and is placed at the root; on the branch A = w, attribute B scores highest for Gain(Sw, B) and is placed next; the branch value y ends in a leaf node with category c. Sw must have no examples taking value x for attribute B, and d must be the category containing the most members of Sw.]
[Figure: Choosing a split on Weather: the branches A, B and C have entropies 0.918, 0.918 and 0, with terms such as $\log_2(2/10)$ entering the entropy calculation. A second figure contrasts the decision regions of a balanced tree with those of an overfitting tree in the (X, Y) plane.]
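The entropy values in the figure (0.918 for a 1:2 class split) can be checked with a short routine; the class counts below are illustrative, not from the text:

```python
from math import log2

def entropy(counts):
    """Entropy of a class distribution given as a list of class counts."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

def information_gain(parent_counts, branch_counts):
    """Gain = parent entropy minus the weighted entropy of the branches."""
    total = sum(parent_counts)
    weighted = sum(sum(b) / total * entropy(b) for b in branch_counts)
    return entropy(parent_counts) - weighted

# A branch with a 1:2 class split has entropy ~0.918, as in the figure.
print(round(entropy([1, 2]), 3))   # 0.918
# A split that separates the classes perfectly has gain equal to the
# parent entropy (here 1.0 for a balanced 2:2 parent).
print(information_gain([2, 2], [[2, 0], [0, 2]]))
```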
[Figure: A feed-forward neural network with an input layer of several inputs, a hidden layer, and an output layer.]
[Figure: A biological neuron, showing the soma, the axon hillock and the axon.]
[Figure: Model of an artificial neuron. Inputs $X_1, X_2, X_3, \ldots, X_m$ with weights $W_1, W_2, \ldots, W_m$ and a bias (a fixed input 1 with weight b) are combined by the summer as
$$\sum_{i=1}^{N} I_i W_i + b$$
and the result passes through an activation function (more on this later) to produce the output.]
The net input to the unit over inputs $x_i$ is
$$\sum_{i=1}^{n} w_i x_i$$
Adding a bias (threshold) weight $w_0$ gives
$$\sum_{i=1}^{n} w_i x_i + w_0$$
and introducing a constant input $x_0 = 1$ whose weight is $w_0$, this can be written compactly as
$$\sum_{i=0}^{n} w_i x_i$$
[Figure: A threshold logic unit. Inputs $X_1, X_2, X_3$ with weights $W_1, W_2, W_3$, together with the fixed input $X_0 = 1$ whose weight is $W_0 = -\theta$, feed a threshold activation.]
The output of the unit with threshold weight $w_0$ is
$$o = \begin{cases} 1 & \text{if } x > 0, \\ -1 & \text{otherwise} \end{cases}$$
or, for a binary unit,
$$o = \begin{cases} 1 & \text{if } x > 0, \\ 0 & \text{otherwise.} \end{cases}$$
The perceptron training rule updates each weight $w_i$ by
$$w_i \leftarrow w_i + \eta\,(t - o)\,x_i$$
where $t$ is the target output. Depending on which inputs are active, an update changes $w_0$ alone; $w_0$ and $w_2$; $w_0$ and $w_1$; or $w_0$, $w_1$ and $w_2$.
A common choice of activation is the sigmoid
$$g(x) = \frac{1}{1 + e^{-x}}$$
[Figure: Three panels plotting unit responses over the range -10 to 10 (vertical axis -2 to 2), e.g. for units computing OR and AND; a single such unit cannot compute XOR.]
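The sigmoid and the derivative used later in backpropagation can be sketched as follows (a minimal illustration, not tied to a particular network in the text):

```python
from math import exp

def sigmoid(x):
    """Logistic sigmoid g(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + exp(-x))

def sigmoid_deriv(x):
    """Derivative g'(x) = g(x) * (1 - g(x))."""
    g = sigmoid(x)
    return g * (1.0 - g)

print(sigmoid(0.0))        # 0.5
print(sigmoid_deriv(0.0))  # 0.25
```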
[Figure: Neuron j. Inputs $x_1, x_2, x_3$ with weights $W_{1j}, W_{2j}, W_{3j}$ pass through Sum | Threshold to give the activation $x_j$.]
The net input to unit j is
$$X_j = \sum_i W_{ij}\, x_i$$
and its activation is
$$x_j = f(X_j) = \frac{1}{1 + \exp(-X_j)}$$
[Figure: A multilayer feed-forward network. Input signals $X_1, X_2, \ldots, X_m$ pass through weights $W_{ij}$ to the hidden layer and weights $W_j$ to the output layer, producing outputs $Y_1, Y_2, \ldots, Y_p$.]
The error for the p-th training pattern is
$$E_p = \sum_k (d_k - x_k)^2$$
where $d_k$ is the desired output and $x_k$ the actual output of unit k.
[Figure: The Adaline. Input pattern switches (-1 / +1) feed gains (weights) $w_i$, and a summer computes $\sum_{i=1}^{n} w_i x_i + w_0$, the level $w_0$ coming from a fixed +1 input. The summer output passes through a quantizer producing -1 / +1, and the error is formed by comparing the summer output with a reference switch (+1 / -1).]
Each weight $w_i$ is adjusted so as to reduce the error. For the p-th pattern the error is
$$E_p = (t_p - o_p)^2$$
where $t_p$ is the target output and $o_p$ the actual output. Differentiating $E_p$ with respect to $w_i$ gives a gradient proportional to
$$-(t_p - o_p)\, x_i$$
so the weight change $\Delta_p w_i$ for the p-th pattern is
$$\Delta_p w_i = \eta\,(t_p - o_p)\, x_i$$
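The delta-rule update above can be sketched on hypothetical data; the target function o = 2*x1 - x2, the learning rate and the epoch count are assumptions for illustration:

```python
# Train a single linear unit with the delta rule: dw_i = eta*(t_p - o_p)*x_i.
# Hypothetical patterns consistent with the target weights (2, -1).
patterns = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0),
            ([1.0, 1.0], 1.0), ([2.0, 1.0], 3.0)]
w = [0.0, 0.0]
eta = 0.1
for epoch in range(200):
    for x, t in patterns:
        o = sum(wi * xi for wi, xi in zip(w, x))   # linear output o_p
        for i in range(len(w)):
            w[i] += eta * (t - o) * x[i]           # delta-rule update
print([round(wi, 2) for wi in w])  # [2.0, -1.0]
```

Because the patterns are consistent, repeated passes drive the error to zero and the weights converge to the generating values.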
[Figure: A perceptron. Inputs 1 ... N with weights $W_1, W_2, \ldots, W_N$ feed a weighted sum, a sigmoid and a threshold to give the output.]
The unit computes
$$y = f\left(\sum_{i=1}^{2} W_i X_i + b\right)$$
with the threshold function
$$f(s) = \begin{cases} 1 & \text{if } s \ge 0 \\ -1 & \text{if } s < 0 \end{cases}$$
where $s = W_1 X_1 + W_2 X_2 + \cdots + W_n X_n = W \cdot x$. The weights are updated as
$$W_i \leftarrow W_i + \eta\, d(n)\, X_i(n)$$
where $d(n)$ is the error at step n.
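A minimal sketch of this training rule: a single perceptron learning the (linearly separable) AND function. The +/-1 coding, the bias-as-input trick, the learning rate and the epoch count are illustrative assumptions:

```python
# Perceptron training rule W_i <- W_i + eta*d(n)*X_i(n), with d(n) = t - o.
# x[0] = 1 is the bias input; targets use the +/-1 coding.
def step(s):
    return 1 if s >= 0 else -1

data = [([1, -1, -1], -1), ([1, -1, 1], -1), ([1, 1, -1], -1), ([1, 1, 1], 1)]
w = [0.0, 0.0, 0.0]
eta = 0.1
for epoch in range(20):
    for x, t in data:
        o = step(sum(wi * xi for wi, xi in zip(w, x)))
        if o != t:  # update only on mistakes
            for i in range(3):
                w[i] += eta * (t - o) * x[i]
print([step(sum(wi * xi for wi, xi in zip(w, x))) for x, _ in data])  # [-1, -1, -1, 1]
```

By the perceptron convergence theorem the loop stops making mistakes after finitely many updates on separable data.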
[Figure: Functions such as OR and AND are linearly separable in the $(X_1, X_2)$ plane: a single line separates the o points from the x points. For XOR no single line can separate the two classes.]
[Figure: Two-class patterns $x_1, x_2, \ldots, x_n$ in the plane, with X marking Class I (y = 1) and O marking Class II (y = -1). In the separable arrangements a line L divides the two classes; in the XOR-like arrangement no single line can.]
$$w_1 \cdot 0 + w_2 \cdot 0 + b \le 0 \quad (1)$$
$$w_1 \cdot 0 + w_2 \cdot 1 + b > 0 \quad (2)$$
$$w_1 \cdot 1 + w_2 \cdot 0 + b > 0 \quad (3)$$
$$w_1 \cdot 1 + w_2 \cdot 1 + b \le 0 \quad (4)$$
Adding (2) and (3) gives $w_1 + w_2 + 2b > 0$, while (1) and (4) together force $w_1 + w_2 + 2b \le 0$; the system is contradictory, so no single threshold unit can realise XOR.
[Figure: A unit with inputs $x_0, x_1, x_2, x_3, x_4$, where the augmented input vector is $X_k = (1, x_k)$.]
$$\frac{\partial E_n}{\partial w_{ji}} = \frac{E_n(w_{ji} + \epsilon) - E_n(w_{ji})}{\epsilon} + O(\epsilon)$$
A more accurate estimate uses central differences:
$$\frac{\partial E_n}{\partial w_{ji}} = \frac{E_n(w_{ji} + \epsilon) - E_n(w_{ji} - \epsilon)}{2\epsilon} + O(\epsilon^2)$$
Since each evaluation of $E_n$ requires a forward propagation, computing all W derivatives this way costs $O(W^2)$ operations.
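The central-difference formula can be checked against an analytic derivative on a toy error function; the function E(w) = (w - 3)^2 below is an illustrative assumption:

```python
# Central-difference estimate (E(w+eps) - E(w-eps)) / (2*eps) agrees with
# the analytic derivative to O(eps^2).
def E(w):
    return (w - 3.0) ** 2

def analytic_grad(w):
    return 2.0 * (w - 3.0)

w, eps = 1.0, 1e-5
numeric = (E(w + eps) - E(w - eps)) / (2 * eps)
print(round(numeric, 6), analytic_grad(w))  # both -4.0
```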
$$x_i^l \le x_i \le x_i^u$$
where $x_i^l$ and $x_i^u$ are the lower and upper bounds on variable $x_i$.
[Figure: Genetic algorithm flowchart. Describe the problem, generate initial solutions (step 2), then repeatedly select parents to reproduce; if the stopping test fails (No), loop back to step 2.]
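The flowchart above can be sketched as a minimal genetic algorithm. The fitness function f(x) = x^2 on 5-bit strings, the population size and the crossover/mutation settings are illustrative assumptions, not taken from the text:

```python
import random

random.seed(0)

def fitness(bits):
    """Fitness f(x) = x^2, with x decoded from the bit string."""
    return int("".join(map(str, bits)), 2) ** 2

def select(pop):
    """Fitness-proportionate (roulette-wheel) selection."""
    total = sum(fitness(ind) for ind in pop)
    r = random.uniform(0, total)
    acc = 0
    for ind in pop:
        acc += fitness(ind)
        if acc >= r:
            return ind
    return pop[-1]

def crossover(a, b):
    point = random.randint(1, len(a) - 1)   # single-point crossover
    return a[:point] + b[point:]

def mutate(bits, p=0.01):
    return [1 - b if random.random() < p else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(8)]
for generation in range(30):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(8)]
best = max(pop, key=fitness)
print(best, fitness(best))
```

Selection pressure favours strings decoding to large x, so the population drifts toward the all-ones string (fitness 31^2 = 961).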
Under fitness-proportionate selection each string i is chosen with probability $f_i / \sum_i f_i$, so the expected number of copies of a schema S in the next generation is
$$m(S, t+1) = m(S, t)\cdot n\cdot \frac{f(S)}{\sum_i f_i}$$
where f(S) is the average fitness of the strings matching S. Writing $f_{avg} = \sum_i f_i / n$ for the population average,
$$m(S, t+1) = m(S, t)\,\frac{f(S)}{f_{avg}}$$
If S remains a fraction c above average, $f(S) = f_{avg} + c\, f_{avg}$, then
$$m(S, t+1) = m(S, t)\,(1 + c)$$
and, starting from t = 0,
$$m(S, t) = m(S, 0)\,(1 + c)^t$$
so above-average schemata receive exponentially increasing numbers of trials.
Single-point crossover can, however, disrupt a schema. With crossover probability $p_c$, defining length $\delta(S)$ and string length $l$, a schema survives crossover with probability at least $1 - p_c\,\delta(S)/(l-1)$, lowering the growth estimate to
$$m(S, t+1) \ge m(S, t)\,\frac{f(S)}{f_{avg}}\left[1 - p_c\,\frac{\delta(S)}{l-1}\right]$$
$$\hat{y} = b_0 + b_1 x$$
where $b_0$ and $b_1$ are the estimated intercept and slope and $\hat{y}$ is the predicted response y.
[Figure: A fitted regression line E(Y) = a + bX through data points $x_1, x_2, x_3$ in the (x, y) plane (x axis 0 to 5, y axis 0 to 4), with the residuals $e_1, e_2, e_3$ drawn vertically from each point to the line.]
In matrix form the observations $v_A, v_B, v_C, v_D$ give $A \begin{bmatrix} m \\ b \end{bmatrix} = L$:
$$\begin{bmatrix} 3.00 & 1 \\ 4.25 & 1 \\ 5.50 & 1 \\ 8.00 & 1 \end{bmatrix} \begin{bmatrix} m \\ b \end{bmatrix} = \begin{bmatrix} 4.50 \\ 4.25 \\ 5.50 \\ 5.50 \end{bmatrix}$$
The least-squares solution is
$$\begin{bmatrix} m \\ b \end{bmatrix} = (A^T A)^{-1}(A^T L) = \begin{bmatrix} 121.3125 & 20.7500 \\ 20.7500 & 4.0000 \end{bmatrix}^{-1} \begin{bmatrix} 105.8125 \\ 19.7500 \end{bmatrix} = \begin{bmatrix} 0.246 \\ 3.663 \end{bmatrix}$$
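The matrix solution above can be verified numerically; this sketch forms the same normal equations from the four data points and solves the 2-by-2 system by Cramer's rule:

```python
# Normal equations [m, b] = (A^T A)^(-1) (A^T L) for the 4-point data above.
xs = [3.00, 4.25, 5.50, 8.00]
ys = [4.50, 4.25, 5.50, 5.50]
n = len(xs)
Sx = sum(xs)
Sy = sum(ys)
Sxx = sum(x * x for x in xs)               # 121.3125
Sxy = sum(x * y for x, y in zip(xs, ys))   # 105.8125

# Solve [[Sxx, Sx], [Sx, n]] [m, b]^T = [Sxy, Sy]^T by Cramer's rule.
det = Sxx * n - Sx * Sx
m = (Sxy * n - Sx * Sy) / det
b = (Sxx * Sy - Sx * Sxy) / det
print(round(m, 3), round(b, 3))  # 0.246 3.663
```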
$$\underset{v_j \in V}{\operatorname{argmax}} \sum_{h_i \in H} P(v_j \mid h_i)\, P(h_i \mid D)$$
The conditional probability of A given B is
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$$
and when A and B are independent, $P(A \mid B) = P(A)$ and $P(A \cap B) = P(A)\,P(B)$.
$$\frac{\dfrac{1}{100}\cdot\dfrac{3}{5}}{\dfrac{1}{100}\cdot\dfrac{3}{5} + \dfrac{4}{100}\cdot\dfrac{2}{5}} = \frac{3/500}{3/500 + 8/500} = \frac{3}{11}$$
[Figure: Probability tree for two draws without replacement from an urn containing 2 red (R) and 5 white (W) balls: P(R1) = 2/7 and P(W1) = 5/7; on the second draw, P(R2 | R1) = 1/6, P(W2 | R1) = 5/6, P(R2 | W1) = 2/6 and P(W2 | W1) = 4/6.]
$$P(R \mid F) = \frac{P(F \mid R)\,P(R)}{P(F \mid R)\,P(R) + P(F \mid E)\,P(E) + P(F \mid P)\,P(P)} = \frac{0.3 \times 0.4}{0.3 \times 0.4 + 0.5 \times 0.35 + 0.6 \times 0.25} = \frac{0.12}{0.12 + 0.175 + 0.15} = \frac{0.12}{0.445} \approx 0.27$$
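The computation above follows Bayes' theorem with the law of total probability in the denominator; the numbers are those given in the text:

```python
# P(R|F) = P(F|R)P(R) / sum_k P(F|k)P(k), over causes R, E and P.
priors = {"R": 0.40, "E": 0.35, "P": 0.25}
likelihood = {"R": 0.3, "E": 0.5, "P": 0.6}   # P(F | cause)

evidence = sum(likelihood[k] * priors[k] for k in priors)   # P(F) = 0.445
posterior_R = likelihood["R"] * priors["R"] / evidence
print(round(evidence, 3), round(posterior_R, 2))  # 0.445 0.27
```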
[Figure: A Bayesian network. Burglary and Earthquake are parents of Alarm; Alarm is the parent of JohnCalls and MaryCalls.]
Conditional probability table for Alarm, P(A | B, E):

B E | A = T  | A = F
T T | 0.95   | 0.05
T F | 0.94   | 0.06
F T | 0.29   | 0.71
F F | 0.001  | 0.999

P(J | A) for JohnCalls: A = T: 0.90, A = F: 0.05.
P(M | A) for MaryCalls: A = T: 0.70, A = F: 0.01.
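With these tables one can answer queries by summing over the hidden Alarm variable. This sketch computes P(JohnCalls = T, MaryCalls = T | Burglary = T, Earthquake = F); the choice of query is illustrative:

```python
# CPTs from the tables above.
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(A=T | B, E)
P_J = {True: 0.90, False: 0.05}                      # P(J=T | A)
P_M = {True: 0.70, False: 0.01}                      # P(M=T | A)

b, e = True, False
# Sum over both values of Alarm: P(J,M | B,E) = sum_a P(a|B,E) P(J|a) P(M|a)
p = sum((P_A[(b, e)] if a else 1 - P_A[(b, e)]) * P_J[a] * P_M[a]
        for a in (True, False))
print(round(p, 5))  # 0.59223
```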
Over a set of N instances, $2^{|N|}$ distinct concepts can be defined. For example, a concept class C = {c1, ..., c4} over instances X1, X2, X3 can be listed in a table showing which instances each concept covers, e.g. c({X1}), c({X1, X3}) and c({X2, X3}).
A feed-forward network of depth l computes a composition of layer maps
$$f = f_l \circ \cdots \circ f_2 \circ f_1(x)$$
where $f_i : \{\pm 1\}^{d_{i-1}} \to \{\pm 1\}^{d_i}$ for $1 \le i \le l-1$, the output layer is $f_l : \{\pm 1\}^{d_{l-1}} \to \{\pm 1\}$, and each coordinate function $f_{i,j} : \{\pm 1\}^{d_{i-1}} \to \{\pm 1\}$ is computed by the single unit (i, j) of layer i.
The halving algorithm maintains the set $C_t$ of experts consistent with all labels seen so far,
$$C_t = \{\, i : f_{i,r} = y_r,\; r = 1, \ldots, t-1 \,\}$$
and predicts with the majority vote of $C_t$:
$$\hat{y}_t = \begin{cases} 1, & \text{if } |\{\, i \in C_t : f_{i,t} = 1 \,\}| \ge \dfrac{|C_t|}{2} \\ -1, & \text{otherwise.} \end{cases}$$
Whenever the prediction is wrong, at least half of the consistent experts had $f_{i,t} \ne y_t$ and are eliminated, so with N experts (one of them perfect) the algorithm makes at most $\log_2 N$ mistakes.
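The halving algorithm can be sketched directly from the definitions above; the four experts and three rounds below are hypothetical data for illustration:

```python
# Keep the experts consistent with all labels seen so far and predict
# with their majority vote; labels and predictions use +/-1 coding.
def halving(expert_predictions, labels):
    n_experts = len(expert_predictions[0])
    consistent = set(range(n_experts))   # C_t
    mistakes = 0
    for preds, y in zip(expert_predictions, labels):
        votes = [preds[i] for i in consistent]
        y_hat = 1 if votes.count(1) >= len(votes) / 2 else -1
        if y_hat != y:
            mistakes += 1
        consistent = {i for i in consistent if preds[i] == y}   # drop wrong experts
    return mistakes

# 4 experts; expert 3 is always right, so at most log2(4) = 2 mistakes.
rounds = [([1, 1, -1, 1], 1), ([1, -1, 1, -1], -1), ([-1, 1, -1, -1], -1)]
preds, labels = zip(*rounds)
print(halving(preds, labels))  # 1
```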
The Euclidean ($L_2$) distance between points $x = (x_1, x_2, x_3, \ldots, x_m)$ and $y = (y_1, y_2, y_3, \ldots, y_m)$ is
$$d_{\text{euclidean}}(x, y) = \sqrt{\sum_i (x_i - y_i)^2}$$
The Mahalanobis distance uses the covariance matrix S:
$$d_{\text{Mahalanobis}}(x, y) = \sqrt{(x - y)^T S^{-1} (x - y)}$$
The general Minkowski ($L_p$) distance is
$$L_p(x, y) = \left(\sum_{i=1}^{d} |x_i - y_i|^p\right)^{1/p}$$
with the special cases
$$L_2(x, y) = \left(\sum_{i=1}^{d} |x_i - y_i|^2\right)^{1/2}, \qquad L_1(x, y) = \sum_{i=1}^{d} |x_i - y_i|$$
and, in the same notation, $L_m(x, y) = \sqrt{(x - y)^T S^{-1} (x - y)}$.
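The Minkowski family and its L1 and L2 special cases can be sketched in a few lines; the sample points are illustrative:

```python
def minkowski(x, y, p):
    """L_p distance: (sum_i |x_i - y_i|^p)^(1/p)."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def euclidean(x, y):
    return minkowski(x, y, 2)   # L2

def manhattan(x, y):
    return minkowski(x, y, 1)   # L1

x, y = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
print(euclidean(x, y))   # 5.0
print(manhattan(x, y))   # 7.0
```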
[Figure: A unit with inputs $x_1, \ldots, x_p$, weights $W_1, W_2, \ldots, W_n$, and output y.]
[Figure: Sequential covering on a two-dimensional data set (axes $x_1$ from -3 to 2 and $x_2$ from -2 to 2). Step 1: find a rule covering part of the data. Step 2: remove the covered instances. Step 3: find the next rule.]
[Figure: Searching rule conditions. Candidate literals such as size = small, size = medium, size = big, location = good and location = mediocre are scored by accuracy, e.g. acc: 0.1, 0.7, 0.4, 0.2, 0.8, 0.6, 0.9.]
Training examples may be given as ordered pairs $(x_j, x_k)$ expressing that $x_j$ is ranked above $x_k$ (for instance, person 1 above person 2), or as pairs $\langle x_i, f(x_i)\rangle$, where $x_i$ is the i-th training instance and $f(x_i)$ is its target value.
[Figure: A resolution step combining clauses C1 and C2 into the resolvent C.]
Resolving C1: PassExam ∨ ¬KnowMaterial with C2: KnowMaterial ∨ ¬Study on the literal KnowMaterial yields
C: PassExam ∨ ¬Study
[Figure: The reinforcement learning setting. The agent (with internal signals Oi, Ob, Of) emits an action on the environment; an interpreter returns the resulting state and reward to the agent, a delay element indicating that reward r(t+1) and state s(t+1) arrive at the next time step.]
The value of a state under policy $\pi$ is the expected discounted return
$$V^{\pi}(S_t) = E\left[\sum_{i=0}^{\infty} \gamma^i\, r_{t+i}\right]$$
For a deterministic environment the optimal value function satisfies
$$V^{*}(s) = \max_a E\,[\, r(s, a) + \gamma\, V^{*}(\delta(s, a)) \,]$$
and when transitions are stochastic with probabilities $P(s' \mid s, a)$,
$$V^{*}(s) = \max_a \sum_{s'} P(s' \mid s, a)\,[\, r(s, a) + \gamma\, V^{*}(s') \,]$$
The temporal-difference TD(0) update is
$$V(S_t) \leftarrow V(S_t) + \alpha\,[\, R_{t+1} + \gamma\, V(S_{t+1}) - V(S_t) \,]$$
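The TD(0) update can be sketched on a hypothetical three-state chain; the chain, rewards, step size and episode count are illustrative assumptions:

```python
# TD(0) on a deterministic chain A -> B -> C (terminal).
# Rewards: 0 on A->B, 1 on B->C; gamma = 1, so the true values are
# V(A) = V(B) = 1 and V(C) = 0.
states = ["A", "B", "C"]
V = {s: 0.0 for s in states}
alpha, gamma = 0.1, 1.0
transitions = {"A": ("B", 0.0), "B": ("C", 1.0)}

for episode in range(500):
    s = "A"
    while s != "C":
        s_next, r = transitions[s]
        # TD(0): V(S_t) <- V(S_t) + alpha*[R_{t+1} + gamma*V(S_{t+1}) - V(S_t)]
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next
print({s: round(v, 2) for s, v in V.items()})  # {'A': 1.0, 'B': 1.0, 'C': 0.0}
```

Each sweep moves the estimates a step toward the bootstrapped targets, so the values converge geometrically to the true returns.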