1 Linear Regression.
Area [Acre]   Cost [Crore]
0.54          2.57
0.81          4.78
2.16          9.65
4.32          16.73
5.50          19.25
Matrix Method
a0 + 0.54 × a1 = 2.57
a0 + 0.81 × a1 = 4.78
a0 + 2.16 × a1 = 9.65
a0 + 4.32 × a1 = 16.73
a0 + 5.50 × a1 = 19.25          (1)

In matrix form, y = X a:

\begin{bmatrix} 2.57 \\ 4.78 \\ 9.65 \\ 16.73 \\ 19.25 \end{bmatrix}
=
\begin{bmatrix} 1 & 0.54 \\ 1 & 0.81 \\ 1 & 2.16 \\ 1 & 4.32 \\ 1 & 5.50 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \end{bmatrix}
Solving y = X a:
i Pre-multiply both sides by X^T:
  X^T y = X^T X a
ii Pre-multiply by (X^T X)^{-1}:
  a = (X^T X)^{-1} X^T y
\begin{bmatrix} a_0 \\ a_1 \end{bmatrix}
=
\begin{bmatrix} 0.5743 & -0.1404 \\ -0.1404 & 0.0527 \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 0.54 & 0.81 & 2.16 & 4.32 & 5.50 \end{bmatrix}
\begin{bmatrix} 2.57 \\ 4.78 \\ 9.65 \\ 16.73 \\ 19.25 \end{bmatrix}
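The normal-equation solution a = (X^T X)^{-1} X^T y can be checked numerically. A minimal NumPy sketch, using the Area/Cost table above:

```python
import numpy as np

# Area [Acre] vs Cost [Crore] data from the table above
x = np.array([0.54, 0.81, 2.16, 4.32, 5.50])
y = np.array([2.57, 4.78, 9.65, 16.73, 19.25])

# Design matrix X: a column of ones for the intercept a0, then x
X = np.column_stack([np.ones_like(x), x])

# Normal-equation solution a = (X^T X)^{-1} X^T y
a = np.linalg.inv(X.T @ X) @ X.T @ y
print(a)  # [a0, a1]
```

In practice `np.linalg.lstsq(X, y, rcond=None)` is preferred over forming the explicit inverse, but the code above mirrors the derivation step by step.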
Advanced Driver Assistance System 6 / 53 BITS Pilani, Pilani Campus
Machine Learning: Logistic Regression
Cost Function
J(a_1) = \sum_{i=1}^{2} \left( a_1 x[i] - y[i] \right)^2 = (1 \cdot a_1 - 2)^2 + (2 \cdot a_1 - 2)^2
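The two expanded terms imply the training points (x, y) = (1, 2) and (2, 2). A short sketch, under that assumption, that evaluates J and finds its minimizer by setting dJ/da1 = 0:

```python
import numpy as np

# Training points read off from the expanded cost: (1, 2) and (2, 2)
x = np.array([1.0, 2.0])
y = np.array([2.0, 2.0])

def J(a1):
    """Sum-of-squares cost J(a1) = sum_i (a1*x[i] - y[i])^2."""
    return float(np.sum((a1 * x - y) ** 2))

# dJ/da1 = 2*sum(x*(a1*x - y)) = 0  =>  a1 = sum(x*y) / sum(x^2)
a1_opt = float(np.sum(x * y) / np.sum(x ** 2))
print(a1_opt, J(a1_opt))
```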
i For y = 1, z = a0 + a1 x should be positive.
ii For y = 0, z = a0 + a1 x should be negative.
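The sign rule above gives a direct decision function. A minimal sketch — the parameter values a0, a1 here are illustrative assumptions, standing in for already-fitted coefficients:

```python
# Illustrative (assumed) fitted parameters
a0, a1 = -1.0, 0.5

def predict(x):
    """Classify by the sign of z = a0 + a1*x: positive -> 1, otherwise 0."""
    z = a0 + a1 * x
    return 1 if z > 0 else 0

print(predict(3.0), predict(1.0))
```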
y = W^T x + b
W = \begin{bmatrix}
w_{0,0} & w_{0,1} & w_{0,2} & w_{0,3} \\
w_{1,0} & w_{1,1} & w_{1,2} & w_{1,3} \\
w_{2,0} & w_{2,1} & w_{2,2} & w_{2,3} \\
\vdots & \vdots & \vdots & \vdots \\
w_{2499,0} & w_{2499,1} & w_{2499,2} & w_{2499,3}
\end{bmatrix}          (10)
W \in \mathbb{R}^{J^{[0]} \times J^{[1]}}          (11)
Output Layer:
y = W^T x + b          (12)
c = Softmax(y)          (13)
c_i = Softmax(y_i) = \frac{e^{y_i}}{\sum_{j=1}^{K} e^{y_j}}          (14)
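Eq. (14) translates directly into code. A minimal NumPy sketch; subtracting max(y) before exponentiating is a standard numerical-stability trick that leaves the result unchanged:

```python
import numpy as np

def softmax(y):
    """Softmax of Eq. (14): c_i = exp(y_i) / sum_j exp(y_j)."""
    e = np.exp(y - np.max(y))  # shift for numerical stability
    return e / np.sum(e)

c = softmax(np.array([1.0, 2.0, 3.0]))
print(c)        # class probabilities
print(c.sum())  # sums to 1
```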
W = \begin{bmatrix}
w_{0,0} & w_{0,1} & \dots & w_{0,9} \\
w_{1,0} & w_{1,1} & \dots & w_{1,9} \\
w_{2,0} & w_{2,1} & \dots & w_{2,9} \\
\vdots & \vdots & \ddots & \vdots \\
w_{783,0} & w_{783,1} & \dots & w_{783,9}
\end{bmatrix}          (16)
W \in \mathbb{R}^{J^{[0]} \times J^{[1]}}          (17)
Single Neuron
i Linear combination:
  z = w · x + b
  z = w_0 x_0 + w_1 x_1 + \cdots + w_{L-1} x_{L-1} + b
ii Activation Function:
  y = f(z)
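The two steps above — linear combination, then activation — can be sketched as a single function. The weights, bias, and the sigmoid choice here are illustrative assumptions:

```python
import numpy as np

def neuron(x, w, b, f):
    """Single neuron: z = w . x + b (linear combination), then y = f(z)."""
    z = np.dot(w, x) + b
    return f(z)

# Illustrative (assumed) parameters and a sigmoid activation
x = np.array([1.0, 2.0])
w = np.array([0.5, -0.25])
b = 0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
y = neuron(x, w, b, sigmoid)
print(y)
```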
Types of Activation Functions
i Sigmoid.
ii Rectified Linear Unit (ReLU).
iii Leaky ReLU.
iv Hyperbolic tangent (tanh).
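The four activations listed above can be sketched in a few lines of NumPy; the leaky-ReLU slope alpha = 0.01 is a common default, assumed here:

```python
import numpy as np

def sigmoid(z):
    """i: squashes z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """ii: max(0, z) elementwise."""
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    """iii: like ReLU, but a small (assumed) slope alpha for z < 0."""
    return np.where(z > 0, z, alpha * z)

def tanh(z):
    """iv: squashes z into (-1, 1)."""
    return np.tanh(z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), relu(z), leaky_relu(z), tanh(z))
```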
x^{[1]} = f\left( \left( W^{[1]} \right)^T \cdot x^{[0]} + b^{[1]} \right)          (18)