李宏毅
Hung-yi Lee
Outline
Part I: Introduction of Deep Learning
Part II: Why Deep?
Part III: Tips for Training DNN
Part IV: Neural Network with Memory
Example Application
• Handwriting Digit Recognition
(An image of a handwritten digit is fed to the machine, which outputs "2".)
Handwriting Digit Recognition
Input: x1, x2, ……, x256, a 16 x 16 = 256 pixel image (ink → 1, no ink → 0)
Output: y1, y2, ……, y10, where each dimension represents the confidence of a digit
Example: y1 = 0.1 ("is 1"), y2 = 0.7 ("is 2"), ……, y10 = 0.2 ("is 0") → the image is "2"
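A minimal sketch (not from the slides) of this input/output representation in NumPy; the image and output values below are made up purely for illustration:

```python
import numpy as np

image = (np.random.rand(16, 16) > 0.5).astype(float)   # hypothetical binary image: ink -> 1, no ink -> 0
x = image.reshape(256)                                  # input vector x1 ... x256

y = np.array([0.1, 0.7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2])  # stand-in for the outputs y1 ... y10
digit = (np.argmax(y) + 1) % 10    # y1 means "is 1", ..., y10 means "is 0"
print(digit)                       # 2 -> the image is "2"
```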
Example Application
• Handwriting Digit Recognition
x1, x2, ……, x256 → Machine → y1, y2, ……, y10 → "2"
$f: \mathbb{R}^{256} \rightarrow \mathbb{R}^{10}$
In deep learning, the function is represented by a neural network.
Element of Neural Network
Neuron: $f: \mathbb{R}^{K} \rightarrow \mathbb{R}$
$z = a_1 w_1 + a_2 w_2 + \cdots + a_K w_K + b$
$a = \sigma(z)$
$w_1, \ldots, w_K$: weights, $b$: bias, $\sigma$: activation function
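A minimal sketch of this neuron in NumPy; using a sigmoid as the activation function is an assumption here (it matches the numerical examples on the following slides):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(a, w, b):
    z = np.dot(w, a) + b   # z = a1*w1 + ... + aK*wK + b
    return sigmoid(z)      # a = sigma(z)

print(neuron(np.array([1.0, -1.0]), np.array([1.0, -2.0]), 1.0))  # ~0.98
```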
Neural Network
(Fully connected network of neurons: Input Layer → Layer 1 → Layer 2 → …… → Layer L → Output Layer.
Inputs x1, ……, xN; outputs y1, ……, yM; the layers between input and output are the hidden layers.)
Example of Neural Network
(Figure: a small fully connected network with 2 inputs and 2 outputs; the numbers on the edges and nodes are its weights and biases.)
$f: \mathbb{R}^{2} \rightarrow \mathbb{R}^{2}$
$f\!\left(\begin{bmatrix}1\\-1\end{bmatrix}\right)=\begin{bmatrix}0.62\\0.83\end{bmatrix}$,  $f\!\left(\begin{bmatrix}0\\0\end{bmatrix}\right)=\begin{bmatrix}0.51\\0.85\end{bmatrix}$
Different parameters define different functions.
Matrix Operation
(Input (1, −1) with first-layer weights 1, −2, −1, 1 and biases 1, 0 gives outputs 0.98 and 0.12.)
$\sigma\!\left(\begin{bmatrix}1 & -2\\ -1 & 1\end{bmatrix}\begin{bmatrix}1\\ -1\end{bmatrix}+\begin{bmatrix}1\\ 0\end{bmatrix}\right)=\sigma\!\left(\begin{bmatrix}4\\ -2\end{bmatrix}\right)=\begin{bmatrix}0.98\\ 0.12\end{bmatrix}$
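A sketch of this matrix operation in NumPy, using the numbers from the slide:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = np.array([[1.0, -2.0],
              [-1.0, 1.0]])
x = np.array([1.0, -1.0])
b = np.array([1.0, 0.0])

a = sigmoid(W @ x + b)   # sigma([4, -2]) = [0.98, 0.12]
print(a)
```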
Neural Network
Input x = (x1, ……, xN), output y = (y1, ……, yM); the layers have weight matrices W1, W2, ……, WL and biases b1, b2, ……, bL.
$a^1 = \sigma\left(W^1 x + b^1\right)$
$a^2 = \sigma\left(W^2 a^1 + b^2\right)$
……
$y = \sigma\left(W^L a^{L-1} + b^L\right)$
Neural Network
The same layer-by-layer computation written as one nested function:
$y = f(x) = \sigma\!\left(W^L \cdots \sigma\!\left(W^2 \,\sigma\!\left(W^1 x + b^1\right) + b^2\right) \cdots + b^L\right)$
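A minimal sketch of this forward pass; the layer sizes and random weights are illustrative assumptions, not values from the slides:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
sizes = [256, 500, 500, 10]    # N inputs, two hidden layers, M outputs (assumed sizes)
Ws = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    a = x
    for W, b in zip(Ws, bs):   # a^l = sigma(W^l a^{l-1} + b^l)
        a = sigmoid(W @ a + b)
    return a

y = forward(rng.random(256))
print(y.shape)  # (10,)
```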
Softmax
• Softmax layer as the output layer
Ordinary layer: $y_1 = \sigma(z_1)$, $y_2 = \sigma(z_2)$, $y_3 = \sigma(z_3)$.
In general, the output of the network can be any value, which may not be easy to interpret.
Softmax
• Softmax layer as the output layer: the outputs can be interpreted as probabilities.
$y_i = \dfrac{e^{z_i}}{\sum_{j=1}^{3} e^{z_j}}$
Example: $z_1 = 3 \Rightarrow e^{z_1} \approx 20$, $z_2 = 1 \Rightarrow e^{z_2} \approx 2.7$, $z_3 = -3 \Rightarrow e^{z_3} \approx 0.05$, so $y_1 \approx 0.88$, $y_2 \approx 0.12$, $y_3 \approx 0$.
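A sketch of the softmax layer with the slide's numbers:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtracting the max is a standard numerical-stability trick (not on the slide)
    return e / e.sum()

print(softmax(np.array([3.0, 1.0, -3.0])))   # ~[0.88, 0.12, 0.00]
```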
How to set network parameters
$\theta = \{W^1, b^1, W^2, b^2, \cdots, W^L, b^L\}$
Input: x1, ……, x256 (16 x 16 = 256 pixels; ink → 1, no ink → 0). Output (softmax): y1, ……, y10 (e.g. y1 = 0.1 "is 1", y2 = 0.7 "is 2", y10 = 0.2 "is 0").
Set the network parameters θ such that ……
• Input an image of "1" → y1 has the maximum value.
• Input an image of "2" → y2 has the maximum value.
How to let the neural network achieve this?
Training Data
• Preparing training data: images and their labels
(Example training images labeled "5", "0", "4", "1", "9", "2", "1", "3".)
Using the training data to find the network parameters.
Cost
Given a set of network parameters θ, each example has a cost value.
Example: for an image of "1", the network outputs y1 = 0.2, y2 = 0.3, ……, y10 = 0.5, while the target is y1 = 1 and 0 for all other dimensions; the difference gives the cost L(θ).
The cost can be the Euclidean distance or the cross entropy between the network output and the target.
Total Cost
For all R training examples x1, ……, xR, the network output yr is compared with the target ŷr to give the per-example cost Lr(θ). The total cost is
$C(\theta) = \sum_{r=1}^{R} L^{r}(\theta)$
It measures how bad the network parameters θ are on this task.
Gradient Descent
Pick an initial set of parameters θ0 and compute the gradient
$\nabla C(\theta^0) = \begin{bmatrix} \partial C(\theta^0)/\partial w_1 \\ \partial C(\theta^0)/\partial w_2 \\ \vdots \end{bmatrix}$
Then move the parameters by $-\eta\, \nabla C(\theta^0)$, where η is the learning rate.
Gradient Descent
• Randomly pick a starting point θ0.
• Compute the negative gradient −∇C(θ0) at θ0 and multiply it by the learning rate: −η∇C(θ0).
• Update: θ1 = θ0 − η∇C(θ0), θ2 = θ1 − η∇C(θ1), ……
• Eventually, we would reach a minimum …..
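A minimal sketch of gradient descent on a toy cost; the cost function and learning rate are illustrative assumptions, not from the slides:

```python
import numpy as np

def C(theta):                     # toy cost with a minimum at (1, -1)
    return (theta[0] - 1) ** 2 + (theta[1] + 1) ** 2

def grad_C(theta):
    return np.array([2 * (theta[0] - 1), 2 * (theta[1] + 1)])

eta = 0.1                         # learning rate
theta = np.random.randn(2)        # randomly pick a starting point theta^0
for _ in range(100):
    theta = theta - eta * grad_C(theta)   # theta^t = theta^{t-1} - eta * grad C(theta^{t-1})
print(theta, C(theta))
```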
Local Minima
• Gradient descent never guarantees the global minimum.
Different initial points reach different minima, so they give different results.
"Who is Afraid of Non-Convex Loss Functions?" http://videolectures.net/eml07_lecun_wia/
Besides local minima ……
• Very slow at a plateau, where ∇C(θ) ≈ 0.
• Stuck at a saddle point, where ∇C(θ) = 0.
• Stuck at a local minimum, where ∇C(θ) = 0.
(Figure: cost as a function of the parameter space.)
In physical world ……
• Momentum
Momentum
Still no guarantee of reaching the global minimum, but it gives some hope ……
Movement = negative of gradient + momentum
(Figure: on the cost surface, the accumulated momentum can carry the parameters past a point where the gradient = 0.)
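A sketch of adding momentum to the update on the same kind of toy cost as above; the momentum coefficient 0.9 is a common choice and an assumption here:

```python
import numpy as np

def grad_C(theta):                          # toy gradient (assumed)
    return np.array([2 * (theta[0] - 1), 2 * (theta[1] + 1)])

eta, mu = 0.1, 0.9
theta = np.random.randn(2)
movement = np.zeros(2)
for _ in range(100):
    movement = mu * movement - eta * grad_C(theta)   # previous movement keeps the update going
    theta = theta + movement                         # even where the gradient is ~0
print(theta)
```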
Mini-batch
• Randomly initialize the parameters θ0.
• Pick the 1st mini-batch (e.g. x1, x31, ……): C = L1 + L31 + ⋯, then θ1 ← θ0 − η∇C(θ0).
• Pick the 2nd mini-batch (e.g. x2, x16, ……): C = L2 + L16 + ⋯, then θ2 ← θ1 − η∇C(θ1).
• ……
C is different each time we update the parameters!
Mini-batch
Original gradient descent vs. with mini-batch: the mini-batch updates are unstable.
• Pick the 1st mini-batch: C = C1 + C31 + ⋯, then θ1 ← θ0 − η∇C(θ0).
• Pick the 2nd mini-batch: C = C2 + C16 + ⋯, then θ2 ← θ1 − η∇C(θ1).
• ……
• Until all mini-batches have been picked: one epoch.
Ref: http://speech.ee.ntu.edu.tw/~tlkagk/courses/MLDS_2015_2/Lecture/Theano%20DNN.ecm.mp4/index.html
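A minimal sketch of one epoch of mini-batch training on a toy linear model; the data, batch size and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = X @ rng.standard_normal(20)            # fake targets from a hidden linear model
theta = np.zeros(20)
eta, batch_size = 0.1, 100

order = rng.permutation(len(X))            # shuffle, then split into mini-batches
for start in range(0, len(X), batch_size): # until all mini-batches have been picked: one epoch
    idx = order[start:start + batch_size]
    err = X[idx] @ theta - y[idx]
    grad = X[idx].T @ err / len(idx)       # gradient of the mini-batch cost C
    theta = theta - eta * grad             # theta^t <- theta^{t-1} - eta * grad C(theta^{t-1})
```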
Part II:
Why Deep?
Deeper is Better?
Layer X Size    Word Error Rate (%)        Layer X Size    Word Error Rate (%)
1 X 2k          24.2
2 X 2k          20.4
3 X 2k          18.4
4 X 2k          17.8
5 X 2k          17.2                       1 X 3772        22.5
7 X 2k          17.1                       1 X 4634        22.6
                                           1 X 16k         22.1
Not surprising: more parameters, better performance.
Seide, Frank, Gang Li, and Dong Yu. "Conversational Speech Transcription Using Context-Dependent Deep Neural Networks." Interspeech, 2011.
Universality Theorem
Any continuous function $f: \mathbb{R}^{N} \rightarrow \mathbb{R}^{M}$ can be realized by a network with one hidden layer (given enough hidden neurons).
Reference for the reason: http://neuralnetworksanddeeplearning.com/chap4.html
(Figure: a shallow network vs. a deep network over the same inputs x1, x2, ……, xN.)
Fat + Short vs. Thin + Tall
Layer X Size    Word Error Rate (%)        Layer X Size    Word Error Rate (%)
1 X 2k          24.2
2 X 2k          20.4
3 X 2k          18.4
4 X 2k          17.8
5 X 2k          17.2                       1 X 3772        22.5
7 X 2k          17.1                       1 X 4634        22.6
                                           1 X 16k         22.1
Seide, Frank, Gang Li, and Dong Yu. "Conversational Speech Transcription Using Context-Dependent Deep Neural Networks." Interspeech, 2011.
Why Deep?
• Deep → Modularization
Image → Classifier 1: girls with long hair (many examples)
Image → Classifier 2: boys with long hair (few examples → a weak classifier)
Image → Classifier 3: girls with short hair (many examples)
Image → Classifier 4: boys with short hair (many examples)
Why Deep?
Each basic classifier can have sufficient training examples.
• Deep → Modularization
Image → basic classifiers for the attributes:
• Boy or girl? (compare long-haired and short-haired girls with long-haired and short-haired boys)
• Long or short hair? (compare long-haired girls and boys with short-haired girls and boys)
Why Deep?
• Deep → Modularization
The basic classifiers (boy or girl?, long or short hair?) are shared by the following classifiers as modules, so the fine classifiers can be trained with little data:
• Classifier 1: girls with long hair
• Classifier 2: boys with long hair (fine classifier, little data)
• Classifier 3: girls with short hair
• Classifier 4: boys with short hair
Why Deep?
Deep learning also works on small data sets like TIMIT.
The most basic classifiers are learned first; then the 1st layer is used as a module to build classifiers, the 2nd layer is used as a module, ……
SVM: hand-crafted kernel function, then apply a simple classifier.
Deep Learning: a learnable kernel φ(x); the hidden layers map x (x1, ……, xN) to φ(x), and a simple classifier produces y1, ……, yM.
Source of image: http://www.gipsa-lab.grenoble-inp.fr/transfert/seminaire/455_Kadri2013Gipsa-lab.pdf
Hard to get the power of Deep …
Part III:
Tips for Training DNN
Recipe for Learning
http://www.gizmodo.com.au/2015/04/the-basic-recipe-for-machine-learning-explained-in-a-single-powerpoint-slide/
Recipe for Learning
• If the results on the training data are bad: modify the network or use a better optimization strategy (don't forget to check the training results!).
• If the results on the testing data are bad (overfitting): use techniques for preventing overfitting.
http://www.gizmodo.com.au/2015/04/the-basic-recipe-for-machine-learning-explained-in-a-single-powerpoint-slide/
Recipe for Learning
Prevent Overfitting
ReLU
• Rectified Linear Unit (ReLU)
$a = z$ for $z > 0$; $a = 0$ for $z \le 0$ (compare with the sigmoid $\sigma(z)$)
Reasons:
1. Fast to compute
2. Biological reason
3. Equivalent to an infinite number of sigmoids with different biases
4. Handles the vanishing gradient problem
[Xavier Glorot, AISTATS'11] [Andrew L. Maas, ICML'13] [Kaiming He, arXiv'15]
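A sketch of the ReLU activation, a = z for z > 0 and a = 0 otherwise:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

print(relu(np.array([-2.0, 0.5, 3.0])))   # [0.  0.5 3. ]
```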
Vanishing Gradient Problem
In 2006, people used RBM pre-training. In 2015, people use ReLU.
In a deep sigmoid network (x1, ……, xN → …… → y1, ……, yM), the layers closer to the input have smaller gradients: a large change +Δw to a weight near the input causes only a small change +ΔC in the cost, because each sigmoid layer attenuates its effect on the output.
ReLU
$a = z$ for $z > 0$; $a = 0$ for $z \le 0$
The neurons whose output is 0 can be removed, leaving a thinner, linear network (x1, x2 → …… → y1, y2) whose active part does not have smaller gradients.
Maxout (ReLU is a special case of Maxout)
Input x1, x2 → linear units whose outputs are grouped, and each group outputs its maximum:
• Layer 1: max{5, 7} = 7 and max{−1, 1} = 1
• Layer 2: max{1, 2} = 2 and max{4, 3} = 4
Part III:
Tips for Training DNN
Adaptive Learning Rate
Learning Rate
Set the learning rate η in the update −η∇C(θ0) carefully; if it is too small, training would be too slow.
Can we give different parameters different learning rates?
Adagrad
Original gradient descent: $\theta^t \leftarrow \theta^{t-1} - \eta\, \nabla C(\theta^{t-1})$
Adagrad uses a parameter-dependent learning rate
$\eta_w = \dfrac{\eta}{\sqrt{\sum_{i=0}^{t} (g^i)^2}}$
where η is a constant and the denominator is the summation of the squares of that parameter's previous derivatives.
Adagrad
$\eta_w = \dfrac{\eta}{\sqrt{\sum_{i=0}^{t} (g^i)^2}}$
Example: parameter w1 has derivatives g0 = 0.1, g1 = 0.2, ……, while w2 has g0 = 20.0, g1 = 10.0, ……
The parameter with smaller derivatives gets the larger learning rate, and the parameter with larger derivatives gets the smaller learning rate.
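A sketch of the Adagrad update using the derivatives from the example above; the learning rate value is an assumption:

```python
import numpy as np

eta = 0.1
theta = np.array([1.0, 1.0])       # two parameters w1 and w2 (toy values)
sum_sq = np.zeros_like(theta)
grads = [np.array([0.1, 20.0]), np.array([0.2, 10.0])]   # g^0, g^1 from the slide's example

for g in grads:
    sum_sq += g ** 2                     # accumulate squared past derivatives
    theta -= eta / np.sqrt(sum_sq) * g   # per-parameter learning rate eta / sqrt(sum)
    print(eta / np.sqrt(sum_sq))         # w2's learning rate is much smaller than w1's
```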
Part III:
Tips for Training DNN
Dropout
Dropout
Training: pick a mini-batch and update θt ← θt−1 − η∇C(θt−1). Before each update, neurons are dropped out, so the network used for the update is thinner.
Testing: no dropout. If the dropout rate at training is p%, all the weights are multiplied by (1−p)%.
Example: assume that the dropout rate is 50%; if a weight w is obtained by training, use 0.5w for testing.
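A sketch of dropout as described above: during training each neuron's output is dropped with probability p, and at testing nothing is dropped but everything is scaled by (1−p). Scaling the outgoing activations here is equivalent to scaling the weights of the next layer:

```python
import numpy as np

p = 0.5
rng = np.random.default_rng(0)

def dropout(a, training):
    if training:
        mask = rng.random(a.shape) >= p    # each neuron has p% chance to be dropped
        return a * mask
    return a * (1.0 - p)                   # testing: scale instead of dropping

a = np.array([0.2, 0.7, 1.3, 0.1])
print(dropout(a, training=True))
print(dropout(a, training=False))
```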
Dropout - Intuitive Reason
"My partner may slack off, so I have to work well myself." (Each neuron learns to do its job even when its partners are dropped.)
Testing: the weights are multiplied by (1−p)%, e.g. w3 → 0.5 × w3 and w4 → 0.5 × w4, so that z' ≈ z.
Dropout is a kind of ensemble.
• Ensemble: train several networks on different subsets of the training set; for a testing datum x, average their outputs y1, y2, y3, y4.
• Training of dropout: each mini-batch (mini-batch 1, 2, 3, 4, ……) trains one of the 2^M possible thinned networks obtained by dropping out some of the M neurons.
• Testing of dropout: use the full network with all the weights multiplied by (1−p)%; its output is approximately the average of the thinned networks' outputs (average ≈ y).
More about dropout
• More references for dropout: [Nitish Srivastava, JMLR'14] [Pierre Baldi, NIPS'13] [Geoffrey E. Hinton, arXiv'12]
• Dropout works better with Maxout [Ian J. Goodfellow, ICML'13]
• Dropconnect [Li Wan, ICML'13]
• Dropout deletes neurons; Dropconnect deletes the connections between neurons
• Annealed dropout [S.J. Rennie, SLT'14]
• The dropout rate decreases over epochs
• Standout [J. Ba, NIPS'13]
• Each neuron has a different dropout rate
Part IV:
Neural Network
with Memory
Neural Network needs Memory
• Named Entity Recognition
• Detecting named entities like names of people, locations, organizations, etc. in a sentence.
Input "apple" (1-of-N encoding: 1, 0, 0, ……, 0) → DNN → output: 0.1 people, 0.1 location, 0.5 organization, 0.3 none.
Neural Network needs Memory
• Named Entity Recognition
• Detecting named entities like names of people, locations, organizations, etc. in a sentence.
"the president of apple eats an apple" (x1 …… x7 → y1 …… y7): the target for the first "apple" is ORG, while the target for the second "apple" is NONE, yet a DNN sees exactly the same input both times.
DNN needs memory!
Recurrent Neural Network (RNN)
The output of the hidden layer a1 at one time step is stored (copied) and fed back at the next time step: x1 → a1 → y1, then x2 together with the stored a1 → a2 → y2, then x3 together with a2 → a3 → y3, ……
The same weights are used at every time step: Wi from the input, Wh from the stored hidden state, and Wo to the output. A cost L1, L2, L3, …… is computed at each time step.
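A minimal sketch of this recurrent computation; the exact formulation (a sigmoid, and combining the input and the stored state additively) is a common choice and is assumed here rather than taken from the slides:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
dim_x, dim_a, dim_y = 4, 8, 3                       # illustrative sizes
Wi = rng.standard_normal((dim_a, dim_x)) * 0.1
Wh = rng.standard_normal((dim_a, dim_a)) * 0.1
Wo = rng.standard_normal((dim_y, dim_a)) * 0.1

def rnn(xs):
    a = np.zeros(dim_a)                             # initial memory
    ys = []
    for x in xs:                                    # same Wi, Wh, Wo at every time step
        a = sigmoid(Wi @ x + Wh @ a)                # new hidden state uses the stored a
        ys.append(Wo @ a)                           # output y^t
    return ys

print(len(rnn([rng.random(dim_x) for _ in range(7)])))  # 7 outputs for 7 inputs
```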
Bidirectional RNN
One RNN reads the input sequence forward (……, xt, xt+1, xt+2, ……) and another reads it backward; the output at each time step (yt, yt+1, yt+2, ……) is produced from the hidden states of both.
Many to Many (Output is shorter)
• Both the input and the output are sequences, but the output is shorter.
• E.g. Speech Recognition: the frame-level outputs contain a null symbol φ; "好 φ φ 棒 φ φ φ φ" is decoded as "好棒", while "好 φ φ 棒 φ 棒 φ φ" is decoded as "好棒棒".
Many to Many (No Limitation)
• Both the input and the output are sequences with different lengths → sequence-to-sequence learning.
• E.g. Machine Translation (machine learning → 機器學習)
The final hidden state contains all the information about the input sequence "machine learning"; the decoder then generates the output characters one by one.
Without a stopping criterion, the decoder keeps generating: 機 器 學 習 慣 性 ……
Adding a stop symbol "===" lets it output 機 器 學 習 === and then stop.
Unfortunately ……
• RNN-based network is not always easy to learn
Real experiments on language modeling show that the training cost sometimes jumps around wildly; a run that learns smoothly may simply be lucky.
The error surface is rough: it is either very flat or very steep (cost plotted over w1, w2).
Clipping: when the gradient is too large, clip it to a maximum size.
Long Short-term Memory (LSTM)
A special neuron with 4 inputs and 1 output:
• Input gate: a signal from the other part of the network controls the input gate.
• Memory cell: stores the value.
• Forget gate: a signal from the other part of the network controls the forget gate.
• Output gate: a signal from the other part of the network controls the output gate.
$c' = g(z)\, f(z_i) + c\, f(z_f)$
$a = h(c')\, f(z_o)$
Here z is the input, z_i, z_f, z_o are the signals controlling the input, forget and output gates, c is the stored value and c' is the new value of the memory cell. The gate activation function f is usually a sigmoid: its value lies between 0 and 1 and mimics an open or closed gate.
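A sketch of one LSTM step following the formulas above, with f = sigmoid; using tanh for g and h is an assumption (the slide only fixes f as a sigmoid-like gate):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(c, z, z_i, z_f, z_o):
    c_new = np.tanh(z) * sigmoid(z_i) + c * sigmoid(z_f)   # input gate + forget gate
    a = np.tanh(c_new) * sigmoid(z_o)                      # output gate
    return a, c_new

a, c = lstm_step(c=0.3, z=1.2, z_i=2.0, z_f=-1.0, z_o=0.5)
print(a, c)
```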
Original Network: simply replace the neurons with LSTM.
(x1, x2 → z1, z2 → a1, a2 → ……)
(At each time step the LSTM layer receives the four signals zf, zi, z, zo and carries the cell state forward: ct−1 → ct → ct+1.)
[Cho, EMNLP'14] [Tomas Mikolov, ICLR'15]
A vanilla RNN initialized with the identity matrix and a ReLU activation function [Quoc V. Le, arXiv'15] outperforms or is comparable with LSTM in 4 different tasks.
What is the next wave?
• Attention-based Model
A reading head and a writing head access an internal memory or information from the output.
Concluding Remarks
• Introduction of deep learning
• Discussing some reasons for using deep learning
• New techniques for deep learning
• ReLU, Maxout
• Giving all the parameters different learning rates
• Dropout
• Networks with memory
• Recurrent neural network
• Long short-term memory (LSTM)
Reading Materials
• "Neural Networks and Deep Learning"
• Written by Michael Nielsen
• http://neuralnetworksanddeeplearning.com/
• "Deep Learning" (not finished yet)
• Written by Yoshua Bengio, Ian J. Goodfellow and Aaron Courville
• http://www.iro.umontreal.ca/~bengioy/dlbook/
Thank you
for your attention!
Acknowledgement
• Thanks to Ryan Sun for writing in to point out typos in the slides.
Appendix
Matrix Operation
With $x = \begin{bmatrix}1\\-1\end{bmatrix}$, $W = \begin{bmatrix}1 & -2\\ -1 & 1\end{bmatrix}$ and $b = \begin{bmatrix}1\\0\end{bmatrix}$:
$a = \sigma(Wx + b) = \sigma\!\left(\begin{bmatrix}4\\-2\end{bmatrix}\right) = \begin{bmatrix}0.98\\0.12\end{bmatrix}$
Why Deep? – Logic Circuits
• Two levels of basic logic gates can represent any Boolean function.
• However, no one uses two levels of logic gates to build computers.
• Using multiple layers of logic gates to build some functions is much simpler (fewer gates needed).
Boosting: weak classifier + weak classifier + ……, combined into a stronger classifier.
Deep Learning: x1, ……, xN → weak classifier → boosted weak classifier → boosted boosted weak classifier → ……; each layer boosts the classifiers built from the previous layer.
Maxout (ReLU is a special case of Maxout)
ReLU: $z = wx + b$, $a = \max(z, 0)$.
Maxout with two linear pieces, one of them fixed to zero: $z_1 = wx + b$, $z_2 = 0$, $a = \max\{z_1, z_2\}$, which is exactly ReLU.
Maxout (ReLU is a special case of Maxout)
ReLU: $z = wx + b$, $a = \max(z, 0)$.
General Maxout: $z_1 = wx + b$, $z_2 = w'x + b'$, $a = \max\{z_1, z_2\}$, a learnable activation function.
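A sketch of a maxout unit with two learnable linear pieces as above; the weight and bias values are illustrative:

```python
import numpy as np

def maxout(x, w1, b1, w2, b2):
    z1 = w1 * x + b1
    z2 = w2 * x + b2
    return np.maximum(z1, z2)   # a = max{z1, z2}

x = np.linspace(-3, 3, 7)
print(maxout(x, w1=1.0, b1=0.0, w2=0.0, b2=0.0))   # w'=0, b'=0 recovers ReLU(x)
print(maxout(x, w1=1.0, b1=0.0, w2=-0.5, b2=1.0))  # a different learnable activation
```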