Deep learning
attracts lots of attention.
• I believe you have seen lots of exciting results
before.
Image Recognition — Framework
The task is a function f with f(image of a cat) = "cat".
A set of functions (Model): f1, f2, e.g.
  f1(image of a cat) = "cat"    f2(image of a cat) = "money"
  f1(image of a dog) = "dog"    f2(image of a dog) = "snake"
Image Recognition — Framework
A set of functions (Model): f1, f2
Goodness of function f: pick the better one, so that f(image of a cat) = "cat".
Image Recognition — Framework (Supervised Learning)
Training (Step 1): a set of functions (Model) f1, f2, trained on labeled data — images of "monkey", "cat", "dog".
Testing: f(image of a cat) = "cat".
Three Steps for Deep Learning
Step 1: define a set of functions
Step 2: goodness of function
Step 3: pick the best function
Neural Network — Neuron
A neuron takes inputs a1, …, aK, multiplies them by weights w1, …, wK, adds a bias b,
  z = a1 w1 + ⋯ + aK wK + b
and passes z through an activation function. A common choice is the Sigmoid Function:
  σ(z) = 1 / (1 + e^(−z))
Example: inputs (1, −1), weights (1, −2), bias 1 (sigmoid activation function):
  z = 1×1 + (−1)×(−2) + 1 = 4,   σ(4) ≈ 0.98
Neural Network
Different connections lead to different network structures.
Each neuron can have different values of weights and biases.
Weights and biases are the network parameters 𝜃.
Fully Connected Feedforward Network
First layer example with inputs (1, −1): one neuron with weights (1, −2) and bias 1 outputs σ(4) ≈ 0.98;
another neuron with weights (−1, 1) and bias 0 outputs σ(−2) ≈ 0.12.
(Activation: sigmoid, σ(z) = 1 / (1 + e^(−z)).)
Fully Connected Feedforward Network
Each layer's outputs are passed to the next layer:
  input (1, −1) → layer 1: (0.98, 0.12) → layer 2: (0.86, 0.11) → output: (0.62, 0.83)
  input (0, 0) → layer 1: (0.73, 0.5) → layer 2: (0.72, 0.12) → output: (0.51, 0.85)
This is a function:  f([1, −1]) = [0.62, 0.83],   f([0, 0]) = [0.51, 0.85]
Input vector → output vector.
Given parameters 𝜃, we define a function.
Given a network structure, we define a function set.
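As a sketch, the whole network can be written as one function f(𝜃, x): 𝜃 is the list of weight matrices and bias vectors, and fixing 𝜃 picks one function out of the function set. The layer sizes and numbers below are illustrative, not the exact values from the slide:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(theta, x):
    # theta is a list of (W, b) pairs, one per layer.
    a = x
    for W, b in theta:
        a = sigmoid(W @ a + b)   # the output of one layer is the input of the next
    return a

# A toy 2-2-2 network; fixing theta defines one function f.
theta = [(np.array([[ 1.0, -2.0], [-1.0,  1.0]]), np.array([1.0, 0.0])),
         (np.array([[ 2.0, -1.0], [-2.0, -1.0]]), np.array([0.0, 0.0]))]

print(feedforward(theta, np.array([1.0, -1.0])))   # an output vector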
Fully Connected Feedforward Network
Input layer (x1, x2, …, xN) → Layer 1 → Layer 2 → ⋯ → Layer L → Output layer (y1, y2, …, yM).
Each node is a neuron; the layers between the input layer and the output layer are hidden layers.
Deep means many hidden layers.
Output Layer (Option)
• Softmax layer as the output layer
Ordinary layer: yi = σ(zi) — in general, the output of the network can be any value.
Softmax layer: yi = e^(zi) / Σj e^(zj), so that 0 < yi < 1 and Σi yi = 1.
Example: z = (3, 1, −3) → (e^3, e^1, e^(−3)) ≈ (20, 2.7, 0.05) → y ≈ (0.88, 0.12, ≈0)
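A small NumPy sketch of the softmax layer, reproducing the example z = (3, 1, −3):

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

z = np.array([3.0, 1.0, -3.0])
print(softmax(z))               # ≈ [0.88, 0.12, 0.00], sums to 1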
Example Application
Input: a 16 × 16 image of a handwritten digit, flattened into 256 dimensions x1, …, x256 (ink → 1, no ink → 0).
Output: y1, …, y10 — each dimension represents the confidence of a digit,
e.g. y1 ("is 1") = 0.1, y2 ("is 2") = 0.7, …, y10 ("is 0") = 0.2 → the image is "2".
Example Application
• Handwriting Digit Recognition
What is needed is a function:
input: 256-dim vector → Neural Network → output: 10-dim vector (y1 "is 1", y2 "is 2", …, y10 "is 0"), e.g. "2".
Example Application — Handwriting Digit Recognition
Input layer (x1, x2, …, xN) → hidden layers (Layer 1, Layer 2, …, Layer L) → Output layer (y1 "is 1", y2 "is 2", …, y10 "is 0"), e.g. "2".
The network structure defines a function set containing the candidates for handwriting digit recognition.
You need to decide the network structure so that a good function is in your function set.
FAQ
(Figure: the digit-recognition network again — x1, …, x256 (16 × 16 = 256 inputs) → …… → softmax → y1 "is 1", …, y10 "is 0".)
Ink → 1, no ink → 0.
The learning target is ……
For an input image of "1", y1 should have the maximum value — the output should be as close as
possible to the target (1, 0, …, 0).
Loss
Given a set of parameters, the loss 𝑙 can be the distance between the network output and the target.
Total Loss
For all training data, the total loss is
  L = Σ_{r=1}^{R} l_r
Each example xr is fed to the NN, and its output yr is compared with the target ŷr to get lr.
Find a function in the function set that minimizes the total loss L — make l1, l2, l3, … as small as possible.
Network parameters 𝜃 = {w1, w2, w3, ⋯, b1, b2, b3, ⋯} — millions of parameters (~10^6 weights).
Gradient Descent
Compute ∂L/∂w for each weight w:
  negative ∂L/∂w → increase w;  positive ∂L/∂w → decrease w.
Update: w ← w − η ∂L/∂w, where η is called the "learning rate".
http://chico386.pixnet.net/album/photo/171572850
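A sketch of gradient-descent updates in NumPy. The loss and its gradient here are toy placeholders (in a real network the gradient comes from backpropagation); only the update rule is the point:

import numpy as np

eta = 0.1                              # learning rate

def loss(w):
    return np.sum((w - 3.0) ** 2)      # toy loss with minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)             # dL/dw for the toy loss

w = np.array([0.2, -0.1, 0.3])         # network parameters theta
for step in range(100):
    w = w - eta * grad(w)              # w <- w - eta * dL/dw
print(w, loss(w))                      # w approaches 3, loss approaches 0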
Gradient Descent
Starting from an initial 𝜃, compute the partial derivative for each parameter and update it, e.g.
  w1: 0.2  → 0.2 − η ∂L/∂w1  = 0.15
  w2: −0.1 → −0.1 − η ∂L/∂w2 = 0.05
  b1: 0.3  → 0.3 − η ∂L/∂b1  = 0.2
  ……
The vector of all the partial derivatives is the gradient:
  ∇L = [∂L/∂w1, ∂L/∂w2, ⋯, ∂L/∂b1, ⋯]
Gradient Descent
Repeat: compute the gradient again and update again, e.g.
  w1: 0.2 → 0.15 → 0.09,   b1: 0.3 → 0.2 → 0.10, ……
Gradient Descent
(Contour plot of the total loss L over two parameters w1, w2; color: value of the total loss L.)
Hopefully, we would reach a minimum …..
Gradient Descent
This is the "learning" of machines in deep learning ……
Even AlphaGo uses this approach.
What people imagine …… vs. what actually happens …..
(Illustration developed by NTU student 周伯威.)
Why Deep?
Any function f : R^N → R^M can be realized by a network with one hidden layer
(given enough hidden neurons).
Reference for the reason: http://neuralnetworksanddeeplearning.com/chap4.html
Which one is better for the same inputs x1, x2, …, xN — a shallow network or a deep network?
Fat + Short v.s. Thin + Tall

  Deep                                Shallow
  Layer × Size   Word Error Rate (%)  Layer × Size   Word Error Rate (%)
  1 × 2k         24.2
  2 × 2k         20.4
  3 × 2k         18.4
  4 × 2k         17.8
  5 × 2k         17.2                 1 × 3772       22.5
  7 × 2k         17.1                 1 × 4634       22.6
                                      1 × 16k        22.1
Why?
Seide, Frank, Gang Li, and Dong Yu. "Conversational Speech Transcription Using Context-Dependent Deep Neural Networks." Interspeech. 2011.
Analogy
Logic circuits
• Logic circuits consist of gates.
• Two layers of logic gates can represent any Boolean function.
• Using multiple layers of logic gates to build some functions is much simpler → fewer gates needed.
Neural network
• A neural network consists of neurons.
• A network with one hidden layer can represent any continuous function.
• Using multiple layers of neurons to represent some functions is much simpler → fewer parameters → less data?
Reference: Zeiler, M. D., & Fergus, R.
• Deep → Modularization
x1, x2, …, xN → the most basic classifiers → use the 1st layer as a module to build classifiers
→ use the 2nd layer as a module ……
Outline of Lecture I
Introduction of Deep Learning
Why Deep?
Example: handwriting digit recognition — the machine reads a 28 × 28 image and outputs the digit, e.g. "1".
Network: 28 × 28 input → 500 → 500 → softmax → y1, y2, …, y10.
Keras
Step 3.1: Configuration — w ← w − η ∂L/∂w, e.g. learning rate 0.1.
Step 3.2: Find the optimal network parameters.
Training data: images (28 × 28 = 784-dim vectors) and labels (10-dim vectors).
Keras
• Using a GPU to speed up training
• Way 1:
    THEANO_FLAGS=device=gpu0 python YourCode.py
• Way 2 (in your code):
    import os
    os.environ["THEANO_FLAGS"] = "device=gpu0"
Tips for Training DNN
Recipe of Deep Learning
Step 1: define a set of functions → Step 2: goodness of function → Step 3: pick the best function
→ Good results on training data? NO → improve training; YES →
→ Good results on testing data? NO → overfitting, go back; YES → done.
Neural
Network
Do not always blame
Overfitting
Not well trained
Overfitting?
Good Results on
Different approaches for Testing Data?
different problems.
Neural
Network
Recipe of Deep Learning — Choosing Proper Loss
For an input image of "1", the softmax outputs y1, …, y10 should match the target ŷ = (1, 0, …, 0).
Which one is better?
  Square Error:  Σ_{i=1}^{10} (yi − ŷi)²
  Cross Entropy: − Σ_{i=1}^{10} ŷi ln yi
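A NumPy sketch computing the two losses for this example; the network output used here is an invented, imperfect distribution just to show the calculation:

import numpy as np

y_hat = np.zeros(10)
y_hat[0] = 1.0                               # target: the image is "1"

y = np.array([0.3] + [0.7 / 9] * 9)          # an imperfect softmax output (sums to 1)

square_error  = np.sum((y - y_hat) ** 2)
cross_entropy = -np.sum(y_hat * np.log(y))   # only the target entry contributes here

print(square_error, cross_entropy)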
Let's try it
Testing accuracy: Square Error 0.11, Cross Entropy 0.84.
(Training accuracy curves: cross entropy learns, square error gets stuck.)
Choosing Proper Loss
When using a softmax output layer, choose cross entropy.
(Figure: total loss over w1, w2 — the square-error surface is very flat far from the target, so training
stalls there, while cross entropy still provides a useful gradient.)
http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf
Recipe of Deep Learning — Mini-batch
We do not really minimize total loss!
• Randomly initialize the network parameters.
• Pick the 1st mini-batch (e.g. x1, x31, …): L′ = l1 + l31 + ⋯ ; update the parameters once.
• Pick the 2nd mini-batch (e.g. x2, x16, …): L″ = l2 + l16 + ⋯ ; update the parameters once.
• ……
• Until all mini-batches have been picked: one epoch.
E.g. with 100 examples in a mini-batch, repeat the whole pass 20 times (20 epochs).
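A sketch of this mini-batch loop in plain Python/NumPy. The per-batch gradient is a placeholder for a toy linear model; the batching and shuffling logic is the point:

import numpy as np

def grad_on_batch(w, x_batch, y_batch):
    # Placeholder gradient of the batch loss L' (sum of the l_r in this mini-batch).
    return np.mean(x_batch * (x_batch @ w - y_batch)[:, None], axis=0)

rng = np.random.default_rng(0)
x, y = rng.normal(size=(1000, 3)), rng.normal(size=1000)   # toy training data
w = rng.normal(size=3)                                     # randomly initialize parameters

eta, batch_size, epochs = 0.1, 100, 20
for epoch in range(epochs):
    order = rng.permutation(len(x))                        # shuffle the examples each epoch
    for start in range(0, len(x), batch_size):
        idx = order[start:start + batch_size]              # pick the next mini-batch
        w = w - eta * grad_on_batch(w, x[idx], y[idx])     # update parameters once per batch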
Mini-batch
We do not really minimize total loss! Each update uses only the loss on the current mini-batch
(L′ = l1 + l31 + ⋯ for the 1st batch, L″ = l2 + l16 + ⋯ for the 2nd batch, …),
so the objective L is different each time we update the parameters!
Mini-batch
Original gradient descent vs. with mini-batch: the mini-batch updates are unstable!!!
(Figure: training accuracy over one epoch of computation — mini-batch vs. no batch.)
Shuffle the training examples for each epoch
Epoch 1: mini-batch 1 = {x1, x31, …} with losses l1, l31, …; mini-batch 2 = {x2, x16, …}; ……
Epoch 2: the examples are re-shuffled, so each mini-batch contains a different set of examples
(e.g. x1 is now grouped with x17).
Recipe of Deep Learning — New activation function
Hard to get the power of Deep …
(Training results: a 3-layer network learns, a 9-layer sigmoid network does not.)
Vanishing Gradient Problem
In a deep sigmoid network (x1, …, xN → …… → y1, …, yM), the gradients are small for the
weights near the input and large for the weights near the output.
Intuitive way to compute the derivatives: perturb a weight by +Δw and watch the change +Δl of the loss,
  ∂l/∂w ≈ Δl/Δw.
For a weight near the input, even a large +Δw produces only a small change in the output,
because its effect is attenuated by the sigmoids of the layers above — so the derivative is small.
Hard to get the power of Deep …
ReLU (Rectified Linear Unit):
  a = z  if z > 0
  a = 0  if z ≤ 0
Neurons whose output is 0 can be removed, leaving a thinner linear network;
the remaining (linear) parts do not have smaller gradients.
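A NumPy sketch of ReLU (with sigmoid alongside for comparison):

import numpy as np

def relu(z):
    # a = z for z > 0, a = 0 otherwise
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(z))      # [0.  0.  0.  0.5 2. ]
print(sigmoid(z))   # values squashed into (0, 1); gradients shrink for large |z|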
Let's try it (9 layers)
Testing accuracy: Sigmoid 0.11, ReLU 0.96.
(Training accuracy curves: ReLU learns, sigmoid does not.)
ReLU - variants
Leaky ReLU:      a = z if z > 0,  a = 0.01 z otherwise
Parametric ReLU: a = z if z > 0,  a = α z otherwise   (α is also learned by gradient descent)
Maxout — ReLU is a special case of Maxout
A learnable activation function: the linear outputs of a layer are grouped, and each group outputs its maximum.
Example: inputs x1, x2 → first-layer groups (5, 7) → max 7 and (−1, 1) → max 1;
second-layer groups (1, 2) → max 2 and (4, 3) → max 4.
Recipe of Deep Learning — Adaptive Learning Rate
Learning Rates: set the learning rate η carefully.
If the learning rate is too large, the updates overshoot and the total loss may not decrease after each update.
(Figure: update trajectories on the loss surface.)
Learning Rates
• Popular & simple idea: reduce the learning rate by some factor every few epochs.
  • At the beginning, we are far from the destination, so we use a larger learning rate.
  • After several epochs, we are close to the destination, so we reduce the learning rate.
  • E.g. 1/t decay: η_t = η / (t + 1)
• The learning rate cannot be one-size-fits-all.
  • Give different parameters different learning rates.
Adagrad
Original gradient descent: w ← w − η ∂L/∂w
Adagrad:                    w ← w − η_w ∂L/∂w   (parameter-dependent learning rate)
  η_w = η / √( Σ_{i=0}^{t} (g^i)² )      (η: a constant)
where g^i is the ∂L/∂w obtained at the i-th update; the denominator is the summation
of the squares of the previous derivatives.
Example: two parameters w1 and w2.
  w1: derivatives 0.1, 0.2, ……  → learning rates η/√(0.1²), then η/√(0.1² + 0.2²), ……
  w2: derivatives 20.0, 10.0, …… → learning rates η/√(20²),  then η/√(20² + 10²), ……
Observation:
1. The learning rate becomes smaller and smaller for all parameters.
2. Smaller derivatives get a larger learning rate, and vice versa. Why?
   Parameters with larger derivatives receive a smaller learning rate and parameters with smaller
   derivatives a larger one, balancing the step sizes across parameters.
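An Adagrad update sketch in NumPy, reproducing the w1/w2 numbers above (the gradients are just the values from the slide; the per-parameter denominator is the point):

import numpy as np

eta = 1.0
w = np.zeros(2)                       # two parameters w1, w2
sum_sq_grad = np.zeros(2)             # running sum of squared derivatives per parameter

# The derivatives from the slide for (w1, w2) at two consecutive updates.
for g in [np.array([0.1, 20.0]), np.array([0.2, 10.0])]:
    sum_sq_grad += g ** 2
    lr = eta / np.sqrt(sum_sq_grad)   # parameter-dependent learning rate eta_w
    w = w - lr * g
    print(lr)   # 1st update: [eta/0.1, eta/20]; 2nd: [eta/sqrt(0.05), eta/sqrt(500)]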
Recipe of Deep Learning — Momentum
Hard to find the optimal network parameters: the total loss can decrease very slowly at a plateau (∂L/∂w ≈ 0),
get stuck at a saddle point (∂L/∂w = 0), or get stuck at a local minimum (∂L/∂w = 0).
Adam: RMSProp (advanced Adagrad) + Momentum.
Let's try it (ReLU, 3 layers) — testing accuracy: original 0.96, Adam 0.97.
(Training accuracy curves: original vs. Adam.)
Recipe of Deep Learning — good results on testing data? (regularization, network structure, …)
Why Overfitting?
• Training data and testing data can be different.
  Handwriting recognition example: original training data vs. created training data (shifted by 15°).
• For the following experiments, we added some noise to the testing data.
  Testing accuracy: clean 0.97, noisy 0.50.
Recipe of Deep Learning — Early Stopping
As the number of epochs increases, the total loss on the training set keeps decreasing, but the loss
on the validation set / testing set eventually starts to rise — stop training here, at the epoch
where the validation loss is lowest.
Keras: http://keras.io/getting-started/faq/#how-can-i-interrupt-training-when-the-validation-loss-isnt-decreasing-anymore
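In Keras this is done with a callback that watches the validation loss (a sketch; the fit call is commented out because it assumes a model built as in the earlier example):

from keras.callbacks import EarlyStopping

# Stop when the validation loss has not improved for 3 consecutive epochs.
early_stop = EarlyStopping(monitor='val_loss', patience=3)

# model.fit(x_train, y_train,
#           validation_split=0.1,    # hold out part of the training set as a validation set
#           epochs=100, batch_size=100,
#           callbacks=[early_stop])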
Recipe of Deep Learning — Weight Decay
• Weight decay is a kind of regularization: our brain prunes out the useless links between neurons,
  and weight decay similarly pushes useless weights toward zero.
Keras: http://keras.io/regularizers/
Recipe of Deep Learning — Dropout
Training: in each update some neurons are dropped, so the network actually being trained is
thinner than the network with no dropout.
Testing: no dropout. If the dropout rate at training is p%, all the weights are multiplied by (1 − p)%.
Assume that the dropout rate is 50%: if a weight w = 1 after training, set w = 0.5 for testing.
Dropout - Intuitive Reason
When trained with dropout, each neuron behaves as if "my partners may slack off, so I have to do a good job myself."
At testing time (averaging y1, y2, y3, y4), dropout acts as a kind of ensemble.
Training of Dropout — ensemble view
Each mini-batch (1, 2, 3, 4, …) trains one of the sampled thinner networks;
with M neurons there are 2^M possible networks.
Testing of Dropout: use the full network with all the weights multiplied by (1 − p)%;
its output y approximately equals the average of the outputs of the sampled networks.
More about dropout
• More references for dropout: [Nitish Srivastava, JMLR'14] [Pierre Baldi, NIPS'13] [Geoffrey E. Hinton, arXiv'12]
• Dropout works better with Maxout [Ian J. Goodfellow, ICML'13]
• Dropconnect [Li Wan, ICML'13]
  • Dropout deletes neurons; Dropconnect deletes the connections between neurons
• Annealed dropout [S.J. Rennie, SLT'14]
  • The dropout rate decreases over epochs
• Standout [J. Ba, NIPS'13]
  • Each neuron has a different dropout rate
Let's try it
……
500
model.add( Dropout(0.8) )
……
500
model.add( Dropout(0.8) )
Softmax
y1 y2 …… y10
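Putting those two dropout layers into the earlier Keras sketch (hedged: layer sizes and optimizer are illustrative; note that Keras's layer is `Dropout` and its argument is the fraction of units to drop — Keras rescales at training time, which is equivalent to multiplying the weights by (1 − p) at test time):

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation

model = Sequential()
model.add(Dense(500, input_dim=784))
model.add(Activation('relu'))
model.add(Dropout(0.8))            # drop 80% of this layer's outputs during training
model.add(Dense(500))
model.add(Activation('relu'))
model.add(Dropout(0.8))
model.add(Dense(10))
model.add(Activation('softmax'))   # dropout is automatically disabled at test time
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])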
Let's try it
Testing accuracy on the noisy test set: no dropout 0.50, + dropout 0.63.
(Figure: training accuracy per epoch with and without dropout.)
Recipe of Deep Learning — Network Structure
CNN is a very good example! (next lecture)
Concluding Remarks of Lecture II
Recipe of Deep Learning
Step 1: define a set of functions → Step 2: goodness of function → Step 3: pick the best function
→ good results on training data? → good results on testing data?
Let's try another task: Document Classification
Input: word counts in the document, e.g. "stock", "president", ….
Output: category — politics (政治), economy (經濟), sports (體育), finance (財經), ….
http://top-breaking-news.com/
(Demo setup: data, loss (MSE), activation (ReLU), softmax output, and the resulting accuracy.)
A fully connected network on images needs a huge number of parameters: flattening a
100 × 100 × 3 image and connecting it to a layer of 1000 neurons already gives about 3 × 10^7 weights.
Can the fully connected network be simplified by considering the properties of images?
Why CNN for Image
• Some patterns are much smaller than the whole image.
  A neuron does not have to see the whole image to discover the pattern;
  connecting to a small region needs fewer parameters (e.g. a "beak" detector).
• The same patterns appear in different regions (e.g. an "upper-left beak" detector).
• Subsampling the pixels will not change the object → Max Pooling.
The whole CNN:
Input image → Convolution → Max Pooling → (can repeat many times) → Convolution → Max Pooling
→ Flatten → Fully connected feedforward network → output ("cat", "dog", ……)
Property 1: some patterns are much smaller than the whole image.
Property 2: the same patterns appear in different regions.
Property 3: subsampling the pixels will not change the object.
CNN – Convolution
6 × 6 image:
  1 0 0 0 0 1
  0 1 0 0 1 0
  0 0 1 1 0 0
  1 0 0 0 1 0
  0 1 0 0 1 0
  0 0 1 0 1 0
Filter 1 (3 × 3 matrix):    Filter 2 (3 × 3 matrix):    ……
   1 −1 −1                    −1  1 −1
  −1  1 −1                    −1  1 −1
  −1 −1  1                    −1  1 −1
The filter entries are the network parameters to be learned.
Each filter detects a small pattern (3 × 3). (Property 1)
CNN – Convolution
Slide filter 1 over the 6 × 6 image and take the inner product at each position.
With stride = 1, the first two values are 3 and −1.
If stride = 2, the first two values are 3 and −3.
We set stride = 1 below.
CNN – Convolution (stride = 1)
Sliding filter 1 over the whole 6 × 6 image gives a 4 × 4 output:
   3 −1 −3 −1
  −3  1  0 −3
  −3 −3  0  1
   3 −2 −2 −1
The value 3 appears in the upper-left and lower-left: the same pattern detected in different regions. (Property 2)
CNN – Convolution
Do the same process for every filter. Filter 2 produces its own 4 × 4 output:
  −1 −1 −1 −1
  −1 −1 −2  1
  −1 −1 −2  1
  −1  0 −4  3
Together, the 4 × 4 outputs of all the filters form the Feature Map — here a 4 × 4 image with 2 channels.
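A NumPy sketch of this convolution (stride 1, no padding), reproducing the 4 × 4 feature map of filter 1:

import numpy as np

image = np.array([[1,0,0,0,0,1],
                  [0,1,0,0,1,0],
                  [0,0,1,1,0,0],
                  [1,0,0,0,1,0],
                  [0,1,0,0,1,0],
                  [0,0,1,0,1,0]])

filter1 = np.array([[ 1,-1,-1],
                    [-1, 1,-1],
                    [-1,-1, 1]])

def conv2d(img, filt, stride=1):
    k = filt.shape[0]
    out_size = (img.shape[0] - k) // stride + 1
    out = np.zeros((out_size, out_size), dtype=int)
    for i in range(out_size):
        for j in range(out_size):
            patch = img[i*stride:i*stride+k, j*stride:j*stride+k]
            out[i, j] = np.sum(patch * filt)   # inner product of the patch and the filter
    return out

print(conv2d(image, filter1))
# [[ 3 -1 -3 -1]
#  [-3  1  0 -3]
#  [-3 -3  0  1]
#  [ 3 -2 -2 -1]]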
CNN – Zero Padding
Pad the border of the 6 × 6 image with zeros before convolving with the 3 × 3 filter —
you will get another 6 × 6 image in this way.
CNN – Colorful image
For a colorful image each pixel has 3 values, so the 6 × 6 image becomes a 6 × 6 × 3 stack of matrices,
and each filter (filter 1, filter 2, ……) becomes a 3 × 3 × 3 cube of weights.
The whole CNN:
Input image → Convolution → Max Pooling → (can repeat many times) → Convolution → Max Pooling
→ Flatten → Fully connected feedforward network → "cat", "dog", ……
CNN – Max Pooling
Group the values of each 4 × 4 output 2 × 2 and keep the maximum of each group:
  filter 1:   3 −1 −3 −1 / −3 1 0 −3 / −3 −3 0 1 / 3 −2 −2 −1   →    3 0 / 3 1
  filter 2:  −1 −1 −1 −1 / −1 −1 −2 1 / −1 −1 −2 1 / −1 0 −4 3   →   −1 1 / 0 3
CNN – Max Pooling
After convolution and max pooling, the 6 × 6 image becomes a new but smaller 2 × 2 image,
with one value per filter.
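And the 2 × 2 max pooling, applied to the feature map computed by the `conv2d` sketch above:

import numpy as np

def max_pool(fmap, size=2):
    h, w = fmap.shape
    out = np.zeros((h // size, w // size), dtype=fmap.dtype)
    for i in range(0, h, size):
        for j in range(0, w, size):
            out[i // size, j // size] = fmap[i:i+size, j:j+size].max()  # keep the max of each group
    return out

fmap1 = np.array([[ 3,-1,-3,-1],
                  [-3, 1, 0,-3],
                  [-3,-3, 0, 1],
                  [ 3,-2,-2,-1]])

print(max_pool(fmap1))   # [[3, 0], [3, 1]] -- the 2 x 2 image for filter 1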
The whole CNN
Convolution followed by max pooling produces a new image, smaller than the original;
the number of channels of the new image is the number of filters (here 2, with values 3, 0, 3, 1 and −1, 1, 0, 3).
This can be repeated many times; the final maps are flattened and fed into a
fully connected feedforward network that outputs "cat", "dog", ……
Flatten
Flatten the 2 × 2 × 2 pooled feature maps (3, 0, 3, 1 and −1, 1, 0, 3) into a single vector
and feed it to the fully connected feedforward network.
The whole CNN
Convolution → Max Pooling → (can repeat many times) → Convolution → Max Pooling
Convolution and max pooling can themselves be drawn as network layers
(max pooling uses the same "max of a group" units as the Maxout figure earlier):
the 6 × 6 image is convolved with the two 3 × 3 filters, then max pooled.
(Ignoring the non-linear activation function after the convolution.)
Convolution can be viewed as a sparsely connected layer:
flatten the 6 × 6 image into 36 inputs numbered 1–36 (1: 1, 2: 0, 3: 0, 4: 0, …, 8: 1, …, 15: 1, 16: 1, …).
The first output of filter 1 (the value 3) is a neuron connected only to 9 inputs
(1, 2, 3, 7, 8, 9, 13, 14, 15), with the 9 filter entries as its weights —
less parameters! It connects to only 9 inputs, not fully connected.
The next output of filter 1 (the value −1) connects to inputs 2, 3, 4, 8, 9, 10, 14, 15, 16
and uses the same 9 weights — shared weights, even less parameters!
Max pooling can likewise be drawn as part of the network: the convolution outputs are grouped
and each group feeds a max unit (the same structure as the Maxout figure earlier), e.g.
  3 −1 −3 −1 / −3 1 0 −3 / −3 −3 0 1 / 3 −2 −2 −1   →   3 0 / 3 1
(Ignoring the non-linear activation function after the convolution.)
Parameter count for this 6 × 6 example:
  fully connected: input dim = 6 × 6 = 36, output dim = 4 × 4 × 2 = 32 → 36 × 32 = 1152 parameters;
  convolution: only 9 × 2 = 18 parameters (two 3 × 3 filters, shared across positions).
Convolutional Neural Network
Step 1: define a set of functions — the Convolutional Neural Network (convolution, max pooling, fully connected).
Step 2: goodness of function. Step 3: pick the best function.
E.g. input image of a cat → CNN → output compared with the target ("monkey" 0, "cat" 1, "dog" 0).
Learning: nothing special, just gradient descent ……
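A hedged Keras sketch of such a CNN on 28 × 28 grayscale inputs (layer names follow the Keras 2 API; the filter counts and sizes are illustrative, not the ones used in the lecture):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(25, (3, 3), activation='relu', input_shape=(28, 28, 1)))  # convolution
model.add(MaxPooling2D((2, 2)))                                            # max pooling
model.add(Conv2D(50, (3, 3), activation='relu'))                           # repeat
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())                                                        # flatten
model.add(Dense(100, activation='relu'))                                    # fully connected
model.add(Dense(10, activation='softmax'))

# Learning is nothing special: just gradient descent on the cross entropy.
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])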
Playing Go
Network: the input is the board, a 19 × 19 matrix (black: 1, white: −1, none: 0) treated as a
19 × 19 vector (image); the output is the next move (19 × 19 positions).
A fully-connected feedforward network can be used, but a CNN performs much better.
Training: records of previous plays, e.g. 進藤光 v.s. 社清春 — Black: 5之五, White: 天元, Black: 五之5, ……
  After Black plays 5之五, the network target is "天元" = 1, else = 0.
  After White plays 天元, the network target is "五之5" = 1, else = 0.
Why CNN for playing Go?
• Some patterns are much smaller than the whole image (board).

Example Application — Slot Filling
"… arrive Taipei on November 2nd …" → slots: Destination: Taipei; time of arrival: November 2nd.
Example Application
Solving slot filling with a feedforward network?
Input: a word (each word is represented as a vector), e.g. "Taipei" → x1, x2, …; outputs y1, y2, ….
How to represent each word as a vector? 1-of-N encoding:
lexicon = {apple, bag, cat, dog, elephant}; the vector is lexicon size, and each dimension corresponds to a word in the lexicon:
  apple = [1 0 0 0 0], bag = [0 1 0 0 0], cat = [0 0 1 0 0], ……
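A tiny Python sketch of 1-of-N encoding with this lexicon:

lexicon = ['apple', 'bag', 'cat', 'dog', 'elephant']

def one_of_n(word):
    # The vector is lexicon-sized; one dimension per word in the lexicon.
    vec = [0] * len(lexicon)
    vec[lexicon.index(word)] = 1
    return vec

print(one_of_n('apple'))   # [1, 0, 0, 0, 0]
print(one_of_n('cat'))     # [0, 0, 1, 0, 0]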
Beyond 1-of-N encoding
• Add a dimension for "other": words not in the lexicon (e.g. w = "Gandalf", w = "Sauron") get "other" = 1,
  while apple, bag, cat, dog, elephant, … keep their own dimensions.
• Word hashing: represent a word by its letter trigrams (26 × 26 × 26 dimensions),
  e.g. w = "apple" → a-p-p = 1, p-p-l = 1, p-l-e = 1.
Example Application
Solving slot filling with a feedforward network?
Input: a word (each word is represented as a vector), e.g. "Taipei" (x1, x2).
Output (y1, y2): the probability distribution that the input word belongs to each slot
(e.g. dest, time of departure).
Example Application
"arrive Taipei on November 2nd": the outputs give the probability of "arrive" in each slot,
of "Taipei" in each slot, of "on" in each slot, and so on (y1, y2, y3, …); the hidden values a1, a2
are stored and fed to the next step together with the next word.
Problem for a memoryless network: in "leave Taipei" vs. "arrive Taipei", the probabilities for "Taipei"
in each slot should differ (place of departure vs. destination), so the stored context is needed.
Recurrent Neural Network
The same network is applied again and again along the input sequence …, x^t, x^(t+1), x^(t+2), ….
Bidirectional RNN
A forward RNN and a backward RNN both read the sequence, and each output y^t, y^(t+1), y^(t+2)
is produced from the hidden states of both directions.
Long Short-term Memory (LSTM)
A special neuron with 4 inputs and 1 output:
• Input gate: a signal (from other parts of the network) controls whether the input is written into the memory cell.
• Forget gate: a signal controls whether the memory cell keeps or forgets its stored value.
• Output gate: a signal controls whether the stored value is read out to the other parts of the network.
With input z, gate signals z_i, z_f, z_o and stored value c:
  c′ = g(z) f(z_i) + c f(z_f)
  a  = h(c′) f(z_o)
The gate activation f is usually a sigmoid function: its value lies between 0 and 1,
mimicking an open or closed gate.
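A NumPy sketch of one LSTM cell step, directly following the equations above. Assumptions: the signals are scalars here, and g and h are taken to be tanh; in a real LSTM layer z, z_i, z_f, z_o are linear functions of the layer input and previous output:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(z, z_i, z_f, z_o, c, g=np.tanh, h=np.tanh, f=sigmoid):
    c_new = g(z) * f(z_i) + c * f(z_f)   # c' = g(z) f(z_i) + c f(z_f)
    a = h(c_new) * f(z_o)                # a  = h(c') f(z_o)
    return a, c_new

# Example: input gate and output gate wide open, forget gate closed.
a, c = lstm_cell_step(z=3.0, z_i=10.0, z_f=-10.0, z_o=10.0, c=0.0)
print(a, c)   # the cell stores g(3) and outputs roughly h(g(3))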
(Worked example: with inputs around ±3 and gate signals around ±10, the sigmoids f(z) saturate
to ≈ 1 or ≈ 0, so the gates behave as fully open or fully closed.)
LSTM
In an LSTM layer, each neuron is replaced by an LSTM cell. The input x^t is multiplied by four
weight matrices to give four vectors z, z^i, z^f, z^o — one element per cell — which drive the
cell input and the input, forget and output gates, turning the cell states c^(t−1) into c^t and producing y^t.
Extension: "peephole" connections, where the gate inputs also see c^(t−1) and h^(t−1).
Multiple-layer LSTM: stacking several LSTM layers is quite standard now.
https://img.komicolle.org/2015-09-20/src/14426967627131.gif
Three Steps for Deep Learning
Step 1: define a set of functions; Step 2: goodness of function; Step 3: pick the best function.
Training sentence: "arrive Taipei on November 2nd" with slot labels other / dest / other / time / time.
The RNN reads the words one by one, copying the hidden state a1, a2, a3 forward, and at each step
the output y1, y2, y3 should match the 1-of-N target (0 … 1 … 0) of the correct slot.
Three Steps for Deep Learning
Step 1: define a set of functions; Step 2: goodness of function; Step 3: pick the best function.
Training is still gradient descent: w ← w − η ∂L/∂w.
Unfortunately ……
• RNN-based networks are not always easy to learn.
  Real experiments on language modeling: the total loss sometimes jumps around wildly
  across epochs — sometimes you are just lucky.
The error surface is rough: it is either very flat or very steep
(total loss over w1, w2); clipping the gradient helps when it explodes.
So far the input and output are both sequences with the same length —
RNN can do more than that!
Many to one: the input is a sequence and the output is a single label,
e.g. sentiment analysis of "我 覺 得 …… 太 糟 了" ("I think …… it is terrible").
Many to Many (Output is shorter)
• Both input and output are sequences, but the output is shorter.
• E.g. speech recognition: acoustic frames → character sequence, output "好棒" ("great").
  Problem? Simply trimming repeated characters cannot distinguish "好棒" from "好棒棒";
  adding a blank symbol φ (as in CTC) does: 好 φ φ 棒 φ φ → "好棒", 好 φ φ 棒 φ 棒 → "好棒棒".
Many to Many (No Limitation)
• Both input and output are sequences, with different lengths. → Sequence-to-sequence learning
• E.g. machine translation: "machine learning" → 機器學習
The encoder reads "machine learning"; its final state contains all the information about the input sequence.
The decoder then generates the output one character at a time: 機, 器, 學, ……
Without a stop symbol it never knows when to stop (機 器 學 習 慣 性 ……);
adding a stop symbol "===" lets it end: 機 器 學 習 ===.
Image Caption Generation: a CNN encodes the input image into a vector, and an RNN generates the caption.
Application: Video Caption Generation — given a video, generate a caption such as "A girl is running."
Reinforcement Learning
Unsupervised Learning
• Image: Realizing what the World Looks Like
• Text: Understanding the Meaning of Words
• Audio: Learning human language without supervision
Skyscraper
https://zh.wikipedia.org/wiki/%E9%9B%99%E5%B3%B0%E5%A1%94#/media/File:BurjDubaiHeight.svg
Ultra Deep Network
8 layers (AlexNet, 2012): 16.4% → 19 layers (VGG, 2014): 7.3% → 22 layers (GoogleNet, 2014): 6.7%
→ 152 layers (Residual Net, 2015): 3.57%
http://cs231n.stanford.edu/slides/winter1516_lecture8.pdf
Worry about overfitting? Worry about training first!
These ultra deep networks have special structure.
Ultra Deep Network
• An ultra deep network behaves like the ensemble of many networks with different depth
  (6 layers, 4 layers, 2 layers, …).
• Related ideas: FractalNet, ResNet in ResNet, good initialization?
• Residual / highway-style structure: part of each output layer is a copy of its input,
  controlled by a gate controller.
Reinforcement Learning
Unsupervised Learning
• Image: Realizing what the World Looks Like
• Text: Understanding the Meaning of Words
• Audio: Learning human language without supervision
Attention-based Model
(Analogy with human memory: what you learned in these lectures, lunch today, the summer vacation
10 years ago — when asked "What is deep learning?", you attend to the relevant memories,
organize them, and answer.)
http://henrylo1605.blogspot.tw/2015/05/blog-post_56.html
Attention-based Model
Input → DNN/RNN → output, with a Reading Head Controller that positions a reading head over the Machine's Memory.
Ref: http://speech.ee.ntu.edu.tw/~tlkagk/courses/MLDS_2015_2/Lecture/Attain%20(v3).ecm.mp4/index.html
Attention-based Model v2: the reading head controller attends to different parts of the machine's memory at each step.
Applications: semantic analysis; Visual Question Answering (source: http://visualqa.org/):
Query → DNN/RNN with a reading head controller over the image → answer.
Example: question answering over an audio story, e.g. "What is a possible origin of Venus' clouds?",
choosing among answer options (A) … (A).
Model architecture: an attention/memory model (proposed by the FB AI group), with word-based
attention or sentence-based attention; word-based attention reaches 48.8%.
Reinforcement Learning
Unsupervised Learning
• Image: Realizing what the World Looks Like
• Text: Understanding the Meaning of Words
• Audio: Learning human language without supervision
Scenario of Reinforcement Learning
The agent observes the environment, takes an action, and receives a reward from the environment
(e.g. "Don't do that" as a negative reward).
Agent learns to take actions to maximize expected reward.
http://www.sznews.com/news/content/2013-11/26/content_8800180.ht
Supervised v.s. Reinforcement
• Supervised — learning from a teacher: "Hello" → say "Hi"; "Bye bye" → say "Good bye".
• Reinforcement — learning from critics: after a whole dialogue ("Hello" …… ……), the agent is only told the outcome was "Bad".
Scenario of Reinforcement Learning
Agent learns to take actions to maximize expected reward:
if win, reward = 1; if loss, reward = −1; otherwise, reward = 0.
Supervised v.s. Reinforcement
• Supervised: learn from labeled input-output examples.
• Reinforcement Learning: the function (observation → action) interacts with the environment
  over many steps, and only the eventual reward says how good the whole sequence was.
Application: Interactive Retrieval
• Interactive retrieval is helpful. [Wu & Lee, INTERSPEECH 16]
  The user queries "Deep Learning" and the system interacts with the user to refine the results:
  better retrieval performance and less user labor, at the cost of more interaction.
  The task cannot be addressed by a linear model.
More applications
• Alpha Go, playing video games, dialogue
• Flying helicopter
  • https://www.youtube.com/watch?v=0JL04JJjocc
• Driving
  • https://www.youtube.com/watch?v=0xo1Ldx3L5Q
• Google cuts its giant electricity bill with DeepMind-powered AI
  • http://www.bloomberg.com/news/articles/2016-07-19/google-cuts-its-giant-electricity-bill-with-deepmind-powered-ai
To learn deep reinforcement learning ……
• Lectures of David Silver
  • http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Teaching.html
  • 10 lectures (1:30 each)
• Deep Reinforcement Learning
  • http://videolectures.net/rldm2015_silver_reinforcement_learning/
Outline
Supervised Learning
• Ultra Deep Network
• Attention Model — new network structure
Reinforcement Learning
Unsupervised Learning
• Image: Realizing what the World Looks Like
• Text: Understanding the Meaning of Words
• Audio: Learning human language without supervision
Does the machine know what the world looks like?
Ref: https://openai.com/blog/generative-models/
Draw something!
Deep Dream
• Given a photo, the machine adds what it sees ……
http://deepdreamgenerator.com/
Deep Style
• Given a photo, make its style like famous paintings
https://dreamscopeapp.com/
Deep Style
A CNN extracts the content of the photo, another CNN extracts the style of the painting,
and a new image is generated whose CNN content matches the photo and whose CNN style matches the painting.
Generating Images by RNN
The image is generated pixel by pixel: the color of the 2nd pixel is predicted from the 1st,
the color of the 3rd pixel from the first two, the color of the 4th pixel from the first three, ……
(Real-world examples follow.)
Generating Images
• Training a decoder (code → NN Decoder → image) to generate images is unsupervised.
  (Not a state-of-the-art approach by itself.)
Auto-encoder: an NN Encoder compresses the input into a code, and the NN Decoder reconstructs
the input from the code; the output should be as close as possible to the input, and the two are learned together.
Deep auto-encoder: input layer → layers → bottleneck layer (the code) → layers → output layer;
the first half is the encoder, the second half is the decoder.
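A minimal Keras sketch of such a deep auto-encoder (the layer sizes are illustrative; the encoder and decoder are trained together to reconstruct the input, so the input itself is the target):

from keras.models import Sequential
from keras.layers import Dense

autoencoder = Sequential()
autoencoder.add(Dense(256, activation='relu', input_dim=784))   # encoder
autoencoder.add(Dense(32,  activation='relu'))                  # bottleneck layer: the code
autoencoder.add(Dense(256, activation='relu'))                  # decoder
autoencoder.add(Dense(784, activation='sigmoid'))               # output as close as possible to the input

# Unsupervised: no labels are needed.
autoencoder.compile(loss='mse', optimizer='adam')
# autoencoder.fit(x_train, x_train, epochs=20, batch_size=100)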
Generating Images
• Training a decoder to generate images is unsupervised.
  • Variational Auto-encoder (VAE)
    Ref: Auto-Encoding Variational Bayes, https://arxiv.org/abs/1312.6114
  • Generative Adversarial Network (GAN)
    Ref: Generative Adversarial Networks, http://arxiv.org/abs/1406.2661
Which one is machine-generated?
Ref: https://openai.com/blog/generative-models/
Drawing manga!!! https://github.com/mattya/chainer-DCGAN
Outline
Supervised Learning
• Ultra Deep Network
• Attention Model — new network structure
Reinforcement Learning
Unsupervised Learning
• Image: Realizing what the World Looks Like
• Text: Understanding the Meaning of Words
• Audio: Learning human language without supervision
Machine Reading
• Machine learns the meaning of words from reading a lot of documents without supervision.
http://top-breaking-news.com/
https://garavato.files.wordpress.com/2011/11/stacksdocuments.jpg?w=490
• A word can be understood by its context: "You shall know a word by the company it keeps."
  E.g. 蔡英文 and 馬英九 appear in very similar contexts (such as "馬英九 520 宣誓就職" —
  sworn into office on May 20), so the machine learns that they are something very similar.
Source: http://www.slideshare.net/hustwj/cikm-keynotenov2014
Word Vector
• Characteristics:
  V(hotter) − V(hot) ≈ V(bigger) − V(big)
  V(Rome) − V(Italy) ≈ V(Berlin) − V(Germany)
  V(king) − V(queen) ≈ V(uncle) − V(aunt)
• Solving analogies: Rome : Italy = Berlin : ?
  Compute V(Berlin) − V(Rome) + V(Italy) ≈ V(Germany),
  then find the word whose vector is closest to that result.
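A sketch of how such an analogy is solved once the word vectors V(·) are learned. The 2-D vectors below are invented stand-ins; real word vectors have hundreds of dimensions and come from training on a large corpus:

import numpy as np

V = {'Rome':    np.array([1.0, 2.0]),
     'Italy':   np.array([1.2, 0.5]),
     'Berlin':  np.array([3.0, 2.1]),
     'Germany': np.array([3.2, 0.6]),
     'king':    np.array([5.0, 4.0])}

def most_similar(query):
    # Return the vocabulary word whose vector is closest (cosine similarity) to the query vector.
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(V, key=lambda w: cos(V[w], query))

# Rome : Italy = Berlin : ?   ->   V(Berlin) - V(Rome) + V(Italy)
print(most_similar(V['Berlin'] - V['Rome'] + V['Italy']))   # 'Germany' with these toy vectors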
Outline
Supervised Learning
• Ultra Deep Network
• Attention Model — new network structure
Reinforcement Learning
Unsupervised Learning
• Image: Realizing what the World Looks Like
• Text: Understanding the Meaning of Words
• Audio: Learning human language without supervision
Learning from Audio Book
Machine does not have any prior knowledge — like an infant.
(Audio segments of words such as "dogs", "never", "ever" ……)
Sequence-to-sequence Auto-encoder
An audio segment is represented by a sequence of acoustic features x1, x2, x3, x4.
An RNN Encoder reads the acoustic features and summarizes the segment as a vector;
an RNN Decoder then reconstructs the features y1, y2, y3, y4 from that vector.
The RNN encoder and decoder are jointly trained.
Audio Word to Vector — Results
Visualizing the embedding vectors of the words (e.g. fear, fame, name, near).
WaveNet (DeepMind)
https://deepmind.com/blog/wavenet-generative-model-raw-audio/
Concluding Remarks
Lecture I: Introduction of Deep Learning
http://www.express.co.uk/news/science/651202/First-step-towards-The-Terminator-becoming-reality-AI-beats-champ-of-world-s-oldest-game
AI Trainer
Doesn't the machine learn by itself? Why do we need AI trainers?
The battles are fought by the Pokémon — so why do we need Pokémon trainers?
AI Trainer
Pokémon trainer:
• A Pokémon trainer has to pick suitable Pokémon for the battle.
• Pokémon have different attributes.
• The Pokémon that is summoned cannot always be controlled (e.g. Ash's Charizard).
• Enough experience is needed.
AI trainer:
• In step 1, the AI trainer has to pick a suitable model.
• Different models are suitable for different problems.
• Step 3 cannot always find the best function (e.g. in deep learning).
• Enough experience is needed.
AI Trainer
• Behind a powerful AI, the AI trainer deserves much of the credit.
• Let's walk the road toward becoming AI trainers together.
http://www.gvm.com.tw/webonly_content_10787.html