https://www.youtube.com/watch?v=5GMDbStRgoc
http://www.youtube.com/watch?v=Fbs4lnGLS8M
https://www.youtube.com/watch?v=qv6UVOQ0F44
https://www.youtube.com/watch?v=nEnNtiumgII
http://www.youtube.com/watch?v=Xj7-QA-aCus
Game AI
Deep Learning
July 17, 2019
Tommy Ho
Evolution of AI
https://www.embedded-vision.com/
Difference between AI, ML, and DL
https://blogs.oracle.com/bigdata/difference-ai-machine-learning-deep-learning
Traditional Hand-coded Rules?
https://www.xenonstack.com/blog/data-science/log-analytics-deep-machine-learning-ai/
Outline
http://speech.ee.ntu.edu.tw/~tlkagk/slide/Tutorial_HYLee_Deep.pptx
Lecture I: Introduction to Deep Learning
Outline
• Playing Go: f( board position ) = "5-5" (next move)
• Dialogue System: f( "Hi" ) = "Hello"
  (what the user said → system response)
Image Recognition
Framework: f( image ) = "cat"

Step 1: define a set of functions (a Model): f1, f2, …
  f1( cat image ) = "cat"    f1( dog image ) = "dog"
  f2( cat image ) = "money"  f2( dog image ) = "snake"

Step 2: goodness of function f. Here f1 is better!

Supervised Learning
Training: use labeled training data (e.g. images labeled "monkey", "cat", "dog") to pick the best function.
Testing: apply the chosen function f to a new image → "cat"
Three Steps for Deep Learning
A neuron is a simple function:
  z = a1 w1 + ⋯ + ak wk + ⋯ + aK wK + b
  a = σ(z)
where w1 … wK are the weights, b is the bias, and σ is the activation function.
Neural Network
Neuron with sigmoid activation function: σ(z) = 1 / (1 + e^-z)
[Figure: an example neuron with two inputs, two weights, and a bias; its weighted sum is z = 4, so the output is σ(4) ≈ 0.98.]
Neural Network
Different connections lead to different network structures.
The neurons have different values of weights and biases.
Weights and biases are the network parameters θ.
Fully Connected Feedforward Network
Example first layer, with input (1, -1):
  neuron 1: weights (1, -2), bias 1 → z = 1·1 + (-1)·(-2) + 1 = 4 → σ(4) ≈ 0.98
  neuron 2: weights (-1, 1), bias 0 → z = 1·(-1) + (-1)·1 + 0 = -2 → σ(-2) ≈ 0.12
Sigmoid activation: σ(z) = 1 / (1 + e^-z)
Fully Connected Feedforward Network
Passing the input through all three layers:
  input (1, -1): layer 1 → (0.98, 0.12), layer 2 → (0.86, 0.11), layer 3 → (0.62, 0.83)
  input (0, 0):  layer 1 → (0.73, 0.50), layer 2 → (0.72, 0.12), layer 3 → (0.51, 0.85)
This is a function: input vector in, output vector out.
  f( [1, -1] ) = [0.62, 0.83]    f( [0, 0] ) = [0.51, 0.85]
Given parameters θ, we define a function.
Given a network structure, we define a function set.
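A minimal numpy sketch of the first layer of this example (weights 1, -2 and -1, 1; biases 1 and 0). The deeper layers work the same way, but their weights are not listed on the slide, so only layer 1 is reproduced here.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# First-layer parameters from the example above.
W1 = np.array([[1.0, -2.0],    # weights of the first neuron
               [-1.0,  1.0]])  # weights of the second neuron
b1 = np.array([1.0, 0.0])      # biases

x = np.array([1.0, -1.0])      # input vector
a1 = sigmoid(W1 @ x + b1)      # -> approximately [0.98, 0.12]
print(a1)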
Fully Connected Feedforward Network
[Figure: input layer (x1 … xN) → hidden Layer 1 → Layer 2 → … → Layer L → output layer (y1 … yM)]
"Deep" means many hidden layers.
Why Deep? Universality Theorem
Any continuous function f : R^N → R^M can be realized by a network with one hidden layer (given enough hidden neurons).
Reference for the reason: http://neuralnetworksanddeeplearning.com/chap4.html
Why Deep? Analogy
Logic circuits:
• Logic circuits consist of gates.
• Two layers of logic gates can represent any Boolean function.
• Using multiple layers of logic gates to build some functions is much simpler: fewer gates are needed.
Neural networks:
• Neural networks consist of neurons.
• A network with one hidden layer can represent any continuous function.
• Using multiple layers of neurons to represent some functions is much simpler: fewer parameters (and perhaps less data) are needed.
More reasons: https://www.youtube.com/watch?v=XsC9byQkUH8&list=PLJV_el3uVTsPy9oCRY30oBPNLCo89yu49&index=13
Deep = Many hidden layers
[Figure: ImageNet error rate vs. depth, compared with Taipei 101 for scale. AlexNet (2012, 8 layers): 16.4%; VGG (2014, 19 layers): 7.3%; GoogleNet (2014, 22 layers): 6.7%; Residual Net (2015, special structure): 3.57%.]
http://cs231n.stanford.edu/slides/winter1516_lecture8.pdf
Output Layer
• Softmax layer as the output layer

Ordinary layer: y_i = σ(z_i). In general, the output of the network can be any value; it may not be easy to interpret.

Softmax layer: the outputs form a probability, 1 > y_i > 0 and Σ_i y_i = 1.
  y_i = e^{z_i} / Σ_{j=1}^{3} e^{z_j}
Example: z = (3, 1, -3) → e^z ≈ (20, 2.7, 0.05) → y ≈ (0.88, 0.12, ≈ 0)
Example Application
Input: a 16 x 16 = 256-pixel image of a handwritten digit, flattened to x1 … x256 (ink → 1, no ink → 0).
Output: y1 … y10, where each dimension represents the confidence of a digit, e.g. y1 = 0.1 ("is 1"), y2 = 0.7 ("is 2"), …, y10 = 0.2 ("is 0"), so the image is "2".

Example Application
• Handwriting Digit Recognition
The neural network ("Machine") is the function we need: input, a 256-dim vector; output, a 10-dim vector ("is 1", "is 2", …, "is 0") → "2".

Example Application
[Figure: input layer x1 … x256 → hidden layers → output layer y1 … y10 ("is 1", …, "is 0")]
Deciding the network structure gives a function set containing the candidates for Handwriting Digit Recognition.
Real case: https://youtu.be/ZKkeiqcL8qw?t=24
Learning target
[Figure: the same digit-recognition network with a softmax output y1 … y10; input 16 x 16 = 256 pixels (ink → 1, no ink → 0).]
The learning target: for an image of "1", y1 should have the maximum value.
Given a set of parameters, the loss l measures how far the network output y is from the target ŷ (e.g. ŷ = (1, 0, …, 0) for "1"); the output should be as close to the target as possible. The loss can be square error or cross entropy between the network output and the target.

Total Loss
For all R training examples:
  L = Σ_{r=1}^{R} l^r
Make the total loss as small as possible: find the function in the function set, i.e. the network parameters θ*, that minimizes the total loss L.
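A small numpy sketch of summing a cross-entropy loss over training examples; the outputs and one-hot targets below are made-up placeholders.

import numpy as np

def cross_entropy(y, y_hat):
    # l = -sum_i y_hat_i * ln(y_i) for one example
    return -np.sum(y_hat * np.log(y))

# Hypothetical network outputs and one-hot targets for R = 2 examples.
outputs = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.8, 0.1])]
targets = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]

# Total loss L = sum over all training examples of l^r.
L = sum(cross_entropy(y, y_hat) for y, y_hat in zip(outputs, targets))
print(L)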
Three Steps for Deep Learning
Network parameters θ = {w1, w2, w3, …, b1, b2, b3, …} (all the weights and biases); a network easily has millions (~10^6) of parameters.

Gradient Descent
Pick an initial value for a weight w and compute ∂L/∂w.
If the gradient is positive, decrease w; if it is negative, increase w:
  w ← w - η ∂L/∂w,  where η is called the "learning rate".
Repeat until ∂L/∂w is approximately 0. Gradient descent can stop at a local minimum, get stuck at a saddle point (∂L/∂w = 0), or move very slowly on a plateau (∂L/∂w ≈ 0).
http://chico386.pixnet.net/album/photo/171572850
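A toy illustration of the update rule w ← w - η ∂L/∂w, using the made-up loss L(w) = (w - 3)^2 so the gradient is known in closed form:

eta = 0.1        # learning rate
w = 0.0          # initial value of the parameter

for step in range(100):
    grad = 2 * (w - 3)    # dL/dw; a real network gets this from backpropagation
    w = w - eta * grad    # move against the gradient
print(w)                  # converges towards the minimum at w = 3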
For example, you can do …….
Spam filtering: the network takes features of an e-mail (e.g. whether "free" or "Talk" appears in it) and outputs 1 (Yes, spam) or 0 (No).
(http://spam-filter-review.toptenreviews.com/)

For example, you can do …….
Document classification: the network takes features of a document (e.g. whether "stock" or "president" appears in it) and outputs a category such as Sport, Finance, or Local News.
Deep learning is very flexible, but needs some effort to learn.
Handwriting digit recognition "Machine": a 28 x 28 image → two hidden layers of 500 neurons each → softmax → y1, y2, …, y10 → "1".

Keras
Update rule: w ← w - η ∂L/∂w, with learning rate η = 0.1.
Step 3.2: find the optimal network parameters from the training data (images) and the labels (digits).
Each 28 x 28 image is flattened into a 28 x 28 = 784-dim input vector; the label is encoded as a 10-dim one-hot vector (e.g. "1" → (0, 1, 0, …, 0)).
Keras
• Using the GPU to speed up training
• Way 1:
  THEANO_FLAGS=device=gpu0 python YourCode.py
• Way 2 (in your code):
  import os
  os.environ["THEANO_FLAGS"] = "device=gpu0"
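A minimal sketch, assuming a recent tf.keras, of the kind of model described above (784 inputs, two hidden layers of 500 units, a 10-way softmax, SGD with learning rate 0.1); the activation choice and other details are assumptions, not taken from the slides.

from tensorflow.keras import layers, models, optimizers

model = models.Sequential([
    layers.Dense(500, activation="sigmoid", input_shape=(784,)),  # 28 x 28 = 784 inputs
    layers.Dense(500, activation="sigmoid"),
    layers.Dense(10, activation="softmax"),                        # one output per digit
])
model.compile(loss="categorical_crossentropy",
              optimizer=optimizers.SGD(learning_rate=0.1),
              metrics=["accuracy"])
# model.fit(x_train, y_train, ...) with images flattened to 784-dim vectors
# and labels one-hot encoded to 10-dim vectors.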
Three Steps for Deep Learning
Step 1: define a set of functions. Step 2: goodness of function. Step 3: pick the best function.
Deep Learning is so simple ……

Outline: Recipe of Deep Learning
After Step 3 (pick the best function), check: good results on the training data? If NO, go back and improve training; do not always blame overfitting. If YES, check: good results on the testing data? If NO, that is overfitting, and different approaches are used for different problems. If YES, done.

Which loss is better for the softmax output compared against the one-hot target ŷ?
  Square Error: Σ_{i=1}^{10} (y_i - ŷ_i)^2
  Cross Entropy: -Σ_{i=1}^{10} ŷ_i ln y_i
Both are 0 when the output equals the target; for classification, cross entropy is usually the better choice.
Recipe of Deep Learning: Mini-batch
Pick the 1st mini-batch: L′ = l^1 + l^31 + ⋯ and update the parameters once.
Pick the 2nd mini-batch: L″ = l^2 + l^16 + ⋯ and update the parameters once.
… continue until all mini-batches have been picked: that is one epoch.
E.g. with 100 examples in a mini-batch, repeat the whole pass 20 times (20 epochs).
Shuffle the training examples for each epoch, so the mini-batches differ between epochs (e.g. x1 is grouped with x31 in epoch 1 but with x17 in epoch 2).
Don't worry: this is the default behaviour of Keras.
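A hedged sketch of how these mini-batch settings are typically expressed in Keras; the random placeholder data and the small model are assumptions made only so the snippet runs on its own.

import numpy as np
from tensorflow.keras import layers, models

x_train = np.random.rand(1000, 784)                   # placeholder images
y_train = np.eye(10)[np.random.randint(0, 10, 1000)]  # placeholder one-hot labels

model = models.Sequential([
    layers.Dense(500, activation="relu", input_shape=(784,)),
    layers.Dense(10, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="sgd")

model.fit(x_train, y_train,
          batch_size=100,   # 100 examples in a mini-batch
          epochs=20,        # pass over all mini-batches 20 times (20 epochs)
          shuffle=True)     # reshuffle the training examples each epoch (Keras default)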
Recipe of Deep Learning
[Figure: a deep sigmoid network x1 … xN → … → y1 … yM. A large change +Δw to a weight near the input layer causes only a small change in the outputs and hence a small change +Δl in the loss.]
Intuitive way to compute the derivatives: ∂l/∂w ≈ Δl/Δw. So the gradients of the layers near the input are small (the vanishing gradient problem), and it is hard to get the power of Deep …
ReLU (Rectified Linear Unit) activation:
  a = z if z > 0
  a = 0 if z ≤ 0
Neurons whose output is 0 can be removed, leaving a thinner linear network whose remaining paths do not have smaller gradients.

ReLU variants:
  Leaky ReLU: a = 0.01 z for z < 0
  Parametric ReLU: a = α z for z < 0, where α is also learned by gradient descent

Maxout: ReLU is a special case of Maxout.
[Figure: each Maxout unit outputs the max over a group of linear outputs, e.g. max(5, 7) = 7 and max(-1, 1) = 1; the groups can contain 2 elements, 3 elements, etc.]
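Small numpy sketches of the activations just described (ReLU, Leaky ReLU, and a Maxout group-max), with made-up example inputs:

import numpy as np

def relu(z):
    return np.maximum(0.0, z)             # a = z if z > 0, else 0

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)  # small slope alpha for z < 0

def maxout(z, group_size=2):
    # max over groups of linear outputs, e.g. [5, 7, -1, 1] -> [7, 1]
    return z.reshape(-1, group_size).max(axis=1)

print(relu(np.array([-2.0, 3.0])))              # [0. 3.]
print(leaky_relu(np.array([-2.0, 3.0])))        # [-0.02  3.]
print(maxout(np.array([5.0, 7.0, -1.0, 1.0])))  # [7. 1.]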
Recipe of Deep Learning: Learning Rates
Set the learning rate η carefully: if it is too large, the total loss may not decrease with each update; if it is too small, training is very slow.

Adagrad
  η_w = η / sqrt( Σ_{i=0}^{t} g_i^2 ),  where g_i is the ∂L/∂w obtained at the i-th update.
Each parameter's learning rate is divided by the square root of the summation of the squares of its previous derivatives.
Example:
  w1: g0 = 0.1, g1 = 0.2, … → learning rates η/√(0.1²) = η/0.1, then η/√(0.1² + 0.2²) = η/0.22, …
  w2: g0 = 20.0, g1 = 10.0, … → learning rates η/√(20²) = η/20, then η/√(20² + 10²) = η/22, …
Observation: 1. the learning rate becomes smaller and smaller for all parameters; 2. parameters with smaller derivatives get a larger learning rate, and vice versa, which balances the step sizes across parameters.
Adam: RMSProp (an advanced Adagrad) + Momentum.
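A toy sketch of the Adagrad update above for a single weight, again using the made-up loss L(w) = (w - 3)^2:

import numpy as np

eta = 0.1
w = 0.0
sum_sq_grad = 0.0   # running sum of squared derivatives

for step in range(1000):
    g = 2 * (w - 3)                        # dL/dw for the toy loss
    sum_sq_grad += g ** 2
    w -= eta / np.sqrt(sum_sq_grad) * g    # per-parameter, shrinking learning rate
print(w)                                    # moves toward the minimum at w = 3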
Recipe of Deep Learning: good results on testing data
(Other tools: Regularization, Network Structure.)
Panacea for Overfitting
• Have more training data
• Create more training data (?)
  Handwriting recognition example: create new training data by shifting/rotating the original images (e.g. by 15°).
Recipe of Deep Learning: Dropout
Training: in each update, each neuron has a p% chance to be dropped out, so the network being trained is thinner.
Testing: no dropout. If the dropout rate at training is p%, multiply all the weights by (1 - p%).
Example: assume the dropout rate is 50%; if a weight is w = 1 after training, set w = 0.5 for testing.

Dropout: Intuitive Reason
• Why should the weights be multiplied by (1 - p%) (the dropout rate) when testing?
Dropout is a kind of ensemble: with M neurons there are 2^M possible thinned networks; each mini-batch trains one of them, and some parameters are shared among these networks.
Testing of Dropout: for testing data x we would ideally average the outputs y1, y2, y3, … of all the thinned networks; multiplying all the weights by (1 - p%) approximates this average.
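A hedged Keras sketch of dropout; the layer sizes are assumptions. Note that Keras implements inverted dropout, so no manual (1 - p%) weight scaling is needed at test time.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(500, activation="relu", input_shape=(784,)),
    layers.Dropout(0.5),                  # dropout rate p = 50%, active only in training
    layers.Dense(500, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])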
Recipe of Deep Learning
Regularization and Network Structure also help: CNN is a very good example! (next lecture)

Concluding Remarks
Recipe of Deep Learning:
Step 1: define a set of functions (a neural network). Step 2: goodness of function. Step 3: pick the best function.
Then check for good results on the training data (if not, improve training) and on the testing data (if not, deal with overfitting).
Lecture II:
Variants of Neural
Networks
Variants of Neural Networks
• Convolutional Neural Network (CNN): widely used in image processing
• Recurrent Neural Network (RNN)
Why CNN for Image? [Zeiler, M. D., ECCV 2014]
[Figure: the image is represented as pixels x1 … xN; the first layer learns the most basic classifiers (e.g. a simple "beak" detector), the 2nd layer uses the 1st-layer outputs as modules to build larger classifiers, and so on.]

Why CNN for Image
• The same patterns appear in different regions: an "upper-left beak" detector and a "middle beak" detector can do almost the same thing.

Why CNN for Image
• Subsampling the pixels will not change the object: a subsampled bird is still a bird.
The whole CNN
[Figure: image → Convolution → Max Pooling → (can repeat many times) → Flatten → Fully Connected Feedforward network → "cat", "dog", …]
Property 1: some patterns are much smaller than the whole image → Convolution.
Property 2: the same patterns appear in different regions → Convolution.
Property 3: subsampling the pixels will not change the object → Max Pooling.
CNN – Convolution
The filters are the network parameters to be learned.

6 x 6 image:       Filter 1 (3 x 3 matrix):   Filter 2 (3 x 3 matrix):
1 0 0 0 0 1          1 -1 -1                    -1  1 -1
0 1 0 0 1 0         -1  1 -1                    -1  1 -1
0 0 1 1 0 0         -1 -1  1                    -1  1 -1
1 0 0 0 1 0
0 1 0 0 1 0
0 0 1 0 1 0

Each filter detects a small pattern (3 x 3). (Property 1)

Slide the filter over the image and take the inner product at each position.
With stride = 1, the first two values for Filter 1 are 3 and -1.
If stride = 2, the first two values are 3 and -3. We set stride = 1 below.

Full result of Filter 1 (stride = 1), a 4 x 4 map:
 3 -1 -3 -1
-3  1  0 -3
-3 -3  0  1
 3 -2 -2 -1
The two largest values (3) show where the diagonal pattern appears: the same filter detects it in different regions. (Property 2)

Do the same process for every filter. Result of Filter 2:
-1 -1 -1 -1
-1 -1 -2  1
-1 -1 -2  1
-1  0 -4  3
Together, the 4 x 4 outputs of all the filters form the Feature Map: a 4 x 4 "image" with one channel per filter.
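A small numpy sketch of this convolution, reproducing the 4 x 4 map of Filter 1 above:

import numpy as np

image = np.array([[1,0,0,0,0,1],
                  [0,1,0,0,1,0],
                  [0,0,1,1,0,0],
                  [1,0,0,0,1,0],
                  [0,1,0,0,1,0],
                  [0,0,1,0,1,0]])

filter1 = np.array([[ 1,-1,-1],
                    [-1, 1,-1],
                    [-1,-1, 1]])

def convolve(img, flt, stride=1):
    # Slide the filter over the image (no padding) and take inner products.
    size = flt.shape[0]
    out_dim = (img.shape[0] - size) // stride + 1
    out = np.zeros((out_dim, out_dim), dtype=int)
    for i in range(out_dim):
        for j in range(out_dim):
            patch = img[i*stride:i*stride+size, j*stride:j*stride+size]
            out[i, j] = np.sum(patch * flt)
    return out

print(convolve(image, filter1))   # reproduces the 4 x 4 map above (3, -1, -3, ...)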
CNN – Zero Padding
Pad the border of the 6 x 6 image with zeros before convolving Filter 1; the output is then another 6 x 6 image instead of a 4 x 4 one.

CNN – Colorful image
A colorful image has 3 channels (RGB), so the input is a stack of three 6 x 6 matrices, and each filter (Filter 1, Filter 2, …) is correspondingly a stack of three 3 x 3 matrices acting on all channels at once.
Convolution v.s. Fully Connected
Convolution is a fully-connected layer with some connections removed and some weights shared. Flatten the 6 x 6 image into a 36-dim vector x1 … x36.
The output value 3 of Filter 1 at its first position corresponds to a neuron that connects to only 9 of the 36 inputs (pixels 1, 2, 3, 7, 8, 9, 13, 14, 15), with the 9 filter values as its weights: fewer parameters!
The output value -1 at the next position corresponds to another neuron, connected to pixels 2, 3, 4, 8, 9, 10, 14, 15, 16, that uses the same 9 weights (shared weights): even fewer parameters!
The whole CNN (roadmap)
Image → Convolution → Max Pooling → (repeat many times) → Flatten → Fully Connected Feedforward network → "cat", "dog", …

CNN – Max Pooling
Take the 4 x 4 outputs of Filter 1 and Filter 2, group each into 2 x 2 regions, and keep only the maximum of each region. The 6 x 6 image becomes a new, smaller 2 x 2 image with one channel per filter:
  Filter 1 channel:  3 0      Filter 2 channel:  -1 1
                     3 1                          0 3
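A small numpy sketch of 2 x 2 max pooling, reproducing the Filter 1 channel above:

import numpy as np

def max_pool_2x2(feature_map):
    # 2 x 2 max pooling with stride 2
    h, w = feature_map.shape
    out = np.zeros((h // 2, w // 2), dtype=feature_map.dtype)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            out[i // 2, j // 2] = feature_map[i:i+2, j:j+2].max()
    return out

map1 = np.array([[ 3,-1,-3,-1],
                 [-3, 1, 0,-3],
                 [-3,-3, 0, 1],
                 [ 3,-2,-2,-1]])
print(max_pool_2x2(map1))   # [[3 0]
                            #  [3 1]]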
The whole CNN
Convolution followed by max pooling produces a new, smaller image (fewer pixels, one channel per filter); this can be repeated many times.
Flatten: the final feature maps are flattened into a single vector and fed to a fully connected feedforward network.
Convolutional Neural Network
Step 1: define a set of functions: a Convolutional Neural Network (convolution, max pooling, fully connected). Step 2: goodness of function. Step 3: pick the best function.
Target for a cat image: "monkey" 0, "cat" 1, "dog" 0.
Learning: nothing special, just gradient descent ……
CNN in Keras
Only the network structure and the input format are modified (vector → 3-D tensor).
Input_shape = (1, 28, 28): 1 channel (black/white; 3 for RGB), 28 x 28 pixels.
First convolution: 25 filters of size 3 x 3, then max pooling. Second convolution: 50 filters of size 3 x 3, then max pooling.

Shapes and parameter counts:
  input: 1 x 28 x 28
  Convolution (25 filters, 3 x 3): output 25 x 26 x 26; each filter has 9 parameters
  Max Pooling: 25 x 13 x 13
  Convolution (50 filters, 3 x 3): output 50 x 11 x 11; each filter has 25 x 3 x 3 = 225 parameters
  Max Pooling: 50 x 5 x 5
  Flatten: 1250
  Fully Connected Feedforward network → output
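A hedged sketch, assuming a recent tf.keras, of a CNN with the shapes listed above; the slides use channels-first ordering (1, 28, 28), while this sketch uses the channels-last default (28, 28, 1), and the hidden layer size is an assumption.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(25, (3, 3), input_shape=(28, 28, 1)),  # -> 26 x 26 x 25
    layers.MaxPooling2D((2, 2)),                          # -> 13 x 13 x 25
    layers.Conv2D(50, (3, 3)),                            # -> 11 x 11 x 50
    layers.MaxPooling2D((2, 2)),                          # -> 5 x 5 x 50
    layers.Flatten(),                                     # -> 1250
    layers.Dense(100, activation="relu"),                 # assumed hidden size
    layers.Dense(10, activation="softmax"),
])
model.summary()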
What does CNN learn?
The output of the k-th filter (in the second convolutional layer) is an 11 x 11 matrix.
Degree of activation of the k-th filter:
  a^k = Σ_{i=1}^{11} Σ_{j=1}^{11} a^k_{ij}
Find the input that excites the filter the most:
  x* = arg max_x a^k   (by gradient ascent)
[Figure: the network used: input → Convolution (25 3 x 3 filters) → Max Pooling → Convolution (50 3 x 3 filters) → 50 x 11 x 11 → Max Pooling → Flatten.]
[Figure: the images x* obtained for nine units, labelled 0 to 8.]
Deep Dream
• Given a photo, the machine modifies the image to exaggerate what the CNN sees in it ……
http://deepdreamgenerator.com/

Deep Style
• Given a photo, make its style like famous paintings
https://dreamscopeapp.com/
[Figure: one CNN captures the content of the photo, another CNN captures the style of the painting; an image is found that matches both.]
More Application: Playing Go
Network: the input is the 19 x 19 board position, encoded as a 19 x 19 matrix/image (black: 1, white: -1, none: 0); the output is the next move, a 19 x 19 vector over positions.
A fully-connected feedforward network can be used, but CNN performs much better.
Why CNN for playing Go?
• Some patterns are much smaller than the whole image.

Variants of Neural Networks
• Convolutional Neural Network (CNN)
• Recurrent Neural Network (RNN): a neural network with memory
Example Application
• Slot Filling, e.g. for a travel assistant: Destination: Taipei; time of arrival: November 2nd.

Example Application
Solving slot filling with a feedforward network? The input is a word (each word is represented as a vector), e.g. "Taipei" → (x1, x2, …).
1-of-N encoding
Each dimension corresponds to a word in the lexicon (apple, bag, cat, dog, elephant, …); the vector for a word is 1 in its own dimension and 0 elsewhere.
Beyond 1-of-N encoding:
• Add a dimension for "other" to cover words not in the lexicon, e.g. w = "Gandalf" or w = "Sauron" → "other" = 1.
• Word hashing: represent a word by its letter tri-grams (26 x 26 x 26 dimensions), e.g. w = "apple" → a-p-p = 1, p-p-l = 1, p-l-e = 1 (all other tri-grams such as a-a-a, a-a-b are 0).
Example Application
Solving slot filling with a feedforward network: input a word vector (e.g. "Taipei"); output the probability distribution that the input word belongs to each slot (e.g. dest, time of departure).

Example Application
"arrive Taipei on November 2nd": here "Taipei" is the destination, but in another sentence "Taipei" could be the place of departure. A feedforward network sees one word at a time and cannot tell the difference, so the network needs memory: in a recurrent network, the outputs of the hidden neurons (a1, a2) are stored and fed back together with the next input as the sequence x_t, x_{t+1}, x_{t+2}, … is read.
Bidirectional RNN
[Figure: a forward RNN reads x_t, x_{t+1}, x_{t+2}, … and a backward RNN reads the same sequence in reverse; the output y_t at each step is produced from both hidden states.]

RNN
• https://www.youtube.com/watch?v=_aCuOwF1ZjU&list=PLjJh1vlSEYgvZ3ze_4pxKHNh1g5PId36-&index=8
Long Short-term Memory (LSTM)
A special neuron with 4 inputs and 1 output:
• Input gate: a signal from other parts of the network controls whether the input is written into the memory cell.
• Forget gate: a signal controls whether the stored value is kept or erased.
• Output gate: a signal controls whether the stored value is read out to the rest of the network.

LSTM cell equations (z: input; z_i, z_f, z_o: gate control signals; c: value stored in the memory cell; g, h: activation functions):
  c′ = g(z) f(z_i) + c f(z_f)
  a = h(c′) f(z_o)
The gate activation function f is usually a sigmoid, so its value lies between 0 and 1, mimicking an open or closed gate.
[Figure: a worked numeric example of the cell, showing how gate values near 0 or 1 decide whether the input is stored, the memory is kept, and the output is released.]
LSTM
In a layer of LSTM cells, the input x_t at time t is multiplied by four different weight matrices to produce four vectors z_f, z_i, z, z_o; each of their dimensions operates the corresponding gate of one cell, and c_{t-1} is the vector of memory-cell values.
[Figure: the cell combines them element-wise as c_t = g(z) ⊙ f(z_i) + c_{t-1} ⊙ f(z_f) and y_t = h(c_t) ⊙ f(z_o).]
Extension: "peephole" connections also feed c_{t-1} into the gate signals.
[Figure: the LSTM unrolled over time steps t and t+1.] This is quite standard now.
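A numpy sketch of one step of the cell equations above. The choice of tanh for g and h, the random weights, and the omission of the bias and recurrent (h_{t-1}) terms are assumptions made to keep the sketch short.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, c_prev, params):
    # c' = g(z) * f(z_i) + c * f(z_f),  a = h(c') * f(z_o)
    Wz, Wi, Wf, Wo = params       # four weight matrices acting on the input
    z  = Wz @ x                   # candidate input
    zi = Wi @ x                   # input-gate signal
    zf = Wf @ x                   # forget-gate signal
    zo = Wo @ x                   # output-gate signal
    c_new = np.tanh(z) * sigmoid(zi) + c_prev * sigmoid(zf)
    a = np.tanh(c_new) * sigmoid(zo)
    return a, c_new

# Toy usage with hypothetical random weights, a 3-dim input, and 2 memory cells.
rng = np.random.default_rng(0)
params = [rng.normal(size=(2, 3)) for _ in range(4)]
a, c = lstm_step(rng.normal(size=3), np.zeros(2), params)
print(a, c)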
https://img.komicolle.org/2015-09-20/src/14426967627131.gif
LSTM
• https://www.youtube.com/watch?v=5dMXyiWddYs
Three Steps for Deep Learning
Step 1: define a set of functions. Step 2: goodness of function. Step 3: pick the best function.
Deep Learning is so simple ……

Learning Target
Training sentence: "arrive Taipei on November 2nd", with labels other, dest, other, time, time.
[Figure: at each word the RNN should output a 1-of-N vector over the slots (0 … 1 … 0), e.g. y1 → other, y2 → dest, y3 → other; the hidden values a1, a2, a3 are copied forward through the shared weights.]
Three Steps for Deep Learning
Step 1: define a set of functions. Step 2: goodness of function. Step 3: pick the best function.
Deep Learning is so simple ……

Learning
The parameters are learned with Backpropagation Through Time (BPTT): w ← w - η ∂L/∂w.
Unfortunately, RNN-based networks are not always easy to train: the error surface is rough, either very flat or very steep, so the total loss can jump around during training and sometimes an epoch just happens to be lucky. Clipping the gradients helps.
Key Term Extraction
Key terms: DNN, LSTM.
[Figure: an RNN reads the document word by word (x1 x2 x3 x4 … xT) through an embedding layer and a hidden layer, producing vectors V1 … VT; attention weights α1 … αT combine them into Σ_i α_i V_i, which feeds the output layer that predicts the document's key terms.]
Many to Many (Output is shorter)
• Both the input and the output are sequences, but the output is shorter.
• E.g. Speech Recognition: a blank symbol φ is added (as in CTC), so the frame-level output "好 φ φ 棒 φ φ φ φ" becomes "好棒" ("great"), while "好 φ φ 棒 φ 棒 φ φ" becomes "好棒棒".

Many to Many (No Limitation)
• Both the input and the output are sequences with different lengths → sequence-to-sequence learning.
• E.g. Machine Translation: "machine learning" → 機器學習
[Figure: an RNN reads "machine learning"; its final hidden state contains all the information about the input sequence, and a decoder generates 機 器 學 習 one character at a time. Without a stop symbol it may keep going (… 慣 性 …), so a stop token "===" is added and the decoder can end after 機 器 學 習.]
Image Caption Generation: a CNN encodes the input image into a vector, and an RNN generates the caption word by word.
Video Caption Generation: e.g. "A girl is running."
Question answering, e.g. "What is deep learning?": the machine reads the relevant material and organizes an answer.
http://henrylo1605.blogspot.tw/2015/05/blog-post_56.html
Attention-based Model
[Figure: Input → DNN/RNN → output; a Reading Head Controller positions a reading head over the Machine's Memory.]
Ref: http://speech.ee.ntu.edu.tw/~tlkagk/courses/MLDS_2015_2/Lecture/Attain%20(v3).ecm.mp4/index.html

Attention-based Model v2
[Figure: in addition to the Reading Head Controller, a Writing Head Controller can also write to the Machine's Memory.]

Visual Question Answering (source: http://visualqa.org/)
[Figure: after semantic analysis of the question, a Reading Head Controller attends over regions of the image.]
(proposed by FB AI group)
Convolutional Neural
Network (CNN)
Recurrent Neural
Network (RNN)
Lecture III:
Beyond Supervised
Learning
Outline
Unsupervised Learning
Reinforcement Learning
Unsupervised Learning
• Simplify the complex (化繁為簡): only the inputs of the function are available, e.g. learn a compact code for each object.
• Generate from nothing (無中生有): only the outputs of the function are available, e.g. learn to generate objects from a code.

Motivation
• In MNIST, a digit is 28 x 28 dims.
• Most 28 x 28 dim vectors are not digits.
Auto-encoder: an NN Encoder compresses the object into a code c, and an NN Decoder reconstructs the original object from the code; the two are trained so that the reconstruction is as close as possible to the input.
Deep Auto-encoder
• NN encoder + NN decoder = a deep network.
[Figure: input layer x (784) → layers → bottleneck code → layers → output layer x̂ (784); train so that x̂ is as close as possible to x.]
[Figure: on MNIST, compressing 784-dim images to a 30-dim code and reconstructing: the deep auto-encoder (784 → 1000 → 500 → 250 → 30 and the mirrored decoder back to 784) gives much sharper reconstructions than PCA with 30 components. With a 2-dim code (784 → 1000 → 500 → 250 → 2 → 250 → 500 → 1000 → 784) the codes can be visualized in the plane.]
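A hedged tf.keras sketch of the 784-1000-500-250-30 auto-encoder and its mirrored decoder described above; the activations and optimizer are assumptions.

from tensorflow.keras import layers, models

encoder = models.Sequential([
    layers.Dense(1000, activation="relu", input_shape=(784,)),
    layers.Dense(500, activation="relu"),
    layers.Dense(250, activation="relu"),
    layers.Dense(30),                        # the bottleneck code
])
decoder = models.Sequential([
    layers.Dense(250, activation="relu", input_shape=(30,)),
    layers.Dense(500, activation="relu"),
    layers.Dense(1000, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])
autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(loss="mse", optimizer="adam")
# autoencoder.fit(x_train, x_train, ...)  # target = input ("as close as possible")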
Auto-encoder
More: Contractive auto-encoder. Ref: Rifai, Salah, et al. "Contractive auto-encoders: Explicit invariance during feature extraction." Proceedings of the 28th International Conference on Machine Learning (ICML-11). 2011.
• De-noising auto-encoder: add noise to the input x to get x′, encode x′ into c and decode; train so that the output x̂ is as close as possible to the original, clean x.

Auto-encoder – Pre-training DNN
• Greedy Layer-wise Pre-training, e.g. for a target network 784 → 1000 → 1000 → 500 → 10:
  1. Train an auto-encoder 784 → 1000 (weights W1) → 784 with target x̂ ≈ x.
  2. Fix W1; train an auto-encoder on the 1000-dim codes a1: 1000 → 1000 (weights W2) → 1000 with target â1 ≈ a1.
  3. Fix W1 and W2; train 1000 → 500 (weights W3) → 1000 with target â2 ≈ a2.
  4. Randomly initialize the output-layer weights W4, then fine-tune the whole network (W1, W2, W3, W4) by backpropagation.
Word Vector/Embedding
• The machine learns the meaning of words from reading a lot of documents without supervision.
[Figure: a 2-D embedding space in which related words cluster: tree/flower, dog/rabbit/cat, run/jump.]

How to exploit the context?
• Count based: if two words wi and wj frequently co-occur, V(wi) and V(wj) should be close to each other. E.g. GloVe vectors: http://nlp.stanford.edu/projects/glove/
• Prediction based: predict the next word from the preceding words …… wi-2 wi-1 ___
Prediction-based
[Figure: the 1-of-N encoding of the word wi-1 is fed to a neural network whose output is the probability for each word in the lexicon of being the next word wi; the first hidden layer z1, z2, … is taken as the word embedding.]
Two common variants:
• CBOW: predict the word wi given its context …… wi-1 ____ wi+1 ……
• Skip-gram: predict the context wi-1, wi+1 given the word wi …… ____ wi ____ ……
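A small, self-contained sketch of how (word, context) training pairs are built for skip-gram-style prediction-based embeddings; the toy corpus is made up.

corpus = ["the dog chased the cat", "the cat chased the rabbit"]
window = 1   # how many words of context on each side

skipgram_pairs = []   # (center word, context word): predict the context given the word
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                skipgram_pairs.append((w, words[j]))

print(skipgram_pairs[:4])
# [('the', 'dog'), ('dog', 'the'), ('dog', 'chased'), ('chased', 'dog')]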
Word Embedding
Source: http://www.slideshare.net/hustwj/cikm-keynotenov2014
• Characteristics: V(Germany) ≈ V(Berlin) − V(Rome) + V(Italy)
• Solving analogies
The same idea can be applied to audio, learning like an infant: from sound alone, without ever seeing text.
Sequence-to-sequence Auto-encoder
[Figure: an RNN Encoder reads an audio segment (acoustic features x1 x2 x3 x4) into a vector; an RNN Decoder reconstructs the features y1 y2 y3 y4. The RNN encoder and decoder are jointly trained.]
• Visualizing the embedding vectors of spoken words: e.g. fear/near and fame/name differ in a consistent direction.
Audio Word to Vector – Application
[Figure: query-by-example spoken content retrieval. Off-line, each audio segment in the archive is mapped to a vector; on-line, the user's spoken query (e.g. "US President") is mapped to a vector by the same Audio Segment to Vector model, and similarity between the vectors produces the search result.]

Experimental Results
• Query-by-Example Spoken Term Detection
[Figure: MAP versus training epochs of the sequence auto-encoder (SA) and the de-noising sequence auto-encoder (DSA: input is clean speech plus noise, output is clean speech).]

Next Step ……
• Can we include semantics?
[Figure: a desired embedding space in which walk/walked and cat/cats are related in consistent ways, beyond acoustic similarity alone (dog, run, flower, tree).]
Creation
Draw something!
• Generative Models: https://openai.com/blog/generative-models/
"What I cannot create, I do not understand." (Richard Feynman)
https://www.quora.com/What-did-Richard-Feynman-mean-when-he-said-What-I-cannot-create-I-do-not-understand

PixelRNN
Ref: Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu, Pixel Recurrent Neural Networks, arXiv preprint, 2016.
E.g. for a 3 x 3 image, an NN predicts the next pixel from the pixels generated so far, one pixel at a time.
Can be trained just with a large collection of images without any annotation.
[Figure: completing the missing half of real-world images with PixelRNN.]

PixelRNN – beyond Image
Audio: Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, Koray Kavukcuoglu, WaveNet: A Generative Model for Raw Audio, arXiv preprint, 2016.
Video: Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, Koray Kavukcuoglu, Video Pixel Networks, arXiv preprint, 2016.
Auto-encoder as a generator?
[Figure: train the NN Encoder and NN Decoder so the output is as close as possible to the input; then randomly generate a vector as the code and feed it to the NN Decoder. Does it produce a reasonable image?]
VAE (Variational Auto-encoder)
The NN Encoder outputs (m1, m2, m3) and (σ1, σ2, σ3); sample (e1, e2, e3) from a normal distribution and form the code
  c_i = exp(σ_i) × e_i + m_i
which the NN Decoder turns back into the output. Training minimizes the reconstruction error plus
  Σ_{i=1}^{3} ( exp(σ_i) − (1 + σ_i) + (m_i)² )
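A numpy sketch of the code construction and regularizer just written; the encoder outputs m and σ below are made-up placeholders.

import numpy as np

def vae_code_and_regularizer(m, sigma, rng):
    # c_i = exp(sigma_i) * e_i + m_i, with e_i drawn from a normal distribution,
    # plus the penalty sum_i exp(sigma_i) - (1 + sigma_i) + m_i^2
    e = rng.standard_normal(m.shape)      # e_i ~ N(0, 1)
    c = np.exp(sigma) * e + m             # the code fed to the NN Decoder
    penalty = np.sum(np.exp(sigma) - (1 + sigma) + m ** 2)
    return c, penalty

rng = np.random.default_rng(0)
m = np.array([0.5, -0.2, 0.1])            # hypothetical encoder outputs
sigma = np.array([-1.0, -0.5, 0.0])
c, penalty = vae_code_and_regularizer(m, sigma, rng)
print(c, penalty)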
Why VAE?
[Figure: with a plain auto-encoder, a code lying between the codes of two training images may decode to nothing meaningful; VAE adds noise around each code during training, so codes between training examples still decode to sensible images.]
VAE on Cifar-10: https://github.com/openai/iaf

A VAE can also encode and decode sentences; because the code space is continuous, interpolating between two sentence codes produces intermediate sentences:
  i went to the store to buy some groceries.
  i store to buy some groceries.
  i were to buy any groceries.
  ……
  "come with me," she said.
  "talk to me," she said.
  "don't worry about it," she said.
Ref: http://www.wired.co.uk/article/google-artificial-intelligence-poetry
Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, Samy Bengio, Generating Sentences from a Continuous Space, arXiv preprint, 2015.
Problems of VAE
• It does not really try to simulate real images: the NN Decoder is only asked to make its output as close as possible to a target, so a slightly wrong ("fake") image can score as well as a realistic one.
Generative Adversarial Network (GAN). Ref: https://openai.com/blog/generative-models/

Want to practice generative models?
Pokémon Creation
• Small images of 792 Pokémon
• Can a machine learn to create new Pokémon?
• Source of images: http://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_base_stats_(Generation_VI)
The original images are 40 x 40; they are scaled down to 20 x 20.

Pokémon Creation
It is difficult to evaluate generation: the model can be asked to complete an image when 50% or 75% of it is covered, or to draw from scratch. Drawing from scratch needs some randomness.

Pokémon Creation with VAE
Train a VAE on the Pokémon images with a 10-dim code (encoder outputs m and σ, code c_i = exp(σ_i) e_i + m_i, decoder reconstructs the image). To generate, pick two code dimensions, fix the remaining eight, and feed the codes to the NN Decoder to see how the output image changes.
Pokémon Creation - Data
• Original images (40 x 40): http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML_2016/Pokemon_creation/image.rar
• Pixels (20 x 20): http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML_2016/Pokemon_creation/pixel_color.txt
  • Each line corresponds to an image, and each number corresponds to a pixel.
• Color map (each number 0, 1, 2, … indexes a color): http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML_2016/Pokemon_creation/colormap.txt
Outline
Unsupervised Learning
Reinforcement Learning
Scenario of Reinforcement Learning
[Figure: the Agent observes the Environment (Observation), takes an Action, and receives a Reward, e.g. "Don't do that".]
The agent learns to take actions that maximize the expected reward.
http://www.sznews.com/news/content/2013-11/26/content_8800180.htm

Supervised v.s. Reinforcement
• Supervised: learning from a teacher: "Hello" → say "Hi"; "Bye bye" → say "Good bye".
• Reinforcement: learning from critics: the agent holds a whole dialogue ("Hello" …… …… ……) and only receives a judgement ("Bad") at the end.

Scenario of Reinforcement Learning (playing Go)
[Figure: the observation is the board, the action is the next move; reward: if win, reward = 1; if loss, reward = -1; otherwise reward = 0.]
The agent learns to take actions that maximize the expected reward.
Supervised v.s. Reinforcement
• Supervised: learn each move from labelled examples.
• Reinforcement Learning: first move …… many moves …… Win! The agent learns from the final outcome.
[Figure: the agent itself is a DNN: the function's input is the observation and its output is the action.]

Application: Interactive Retrieval
• Interactive retrieval is helpful. [Wu & Lee, INTERSPEECH 16]
[Figure: the user gives a spoken query, e.g. "Deep Learning"; the system can ask for more information. The trade-off: better retrieval performance and less user labor versus more interaction.]
The task cannot be addressed by a linear model.