RNN, LSTM, GRU
CSE 4237
Soft Computing
Smaller Network: RNN
This is our fully connected network. If the input is x1, ..., xn, where n is very large and growing,
this network would become too large. Instead, we input one xi at a time,
and re-use the same edge weights.
Recurrent Neural Network
How does RNN reduce complexity?
[Diagram: unrolled RNN, where ht, yt = f(ht-1, xt) for t = 1, 2, 3, ...]
No matter how long the input/output sequence is, we only
need one function f. If the f's were different, it would become a
feedforward NN. This may be treated as another compression
of the fully connected network.
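To make the weight sharing concrete, here is a minimal NumPy sketch of the recurrence above; the tanh nonlinearity, the weight names Wh and Wi, and all sizes are illustrative assumptions, not from the slides.

```python
import numpy as np

def f(h_prev, x, Wh, Wi):
    # One shared step: next hidden state from previous state and current input.
    return np.tanh(Wh @ h_prev + Wi @ x)

rng = np.random.default_rng(0)
Wh = rng.normal(size=(4, 4))        # hidden-to-hidden weights, shared across time
Wi = rng.normal(size=(4, 3))        # input-to-hidden weights, shared across time

h = np.zeros(4)                     # h0
for x in rng.normal(size=(5, 3)):   # x1 ... x5, one xi at a time
    h = f(h, x, Wh, Wi)             # the same f, the same weights, at every step
```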
Deep RNN: h’, y = f1(h, x); g’, z = f2(g, y)
[Diagram: two stacked recurrences; f1 maps x1, x2, x3, ... to y1, y2, y3, ..., and f2 maps those to z1, z2, z3, ...]
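A hedged sketch of the stacking, under the simplifying assumption that each layer's output yt is just its hidden state ht (the slides keep them distinct):

```python
import numpy as np

def f(h_prev, x, Wh, Wi):
    return np.tanh(Wh @ h_prev + Wi @ x)

rng = np.random.default_rng(0)
Wh1, Wi1 = rng.normal(size=(4, 4)), rng.normal(size=(4, 3))  # f1's weights
Wh2, Wi2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))  # f2's weights

h, g = np.zeros(4), np.zeros(4)
for x in rng.normal(size=(5, 3)):
    h = f(h, x, Wh1, Wi1)   # f1: lower layer (yt taken to be ht here)
    g = f(g, h, Wh2, Wi2)   # f2: upper layer consumes the lower layer's output
```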
Bidirectional RNN: y, h = f1(x, h) runs forward over the input; z, g = f2(g, x) runs backward; p = f3(y, z) combines the two.
[Diagram: a forward chain h0, h1, h2, h3 and a backward chain g3, g2, g1 over the same inputs x1, x2, x3; f3 merges each yt and zt into pt]
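A minimal sketch of the two passes; treating f3 as simple concatenation is my assumption, since the slides leave f3 unspecified:

```python
import numpy as np

def f(h_prev, x, Wh, Wi):
    return np.tanh(Wh @ h_prev + Wi @ x)

def run(xs, Wh, Wi):
    # Apply the shared step along a sequence and collect every state.
    h, states = np.zeros(4), []
    for x in xs:
        h = f(h, x, Wh, Wi)
        states.append(h)
    return states

rng = np.random.default_rng(0)
Wh1, Wi1 = rng.normal(size=(4, 4)), rng.normal(size=(4, 3))  # f1 (forward)
Wh2, Wi2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 3))  # f2 (backward)

xs = list(rng.normal(size=(5, 3)))
ys = run(xs, Wh1, Wi1)                # forward states y1 ... yT
zs = run(xs[::-1], Wh2, Wi2)[::-1]    # backward states z1 ... zT, re-aligned
ps = [np.concatenate([y, z]) for y, z in zip(ys, zs)]  # f3 as concatenation
```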
Pyramid RNN
• Reducing the number of time steps significantly speeds up training (see the sketch below).
[Diagram: stacked bidirectional RNN layers; each layer merges adjacent time steps, so upper layers process shorter sequences]
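A minimal sketch of the time-step reduction, assuming the common pyramid scheme of concatenating adjacent pairs of states between layers; the exact pairing rule is an assumption:

```python
import numpy as np

def halve_time(states):
    # Merge adjacent pairs of states: T steps in, T // 2 steps out.
    return [np.concatenate([states[i], states[i + 1]])
            for i in range(0, len(states) - 1, 2)]

layer_in = [np.ones(4) for _ in range(8)]  # 8 time steps, state size 4
layer_out = halve_time(layer_in)           # 4 time steps, state size 8
# Each stacked layer sees half as many steps, which speeds up training.
```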
Naive RNN: given the previous state h and the current input x,
h’ = σ(Wh h + Wi x)
y = softmax(Wo h’)
Note: y is computed from h’, not from h.
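A direct transcription of these equations into NumPy, taking σ to be the logistic sigmoid; all shapes are illustrative assumptions:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def naive_rnn_step(h, x, Wh, Wi, Wo):
    h_new = 1.0 / (1.0 + np.exp(-(Wh @ h + Wi @ x)))  # h' = sigma(Wh h + Wi x)
    y = softmax(Wo @ h_new)                           # y computed from h', not h
    return h_new, y

rng = np.random.default_rng(0)
Wh, Wi, Wo = rng.normal(size=(4, 4)), rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
h, y = naive_rnn_step(np.zeros(4), rng.normal(size=3), Wh, Wi, Wo)
```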
[Diagram: LSTM cell state Ct-1 flowing to Ct, with the forget gate and input gate acting on it; hidden state ht-1 below]
The core idea is the cell state Ct: it is changed only slowly, with minor linear interactions, so it is very easy for information to flow along it unchanged.
Why sigmoid or tanh? Sigmoid outputs values in (0, 1), so it acts as a switch for gating. The vanishing gradient problem is already handled in LSTM. Could ReLU replace tanh?
LSTM
• Sigmoid, specifically, is used as the gating function for the three gates (input,
out, and forget) in LSTM, since it outputs a value between 0 and 1 and can thus
allow either no flow or complete flow of information through the gates.
• But before choosing an activation function, you must know the
advantages and disadvantages of your choice over the others.
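A tiny numerical illustration of the switch behaviour (all values are made up):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

signal = np.array([3.0, -1.5, 0.8])
open_gate = sigmoid(np.full(3, 8.0))     # ~1 everywhere: complete flow
closed_gate = sigmoid(np.full(3, -8.0))  # ~0 everywhere: no flow

print(open_gate * signal)    # ~[ 3.0, -1.5, 0.8]  information passes
print(closed_gate * signal)  # ~[ 0.0,  0.0, 0.0]  information blocked
```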
The input gate decides which components are to be updated;
C’t provides the change contents.
[Diagram: naive RNN maps (ht-1, xt) to (ht, yt); LSTM additionally carries the cell state, mapping (ct-1, ht-1, xt) to (ct, ht, yt)]
The four gate signals are computed from the stacked vector [xt; ht-1]:
zf = σ(Wf [xt; ht-1])  (controls the forget gate)
zi = σ(Wi [xt; ht-1])  (controls the input gate)
z = tanh(W [xt; ht-1])  (the updating information)
zo = σ(Wo [xt; ht-1])  (controls the output gate)
Diagonal “peephole” variant: zo, zf, zi are obtained in the same way, but with ct-1 also included in their inputs through diagonal weight matrices.
Information flow of LSTM
(⊙ denotes element-wise multiplication)
ct = zf ⊙ ct-1 + zi ⊙ z
ht = zo ⊙ tanh(ct)
yt = σ(W’ ht)
[Diagram: the gate signals zf, zi, z, zo route information from (ct-1, ht-1, xt) to (ct, ht, yt)]
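Putting the gate equations and this information flow together, a minimal NumPy sketch of one LSTM step; the stacking order [xt; ht-1] and all shapes are assumptions:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(c_prev, h_prev, x, Wf, Wi, W, Wo, Wy):
    v = np.concatenate([x, h_prev])  # stacked input [xt; ht-1]
    zf = sigmoid(Wf @ v)             # forget gate
    zi = sigmoid(Wi @ v)             # input gate
    z = np.tanh(W @ v)               # updating information
    zo = sigmoid(Wo @ v)             # output gate
    c = zf * c_prev + zi * z         # ct = zf (.) ct-1 + zi (.) z
    h = zo * np.tanh(c)              # ht = zo (.) tanh(ct)
    y = sigmoid(Wy @ h)              # yt = sigma(W' ht)
    return c, h, y

rng = np.random.default_rng(0)
n, d = 4, 3
Wf, Wi, W, Wo = (rng.normal(size=(n, n + d)) for _ in range(4))
Wy = rng.normal(size=(2, n))
c, h, y = lstm_step(np.zeros(n), np.zeros(n), rng.normal(size=d),
                    Wf, Wi, W, Wo, Wy)
```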
[Diagram: the same LSTM cell unrolled over two consecutive time steps, t and t+1, reusing the gate signals zf, zi, z, zo at each step]
Feedforward network: a1 = f1(x), a2 = f2(a1), a3 = f3(a2), y = f4(a3). Here t is the layer index, and each layer has its own function:
at = ft(at-1) = σ(Wt at-1 + bt)
Recurrent network: h1 = f(h0, x1), ..., h4 = f(h3, x4), y4 = g(h4). Here t is the time step, and the same f is applied at every step.
Sequence-to-sequence chat model (LSTM)
Chat with context
[Diagram: example dialogues; turns such as “M: Hi”, “M: Hello”, “U: Hi” form the context from which the model generates the next machine response]
Serban, Iulian V., Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau, 2015. “Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models.”
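A highly simplified sketch of the sequence-to-sequence idea: an encoder reads the input turn into a state that initializes a decoder, which then emits the reply token by token. Plain tanh cells stand in for LSTM here, and every token id, shape, and the greedy decoding rule is an illustrative assumption:

```python
import numpy as np

def step(h, x, Wh, Wi):
    return np.tanh(Wh @ h + Wi @ x)

rng = np.random.default_rng(0)
V, n = 10, 8                           # toy vocabulary and state sizes
E = rng.normal(size=(V, n))            # token embeddings
We_h, We_i = rng.normal(size=(n, n)), rng.normal(size=(n, n))  # encoder
Wd_h, Wd_i = rng.normal(size=(n, n)), rng.normal(size=(n, n))  # decoder
Wout = rng.normal(size=(V, n))         # state -> vocabulary scores

h = np.zeros(n)
for tok in [3, 7]:                     # encoder reads the input turn (toy ids)
    h = step(h, E[tok], We_h, We_i)

tok, reply = 0, []                     # 0 stands in for a start symbol
for _ in range(3):                     # decoder emits the reply greedily
    h = step(h, E[tok], Wd_h, Wd_i)
    tok = int(np.argmax(Wout @ h))
    reply.append(tok)
```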
Baidu’s speech recognition using RNN
Attention
Image caption generation using attention
(From CY Lee lecture)
z0 is an initial parameter; it is also learned.
[Diagram: a CNN produces a vector for each image region; a match function compares z0 with each region vector to give an attention weight (e.g. 0.7), and the weights (e.g. 0.0, 0.8, 0.2) define a weighted sum of the region vectors]
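A minimal sketch of this attention step, assuming a dot-product match function and softmax normalization (the slides pin down neither choice):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

rng = np.random.default_rng(0)
regions = rng.normal(size=(6, 16))  # one CNN feature vector per image region
z0 = rng.normal(size=16)            # the initial, learned query z0

scores = regions @ z0               # match(z0, region) as a dot product
weights = softmax(scores)           # attention weights over the regions
context = weights @ regions         # weighted sum of the region vectors
```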
Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville, 2015. “Describing Videos by Exploiting Temporal Structure.” ICCV.