Neural Network
GRU: Gated Recurrent Unit
Introduction
Jaouad FATEH
D132766115 Dr. Essaid El haji
The plan:
01  Introduction to GRU
02  Implementation of GRU
03  GRU vs LSTM
01
Introduction to GRU
What are GRUs?
Why is this useful?
A GRU is a very useful mechanism for fixing the vanishing gradient problem
in recurrent neural networks. The vanishing gradient problem occurs in
machine learning when the gradient becomes vanishingly small, which
prevents the weights from changing their values. GRUs also perform better
than LSTMs when dealing with smaller datasets.
Applications of a Gated Recurrent Unit:
GRUs are used in sequence-modeling tasks such as machine translation,
speech recognition, and time-series prediction.
Memory Content:
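The gates and memory content of a single GRU step can be sketched in NumPy. This is an illustrative implementation, not code from the slides; the weight names (`W_z`, `U_z`, `b_z`, etc.) and sizes are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, params):
    """One GRU step. `params` holds input weights W_*, recurrent weights U_*,
    and biases b_* for the update (z), reset (r), and candidate (h) paths."""
    z = sigmoid(params["W_z"] @ x + params["U_z"] @ h_prev + params["b_z"])  # update gate
    r = sigmoid(params["W_r"] @ x + params["U_r"] @ h_prev + params["b_r"])  # reset gate
    # candidate hidden state ("memory content"): reset gate scales the old state
    h_tilde = np.tanh(params["W_h"] @ x + params["U_h"] @ (r * h_prev) + params["b_h"])
    # update gate interpolates between old state and new memory content
    return (1 - z) * h_tilde + z * h_prev

# tiny usage example with random weights
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
params = {k: rng.standard_normal((n_hid, n_in if k.startswith("W") else n_hid))
          for k in ("W_z", "U_z", "W_r", "U_r", "W_h", "U_h")}
params.update({b: np.zeros(n_hid) for b in ("b_z", "b_r", "b_h")})
h = gru_cell(rng.standard_normal(n_in), np.zeros(n_hid), params)
```

Because the previous state is zero here, the new state is just `(1 - z) * tanh(...)`, so every entry stays strictly inside (-1, 1).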
02
Implementation of GRU
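In practice a GRU is usually taken from a framework rather than written by hand. A minimal sketch, assuming PyTorch's built-in `torch.nn.GRU`; the layer sizes and sequence lengths here are arbitrary.

```python
import torch
import torch.nn as nn

# A GRU layer: 10 input features per step, hidden state of size 20.
gru = nn.GRU(input_size=10, hidden_size=20, num_layers=1, batch_first=True)

# A batch of 3 sequences, each 5 time steps long.
x = torch.randn(3, 5, 10)
h0 = torch.zeros(1, 3, 20)  # initial state: (num_layers, batch, hidden_size)

output, hn = gru(x, h0)
# output: hidden state at every time step; hn: final hidden state only
```

With `batch_first=True`, `output` has shape `(batch, seq_len, hidden_size)` and `hn` has shape `(num_layers, batch, hidden_size)`.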
03
GRU vs LSTM
What is the difference between GRU & LSTM?
A GRU uses fewer training parameters, and therefore uses less memory and
executes faster than an LSTM, whereas an LSTM is more accurate on larger
datasets. Choose an LSTM when you are dealing with long sequences and
accuracy matters; choose a GRU when you want lower memory consumption and
faster results.
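The parameter-count claim can be checked directly. A sketch assuming PyTorch, with arbitrary layer sizes:

```python
import torch.nn as nn

def n_params(module):
    # Total number of trainable parameters in a module.
    return sum(p.numel() for p in module.parameters())

# Same input and hidden sizes for both layers so the comparison is fair.
gru = nn.GRU(input_size=10, hidden_size=20)
lstm = nn.LSTM(input_size=10, hidden_size=20)

gru_total, lstm_total = n_params(gru), n_params(lstm)
# A GRU has 3 gate/candidate paths vs. an LSTM's 4,
# so it has roughly 3/4 the parameters.
```

For these sizes the GRU layer has 1,920 parameters versus 2,560 for the LSTM, exactly the 3:4 ratio of their gate counts.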
ⵜⴰⵏⵎⵉⵔⵜ ⵏⵏⵓⵏ ⵅⴼ ⵓⵡⴳⴳⴹ <3!!
Thank you for your attention <3!!