University of Technology
Computer Engineering Department
Submission date: 9/9/2020
Introduction to Recurrent Neural Network
A Recurrent Neural Network (RNN) is a type of neural network in which the output from the previous step is fed as input to the current step. In traditional neural networks, all inputs and outputs are independent of each other, but in tasks such as predicting the next word of a sentence, the previous words are required, and hence there is a need to remember them. RNNs solve this problem with the help of a hidden layer. The main and most important feature of an RNN is its hidden state, which remembers information about a sequence.
An RNN has a “memory” that retains information about what has been computed so far. It uses the same parameters for every input, performing the same task on all inputs and hidden layers to produce the output. This reduces the number of parameters, unlike other neural networks.
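The shared-parameter hidden-state update described above can be sketched as follows. This is a minimal NumPy sketch; the names `W_xh`, `W_hh`, `b`, and `rnn_step` are illustrative, not taken from the original text:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4

# The same parameters are reused at every time step -- this is the
# parameter sharing the text describes.
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
b = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """Compute the new hidden state from the current input and previous state."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b)

h = np.zeros(hidden_size)                       # initial hidden "memory"
for x in rng.standard_normal((5, input_size)):  # a sequence of 5 inputs
    h = rnn_step(x, h)                          # same weights at every step
```

Note that the weight matrices never change inside the loop: the hidden state `h` is the only thing that carries information from one step to the next.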
A recurrent neural network (RNN) is a class of artificial neural networks where
connections between nodes form a directed graph along a temporal sequence. This
allows it to exhibit temporal dynamic behavior. Derived from feedforward neural
networks, RNNs can use their internal state (memory) to process variable length
sequences of inputs. This makes them applicable to tasks such as unsegmented,
connected handwriting recognition or speech recognition.
The term “recurrent neural network” is used indiscriminately to refer to two broad
classes of networks with a similar general structure, where one is finite impulse
and the other is infinite impulse. Both classes of networks exhibit temporal
dynamic behavior. A finite impulse recurrent network is a directed acyclic graph
that can be unrolled and replaced with a strictly feedforward neural network,
while an infinite impulse recurrent network is a directed cyclic graph that cannot
be unrolled.
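Because a finite unrolling contains no cycles, the recurrence can be rewritten as a strictly feedforward computation, as the paragraph above states. A minimal sketch of this equivalence, with illustrative names and assuming NumPy:

```python
import numpy as np

np.random.seed(1)
W = np.random.randn(2, 2) * 0.5   # recurrent weights (illustrative)
U = np.random.randn(2, 2) * 0.5   # input weights (illustrative)

def cell(x, h):
    return np.tanh(U @ x + W @ h)

xs = [np.random.randn(2) for _ in range(3)]
h0 = np.zeros(2)

# Recurrent form: a loop with feedback through h.
h = h0
for x in xs:
    h = cell(x, h)

# Unrolled form: three explicit feedforward applications of the same cell.
h_unrolled = cell(xs[2], cell(xs[1], cell(xs[0], h0)))
```

The two computations produce identical results, which is exactly why a finite-impulse network can be replaced by a feedforward one, while a true cycle (infinite impulse) cannot.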
Both finite impulse and infinite impulse recurrent networks can have additional
stored states, and the storage can be under direct control by the neural network.
The storage can also be replaced by another network or graph, if that incorporates
time delays or has feedback loops. Such controlled states are referred to as gated
state or gated memory, and are part of long short-term memory networks
(LSTMs) and gated recurrent units. This is also called Feedback Neural Network.
Architectures
Fully recurrent
Basic RNNs are a network of neuron-like nodes organized into successive layers.
Each node in a given layer is connected with a directed (one-way) connection to
every other node in the next successive layer. Each node (neuron)
has a time-varying real-valued activation. Each connection (synapse) has a
modifiable real-valued weight. Nodes are either input nodes (receiving data from
outside of the network), output nodes (yielding results), or hidden nodes (that
modify the data en route from input to output).
In reinforcement learning settings, no teacher provides target signals; instead, a fitness or reward function is occasionally used to evaluate the network's performance, which influences its input stream through output units connected to actuators that affect the environment. This might be used to play a game in which progress is measured by the number of points won.
Each sequence produces an error as the sum of the deviations of all target signals
from the corresponding activations computed by the network. For a training set of
numerous sequences, the total error is the sum of the errors of all individual
sequences.
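Using squared deviations as the measure of error, the per-sequence and total errors described above can be sketched as follows (the data, the `sequence_error` name, and the choice of squared error are illustrative):

```python
import numpy as np

def sequence_error(targets, activations):
    """Sum of squared deviations of target signals from network activations."""
    return float(np.sum((np.asarray(targets) - np.asarray(activations)) ** 2))

# Illustrative training set: (target signals, network activations) per sequence.
training_set = [
    ([1.0, 0.0, 1.0], [0.9, 0.2, 0.8]),
    ([0.0, 1.0], [0.1, 0.7]),
]

# Total error is the sum of the errors of all individual sequences.
total_error = sum(sequence_error(t, a) for t, a in training_set)
```

Here the first sequence contributes 0.09 and the second 0.10, so the total error over the training set is 0.19.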
Recurrent neural networks (RNNs) are capable of learning features and long-term dependencies from sequential and time-series data. RNNs have a stack of non-linear units in which at least one connection between units forms a directed cycle. A well-trained RNN can model any dynamical system; however, training RNNs is mostly plagued by difficulties in learning long-term dependencies.
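The difficulty with long-term dependencies can be illustrated numerically: in backpropagation through time, the gradient involves a product of per-step Jacobians, and when the recurrent weights have spectral radius below one, that product shrinks exponentially (the vanishing gradient). A minimal sketch with illustrative values, omitting the tanh derivative factors (which are at most 1 and only shrink the product further):

```python
import numpy as np

W = np.array([[0.5, 0.0],
              [0.0, 0.4]])   # recurrent weights, spectral radius 0.5 < 1

grad = np.eye(2)
norms = []
for t in range(30):          # 30 steps of backpropagation through time
    grad = grad @ W          # linearized Jacobian product across steps
    norms.append(np.linalg.norm(grad))

# norms decays roughly like 0.5**t: after 30 steps the gradient is
# on the order of 1e-9, so early inputs barely influence the loss.
```

Gated architectures such as LSTMs and GRUs, mentioned earlier, were introduced largely to mitigate exactly this effect.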
Advantages
An RNN can process inputs of any length, and the model size does not grow with the length of the input. Because the same weights are shared across time steps, the hidden state lets the network use information computed earlier in the sequence.
Drawbacks
Computation is sequential and therefore hard to parallelize, and in practice a plain RNN struggles to use information from many steps back, because gradients tend to vanish or explode during training.
References
1- Dupond, Samuel (2019). "A thorough review on the current advance of neural
network structures". Annual Reviews in Control. 14: 200–230.