RNN REPORT
Rahul Manoharan Ramesh Babu
Deepan Durai Dhushetty venkat sai
Tarapreeth Mutyala
$$\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right), \qquad \mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\bigl[I(i,j) - K(i,j)\bigr]^2 \tag{7}$$
where I and K are the original and the distorted images, respectively, m and n are the image dimensions in pixels, and MAX is the maximum possible pixel value [6].
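As a minimal sketch of this metric, the following function computes the MSE and PSNR of Eq. (7) with NumPy; the function name `psnr` and the default `max_val=255.0` (8-bit images) are illustrative assumptions, not part of the original text.

```python
import numpy as np

def psnr(I, K, max_val=255.0):
    """Peak signal-to-noise ratio between an original image I and a
    distorted image K, per Eq. (7): PSNR = 10 * log10(MAX^2 / MSE)."""
    I = np.asarray(I, dtype=np.float64)
    K = np.asarray(K, dtype=np.float64)
    # MSE averaged over all m*n pixels
    mse = np.mean((I - K) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A higher PSNR indicates a distorted image closer to the original; identical images give infinite PSNR because the MSE is zero.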
The activation function in a vanilla RNN is typically a sigmoid or a hyperbolic tangent. Because gradients tend to vanish or explode during training, such networks are acknowledged to be particularly difficult to train. Long Short-Term Memory (LSTM) units address the vanishing-gradient problem encountered by traditional RNNs. An LSTM maintains a hidden vector h and a cell vector c, which together control the state updates and outputs at each time step. Specifically, the computation at time step t is given below [7].

Fig. 1. Experimental results of MSE comparison for lung CT images
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t \tag{8}$$
$$h_t = o_t \odot \tanh(c_t) \tag{9}$$

where $i_t$, $f_t$, and $o_t$ are the input, forget, and output gates, $g_t$ is the candidate cell update, and $\odot$ denotes element-wise multiplication.
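The state updates in Eqs. (8) and (9) can be sketched as a single LSTM time step in NumPy. The gate pre-activations are assumed to follow the standard formulation (a shared weight matrix `W` applied to the concatenation of the previous hidden state and the current input, plus a bias `b`); the function name `lstm_step` and these gate equations are illustrative assumptions, as the original text shows only the state-update equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step. W has shape (4d, d + n) and maps the
    concatenation [h_prev; x_t] to the stacked pre-activations of
    the four gates; b has shape (4d,)."""
    d = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x_t]) + b
    i_t = sigmoid(z[0:d])          # input gate
    f_t = sigmoid(z[d:2 * d])      # forget gate
    o_t = sigmoid(z[2 * d:3 * d])  # output gate
    g_t = np.tanh(z[3 * d:4 * d])  # candidate cell update
    c_t = f_t * c_prev + i_t * g_t  # Eq. (8): cell state update
    h_t = o_t * np.tanh(c_t)        # Eq. (9): hidden state output
    return h_t, c_t
```

Because `h_t` is an output gate times a tanh of the cell state, every component of the hidden vector stays in (-1, 1), while the cell vector `c_t` can accumulate information across many steps, which is what mitigates the vanishing-gradient problem.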