Work contributions:
1. An IoT-envisaged smart city project, beginning with the hardware development of an air
pollution monitoring device using a Raspberry Pi and sensors.
2. A time-series prediction model built with ML/DL techniques on the real-time
data collected from the hardware.
3. A safest-path routing algorithm, with the analysis implemented in MATLAB to test and
verify the safest-path results.
4. Development of an Android app providing the user interface for the system, built on the
Android Studio platform.
5. Two conference papers accepted (Springer): one on the hardware and another on the
prediction analysis.
6. Journal papers covering the work contributions of the M.Tech
project. (Under process)
Deep Learning Algorithm – Recurrent Neural Network with Long Short-Term
Memory (RNN-LSTM Model):
A recurrent neural network (RNN) can be thought of as multiple copies of the same
network, each passing a message to a successor. An RNN carries information forward
through time, which makes it useful for time-series prediction: the output at each step
depends on previous inputs as well as the current one. A widely used RNN variant that
strengthens this memory is Long Short-Term Memory (LSTM), shown in the figure below.
The problem with a plain RNN is its short-term memory: when the data set contains a
long sequence, the network loses the important information from the beginning of the
sequence. It also suffers from the vanishing gradient problem, i.e. the gradient values
become too small to contribute to any learning during backpropagation. Gradients are the
values that update the weights of the neural network. LSTM addresses this short-term
memory problem through internal mechanisms called gates that regulate the flow of
information. Applications in which these neural networks are useful include
speech synthesis, speech recognition, and text generation.
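The vanishing-gradient effect described above can be illustrated with a small numeric sketch (pure Python, with hypothetical values): backpropagating through many time steps multiplies together one derivative term per step, and because the sigmoid derivative never exceeds 0.25, the product collapses toward zero over a long sequence.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # maximum value is 0.25, reached at x = 0

# Backpropagation through time multiplies one derivative term per step.
# Even at the derivative's maximum (0.25), the gradient vanishes quickly:
gradient = 1.0
for step in range(50):
    gradient *= sigmoid_derivative(0.0)  # 0.25 per step

print(f"gradient after 50 steps: {gradient:.3e}")  # on the order of 1e-31
```

This is the best case; for pre-activations away from zero the per-step factor is even smaller, so earlier inputs stop influencing the weight updates at all.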
• The candidate vector passes through the tanh activation, and the output becomes the
new memory of the network. The tanh activation regulates the values flowing
through the network to between -1 and 1.
• The gates in the LSTM decide which information is kept or forgotten in the
cell state during training. Gates use the sigmoid activation, which regulates values
between 0 and 1. This is how the forget gate updates or discards data.
• The input gate updates the cell state: its sigmoid output is multiplied with the
tanh output to regulate what enters the network.
• The cell state is pointwise-multiplied by the forget vector, and then the
output of the input gate is pointwise-added, which updates the cell state
to new values relevant for the network.
• Finally, the output gate decides what the next hidden state should be,
which is used for predictions. The updated cell state is passed through the tanh
function, multiplied by the output gate's result, and the process continues in the
LSTM model.
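The gate operations in the bullets above can be sketched as a single LSTM cell step in NumPy. This is a minimal illustration with hypothetical, randomly initialized weights, not the trained model from the project:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One forward step of an LSTM cell.

    W, U, b hold the parameters of the four gates stacked together:
    forget (f), input (i), candidate (g), output (o).
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # all four gate pre-activations at once
    f = sigmoid(z[0*n:1*n])         # forget gate: 0..1, what to discard
    i = sigmoid(z[1*n:2*n])         # input gate: 0..1, what to write
    g = np.tanh(z[2*n:3*n])         # candidate memory, regulated to -1..1
    o = sigmoid(z[3*n:4*n])         # output gate: 0..1, what to expose
    c = f * c_prev + i * g          # pointwise multiply, then pointwise add
    h = o * np.tanh(c)              # new hidden state, used for predictions
    return h, c

# Tiny demo: input size 3, hidden size 4, random (untrained) weights.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.standard_normal((4 * n_hid, n_in))
U = rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):  # a 5-step input sequence
    h, c = lstm_step(x, h, c, W, U, b)

print(h.shape, c.shape)
```

Because the output gate is a sigmoid and the cell state passes through tanh, every component of the hidden state stays strictly inside (-1, 1), matching the regulation described above.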
Skills:
IoT (Internet of Things).
Machine Learning.
Deep Learning.
Python / C (basics) / OOP.
Wireless Communication.
Computer Networks.