4. Hardware dependence
Deep learning models are computationally expensive. These models are so complex
that a typical CPU cannot handle the computational load; training usually
requires GPUs or other specialized hardware.
Deep Learning Applications
Deep Learning finds applications in:
•Speech recognition: Familiar voice assistants such as Apple’s Siri, Amazon’s
Alexa, and Microsoft’s Cortana are based on deep neural networks.
•Medical imaging: Medical images such as CT scans, MRIs, and X-rays can be
difficult to interpret. Deep learning algorithms can help find anomalies that
are invisible to the naked eye.
•Surgical robotics: In critical, life-threatening situations where a patient
cannot reach a surgeon, surgical robots can come to the rescue. Such robots
have a superhuman ability to repeat exact motions like those of a trained
surgeon.
Real-life Deep Learning use cases
Transportation
•Self-driving cars:
Self-driving cars are one of the most discussed technologies in the world right
now. Companies use deep learning as their core algorithm; these models consume
large amounts of sensor data and enable the cars to navigate roads while making
correct decisions by analyzing the road and the vehicles around them. Some of
these systems are advanced enough to anticipate likely accidents.
•Smart cities: Smart cities can manage their resources efficiently and
coordinate traffic, public services, and disaster response. Input from sensors
across the city is used to collect data, and a deep learning system trained on
that data can predict the appropriate response for each scenario.
Agriculture
•Robot picking: Deep learning can be used to build robots that classify and
pick crops. These robots save time and increase the production rate as well.
•Crop and soil monitoring: A deep learning model trained on crop and soil
condition data can be used to build a system that effectively monitors crop and
soil health and helps estimate yield.
•Livestock monitoring: Animals move from place to place, making them difficult
to monitor. Image annotation with deep learning can enable farmers to track
animals' locations, predict their food needs, and monitor their rest cycles to
ensure that they are in good health.
•Plant disease and pest detection: Another useful area for deep learning in
agriculture is distinguishing diseased plants from healthy ones. Such a system
can help farmers treat affected plants before they die. Furthermore, deep
learning can also be used to detect pest infestations.
Greedy layer-wise training
• Training deep neural networks with many layers was challenging.
• As the number of hidden layers increases, the amount of error information
propagated back to earlier layers is dramatically reduced. This means that
weights in hidden layers close to the output layer are updated normally,
whereas weights in hidden layers close to the input layer are updated
minimally or not at all.
• This problem prevented the training of very deep neural networks and was
referred to as the vanishing gradient problem.
• An important milestone in the resurgence of neural networking that initially
allowed the development of deeper neural network models was the technique
of greedy layer-wise pretraining, often simply referred to as “pretraining.”
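A toy calculation, assuming sigmoid activations, illustrates why the backpropagated error shrinks with depth: each layer scales the gradient by the activation's derivative, which for the sigmoid is at most 0.25, so the signal decays geometrically as it moves toward the input layer.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    s = sigmoid(x)
    return s * (1.0 - s)

# The error signal is multiplied by the activation derivative at every
# layer it passes through; at pre-activation 0 this factor is exactly 0.25,
# its maximum value for the sigmoid.
grad = 1.0
for depth in range(1, 11):
    grad *= sigmoid_deriv(0.0)
    print(f"layer {depth}: gradient scale = {grad:.2e}")
```

After only ten layers the scale factor has fallen below one millionth, which is the vanishing gradient problem in miniature.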
• Pretraining involves successively adding a new hidden layer to a model and
refitting, allowing the newly added layer to learn from the output of the
existing hidden layers, often while keeping the weights of the existing hidden
layers fixed. This gives the technique the name “layer-wise,” as the model is
trained one layer at a time.
• The technique is referred to as “greedy” because it takes a piecewise,
layer-wise approach to the harder problem of training the full deep network,
optimizing each layer in isolation.
• As an optimization process, dividing the training process into a succession of
layer-wise training processes is seen as a greedy shortcut that likely leads to an
aggregate of locally optimal solutions, a shortcut to a good enough global
solution.
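As a concrete sketch of this schedule, the toy script below uses scalar "layers" of the form w*x + b (a stand-in for real hidden layers, chosen only to keep the example dependency-free). It adds one layer at a time and gradient-descends only the newest layer's weights while all earlier layers stay frozen, exactly the greedy layer-wise pattern described above.

```python
import random

random.seed(0)

# Toy data: learn y = 2*x + 1 from scalar inputs.
xs = [i / 10.0 for i in range(11)]
ys = [2.0 * x + 1.0 for x in xs]

def forward(layers, x):
    for w, b in layers:
        x = w * x + b
    return x

def fit_top_layer(layers, xs, ys, lr=0.1, steps=2000):
    """Gradient-descend only the top layer's (w, b); lower layers stay frozen."""
    w, b = layers[-1]
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            h = x
            for wi, bi in layers[:-1]:   # frozen lower layers
                h = wi * h + bi
            err = (w * h + b) - y
            gw += 2 * err * h / len(xs)
            gb += 2 * err / len(xs)
        w -= lr * gw
        b -= lr * gb
    layers[-1] = (w, b)

# Greedy layer-wise training: add one layer at a time, fit only that layer.
layers = []
for depth in range(3):
    layers.append((random.uniform(-1, 1), random.uniform(-1, 1)))
    fit_top_layer(layers, xs, ys)

mse = sum((forward(layers, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(f"final MSE after greedy training: {mse:.6f}")
```

At every stage the optimization problem is a shallow (here, one-parameter-pair) fit, which is the whole point of the greedy shortcut; the network never has to be trained end-to-end.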
• Pretraining is based on the assumption that it is easier to train a shallow
network than a deep one, and contrives a layer-wise training process in which
we only ever fit a shallow model.
• The key benefits of pretraining are:
1. Simplified training process.
2. Facilitates the development of deeper networks.
3. Useful as a weight initialization scheme.
4. Perhaps lower generalization error.
• In general, pretraining may help both in terms of optimization and in terms of
generalization.
There are two main approaches to pretraining; they are:
1. Supervised greedy layer-wise pretraining.
2. Unsupervised greedy layer-wise pretraining.
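The two variants share the same add-a-layer-then-fit schedule and differ only in the objective used to fit each new layer. The schematic below uses hypothetical helper names (not a library API) purely to contrast the two objectives: the supervised variant fits each new layer against the labels, while the unsupervised variant fits it as an autoencoder using the inputs alone.

```python
def greedy_pretrain(make_layer, fit_top_layer, depth):
    """Shared schedule: add one layer at a time, fit only the newest layer."""
    layers = []
    for _ in range(depth):
        layers.append(make_layer())
        fit_top_layer(layers)
    return layers

# Supervised variant: each new layer (with a temporary output layer on top)
# is fitted to predict the labels y directly.
def supervised_fit(layers, xs=None, ys=None):
    ...  # minimize prediction error against ys

# Unsupervised variant: each new layer is fitted as an autoencoder to
# reconstruct its own input; labels are used only later, during fine-tuning.
def unsupervised_fit(layers, xs=None):
    ...  # minimize reconstruction error of the layer's input
```

In both cases, the pretrained weights typically serve as initialization for a final end-to-end fine-tuning pass over the whole network.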