
CONTENTS:
• Definition
• Introduction
• Application Areas
• Limitations
• Categories
• Technologies
DEFINITION:
Deep learning is an artificial intelligence function
that imitates the workings of the human brain in processing data and
creating patterns for use in decision making. Deep learning is a subset
of machine learning in artificial intelligence (AI) that has networks
capable of learning unsupervised from data that is unstructured or
unlabeled. It is also known as deep neural learning or as a deep neural network.
DIFFERENCES BETWEEN AI, ML & DL:

ARTIFICIAL INTELLIGENCE (1956, John McCarthy)
• A machine that mimics human behavior.
• Any technique which enables computers to mimic human behavior.
• For example, machines can move and manipulate objects, recognize whether someone has raised their hands, or solve other problems.

MACHINE LEARNING (1959, Arthur Samuel)
• A machine that learns.
• Any technique that gives computers the ability to learn without being explicitly programmed to do so.
• It is a method of training algorithms such that they can learn how to make decisions. The object is identified based on its characteristics.

DEEP LEARNING (2000, Igor Aizenberg)
• Inspired by the structure and function of the human brain.
• A subset of ML which makes the computation of multi-layer neural networks feasible.
• For example, artificial neural networks (ANNs) are a type of algorithm that aims to imitate the way our brains make decisions.
DIFFERENCE BETWEEN ML AND DL:
IMPORTANCE:
• Deep learning is a subset of ML
• DL is the next evolution of machine learning.
• DL algorithms are roughly inspired by the information
processing patterns found in the human brain.
• Just like we use our brains to identify patterns and classify
various types of information, deep learning algorithms can be
taught to accomplish the same tasks for machines.
• The brain usually tries to decipher the information it receives.
It achieves this through labelling and assigning the items into
various categories.
• Whenever we receive new information, the brain tries to
compare it to a known item before making sense of it, which
is the same concept deep learning algorithms employ.
• DL can automatically discover the features to be used for
classification, whereas ML requires these features to be provided
manually.
• Furthermore, in contrast to ML, DL needs high-end machines
and considerably large amounts of training data to deliver
accurate results.
HOW IS IT USEFUL?
• Deep learning has attracted a lot of attention because it is
particularly good at a type of learning that has the potential to be
very useful for real-world applications.
• Machine learning describes a training method in which all the
pictures that are used to train the program are labeled with the
name of the thing in the picture.
• For example, in a cat classifier the pictures of cats are all labeled "cat".
Each iterative step in testing and refining the model
involves comparing the label on a picture with the label the
program assigned to that picture, to determine whether the
program labeled it correctly.
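As a rough illustration of this comparison step (the label lists below are invented for the example), checking assigned labels against true labels gives the model's accuracy:

# Minimal sketch: comparing true labels with labels assigned by a model.
# The label lists below are invented for illustration only.
true_labels     = ["cat", "cat", "dog", "cat", "dog"]
assigned_labels = ["cat", "dog", "dog", "cat", "cat"]

correct = sum(1 for truth, guess in zip(true_labels, assigned_labels) if truth == guess)
accuracy = correct / len(true_labels)

print(f"Correctly labeled: {correct}/{len(true_labels)} (accuracy = {accuracy:.0%})")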
WORKING:
• First, we need to identify the actual problem in order to get
the right solution; the problem should be well understood, and
the feasibility of Deep Learning should also be checked
(whether the problem fits Deep Learning or not).
• Second, we need to identify the relevant data, which should
correspond to the actual problem and should be prepared
accordingly.
• Third, choose the Deep Learning algorithm appropriately.
• Fourth, the chosen algorithm should be used to train the model
on the dataset.
• Fifth, final testing should be done on held-out data (a minimal
sketch of these five steps follows below).
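Below is a minimal sketch of these five steps using only NumPy; the toy problem, the synthetic data and the single-neuron model are stand-ins for a real Deep Learning setup:

# A minimal sketch of the five steps above, using NumPy only.
# The problem, the synthetic data, and the tiny one-neuron model are
# placeholders standing in for a real deep learning workflow.
import numpy as np

rng = np.random.default_rng(0)

# Step 1: the "problem" here is a toy binary classification task,
#         simple enough that a small network is feasible.
# Step 2: identify and prepare the relevant data (synthetic in this sketch).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # labels derived from the inputs
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# Step 3: choose the algorithm -- here a single sigmoid neuron trained
#         with gradient descent (the simplest possible "network").
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lr = np.zeros(2), 0.0, 0.1

# Step 4: use the algorithm to train on the training dataset.
for _ in range(500):
    p = sigmoid(X_train @ w + b)
    w -= lr * (X_train.T @ (p - y_train)) / len(y_train)
    b -= lr * np.mean(p - y_train)

# Step 5: final testing on the held-out data.
pred = (sigmoid(X_test @ w + b) > 0.5).astype(float)
print("Test accuracy:", np.mean(pred == y_test))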
SIMPLE ARTIFICIAL NEURAL NETWORK:
ACTIVATION FUNCTION:
An activation function decides whether a neuron should be activated
or not by calculating the weighted sum of its inputs and adding a bias
to it. The purpose of the activation function is to introduce
non-linearity into the output of a neuron.
Some important activation functions are:
• Linear function
• Heaviside step function
• Sigmoid function
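A small sketch of a single neuron and the three activation functions listed above (the input, weight and bias values are chosen arbitrarily for illustration):

# A single neuron: weighted sum of inputs plus a bias, passed through
# one of the activation functions listed above.
import numpy as np

def linear(z):
    return z                                   # identity: no non-linearity

def heaviside_step(z):
    return np.where(z >= 0, 1.0, 0.0)          # fires only if the sum is non-negative

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))            # smooth, squashes the output into (0, 1)

def neuron(inputs, weights, bias, activation):
    z = np.dot(inputs, weights) + bias         # weighted sum plus bias
    return activation(z)

# Example values (chosen arbitrarily for illustration).
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8,  0.2, -0.5])
b = 0.1

for fn in (linear, heaviside_step, sigmoid):
    print(fn.__name__, "->", neuron(x, w, b, fn))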
APPLICATION AREAS:
1) Automatic Text Generation:
A corpus of text is learned, and from this model new text
is generated, word-by-word or character-by-character. The model is
then capable of learning how to spell, punctuate and form
sentences, and it may even capture the writing style.
2) Healthcare:
Helps in diagnosing various diseases and treating them.
3) Automatic Machine Translation:
Certain words, sentences or phrases in one language are
transformed into another language (Deep Learning is achieving
top results in the areas of text and images).
4) Image Recognition:
Recognizes and identifies people and objects in
images, and is also used to understand content and context. This area is
already being used in gaming, retail, tourism, etc.
5) Predicting Earthquakes:
Teaches a computer to perform viscoelastic
computations which are used in predicting earthquakes.
6) Image Classification:
• Image classification involves assigning a label to an entire image
or photograph.
• This problem is also referred to as “object classification” and
perhaps more generally as “image recognition,” although this
latter task may apply to a much broader set of tasks related to
classifying the content of images.
Some examples of image classification include:
• Labeling an x-ray as cancer or not (binary classification).
• Classifying a handwritten digit (multiclass classification).
• Assigning a name to a photograph of a face (multiclass
classification).
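As a rough sketch of multiclass image classification (using NumPy, synthetic 8x8 "images" and a single softmax layer rather than a deep network):

# A minimal multiclass image classification sketch: softmax regression on
# tiny synthetic 8x8 "images". A real system would use a deep network and
# a real dataset such as handwritten digits.
import numpy as np

rng = np.random.default_rng(1)
n_classes, img_side, n_per_class = 3, 8, 100

# Synthetic data: each class is a noisy version of its own template image.
templates = rng.normal(size=(n_classes, img_side * img_side))
X = np.vstack([t + 0.5 * rng.normal(size=(n_per_class, img_side * img_side))
               for t in templates])
y = np.repeat(np.arange(n_classes), n_per_class)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# One linear layer with a softmax output, trained by gradient descent.
W = np.zeros((img_side * img_side, n_classes))
b = np.zeros(n_classes)
Y = np.eye(n_classes)[y]                       # one-hot labels

for _ in range(300):
    P = softmax(X @ W + b)
    W -= 0.1 * (X.T @ (P - Y)) / len(X)
    b -= 0.1 * (P - Y).mean(axis=0)

pred = np.argmax(softmax(X @ W + b), axis=1)
print("Training accuracy:", np.mean(pred == y))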
LIMITATIONS:
• Learning through observations only.
• The issue of biases.
• Large amount of data required.
• Computationally expensive to train.
• No strong theoretical foundation.
CATEGORIES:
There are four major network architectures:
• Unsupervised Pretrained Networks (UPNs)
• Convolutional Neural Networks (CNNs)
• Recurrent Neural Networks
• Recursive Neural Networks
UNSUPERVISED PRE-TRAINING ACTS AS A REGULARIZER
• The pre-training procedure increases the magnitude of the weights in
standard deep models with a sigmoidal nonlinearity.
• The resulting topological features of the parameter space make it locally
more difficult for a gradient descent procedure to travel significant
distances.
CONVOLUTIONAL NEURAL NETWORK (CNN, OR CONVNET)
• CNNs use relatively little pre-processing compared to other image
classification algorithms.
• This means that the network learns the filters that in traditional algorithms
were hand-engineered.
• This independence from prior knowledge and human effort in feature
design is a major advantage.
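To illustrate what a hand-engineered filter looks like, the sketch below applies a Sobel edge-detection kernel to a tiny synthetic image using a plain 2-D convolution; in a CNN the values of such a kernel would instead be learned from data:

# Hand-engineered filtering, of the kind a CNN learns automatically:
# a Sobel edge-detection kernel applied to a small synthetic image.
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic image: dark on the left, bright on the right (a vertical edge).
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Hand-engineered Sobel kernel for vertical edges; in a CNN this 3x3 matrix
# would instead be a learned parameter.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

print(conv2d(image, sobel_x))   # strong responses at the edge, zero elsewhere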
RECURRENT NEURAL NETWORK
• The term "recurrent neural network" is used indiscriminately to refer to
two broad classes of networks with a similar general structure, where one
is finite impulse and the other is infinite impulse.
• Both classes of networks exhibit temporal dynamic behavior.
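A minimal sketch of a vanilla recurrent cell in NumPy (randomly initialised and untrained): the same weights are reused at every time step, and the feedback through the hidden state is what makes this an infinite-impulse network with temporal dynamic behavior:

# A minimal recurrent cell: the same weights are applied at every time
# step, and the hidden state carries information forward in time.
import numpy as np

rng = np.random.default_rng(2)
input_size, hidden_size = 3, 4

W_xh = rng.normal(scale=0.5, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(hidden_size, hidden_size))  # hidden -> hidden (the recurrence)
b_h  = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One step of a vanilla (infinite-impulse) recurrent network."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Run the cell over a short random sequence.
sequence = rng.normal(size=(5, input_size))
h = np.zeros(hidden_size)
for t, x_t in enumerate(sequence):
    h = rnn_step(x_t, h)
    print(f"t={t}, hidden state = {np.round(h, 3)}")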


RECURSIVE NEURAL NETWORK (RNN)
• A recursive neural network is a kind of deep neural network created by
applying the same set of weights recursively over a structured input, to
produce a structured prediction over variable-size input structures, or a
scalar prediction on it, by traversing a given structure in topological order.
• Recursive neural networks have been successful, for instance, in learning
sequence and tree structures in natural language processing, mainly
continuous phrase and sentence representations based on word embeddings.
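A minimal sketch of the idea in NumPy: one shared, randomly initialised weight matrix is applied recursively, bottom-up, over an invented binary tree of toy word embeddings:

# A recursive network sketch: the same weight matrix combines the vectors
# of two children into a parent vector, applied bottom-up over a small
# (invented) parse-tree-like structure.
import numpy as np

rng = np.random.default_rng(3)
dim = 4

# Toy word embeddings (random, for illustration only).
embeddings = {w: rng.normal(size=dim) for w in ["the", "cat", "sat", "down"]}

# Shared composition weights, reused at every internal node of the tree.
W = rng.normal(scale=0.5, size=(dim, 2 * dim))
b = np.zeros(dim)

def compose(left, right):
    return np.tanh(W @ np.concatenate([left, right]) + b)

def encode(node):
    """Recursively encode a tree given as nested tuples of words."""
    if isinstance(node, str):
        return embeddings[node]
    left, right = node
    return compose(encode(left), encode(right))

# A tiny binary "parse tree": ((the cat) (sat down))
tree = (("the", "cat"), ("sat", "down"))
print(np.round(encode(tree), 3))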
TECHNOLOGIES:
1) SELF-DRIVING CARS:
• Self-driving cars are the future of the
motor industry. These cars can
navigate through streets and manage
passenger workload.
• Deep Learning is the force that is bringing autonomous driving to life. A
million sets of data are fed to a system to build a model and to train the
machine to learn, and the results are then tested in a safe environment.
• Deep learning is used in mechanisms which detect road alignment. If the
road develops a curve of, say, 30 degrees, then the steering wheel
should also rotate by a corresponding amount to make the car turn.
• Also, infrared sensors continuously emit radiation that covers a specific
region around the car. If any object comes into this region, the car tracks its
proximity and stops accordingly (see the toy sketch below).
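A toy sketch of these two rules (the gain and the safety radius below are invented for illustration and are not from any real driving system):

# Toy sketch of the two rules described above: steer in proportion to the
# detected road curvature, and stop if a sensed object is inside a safety
# region around the car. The thresholds and gains are invented.

SAFETY_RADIUS_M = 5.0        # assumed size of the monitored region
STEERING_GAIN = 0.5          # assumed ratio of steering angle to road curve

def steering_angle(road_curve_deg: float) -> float:
    """Rotate the steering wheel by an amount proportional to the road curve."""
    return STEERING_GAIN * road_curve_deg

def should_stop(object_distances_m: list[float]) -> bool:
    """Stop if any detected object is inside the safety region."""
    return any(d <= SAFETY_RADIUS_M for d in object_distances_m)

print(steering_angle(30.0))            # 15.0 degrees of steering for a 30-degree curve
print(should_stop([12.0, 4.2, 9.8]))   # True: one object is within 5 metres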
2) Google Duplex:
• It gained popularity immediately after it was demonstrated at
Google I/O 2018. It is a major achievement in Natural Language
Processing, which is a branch of AI.
• It can make hotel reservations, book tables and make
appointments through Google Assistant. The machine actually
makes a call to the vendor and holds a meaningful
conversation with him/her.
• Google Duplex was trained on phone conversations collected
from various sources. It used Recurrent Neural Networks to
produce an output statement given a question and the state of
the conversation.
3) Developing a Piece of Art - Making Portraits:
• One such painting went into an auction for millions of dollars. It was
produced using a GAN (Generative Adversarial Network). These networks
can produce music and pictures, and can also modify them in a
realistic-looking style.
• GAN has a generator network which produces a random image
from a noise vector.
• The discriminator network distinguishes between the real image
and the image produced by the generator. The generator improves
itself from the loss it receives from the discriminator.
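A minimal sketch of this generator/discriminator loop, assuming PyTorch is available and using a toy 1-D Gaussian distribution instead of portrait images:

# Minimal GAN training loop: the generator maps a noise vector to a sample,
# the discriminator tells real samples from generated ones, and the
# generator learns from the discriminator's feedback.
import torch
import torch.nn as nn

torch.manual_seed(0)
noise_dim, batch_size = 8, 64

generator = nn.Sequential(nn.Linear(noise_dim, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Real samples from the target distribution (mean 3, std 1).
    real = 3.0 + torch.randn(batch_size, 1)
    # Fake samples produced by the generator from random noise vectors.
    fake = generator(torch.randn(batch_size, noise_dim))

    # Train the discriminator: real -> 1, fake -> 0.
    opt_d.zero_grad()
    d_loss = (bce(discriminator(real), torch.ones(batch_size, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(batch_size, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(batch_size, 1))
    g_loss.backward()
    opt_g.step()

samples = generator(torch.randn(1000, noise_dim))
print("Generated mean/std:", samples.mean().item(), samples.std().item())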