
CNN

Andrés Varelo
What are Convolutional Neural Networks?
• CNNs are a category of neural networks that have
proven very effective in areas such as image
recognition and classification.

• The image is passed through a series of convolutional
layers with filters (kernels), pooling layers, fully connected
layers, and a softmax function, which classifies an object with
probabilistic values between 0 and 1.
CONVOLUTIONAL LAYER
• It's the first layer, and it extracts features from an image. Convolution
preserves the relationship between pixels by learning image features
using small squares of input data.

• The inputs are an image matrix and a filter

• Image dimensions: (h × w × d)

• Filter dimensions: (fh × fw × d)
• Output dimensions (one filter, stride 1, no padding): (h − fh + 1) × (w − fw + 1) × 1
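The output-size formula above can be checked with a short helper (the function name is mine, for illustration):

```python
def conv_output_shape(h, w, fh, fw):
    """Height and width after a valid convolution (no padding, stride 1)."""
    return h - fh + 1, w - fw + 1

# A 5x5 image convolved with a 3x3 filter yields a 3x3 feature map.
print(conv_output_shape(5, 5, 3, 3))  # (3, 3)
```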
CONVOLUTIONAL LAYER

1x1  0 x0  0 x1  1x0  1x1  0 x0  1x1  1x 0  1x1  4


CONVOLUTIONAL LAYER
Convolving an image with different filters can perform operations such as edge
detection, blur, and sharpening.
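A minimal sketch of those filter effects, using a hand-rolled valid convolution and three classic kernels (the `conv2d` helper is mine, not from a library):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation), stride 1."""
    h, w = image.shape
    fh, fw = kernel.shape
    out = np.zeros((h - fh + 1, w - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + fh, j:j + fw] * kernel)
    return out

# Classic kernels for the operations mentioned above.
edge = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]])  # edge detection
blur = np.ones((3, 3)) / 9.0                                # box blur
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])   # sharpen

image = np.random.rand(8, 8)
print(conv2d(image, edge).shape)  # (6, 6)
```

On a flat (constant) image, the edge kernel outputs zero everywhere, since its coefficients sum to zero; it only responds where pixel intensities change.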
Depth
• It corresponds to the number of filters that we use for the convolution operation.

• In the figure, the convolution operation is
applied with three filters, which generates
three feature maps.
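That "one feature map per filter" relationship can be shown with NumPy's sliding-window view (the image and filter values are random, for illustration only):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

image = np.random.rand(6, 6)
filters = np.random.rand(3, 3, 3)  # 3 filters of size 3x3

# Gather every 3x3 patch, then convolve each patch with each filter.
patches = sliding_window_view(image, (3, 3))        # shape (4, 4, 3, 3)
feature_maps = np.einsum('ijhw,khw->kij', patches, filters)
print(feature_maps.shape)  # (3, 4, 4): one 4x4 feature map per filter
```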
Strides
• It's the number of pixels by which the filter shifts over the image matrix.

• The figure shows convolution with a

3×3 filter and a stride of 2.
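The effect of the stride on the output size follows the general formula ⌊(n − f + 2p) / s⌋ + 1, sketched below (the 7×7 input size is an assumption for illustration):

```python
def conv_output_size(n, f, stride, padding=0):
    """Output size along one dimension: floor((n - f + 2p) / s) + 1."""
    return (n - f + 2 * padding) // stride + 1

# A 7x7 image with a 3x3 filter and stride 2 yields a 3x3 output;
# with stride 1 it would yield 5x5.
print(conv_output_size(7, 3, 2))  # 3
print(conv_output_size(7, 3, 1))  # 5
```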
Zero-padding
It consists of padding the input image with zeros around the border. Padding has
benefits such as:
• It allows us to control the size of the feature maps and use CONV layers without
shrinking the height and width (this is important for deeper networks).
• It helps keep more of the information at the border of an image.
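A quick sketch of one-pixel zero-padding, which lets a 3×3 valid convolution return a map with the original height and width:

```python
import numpy as np

image = np.arange(16).reshape(4, 4)

# Pad one pixel of zeros around the border ("same" padding for a 3x3 filter).
padded = np.pad(image, pad_width=1, mode='constant', constant_values=0)
print(padded.shape)  # (6, 6): a 3x3 valid convolution on this gives 4x4,
                     # the original image size
```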
Non-Linearity (ReLU)
• After every Conv layer, the ReLU operation is applied. It consists of replacing
all negative values in the feature map with zero, introducing non-linearity into the
ConvNet.
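ReLU is a one-liner in NumPy; the feature-map values below are made up for illustration:

```python
import numpy as np

feature_map = np.array([[-3.0, 2.0],
                        [ 0.5, -1.0]])

# ReLU: replace every negative value with zero, keep the rest unchanged.
relu = np.maximum(feature_map, 0)
print(relu)  # [[0.  2. ]
             #  [0.5 0. ]]
```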
Pooling Layer
This layer reduces the number of parameters when the images are
too large. It's also called subsampling because it reduces the dimensionality of
each map while retaining the relevant information. Spatial pooling can be of
different types:
• Max Pooling
• Average Pooling
• Sum Pooling
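All three pooling types can be sketched over the same 2×2 non-overlapping blocks (the feature-map values are illustrative):

```python
import numpy as np

feature_map = np.array([[1, 3, 2, 4],
                        [5, 6, 1, 2],
                        [0, 2, 3, 1],
                        [1, 1, 0, 4]], dtype=float)

# Group the 4x4 map into 2x2 blocks (stride 2), then reduce each block.
blocks = feature_map.reshape(2, 2, 2, 2).swapaxes(1, 2)  # (2, 2) grid of 2x2 blocks
max_pool = blocks.max(axis=(2, 3))   # Max Pooling
avg_pool = blocks.mean(axis=(2, 3))  # Average Pooling
sum_pool = blocks.sum(axis=(2, 3))   # Sum Pooling
print(max_pool)  # [[6. 4.]
                 #  [2. 4.]]
```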
Fully connected layer
• These layers are used to train on the extracted features. The image
is flattened and fed into fully connected layers.
COMPLETE NETWORK
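The full pipeline described above (convolution → ReLU → pooling → flatten → fully connected → softmax) can be sketched end to end in NumPy; every layer size and weight here is random and illustrative, not a trained network:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)

def conv2d(image, filters):
    """Valid convolution of a 2-D image with a stack of filters."""
    patches = sliding_window_view(image, filters.shape[1:])
    return np.einsum('ijhw,khw->kij', patches, filters)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

image = rng.random((8, 8))                       # grayscale input
filters = rng.standard_normal((4, 3, 3))         # 4 conv filters

x = conv2d(image, filters)                       # (4, 6, 6) feature maps
x = np.maximum(x, 0)                             # ReLU
x = x.reshape(4, 3, 2, 3, 2).max(axis=(2, 4))    # 2x2 max pooling -> (4, 3, 3)
x = x.ravel()                                    # flatten -> 36 values
W = rng.standard_normal((10, x.size))            # fully connected layer, 10 classes
probs = softmax(W @ x)                           # probabilities between 0 and 1
print(probs.shape)  # (10,); the probabilities sum to 1
```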
Transfer Learning
It's a method where a model developed for one task is
reused as the starting point for a model on a second task. It is
popular in deep learning for tasks such as computer vision and
natural language processing, given the vast compute and time
resources required to develop models for these problems.
Two Transfer Learning Approach
Develop Model Approach
• Select Source Task: Select a related predictive modeling problem.
• Develop Source Model: Develop a model for the first task (the model
must be better than a naive model).
• Reuse Model: The model fit on the first task can then be used as the
starting point for a model on the second task.
• Tune Model: Optionally, the model may need to be adapted on the
input or output data.

Pre-trained Model Approach
• Select Source Model: A pre-trained source model is chosen from
available models.
• Reuse Model: The model fit on the first task can then be used as the
starting point for a model on the second task.
• Tune Model: Optionally, the model may need to be adapted on the
input or output data.
When to Use Transfer Learning?
• Transfer Learning is used to save time or get better performance.
Authors describe three possible benefits when using Transfer Learning:
• Higher Start: The initial skill of the model is higher.
• Higher Slope: The rate of improvement of skill during training is steeper.
• Higher Asymptote: The converged skill of the trained model is better.
Transfer Learning with Image Data
It's common to use a deep learning model pre-trained on a large
image classification task, such as the ImageNet 1000-class photo
classification competition.
Some examples of models of this type include:
• Oxford VGG Model
• Google Inception Model
• Microsoft ResNet Model
How to Use Transfer Learning
• Classifier: The pre-trained model is used directly to
classify the new images.
• Standalone Feature Extractor: The pre-trained model is
used to extract relevant features.
• Integrated Feature Extraction: The pre-trained model is
integrated into a new model and frozen during training.
• Weight Initialization: The pre-trained model is
integrated into a new model, but it isn't frozen during
training.
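The distinction between these modes can be sketched with a toy linear "pre-trained" extractor; everything here is illustrative (in practice you would load real pre-trained weights, e.g. from a VGG or ResNet model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "pre-trained" feature extractor: a fixed linear map learned elsewhere.
pretrained_W = rng.standard_normal((16, 64))

def extract_features(x, W):
    """Standalone / frozen feature extraction: apply the pre-trained
    weights without updating them."""
    return np.maximum(W @ x, 0)  # linear map + ReLU

x = rng.random(64)
features = extract_features(x, pretrained_W)

# Integrated feature extraction vs. weight initialization differ only in
# whether pretrained_W would also be updated while training the new head.
new_head_W = rng.standard_normal((3, 16))  # trainable classifier head
logits = new_head_W @ features
print(features.shape, logits.shape)  # (16,) (3,)
```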