Deep Learning - CNN
Table of contents
01 Deep Learning Recap
02 CNN
03 CNN Implementation
04 Task
01
DL Recap
A quick recap on what you’ve
learned so far about deep learning
Deep Learning Recap
● Neural Networks: the fundamental building blocks of deep learning.
● Activation Functions: introduce non-linearity into neural networks.
● Applications: deep learning is applied in various domains.
● Challenges: but it also comes with several challenges.
More on Activation Functions
Activation functions introduce non-linearity to neural networks, allowing them to
approximate complex relationships between inputs and outputs. Common
activation functions include:
● ReLU (Rectified Linear Activation): f(x) = max(0, x). It converts negative
inputs to zero and passes positive inputs unchanged.
● Sigmoid: f(x) = 1 / (1 + exp(-x)). It maps input values to a range between 0
and 1, which is useful for binary classification problems.
● Tanh (Hyperbolic Tangent): f(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)). It
maps input values to a range between -1 and 1, offering more expressive
power than the sigmoid function.
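To make the formulas above concrete, here is a minimal NumPy sketch of the three activation functions (the use of NumPy and the sample input values are illustrative assumptions, not part of the original slides):

```python
import numpy as np

def relu(x):
    # ReLU: negative inputs become zero, positive inputs pass through unchanged
    return np.maximum(0, x)

def sigmoid(x):
    # Sigmoid: squashes any real input into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def tanh(x):
    # Tanh: squashes any real input into the range (-1, 1)
    return np.tanh(x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))     # [0.  0.  0.  0.5 2. ]
print(sigmoid(x))  # values strictly between 0 and 1
print(tanh(x))     # values strictly between -1 and 1
```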
02
CNN
Unveiling the Power of
Convolutional Neural Networks
Introduction to CNNs
● CNNs are a specialized architecture for analyzing visual data, with a focus on
image processing.
● They achieve outstanding performance in tasks like image recognition, object
detection, and more.
Convolutional Layers
● Convolutional layers are the heart of CNNs. They use filters (kernels) to
detect local patterns in images.
● By sliding the filters across the input image, we generate feature maps,
highlighting relevant patterns.
● CNNs automatically learn meaningful features from the data, making them
powerful for visual tasks.
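As a concrete illustration of the sliding-filter idea in the bullets above, here is a minimal NumPy sketch (the convolve2d helper, the toy image, and the edge-detector kernel are made-up examples; real frameworks implement convolution far more efficiently):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a kernel over a 2D image (no padding, stride 1) to build a feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise multiply the current patch with the kernel and sum the result
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image with a vertical edge between the dark left half and bright right half
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A simple vertical-edge detector kernel
kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

print(convolve2d(image, kernel))  # strongest responses where the vertical edge is
```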
Pooling Layers
● Alongside convolutional layers, we have pooling layers, like Max Pooling.
● Pooling layers reduce spatial dimensions, which lowers computation and
helps control overfitting.
● They retain crucial information while discarding less critical details, aiding
overall performance.
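To show how pooling shrinks a feature map while keeping the strongest responses, here is a small sketch (the max_pool2d helper and the sample feature map are illustrative assumptions):

```python
import numpy as np

def max_pool2d(feature_map, size=2):
    """2x2 max pooling with stride equal to the pool size: keep the strongest value per window."""
    h, w = feature_map.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = feature_map[i * size:(i + 1) * size, j * size:(j + 1) * size]
            out[i, j] = window.max()
    return out

fm = np.array([
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 3],
], dtype=float)

print(max_pool2d(fm))
# [[4. 2.]
#  [2. 7.]]  -> a 4x4 map reduced to 2x2 while keeping the largest activations
```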
Transfer Learning
● Transfer learning enhances CNN efficiency. Pre-trained models (e.g., networks
trained on ImageNet, or detectors like YOLO) can be fine-tuned for specific tasks.
● It saves time and resources by leveraging pre-existing knowledge, ideal for
limited data scenarios.
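Here is a minimal transfer-learning sketch, assuming TensorFlow/Keras is available; the choice of MobileNetV2, the 224x224 input size, and the 10-class head are illustrative assumptions, not part of the original slides:

```python
import tensorflow as tf

# Load a model pre-trained on ImageNet, without its original classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # freeze pre-trained weights; only the new head will be trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # new head for a hypothetical 10-class task
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # fine-tune on your own (small) dataset
```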
Applications of CNNs
CNNs have revolutionized various domains with their exceptional performance:
● Image Classification: Identifying objects and assigning them to predefined
categories.
● Object Detection: Localizing and classifying multiple objects within an
image.
● Facial Recognition: Identifying individuals based on facial features.
● Medical Image Analysis: Assisting in diagnosing diseases from medical
images.
● Autonomous Vehicles: Enabling self-driving cars to perceive and navigate
their surroundings.
03
CNN
Implementation
Building a Convolutional Neural
Network
CNN Implementation
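A minimal sketch of a CNN in Keras, combining the convolutional, pooling, and dense layers discussed above (the layer sizes and the MNIST dataset are illustrative assumptions; the notebook used in class may differ):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),                # e.g. 28x28 grayscale images
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),   # learn 32 local filters
    tf.keras.layers.MaxPooling2D((2, 2)),                    # downsample feature maps
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),                                # feature maps -> flat vector
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),          # 10-class output
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Example data: MNIST digits, scaled to [0, 1] with a channel dimension added
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0
# model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```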
04
Task
Unraveling the Importance of
Flatten Layer and Padding in
Convolutional Neural Networks
Thanks! Do you have any questions?
moh-mourad@outlook.com
+20 101 697 5386
mohamed-mourad.github.io/index.html#