
Convolutional Neural Network:

In neural networks, the convolutional neural network (ConvNet or CNN) is one of the main
categories used for image recognition and image classification. Object detection and face
recognition are some of the areas where CNNs are widely used.
CNN image classification takes an input image, processes it, and classifies it under certain
categories (e.g., dog, cat, tiger, lion). A computer sees an input image as an array of pixels,
whose size depends on the image resolution: h x w x d (h = height, w = width, d = depth).
E.g., a colour image might be a 6 x 6 x 3 array (the 3 refers to the RGB channels), while a
grayscale image might be a 4 x 4 x 1 array.

Technically, to train and test deep learning CNN models, each input image is passed through a
series of convolution layers with filters (kernels), pooling layers, and fully connected (FC)
layers, and a softmax function is applied to classify the object with probabilistic values
between 0 and 1. The figure below shows the complete flow of a CNN processing an input image
and classifying the objects based on these values.
Convolution Layer
Convolution is the first layer used to extract features from an input image. Convolution
preserves the relationship between pixels by learning image features from small squares of
input data. It is a mathematical operation that takes two inputs: an image matrix and a filter
(kernel).

Consider a 5 x 5 image whose pixel values are 0 and 1, and a 3 x 3 filter matrix, as shown below.

Figure : Image matrix multiplies kernel or filter matrix


The convolution of the 5 x 5 image matrix with the 3 x 3 filter matrix produces an output
called the "feature map", as shown below.

Figure : 3 x 3 Output matrix
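The worked example above can be sketched in code. Below is a minimal NumPy version of the convolution step; the 5 x 5 binary image and 3 x 3 filter values are assumptions chosen for illustration, not read from the figure:

```python
import numpy as np

# Hypothetical 5 x 5 binary image and 3 x 3 filter (values assumed for illustration)
image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])

def convolve2d(img, k):
    """Slide the kernel over the image; each output entry is the sum of
    the element-wise products of the kernel and the patch it covers."""
    kh, kw = k.shape
    oh = img.shape[0] - kh + 1
    ow = img.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=img.dtype)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = convolve2d(image, kernel)
# feature_map is the 3 x 3 matrix [[4, 3, 4], [2, 4, 3], [2, 3, 4]]
```

A 5 x 5 input convolved with a 3 x 3 filter yields a 3 x 3 feature map, matching the output matrix in the figure caption.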


Convolving an image with different filters can perform operations such as edge detection,
blurring, and sharpening. The example below shows the various convolved images obtained after
applying different types of filters (kernels).

Figure: Some common filters

Strides
The stride is the number of pixels by which the filter shifts over the input matrix. When the
stride is 1, we move the filter 1 pixel at a time; when the stride is 2, we move the filter 2
pixels at a time, and so on. The figure below shows how convolution works with a stride of 2.
Padding
Sometimes the filter does not fit the input image perfectly. We have two options:
 Pad the image with zeros (zero padding) so that the filter fits.
 Drop the part of the image where the filter does not fit. This is called valid padding, and it
keeps only the valid part of the image.
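The effect of stride and padding on the output size can be checked with a short sketch; the input values are hypothetical, and the sizes follow the general formula output = (input - filter + 2 x padding) / stride + 1:

```python
import numpy as np

img = np.arange(25, dtype=float).reshape(5, 5)  # hypothetical 5 x 5 input
f = 3  # filter size

# Valid padding: no zeros added; output size = (5 - 3)//1 + 1 = 3
valid_size = (img.shape[0] - f) // 1 + 1

# Zero padding of 1 pixel on every side gives a 7 x 7 input, so a 3 x 3
# filter with stride 1 keeps the output at 5 x 5 (the "same" size)
padded = np.pad(img, pad_width=1, mode="constant", constant_values=0)
same_size = (padded.shape[0] - f) // 1 + 1

# With a stride of 2 the filter jumps 2 pixels at a time: (7 - 3)//2 + 1 = 3
stride2_size = (padded.shape[0] - f) // 2 + 1
```

Larger strides and less padding both shrink the output, which is why stride is sometimes used as a cheap alternative to pooling.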
Non Linearity (ReLU)
ReLU stands for Rectified Linear Unit and is a non-linear operation. Its output is
ƒ(x) = max(0, x).
Why is ReLU important? ReLU's purpose is to introduce non-linearity into our ConvNet, since the
real-world data we would want our ConvNet to learn is mostly non-linear, while convolution
itself is a linear operation. Other non-linear functions such as tanh or sigmoid can also be
used instead of ReLU, but most practitioners use ReLU because, performance-wise, it is better
than the other two.
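ReLU is simple to sketch as an element-wise operation; the feature-map values below are hypothetical:

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: f(x) = max(0, x), applied element-wise."""
    return np.maximum(0, x)

feature_map = np.array([[-2.0, 1.5],
                        [0.0, -0.5]])  # hypothetical values
activated = relu(feature_map)
# negatives become 0, non-negatives pass through: [[0, 1.5], [0, 0]]
```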
Pooling Layer
A pooling layer reduces the number of parameters when the images are too large. Spatial
pooling, also called subsampling or downsampling, reduces the dimensionality of each feature
map but retains the important information. Spatial pooling can be of different types:
 Max pooling
 Average pooling
 Sum pooling
Max pooling takes the largest element from the rectified feature map; average pooling takes the
average of the elements in each window; sum pooling takes the sum of all elements in each window.

Figure: Max Pooling


Another type of pooling layer is the average pooling layer. Here, rather than the max value,
the average of each block is computed. Sum pooling works the same way, but takes the sum of
each block instead.
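All three pooling types can be sketched in a few lines. Here is a minimal version that pools a hypothetical 4 x 4 feature map with a non-overlapping 2 x 2 window:

```python
import numpy as np

fm = np.array([[1, 3, 2, 1],
               [4, 6, 5, 2],
               [3, 1, 1, 0],
               [1, 2, 4, 3]], dtype=float)  # hypothetical rectified feature map

def pool(x, size=2, mode="max"):
    """Non-overlapping pooling with a size x size window (stride = size)."""
    h, w = x.shape[0] // size, x.shape[1] // size
    blocks = x.reshape(h, size, w, size)  # split into (h x w) blocks
    if mode == "max":
        return blocks.max(axis=(1, 3))
    if mode == "avg":
        return blocks.mean(axis=(1, 3))
    return blocks.sum(axis=(1, 3))  # sum pooling

max_pooled = pool(fm, mode="max")  # [[6, 5], [3, 4]]
avg_pooled = pool(fm, mode="avg")  # [[3.5, 2.5], [1.75, 2.0]]
sum_pooled = pool(fm, mode="sum")  # [[14, 10], [7, 8]]
```

Each variant halves the height and width here, which is how pooling cuts the parameter count for the layers that follow.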
Fully Connected Layer
In the layer we call the FC layer, we flatten our matrix into a vector and feed it into a fully
connected layer, as in a regular neural network.

Figure: After pooling layer, flattened as FC layer


In the above diagram, the feature map matrix is converted into a vector (x1, x2, x3, …). In the
fully connected layers, we combine these features together to create a model. Finally, we apply
an activation function such as softmax or sigmoid to classify the outputs as cat, dog, car,
truck, etc.
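The flatten-then-classify step can be sketched as follows; the feature-map values, the weights, and the choice of three classes are all hypothetical:

```python
import numpy as np

def softmax(z):
    """Turn raw scores into probabilities that sum to 1."""
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical 2 x 2 feature map, flattened into the vector (x1, x2, x3, x4)
x = np.array([[0.5, 1.0],
              [0.0, 2.0]]).flatten()

# Hypothetical weights and bias for a 3-class FC layer (e.g. cat, dog, car)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, x.size))
b = np.zeros(3)

probs = softmax(W @ x + b)  # one probability per class, summing to 1
```

The class with the highest probability (`probs.argmax()`) is the network's prediction.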

Figure: Complete CNN architecture


UNIT-VI: Introduction to Data Visualization: Meaning of data visualization, basic principles,
categorical and continuous variables, exploratory graphical analysis, creating static graphs,
animated visualizations.

Data visualization is the graphical representation of information and data using visual
elements like charts, graphs, and maps.

Data visualization tools provide an accessible way to see and understand trends, outliers, and
patterns in data. Visualizations make both big (complex) and small (simple) data easier for the
human brain to understand, make it easier to detect patterns and trends in groups of data, and
make research and data analysis much quicker.

Design principles for creating beautiful and effective data visualizations

1. Balance the design: A balanced design is one in which visual elements such as shape, color,
negative space, and texture are equally distributed across the plot.

There are three different types of balances in design:

 Symmetrical – Each side of the visual is the same as the other.

 Asymmetrical – Both sides are different but still have a similar visual weight.

 Radial – Elements are placed around a central object which acts as an anchor.

2. Emphasize the key areas: The goal of a visualization is to make sure the important data does
not go unnoticed; emphasizing it helps viewers understand the data.

3. Illustrate movement: Your visual elements should mimic movement in an "F" pattern, which is
how people read: from top left to right, then gradually down the page.
4. Smart use of patterns: When it comes to visualizing your data, patterns are a great way to
display similar types of information spread across the page as one.
5. Proportion: If you draw a picture of a bird on a tree, the tree will be significantly bigger
than the bird. In a data visualization, proportion is conveyed by the size of each element on
the page.
6. Proper rhythm: A design is said to have a balanced rhythm when the design elements
together create a pleasing movement to the eye. If the design elements like shapes, colors or
proportions together create “choppiness”, you might want to rearrange them so as to facilitate
smooth eye movement across the data.
7. Variety
Variety is an important factor that keeps viewers engaged and interested in your data.
8. Theme
A unified theme ensures every part of your design is consistent and follows a standard.

Creating static graphs, animated visualizations


Bar charts and pie charts are popular, easy-to-read graphics used to visualize and analyze data
that involves no time sequence. Graphs and charts such as line charts, pie charts, column
charts, bar charts, and area charts are effective tools for displaying data clearly and
distinctly. Among these, bar charts and pie charts are the most prevalent, applied in
presentations, brochures, websites, magazines, and newspapers to create better visual effects.
Because of the differences between them, the two types are used in different situations: the
bar/column chart excels at showing discrete data while comparing one data point against
another, while the pie chart is the classic way to show how various parts make up a whole.

Pie Chart: a special chart that uses "pie slices" to show the relative sizes of data; i.e., a
pie chart is a circle showing the proportions of several subjects, with the different subjects
split into wedge-shaped slices. Pie charts now have variants, such as the perspective pie
chart, doughnut chart, exploded pie chart, polar area diagram, ring chart, and spie chart.
Line Graph: a graph that shows information that is connected in some way (such as change over
time).

A Bar Graph (also called a Bar Chart) is a graphical display of data using bars of different
heights; i.e., a bar chart shows rectangular bars plotted vertically or horizontally on axes,
with varying heights representing categorical data. On a bar chart you can clearly see the
value of a subject, for example the rainfall of a city. A bar graph shows comparisons among
discrete categories.
Bar graphs are good when your data is in categories, whereas histograms deal with continuous
data. It is best to leave gaps between the bars of a bar graph so it doesn't look like a
histogram.
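As a sketch of creating static graphs, the bar and pie charts described above can be produced with matplotlib (assuming it is installed; the rainfall figures are hypothetical):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Hypothetical monthly rainfall (mm) for one city
months = ["Jan", "Feb", "Mar", "Apr"]
rainfall = [78, 55, 62, 90]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
bars = ax1.bar(months, rainfall)                     # bar chart: discrete categories
ax1.set_title("Rainfall by month")
ax1.set_ylabel("mm")
ax2.pie(rainfall, labels=months, autopct="%1.0f%%")  # pie chart: parts of a whole
ax2.set_title("Share of total rainfall")
fig.tight_layout()
fig.savefig("rainfall.png")                          # write the static graph to disk
```

The bar chart compares one month against another, while the pie chart shows each month's share of the whole, matching the two use cases described above.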
Histogram: a graphical display of data using bars of different heights. It is similar to a bar
chart, but a histogram groups numbers into ranges.

Pictograph: A Pictograph is a way of showing data using images; each image stands for a
certain number of things.
Scatter plot: A scatter plot (also called a scatterplot, scatter graph, scatter chart,
scattergram, or scatter diagram) is a type of plot or mathematical diagram that uses Cartesian
coordinates to display values for, typically, two variables for a set of data.
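Animated visualizations, also named in this unit, can be sketched with matplotlib's FuncAnimation, which redraws the figure frame by frame (a minimal example; saving it to a file would additionally need a writer such as pillow):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

x = np.linspace(0, 2 * np.pi, 100)
fig, ax = plt.subplots()
(line,) = ax.plot(x, np.sin(x))
ax.set_ylim(-1.2, 1.2)

def update(frame):
    """Shift the sine wave a little on each frame."""
    line.set_ydata(np.sin(x + 0.1 * frame))
    return (line,)

anim = FuncAnimation(fig, update, frames=30, interval=50, blit=True)
# anim.save("wave.gif") would render the animation to disk (needs the pillow writer)
```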

Estimation theory is a branch of statistics that deals with estimating the values
of parameters based on measured empirical data that has a random component; i.e., estimation is
the exercise of systematically inferring the unobserved or hidden variables from a given
information set, using a mathematical map between the knowns and the unknowns.

Estimation theory provides us with:

1. Methods for estimating the unknowns (model parameters).
2. Means for assessing the goodness of the resulting estimates.
3. Ways of making confidence statements about the true values.
Elements of estimation theory

1. Information set Z: the information set is typically the data (time series, input-output,
etc.). It may also contain a priori information.
2. Model (constraints) M: models establish a connection between the information set and the
space of the unknowns. A model can be specified in various forms, such as
differential/algebraic equations, approximation/predictor functions, or probability density
functions.
3. Objective function J: specifies the goals that have to be achieved by the estimator.
4. Estimator: a mathematical device or expression that computes the estimate using the
information Z, the model M, and the objective function J. The estimator can be treated as a
filter that filters the true solution from the given information.
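The four elements can be illustrated with the simplest possible case: estimating an unknown constant from noisy measurements, where the sample mean is the estimator that minimizes a least-squares objective (the true value and noise level below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Information set Z: 200 noisy measurements of an unknown constant
# (true value and noise level are assumed purely for illustration)
true_value = 5.0
Z = true_value + rng.normal(scale=1.0, size=200)

# Estimator: the sample mean minimizes the least-squares objective
# J(m) = sum((Z - m)^2), so it serves as the estimate of the unknown
estimate = Z.mean()

# Goodness of the estimate: standard error and an approximate
# 95% confidence interval around the estimate
std_err = Z.std(ddof=1) / np.sqrt(len(Z))
ci = (estimate - 1.96 * std_err, estimate + 1.96 * std_err)
```

Here Z is the information set, the constant-plus-noise assumption is the model M, the sum of squared errors is the objective J, and the sample mean is the estimator; the confidence interval provides the confidence statement about the true value.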

Stochastic or random process: A random process is a collection of random variables, usually
indexed by time; the index set may be discrete or continuous, usually denoting time.
Random processes can be divided into various categories, which include random walks,
martingales (probability theory), Markov processes, Gaussian processes, and random fields.
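A random walk, the first category listed above, is easy to simulate as a sketch (the ±1 step values are an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk: X_0 = 0 and X_t = X_{t-1} + s_t, where each step s_t is +1 or -1.
# The collection {X_0, X_1, ..., X_100} is a random process indexed by discrete time t.
steps = rng.choice([-1, 1], size=100)
walk = np.concatenate(([0], np.cumsum(steps)))  # one value per time index t
```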

Statistics of random process:
