MCA – Semester 2
Anitha S Pillai
Professor
School of Computing Sciences,
Department of Computer Applications
Unit 1
• Learning
• Types of machine learning – Supervised learning
• The brain and the neurons
• Linear Discriminants
• Perceptron
• Linear Separability
• Linear Regression
• Multilayer Perceptron
• Examples of using MLP
• Backpropagation of error
What is Machine Learning?
• Machine learning is an application of artificial intelligence (AI) that provides systems the ability to
automatically learn and improve from experience without being explicitly programmed.
• Machine learning focuses on the development of computer programs that can access data and use it to
learn for themselves.
Mitchell’s Machine Learning (ML)
• Mitchell’s definition: a computer program is said to learn from experience E with respect to some class
of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves
with experience E.
• Classification: a supervised learning task where the output has defined labels (discrete values).
• For example, Purchased has the defined labels 0 and 1: 1 means the customer will purchase and 0
means the customer won’t purchase.
• The goal here is to predict discrete values belonging to a particular class; models are evaluated on the
basis of accuracy.
• Classification can be either binary or multi-class.
• In binary classification the model predicts one of two labels (0 or 1; yes or no), whereas in multi-class
classification the model chooses among more than two classes.
• Example: Gmail classifies mail into several classes such as Social, Promotions, Updates, and Forums.
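As a sketch of the binary "purchase / no purchase" task above, here is a minimal nearest-centroid classifier. The data, the single made-up feature (money spent so far), and the centroid rule are all hypothetical, chosen only to illustrate discrete 0/1 prediction:

```python
# Minimal binary-classification sketch for the "will the customer purchase?"
# example: nearest-centroid over one hypothetical feature (money spent).

def fit_centroids(samples):
    # mean feature value per class label (0 = won't buy, 1 = will buy)
    means = {}
    for label in (0, 1):
        vals = [x for x, y in samples if y == label]
        means[label] = sum(vals) / len(vals)
    return means

def classify(means, x):
    # predict the label whose class mean is closest to x
    return min(means, key=lambda label: abs(x - means[label]))

train = [(5, 0), (10, 0), (12, 0), (80, 1), (95, 1), (120, 1)]
means = fit_centroids(train)
print(classify(means, 8), classify(means, 100))  # → 0 1
```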
Regression
• Regression: a supervised learning task where the output has a continuous value.
• Popular supervised learning algorithms include:
• Linear Regression
• Nearest Neighbor
• Gaussian Naive Bayes
• Decision Trees
• Support Vector Machine (SVM)
• Random Forest
Unsupervised Learning Algorithms
• Unsupervised learning is where you only have input data (X) and no corresponding output variables.
• These are called unsupervised learning problems because, unlike supervised learning above, there are
no correct answers and there is no teacher.
• Algorithms are left to their own devices to discover and present the interesting structure in the data.
Unsupervised Learning Algorithms
• Unsupervised learning problems can be further grouped into clustering and association problems.
• Clustering: a clustering problem is where you want to discover the inherent groupings in the data,
such as grouping customers by purchasing behavior.
• Association: an association rule learning problem is where you want to discover rules that describe
large portions of your data, such as "people that buy X also tend to buy Y".
• Some popular examples of unsupervised learning algorithms are:
• k-means for clustering problems.
• Apriori algorithm for association rule learning problems.
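The k-means idea mentioned above can be sketched in a few lines. This is an illustrative pure-Python version on made-up one-dimensional data (real work would use a library such as scikit-learn, and the initialisation here is deliberately naive):

```python
# Minimal k-means sketch (k = 2) on toy 1-D data: alternate between
# assigning points to the nearest centroid and moving each centroid
# to the mean of its assigned points.

def kmeans_1d(points, k=2, iters=10):
    centroids = points[:k]                 # naive init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: attach each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # update step: move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) for c in clusters]
    return centroids, clusters

data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]      # two obvious groupings
centroids, clusters = kmeans_1d(data)
print(sorted(round(c, 1) for c in centroids))  # → [1.0, 8.1]
```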
Semi-Supervised Learning Algorithms
• Problems where you have a large amount of input data (X) and only some of the data is labeled (Y) are
called semi-supervised learning problems.
• These problems sit in between supervised and unsupervised learning.
• A good example is a photo archive where only some of the images are labeled (e.g. dog, cat, person) and
the majority are unlabeled.
• It can be expensive or time-consuming to label data, as it may require access to domain experts, whereas
unlabeled data is cheap and easy to collect and store.
• You can use unsupervised learning techniques to discover and learn the structure in the input variables.
• You can also use supervised learning techniques to make best-guess predictions for the unlabeled data,
feed that data back into the supervised learning algorithm as training data, and use the model to make
predictions on new, unseen data.
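The "best-guess, then feed back" loop described above (often called pseudo-labelling) can be sketched as follows. The data is invented, and a tiny 1-nearest-neighbour rule stands in for "any supervised model":

```python
# Pseudo-labelling sketch: predict labels for the unlabelled data with a
# model trained on the labelled data, then fold those guesses back in as
# extra training data before predicting on new, unseen inputs.

def nn_predict(train, x):
    # label of the closest labelled example (1-nearest-neighbour)
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

labelled   = [(1.0, "cat"), (1.2, "cat"), (8.0, "dog")]
unlabelled = [0.9, 7.8, 8.4]

# step 1: best-guess (pseudo) labels for the unlabelled data
pseudo = [(x, nn_predict(labelled, x)) for x in unlabelled]

# step 2: enlarged training set = labelled + pseudo-labelled
train = labelled + pseudo

# step 3: predict on new, unseen data with the enlarged set
print(nn_predict(train, 8.1))  # → dog
```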
Linear Discriminant Analysis
• A dimensionality-reduction technique commonly used for supervised classification problems.
• It is used for modeling differences between groups, i.e. separating two or more classes.
• For example, we have two classes and we need to separate them efficiently.
• Classes can have multiple features.
• Using only a single feature to classify them may result in some overlap.
• So, we keep increasing the number of features until the classes separate properly.
Linear Discriminant Analysis
• Suppose we have two sets of data points belonging to two different classes that we want to classify.
• When the data points are plotted on the 2D plane, there may be no single feature axis along which the
two classes of data points separate completely.
• Hence, in this case, LDA (Linear Discriminant Analysis) is used.
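A small sketch of the two-class LDA direction, written without any library: Fisher's discriminant w = Sw⁻¹(m₁ − m₀), where Sw is the within-class scatter matrix and m₀, m₁ the class means. The two toy 2-D classes are invented for illustration:

```python
# Fisher's linear discriminant for two 2-D classes: project every point
# onto w = Sw^-1 (m1 - m0) so the classes separate along a single axis.

def mean(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in (0, 1)]

def scatter(rows, m):
    # 2x2 scatter: sum over the class of (x - m)(x - m)^T
    s = [[0.0, 0.0], [0.0, 0.0]]
    for r in rows:
        d = [r[0] - m[0], r[1] - m[1]]
        for i in (0, 1):
            for j in (0, 1):
                s[i][j] += d[i] * d[j]
    return s

def lda_direction(c0, c1):
    m0, m1 = mean(c0), mean(c1)
    s0, s1 = scatter(c0, m0), scatter(c1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in (0, 1)] for i in (0, 1)]
    # invert the 2x2 within-class scatter matrix by hand
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[ sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det,  sw[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

class0 = [[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]]
class1 = [[6.0, 5.0], [7.0, 8.0], [8.0, 7.0]]
w = lda_direction(class0, class1)
# projecting each point onto w now separates the two classes on one axis
```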
Linear Discriminant Analysis
Biological Neuron
Applications
• Face Recognition: in the field of computer vision, face recognition is a very popular application in
which each face is represented by a very large number of pixel values.
• Linear discriminant analysis (LDA) is used here to reduce the number of features to a more
manageable number before the classification step.
Applications
• Medical: in this field, linear discriminant analysis (LDA) is used to classify a patient's disease state as
mild, moderate, or severe based on the patient's various parameters and the medical treatment he or
she is undergoing.
• This helps doctors intensify or slow the pace of treatment.
Applications
• Customer Identification: suppose we want to identify the type of customers who are most likely to
buy a particular product in a shopping mall.
• By doing a simple question-and-answer survey, we can gather the features of the customers.
• Here, linear discriminant analysis helps us identify and select the features that describe the
characteristics of the group of customers most likely to buy that particular product in the shopping
mall.
Perceptron
• Takes inputs, aggregates them as a weighted sum, and returns 1 only if the aggregated sum exceeds a
threshold, and 0 otherwise.
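The behaviour above can be sketched directly, together with the classic perceptron weight-update rule. The AND-gate training data is the usual textbook example of a linearly separable problem:

```python
# Perceptron sketch: weighted sum of inputs plus a bias, output 1 if the
# sum exceeds 0, else 0, trained with the perceptron update rule
# w <- w + lr * (target - prediction) * x.

def predict(w, b, x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)          # -1, 0 or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# logical AND is linearly separable, so the perceptron can learn it
and_gate = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(and_gate)
print([predict(w, b, x) for x, _ in and_gate])  # → [0, 0, 0, 1]
```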
Linear Separability
https://en.wikipedia.org/wiki/Linear_separability
Linear Separability
• Linear separability implies that if there are two classes then there will be a point,
line, plane, or hyperplane that splits the input features in such a way that all points
of one class are in one half-space and the second class is in the other half-space.
https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781788830577/2/ch02lvl1sec26/linear-separability
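The half-space idea can be tested directly. The crude brute-force scan below (over a small grid of candidate lines, made up for illustration, not a practical algorithm) confirms that AND is linearly separable while XOR is not:

```python
# Brute-force separability check in 2-D: scan candidate lines
# w1*x + w2*y + b = 0 and see whether one puts all class-0 points strictly
# on one side and all class-1 points strictly on the other.

def separable(points0, points1, steps=21):
    grid = [-1 + 2 * i / (steps - 1) for i in range(steps)]
    for w1 in grid:
        for w2 in grid:
            for b in grid:
                if all(w1 * x + w2 * y + b > 0 for x, y in points0) and \
                   all(w1 * x + w2 * y + b < 0 for x, y in points1):
                    return True
    return False

and0 = [(1, 1)]                      # inputs where AND outputs 1
and1 = [(0, 0), (0, 1), (1, 0)]      # inputs where AND outputs 0
xor0 = [(0, 1), (1, 0)]              # inputs where XOR outputs 1
xor1 = [(0, 0), (1, 1)]              # inputs where XOR outputs 0

print(separable(and0, and1))  # → True  (AND is linearly separable)
print(separable(xor0, xor1))  # → False (XOR is not)
```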
Linear Regression
• Linear regression is a linear model, e.g. a model that assumes a linear relationship between the input
variables (x) and the single output variable (y)
• More specifically, that y can be calculated from a linear combination of the input variables (x)
• When there is a single input variable (x), the method is referred to as simple linear regression
• When there are multiple input variables, literature from statistics often refers to the method as
multiple linear regression
y = B0 + B1 * x
• Here B0 is the intercept (bias) and B1 is the coefficient (slope) for the single input variable x.
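The coefficients B0 and B1 of the equation above can be estimated with the ordinary least-squares closed form, sketched here on made-up data that lies exactly on a line:

```python
# Simple linear regression sketch: estimate B0 (intercept) and B1 (slope)
# via ordinary least squares:
#   B1 = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2)
#   B0 = mean_y - B1 * mean_x

def fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
         / sum((x - mx) ** 2 for x in xs)
    b0 = my - b1 * mx
    return b0, b1

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]              # exactly y = 1 + 2x
b0, b1 = fit(xs, ys)
print(b0, b1)  # → 1.0 2.0
```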
Multilayer Perceptron
• The field of artificial neural networks is often just called neural networks or multi-layer perceptrons
after perhaps the most useful type of neural network
• A perceptron is a single neuron model that was a precursor to larger neural networks.
• It is a field that investigates how simple models of biological brains can be used to solve difficult
computational tasks like the predictive modeling tasks we see in machine learning
• The goal is not to create realistic models of the brain, but instead to develop robust algorithms and
data structures that we can use to model difficult problems
Multilayer Perceptron
• The power of neural networks comes from their ability to learn the representation in your training
data and how to best relate it to the output variable that you want to predict
• In this sense neural networks learn a mapping
• Mathematically, they are capable of learning any mapping function and have been shown to be
universal approximators.
• The predictive capability of neural networks comes from the hierarchical or multi-layered structure of
the networks
• The data structure can pick out (learn to represent) features at different scales or resolutions and
combine them into higher-order features
• For example: from lines, to collections of lines, to shapes.
Multilayer Perceptron
• A multi-layered perceptron (MLP) is one of the most common neural network models used in the
field of deep learning
• Often referred to as a “vanilla” neural network, an MLP is simpler than the complex models of
today’s era
• However, the techniques it introduced have paved the way for further advanced neural networks.
• The multilayer perceptron (MLP) is used for a variety of tasks, such as stock analysis, image
identification, spam detection, and election voting predictions.
Multilayer Perceptron
• Input Layer
• The neurons in this layer receive the input features and pass them forward; no computation is
performed here.
• Hidden Layer(s)
• The neurons in these layers compute weighted sums of the previous layer's outputs and apply an
activation function.
• Output Layer
• The neurons in this layer display a meaningful output.
• Connections
• The MLP is a feedforward neural network, which means that the data is transmitted from the input
layer to the output layer in the forward direction.
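The feedforward pass of a tiny MLP can be written out by hand. The weights below are hand-set (not learned) purely to illustrate the layered structure: two hidden units (an OR detector and an AND detector) feeding one output unit compute XOR, something no single perceptron can do:

```python
# Forward pass of a minimal MLP with hand-set weights that computes XOR:
# hidden layer = {OR detector, AND detector}, output = "OR but not AND".

def step(s):
    # threshold activation: 1 if the weighted sum is positive, else 0
    return 1 if s > 0 else 0

def mlp_xor(x1, x2):
    h_or  = step(x1 + x2 - 0.5)        # fires when at least one input is 1
    h_and = step(x1 + x2 - 1.5)        # fires only when both inputs are 1
    return step(h_or - h_and - 0.5)    # OR but not AND = XOR

print([mlp_xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 1, 1, 0]
```

In a trained MLP these weights would instead be found by backpropagation of error, as listed in the unit outline.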