
ABSTRACT

All Machine Learning algorithms need to be trained for supervised learning tasks such as classification and prediction. By training we mean fitting them to particular inputs so that later we can test them on unknown inputs, which they then classify or predict based on what they have learned. This is what most Machine Learning techniques, such as Neural Networks, SVMs, and Bayesian methods, are based upon. So in a typical Machine Learning project, you first divide your input set into a Training Set and a Test Set (or Evaluation Set).
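As a minimal sketch of such a split, assuming scikit-learn is available (the feature matrix X and labels y below are hypothetical placeholders, not taken from this text):

    from sklearn.model_selection import train_test_split

    # Hypothetical toy data: X holds feature vectors, y the known labels.
    X = [[0, 1], [1, 0], [1, 1], [0, 0]]
    y = [1, 0, 1, 0]

    # Hold out 25% of the examples as the Test (Evaluation) Set;
    # a model would be trained only on X_train / y_train.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42
    )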

Naive Bayes is based on the idea of conditional probability and Bayes' rule. In conditional probability, we find the probability of an event given that some other event has already occurred; with Bayes' theorem we do just the opposite, finding the cause of some event that has already occurred.
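In standard notation (the textbook formulation, not taken verbatim from this text), Bayes' rule for a cause A and observed evidence B reads:

    P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}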
In reality, we have to predict an outcome given multiple pieces of evidence, and in that case the math gets very complicated. So we 'uncouple' the pieces of evidence and treat each piece as independent of the others. This approach is called Naive Bayes.
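Under that independence assumption, the posterior for a class C given evidence x_1, ..., x_n factors as follows (again the standard formulation; the symbols are introduced here for illustration):

    P(C \mid x_1, \dots, x_n) \propto P(C) \prod_{i=1}^{n} P(x_i \mid C)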
When classifying, each possible outcome is called a class and has a class label. We consider how likely the evidence is under each class and assign a label accordingly: the class with the highest probability is declared the "winner", and its label is assigned to that combination of evidence.
