
Discussion

Name

Institution

Course

Professor

Date of Submission
What are the various types of classifiers?

A decision tree is a supervised machine learning classification algorithm that builds a model shaped like a tree. The classifier splits the data into progressively finer groups, from the trunk of the tree down to its leaves. It relies on if-then rules, which create subcategories nested within broader categories and therefore permit precise yet natural categorization (Schein & Ungar, 2017).
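As a rough illustration, a minimal sketch of a decision-tree classifier is given below; the use of scikit-learn, the iris data, and the 70/30 split are assumptions made for the example only and are not part of the discussion above.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # Each internal node of the fitted tree applies an if-then rule on one
    # feature, refining the data from the "trunk" down to the leaf categories.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X_train, y_train)
    print("test accuracy:", tree.score(X_test, y_test))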

The naïve Bayes classifier calculates the probability that any given data point belongs to one of several categories (Schein & Ungar, 2017). It is commonly used to sort customer comments, emails, and news articles into topics, subjects, or tags according to predetermined criteria. K-nearest neighbors is a pattern recognition algorithm that stores the training data points and learns from them by calculating how closely they correspond to other data in n-dimensional space. Its main aim is to find the stored points most similar to future, unseen data (Schein & Ungar, 2017).
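A brief sketch contrasting the two classifiers follows; the scikit-learn implementations and the iris data are assumed here for illustration and are not drawn from the source.

    from sklearn.datasets import load_iris
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    # Naive Bayes estimates, for a given point, the probability of each category.
    nb = GaussianNB().fit(X, y)
    print("class probabilities:", nb.predict_proba(X[:1]))

    # K-nearest neighbors mostly stores the training points at fit time and
    # classifies new data by distance in n-dimensional feature space.
    knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
    print("nearest-neighbor label:", knn.predict(X[:1]))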

What is a rule-based classifier?

It is a type of classifier that makes its class decision by applying a set of "if-else" rules. Because these rules are easy to interpret, the classifier is often used to generate descriptive models (Schein & Ungar, 2017).
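A minimal hand-written example is sketched below; the rules and thresholds are hypothetical, chosen only to show how a chain of interpretable if-else tests produces a class decision.

    def classify_petal(petal_length_cm, petal_width_cm):
        # Hypothetical rules: each branch is directly readable as a rule.
        if petal_length_cm < 2.5:
            return "setosa"
        elif petal_width_cm < 1.8:
            return "versicolor"
        else:
            return "virginica"

    print(classify_petal(1.4, 0.2))   # -> "setosa"
    print(classify_petal(5.1, 2.0))   # -> "virginica"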

What is the difference between the nearest neighbor and naïve Bayes classifier?

The main difference is that the naïve Bayes classifier is a generative model, whereas the nearest neighbor classifier is a discriminative, instance-based one (Schein & Ungar, 2017). Naïve Bayes is also an eager learner that does its work at training time, which makes it very fast at prediction compared to the nearest neighbor classifier, a lazy learner that defers computation until a new point must be classified.


What is logistic regression?

Logistic regression is a model used to estimate the probability of a binary outcome, such as win or lose, dead or alive, or pass or fail. When there is more than one explanatory variable, the model is also used to obtain an odds ratio for each variable (Schein & Ungar, 2017).
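A short sketch of this idea is shown below; the scikit-learn implementation and the breast-cancer data (used only as an example of a binary outcome) are assumptions for illustration. Exponentiating a fitted coefficient gives the odds ratio associated with its explanatory variable.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression

    X, y = load_breast_cancer(return_X_y=True)

    # The model outputs the probability that an event occurs (class 1).
    model = LogisticRegression(max_iter=5000).fit(X, y)
    print("P(class=1) for first case:", model.predict_proba(X[:1])[0, 1])

    # exp(coefficient) = odds ratio for each explanatory variable.
    print("odds ratios:", np.exp(model.coef_)[0, :3])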
References

Schein, A. I., & Ungar, L. H. (2017). Active learning for logistic regression: An evaluation. Machine Learning, 68(3), 235-265.
