
KNN

(K-nearest neighbors algorithm)


What Is KNN?

The k-nearest neighbors (KNN) algorithm is a straightforward supervised machine
learning method that can be used to solve both classification and regression problems.
It is easy to implement and understand, but it has one major drawback: it becomes
significantly slower as the size of the dataset grows.
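As a first intuition, a minimal sketch with made-up points: to label a new point, find the labeled point(s) closest to it.

```python
import math

# made-up training points: (features, label)
points = [((1.0, 1.0), "red"), ((1.5, 2.0), "red"), ((8.0, 8.0), "blue")]
query = (2.0, 2.0)

# the whole training set must be scanned for every prediction,
# which is why KNN slows down as the dataset grows
nearest = min(points, key=lambda p: math.dist(p[0], query))
print(nearest[1])  # "red"
```

Here the single nearest neighbor decides the label; full KNN generalizes this by letting the K closest points vote.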
When to use KNN?

KNN can be applied to both classification and regression problems. In practice, however, it is
more commonly used for classification. When evaluating any technique, we typically consider
three essential factors:

1. Ease of interpreting the output

2. Calculation time

3. Predictive power
Advantages of KNN

1. The algorithm is simple and easy to implement.

2. There’s no need to build a model, tune several parameters, or make additional assumptions.

3. The algorithm is versatile: it can be used for classification, regression, and search.
Disadvantages of KNN

1. Does not work well with large datasets.

2. Does not work well with high-dimensional data.

3. Requires feature scaling.

4. Sensitive to noisy data, missing values, and outliers.
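The need for feature scaling (point 3) can be seen with a small sketch using hypothetical data: when one feature has a much larger numeric range than another, it dominates the distance calculation, and scaling both features to a common range (here min-max scaling, with assumed ranges) can change which neighbor is nearest.

```python
def euclidean(a, b):
    # Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# hypothetical points: (age in years, annual income in dollars)
query = (30, 50_000)
p1 = (31, 90_000)    # nearly the same age, very different income
p2 = (65, 50_500)    # very different age, nearly the same income

# unscaled, the income axis dominates: p2 looks far closer than p1
d1_raw, d2_raw = euclidean(query, p1), euclidean(query, p2)

# min-max scale each feature to [0, 1] using assumed ranges
def scale(point, age_range=(20, 70), income_range=(40_000, 100_000)):
    (a_lo, a_hi), (i_lo, i_hi) = age_range, income_range
    return ((point[0] - a_lo) / (a_hi - a_lo),
            (point[1] - i_lo) / (i_hi - i_lo))

d1_scaled = euclidean(scale(query), scale(p1))
d2_scaled = euclidean(scale(query), scale(p2))
# after scaling, the ordering flips: p1 is now the nearer neighbor
```

Without scaling, a one-dollar difference in income counts as much as a one-year difference in age, which is rarely what we intend.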
Implementation of KNN
1. Load the data.

2. Initialize K to your chosen number of neighbors.

3. For each example in the data:

   3.1 Calculate the distance between the query example and the current example from the data.

   3.2 Add the distance and the index of the example to an ordered collection.

4. Sort the ordered collection of distances and indices from smallest to largest (in ascending order) by the distances.

5. Pick the first K entries from the sorted collection.

6. Get the labels of the selected K entries.

7. If regression, return the mean of the K labels.

8. If classification, return the mode of the K labels.
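The steps above can be sketched in plain Python (a minimal illustration on a toy dataset, not a production implementation):

```python
import math
from collections import Counter

def knn_predict(data, query, k, mode="classification"):
    """data is a list of (features, label) pairs; query is a feature tuple."""
    # steps 3-3.2: distance from the query to every stored example
    distances = []
    for index, (features, _label) in enumerate(data):
        d = math.dist(features, query)   # Euclidean distance
        distances.append((d, index))

    # step 4: sort ascending by distance
    distances.sort()

    # steps 5-6: labels of the K nearest examples
    k_labels = [data[index][1] for _d, index in distances[:k]]

    # step 7: regression -> mean of the K labels
    if mode == "regression":
        return sum(k_labels) / k

    # step 8: classification -> mode (most common) of the K labels
    return Counter(k_labels).most_common(1)[0][0]

# toy dataset: two well-separated clusters labeled "a" and "b"
train = [((1, 1), "a"), ((1, 2), "a"), ((2, 1), "a"),
         ((8, 8), "b"), ((8, 9), "b"), ((9, 8), "b")]
print(knn_predict(train, (2, 2), k=3))  # "a"
```

Note that every prediction recomputes the distance to all stored examples, which is the source of the slowdown on large datasets mentioned earlier.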
