
In pattern recognition, the k-nearest neighbors algorithm (k-NN) is a method for classifying objects based on the closest training examples in the feature space. k-NN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until classification. The k-nearest neighbor algorithm is amongst the simplest of all machine learning algorithms: an object is classified by a majority vote of its neighbors, with the object being assigned to the class most common amongst its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of its nearest neighbor.

The same method can be used for regression, by simply assigning the property value for the object to be the average of the values of its k nearest neighbors. It can be useful to weight the contributions of the neighbors, so that the nearer neighbors contribute more to the average than the more distant ones. (A common weighting scheme is to give each neighbor a weight of 1/d, where d is the distance to the neighbor. This scheme is a generalization of linear interpolation.)

The neighbors are taken from a set of objects for which the correct classification (or, in the case of regression, the value of the property) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required. The k-nearest neighbor algorithm is sensitive to the local structure of the data. Nearest neighbor rules in effect compute the decision boundary implicitly. It is also possible to compute the decision boundary itself explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity.[1]

Algorithm

The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples. In the classification phase, k is a user-defined constant, and an unlabelled vector (a query or test point) is classified by assigning the label which is most frequent among the k training samples nearest to that query point. Usually Euclidean distance is used as the distance metric; however, this is only applicable to continuous variables. In cases such as text classification, another metric such as the overlap metric (or Hamming distance) can be used. Often, the classification accuracy of k-NN can be improved significantly if the distance metric is learned with specialized algorithms such as Large Margin Nearest Neighbor or Neighbourhood Components Analysis.
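The classification phase described above can be sketched in a few lines of Python (a minimal illustration using plain Euclidean distance; the function and variable names are illustrative, not from any particular library):

```python
from collections import Counter
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train, labels, query, k=3):
    """Classify `query` by majority vote among the k nearest training samples."""
    # "Training" is just the stored (vector, label) pairs -- lazy learning.
    ranked = sorted(zip(train, labels), key=lambda p: euclidean(p[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

For example, with three training points near the origin labelled "a" and three near (5, 5) labelled "b", a query at (0.5, 0.5) is assigned to class "a".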

A drawback to the basic "majority voting" classification is that classes with more frequent examples tend to dominate the prediction of the new vector, as they tend to appear among the k nearest neighbors simply because of their large number. One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of its k nearest neighbors. k-NN is a special case of a variable-bandwidth, kernel density "balloon" estimator with a uniform kernel.[2][3]
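Distance-weighted voting can be sketched as follows (again a minimal illustration with hypothetical names; each neighbour's vote is scaled by 1/d, the weighting scheme mentioned in the introduction):

```python
from collections import defaultdict
import math

def weighted_knn_classify(train, labels, query, k=3, eps=1e-12):
    """Vote among the k nearest neighbours, each weighted by 1/distance,
    so nearer neighbours count more and large classes dominate less."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(zip(train, labels), key=lambda p: dist(p[0], query))
    scores = defaultdict(float)
    for vec, label in ranked[:k]:
        scores[label] += 1.0 / (dist(vec, query) + eps)  # eps guards d == 0
    return max(scores, key=scores.get)
```

With one very close neighbour of one class and two distant neighbours of another, plain majority voting picks the distant class, whereas the weighted vote picks the close one.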

Example of k-NN classification. The test sample (green circle) should be classified either to the first class of blue squares or to the second class of red triangles. If k = 3 it is classified to the second class because there are 2 triangles and only 1 square inside the inner circle. If k = 5 it is classified to the first class (3 squares vs. 2 triangles inside the outer circle).
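The caption's behaviour can be reproduced numerically. The one-dimensional coordinates below are hypothetical, chosen only so that the distances match the description (they are not taken from the figure):

```python
def knn_vote(points, query, k):
    """Majority class among the k points nearest to `query` (1-D distances)."""
    ranked = sorted(points, key=lambda p: abs(p[0] - query))
    classes = [c for _, c in ranked[:k]]
    return max(set(classes), key=classes.count)

# Hypothetical layout: distances of each sample from the test point at x = 0.
points = [(1.0, "triangle"), (1.5, "square"), (2.0, "triangle"),
          (3.0, "square"), (3.5, "square"), (4.0, "triangle")]

print(knn_vote(points, 0.0, k=3))  # 2 triangles vs. 1 square -> "triangle"
print(knn_vote(points, 0.0, k=5))  # 3 squares vs. 2 triangles -> "square"
```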

Parameter selection

The best choice of k depends upon the data; generally, larger values of k reduce the effect of noise on the classification, but make boundaries between classes less distinct. A good k can be selected by various heuristic techniques, for example cross-validation. The special case where the class is predicted to be the class of the closest training sample (i.e. when k = 1) is called the nearest neighbor algorithm. The accuracy of the k-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put into selecting or scaling features to improve classification. A particularly popular approach is the use of evolutionary algorithms to optimize feature scaling.[4] Another popular approach is to scale features by the mutual information of the training data with the training classes. In binary (two-class) classification problems, it is helpful to choose k to be an odd number, as this avoids tied votes. One popular way of choosing the empirically optimal k in this setting is via the bootstrap method.[5]
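Selecting k by cross-validation can be sketched with leave-one-out estimation (a minimal Python illustration; `loo_error` and `best_k` are hypothetical helper names, and the candidate values of k are kept odd to avoid tied votes in two-class problems):

```python
from collections import Counter
import math

def loo_error(train, labels, k):
    """Leave-one-out error rate of k-NN on the training set."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    errors = 0
    for i, (x, y) in enumerate(zip(train, labels)):
        # Hold out sample i, classify it with the remaining samples.
        rest = [(v, l) for j, (v, l) in enumerate(zip(train, labels)) if j != i]
        ranked = sorted(rest, key=lambda p: dist(p[0], x))
        vote = Counter(l for _, l in ranked[:k]).most_common(1)[0][0]
        errors += (vote != y)
    return errors / len(train)

def best_k(train, labels, candidates=(1, 3, 5)):
    """Pick the candidate k with the lowest leave-one-out error."""
    return min(candidates, key=lambda k: loo_error(train, labels, k))
```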

Properties

The naive version of the algorithm is easy to implement by computing the distances from the test sample to all stored vectors, but it is computationally intensive, especially when the size of the training set grows. Many nearest neighbor search algorithms have been proposed over the years; these generally seek to reduce the number of distance evaluations actually performed. Using an appropriate nearest neighbor search algorithm makes k-NN computationally tractable even for large data sets. The nearest neighbor algorithm has some strong consistency results. As the amount of data approaches infinity, the algorithm is guaranteed to yield an error rate no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data).[6] k-nearest neighbor is guaranteed to approach the Bayes error rate, for some value of k (where k increases as a function of the number of data points). Various improvements to k-nearest neighbor methods are possible by using proximity graphs.[7]

The k-NN algorithm can also be adapted for use in estimating continuous variables. One such implementation uses an inverse distance weighted average of the k nearest multivariate neighbors. The algorithm proceeds as follows:

1. Compute the Euclidean or Mahalanobis distance from the target plot to the sampled plots.
2. Order the samples by the calculated distances.
3. Choose a heuristically optimal k based on the RMSE obtained by cross-validation.
4. Calculate an inverse distance weighted average of the values of the k nearest multivariate neighbors.
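Steps 1, 2 and 4 above can be sketched as follows (a minimal illustration using Euclidean distance; step 3, choosing k by cross-validation, is assumed to have been done beforehand):

```python
import math

def knn_regress(train, values, query, k=3, eps=1e-12):
    """Predict a continuous value as the inverse-distance-weighted average
    of the values of the k nearest neighbours."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Steps 1-2: compute distances to all samples and order by distance.
    ranked = sorted(zip(train, values), key=lambda p: dist(p[0], query))
    # Step 4: inverse-distance-weighted average over the k nearest.
    weights = [1.0 / (dist(v, query) + eps) for v, _ in ranked[:k]]
    total = sum(weights)
    return sum(w * val for w, (_, val) in zip(weights, ranked[:k])) / total
```

For instance, a query point midway between two equidistant neighbours receives the plain average of their values, while a query coinciding with a training point essentially reproduces that point's value.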

The optimal k for most datasets is 10 or more.[8] This produces much better results than 1-NN. Using a weighted k-NN, in which the weight given to each of the k nearest points' class (or value, in regression problems) is proportional to the inverse of the distance between that point and the point whose class is to be predicted, also significantly improves the results.

References

[1] Bremner D, Demaine E, Erickson J, Iacono J, Langerman S, Morin P, Toussaint G (2005). "Output-sensitive algorithms for computing nearest-neighbor decision boundaries". Discrete and Computational Geometry 33 (4): 593–604. doi:10.1007/s00454-004-1152-0.
[2] D. G. Terrell; D. W. Scott (1992). "Variable kernel density estimation". Annals of Statistics 20: 1236–1265. doi:10.1214/aos/1176348768.
[3] Mills, Peter. "Efficient statistical classification of satellite measurements". International Journal of Remote Sensing.
[4] Nigsch, F.; A. Bender; B. van Buuren; J. Tissen; E. Nigsch; J.B.O. Mitchell (2006). "Melting Point Prediction Employing k-nearest Neighbor Algorithms and Genetic Parameter Optimization". Journal of Chemical Information and Modeling 46 (6): 2412–2422. doi:10.1021/ci060149f. PMID 17125183.
[5] P. Hall; B. U. Park; R. J. Samworth (2008). "Choice of neighbor order in nearest-neighbor classification". Annals of Statistics 36: 2135–2152. doi:10.1214/07-AOS537.
[6] Cover TM, Hart PE (1967). "Nearest neighbor pattern classification". IEEE Transactions on Information Theory 13 (1): 21–27. doi:10.1109/TIT.1967.1053964.
[7] Toussaint GT (April 2005). "Geometric proximity graphs for improving nearest neighbor methods in instance-based learning and data mining". International Journal of Computational Geometry and Applications 15 (2): 101–150. doi:10.1142/S0218195905001622.
[8] Franco-Lopez et al., 2001.

Further reading

- When Is "Nearest Neighbor" Meaningful? (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.1422)
- Belur V. Dasarathy, ed. (1991). Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. ISBN 0-8186-8930-7.
- Shakhnarovich, Darrell, and Indyk, ed. (2005). Nearest-Neighbor Methods in Learning and Vision. MIT Press. ISBN 0-262-19547-X.
- Mäkelä H, Pekkarinen A (2004-07-26). "Estimation of forest stand volumes by Landsat TM imagery and stand-level field-inventory data". Forest Ecology and Management 196 (2–3): 245–255. doi:10.1016/j.foreco.2004.02.049.
- Franco-Lopez H, Ek AR, Bauer ME (September 2001). "Estimation and mapping of forest stand density, volume, and cover type using the k-nearest neighbors method". Remote Sensing of Environment 77 (3): 251–274. doi:10.1016/S0034-4257(01)00209-7.
- V. Garcia, E. Debreuve and M. Barlaud. "Fast k nearest neighbor search using GPU". In Proceedings of the CVPR Workshop on Computer Vision on GPU, Anchorage, Alaska, USA, June 2008.

External links

- k-nearest neighbor algorithm in C++ and Boost (http://codingplayground.blogspot.com/2010/01/nearest-neighbour-on-kd-tree-in-c-and.html) by Antonio Gulli
- k-nearest neighbor algorithm in Java (Applet) (http://www.leonardonascimento.com/en/knn.html) (includes source code) by Leonardo Nascimento Ferreira
- k-nearest neighbor algorithm in Visual Basic and Java (http://paul.luminos.nl/documents/show_document.php?d=197) (includes executable and source code)
- k-nearest neighbor tutorial using MS Excel (http://people.revoledu.com/kardi/tutorial/KNN/index.html)
- STANN: A simple threaded approximate nearest neighbor C++ library that can compute Euclidean k-nearest neighbor graphs in parallel (http://compgeom.com/~stann)
- TiMBL: a fast C++ implementation of k-NN with feature and distance weighting, specifically suited to symbolic feature spaces (http://ilk.uvt.nl/timbl/)
- libAGF: A library for multivariate, adaptive kernel estimation, including KNN and Gaussian kernels (http://libagf.sourceforge.net)
- OBSearch: A library for similarity search in metric spaces created during Google Summer of Code 2007 (http://obsearch.net)
- ANN: A Library for Approximate Nearest Neighbor Searching (http://www.cs.umd.edu/~mount/ANN/)

k-nearest neighbor algorithm Source: http://en.wikipedia.org/w/index.php?oldid=413406465 Contributors: Adam McMaster, Algomaster, Altenmann, AnAj, Atreys, B4hand, BD2412, Barro, BlueNovember, Bracchesimo, Caesura, Charles Matthews, Cibi3d, CommodiCast, DARTH SIDIOUS 2, DHN, Delmonde, Dustinsmith, Emslo69, Fly by Night, Garion96, Geomwiz, GiovanniS, Gnack, GodfriedToussaint, Hongooi, Hu12, ITurtle, Janto, Jbom1, Joeoettinger, Joerite, Kozuch, Lars Washington, Leonid Volnitsky, MER-C, Mach7, Manyu aditya, McSly, Mcld, Mdd4696, Melcombe, Memming, Michael Hardy, MisterHand, Miym, Mlguy, Mpx, MrOllie, Nikolaosvasiloglou, Olaf, PM800, Pakaran, Peteymills, Pgan002, Ploptimist, Pradtke, Protonk, RJASE1, Rama, Rickyphyllis, SQFreak, Slambo, Slightsmile, Stimpy, Stoph, Svante1, Tappoz, The Anome, Thorwald, Topbanana, User A1, X7q, Yc319, 118 anonymous edits

Image:KnnClassification.svg Source: http://en.wikipedia.org/w/index.php?title=File:KnnClassification.svg License: Creative Commons Attribution-Sharealike 2.5 Contributors: User:AnAj

License

Creative Commons Attribution-Share Alike 3.0 Unported: http://creativecommons.org/licenses/by-sa/3.0/
