K-Nearest Neighbor (KNN) is one of the most popular algorithms for pattern recognition, and many researchers have found that it achieves very good performance in experiments on different datasets. Nearest neighbor search, introduced by Fix and Hodges, is one of the most popular learning and classification techniques and has proven to be a simple and powerful recognition algorithm. Cover and Hart demonstrated that the decision rule works well even when no explicit knowledge of the data is available. A simple generalization of this method is the K-NN rule, in which a new sample is assigned to the class with the largest number of members among its K nearest neighbors.

Traditional KNN text classification has three limitations.

High computational complexity: to find the k nearest neighbors of a test sample, the similarity between that sample and every training sample must be calculated. When the number of training samples is small the KNN classifier is no longer optimal, but when the training set contains a huge number of samples the classifier needs considerable time to compute all the similarities. This problem can be addressed in three ways: reducing the size of the feature space, using a smaller training set, or using an improved algorithm that speeds up the neighbor search (one such option is sketched at the end of this section).

Training set dependency: the classifier is generated using only the training samples and no additional data, so its performance depends exclusively on the training set and on the selection of k. This makes the algorithm overly dependent on the training set, and it must be recomputed even after a small change in the training set.

No weight difference between samples: all training samples are treated the same; a class represented by only a few samples carries the same weight as a class with a huge number of samples. This does not correspond to real data, where samples commonly have a non-uniform distribution. One common remedy, sketched at the end of this section, is to weight each neighbor's vote by its distance.

The efficiency of kNNC also depends largely on the effective selection of the k nearest neighbors. A limitation of conventional kNNC is that once the criteria for selecting the k nearest neighbors are chosen, they remain unchanged, and this rigidity is unsuitable in many cases where we want to make correct predictions or classifications on real-life data. An instance is described in the database by a number of attributes and the corresponding attribute values, so the similarity between any two instances is determined by the similarity of their attribute values. In real-life data, however, the similarities in different attributes do not carry the same weight with respect to a particular classification. Furthermore, as more and more training data arrives over time, the similarity in a particular attribute may come to matter more or less than before. For example, suppose we want to predict the outcome of a football match from previous results; the location, the time, and the weather conditions play a very important role in that prediction. But if, in the future, all football matches are played indoors, the weather conditions on the pitch will no longer have the same effect on the outcome of the match. A sketch of such attribute re-weighting is given at the end of this section.
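The following minimal Python sketch illustrates the plain K-NN decision rule and the distance-weighted vote mentioned above. The Euclidean metric, the toy dataset, and the value of k are assumptions chosen only for illustration, not details taken from the discussion.

# A minimal sketch of the K-NN decision rule, together with a distance-weighted
# vote, which is one common remedy for treating all samples equally. The metric,
# the toy data, and k=3 are illustrative assumptions.
from collections import Counter
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3, weighted=False):
    # Distance from the query to every training sample; this exhaustive scan is
    # exactly the source of the computational cost discussed above.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]                    # indices of the k closest samples

    if not weighted:
        # Plain K-NN rule: the class with the most members among the k neighbors wins.
        return Counter(y_train[nearest]).most_common(1)[0][0]

    # Distance-weighted vote: closer neighbors contribute more to their class.
    votes = {}
    for i in nearest:
        w = 1.0 / (dists[i] + 1e-9)                    # guard against division by zero
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    return max(votes, key=votes.get)

# Toy usage with made-up two-dimensional data and two classes.
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([1.2, 1.9]), k=3))                  # expected: 0
print(knn_predict(X, y, np.array([5.5, 8.5]), k=3, weighted=True))   # expected: 1

In practice the choice of k and of the distance metric would be tuned on validation data rather than fixed in advance.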
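One way to realise the "improved algorithm" option for reducing the computational cost is to index the training samples with a space-partitioning structure such as a k-d tree. The sketch below uses SciPy's cKDTree and random data purely as an example; the text does not prescribe a particular data structure.

# A hedged sketch of one way to speed up the neighbor search: index the
# training samples with a k-d tree so a query does not scan every sample.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 5))        # 10,000 training samples, 5 features
y_train = rng.integers(0, 2, size=10_000)     # two classes, labels 0 and 1

tree = cKDTree(X_train)                       # built once, reused for every query

def knn_predict_kdtree(x_query, k=5):
    # The tree returns the indices of the k nearest training samples directly.
    _, idx = tree.query(x_query, k=k)
    # Plain majority vote over the retrieved neighbors' labels.
    return int(np.bincount(y_train[idx]).argmax())

print(knn_predict_kdtree(rng.normal(size=5)))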
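Finally, to make the attribute-weighting point concrete, the sketch below uses a per-attribute weighted Euclidean distance. The football attributes (venue, temperature, rainfall) and the weight values are hypothetical; down-weighting the weather attributes mimics the shift that would follow from playing all matches indoors.

# A hedged sketch of an attribute-weighted distance for k-NN. The attribute
# names and weight values are hypothetical and only illustrate how re-weighting
# attributes changes the similarity between instances.
import numpy as np

def weighted_distance(a, b, weights):
    # Euclidean distance in which each attribute contributes according to its weight.
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum(weights * diff ** 2)))

# Two hypothetical match descriptions: [home venue (1) or away (0), temperature, rainfall]
match_a = [1, 18.0, 5.0]
match_b = [1, 30.0, 0.0]

# While matches are played outdoors, the weather attributes are given real weight.
weights_outdoor = np.array([1.0, 0.5, 0.8])
print(weighted_distance(match_a, match_b, weights_outdoor))

# If all matches move indoors, the weather attributes can be down-weighted (here
# to zero), so the similarity is driven by the remaining attributes alone.
weights_indoor = np.array([1.0, 0.0, 0.0])
print(weighted_distance(match_a, match_b, weights_indoor))

As more training data arrives, such weights could be re-estimated so that the neighbors selected for classification reflect the attributes that currently matter.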