K-Nearest Neighbor(KNN) Algorithm for Machine Learning

  • K-Nearest Neighbour is one of the simplest Machine Learning algorithms, based on the Supervised Learning technique.
  • The K-NN algorithm assumes similarity between the new case/data and the available cases and puts the new case into the category that is most similar to the available categories.
  • The K-NN algorithm stores all the available data and classifies a new data point based on similarity. This means that when new data appears, it can be easily classified into a well-suited category using the K-NN algorithm.
  • The K-NN algorithm can be used for Regression as well as Classification, but it is mostly used for Classification problems.
  • K-NN is a non-parametric algorithm, which means it does not make any assumptions about the underlying data.
  • It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead, it stores the dataset and performs the actual computation at the time of classification.
  • At the training phase, the KNN algorithm just stores the dataset, and when it gets new data, it classifies that data into the category most similar to the new data.
  • Example: Suppose we have an image of a creature that looks similar to both a cat and a dog, but we want to know whether it is a cat or a dog. For this identification, we can use the KNN algorithm, as it works on a similarity measure. Our KNN model will find the features of the new image that are similar to the cat and dog images and, based on the most similar features, will put it in either the cat or the dog category.

Why do we need a K-NN Algorithm?

Suppose there are two categories, i.e., Category A and Category B, and we have a new data point x1. Which of these categories will this data point lie in? To solve this type of problem, we need a K-NN algorithm. With the help of K-NN, we can easily identify the category or class of a particular data point. Consider the diagram below:

[Figure: a new data point x1 plotted between Category A and Category B]

How does K-NN work?

The K-NN working can be explained on the basis of the below algorithm:

  • Step-1: Select the number K of the neighbors
  • Step-2: Calculate the Euclidean distance from the new data point to the other data points.
  • Step-3: Take the K nearest neighbors as per the calculated Euclidean distance.
  • Step-4: Among these K neighbors, count the number of data points in each category.
  • Step-5: Assign the new data point to the category for which the number of neighbors is maximum.
  • Step-6: Our model is ready.

Suppose we have a new data point that we need to put in the required category. Consider the image below:

[Figure: the new data point plotted among the existing Category A and Category B points]

  • Firstly, we will choose the number of neighbors; here we will choose k=5.
  • Next, we will calculate the Euclidean distance between the data points. The Euclidean distance is the distance between two points, which we have already studied in geometry. It can be calculated as:

Euclidean distance between A(x1, y1) and B(x2, y2) = sqrt((x2 - x1)^2 + (y2 - y1)^2)

  • By calculating the Euclidean distance, we got the nearest neighbors: three nearest neighbors in Category A and two nearest neighbors in Category B. Consider the image below:

[Figure: the 5 nearest neighbors of the new point, 3 in Category A and 2 in Category B]

  • As we can see, 3 of the 5 nearest neighbors are from Category A; hence this new data point must belong to Category A.
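
To make these steps concrete, here is a minimal from-scratch sketch of the same procedure in Python. It is an addition to the original tutorial; the names knn_predict, train_points, and train_labels are illustrative.

  # Minimal K-NN classification sketch (illustrative addition, not the tutorial's code)
  import numpy as nm
  from collections import Counter

  def knn_predict(train_points, train_labels, new_point, k=5):
      # Step-2: Euclidean distance from the new point to every stored point
      distances = nm.sqrt(((train_points - new_point) ** 2).sum(axis=1))
      # Step-3: indices of the K nearest neighbors
      nearest = nm.argsort(distances)[:k]
      # Step-4 and Step-5: count the categories among the neighbors and take the majority
      votes = Counter(train_labels[i] for i in nearest)
      return votes.most_common(1)[0][0]

  # Toy data mirroring the example above: three points in Category A, two in Category B
  train_points = nm.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5], [6.0, 6.0], [6.5, 5.5]])
  train_labels = nm.array(['A', 'A', 'A', 'B', 'B'])
  print(knn_predict(train_points, train_labels, nm.array([2.0, 2.0]), k=5))  # prints 'A'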

How to select the value of K in the K-NN Algorithm?

The following are a few points to remember while selecting the value of K in the K-NN algorithm:

  • There is no particular way to determine the best value for "K", so we have to try several values to find the best one (a simple way of comparing candidate values is sketched after this list). The most preferred value for K is 5.
  • A very low value for K, such as K=1 or K=2, can be noisy and lead to the effects of outliers in the model.
  • Large values for K are good, but they may run into some difficulties.
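
As an illustration of trying several values of K, the sketch below (an addition to the original tutorial) compares candidate values with 5-fold cross-validation from scikit-learn. It assumes x_train and y_train have already been prepared and scaled as in the implementation section later on this page.

  # Comparing candidate K values with cross-validation (illustrative sketch)
  from sklearn.model_selection import cross_val_score
  from sklearn.neighbors import KNeighborsClassifier

  for k in [1, 3, 5, 7, 9, 11]:
      knn = KNeighborsClassifier(n_neighbors=k)
      scores = cross_val_score(knn, x_train, y_train, cv=5)  # assumes x_train, y_train exist
      print(k, scores.mean())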

Advantages of KNN Algorithm:

  • It is simple to implement.
  • It is robust to noisy training data.
  • It can be more effective if the training data is large.

Disadvantages of KNN Algorithm:

  • We always need to determine the value of K, which may be complex at times.
  • The computation cost is high because the distance to all the training samples must be calculated for every prediction.

Python implementation of the KNN algorithm

To implement the K-NN algorithm in Python, we will use the same problem and dataset that we used in Logistic Regression, but here we will improve the performance of the model. Below is the problem description:

Problem for K-NN Algorithm: There is a car manufacturer company that has manufactured a new SUV car. The company wants to show advertisements to the users who are interested in buying that SUV. For this problem, we have a dataset that contains information about various users, collected from a social network. The dataset contains a lot of information, but we will consider Estimated Salary and Age as the independent variables and Purchased as the dependent variable. Below is the dataset:

[Figure: sample of the user_data.csv dataset with Age, Estimated Salary, and Purchased columns]

Steps to implement the K-NN algorithm:

  • Data Pre-processing step
  • Fitting the K-NN algorithm to the Training set
  • Predicting the test result
  • Test the accuracy of the result (creation of the Confusion matrix)
  • Visualizing the test set result.

Data Pre-Processing Step:

The Data Pre-processing step will remain exactly the same as in Logistic Regression. Below is the code for it:

  # importing libraries
  import numpy as nm
  import matplotlib.pyplot as mtp
  import pandas as pd

  # importing the dataset
  data_set = pd.read_csv('user_data.csv')

  # extracting the independent and dependent variables
  x = data_set.iloc[:, [2, 3]].values
  y = data_set.iloc[:, 4].values

  # splitting the dataset into training and test set
  from sklearn.model_selection import train_test_split
  x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)

  # feature scaling
  from sklearn.preprocessing import StandardScaler
  st_x = StandardScaler()
  x_train = st_x.fit_transform(x_train)
  x_test = st_x.transform(x_test)

By executing the above code, our dataset is imported into our program and properly pre-processed. After feature scaling, our test dataset will look like this:

[Output: the feature-scaled test set values]

From the above output image, we can see that our data is successfully scaled.

Fitting K-NN classifier to the Training data:

Now we will fit the K-NN classifier to the training data. To do this, we will import the KNeighborsClassifier class from the sklearn.neighbors library. After importing the class, we will create the classifier object of the class. The parameters of this class are:

  • n_neighbors: To define the required number of neighbors for the algorithm. Usually, it takes 5.
  • metric='minkowski': This is the default parameter and it decides the distance between the points.
  • p=2: It is equivalent to the standard Euclidean metric.
  • We will then fit the classifier to the training data. Below is the code for it:
  # Fitting the K-NN classifier to the training set
  from sklearn.neighbors import KNeighborsClassifier
  classifier = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
  classifier.fit(x_train, y_train)

Output: By executing the above code, we will get the output as:

Out[10]: 
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
                     metric_params=None, n_jobs=None, n_neighbors=5, p=2,
                     weights='uniform')
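
As a small aside (not part of the original tutorial), the fitted classifier can already be used to predict a single new user. The Age and Estimated Salary values below are made up for illustration, and the raw values must be scaled with the same StandardScaler fitted earlier:

  # Illustrative only: predict whether a hypothetical user (Age 30, Salary 87000) would purchase
  new_user = st_x.transform([[30, 87000]])
  print(classifier.predict(new_user))  # 0 = will not purchase, 1 = will purchase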
  • Predicting the Test Result: To predict the test set result, we will create a y_pred vector as we did in Logistic Regression. Below is the code for it:
  # Predicting the test set result
  y_pred = classifier.predict(x_test)

Output:

The output for the above code will be:

[Output: the predicted values y_pred for the test set]

  • Creating the Confusion Matrix:
    Now we will create the Confusion Matrix for our K-NN model to see the accuracy of the classifier. Below is the code for it:
  # Creating the Confusion matrix
  from sklearn.metrics import confusion_matrix
  cm = confusion_matrix(y_test, y_pred)

In the above code, we have imported the confusion_matrix function and stored its result in the variable cm.

Output: By executing the above code, we will get the matrix as below:

[Output: confusion matrix for the K-NN model on the test set]

In the above output, we can see there are 64 + 29 = 93 correct predictions and 3 + 4 = 7 incorrect predictions, whereas in Logistic Regression there were 11 incorrect predictions. So we can say that the performance of the model is improved by using the K-NN algorithm.
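
If you also want the accuracy as a single number, it can be computed directly from y_test and y_pred; this short snippet is an addition to the tutorial:

  # Accuracy as a single number: (64 + 29) / 100 = 0.93 for this split
  from sklearn.metrics import accuracy_score
  print(accuracy_score(y_test, y_pred))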

  • Visualizing the Training set result:
    Now we will visualize the training set result for the K-NN model. The code will remain the same as in Logistic Regression, except for the name of the graph. Below is the code for it:
  # Visualizing the training set result
  from matplotlib.colors import ListedColormap
  x_set, y_set = x_train, y_train
  x1, x2 = nm.meshgrid(nm.arange(start=x_set[:, 0].min() - 1, stop=x_set[:, 0].max() + 1, step=0.01),
                       nm.arange(start=x_set[:, 1].min() - 1, stop=x_set[:, 1].max() + 1, step=0.01))
  mtp.contourf(x1, x2, classifier.predict(nm.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
               alpha=0.75, cmap=ListedColormap(('red', 'green')))
  mtp.xlim(x1.min(), x1.max())
  mtp.ylim(x2.min(), x2.max())
  for i, j in enumerate(nm.unique(y_set)):
      mtp.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
                  c=ListedColormap(('red', 'green'))(i), label=j)
  mtp.title('K-NN Algorithm (Training set)')
  mtp.xlabel('Age')
  mtp.ylabel('Estimated Salary')
  mtp.legend()
  mtp.show()

Output:

By executing the above code, we will get the below graph:

[Figure: K-NN classification regions for the training set, red for not Purchased (0) and green for Purchased (1)]

The output graph is different from the graph we obtained in Logistic Regression. It can be understood through the following points:

  • As we can see, the graph shows red points and green points. The green points are for the Purchased (1) variable and the red points for the not Purchased (0) variable.
  • The graph shows an irregular boundary instead of a straight line or a curve, because it is the K-NN algorithm, i.e., finding the nearest neighbors.
  • The graph has classified the users into the correct categories, as most of the users who did not buy the SUV are in the red region and the users who bought the SUV are in the green region.
  • The graph shows a good result, yet there are still some green points in the red region and some red points in the green region. This is not a big issue, as it prevents the model from overfitting.
  • Hence our model is well trained.

Visualizing the Test set result:

After training the model, we will now test the result using the test dataset. The code remains the same except for some minor changes: x_train and y_train are replaced by x_test and y_test.

The following is the code for it:

  # Visualizing the test set result
  from matplotlib.colors import ListedColormap
  x_set, y_set = x_test, y_test
  x1, x2 = nm.meshgrid(nm.arange(start=x_set[:, 0].min() - 1, stop=x_set[:, 0].max() + 1, step=0.01),
                       nm.arange(start=x_set[:, 1].min() - 1, stop=x_set[:, 1].max() + 1, step=0.01))
  mtp.contourf(x1, x2, classifier.predict(nm.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
               alpha=0.75, cmap=ListedColormap(('red', 'green')))
  mtp.xlim(x1.min(), x1.max())
  mtp.ylim(x2.min(), x2.max())
  for i, j in enumerate(nm.unique(y_set)):
      mtp.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
                  c=ListedColormap(('red', 'green'))(i), label=j)
  mtp.title('K-NN Algorithm (Test set)')
  mtp.xlabel('Age')
  mtp.ylabel('Estimated Salary')
  mtp.legend()
  mtp.show()

Output:

[Figure: K-NN classification regions for the test set]

The above graph shows the output for the test dataset. As we can see in the graph, the predicted output is quite good, as most of the red points are in the red region and most of the green points are in the green region.

However, there are a few green points in the red region and a couple of red points in the green region. These are the incorrect predictions that we observed in the confusion matrix (7 incorrect predictions).
