machine-learning

SVM

Difference between logistic regression and SVM

Decision boundary when we classify using logistic regression:

Decision boundary when we classify using SVM:

Classification using SVM

As can be observed, the SVM tries to maintain a ‘gap’ (margin) on either side of the decision boundary. This proves helpful when we encounter new data.

With new data:

Logistic regression performs poorly (the new red circle is classified as blue):

New data (red circle) with logistic regression's decision boundary

The SVM, however, classifies it correctly (the new red circle falls on the red side):

The new red circle is classified correctly by the SVM
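
The behaviour in the figures can be reproduced in code. The sketch below uses an assumed toy dataset and fits both classifiers on it; the exact predictions, and whether the two models actually disagree on a given new point, depend on the data.

from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Illustrative 2-D training data, one cluster per class (assumed for this sketch)
X = [[1, 1], [2, 1], [1, 2], [6, 5], [7, 6], [6, 7]]
y = [0, 0, 0, 1, 1, 1]

log_reg = LogisticRegression().fit(X, y)
svm_clf = SVC(kernel='linear').fit(X, y)  # linear kernel gives a straight decision boundary

# A new point lying between the two clusters; the SVM's max-margin boundary
# keeps roughly equal distance to both classes, which is what the figures show.
new_point = [[4, 3]]
print("Logistic regression prediction:", log_reg.predict(new_point))
print("SVM prediction:", svm_clf.predict(new_point))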

Implementing an SVM classifier using scikit-learn:

from sklearn import svm
X = [[1, 2], [3, 4]]  # Training samples
y = [1, 2]            # Class labels
model = svm.SVC()     # Create a support vector classifier
model.fit(X, y)       # Fit the model to the training data

model.predict([[2, 3]])  # After fitting, new data can be classified with predict()
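
The fitted classifier also exposes decision_function(), which returns the signed distance of a sample from the decision boundary; this gives a feel for the margin discussed above (the values shown depend on the tiny training set used here):

print(model.predict([[2, 3]]))            # Predicted class label for the new sample
print(model.decision_function([[2, 3]]))  # Signed distance from the separating boundary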
