Linear SVM

Support Vector Machines (SVMs) are supervised learning models that analyze data for classification and regression tasks.

The linear kernel version of the SVM (the Linear SVM model) aims to find the best line or hyperplane separating the training data. To this end, the N x M training examples (N examples, each with M features) are mapped as points in an M-dimensional space. The SVM algorithm then identifies the training points closest to the separating hyperplane; these points are called support vectors.

The algorithm then searches for the hyperplane that maximizes the distance between the plane and the support vectors. This distance is called the margin, and the hyperplane for which the margin is maximum is the optimal hyperplane.

In other words, the algorithm traces a line that maximizes the width of the gap between the given categories. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
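As an illustrative sketch of this idea (the document does not name a specific library; scikit-learn's `SVC` with a linear kernel is assumed here), the support vectors and the separating hyperplane can be inspected after fitting:

```python
from sklearn.svm import SVC
import numpy as np

# Two linearly separable categories mapped as points in a 2-D feature space
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0],
              [3.0, 3.0], [4.0, 4.0], [3.0, 4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear")  # linear-kernel SVM
clf.fit(X, y)

# The training points closest to the separating hyperplane (support vectors)
print(clf.support_vectors_)
# Hyperplane coefficients: w.x + b = 0
print(clf.coef_, clf.intercept_)
```

New points are then classified by the side of this hyperplane they fall on, exactly as described above.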

The SVM implementation supports both single-label and multi-label classification problems.

Linear SVM is a highly efficient implementation of the Custom Kernel SVM model with a linear kernel. It tends to perform well in many different scenarios, from small to large datasets and from balanced to unbalanced classes (a dedicated parameter, Class weight, can be set to improve the handling of unbalanced samples).
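A minimal sketch of class weighting, assuming scikit-learn's `SVC` (the doc's Class weight parameter is assumed to correspond to `class_weight` here):

```python
from sklearn.svm import SVC
import numpy as np

rng = np.random.default_rng(0)
# Unbalanced training set: 90 samples of class 0, only 10 of class 1
X = np.vstack([rng.normal(0.0, 1.0, (90, 2)),
               rng.normal(3.0, 1.0, (10, 2))])
y = np.array([0] * 90 + [1] * 10)

# "balanced" reweights each class inversely to its frequency,
# so the minority class is not drowned out by the majority class
clf = SVC(kernel="linear", class_weight="balanced")
clf.fit(X, y)
print(clf.predict([[3.0, 3.0]]))
```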

It can therefore be considered a good "default option". However, if the regularization parameter C is not set correctly, the model may strongly overfit (especially with large C values) or underfit (with C values smaller than the ideal one).

Prediction scores can span from -∞ to +∞. Since they are not normalized into probabilities, further post-processing or thresholding may be difficult to implement.
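The unbounded nature of these scores can be seen directly (a small sketch, again assuming scikit-learn's `SVC`):

```python
from sklearn.svm import SVC
import numpy as np

X = np.array([[0.0], [1.0], [4.0], [5.0]])
y = np.array([0, 0, 1, 1])
clf = SVC(kernel="linear").fit(X, y)

# Raw decision scores grow without bound as points move away from the
# hyperplane; they are signed distances (up to scaling), not probabilities
print(clf.decision_function([[2.5], [100.0], [-100.0]]))
```

Libraries such as scikit-learn can optionally calibrate these scores into probabilities (e.g. via Platt scaling with `probability=True`), but the raw scores themselves remain unbounded.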