SVM stands for support vector machine. An SVM is a supervised machine learning algorithm used to analyze and recognize patterns in data for classification. The algorithm is “supervised” in the sense that we train the SVM on a set of known, labeled examples, and then apply the trained model to predict labels for unknown data (as opposed to trying to find hidden structure using only unlabeled data). SVMs are especially useful when patterns are not obvious, or depend on so many variables that we can’t mentally process them all!
Consider the following example problem: Given shoe size and height, predict a person’s sex. Sex here is a binary label: each person is either male or female. Both shoe size and height are continuous numeric features that our SVM can use. Suppose we are given a dataset consisting of a table of people with their corresponding shoe size, height, and sex. We can train our SVM on this dataset to predict whether a new person, given their shoe size and height, is male or female.
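To make this concrete, here is a minimal sketch of that train-then-predict loop in plain Python: a linear SVM fit by subgradient descent on the hinge loss, a bare-bones stand-in for a real SVM solver (such as scikit-learn’s `SVC`). The shoe-size/height numbers and the hyperparameters are made up for illustration, not real data.

```python
# Hypothetical labeled data: (shoe size US, height cm, label).
# Labels: +1 = male, -1 = female. Illustrative numbers, not a real dataset.
DATA = [
    (11.0, 183.0, +1), (10.5, 178.0, +1), (12.0, 188.0, +1), (10.0, 180.0, +1),
    (7.0, 163.0, -1), (6.5, 158.0, -1), (8.0, 168.0, -1), (7.5, 165.0, -1),
]

def stats(rows):
    """Per-feature mean and standard deviation, used to scale the features."""
    n = len(rows)
    means, stds = [], []
    for i in (0, 1):
        m = sum(r[i] for r in rows) / n
        s = (sum((r[i] - m) ** 2 for r in rows) / n) ** 0.5
        means.append(m)
        stds.append(s)
    return means, stds

def train(rows, epochs=200, lr=0.01, lam=0.01):
    """Toy linear SVM: subgradient descent on the regularized hinge loss."""
    means, stds = stats(rows)
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for shoe, height, y in rows:
            x = [(shoe - means[0]) / stds[0], (height - means[1]) / stds[1]]
            if y * (w[0] * x[0] + w[1] * x[1] + b) < 1:
                # Point is inside the margin (or misclassified): push w toward it.
                w = [w[j] + lr * (y * x[j] - lam * w[j]) for j in (0, 1)]
                b += lr * y
            else:
                # Point is safely classified: only the regularizer shrinks w.
                w = [w[j] * (1 - lr * lam) for j in (0, 1)]
    return w, b, means, stds

def predict(model, shoe, height):
    """Classify a new person by the sign of w.x + b."""
    w, b, means, stds = model
    x = [(shoe - means[0]) / stds[0], (height - means[1]) / stds[1]]
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

model = train(DATA)
print(predict(model, 11.5, 185.0))  # expect +1: well inside the "male" cluster
print(predict(model, 6.0, 160.0))   # expect -1: well inside the "female" cluster
```

In practice you would reach for a library solver rather than hand-rolling this loop, but the shape of the workflow is the same: fit on labeled rows, then call a predict function on unseen feature values.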
In this particular example, each datapoint, or person, is viewed as a 3D vector made up of shoe size, height, and sex. We want to know whether we can separate the two sexes by drawing a line when we plot shoe size vs. height on a 2D plane. As you can imagine, there are many lines that might classify the data: the line might be straight, curved, etc. There are many choices of kernel for an SVM depending on how you want to draw this line.
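A quick sketch of why kernels let us draw curved boundaries: a kernel computes an inner product in a higher-dimensional feature space without ever constructing that space (the “kernel trick”). For example, the degree-2 polynomial kernel (x·z)² gives exactly the same value as an ordinary dot product taken after the explicit feature map φ(x) = (x₁², √2·x₁x₂, x₂²), so a linear separator in that 3D space is a curved separator in the original 2D plane:

```python
import math

def poly2_kernel(x, z):
    """Degree-2 polynomial kernel: K(x, z) = (x . z)^2, computed in 2D."""
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    """Explicit feature map whose ordinary dot product matches poly2_kernel."""
    return [x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x, z = [3.0, 1.0], [2.0, 4.0]
print(poly2_kernel(x, z))   # (3*2 + 1*4)^2 = 100.0
print(dot(phi(x), phi(z)))  # same value, via the mapped 3D vectors
```

The kernel evaluates the 3D dot product while only ever touching the 2D inputs; with richer kernels (e.g. RBF) the implicit space can be far larger, which is what makes curvy decision boundaries cheap.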
Expanding beyond our simple 3D vector and 2D plane example, you can imagine how SVMs can be similarly applied in higher dimensions with many more features (beyond shoe size and height, we might consider hair length, etc.). Instead of a line, the separator becomes a hyperplane (though this may be hard to picture in our heads). And although our example was a binary classification, there are additional tricks, such as support vector regression, that extend SVMs to continuous predictions.
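The jump to higher dimensions changes nothing about the decision rule itself: it is still the sign of w·x + b, just with longer vectors. A toy sketch with four hypothetical features (the weights below are illustrative placeholders, not a trained model):

```python
def decision(w, b, x):
    """Signed score of the hyperplane rule w.x + b; its sign is the class."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Hypothetical 4-feature person: shoe size, height, hair length, ring size.
# Weights and bias are made up for illustration only.
w = [0.8, 0.05, -0.3, 0.1]
b = -9.0
x = [11.0, 183.0, 5.0, 2.0]
label = 1 if decision(w, b, x) >= 0 else -1
print(label)
```

Geometrically, w is the normal vector of the hyperplane, so adding features just adds components to w; the training problem stays the same shape regardless of dimension.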