USA - 2008
For the development of Support Vector Machines, a highly effective algorithm for classification and related machine learning problems.
In their 1995 paper "Support-Vector Networks," Cortes and Vapnik introduced support vector machines (SVMs).
In typical computational learning problems, the designer first provides training data by labeling examples as positive or negative. The data, for instance, can be profiles of typical customers, each modeled as a point in a high-dimensional geometric space, where each dimension corresponds to a customer attribute such as age.
The label for each data point can record, for instance, whether the corresponding customer likes or dislikes a particular product. The learning algorithm then computes a decision boundary that separates the positive examples from the negative ones, so that in the future this boundary can be used to predict whether a new customer will like or dislike the product based on her profile. Prior to the publication of this paper, this method could be applied only when a decision boundary specifiable by a hyperplane (that is, a linear constraint over the attribute variables) was guaranteed to exist for the training data.
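A hyperplane decision boundary of this kind can be sketched in a few lines. The customer attributes, weights, and bias below are hypothetical values chosen purely for illustration:

```python
import numpy as np

# Hypothetical customer profiles: each row is (age, income) in arbitrary units.
profiles = np.array([
    [25.0, 30.0],   # labeled +1 (likes the product)
    [30.0, 45.0],   # labeled +1
    [55.0, 20.0],   # labeled -1 (dislikes the product)
    [60.0, 25.0],   # labeled -1
])
labels = np.array([1, 1, -1, -1])

# A linear decision boundary is a hyperplane w.x + b = 0;
# the predicted label is the sign of w.x + b.
w = np.array([-1.0, 1.0])   # weights chosen by hand for this toy example
b = 10.0

predictions = np.sign(profiles @ w + b)
print(predictions)  # -> [ 1.  1. -1. -1.], matching the labels
```

For these four points such a separating hyperplane exists; the limitation the paper addressed is that, in general, none may exist.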
Unfortunately, this is rarely the case for real-world training data, which typically contains errors. The paper by Cortes and Vapnik revolutionized the field by providing, for the first time, an accurate algorithm that converges in polynomial time even when the training data is not separable by a hyperplane.
This is achieved by mapping the training data points to a higher-dimensional space and then determining a hyperplane that minimizes the error on the training set while maximizing the distance, or margin, between the positive and negative examples. The margin-maximizing solution is obtained by solving a quadratic optimization problem, and works well in practice.
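The objective just described can be illustrated with a simplified sketch. Cortes and Vapnik solve a quadratic program (and use a kernel mapping to a higher-dimensional space, omitted here); the stand-in below minimizes the equivalent regularized hinge-loss objective for a linear soft-margin SVM by subgradient descent, on synthetic data with a deliberately mislabeled outlier:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimize (lam/2)||w||^2 + (1/n) sum_i max(0, 1 - y_i(w.x_i + b))
    by subgradient descent -- a simplified stand-in for the paper's
    quadratic-programming formulation."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                      # points violating the margin
        grad_w = lam * w - (y[mask] @ X[mask]) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Noisy, nearly separable toy data; the soft margin tolerates the outlier.
rng = np.random.default_rng(0)
X_pos = rng.normal([2, 2], 0.5, size=(20, 2))
X_neg = rng.normal([-2, -2], 0.5, size=(20, 2))
X = np.vstack([X_pos, X_neg, [[-2.0, -2.0]]])  # last point: mislabeled outlier
y = np.array([1] * 20 + [-1] * 20 + [1])

w, b = train_linear_svm(X, y)
accuracy = (np.sign(X @ w + b) == y).mean()
print(f"training accuracy: {accuracy:.2f}")    # high, despite the outlier
```

Because the hinge loss penalizes, rather than forbids, margin violations, training succeeds even though no hyperplane classifies every point correctly.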
With this algorithm, the machine learning community was armed with a general technique that could be used in noisy real-world scenarios. The theoretical foundations articulated in this paper have inspired extensive research making SVM one of the most frequently used algorithms in the machine learning literature. The algorithm and its variants have been used successfully in a wide variety of practical applications such as handwriting recognition, speech synthesis, medical diagnosis, protein structure prediction, face detection, weather forecasting, intrusion detection, and text mining.