Sequential support vector classifiers and regression

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Support Vector Machines (SVMs) map the input training data into a high-dimensional feature space and find a maximal-margin hyperplane separating the data in that feature space. Extensions of this approach account for non-separable or noisy training data (soft classifiers) as well as support-vector-based regression. The optimal hyperplane is usually found by solving a quadratic programming problem, which is quite complex, time consuming and prone to numerical instabilities. In this work, we introduce a sequential gradient-ascent-based algorithm for a fast and simple implementation of the SVM for classification with soft classifiers. The fundamental idea is similar to applying the Adatron algorithm to the SVM, as developed independently in the Kernel-Adatron [7], although the details differ in many respects. We modify the formulation of the bias and consider a modified dual optimization problem. This formulation makes it possible to extend the framework to SVM regression in an online setting. This paper presents theoretical justifications of the algorithm, which is shown to converge robustly to the optimal solution in few iterations, is orders of magnitude faster than conventional SVM solutions, and is extremely simple to implement even for large problems. Experimental evaluations on benchmark classification problems of sonar data and the USPS and MNIST databases substantiate the speed and robustness of the learning procedure.
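The paper itself gives the exact update rules; the following is only a minimal sketch of the general Kernel-Adatron-style idea the abstract refers to: sequential gradient ascent on the dual SVM objective with box clipping of each multiplier for the soft margin. The RBF kernel, learning rate, and function names are illustrative assumptions, and the bias term is omitted for brevity.

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=0.5):
    # Gaussian RBF kernel; kernel choice and gamma are illustrative assumptions
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

def sequential_svm_train(X, y, C=10.0, lr=0.1, n_epochs=100, gamma=0.5):
    """Kernel-Adatron-style sequential gradient ascent on the dual SVM objective.

    X: (n, d) training inputs, y: (n,) labels in {-1, +1}.
    The soft margin is handled by clipping each multiplier to [0, C].
    This is a sketch of the general technique, not the paper's exact update rule.
    """
    n = X.shape[0]
    K = np.array([[rbf_kernel(X[i], X[j], gamma) for j in range(n)] for i in range(n)])
    alpha = np.zeros(n)
    for _ in range(n_epochs):
        for i in range(n):
            # margin of pattern i under the current multipliers
            z_i = y[i] * np.sum(alpha * y * K[:, i])
            # gradient ascent step on the dual objective, then box-constraint clipping
            alpha[i] = np.clip(alpha[i] + lr * (1.0 - z_i), 0.0, C)
    return alpha, K

def sequential_svm_predict(alpha, X_train, y_train, x, gamma=0.5):
    # Decision value for a new input x (bias term omitted for simplicity)
    k = np.array([rbf_kernel(xi, x, gamma) for xi in X_train])
    return np.sign(np.sum(alpha * y_train * k))
```

Because each update touches a single multiplier and uses only one row of the kernel matrix, the loop is trivially implemented in an online fashion, which is the property the abstract exploits for the regression extension.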
Original language: English
Title of host publication: Proc. International Conference on Soft Computing (SOCO'99), Genoa, Italy
Number of pages: 10
Publication status: Published - 1999
