High-Performance Parallel Support Vector Machine Training

Kristian Woodsend, Jacek Gondzio

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Support vector machines are a powerful machine learning technology, but the training process involves a dense quadratic optimization problem and is computationally expensive. We show how the problem can be reformulated to become suitable for high-performance parallel computing. In our algorithm, data is preprocessed in parallel to generate an approximate low-rank Cholesky decomposition. Our optimization solver then exploits the problem's structure to perform many linear algebra operations in parallel, with relatively low data transfer between processors, resulting in excellent parallel efficiency for very-large-scale problems.
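The low-rank preprocessing step described in the abstract can be illustrated with a pivoted partial Cholesky decomposition of a kernel matrix. The sketch below is a hypothetical illustration, not the authors' implementation: the kernel choice (RBF), data, and rank are assumptions made for the example. It shows how a dense n×n kernel matrix K can be approximated by a tall, thin factor L with K ≈ LLᵀ, which is what lets a structure-exploiting solver replace dense operations with cheap low-rank linear algebra.

```python
# Hypothetical sketch (not the authors' code): approximate a dense RBF
# kernel matrix K by a rank-r factor L with K ~ L @ L.T, using a greedy
# pivoted partial Cholesky decomposition.
import numpy as np


def rbf_kernel(X, Y, gamma=0.5):
    """Dense RBF (Gaussian) kernel matrix between rows of X and rows of Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)


def partial_cholesky(K, rank, tol=1e-10):
    """Pivoted partial Cholesky: returns L (n x r) such that K ~ L @ L.T.

    At each step the pivot is the largest remaining residual diagonal
    entry, so the factorization greedily captures the dominant part of K.
    """
    n = K.shape[0]
    L = np.zeros((n, rank))
    d = np.diag(K).copy()                      # residual diagonal of K - L L^T
    for j in range(rank):
        p = int(np.argmax(d))                  # pivot index
        if d[p] < tol:                         # residual negligible: stop early
            return L[:, :j]
        L[:, j] = (K[:, p] - L @ L[p]) / np.sqrt(d[p])
        d -= L[:, j] ** 2
    return L


# Assumed toy data: 200 points in 2-D, approximated at rank 50.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
K = rbf_kernel(X, X)
L = partial_cholesky(K, rank=50)
rel_err = np.linalg.norm(K - L @ L.T) / np.linalg.norm(K)
```

With a smooth kernel, the spectrum of K decays quickly, so a rank far below n already gives a small relative error; the paper's parallel setting distributes this preprocessing and the subsequent low-rank solves across processors.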

Original language: English
Title of host publication: Parallel Scientific Computing and Optimization: Advances and Applications
Editors: R. Ciegis, D. Henty, B. Kagstrom, J. Zilinskas
Place of publication: New York
Publisher: Springer
Pages: 83-92
Number of pages: 10
ISBN (Print): 978-0-387-09706-0
Publication status: Published - 2009

Keywords

  • QUADRATIC PROGRAMS
  • SOLVER
