Support vector machines are a powerful machine learning technology, but the training process involves a dense quadratic optimization problem and is computationally expensive. We show how the problem can be reformulated to become suitable for high-performance parallel computing. In our algorithm, data is preprocessed in parallel to generate an approximate low-rank Cholesky decomposition. Our optimization solver then exploits the problem's structure to perform many linear algebra operations in parallel, with relatively low data transfer between processors, resulting in excellent parallel efficiency for very-large-scale problems.
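The abstract does not give the authors' implementation, but the preprocessing step it describes — an approximate low-rank Cholesky decomposition of the kernel (Gram) matrix — can be sketched as a greedy pivoted partial Cholesky. The function name and the NumPy formulation below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def partial_cholesky(K, rank, tol=1e-8):
    """Greedy pivoted partial Cholesky: K ~= L @ L.T with L of shape (n, rank).

    Illustrative sketch only; K must be symmetric positive semidefinite.
    """
    n = K.shape[0]
    L = np.zeros((n, rank))
    d = np.diag(K).astype(float).copy()   # residual diagonal of K - L @ L.T
    for j in range(rank):
        p = int(np.argmax(d))             # pivot: largest residual diagonal entry
        if d[p] < tol:
            return L[:, :j]               # residual negligible; stop early
        # New column: residual of pivot column, scaled by sqrt of pivot value
        L[:, j] = (K[:, p] - L @ L[p, :]) / np.sqrt(d[p])
        d -= L[:, j] ** 2                 # update residual diagonal
    return L
```

With `rank` much smaller than the number of training points, the factor `L` replaces the dense kernel matrix in the downstream quadratic program, which is what makes the linear algebra amenable to parallel distribution.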
Title of host publication: Parallel Scientific Computing and Optimization: Advances and Applications
Editors: R. Ciegis, D. Henty, B. Kagstrom, J. Zilinskas
Place of publication: New York
Number of pages: 10
Publication status: Published - 2009
Keywords: quadratic programs