The doubly regularized support vector machine

Li Wang, Ji Zhu, Hui Zou

Research output: Contribution to journal › Article › peer-review


Abstract

The standard L2-norm support vector machine (SVM) is a widely used tool for classification problems. The L1-norm SVM is a variant of the standard L2-norm SVM that constrains the L1-norm of the fitted coefficients. Due to the nature of the L1-norm, the L1-norm SVM has the property of automatically selecting variables, not shared by the standard L2-norm SVM. It has been argued that the L1-norm SVM may have some advantage over the L2-norm SVM, especially for high-dimensional problems and when there are redundant noise variables. On the other hand, the L1-norm SVM has two drawbacks: (1) when there are several highly correlated variables, the L1-norm SVM tends to pick only a few of them and remove the rest; (2) the number of selected variables is upper bounded by the size of the training data. A typical example where both issues arise is gene microarray analysis. In this paper, we propose a doubly regularized support vector machine (DrSVM). The DrSVM uses the elastic-net penalty, a mixture of the L2-norm and the L1-norm penalties. By doing so, the DrSVM performs automatic variable selection in a way similar to the L1-norm SVM. In addition, the DrSVM encourages highly correlated variables to be selected (or removed) together. We illustrate how the DrSVM can be particularly useful when the number of variables is much larger than the size of the training data (p ≫ n). We also develop efficient algorithms to compute the whole solution paths of the DrSVM.
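Stated concretely (a sketch of the formulation as the abstract describes it), the DrSVM combines the SVM hinge loss with both penalties:

    \min_{\beta_0, \beta} \; \sum_{i=1}^{n} \left[ 1 - y_i (\beta_0 + x_i^T \beta) \right]_+ \; + \; \frac{\lambda_2}{2} \|\beta\|_2^2 \; + \; \lambda_1 \|\beta\|_1

where [·]_+ denotes the hinge loss, the λ1 term induces sparsity (variable selection), and the λ2 term yields the grouping effect for correlated variables.

As a rough illustration of the idea (not the authors' solution-path algorithm; the data dimensions and tuning values below are made up), an elastic-net-penalized hinge-loss classifier can be fit with scikit-learn's SGDClassifier, whose alpha and l1_ratio jointly play the role of (λ1, λ2):

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # Synthetic p >> n data (dimensions chosen for illustration only)
    rng = np.random.default_rng(0)
    n, p = 50, 500
    X = rng.standard_normal((n, p))
    y = np.sign(X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(n))

    # Hinge loss + elastic-net penalty: l1_ratio mixes the L1 and L2 terms
    clf = SGDClassifier(loss="hinge", penalty="elasticnet",
                        alpha=0.1, l1_ratio=0.5, max_iter=5000, tol=1e-4,
                        random_state=0)
    clf.fit(X, y)

    # The L1 component zeroes out coefficients, i.e. performs variable selection
    selected = np.flatnonzero(clf.coef_)
    print(f"{selected.size} of {p} coefficients are nonzero")

Refitting over a grid of alpha and l1_ratio values approximates, pointwise, the two-dimensional solution surface that the paper's path algorithms compute exactly.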

Original language: English (US)
Pages (from-to): 589-615
Number of pages: 27
Journal: Statistica Sinica
Volume: 16
Issue number: 2
State: Published - Apr 1 2006

Keywords

  • Grouping effect
  • Quadratic programming
  • SVM
  • Variable selection
  • p ≫ n
