## Abstract

The standard L₂-norm support vector machine (SVM) is a widely used tool for classification problems. The L₁-norm SVM is a variant of the standard L₂-norm SVM that constrains the L₁-norm of the fitted coefficients. Due to the nature of the L₁-norm, the L₁-norm SVM automatically selects variables, a property not shared by the standard L₂-norm SVM. It has been argued that the L₁-norm SVM may have some advantage over the L₂-norm SVM, especially for high-dimensional problems and when there are redundant noise variables. On the other hand, the L₁-norm SVM has two drawbacks: (1) when there are several highly correlated variables, the L₁-norm SVM tends to pick only a few of them and remove the rest; (2) the number of selected variables is bounded above by the size of the training data. A typical example where both occur is gene microarray analysis. In this paper, we propose a doubly regularized support vector machine (DrSVM). The DrSVM uses the elastic-net penalty, a mixture of the L₂-norm and L₁-norm penalties. As a result, the DrSVM performs automatic variable selection in a way similar to the L₁-norm SVM, and in addition encourages highly correlated variables to be selected (or removed) together. We illustrate how the DrSVM can be particularly useful when the number of variables is much larger than the size of the training data (p ≫ n). We also develop efficient algorithms to compute the whole solution paths of the DrSVM.
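The elastic-net-penalized hinge loss described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's solution-path algorithm: it writes down the DrSVM-style objective (hinge loss plus λ₁·L₁ + (λ₂/2)·L₂² penalties on the coefficients) and minimizes it with plain subgradient descent. The function names, the learning rate, and the optimizer are assumptions for the sketch only.

```python
import numpy as np

def drsvm_objective(beta0, beta, X, y, lam1, lam2):
    """Hinge loss plus an elastic-net penalty on beta:
    sum_i [1 - y_i (beta0 + x_i' beta)]_+  +  lam1 ||beta||_1  +  (lam2/2) ||beta||_2^2."""
    margins = y * (beta0 + X @ beta)
    hinge = np.maximum(0.0, 1.0 - margins).sum()
    return hinge + lam1 * np.abs(beta).sum() + 0.5 * lam2 * beta @ beta

def drsvm_subgradient_fit(X, y, lam1, lam2, lr=0.01, n_iter=2000):
    """Crude subgradient descent on the objective above (illustrative only;
    the paper instead computes the whole solution path efficiently)."""
    n, p = X.shape
    beta0, beta = 0.0, np.zeros(p)
    for _ in range(n_iter):
        margins = y * (beta0 + X @ beta)
        active = margins < 1.0                       # points violating the margin
        g_beta = -(y[active, None] * X[active]).sum(axis=0)
        g_beta += lam2 * beta + lam1 * np.sign(beta)  # elastic-net subgradient
        g_beta0 = -y[active].sum()
        beta -= lr * g_beta
        beta0 -= lr * g_beta0
    return beta0, beta
```

With λ₁ > 0 the L₁ term drives some coefficients to (near) zero, while the λ₂ term is what produces the grouping effect: highly correlated variables receive similar coefficients instead of one being kept and the rest dropped.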

| Original language | English (US) |
|---|---|
| Pages (from-to) | 589-615 |
| Number of pages | 27 |
| Journal | Statistica Sinica |
| Volume | 16 |
| Issue number | 2 |
| State | Published - Apr 1 2006 |

## Keywords

- Grouping effect
- Quadratic programming
- SVM
- Variable selection
- p ≫ n