It is well known that for a given sample size there exists a model of optimal complexity corresponding to the smallest prediction (generalization) error. Hence, any method for learning from finite samples needs to have some provision for complexity control. Existing implementations of complexity control include penalization (or regularization), weight decay (in neural networks), and various greedy procedures (also known as constructive, growing, or pruning methods). There are numerous proposals for determining optimal model complexity (i.e., model selection) based on various (asymptotic) analytic estimates of the prediction risk and on resampling approaches. Nonasymptotic bounds on the prediction risk based on Vapnik-Chervonenkis (VC) theory have been proposed by Vapnik. This paper describes the application of VC-bounds to regression problems with the usual squared loss. An empirical study is performed for settings where the VC-bounds can be rigorously applied, i.e., linear models and penalized linear models, where the VC-dimension can be accurately estimated and the empirical risk can be reliably minimized. Empirical comparisons between model selection using VC-bounds and classical methods are performed for various noise levels, sample sizes, target functions, and types of approximating functions. Our results demonstrate the advantages of VC-based complexity control with finite samples.
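To make the abstract's idea of VC-based complexity control concrete, the sketch below selects the degree of a polynomial (linear-in-parameters) model by minimizing the empirical risk multiplied by a VC penalization factor. It is a minimal illustration, not the paper's procedure: it assumes one common practical form of the VC penalization factor for regression with squared loss (as popularized in Cherkassky and Mulier's Learning from Data), and it assumes the VC-dimension of a linear estimator equals its number of free parameters. The names vc_penalty and select_poly_degree are illustrative only.

```python
import numpy as np

def vc_penalty(h, n):
    """Practical VC penalization factor for regression with squared loss
    (assumed form: 1 / (1 - sqrt(p - p*ln(p) + ln(n)/(2n)))_+ with p = h/n)."""
    p = h / n
    denom = 1.0 - np.sqrt(p - p * np.log(p) + np.log(n) / (2.0 * n))
    return np.inf if denom <= 0 else 1.0 / denom

def select_poly_degree(x, y, max_degree=10):
    """Pick the polynomial degree minimizing the VC-penalized empirical risk.
    For a linear model the VC-dimension is taken as the number of free
    parameters (degree + 1) -- an assumption valid for linear estimators."""
    n = len(y)
    best = None
    for d in range(max_degree + 1):
        h = d + 1                           # assumed VC-dimension of degree-d polynomials
        coeffs = np.polyfit(x, y, d)        # least-squares fit = empirical risk minimization
        resid = y - np.polyval(coeffs, x)
        r_emp = np.mean(resid ** 2)         # empirical (training) risk
        risk_bound = r_emp * vc_penalty(h, n)   # VC-based estimate of prediction risk
        if best is None or risk_bound < best[1]:
            best = (d, risk_bound)
    return best

# Toy usage: noisy sine target, small sample.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(30)
degree, est_risk = select_poly_degree(x, y)
print(f"selected degree = {degree}, VC risk estimate = {est_risk:.4f}")
```

Classical criteria mentioned in the comparison (e.g., resampling or analytic estimates such as final prediction error) would replace the vc_penalty factor with their own correction of the training error, which is what the paper's empirical comparisons vary.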
Bibliographical Note:
Manuscript received March 16, 1998; revised October 29, 1998 and May 13, 1999. This work was supported in part by NSF under Grant IRI-9618167 and by the IBM Partnership Award. V. Cherkassky is with the Electrical and Computer Engineering Department, University of Minnesota, Minneapolis, MN 55455 USA. X. Shao is with HNC Software, San Diego, CA 92121 USA. F. M. Mulier is with Net Perceptions, Minneapolis, MN 55344 USA. V. N. Vapnik is with AT&T Labs, Red Bank, NJ 07701 USA. Publisher Item Identifier S 1045-9227(99)07232-X.