The concept of large margins has been recognized as an important principle in analyzing learning methodologies, including boosting, neural networks, and support vector machines (SVMs). However, this concept alone is not adequate for learning in nonseparable cases. We propose a learning methodology, called ψ-learning, that is derived from a direct consideration of generalization errors. We provide a theory for ψ-learning and show that it essentially attains the optimal rates of convergence in two learning examples. Finally, results from simulation studies and from breast cancer classification confirm the ability of ψ-learning to outperform SVMs in generalization.
Copyright 2008 Elsevier B.V., All rights reserved.
- Generalization error
- Machine learning
- Metric entropy
- Support vector machine