An empirical evaluation of bagging and boosting for artificial neural networks

D. W. Opitz, R. Maclin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Bagging and boosting are two relatively new but popular methods for producing classifier ensembles. An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying instances. Previous research suggests that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. In this paper we evaluate bagging and boosting as methods for creating an ensemble of neural networks. We also include results from Quinlan's (1996) decision tree evaluation of these methods. Our results indicate that the ensemble methods can indeed produce very accurate classifiers for some datasets, but that these gains may depend on aspects of the dataset. In particular, we find that bagging is probably appropriate for most problems, but that, when properly applied, boosting may produce even larger gains in accuracy.
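The abstract summarizes the bagging procedure: train each network on a bootstrap resample of the training data, then combine the members' predictions by plurality vote. A minimal sketch of that idea follows, assuming scikit-learn's MLPClassifier as the base network; the synthetic dataset and ensemble size are illustrative and do not reproduce the paper's experiments.

```python
# Minimal bagging sketch with small neural networks (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

N_ESTIMATORS = 10  # illustrative ensemble size, not from the paper
members = []
for i in range(N_ESTIMATORS):
    # Bootstrap resample: draw |train| examples with replacement.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=i)
    net.fit(X_train[idx], y_train[idx])
    members.append(net)

# Combine member predictions by plurality vote per test instance.
votes = np.stack([m.predict(X_test) for m in members])
ensemble_pred = np.apply_along_axis(
    lambda col: np.bincount(col).argmax(), axis=0, arr=votes)

print("single net accuracy:", members[0].score(X_test, y_test))
print("ensemble accuracy:  ", (ensemble_pred == y_test).mean())
```

Boosting differs in that members are trained sequentially, with each resample weighted toward examples the previous members misclassified, rather than drawn uniformly as above.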

Original language: English (US)
Title of host publication: 1997 IEEE International Conference on Neural Networks, ICNN 1997
Pages: 1401-1405
Number of pages: 5
DOIs
State: Published - 1997
Event: 1997 IEEE International Conference on Neural Networks, ICNN 1997 - Houston, TX, United States
Duration: Jun 9, 1997 to Jun 12, 1997

Publication series

Name: IEEE International Conference on Neural Networks - Conference Proceedings
Volume: 3
ISSN (Print): 1098-7576

Other

Other: 1997 IEEE International Conference on Neural Networks, ICNN 1997
Country/Territory: United States
City: Houston, TX
Period: 6/9/97 to 6/12/97

