Empirical evaluation of bagging and boosting

Richard Maclin, David Opitz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

176 Scopus citations

Abstract

An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman 1996a) and Boosting (Freund & Schapire 1996) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods using both neural networks and decision trees as our classification algorithms. Our results clearly show two important facts. The first is that even though Bagging almost always produces a better classifier than any of its individual component classifiers and is relatively impervious to overfitting, it does not generalize any better than a baseline neural-network ensemble method. The second is that Boosting is a powerful technique that can usually produce better ensembles than Bagging; however, it is more susceptible to noise and can quickly overfit a data set.
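The abstract compares Bagging and Boosting as methods for building ensembles of component classifiers. As a minimal illustrative sketch (not from the paper), the following Python code contrasts a single decision tree with bagged and boosted ensembles using scikit-learn's BaggingClassifier and AdaBoostClassifier, the latter implementing the AdaBoost algorithm of Freund & Schapire (1996); the dataset, ensemble size, and cross-validation setup are arbitrary assumptions for demonstration.

```python
# Illustrative sketch only: compares a single decision tree with a bagged
# and a boosted ensemble, in the spirit of the paper's comparison.
# Dataset choice, ensemble size, and CV folds are assumptions, not the
# paper's experimental setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# A single decision tree serves as the baseline component classifier.
single = DecisionTreeClassifier(random_state=0)

# Bagging: each tree is trained independently on a bootstrap resample of
# the training set; predictions are combined by majority vote.
bagging = BaggingClassifier(single, n_estimators=50, random_state=0)

# Boosting (AdaBoost): trees are trained sequentially, with each round
# reweighting the examples misclassified by earlier rounds.
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

for name, clf in [("single tree", single),
                  ("bagging", bagging),
                  ("boosting", boosting)]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name:12s} accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

On most data sets the two ensembles will beat the single tree, consistent with the abstract's first claim; on noisy data, boosting's example reweighting can chase mislabeled points and overfit, consistent with its second.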

Original language: English (US)
Title of host publication: Proceedings of the National Conference on Artificial Intelligence
Editors: Anon
Publisher: AAAI
Pages: 546-551
Number of pages: 6
State: Published - Dec 1 1997
Event: Proceedings of the 1997 14th National Conference on Artificial Intelligence, AAAI 97 - Providence, RI, USA
Duration: Jul 27 1997 - Jul 31 1997
