Algorithms for parallel boosting

Fernando Lozano, Pedro Rangel

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Scopus citations

Abstract

We present several algorithms that combine many base learners trained on different distributions of the data, while allowing some of the base learners to be trained simultaneously by separate processors. Our algorithms train batches of base classifiers using distributions that can be generated in advance of the training process. We propose several heuristic methods that produce a group of useful distributions based on the performance of the classifiers in the previous batch. We present experimental evidence suggesting that two of our algorithms produce classifiers as accurate as the corresponding AdaBoost classifier with the same number of base learners, but with greatly reduced computation time.
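The batch idea described above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: the heuristic used here (perturbing the current weight vector to generate a batch of distributions in advance) and all function names are assumptions for the sketch.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def train_stump(X, y, w):
    """Train a decision stump (axis-aligned threshold) minimizing weighted error."""
    n, d = X.shape
    best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for j in range(d):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, pol)
    return best

def stump_predict(X, j, t, pol):
    return np.where(pol * (X[:, j] - t) >= 0, 1, -1)

def parallel_batch_boost(X, y, n_batches=3, batch_size=4, seed=0):
    """AdaBoost-style ensemble where each batch of `batch_size` stumps is
    trained concurrently on distributions generated in advance by randomly
    perturbing the current weight vector (a hypothetical heuristic)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_batches):
        # Generate the whole batch's distributions before any training starts.
        dists = [w]
        for _ in range(batch_size - 1):
            p = w * rng.uniform(0.5, 1.5, size=n)
            dists.append(p / p.sum())
        # Train the batch of base classifiers in parallel.
        with ThreadPoolExecutor(max_workers=batch_size) as ex:
            stumps = list(ex.map(lambda d: train_stump(X, y, d), dists))
        # Fold each stump into the ensemble with standard AdaBoost updates.
        for _, j, t, pol in stumps:
            pred = stump_predict(X, j, t, pol)
            err = w[pred != y].sum()  # re-evaluate under current weights
            err = min(max(err, 1e-10), 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)
            ensemble.append((alpha, j, t, pol))
            w = w * np.exp(-alpha * y * pred)
            w = w / w.sum()
    return ensemble

def predict(ensemble, X):
    score = sum(a * stump_predict(X, j, t, pol) for a, j, t, pol in ensemble)
    return np.sign(score)
```

Each batch's distributions depend only on the weights at the start of the batch, so the `batch_size` stumps have no sequential dependence and can run on separate processors; the sequential AdaBoost weight update is applied only once the batch returns.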

Original language: English (US)
Title of host publication: Proceedings - ICMLA 2005
Subtitle of host publication: Fourth International Conference on Machine Learning and Applications
Pages: 368-373
Number of pages: 6
DOIs
State: Published - 2005
Event: ICMLA 2005: 4th International Conference on Machine Learning and Applications - Los Angeles, CA, United States
Duration: Dec 15 2005 – Dec 17 2005

Publication series

Name: Proceedings - ICMLA 2005: Fourth International Conference on Machine Learning and Applications
Volume: 2005

Other

Other: ICMLA 2005: 4th International Conference on Machine Learning and Applications
Country: United States
City: Los Angeles, CA
Period: 12/15/05 – 12/17/05
