COMET: A Recipe for Learning and Using Large Ensembles on Massive Data

Abstract

COMET is a single-pass MapReduce algorithm for learning on large-scale data. It builds multiple random forest ensembles on distributed blocks of data and merges them into a mega-ensemble. This approach is appropriate when learning from massive-scale data that is too large to fit on a single machine. To get the best accuracy, IVoting should be used instead of bagging to generate the training subset for each decision tree in the random forest. Experiments with two large datasets (5GB and 50GB compressed) show that COMET compares favorably (in both accuracy and training time) to learning on a subsample of data using a serial algorithm. Finally, we propose a new Gaussian approach for lazy ensemble evaluation which dynamically decides how many ensemble members to evaluate per data point; this can reduce evaluation cost by 100X or more.
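For intuition, the lazy ensemble evaluation idea can be pictured as an early-stopping loop over ensemble members: evaluate trees one at a time, track the running vote fraction, and stop once a Gaussian (normal-approximation) confidence interval around that fraction excludes the decision boundary. The Python below is a minimal sketch of that general idea, not the paper's exact procedure; the lazy_ensemble_predict name, the z threshold, and the min_evals floor are illustrative assumptions.

import math

def lazy_ensemble_predict(trees, x, z=2.58, min_evals=5):
    """Lazily evaluate a binary-vote ensemble on a single point x.

    Trees are evaluated one at a time. After each vote, a Gaussian
    (normal-approximation) confidence interval around the running
    positive-vote fraction is checked; if it excludes the 0.5 decision
    boundary, the remaining trees are unlikely to change the outcome
    and evaluation stops early. Assumes a non-empty ensemble and that
    each tree is a callable returning 0 or 1 (both assumptions).
    """
    votes = 0
    n = 0
    for tree in trees:
        n += 1
        votes += tree(x)
        if n >= min_evals:
            p = votes / n
            stderr = math.sqrt(max(p * (1.0 - p), 1e-12) / n)
            # Stop once the interval around the vote fraction excludes 0.5.
            if p - z * stderr > 0.5 or p + z * stderr < 0.5:
                break
    # Return the majority prediction and how many trees were actually evaluated.
    return (1 if votes / n > 0.5 else 0), n

Under this kind of rule, easy points need only a handful of the mega-ensemble's members, which is where large evaluation savings such as the 100X figure quoted in the abstract can come from.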

Publication
In ICDM 2011: Proceedings of the 2011 IEEE International Conference on Data Mining
Date
December 2011
Citation
J. D. Basilico, M. A. Munson, T. G. Kolda, K. R. Dixon, W. P. Kegelmeyer. COMET: A Recipe for Learning and Using Large Ensembles on Massive Data. In ICDM 2011: Proceedings of the 2011 IEEE International Conference on Data Mining, Vancouver, BC (2011-12-11 to 2011-12-14), pp. 41-50, 2011. https://doi.org/10.1109/ICDM.2011.39

Comments

Full paper: 18% acceptance rate.

BibTeX

@inproceedings{BaMuKoDi11,
  author    = {Justin D. Basilico and M. Arthur Munson and Tamara G. Kolda and Kevin R. Dixon and W. Philip Kegelmeyer},
  title     = {{COMET}: A Recipe for Learning and Using Large Ensembles on Massive Data},
  booktitle = {ICDM 2011: Proceedings of the 2011 IEEE International Conference on Data Mining},
  venue     = {Vancouver, BC},
  eventdate = {2011-12-11/2011-12-14},
  pages     = {41--50},
  year      = {2011},
  doi       = {10.1109/ICDM.2011.39},
}