Are ensembles getting better all the time?
Pierre-Alexandre Mattei, Damien Garreau
arXiv - MATH - Statistics Theory
Publication date: 2023-11-29
DOI: arxiv-2311.17885
Citations: 0
Abstract
Ensemble methods combine the predictions of several base models. We study whether or not including more models in an ensemble always improves its average performance. The answer depends on the kind of ensemble considered, as well as on the predictive metric chosen. We focus on situations where all members of the ensemble are a priori expected to perform equally well, as is the case for several popular methods such as random forests or deep ensembles. In this setting, we essentially show that ensembles are getting better all the time if, and only if, the considered loss function is convex. More precisely, in that case, the average loss of the ensemble is a decreasing function of the number of models. When the loss function is nonconvex, we show a series of results that can be summarised by the insight that ensembles of good models keep getting better, and ensembles of bad models keep getting worse. To this end, we prove a new result on the monotonicity of tail probabilities that may be of independent interest. We illustrate our results on a simple machine learning problem (diagnosing melanomas using neural nets).
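The convex-loss claim can be checked numerically. The sketch below is purely illustrative (it is not the paper's code, and the bias/variance values are arbitrary assumptions): for the squared error, a convex loss, the expected loss of the average of n i.i.d. base predictions is non-increasing in n, by a Jensen-type argument.

```python
import numpy as np

# Illustrative sketch: base models predict a target y = 0 with i.i.d.
# Gaussian errors (bias 0.5, standard deviation 1.0 -- both assumed values).
rng = np.random.default_rng(0)
y = 0.0
trials = 100_000              # Monte Carlo repetitions
max_models = 10
preds = rng.normal(loc=0.5, scale=1.0, size=(trials, max_models))

# Expected squared loss of the n-model ensemble average, for n = 1..max_models.
# Theoretically this is bias^2 + variance/n = 0.25 + 1/n, decreasing in n.
losses = []
for n in range(1, max_models + 1):
    ensemble_pred = preds[:, :n].mean(axis=1)
    losses.append(np.mean((ensemble_pred - y) ** 2))

print([round(loss, 3) for loss in losses])  # monotonically decreasing sequence
```

Running this shows the estimated loss shrinking toward the squared bias (0.25) as models are added, matching the "decreasing function of the number of models" statement for a convex loss. With a nonconvex loss (e.g. the 0-1 loss on a thresholded prediction), no such monotonicity holds in general, which is exactly the regime the paper analyses separately.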