Fast Non-Parametric Conditional Density Estimation using Moment Trees
Fabian Hinder, Valerie Vaquet, Johannes Brinkrolf, Barbara Hammer
2021 IEEE Symposium Series on Computational Intelligence (SSCI), 5 December 2021. DOI: 10.1109/SSCI50451.2021.9660031
In many machine learning tasks, one tries to infer unknown quantities, such as the conditional density p(Y | X), from observed quantities X. Conditional density estimation (CDE) is a challenging problem due to the trade-off between model complexity, distribution complexity, and overfitting. In the case of online learning, where the distribution may change over time (concept drift) or only a few data points are available at once, robust, non-parametric approaches are of particular interest. In this paper, we present a new, non-parametric, tree-ensemble-based method for CDE that reduces the problem to a simple regression task on the transformed input data and an (unconditional) density estimation. We prove the correctness of our approach and demonstrate its usefulness in an empirical evaluation on standard benchmarks. We show that our method is comparable to other state-of-the-art methods while being much faster and more robust.
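To make the high-level recipe in the abstract concrete, the following is a minimal sketch of a tree-ensemble conditional density estimator built from off-the-shelf components: a random forest is fit as a plain regressor of Y on X, and p(y | x) is then recovered from a weighted (unconditional) kernel density estimate over the training targets that fall into the same leaves as the query point. This only illustrates the reduction to "regression plus density estimation"; it does not reproduce the paper's Moment Tree splitting criterion or weighting scheme, and the class name ForestCDE as well as all parameter choices are assumptions made for illustration.

```python
# Illustrative sketch only: tree-ensemble CDE via forest regression + weighted KDE.
# This is NOT the paper's Moment Tree construction; names and parameters are assumed.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor


class ForestCDE:
    def __init__(self, n_trees=100, bandwidth=0.3, random_state=0):
        self.forest = RandomForestRegressor(
            n_estimators=n_trees, min_samples_leaf=10, random_state=random_state
        )
        self.bandwidth = bandwidth

    def fit(self, X, y):
        self.forest.fit(X, y)
        self.y_train = np.asarray(y, dtype=float)
        # Leaf index of every training sample in every tree: shape (n_samples, n_trees).
        self.train_leaves = self.forest.apply(X)
        return self

    def _weights(self, x):
        # Weight each training sample by the fraction of trees in which it
        # shares a leaf with the query point x.
        query_leaves = self.forest.apply(x.reshape(1, -1))[0]   # (n_trees,)
        same_leaf = self.train_leaves == query_leaves            # (n_samples, n_trees)
        w = same_leaf.mean(axis=1)
        return w / w.sum()

    def pdf(self, x, y_grid):
        # Weighted Gaussian kernel density estimate of p(y | x) on y_grid.
        w = self._weights(np.asarray(x, dtype=float))
        kernels = norm.pdf(
            y_grid[:, None], loc=self.y_train[None, :], scale=self.bandwidth
        )
        return kernels @ w


if __name__ == "__main__":
    # Toy regression data: y = sin(2x) plus Gaussian noise.
    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(2000, 1))
    y = np.sin(2 * X[:, 0]) + 0.2 * rng.standard_normal(2000)

    cde = ForestCDE().fit(X, y)
    y_grid = np.linspace(-2, 2, 200)
    density = cde.pdf(np.array([1.0]), y_grid)
    print("estimated conditional mean at x=1:", np.trapz(y_grid * density, y_grid))
```

The leaf-sharing weights play the role of a data-dependent kernel over the input space, so the forest learned by ordinary regression determines which training targets enter the unconditional density estimate; swapping in a different splitting criterion (such as one based on moments of Y) only changes how those weights are formed.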