{"title":"通过混合整数优化学习混合高斯函数","authors":"H. Bandi, D. Bertsimas, R. Mazumder","doi":"10.1287/IJOO.2018.0009","DOIUrl":null,"url":null,"abstract":"We consider the problem of estimating the parameters of a multivariate Gaussian mixture model (GMM) given access to n samples that are believed to have come from a mixture of multiple subpopulations. State-of-the-art algorithms used to recover these parameters use heuristics to either maximize the log-likelihood of the sample or try to fit first few moments of the GMM to the sample moments. In contrast, we present here a novel mixed-integer optimization (MIO) formulation that optimally recovers the parameters of the GMM by minimizing a discrepancy measure (either the Kolmogorov–Smirnov or the total variation distance) between the empirical distribution function and the distribution function of the GMM whenever the mixture component weights are known. We also present an algorithm for multidimensional data that optimally recovers corresponding means and covariance matrices. We show that the MIO approaches are practically solvable for data sets with n in the tens of thousands in minutes and achieve an average improvement of 60%–70% and 50%–60% on mean absolute percentage error in estimating the means and the covariance matrices, respectively, over the expectation–maximization (EM) algorithm independent of the sample size n. As the separation of the Gaussians decreases and, correspondingly, the problem becomes more difficult, the edge in performance in favor of the MIO methods widens. Finally, we also show that the MIO methods outperform the EM algorithm with an average improvement of 4%–5% on the out-of-sample accuracy for real-world data sets.","PeriodicalId":73382,"journal":{"name":"INFORMS journal on optimization","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2019-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/IJOO.2018.0009","citationCount":"4","resultStr":"{\"title\":\"Learning a Mixture of Gaussians via Mixed-Integer Optimization\",\"authors\":\"H. Bandi, D. Bertsimas, R. Mazumder\",\"doi\":\"10.1287/IJOO.2018.0009\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We consider the problem of estimating the parameters of a multivariate Gaussian mixture model (GMM) given access to n samples that are believed to have come from a mixture of multiple subpopulations. State-of-the-art algorithms used to recover these parameters use heuristics to either maximize the log-likelihood of the sample or try to fit first few moments of the GMM to the sample moments. In contrast, we present here a novel mixed-integer optimization (MIO) formulation that optimally recovers the parameters of the GMM by minimizing a discrepancy measure (either the Kolmogorov–Smirnov or the total variation distance) between the empirical distribution function and the distribution function of the GMM whenever the mixture component weights are known. We also present an algorithm for multidimensional data that optimally recovers corresponding means and covariance matrices. We show that the MIO approaches are practically solvable for data sets with n in the tens of thousands in minutes and achieve an average improvement of 60%–70% and 50%–60% on mean absolute percentage error in estimating the means and the covariance matrices, respectively, over the expectation–maximization (EM) algorithm independent of the sample size n. 
As the separation of the Gaussians decreases and, correspondingly, the problem becomes more difficult, the edge in performance in favor of the MIO methods widens. Finally, we also show that the MIO methods outperform the EM algorithm with an average improvement of 4%–5% on the out-of-sample accuracy for real-world data sets.\",\"PeriodicalId\":73382,\"journal\":{\"name\":\"INFORMS journal on optimization\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-04-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1287/IJOO.2018.0009\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"INFORMS journal on optimization\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1287/IJOO.2018.0009\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"INFORMS journal on optimization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1287/IJOO.2018.0009","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Learning a Mixture of Gaussians via Mixed-Integer Optimization
We consider the problem of estimating the parameters of a multivariate Gaussian mixture model (GMM) given access to n samples that are believed to have come from a mixture of multiple subpopulations. State-of-the-art algorithms for recovering these parameters rely on heuristics that either maximize the log-likelihood of the sample or fit the first few moments of the GMM to the sample moments. In contrast, we present here a novel mixed-integer optimization (MIO) formulation that optimally recovers the parameters of the GMM by minimizing a discrepancy measure (either the Kolmogorov–Smirnov distance or the total variation distance) between the empirical distribution function and the distribution function of the GMM whenever the mixture component weights are known. We also present an algorithm for multidimensional data that optimally recovers the corresponding means and covariance matrices. We show that the MIO approaches are practically solvable in minutes for data sets with n in the tens of thousands and achieve an average improvement of 60%–70% and 50%–60% in mean absolute percentage error when estimating the means and the covariance matrices, respectively, over the expectation–maximization (EM) algorithm, independent of the sample size n. As the separation between the Gaussians decreases and the problem correspondingly becomes more difficult, the performance edge in favor of the MIO methods widens. Finally, we show that the MIO methods outperform the EM algorithm, with an average improvement of 4%–5% in out-of-sample accuracy on real-world data sets.
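To make the discrepancy-minimization idea concrete, the sketch below illustrates (in a much simpler setting than the paper's) what minimizing the Kolmogorov–Smirnov distance between the empirical distribution function and a mixture CDF looks like: a univariate two-component mixture with known, equal weights and assumed unit component variances, where the candidate means are found by a brute-force grid search rather than the paper's MIO formulation. All variable names, the grid, and the unit-variance assumption are illustrative choices, not details taken from the paper.

```python
# Illustrative sketch (not the paper's MIO formulation): estimate the means of a
# univariate two-component Gaussian mixture with known weights and assumed unit
# variances by minimizing the Kolmogorov-Smirnov distance between the empirical
# distribution function and the mixture CDF over a coarse grid of candidate means.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Known mixture weights and (assumed) true means used only to generate data.
weights = np.array([0.5, 0.5])
true_means = np.array([-2.0, 2.0])

# Draw n samples from the true mixture.
n = 2000
components = rng.choice(2, size=n, p=weights)
samples = np.sort(rng.normal(loc=true_means[components], scale=1.0))

# Empirical distribution function evaluated at the sorted sample points.
ecdf = np.arange(1, n + 1) / n

def ks_distance(means):
    """Sup-norm discrepancy between the empirical CDF and the mixture CDF."""
    mix_cdf = sum(w * norm.cdf(samples, loc=m, scale=1.0)
                  for w, m in zip(weights, means))
    # The empirical CDF jumps at each sample point, so check both one-sided gaps.
    return max(np.max(np.abs(ecdf - mix_cdf)),
               np.max(np.abs(ecdf - 1.0 / n - mix_cdf)))

# Brute-force search over candidate mean pairs -- a stand-in for the discrete
# choices that an MIO solver would optimize over exactly.
grid = np.linspace(-4.0, 4.0, 81)
best = min(((ks_distance((m1, m2)), (m1, m2))
            for m1 in grid for m2 in grid if m1 <= m2),
           key=lambda t: t[0])

print("estimated means:", best[1], "KS distance:", round(best[0], 4))
```

With enough samples, the grid point minimizing the KS discrepancy lands near the true means; the paper's contribution is to carry out this minimization exactly, via mixed-integer optimization, for general multivariate mixtures.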