Hardness of Maximum Likelihood Learning of DPPs
Elena Grigorescu, Brendan Juba, K. Wimmer, Ning Xie
Electron. Colloquium Comput. Complex., May 24, 2022. DOI: 10.48550/arXiv.2205.12377
Determinantal Point Processes (DPPs) are a widely used probabilistic model for negatively correlated sets. DPPs have been successfully employed in Machine Learning applications to select a diverse, yet representative subset of data. In these applications, the parameters of the DPP need to be fitted to match the data; typically, we seek a set of parameters that maximize the likelihood of the data. The algorithms used for this task to date either optimize over a limited family of DPPs, or use local improvement heuristics that do not provide theoretical guarantees of optimality. It is natural to ask if there exist efficient algorithms for finding a maximum likelihood DPP model for a given data set.

In seminal work on DPPs in Machine Learning, Kulesza conjectured in his PhD Thesis (2012) that the problem is NP-complete. The lack of a formal proof prompted Brunel, Moitra, Rigollet and Urschel (2017a) to conjecture that, contrary to Kulesza's conjecture, there exists a polynomial-time algorithm for computing a maximum-likelihood DPP. They also presented some preliminary evidence supporting their conjecture.

In this work we prove Kulesza's conjecture. In fact, we prove the following stronger hardness-of-approximation result: even computing a (1 − 1/polylog N)-approximation to the maximum log-likelihood of a DPP on a ground set of N elements is NP-complete. At the same time, we also obtain the first polynomial-time algorithm that achieves a nontrivial worst-case approximation to the optimal log-likelihood.
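For reference, the learning problem the abstract refers to is standardly phrased in terms of L-ensembles (this formulation is standard in the DPP literature, though the abstract does not spell it out): a DPP with positive semidefinite kernel $L \in \mathbb{R}^{N \times N}$ assigns each subset $Y \subseteq [N]$ probability $\det(L_Y)/\det(L + I)$, where $L_Y$ is the principal submatrix of $L$ indexed by $Y$. Given observed subsets $Y_1, \dots, Y_m$, maximum-likelihood learning asks for

$$
\hat{L} \;\in\; \operatorname*{arg\,max}_{L \succeq 0} \; \frac{1}{m} \sum_{i=1}^{m} \log \frac{\det(L_{Y_i})}{\det(L + I)},
$$

and the hardness result says that, in the worst case, even approximating the optimal value of this objective within a factor of 1 − 1/polylog N is NP-complete.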
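Evaluating this objective for a fixed kernel is straightforward; the hardness lies entirely in optimizing over L. The following is a minimal NumPy sketch of the log-likelihood computation (illustrative only, not code from the paper):

```python
import numpy as np

def dpp_log_likelihood(L, samples):
    """Average log-likelihood of observed subsets under the L-ensemble DPP
    Pr[Y] = det(L_Y) / det(L + I).

    L       -- (N, N) positive semidefinite kernel matrix
    samples -- list of non-empty index lists, each a subset of range(N)
    """
    N = L.shape[0]
    # log det(L + I): the normalization constant shared by every subset.
    _, log_norm = np.linalg.slogdet(L + np.eye(N))
    total = 0.0
    for Y in samples:
        idx = np.asarray(Y)
        sub = L[np.ix_(idx, idx)]            # principal submatrix L_Y
        sign, log_det = np.linalg.slogdet(sub)
        if sign <= 0:                        # det(L_Y) <= 0: probability 0
            return -np.inf
        total += log_det - log_norm
    return total / len(samples)

# Tiny example: 3-element ground set with mild repulsion between items 0 and 1.
L = np.array([[1.0, 0.4, 0.0],
              [0.4, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(dpp_log_likelihood(L, [[0, 2], [1], [0, 1, 2]]))
```

Local-improvement heuristics of the kind the abstract mentions ascend this function over the cone of positive semidefinite kernels; the hardness result explains why they come with no optimality guarantee.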