Approximate Query Answering in Complex Gaussian Mixture Models
Mattis Hartwig, M. Gehrke, R. Möller
2019 IEEE International Conference on Big Knowledge (ICBK), November 2019. DOI: 10.1109/ICBK.2019.00019
Gaussian mixture models are widely used across a diverse range of research fields. As the number of components and dimensions grows, the computational cost of answering queries becomes unreasonably high for practical use. Approximation approaches are therefore necessary to make complex Gaussian mixture models more usable. The need for approximation is also driven by relatively recent representations that, in theory, allow an unlimited number of mixture components (e.g., nonparametric Bayesian networks or infinite mixture models). In this paper we introduce an approximate inference algorithm that splits the existing query-answering algorithm into two steps and uses knowledge from the first step to avoid unnecessary calculations in the second step while maintaining a defined error bound. In highly complex mixture models we observed significant time savings even with low error bounds.
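The two-step idea described in the abstract can be illustrated with a small sketch. This is our own illustration under assumed details, not the authors' algorithm: a cheap first pass over the mixture weights decides which components to keep, and the total weight of the discarded components controls the absolute error of the second, more expensive density-evaluation pass. The function name, the 1-D restriction, and the weight-based pruning criterion are all assumptions made for illustration.

```python
import numpy as np

def gmm_density_approx(x, weights, means, variances, epsilon=0.0):
    """Approximately evaluate a 1-D Gaussian mixture density at x.

    Step 1 (cheap): rank components by mixture weight and keep the
    heaviest ones until the remaining weight mass is at most epsilon.
    Step 2 (expensive): evaluate the Gaussian densities only for the
    kept components. Since each discarded component contributes at most
    w_i / sqrt(2*pi*var_i) to the density, the discarded mass yields a
    bound on the absolute approximation error.
    """
    order = np.argsort(weights)[::-1]   # heaviest components first
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += weights[i]
        if 1.0 - cum <= epsilon:        # remaining mass is within the bound
            break
    dens = 0.0
    for i in kept:
        var = variances[i]
        dens += weights[i] * np.exp(-(x - means[i]) ** 2 / (2 * var)) \
                / np.sqrt(2 * np.pi * var)
    return dens

# Toy 4-component mixture: two heavy components near x = 1, two light
# far-away components that pruning can safely skip.
w = np.array([0.6, 0.3, 0.05, 0.05])
mu = np.array([0.0, 2.0, -3.0, 5.0])
var = np.array([1.0, 1.0, 1.0, 1.0])
exact = gmm_density_approx(1.0, w, mu, var, epsilon=0.0)
approx = gmm_density_approx(1.0, w, mu, var, epsilon=0.1)
```

With `epsilon=0.1` the two light components are skipped, so only half the Gaussian evaluations are performed; in a mixture with thousands of components this kind of pruning is where the time savings would come from.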