Fast Data Reduction via KDE Approximation
D. Freedman, P. Kisilev
2009 Data Compression Conference, 2009-03-16. DOI: 10.1109/DCC.2009.47

Abstract: Many of today's real-world applications must handle and analyze continually growing amounts of data, even as the cost of collecting that data falls. As a result, the main technological hurdle is that data is acquired faster than it can be processed. Data reduction methods are thus increasingly important, as they allow one to extract the most relevant and important information from giant data sets. We present one such method, based on compressing the description length of an estimate of the probability distribution of a set of points.
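The abstract's core idea, reducing a data set to a much smaller weighted set of points whose kernel density estimate (KDE) approximates that of the full data, can be illustrated with a minimal sketch. This is not the authors' algorithm: the reduction step here is simple histogram binning, the bandwidth `h` is fixed by hand, and all names (`gaussian_kde`, `reduce_by_binning`) are illustrative, not from the paper.

```python
import numpy as np

def gaussian_kde(eval_pts, centers, weights, h):
    """Weighted Gaussian KDE: f(x) = sum_i w_i * N(x; c_i, h^2)."""
    d = eval_pts[:, None] - centers[None, :]
    k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return k @ weights

def reduce_by_binning(data, bins=32):
    """Crude data reduction: replace points by bin centers with count weights."""
    counts, edges = np.histogram(data, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = counts > 0
    return centers[mask], counts[mask] / counts.sum()

rng = np.random.default_rng(0)
# A bimodal sample standing in for a "giant" data set.
data = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(1.0, 1.0, 500)])

centers, weights = reduce_by_binning(data)          # ~32 points instead of 1000
h = 0.3                                             # bandwidth, assumed fixed here
grid = np.linspace(-4.0, 4.0, 200)

full = gaussian_kde(grid, data, np.full(len(data), 1.0 / len(data)), h)
reduced = gaussian_kde(grid, centers, weights, h)
err = np.max(np.abs(full - reduced))                # reduced set preserves the KDE closely
```

The design point matches the abstract's framing: the reduced representation is judged by how well it preserves the estimated probability distribution, not by how well it preserves individual points, so a compact weighted summary can substitute for the raw data in downstream density-based analysis.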