{"title":"无损图像编码的上下文选择和量化","authors":"Xiaolin Wu","doi":"10.1109/DCC.1995.515563","DOIUrl":null,"url":null,"abstract":"Summary form only given. After the context quantization, an entropy coder using L2/sup K/ (L is the quantized levels and K is the number of bits) conditional probabilities remains impractical. Instead, only the expectations are approximated by the sample means with respect to different quantized contexts. Computing the sample means involves only cumulating the error terms in the quantized context C(d,t) and keeping a count on the occurrences of C(d,t). Thus, the time and space complexities of the described context based modeling of the prediction errors are O(L2/sup K/). Based on the quantized context C(d,t), the encoder makes a DPCM prediction I, adds to I the most likely prediction error and then arrives at an adaptive, context-based, nonlinear prediction. The error e is then entropy coded. The coding of e is done with L conditional probabilities. The results of the proposed context-based, lossless image compression technique are included.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Context selection and quantization for lossless image coding\",\"authors\":\"Xiaolin Wu\",\"doi\":\"10.1109/DCC.1995.515563\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Summary form only given. After the context quantization, an entropy coder using L2/sup K/ (L is the quantized levels and K is the number of bits) conditional probabilities remains impractical. Instead, only the expectations are approximated by the sample means with respect to different quantized contexts. 
Computing the sample means involves only cumulating the error terms in the quantized context C(d,t) and keeping a count on the occurrences of C(d,t). Thus, the time and space complexities of the described context based modeling of the prediction errors are O(L2/sup K/). Based on the quantized context C(d,t), the encoder makes a DPCM prediction I, adds to I the most likely prediction error and then arrives at an adaptive, context-based, nonlinear prediction. The error e is then entropy coded. The coding of e is done with L conditional probabilities. The results of the proposed context-based, lossless image compression technique are included.\",\"PeriodicalId\":107017,\"journal\":{\"name\":\"Proceedings DCC '95 Data Compression Conference\",\"volume\":\"31 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1995-03-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings DCC '95 Data Compression Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DCC.1995.515563\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings DCC '95 Data Compression Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DCC.1995.515563","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Context selection and quantization for lossless image coding
Summary form only given. Even after context quantization, an entropy coder that maintains L·2^K conditional probabilities (L is the number of quantized levels and K is the number of bits) remains impractical. Instead, only the conditional expectations are approximated, by sample means taken with respect to the different quantized contexts. Computing the sample means involves only accumulating the error terms within each quantized context C(d,t) and keeping a count of the occurrences of C(d,t). Thus, the time and space complexities of the described context-based modeling of the prediction errors are O(L·2^K). Based on the quantized context C(d,t), the encoder makes a DPCM prediction I, adds to I the most likely prediction error, and so arrives at an adaptive, context-based, nonlinear prediction. The resulting error e is then entropy coded, using L conditional probabilities. Results of the proposed context-based, lossless image compression technique are included.
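The error-feedback loop the abstract describes can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual algorithm: the causal DPCM predictor and the context-quantization rule (binning the local gradient |west − north|) are stand-in assumptions, whereas the paper's C(d,t) quantizer is more elaborate. What it does show is the core mechanism: per-context accumulation of error terms and occurrence counts, with the running sample mean fed back as a bias correction to the linear prediction.

```python
def encode_errors(image, num_contexts=256):
    """Context-based bias correction for a DPCM predictor (illustrative sketch).

    `image` is a 2-D list of ints. The predictor and the context rule below
    are assumptions for the demo, not the quantizer from the paper.
    """
    err_sum = [0] * num_contexts   # accumulated error terms per quantized context
    count = [0] * num_contexts     # occurrence count of each quantized context
    errors = []                    # residuals that would go to the entropy coder
    for y in range(1, len(image)):
        for x in range(1, len(image[0])):
            west, north = image[y][x - 1], image[y - 1][x]
            pred = (west + north) // 2                       # simple causal DPCM prediction
            ctx = min(abs(west - north), num_contexts - 1)   # quantized context index
            # Approximate E[e | context] by the running sample mean and feed it
            # back, turning the linear prediction into a nonlinear one.
            bias = round(err_sum[ctx] / count[ctx]) if count[ctx] else 0
            e = image[y][x] - (pred + bias)
            errors.append(e)
            # Update the statistics: accumulate the raw error term and the count.
            err_sum[ctx] += image[y][x] - pred
            count[ctx] += 1
    return errors
```

Storage is two arrays indexed by context, so the modeling cost grows with the number of contexts rather than with a full table of conditional probability distributions, which is the complexity saving the abstract points to.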