{"title":"计算最小冗余前缀码的两种空间经济算法","authors":"R. Milidiú, A. Pessoa, E. Laber","doi":"10.1109/DCC.1999.755676","DOIUrl":null,"url":null,"abstract":"The minimum redundancy prefix code problem is to determine, for a given list W=[w/sub 1/,...,w/sub n/] of n positive symbol weights, a list L=[l/sub 1/,...,l/sub n/] of n corresponding integer codeword lengths such that /spl Sigma//sub i=1//sup n/2/sup -li//spl les/1 and /spl Sigma//sub i=1//sup n/w/sub i/l/sub i/ is minimized. Let us consider the case where W is already sorted. In this case, the output list L can be represented by a list M=[m/sub 1/,...,m/sub H/], where m(l/sub 1/), for l=1,...,H, denotes the multiplicity of the codeword length l in L and H is the length of the greatest codeword. Fortunately, H is proved to be O(min{log(1/(p/sub 1/)),n}), where p/sub 1/ is the smallest symbol probability, given by w/sub 1///spl Sigma//sub i=1//sup n/w/sub i/. We present the F-LazyHuff and the E-LazyHuff algorithms. F-LazyHuff runs in O(n) time but requires O(min{H/sup 2/,n}) additional space. On the other hand, E-LazyHuff runs in O(nlog(n/H)) time, requiring only O(H) additional space. Finally, since our two algorithms have the advantage of not writing at the input buffer during the code calculation, we discuss some applications where this feature is very useful.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"Two space-economical algorithms for calculating minimum redundancy prefix codes\",\"authors\":\"R. Milidiú, A. Pessoa, E. 
Laber\",\"doi\":\"10.1109/DCC.1999.755676\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The minimum redundancy prefix code problem is to determine, for a given list W=[w/sub 1/,...,w/sub n/] of n positive symbol weights, a list L=[l/sub 1/,...,l/sub n/] of n corresponding integer codeword lengths such that /spl Sigma//sub i=1//sup n/2/sup -li//spl les/1 and /spl Sigma//sub i=1//sup n/w/sub i/l/sub i/ is minimized. Let us consider the case where W is already sorted. In this case, the output list L can be represented by a list M=[m/sub 1/,...,m/sub H/], where m(l/sub 1/), for l=1,...,H, denotes the multiplicity of the codeword length l in L and H is the length of the greatest codeword. Fortunately, H is proved to be O(min{log(1/(p/sub 1/)),n}), where p/sub 1/ is the smallest symbol probability, given by w/sub 1///spl Sigma//sub i=1//sup n/w/sub i/. We present the F-LazyHuff and the E-LazyHuff algorithms. F-LazyHuff runs in O(n) time but requires O(min{H/sup 2/,n}) additional space. On the other hand, E-LazyHuff runs in O(nlog(n/H)) time, requiring only O(H) additional space. Finally, since our two algorithms have the advantage of not writing at the input buffer during the code calculation, we discuss some applications where this feature is very useful.\",\"PeriodicalId\":103598,\"journal\":{\"name\":\"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1999-03-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings DCC'99 Data Compression Conference (Cat. No. 
PR00096)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DCC.1999.755676\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DCC.1999.755676","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Two space-economical algorithms for calculating minimum redundancy prefix codes
The minimum redundancy prefix code problem is to determine, for a given list W = [w_1, ..., w_n] of n positive symbol weights, a list L = [l_1, ..., l_n] of n corresponding integer codeword lengths such that Σ_{i=1}^{n} 2^{-l_i} ≤ 1 and Σ_{i=1}^{n} w_i·l_i is minimized. Consider the case where W is already sorted. In this case, the output list L can be represented by a list M = [m_1, ..., m_H], where m_l, for l = 1, ..., H, denotes the multiplicity of the codeword length l in L, and H is the length of the longest codeword. Fortunately, H is proved to be O(min{log(1/p_1), n}), where p_1 is the smallest symbol probability, given by w_1 / Σ_{i=1}^{n} w_i. We present the F-LazyHuff and E-LazyHuff algorithms. F-LazyHuff runs in O(n) time but requires O(min{H^2, n}) additional space. E-LazyHuff, on the other hand, runs in O(n·log(n/H)) time while requiring only O(H) additional space. Finally, since both algorithms have the advantage of never writing to the input buffer during the code calculation, we discuss some applications where this feature is particularly useful.
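To make the problem statement concrete, the following sketch computes optimal codeword lengths with the classical Huffman construction and then builds the multiplicity list M described in the abstract. This is a plain illustration of the input/output convention only — it uses the textbook O(n)-extra-space heap-based method, not the space-economical F-LazyHuff or E-LazyHuff algorithms of the paper; the function and variable names are our own.

```python
import heapq

def huffman_lengths(weights):
    """Minimum-redundancy codeword lengths for a list of positive weights,
    via the standard Huffman construction (NOT the paper's LazyHuff
    algorithms, which achieve O(H) or O(min{H^2, n}) extra space)."""
    n = len(weights)
    if n == 1:
        return [1]
    # Heap entries: (subtree weight, node id). Ids 0..n-1 are the leaves.
    heap = [(w, i) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    parent = {}
    next_id = n
    while len(heap) > 1:
        w1, a = heapq.heappop(heap)
        w2, b = heapq.heappop(heap)
        parent[a] = parent[b] = next_id
        heapq.heappush(heap, (w1 + w2, next_id))
        next_id += 1
    root = heap[0][1]
    def depth(i):
        # Codeword length of leaf i = its depth in the Huffman tree.
        d = 0
        while i != root:
            i = parent[i]
            d += 1
        return d
    return [depth(i) for i in range(n)]

def multiplicity_list(lengths):
    """Represent L as M = [m_1, ..., m_H], where m_l counts the codewords
    of length l and H = max(L), as in the sorted-input formulation."""
    H = max(lengths)
    M = [0] * H
    for l in lengths:
        M[l - 1] += 1
    return M

weights = [1, 1, 2, 3, 5]           # already sorted, as the paper assumes
L = huffman_lengths(weights)        # -> [4, 4, 3, 2, 1]
M = multiplicity_list(L)            # -> [1, 1, 1, 2], so H = 4
# Kraft inequality from the problem definition: sum of 2^{-l_i} <= 1.
assert sum(2 ** -l for l in L) <= 1
```

Note that for sorted input the compact list M, of length H rather than n, carries all the information needed to assign canonical codewords — which is precisely why the O(H) working space of E-LazyHuff suffices.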