
Latest publications from [Proceedings] DCC `93: Data Compression Conference

A low-power analog CMOS vector quantizer
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253108
G. Tuttle, S. Fallahi, A. Abidi
The authors describe the implementation and performance of what might be termed a 'Vector A/D Converter'. The IC stores a codebook of vectors on-chip, accepts a 16-element analog vector at the input, calculates the Euclidean distance between the input and all codevectors (referred to as global search), and outputs an 8-bit code to index the codevector closest to the input. At a 5 MHz clock rate it dissipates less than 50 mW to quantize 16-element analog vectors once every 10 clock periods, giving a 30 Hz frame rate for a 512*512 pixel gray-scale image.
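For readers who want the digital analogue of the chip's global search, the sketch below is a minimal full-search VQ encoder in NumPy. The 256-entry codebook (matching the 8-bit output index) and the random test data are assumptions for illustration, not details of the IC.

```python
import numpy as np

def vq_encode(x, codebook):
    """Full-search VQ: return the index of the codevector closest to x in
    Euclidean distance (the 'global search' of the abstract, done in software)."""
    dists = np.sum((codebook - x) ** 2, axis=1)   # squared Euclidean distances
    return int(np.argmin(dists))                  # 8-bit index when len(codebook) == 256

# Illustrative data: 256 codevectors of dimension 16, one random input vector.
rng = np.random.default_rng(0)
codebook = rng.uniform(0.0, 1.0, size=(256, 16))
print(vq_encode(rng.uniform(0.0, 1.0, size=16), codebook))
```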
Citations: 9
Combining image classification and image compression using vector quantization
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253150
K. Oehler, R. Gray
The goal is to produce codes where the compressed image incorporates classification information without further signal processing. This technique can provide direct low-level classification or an efficient front end to more sophisticated full-frame recognition algorithms. Vector quantization is a natural choice because two of its design components, clustering and tree-structured classification methods, have obvious applications to the pure classification problem as well as to the compression problem. The authors explicitly incorporate a Bayes risk component into the distortion measure used for code design in order to permit a tradeoff of mean squared error with classification error. This method is used to analyze simulated data, identify tumors in computerized tomography lung images, and identify man-made regions in aerial images.
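A minimal sketch of the kind of combined design cost the abstract describes is given below; the 0/1 classification penalty and the weight lam are illustrative assumptions, not the authors' exact Bayes risk formulation.

```python
import numpy as np

def combined_distortion(x, x_label, codevector, code_label, lam=1.0):
    """Design-time cost for a labeled training vector: squared error plus a
    weighted 0/1 classification penalty (lam and the penalty form are assumed)."""
    mse = float(np.sum((x - codevector) ** 2))
    miss = 0.0 if x_label == code_label else 1.0
    return mse + lam * miss

def assign(x, x_label, codebook, code_labels, lam=1.0):
    """Partition step of a Lloyd-style design: map a labeled training vector to
    the codevector (and its class label) of lowest combined cost."""
    costs = [combined_distortion(x, x_label, c, k, lam)
             for c, k in zip(codebook, code_labels)]
    return int(np.argmin(costs))
```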
Citations: 31
Robust, variable bit-rate coding using entropy-biased codebooks
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253113
J. Fowler, S. Ahalt
The authors demonstrate the use of a differential vector quantization (DVQ) architecture for the coding of digital images. An artificial neural network is used to develop entropy-biased codebooks which yield substantial data compression without entropy coding and are very robust with respect to transmission channel errors. Two methods are presented for variable bit-rate coding using the described DVQ algorithm. In the first method, both the encoder and the decoder have multiple codebooks of different sizes. In the second, variable bit-rates are achieved by using subsets of one fixed codebook. The performance of these approaches is compared under conditions of error-free and error-prone channels. Results show that this coding technique yields pictures of excellent visual quality at moderate compression rates.
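The sketch below illustrates the second variable-rate method (subsets of one fixed codebook) inside a differential VQ step; the prediction and the choice of subset size are simplified assumptions, and the network-trained, entropy-biased ordering of the codebook is taken as given.

```python
import numpy as np

def dvq_encode_block(block, prediction, codebook, n_active):
    """Differential VQ step: quantize the residual (block - prediction) using
    only the first n_active codevectors of one fixed codebook. A smaller subset
    means fewer bits per block (about log2(n_active)), giving a variable rate."""
    residual = block - prediction
    active = codebook[:n_active]
    idx = int(np.argmin(np.sum((active - residual) ** 2, axis=1)))
    reconstruction = prediction + active[idx]      # decoder forms the same value
    return idx, reconstruction
```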
Citations: 4
Optimal piecewise-linear compression of images
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253133
V. Bhaskaran, B. Natarajan, K. Konstantinides
The authors explore compression using an optimal algorithm for the approximation of waveforms with piecewise-linear functions. They describe a modification of the algorithm that is provably good, but simple enough for the associated hardware implementation to be presentable.
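The paper's algorithm is optimal; purely as a rough illustration of piecewise-linear waveform approximation, the greedy sketch below grows each segment until a sample deviates from the chord by more than a tolerance eps (this is not the authors' algorithm).

```python
def piecewise_linear(samples, eps):
    """Greedy sketch (not the paper's optimal algorithm): extend each segment as
    a straight line between its endpoints while every interior sample stays
    within eps of that line, then emit a breakpoint and start a new segment."""
    breakpoints = [0]
    start = 0
    for end in range(2, len(samples)):
        x0, y0 = start, samples[start]
        x1, y1 = end, samples[end]
        # maximum deviation of the interior samples from the chord (x0,y0)-(x1,y1)
        worst = max(abs(samples[i] - (y0 + (y1 - y0) * (i - x0) / (x1 - x0)))
                    for i in range(start + 1, end))
        if worst > eps:
            breakpoints.append(end - 1)   # close the segment at the last good sample
            start = end - 1
    breakpoints.append(len(samples) - 1)
    return breakpoints                    # indices of the segment endpoints

print(piecewise_linear([0, 1, 2, 3, 2, 1, 0, 0, 0], eps=0.5))   # [0, 3, 6, 8]
```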
Citations: 15
Compression of DNA sequences
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253115
S. Grumbach, F. Tahi
The authors propose a lossless algorithm based on regularities in the DNA, such as the presence of palindromes. The results obtained, although not satisfactory, are far beyond those of classical algorithms.
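One of the regularities mentioned, palindromes (reverse-complement repeats), can be located as sketched below; the fixed fragment length and the back-reference idea are illustrative assumptions, not the authors' coder.

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    """Reverse complement of a DNA fragment, e.g. 'AACG' -> 'CGTT'."""
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def find_palindrome_repeat(dna, length):
    """Return (i, j) such that the fragment of the given length starting at j is
    the reverse complement of the one at i, i.e. a palindromic repeat that a DNA
    compressor could encode as a back-reference instead of raw bases."""
    seen = {}
    for i in range(len(dna) - length + 1):
        frag = dna[i:i + length]
        if reverse_complement(frag) in seen:
            return seen[reverse_complement(frag)], i
        seen[frag] = i
    return None

print(find_palindrome_repeat("ACGTTTAAACGT", 4))   # (3, 5): 'TTTA' vs 'TAAA'
```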
Citations: 216
A MPEG encoder implementation on the Princeton Engine video supercomputer
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253107
H. Taylor, D. Chin, A. Jessup
The emergence of worldwide standards for video compression has created a demand for design tools and simulation resources to support algorithm research and new product development. Because of the need for subjective study in the design of video compression algorithms, it is essential that flexible yet computationally efficient tools be developed. The authors describe the implementation of a programmable MPEG encoder on a massively parallel real-time image processing system. The system provides control over program attributes such as the size of the motion search window, buffer management, and bit rate. Support is provided for real-time image acquisition and preprocessing from both analog and digital video sources (D1/D2).
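As an illustration of one of the programmable attributes mentioned (the motion search window), the sketch below does exhaustive block matching over a +/- window in a reference frame; it is a generic example, not the Princeton Engine implementation.

```python
import numpy as np

def motion_search(cur_block, ref_frame, top, left, window):
    """Exhaustive block matching: search a +/- window of displacements in the
    reference frame for the position minimizing the sum of absolute differences
    against cur_block. The window size is the kind of encoder attribute the
    abstract describes as programmable."""
    h, w = cur_block.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue                   # candidate block falls outside the frame
            sad = int(np.abs(ref_frame[y:y + h, x:x + w].astype(int)
                             - cur_block.astype(int)).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```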
Citations: 17
Minimizing error and VLSI complexity in the multiplication free approximation of arithmetic coding
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253138
G. Feygin, P. Gulak, P. Chow
Two new algorithms for performing arithmetic coding without multiplication are presented. The first algorithm, suitable for an alphabet of arbitrary size, reduces the worst-case normalized excess length to under 0.8%, versus 1.911% for the previously known best method of Chevion et al. The second algorithm, suitable only for alphabets of fewer than twelve symbols, allows an even greater reduction in the excess code length. For the important binary alphabet the worst-case excess code length is reduced to less than 0.1%, versus 1.1% for the method of Chevion et al. The implementation requirements of the proposed new algorithms are discussed and shown to be similar.
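The two algorithms' exact approximations are given in the paper; purely as a generic illustration of multiplication-free interval splitting, the sketch below rounds the symbol probability to a power of two so the range update becomes a shift, and computes the per-symbol redundancy (for a binary alphabet) that such rounding costs. This is not the authors' scheme.

```python
import math

def shift_split(range_width, p):
    """Illustration only: approximate the interval split range_width * p by
    rounding p to the nearest power of two, so the multiply becomes one shift."""
    k = max(1, round(-math.log2(p)))       # p is replaced by 2**-k
    return range_width >> k

def excess_bits(p):
    """Per-symbol redundancy (bits) of coding a binary source with q = 2**-k
    instead of the true probability p: p*log2(p/q) + (1-p)*log2((1-p)/(1-q))."""
    k = max(1, round(-math.log2(p)))
    q = 2.0 ** -k
    return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

print(shift_split(1 << 16, 0.3), round(excess_bits(0.3), 4))
```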
Citations: 23
Joint codebook design for summation product-code vector quantizers
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253146
W. Chan, A. Gersho, S. Soong
With respect to the generalized product code (GPC) model for structured vector quantization, multistage VQ (MSVQ) and tree-structured VQ are members of a family of summation product codes (SPCs), defined by the prototypical synthesis function x = f_1 + ... + f_s, where f_i, i = 1, ..., s, are the residual vector features. The authors describe an algorithm paradigm for the joint design of the feature codebooks constituting a GPC. They specialize the paradigm to a joint design algorithm for the SPCs and exhibit experimental results for the MSVQ of simulated sources. The performance improvements over conventional 'greedy' design are essentially 'free', as the only cost is a moderate increase in design complexity.
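The summation structure x = f_1 + ... + f_s is easiest to see in multistage VQ; the sketch below shows the conventional stage-by-stage ('greedy') encoder and the summed reconstruction. The paper's contribution, the joint design of the stage codebooks, is not reproduced here.

```python
import numpy as np

def msvq_encode(x, codebooks):
    """Conventional stage-by-stage MSVQ encoder: each stage quantizes the
    residual left by the previous stages, so the reconstruction is a sum of one
    codevector per stage (the summation form x = f_1 + ... + f_s)."""
    indices, residual = [], x.astype(float)
    for cb in codebooks:                               # one codebook per stage
        idx = int(np.argmin(np.sum((cb - residual) ** 2, axis=1)))
        indices.append(idx)
        residual = residual - cb[idx]                  # residual passed to next stage
    return indices

def msvq_decode(indices, codebooks):
    """Reconstruction: sum the selected codevector from each stage."""
    return sum(cb[i] for cb, i in zip(codebooks, indices))
```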
Citations: 4
Fractal based image compression with affine transformations
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253125
H. Raittinen, K. Kaski
As the needs for information transfer and storage increase, data coding and compression become increasingly important in applications such as digital HDTV, telefax, ISDN, and image databases. The authors have developed a fractal image compression method and tested it with binary (black-and-white) images. The decoded results are similar to the original images. The compression ratios are found to be extremely high.
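As a reminder of why iterated affine maps can reproduce an image, the sketch below decodes the classic three-map Sierpinski IFS by repeatedly applying contractive affine transformations; it illustrates only the decoding principle, not the authors' encoder or their binary-image tests.

```python
import numpy as np

# Three contractive affine maps (the classic Sierpinski IFS); decoding consists
# of iterating x -> A x + b for every map, starting from an arbitrary point set.
MAPS = [
    (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.00, 0.0])),
    (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.50, 0.0])),
    (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.25, 0.5])),
]

def iterate_ifs(points, n_iter=6):
    """Apply every affine map to every point, n_iter times; the point cloud
    converges to the attractor that the maps jointly encode."""
    for _ in range(n_iter):
        points = np.concatenate([points @ A.T + b for A, b in MAPS])
    return points

attractor = iterate_ifs(np.array([[0.0, 0.0]]))
print(attractor.shape)    # (729, 2): 3**6 points approximating the attractor
```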
Citations: 6
Real-time focal-plane image compression
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253109
R. Tawel
A novel analog focal-plane processor, the Vector Array Processor (VAP), is designed specifically for use in real-time/video-rate on-line lossy image compression. This custom CMOS processor is based architecturally on the vector quantization algorithm in image coding. The current implementation of the processor can handle codebook sizes of up to 128 vectors of dimensionality 16. The VAP performs vector matching in a fully parallel fashion, utilizing as its basic computational element the 'bump' circuit, which computes the similarity between two input voltages and outputs a current proportional to this disparity.
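A rough software stand-in for the chip's matching operation is sketched below: a bell-shaped 'bump' of the difference between two values, summed over the 16 components to score a codevector. The Gaussian shape, the width, and the summation are assumptions; the real circuit's transfer curve is not reproduced.

```python
import numpy as np

def bump(v1, v2, width=0.1):
    """Bell-shaped similarity of two values, largest when they are equal
    (a stand-in for the analog 'bump' element; exact curve assumed)."""
    return np.exp(-((v1 - v2) / width) ** 2)

def best_match(x, codebook, width=0.1):
    """Score each stored codevector by the sum of per-component bump outputs
    (mimicking summed currents) and return the index of the best match."""
    scores = np.sum(bump(codebook, x, width), axis=1)
    return int(np.argmax(scores))
```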
Citations: 2