
Latest publications from [Proceedings] DCC `93: Data Compression Conference

Divergence and the construction of variable-to-variable-length lossless codes by source-word extensions
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253142
G. H. Freeman
Such codes are described using dual leaf-linked trees: one specifying the parsing of the source symbols into source words, and the other specifying the formation of code words from code symbols. Compression exceeds entropy by the amount of the informational divergence, between source words and code words, divided by the expected source-word length. The asymptotic optimality of Tunstall or Huffman codes derives from the bounding of divergence while the expected source-word length is made arbitrarily large. A heuristic extension scheme is asymptotically optimal but also acts to reduce the divergence by retaining those source words which are well matched to their corresponding code words.
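The redundancy identity stated in the abstract (coding rate minus source entropy equals the informational divergence between source-word and implied code-word probabilities, divided by the expected source-word length) can be checked numerically. The sketch below is illustrative only: it parses a Bernoulli(0.9) source into fixed length-2 source words and Huffman-codes them; the choice of source, block length, and the helper `huffman_lengths` are assumptions, not taken from the paper.

```python
import heapq
from itertools import product
from math import log2

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code, index-aligned with probs."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:               # every leaf under the merged node gets one bit deeper
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

p = 0.9                                          # Bernoulli source, P(0) = 0.9 (assumed example)
words = list(product([0, 1], repeat=2))          # fixed-length parsing into 2-symbol source words
probs = [(p if a == 0 else 1 - p) * (p if b == 0 else 1 - p) for a, b in words]
lengths = huffman_lengths(probs)

q = [2.0 ** -l for l in lengths]                 # code-word probabilities implied by the lengths
divergence = sum(pi * log2(pi / qi) for pi, qi in zip(probs, q))
expected_word_len = sum(pi * len(w) for pi, w in zip(probs, words))
rate = sum(pi * li for pi, li in zip(probs, lengths)) / expected_word_len
entropy = -(p * log2(p) + (1 - p) * log2(1 - p))

# rate - entropy should equal divergence / expected source-word length
print(f"rate={rate:.4f}  entropy={entropy:.4f}  "
      f"excess={rate - entropy:.4f}  D/E[len]={divergence / expected_word_len:.4f}")
```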
Citations: 11
Filtering random noise via data compression
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253144
B. Natarajan
A general technique is suggested for reducing random noise from signals using data compression in conjunction with the principle of Occam's Razor. Not only are classical spectral filters realisable as a particular instance of the technique, but more powerful nonlinear filters fall within its scope.
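A hedged sketch of the general idea (not necessarily the author's construction): take "simplest explanation consistent with the data" to mean the reconstruction that keeps the fewest DFT coefficients while staying within the presumed noise level of the observation. The test signal, the noise level, and the DFT-truncation complexity proxy are all assumptions made for illustration.

```python
import numpy as np

def occam_filter(x, noise_std):
    """Denoise by searching for the shortest description (fewest retained DFT
    coefficients, an assumed complexity proxy) whose reconstruction stays within
    the presumed noise level of the observation."""
    X = np.fft.rfft(x)
    order = np.argsort(np.abs(X))[::-1]              # largest coefficients first
    for k in range(1, len(X) + 1):
        Xk = np.zeros_like(X)
        Xk[order[:k]] = X[order[:k]]
        xk = np.fft.irfft(Xk, n=len(x))
        if np.sqrt(np.mean((x - xk) ** 2)) <= noise_std:
            return xk                                 # simplest signal consistent with the data
    return x

# toy usage: a noisy sine wave
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + rng.normal(0, 0.3, t.shape)
denoised = occam_filter(noisy, 0.3)
```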
Citations: 31
Adaptive channel optimization of vector quantized data
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253121
A. Hung, T. Meng
This paper describes two methods to improve the robustness of transmitting such data over wireless communication channels: adaptively changing the quantizer codebook at the decoder, and optimizing codebook design for bursty errors.
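The abstract gives no algorithmic detail, so the following is a generic sketch of one standard way to harden VQ indices against channel errors: evaluate the expected distortion caused by single-bit index flips and greedily swap index assignments to reduce it. The codebook size, the bit-error model, and the swap heuristic are assumptions, not the authors' method.

```python
import numpy as np

def expected_flip_distortion(codebook, assign, p_bit):
    """Expected squared-error distortion from independent single-bit flips of the
    transmitted index, under uniform codeword usage (a simplifying assumption)."""
    n = len(codebook)
    bits = int(np.log2(n))
    total = 0.0
    for i in range(n):
        for b in range(bits):
            j = i ^ (1 << b)                         # index received when bit b flips
            total += p_bit * np.sum((codebook[assign[i]] - codebook[assign[j]]) ** 2)
    return total / n

def greedy_reassign(codebook, p_bit, sweeps=5):
    """Pairwise-swap search over index assignments (a simple pseudo-Gray style heuristic)."""
    n = len(codebook)
    assign = list(range(n))
    best = expected_flip_distortion(codebook, assign, p_bit)
    for _ in range(sweeps):
        for a in range(n):
            for b in range(a + 1, n):
                assign[a], assign[b] = assign[b], assign[a]
                d = expected_flip_distortion(codebook, assign, p_bit)
                if d < best:
                    best = d
                else:
                    assign[a], assign[b] = assign[b], assign[a]   # undo the swap
    return assign, best

rng = np.random.default_rng(1)
codebook = rng.normal(size=(16, 4))                  # 16 codewords of dimension 4 (toy example)
assign, dist = greedy_reassign(codebook, p_bit=0.01)
```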
Citations: 9
Multialphabet arithmetic coding at 16 MBytes/sec
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253137
H. Printz, P. Stubley
The design and performance of a nonadaptive hardware system for data compression by arithmetic coding are presented. The alphabet of the data source is the full 256-symbol ASCII character set, plus a non-ASCII end-of-file symbol. The key ideas are the non-arithmetic representation of the current interval width, which yields improved coding efficiency in the interval width update, and the design of a circuit for the code point update, which operates at a high speed independent of the length of the code point register. On a reconfigurable coprocessor, constructed from commercially available field-programmable gate arrays and static RAM, the implementation compresses its input stream at better than 16 MBytes/sec.
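The abstract's hardware specifics (interval-width representation, code-point update circuit) are not reproducible here, but the underlying arithmetic-coding recursion is. Below is a minimal, exact-rational software sketch with a fixed (nonadaptive) model; the tiny three-symbol alphabet, the "$" end-of-file symbol, and the use of Fraction arithmetic are assumptions made purely to keep the illustration short.

```python
from fractions import Fraction

# Fixed (nonadaptive) model: symbol -> (cumulative probability, probability).
MODEL = {"a": (Fraction(0), Fraction(6, 10)),
         "b": (Fraction(6, 10), Fraction(3, 10)),
         "$": (Fraction(9, 10), Fraction(1, 10))}    # "$" plays the end-of-file role

def encode(message):
    """Shrink [low, low + width) once per symbol; any number in the final interval decodes.
    The message is assumed to end with the "$" symbol."""
    low, width = Fraction(0), Fraction(1)
    for s in message:
        cum, p = MODEL[s]
        low += cum * width
        width *= p
    return low + width / 2                           # a representative code point

def decode(code):
    out, low, width = [], Fraction(0), Fraction(1)
    while True:
        target = (code - low) / width                # position of the code point in [0, 1)
        for s, (cum, p) in MODEL.items():
            if cum <= target < cum + p:
                out.append(s)
                low += cum * width
                width *= p
                break
        if out[-1] == "$":
            return "".join(out)

msg = "abaab$"
assert decode(encode(msg)) == msg
```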
Citations: 16
Fast and efficient lossless image compression
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253114
P. Howard, J. Vitter
A new method, FELICS, gives compression comparable with the JPEG lossless mode at about five times the speed. It is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding, the authors use single bits, adjusted binary codes, and Golomb or Rice codes. For the latter they present and analyze a provably good method for estimating the single coding parameter.
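As a concrete illustration of the coding tools the abstract names, here is a sketch of Rice coding (a Golomb code with a power-of-two parameter) of nonnegative prediction errors, with the parameter chosen by brute-force minimization of total code length over a block. The exhaustive parameter search is a generic stand-in, not the provably good estimator analyzed in the paper, and the residual values are toy data.

```python
def rice_encode(n, k):
    """Rice code of a nonnegative integer n: unary-coded quotient, then k remainder bits."""
    q = n >> k
    out = "1" * q + "0"                              # quotient in unary, terminated by a 0
    if k:
        out += format(n & ((1 << k) - 1), f"0{k}b")  # k low-order bits of the remainder
    return out

def rice_length(n, k):
    return (n >> k) + 1 + k

def best_rice_parameter(values, k_max=16):
    """Pick k minimizing the total code length over a block (exhaustive, for illustration)."""
    return min(range(k_max + 1), key=lambda k: sum(rice_length(v, k) for v in values))

errors = [3, 0, 7, 2, 1, 12, 4, 0, 5, 2]             # nonnegative prediction residuals (toy data)
k = best_rice_parameter(errors)
bitstream = "".join(rice_encode(e, k) for e in errors)
print(k, len(bitstream), bitstream)
```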
Citations: 223
Design and analysis of fast text compression based on quasi-arithmetic coding
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253140
P. Howard, J. Vitter
A detailed algorithm for fast text compression, related to the PPM method, simplifies the modeling phase by eliminating the escape mechanism, and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. The authors provide details of the use of quasi-arithmetic code tables, and analyze their compression performance. The Fast PPM method is shown experimentally to be almost twice as fast as the PPMC method, while giving comparable compression.
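The modeling simplification the abstract highlights, dropping PPM's escape mechanism, can be illustrated generically: if every symbol in a context starts with a nonzero count, the model never needs to escape to a lower-order context. The sketch below is a plain order-1 adaptive count model written that way; it illustrates only the escape-free idea, not the Fast PPM algorithm itself, and the seeding-with-one scheme is an assumption.

```python
from collections import defaultdict
from math import log2

class EscapeFreeOrder1Model:
    """Order-1 adaptive frequency model over bytes. Every symbol is seeded with count 1,
    so every context already assigns nonzero probability to every symbol and no escape
    symbol is ever needed (the escape-free idea, sketched generically)."""

    def __init__(self, alphabet_size=256):
        self.counts = defaultdict(lambda: [1] * alphabet_size)   # context byte -> counts

    def probability(self, context, symbol):
        c = self.counts[context]
        return c[symbol] / sum(c)

    def update(self, context, symbol):
        self.counts[context][symbol] += 1

model = EscapeFreeOrder1Model()
data = b"abracadabra"
cost_bits, prev = 0.0, 0
for byte in data:
    cost_bits += -log2(model.probability(prev, byte))   # ideal code length under the model
    model.update(prev, byte)
    prev = byte
print(f"{cost_bits:.1f} bits for {len(data)} bytes")
```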
Citations: 58
Tree compacting transformations
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253117
Gary Promhouse
This paper concerns one aspect of the construction of compact representations R(T) for trees T which are instances of an abstract tree data type. The ADT supports operations, on certain trees, built around a top-down traversal primitive, and provides the interface between the second and third stages of a general semantic compression system. The mechanisms used ensure that the time taken to perform all such operations using R(T) is linear with respect to performing the operation directly on T. They fall into two categories: invertible transformations on T which produce an equivalent tree with fewer elements (data or child specifications) than the input form; and the exploitation of statistical variations in the occurrence of data elements to reduce the average space required to represent remaining components.
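The abstract names "invertible transformations on T which produce an equivalent tree with fewer elements" without spelling one out. A classic transformation in that family is sharing identical subtrees (hash-consing a tree into a DAG); the sketch below shows that idea only and is not claimed to be one of the paper's transformations.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Node:
    data: str
    children: Tuple["Node", ...] = ()

def share_subtrees(root, pool=None):
    """Return a structurally equal tree in which identical subtrees are one shared object.
    The transformation preserves every top-down traversal of the original tree."""
    if pool is None:
        pool = {}
    children = tuple(share_subtrees(c, pool) for c in root.children)
    key = (root.data, children)
    if key not in pool:
        pool[key] = Node(root.data, children)
    return pool[key]

# toy usage: two identical subtrees become a single shared object
leaf = lambda s: Node(s)
t = Node("root", (Node("x", (leaf("a"), leaf("b"))),
                  Node("x", (leaf("a"), leaf("b")))))
s = share_subtrees(t)
assert s.children[0] is s.children[1]            # the duplicate subtree is now stored once
```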
Citations: 0
Optimum DCT quantization
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253131
D. Monro, B. Sherlock
The paper offers a solution to the problem of determining good quantization tables for use with the discrete cosine transform. Using the methods proposed, the designer of a system can choose a selection of test images and a coefficient weighting scenario, from which a quantization table can be produced, optimized for the choices made. The method is based on simulated annealing, which searches the space of quantization tables to minimize some chosen measure.
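A generic simulated-annealing skeleton for the search the abstract describes is easy to write down; everything specific, the cost function, the perturbation, and the cooling schedule, is an assumption here, standing in for the paper's choice of test images and coefficient weighting.

```python
import math
import random

def anneal_quant_table(cost, steps=20000, t0=10.0, t_final=0.01, seed=0):
    """Search 8x8 DCT quantization tables by simulated annealing.
    `cost` maps a table (list of 64 ints in 1..255) to the measure being minimized;
    a realistic cost would encode test images and weight coefficient errors, which is
    outside the scope of this sketch."""
    rng = random.Random(seed)
    table = [16] * 64                                # arbitrary flat starting table
    cur_cost = cost(table)
    best, best_cost = table[:], cur_cost
    for step in range(steps):
        t = t0 * (t_final / t0) ** (step / steps)    # geometric cooling schedule
        cand = table[:]
        i = rng.randrange(64)
        cand[i] = min(255, max(1, cand[i] + rng.choice((-4, -2, -1, 1, 2, 4))))
        c = cost(cand)
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / t):
            table, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
    return best, best_cost

# placeholder cost: prefer coarser quantization of high-frequency entries (illustration only)
toy_cost = lambda q: sum(abs(q[i] - (8 + i)) for i in range(64))
table, value = anneal_quant_table(toy_cost)
```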
Citations: 33
A mean-removed variation of weighted universal vector quantization for image coding
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253119
Barry D. Andrews, P. Chou, M. Effros, R. Gray
Weighted universal vector quantization uses traditional codeword design techniques to design locally optimal multi-codebook systems. Application of this technique to a sequence of medical images produces a 10.3 dB improvement over standard full search vector quantization followed by entropy coding at the cost of increased complexity. In this proposed variation each codebook in the system is given a mean or 'prediction' value which is subtracted from all supervectors that map to the given codebook. The chosen codebook's codewords are then used to encode the resulting residuals. Application of the mean-removed system to the medical data set achieves up to 0.5 dB improvement at no rate expense.
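The mean-removal step itself is simple to show. The sketch below strips the idea down to a single codebook: subtract the codebook's stored 'prediction' value from each input vector and quantize the residual. The full weighted-universal, multi-codebook machinery of the paper is not reproduced, and the data, codebook, and scalar mean here are synthetic.

```python
import numpy as np

def mean_removed_vq_encode(x, codebook, mean_value):
    """Quantize x after removing the codebook's mean/'prediction' value.
    Returns the index of the nearest residual codeword."""
    residual = x - mean_value
    distances = np.sum((codebook - residual) ** 2, axis=1)
    return int(np.argmin(distances))

def mean_removed_vq_decode(index, codebook, mean_value):
    return codebook[index] + mean_value

rng = np.random.default_rng(0)
codebook = rng.normal(size=(32, 8))              # 32 residual codewords, dimension 8 (toy)
mean_value = 5.0                                 # the codebook's stored prediction (assumed scalar)
x = mean_value + rng.normal(size=8)
i = mean_removed_vq_encode(x, codebook, mean_value)
x_hat = mean_removed_vq_decode(i, codebook, mean_value)
```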
Citations: 5
Coding theory and regularization
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253134
J. Connor, L. Atlas
This paper uses two principles, the robust encoding of residuals and the efficient coding of parameters, to obtain a new learning rule for neural networks. In particular, it examines how different coding techniques give rise to different learning rules. The storage space requirements of parameters and residuals are considered. A 'group regularizer' is derived from encoding of the parameters as a whole group rather than individually.
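One plausible way to make the two principles concrete (not necessarily the authors' learning rule) is a training objective that charges a robust, Laplacian-style coding cost for the residuals plus a single penalty for each whole group of parameters. The linear model, the L1 residual term, and the column-wise group norm below are all assumptions for illustration.

```python
import numpy as np

def description_length_loss(W, X, y, lam=0.1):
    """Toy objective for a linear model y ~ X @ W: robust (L1) coding cost of the
    residuals plus a 'group regularizer' that penalizes each column of W as a unit
    (group 2-norm), so a whole parameter group can be driven to zero together."""
    residual_cost = np.sum(np.abs(X @ W - y))                 # Laplacian-style residual code
    group_cost = np.sum(np.sqrt(np.sum(W ** 2, axis=0)))      # one charge per parameter group
    return residual_cost + lam * group_cost

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
W_true = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.5, 0.0]])
y = X @ W_true + rng.laplace(scale=0.1, size=(100, 2))
print(description_length_loss(W_true, X, y))
```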
Citations: 0